# Bound states in bent soft waveguides

Pavel Exner, Semjon Vugalter

arXiv:2304.14776v2, http://arxiv.org/abs/2304.14776v2 (2023-04-28)
###### Abstract.
The aim of this paper is to show that a two-dimensional Schrödinger operator with the potential in the form of a 'ditch' of a fixed profile can have a geometrically induced discrete spectrum; this happens if such a potential channel has a single or multiple bends being straight outside a compact. Moreover, under stronger geometric restrictions the claim remains true in the presence of a potential bias at one of the channel 'banks'.
## 1. Introduction
The behavior of quantum particles confined to tubular regions has attracted a lot of attention in recent decades, with the motivation coming from two sources. On the physics side it was the possibility to use such models to describe a guided dynamics in various condensed matter systems. At the same time, this appeared to be a source of interesting mathematical problems, in particular, those concerning spectral effects coming from the geometry of the confinement; for an introduction to the topic and a bibliography we refer to the book [1].
There are different ways to localize a particle in the configuration space. One possibility is a hard confinement where the Hamiltonian is typically the Dirichlet Laplacian associated with a tube in \(\mathbb{R}^{d}\) (or more complicated regions such as layers, networks, etc.). From the point of view of application to objects like semiconductor wires such a model has a drawback; it does not take into account the tunneling between different parts of the waveguide. This fact motivated investigation of the 'leaky' confinement in which the Hamiltonian is instead a Schrödinger operator with an attractive singular interaction supported by a curve (or a surface, metric graph, etc.); to have it well defined, the codimension of the interaction support must not exceed three.
If we restrict ourselves for simplicity to the two-dimensional situation, both models exhibit _curvature-induced bound states_: whenever the strip, or the curve supporting the \(\delta\) interaction, is non-straight but asymptotically straight, the corresponding Hamiltonian has a non-void discrete spectrum; this claim is valid universally modulo technical requirements on the regularity and asymptotic behavior.
The leaky-guide model has another drawback: it assumes that the interaction support has zero width. This recently motivated the investigation of a more realistic situation in which the potential in the Schrödinger operator is regular, having the form of a channel of a fixed profile [10]. The term coined was _soft waveguides_; the analogous problem was studied in three dimensions [10] as well as for soft layers [11, 12]. One has to add that such operators were considered before [1, 13], however, the focus was then on the limit in which the potential shrinks transversally to a manifold; in the physics literature the idea of determining the right 'quantization' on a manifold through such a limit was examined a long time ago [12, 13].
Not very surprisingly, soft waveguides were already shown to share properties with their hard and leaky counterparts; an example is the ground state optimization in a loop-shaped geometry [1]. Some results have also been obtained concerning the problem we are interested in here, the existence of curvature-induced bound states, however, so far they lack the universal character indicated above. In [10] the Birman-Schwinger principle was used to derive a sufficient condition under which the discrete spectrum is nonempty, expressed in terms of the positivity of a certain integral which, in general, is not easy to evaluate. An alternative is to apply the variational method; in this way the existence was established in the example of a particular geometry, often referred to as a 'bookcover' [14]. We note in passing that it is paradoxically easier to establish the existence in conic-shaped soft layers, where the discrete spectrum is infinite [11, 12].
The trouble with the variational approach is that it is not easy, beyond the simple example mentioned, to find a suitable trial function. The aim of this paper is to extend the existence result using a variational method to a much wider, even if still not optimal, class of soft waveguides. The main restrictions in our analysis are the confinement of the curved part to a bounded region, the compact support of the potential defining the channel profile, and the requirement of profile symmetry. The latter restriction can be relaxed in some situations, in particular, if the profile potential is sign-changing and the transverse part of the operator, the operator (2.2) below, has a zero-energy resonance.
We will also consider the situation when the system has a constant positive potential bias in one of the regions separated by the profile potential support. In this case we need a stronger geometric restriction: we have to assume that one of the two regions is _convex_. If the bias potential is supported in it, we can again prove the existence of a discrete spectrum, even without the symmetry assumption. If the bias is supported in the opposite region, we again have existence, except in the situation when the operator (2.2) has a zero-energy resonance; this is in agreement with the result of [1] where we treated a system which can be regarded as a singular version of the present one. Let us stress that the convexity also makes it possible to prove the existence in the absence of the bias and the symmetry restriction, provided that the operator (2.2) has a negative eigenvalue.
In the following section we state the problem in proper terms and present the main results. The rest of the paper is devoted to the proofs. The next two sections deal with the case without the bias: in Sec. 3 we prove part (a) of Theorem 2.2, which concerns the situation when the operator (2.2) has a zero-energy resonance, and Sec. 4 provides the proof of part (a) of Theorem 2.4, which addresses the case when the operator has a negative eigenvalue and the channel profile is symmetric. Finally, in Sec. 5 we prove parts (b) of the two theorems, which establish the existence results in the situation when one of the two regions into which the potential channel, not necessarily symmetric, divides the plane is convex; this holds even in the absence of the bias, except in the zero-energy resonance case.
## 2. Statement of the problem and main results
Let us now state the problem described in the introduction. We begin with the assumptions which are split into two groups; the first one concerns the support of the potential, the other the channel profile. The former is a strip built around a curve \(\Gamma\), understood as the graph of a function \(\Gamma:\ \mathbb{R}\to\mathbb{R}^{2}\) such that \(|\dot{\Gamma}(s)|=1\). Without repeating it at every occasion we always exclude the trivial situation when \(\Gamma\) is a straight line; in addition to that we suppose:
(s1) \(\Gamma\) is \(C^{3}\)-smooth, non-straight but straight outside a compact; its curved part consists of a finite number of segments such that on each of them the monotonicity character of the signed curvature \(\kappa(\cdot)\) of \(\Gamma\) and its sign are preserved,
(s2) \(|\Gamma(s_{+})-\Gamma(s_{-})|\to\infty\) as \(s_{\pm}\to\pm\infty\); in other words, the two straight parts of \(\Gamma\) are either not parallel, or if they are, they point in the opposite directions,
(s3) the strip neighborhood \(\Omega^{a}:=\{x\in\mathbb{R}^{2}:\,\mathrm{dist}(x,\Gamma)<a\}\) of \(\Gamma\) with a halfwidth \(a>0\) does not intersect itself.
Assumption (s3) has various equivalent expressions: one can say, for instance, that the function \(\mathrm{dist}(x,\Gamma(\cdot))\) has for any fixed \(x\in\Omega^{a}\) a unique minimum, or that the map
\[x:\,(s,t)\mapsto\left(\Gamma_{1}(s)-t\Gamma_{2}^{\prime}(s),\,\Gamma_{2}(s)+t\Gamma_{1}^{\prime}(s)\right) \tag{2.1}\]
from the straight strip \(\Omega^{a}_{0}:=\mathbb{R}\times(-a,a)\) to \(\mathbb{R}^{2}\) is a bijection, in fact, a diffeomorphism; \(\vec{n}(s)=(-\Gamma_{2}^{\prime}(s),\Gamma_{1}^{\prime}(s))\) is, of course, the (inward) normal to the curve at the point \(\Gamma(s)\). Under assumption (s1), the signed curvature \(\kappa:\,\kappa(s)=(\Gamma_{2}^{\prime}\Gamma_{1}^{\prime\prime}-\Gamma_{1}^{\prime}\Gamma_{2}^{\prime\prime})(s)\) is a smooth, compactly supported function; a necessary, but in general not sufficient condition for (s3) to hold is \(a\|\kappa\|_{\infty}<1\) which ensures the local injectivity of the map. The curve divides the plane into open regions which we denote as \(\Omega_{\pm}\); for the sake of definiteness we assume that \(\Omega_{+}\) lies _on the left side_ when one looks in the direction of the increasing arc length variable \(s\).
We also introduce \(\Omega_{\pm}^{a}:=\Omega_{\pm}\cap\Omega^{a}\) so that we have \(\Omega^{a}=\Omega_{+}^{a}\cup\Gamma\cup\Omega_{-}^{a}\); given our choice of the normal orientation, the labels correspond to the sign of the transversal variable \(t\). Finally, we will use a natural symbol for the complement of the strip, namely \(\Omega_{\rm out}:=\mathbb{R}^{2}\setminus\Omega^{a}\), and its one-sided components will be denoted as \(\Omega_{\pm}^{\rm out}:=\Omega_{\pm}\setminus\Omega^{a}\) - cf. Fig. 1.
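As a simple illustration of the curvature formula and of the role of (s3) (our example, not taken from the paper), consider a circular arc of radius \(R\) parametrized by arc length,
\[\Gamma(s)=\Big{(}R\sin\frac{s}{R},\,R\cos\frac{s}{R}-R\Big{)},\qquad\kappa(s)=(\Gamma_{2}^{\prime}\Gamma_{1}^{\prime\prime}-\Gamma_{1}^{\prime}\Gamma_{2}^{\prime\prime})(s)=\frac{\sin^{2}(s/R)+\cos^{2}(s/R)}{R}=\frac{1}{R};\]
the necessary condition \(a\|\kappa\|_{\infty}<1\) then reads \(a<R\), that is, the strip halfwidth must stay below the bending radius, otherwise the normals emanating from the inner side of the bend would cross inside \(\Omega^{a}\).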
The second group of assumptions concerns the potential. Its profile is determined by a function \(v:\,\mathbb{R}\to\mathbb{R}\) of which we assume
(p1) \(v\in L^{2}(\mathbb{R})\) and \(\;\operatorname{supp}v\subset[-a,a]\,\);
in some situations, specifically in part (a) of Theorem 2.4 below, we will require it additionally to be mirror-symmetric,
(p2) \(v(t)=v(-t)\) for \(t\in[-a,a]\).
In addition to the potential defining the channel, we are going to consider, in general, also a one-sided potential bias of the system. To this aim, we introduce the one-dimensional Schrödinger operator
\[h:=-\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}+v(t)+V_{0}\chi_{[a,\infty)}(t),\quad V _{0}\geq 0. \tag{2.2}\]
The crucial role will be played by the spectral bottom of this operator, specifically we will be concerned with the following two possibilities:
(p3) \(\inf\sigma(h)\) is a negative (ground state) eigenvalue \(\mu\) associated with a real-valued eigenfunction \(\phi_{0}\) which we may without loss of generality normalize by the requirement \(\phi_{0}(-a)=1\),
(p4) the operator \(h\) has a zero-energy resonance, meaning that \(h\geq 0\) and \(-(1-\varepsilon)\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}+v(t)+V_{0}\chi_{[a,\infty)}(t)\) has a negative eigenvalue for any \(\varepsilon>0\). In this case, the equation \(h\phi=0\) has a real-valued solution \(\phi_{0}\in H^{2}_{\rm loc}(\mathbb{R})\) not increasing at infinity; it will again be supposed to satisfy the normalization condition \(\phi_{0}(-a)=1\).
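For orientation, here is a worked example (ours, not from the paper) showing how (p3) arises for the simplest profile: take the rectangular well \(v(t)=-v_{0}\chi_{(-a,a)}(t)\) with \(v_{0}>0\) and no bias, \(V_{0}=0\). The ground state of (2.2) is even, and up to the normalization \(\phi_{0}(-a)=1\) it has the form
\[\phi_{0}(t)=\left\{\begin{array}{cl}\cos kt&\text{if}\;\,|t|\leq a\\ \cos ka\;\mathrm{e}^{-\xi(|t|-a)}&\text{if}\;\,|t|>a\end{array}\right.\qquad k=\sqrt{v_{0}-|\mu|},\;\;\xi=\sqrt{|\mu|},\]
where \(\mu<0\) is fixed by the matching condition \(k\tan ka=\xi\). Since this condition has a solution for every \(v_{0}>0\), a purely attractive profile always falls under (p3); the zero-energy resonance case (p4) with \(V_{0}=0\) requires a profile taking both signs, in accordance with the remark made in the introduction.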
Figure 1. Scheme of the waveguide
The main object of our interest is the Schrodinger operator
\[H_{\Gamma,V}=-\Delta+V(x)\quad\text{on}\;\;L^{2}(\mathbb{R}^{2}), \tag{2.3a}\]
with the potential defined using the locally orthogonal coordinates \((s,t)\) appearing in (2.1) as
\[V(x)=\left\{\begin{array}{ll}v(t)&\quad\text{if}\,\,\,x\in\Omega^{a}\\ V_{0}&\quad\text{if}\,\,\,x\in\Omega_{+}\setminus\Omega^{a}\\ 0&\quad\text{otherwise}\end{array}\right. \tag{2.3b}\]
We will often drop the subscript of \(H_{\Gamma,V}\) if it is clear from the context.
**Proposition 2.1**.: _Under the assumptions (s1)-(s3), (p1) and (p3), the operator (2.3) is self-adjoint, \(D(H_{\Gamma,V})=H^{2}(\mathbb{R}^{2})\), and \(\sigma_{\rm ess}(H_{\Gamma,V})=[\mu,\infty)\). If \(h\geq 0\), the same is true with \(\mu=0\)._
Proof.: The self-adjointness is easy to check; it is sufficient to ascertain that the potential (2.3b) is infinitesimally small with respect to \(-\Delta\), that is, to any \(a>0\) there is a \(b>0\) such that \(\|V\psi\|\leq a\|\Delta\psi\|+b\|\psi\|\) holds for all \(\psi\in H^{2}(\mathbb{R}^{2})\). Suppose first that \(V_{0}=0\). In view of assumption (p1) we can use the Kato-Rellich theorem [RS, Sec. X.1]. We decompose any given \(\psi\) into a sum \(\psi=\psi_{+}+\psi_{0}+\psi_{-}\) of \(H^{2}\) functions such that \(\operatorname{supp}\psi_{\pm}\restriction_{\Omega^{a}}\) lies in the straight parts of the strip and \(\operatorname{supp}\psi_{0}\restriction_{\Omega^{a}}\) contains the curved part. To the latter the theorem applies directly since \(\psi_{0}\) is essentially bounded with bounded support so that \(V\restriction_{\operatorname{supp}\psi_{0}}\in L^{2}\). In the straight parts we first use (p1) to get the one-dimensional version of the inequality in the transverse variable, and then we lift it to two dimensions using the fact that \(\|\partial_{t}^{2}\psi\|\leq\|\Delta\psi\|\). The constants \(b_{j}\), \(j=0,\pm\), in the obtained inequalities are in general different; we put \(b:=\sqrt{3}\max\{b_{+},b_{0},b_{-}\}\). Using then the triangle and Schwarz inequalities which, in particular, give \(\|\psi\|\leq\sqrt{3}\sum_{j}\|\psi_{j}\|\), we arrive at the desired conclusion. Finally, the self-adjointness is not affected by adding the bounded potential \(V_{0}\chi_{\Omega_{+}^{\rm out}}\).
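For the record, the lifting step used above can be written out explicitly (our expansion of the argument; \(\varepsilon\) and \(b_{\varepsilon}\) denote the constants of the one-dimensional bound \(\|v\varphi\|\leq\varepsilon\|\varphi^{\prime\prime}\|+b_{\varepsilon}\|\varphi\|\), valid for \(v\in L^{2}(\mathbb{R})\) and \(\varphi\in H^{2}(\mathbb{R})\)): applying it fiberwise in the straight part of the strip, where the \((s,t)\) coordinates are Cartesian, one gets
\[\|V\psi_{\pm}\|^{2}\leq\int_{\mathbb{R}}\big{(}\varepsilon\|\partial_{t}^{2}\psi_{\pm}(s,\cdot)\|+b_{\varepsilon}\|\psi_{\pm}(s,\cdot)\|\big{)}^{2}\mathrm{d}s\leq 2\varepsilon^{2}\|\partial_{t}^{2}\psi_{\pm}\|^{2}+2b_{\varepsilon}^{2}\|\psi_{\pm}\|^{2}\leq 2\varepsilon^{2}\|\Delta\psi_{\pm}\|^{2}+2b_{\varepsilon}^{2}\|\psi_{\pm}\|^{2},\]
so that \(\|V\psi_{\pm}\|\leq\sqrt{2}\,\varepsilon\|\Delta\psi_{\pm}\|+\sqrt{2}\,b_{\varepsilon}\|\psi_{\pm}\|\) with \(\varepsilon\) arbitrarily small.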
The identification of the essential spectrum of \(H_{\Gamma,V}\) with the interval \([\mu,\infty)\), where \(\mu=\inf\sigma(h)\), was established in [Ex20, Proposition 3.1] under slightly different assumptions. The argument can be easily modified for our present purpose; the requirement on the smoothness of \(\Gamma\) we made is stronger than there, and neither the substitution of a bounded negative \(v\) by a possibly sign indefinite square integrable one, nor the addition of a potential bias alters the conclusion.
Note also that the above Hamiltonian can be investigated using the associated quadratic form \(Q_{\Gamma,V}\), mostly written without the indices specifying the curve and the potential, and defined by
\[Q[\psi]=\|\nabla\psi\|^{2}+\int_{\Omega^{a}}V(x)|\psi(x)|^{2}\,\mathrm{d}x, \quad D(Q)=H^{1}(\mathbb{R}^{2})\,; \tag{2.3c}\]
in the presence of a potential bias we have instead formula (2.2) below.
Now we are in a position to state our main results. The assumptions may appear in various combinations; we group them according to the spectral threshold \(\mu\), starting with the situation when the operator (2.2) has a zero-energy resonance:
**Theorem 2.2** (threshold resonance case).: _Assume (s1)-(s3), (p1) and (p4); then the following claims are valid: (a) If the bias is absent, \(V_{0}=0\), and_
\[[\phi_{0}(a)^{2}-\phi_{0}(-a)^{2}]\int_{\mathbb{R}}\kappa(s)\,\mathrm{d}s\leq 0 \tag{2.4}\]
_holds, then \(H_{\Gamma,V}\) has at least one negative eigenvalue. (b) The same is true if \(V_{0}>0\) and \(\Omega_{+}\) is convex._
**Remark 2.3**.: Recall that \(\kappa\) does not vanish identically. The condition (2.4) is naturally satisfied if \(\phi_{0}(a)=\phi_{0}(-a)\), in particular, under the mirror-symmetry assumption (p2). Consider further the asymmetric situation, \(\phi_{0}(a)\neq\phi_{0}(-a)\), and recall that the integral in (2.4) equals \(\pi-2\theta\) where \(2\theta\) is the angle between the asymptotes. Consequently, at least one bound state exists then in the zero-energy resonance case if the asymptotes of \(\Gamma\) are parallel and pointing in the opposite directions, \(\theta=\frac{1}{2}\pi\), or if they are not parallel and the resonance solution \(\phi_{0}\) is larger at the 'outer' side of the strip \(\Omega^{a}\).
If \(h\) has negative eigenvalues so that \(\mu<0\), the situation is more complicated and we have to make stronger restrictions on the profile or the shape of the waveguide:
**Theorem 2.4** (eigenvalue case).: _Assume (s1)-(s3) together with (p1) and (p3). Then \(\sigma_{\mathrm{disc}}(H_{\Gamma,V})\) is nonempty under any of the following conditions: (a) \(V_{0}=0\) and assumption (p2) is satisfied. (b) \(V_{0}\geq 0\) and one of the regions \(\Omega_{\pm}\) is convex._
## 3. Proof of Theorem 2.2 - the first part
With the later purpose in mind we will formulate the argument first in the general situation which involves both the bound-state and zero-energy-resonance cases as well as the possible potential bias. In view of Proposition 2.1, it is sufficient to construct a trial function \(\psi\in H^{1}(\mathbb{R}^{2})\) such that \(Q[\psi]<\mu\|\psi\|^{2}\). Let us first fix the geometry. If the two straight parts of \(\Gamma\) are not parallel - cf. Fig. 1 - their line extensions intersect at a point which we choose as the origin \(O\), and use polar coordinates with this center, in which the two halflines correspond to the angles \(\pm\theta_{0}\) for the appropriate \(\theta_{0}\in(0,\frac{1}{2}\pi)\). Furthermore, we fix the point \(s=0\) in such a way that for large \(|s|\) the points with the coordinates \(\pm s\) have the same Euclidean distance from \(O\).
If the asymptotes are parallel (and pointing in the opposite directions according to (s2)), we choose the origin as the point with equal distance from the endpoints of the two halflines. The point with \(s=0\) on the curve is likewise chosen so that those with the coordinates \(\pm s\) have the same Euclidean distance from the origin; in both cases one can check easily that such a choice is unique.
### Trial function inside the strip
For fixed values \(s_{0}\), such that the points with coordinates \(\pm s_{0}\) lie outside the curved part of \(\Gamma\), and \(s^{*}>s_{0}\), to be chosen later, we define
\[\chi_{\text{in}}(s):=\left\{\begin{array}{cl}1&\text{if}\;\,|s|<s_{0}\\ \ln\frac{s^{*}}{|s|}\big{(}\ln\frac{s^{*}}{s_{0}}\big{)}^{-1}&\text{if}\;\,s_{ 0}\leq|s|\leq s^{*}\\ 0&\text{if}\;\,|s|>s^{*}\end{array}\right. \tag{3.1}\]
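The logarithmic interpolation in (3.1) is chosen because its Dirichlet energy \(\|\chi_{\text{in}}^{\prime}\|^{2}\) can be made arbitrarily small by taking \(s^{*}\) large while keeping \(\chi_{\text{in}}=1\) on a fixed core, a fact used repeatedly below. A minimal numerical sketch of this behaviour (our illustration, with the hypothetical value \(s_{0}=10\)) follows.

```python
import numpy as np

def chi_in(s, s0, s_star):
    """Logarithmic cut-off of (3.1): 1 for |s| < s0, 0 for |s| > s*, log-interpolated between."""
    s = np.abs(np.asarray(s, dtype=float))
    middle = np.log(s_star / np.clip(s, s0, s_star)) / np.log(s_star / s0)
    return np.where(s < s0, 1.0, np.where(s > s_star, 0.0, middle))

def dirichlet_energy(s0, s_star, n=4000):
    """Integrate |chi_in'(s)|^2 = (s * ln(s*/s0))**-2 over s0 <= |s| <= s* (trapezoid rule)."""
    s = np.geomspace(s0, s_star, n)           # log-spaced grid resolves the 1/s^2 integrand
    integrand = 1.0 / (s * np.log(s_star / s0)) ** 2
    one_side = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s))
    return 2.0 * one_side                     # both tails |s| > s0 contribute equally

print(chi_in([5.0, 50.0, 2e3], 10.0, 1e3))    # -> [1.0, ~0.65, 0.0], illustrative values only
for s_star in (1e3, 1e6, 1e12):
    print(f"s* = {s_star:.0e}   ||chi_in'||^2 ~ {dirichlet_energy(10.0, s_star):.2e}")
```

The printed energies decay like \((\ln(s^{*}/s_{0}))^{-2}\), which is what makes such cut-offs essentially costless in the estimates of this section.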
Recalling that \(\phi_{0}\) is the ground-state eigenfunction or the zero-energy solution normalized by \(\phi_{0}(-a)=1\), we put
\[\psi(s,t)=\phi_{0}(t)\chi_{\text{in}}(s)+\nu g(s,t),\quad|t|\leq a, \tag{3.2}\]
where the parameter \(\nu\) and the function \(g\), compactly supported within \((-s_{0},s_{0})\times(-a,a)\), will be chosen later. We denote by \(Q_{\text{int}}[\psi]\) the contribution to the shifted quadratic form, \(Q[\psi]-\mu\|\psi\|^{2}\), coming from the strip \(\Omega^{a}\), which can be expressed using the parallel coordinates as
\[Q_{\text{int}}[\psi]= \int_{|t|\leq a}\Big{\{}\Big{(}\frac{\partial\psi}{\partial s} \Big{)}^{2}(1-\kappa(s)t)^{-1}+\Big{(}\frac{\partial\psi}{\partial t}\Big{)}^{ 2}(1-\kappa(s)t)\] \[+(v(t)-\mu)|\psi|^{2}(1-\kappa(s)t)\Big{\}}\,\mathrm{d}s\mathrm{d }t.\]
The first term on the right-hand side can be estimated as
\[\int_{|t|\leq a}\Big{(}\frac{\partial\psi}{\partial s}\Big{)}^{2}(1-\kappa(s) t)^{-1}\,\mathrm{d}s\mathrm{d}t\leq 2\tau_{0}^{-1}\|\phi_{0}\,\!\restriction_{[-a,a]}\|^{ 2}\|\chi_{\text{in}}^{\prime}\|^{2}+C\nu^{2},\]
where the norm refers to \(L^{2}(\mathbb{R})\), \(\tau_{0}:=1-a\|\kappa\|_{\infty}\) is positive by (s3), and \(C\) depends on \(g\) only; we will use the same letter for generic constants in the following. Note that by choosing the parameter \(s^{*}\) in (3.1) large one can make the norm \(\|\chi_{\text{in}}^{\prime}\|=\big{(}\ln\frac{s^{*}}{s_{0}}\big{)}^{-1}\big{(}\frac{1}{s_{0}}-\frac{1}{s^{*}}\big{)}^{1/2}\) small. As for the
other two terms, we have
\[\int_{|t|\leq a}\Big{\{}\Big{(}\frac{\partial\psi}{\partial t}\Big{)} ^{2}(1-\kappa(s)t)+(v(t)-\mu)|\psi|^{2}(1-\kappa(s)t)\Big{\}}\,\mathrm{d}s \mathrm{d}t\] \[=\int_{|t|\leq a}\big{\{}(\phi_{0}^{\prime}(t))^{2}+(v(t)-\mu)| \phi_{0}(t)|^{2}\big{\}}\chi_{\mathrm{in}}^{2}(s)(1-\kappa(s)t)\,\mathrm{d}s \mathrm{d}t\] \[\quad+2\nu\int_{|t|\leq a}\Big{\{}\phi_{0}^{\prime}\frac{\partial g }{\partial t}+(v(t)-\mu)\phi_{0}g\Big{\}}\chi_{\mathrm{in}}(s)(1-\kappa(s)t) \,\mathrm{d}s\mathrm{d}t\] \[\quad+\nu^{2}\int_{|t|\leq a}\Big{\{}\Big{(}\frac{\partial g}{ \partial t}\Big{)}^{2}+(v(t)-\mu)|g|^{2}\Big{\}}(1-\kappa(s)t)\,\mathrm{d}s \mathrm{d}t, \tag{3.3}\]
where the last term on the right-hand side can again be estimated by \(C\nu^{2}\) with a \(C\) depending on the function \(g\) only. Furthermore, integrating the middle term by parts with respect to \(t\), we get
\[2\nu\int_{|t|\leq a}\big{[}-\phi_{0}^{\prime\prime}+(v(t)-\mu) \phi_{0}\big{]}\chi_{\mathrm{in}}(s)g(s,t)(1-\kappa(s)t)\,\mathrm{d}s\mathrm{ d}t\] \[\quad-2\nu\int_{|t|\leq a}\phi_{0}^{\prime}(t)\chi_{\mathrm{in}} (s)g(s,t)\kappa(s)\,\mathrm{d}s\mathrm{d}t, \tag{3.4}\]
where the square bracket in the first integral is zero by assumption.
Notice next that \(\phi_{0}^{\prime}\) cannot vanish identically in the interval \([-a,a]\). Indeed, it is continuous in \(\mathbb{R}\) and we have \(v(t)=0\) for \(|t|>a\), hence if the derivative \(\phi_{0}^{\prime}\) were zero in \([-a,a]\), the function would have to be constant, which is impossible for an eigenfunction or a zero-energy resonance solution. This observation allows us to choose the function \(g\) in such a way that the last integral is positive; it suffices to have it supported in a region where both \(\phi_{0}^{\prime}\) and \(\kappa\) do not change sign and to pick the sign of \(g(s,t)\) accordingly. With such a choice the expression (3.4) will be smaller than \(-\delta\nu\) with some \(\delta>0\); for small \(\nu\) this negative linear term will dominate over those estimated by \(C\nu^{2}\).
It remains to deal with the first term on the right-hand side of (3.3). To simplify the notation, we introduce the following symbols,
\[\phi_{+}=\phi_{0}(a),\quad\xi_{+}=-\sqrt{|\mu|+V_{0}},\quad\xi_{-}=\sqrt{|\mu |}. \tag{3.5}\]
This allows us to write \(\phi_{0}^{\prime}(a)=\xi_{+}\phi_{+}\) and \(\phi_{0}^{\prime}(-a)=\xi_{-}\); recall that \(\phi_{0}(-a)=1\) holds by assumption. The expression in question then can
be rewritten using integration by parts as follows:
\[\int_{|t|\leq a}\big{\{}(\phi_{0}^{\prime}(t))^{2}+(v(t)-\mu)|\phi_{0 }(t)|^{2}\big{\}}\chi_{\rm in}^{2}(s)(1-\kappa(s)t)\,\mathrm{d}s\mathrm{d}t\] \[=\int_{\mathbb{R}}\big{[}\xi_{+}\phi_{+}^{2}(1-\kappa(s)a)-\xi_{- }(1+\kappa(s)a)\big{]}\chi_{\rm in}^{2}(s)\,\mathrm{d}s\] \[\quad+\int_{|t|\leq a}\big{\{}-\phi_{0}^{\prime\prime}(t)+(v(t)- \mu)\phi_{0}(t)\big{\}}\phi_{0}(t)(1-\kappa(s)t)\chi_{\rm in}^{2}(s)\,\mathrm{d }s\mathrm{d}t\] \[\quad+\int_{|t|\leq a}\big{(}-\phi_{0}^{\prime}(t)\phi_{0}(t) \big{)}(\kappa(s))\chi_{\rm in}^{2}(s)\,\mathrm{d}s\mathrm{d}t\] \[=\big{[}\xi_{+}\phi_{+}^{2}-\xi_{-}\big{]}\|\chi_{\rm in}\|^{2}- \big{[}\xi_{+}\phi_{+}^{2}+\xi_{-}\big{]}\,a\int_{\mathbb{R}}\kappa(s)\chi_{ \rm in}^{2}(s)\,\mathrm{d}s\] \[\quad+\frac{1}{2}(\phi_{+}^{2}-1)\int_{\mathbb{R}}\kappa(s)\chi_ {\rm in}^{2}(s)\,\mathrm{d}s, \tag{3.6}\]
where the norm in the last expression refers to \(L^{2}(\mathbb{R})\) and we have used the identity \(\phi_{0}^{\prime}\phi_{0}=\frac{1}{2}(\phi_{0}^{2})^{\prime}\). Since \(\kappa\) has a compact support and \(\chi_{\rm in}^{2}(s)=1\) holds on it by (3.1), we can replace the integrals in the last part of (3.6) by \(\int_{\mathbb{R}}\kappa(s)\,\mathrm{d}s\). Summarizing the estimate, we have obtained for all sufficiently small \(\nu\) the inequality
\[Q_{\rm int}[\psi]\leq \,-\frac{1}{2}\delta\nu+\big{[}\xi_{+}\phi_{+}^{2}-\xi_{-}\big{]}\|\chi_{\rm in}\|^{2}-\big{[}\xi_{+}\phi_{+}^{2}+\xi_{-}\big{]}\,a\int_{\mathbb{R}}\kappa(s)\,\mathrm{d}s\] \[\quad+\frac{1}{2}(\phi_{+}^{2}-1)\int_{\mathbb{R}}\kappa(s)\,\mathrm{d}s+\tau_{0}^{-1}\|\phi_{0}\!\restriction_{[-a,a]}\|^{2}\|\chi_{\rm in}^{\prime}\|^{2}. \tag{3.7}\]
Choosing then the parameter \(s^{*}\gg s_{0}\) on the right-hand side of (3.1), one can achieve that the last term in (3.7) will be smaller than \(\frac{1}{4}\delta\nu\).
Now we finally use the assumptions of Theorem 2.2. First of all, in the zero-energy resonance situation we have \(\xi_{\pm}=0\), so that the second and the third term on the right-hand side vanish, and since the fourth one is supposed to be nonpositive, the estimate reduces to
\[Q_{\rm int}[\psi]\leq-\frac{1}{4}\delta\nu-2|\mu|^{1/2}\|\chi_{\rm in}\|^{2}. \tag{3.8}\]
At the same time, we have \(\mu=0\) in this case, hence (3.8) simply becomes \(Q_{\rm int}[\psi]\leq-\frac{1}{4}\delta\nu\), and to conclude the proof we have to choose the outer part of the trial function in such a way that its contribution to the quadratic form can be made smaller than any fixed positive number.
### Trial function outside the strip
In the zero-energy resonance case, \(\mu=0\), the absence of the bias \(V_{0}\) means that \(\phi_{0}(t)=\mathrm{const}\) holds for \(|t|\geq a\). To construct a suitable mollifier \(\chi_{\rm out}\) we require the following properties:
(i) in \(\mathbb{R}^{2}\setminus\Omega^{a}\) the function depends on \(\rho=\mathrm{dist}(x,O)\) only,
(ii) we have continuity at the boundary: at the points \(x(s,\pm a)\) the relation \(\chi_{\rm out}(x)=\chi_{\rm in}(s)\) holds.
Let us consider the situation where the extensions of the asymptotes of \(\Gamma\) cross; the case of parallel asymptotes pointing in the opposite directions can be dealt with analogously. We again choose \(s_{0}\) in such a way that the points \(\Gamma(\pm s_{0})\) belong to the straight parts of the curve, then \({\rm dist}(\Gamma(s),O)=\rho_{s}:=(|s|-s_{0})+d_{0}\), where \(d_{0}={\rm dist}(\Gamma(s_{0}),O)\) (recall that \({\rm dist}(\Gamma(-s_{0}),O)={\rm dist}(\Gamma(s_{0}),O)\) holds by our choice of the origin).
Given that the distance of the points \(x(s,\pm a)\) from the origin is \(\sqrt{\rho_{s}^{2}+a^{2}}\), in accordance with the requirements (i), (ii) we put
\[\chi_{\rm out}(\rho):=\left\{\begin{array}{cl}\chi_{\rm in}(\sqrt{\rho^{2}- a^{2}}-d_{0}+s_{0})&\quad\mbox{if}\;\;\sqrt{\rho^{2}-a^{2}}\geq d_{0}\\ 1&\quad\mbox{if}\;\;\rho\leq\sqrt{d_{0}^{2}+a^{2}}\end{array}\right.\]
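Indeed, on the straight parts this choice matches (ii) exactly (a one-line check, ours): for \(|s|\geq s_{0}\) the boundary point \(x(s,\pm a)\) satisfies \(\rho=\sqrt{\rho_{s}^{2}+a^{2}}\), hence
\[\sqrt{\rho^{2}-a^{2}}-d_{0}+s_{0}=\rho_{s}-d_{0}+s_{0}=|s|\qquad\text{and}\qquad\chi_{\rm out}(\rho)=\chi_{\rm in}(|s|)=\chi_{\rm in}(s).\]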
This means, in particular, that \(\chi_{\rm out}\) vanishes if its argument exceeds \(s^{*}\), in other words, for \(\rho>\sqrt{(s^{*}-s_{0}+d_{0})^{2}+a^{2}}\). The external trial function is then just the appropriate restriction of the mollifier \(\chi_{\rm out}\) itself:
\[\psi_{\rm out}(x):=\chi_{\rm out}(x)\quad\mbox{if}\;\;x\in\Omega_{\pm} \setminus\Omega_{\pm}^{a}. \tag{3.9}\]
Since \(\mu=0\) by assumption and the potential is zero away from \(\Omega^{a}\), the quantity to be estimated is the kinetic energy contribution to the form (2.3c) from the outer part of the trial function,
\[\int_{\Omega\setminus\Omega^{a}}|\nabla\psi_{\rm out}(x)|^{2}{ \rm d}x \tag{3.10}\] \[=2\pi\int_{\sqrt{d_{0}^{2}+a^{2}}}^{\sqrt{(s^{*}-s_{0}+d_{0})^{2 }+a^{2}}}\Big{|}\frac{{\rm d}}{{\rm d}\rho}\chi_{\rm in}(\sqrt{\rho^{2}-a^{2}} -d_{0}+s_{0})\Big{|}^{2}\rho{\rm d}\rho.\]
Relation (3.8) tells us that one can choose parameters \(\delta\) and \(\nu\) for which the inner contribution to the form is negative (using a sufficiently large \(s^{*}\)), hence to prove the claim it is enough to show that the integral on the right-hand side of (3.10) vanishes if \(s_{0},\,d_{0}\to\infty\) with the difference \(s_{0}-d_{0}\) bounded and \(\frac{s^{*}}{s_{0}}\to\infty\). The values of the integrand on the support of \(\nabla\psi_{\rm out}\) can be expressed using (3.1) as
\[\Big{|}\frac{{\rm d}}{{\rm d}\rho}\chi_{\rm in}(\sqrt{\rho^{2}-a^{2}}-d_{0}+s_{0})\Big{|}^{2}=\Big{|}\Big{(}\ln\frac{s^{*}}{s_{0}}\Big{)}^{-1}\frac{1}{\sqrt{\rho^{2}-a^{2}}-d_{0}+s_{0}}\,\frac{\partial s}{\partial\rho}\Big{|}^{2}\\ =\Big{(}\ln\frac{s^{*}}{s_{0}}\Big{)}^{-2}\,\big{(}\sqrt{\rho^{2}-a^{2}}-d_{0}+s_{0}\big{)}^{-2},\]
because \(\frac{\partial s}{\partial\rho}=1\) in the considered region. Substituting from here into (3.10) we get
\[\int_{\Omega\setminus\Omega^{a}}|\nabla\psi_{\rm out}(x)|^{2}{\rm d}x\] \[\qquad=2\pi\Big{(}\ln\frac{s^{*}}{s_{0}}\Big{)}^{-2}\int_{\sqrt{ d_{0}^{2}+a^{2}}}^{\sqrt{(s^{*}-s_{0}+d_{0})^{2}+a^{2}}}\frac{\rho{\rm d}\rho}{( \sqrt{\rho^{2}-a^{2}}-d_{0}+s_{0})^{2}}\]
Since \(s_{0}-d_{0}\) is bounded and \(a\) is fixed, for a sufficiently large \(d_{0}\) we have, for all \(\rho\) in the integration range,
\[\sqrt{\rho^{2}-a^{2}}-d_{0}+s_{0}\geq\frac{1}{2}\rho\]
in which case we have
\[\int_{\Omega\setminus\Omega^{a}}|\nabla\psi_{\rm out}(x)|^{2}{\rm d}x\leq 8\pi\Big{(}\ln\frac{s^{*}}{s_{0}}\Big{)}^{-2}\,\ln\rho\,\Big{|}_{\sqrt{d_{0}^{2}+a^{2}}}^{\sqrt{(s^{*}-s_{0}+d_{0})^{2}+a^{2}}} \tag{3.11}\]
Using again the fact that \(s^{*},\,d_{0}\to\infty\) while \(a\) is fixed and \(s_{0}-d_{0}\) bounded, we see that the parameters can be chosen so that
\[\ln\frac{\sqrt{(s^{*}-s_{0}+d_{0})^{2}+a^{2}}}{\sqrt{d_{0}^{2}+a^{2}}}\leq\ln \frac{2s^{*}}{\frac{1}{2}s_{0}}=\ln\frac{s^{*}}{s_{0}}+2\ln 2, \tag{3.12}\]
and substituting from (3.12) into (3.11), we get the needed result; this concludes the proof of part (a) of Theorem 2.2.
## 4. Proof of Theorem 2.4 - the first part
Let us pass to the situation where there is again _no bias_, \(V_{0}=0\), the channel profile is _symmetric_, and the transverse operator (2.2) is _subcritical_, \(\mu<0\). The most difficult part is now to construct the exterior part of the trial function, for the interior we can use the result of Sec. 3.1 noting that in view of the assumption (p2) we have \(\phi_{+}=1\) and \(\xi_{+}=-\xi_{-}\) which means that the inequality (3.8) is still valid.
### Curves with a piecewise constant curvature
We divide the construction into two parts, considering first a particular class of the generating curves assuming additionally that
(s4) the curved part of \(\Gamma\) is piecewise \(C^{\infty}\)-smooth, consisting of a _finite array of circular arcs_; at its endpoints it is \(C^{1}\)-smoothly connected to the halflines.
Consequently, the signed curvature \(\kappa(\cdot)\) of such a curve is a step function. To begin with, we define in \(\Omega_{\rm out}\) the function \(\phi\) by
\[\phi(x):=\exp\{-\xi({\rm dist}(x,\Gamma)-a)\},\quad x\in\mathbb{R}^{2} \setminus\Omega^{a}, \tag{4.1}\]
where \(\xi:=\xi_{-}=-\xi_{+}=|\mu|^{1/2}\). The sought trial function will be then of the form \(\psi_{\rm out}=\phi\chi_{\rm out}\) with the mollifier \(\chi_{\rm out}\) to be specified below.
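Note a simple property of (4.1) used throughout the following estimates (our remark): since the distance function satisfies \(|\nabla\,\mathrm{dist}(x,\Gamma)|=1\) almost everywhere in \(\Omega_{\rm out}\), we have
\[|\nabla\phi(x)|=\xi\,\mathrm{e}^{-\xi(\mathrm{dist}(x,\Gamma)-a)}=\xi\,\phi(x),\qquad\text{hence}\quad|\nabla\phi|^{2}=|\mu|\,|\phi|^{2}=-\mu\,|\phi|^{2}\quad\text{a.e. in }\Omega_{\rm out}.\]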
As before, we will focus on the situation where the asymptotes of \(\Gamma\) are not parallel, the case with \(\theta_{0}=\frac{\pi}{2}\) can be treated in a similar way.
Since \(\theta_{0}>0\) by assumption, we can choose conical neighborhoods of the asymptotes which do not intersect, that is, we pick \(\Delta\theta_{0}\) sufficiently small so that \([-\theta_{0}-\Delta\theta_{0},-\theta_{0}+\Delta\theta_{0}]\cap[\theta_{0}-\Delta\theta_{0},\theta_{0}+\Delta\theta_{0}]=\emptyset\). Furthermore, we pick an \(r_{0}>0\) large enough to ensure that the curved part of \(\Gamma\) is contained in the disk of one half that radius, \(B_{\frac{1}{2}r_{0}}(O)\), centered at the coordinate origin \(O\). At the points of the corresponding conical sectors, \(x=(\rho,\theta)\in\mathbb{R}^{2}\setminus B_{r_{0}}(O)\) with \(\theta\in[\theta_{0}-\Delta\theta_{0},\theta_{0}+\Delta\theta_{0}]\) or \(\theta\in[-\theta_{0}-\Delta\theta_{0},-\theta_{0}+\Delta\theta_{0}]\) we can use the \((s,t)\) coordinates and define the mollifier \(\chi_{\mathrm{out}}\) depending on the longitudinal variable only,
\[\chi_{\mathrm{out}}(s,t)=\chi_{\mathrm{in}}(s),\]
where the right-hand side is given by (3.1). Furthermore, at the points \(x\in B_{r_{0}}(O)\setminus\Omega^{a}\) we put \(\chi_{\mathrm{out}}(x)=1\), and finally, in the remaining part of the plane we choose \(\chi_{\mathrm{out}}\) independent of \(\theta\), in other words, as a function of the distance \(\rho\) from the origin \(O\) only, and such that \(\chi_{\mathrm{out}}\) is continuous in \(\Omega_{\mathrm{out}}\). It is clear that the radial decay of such an external mollifier is determined by the behavior of the function (3.1).
Since the potential is supported in \(\Omega^{a}\), the contribution to the quadratic form (2.3c) in the exterior region comes from the kinetic term only. The trial function factorizes into a product and our first goal is to show that the cross-term containing the integral of \(2\nabla\phi\cdot\nabla\chi_{\mathrm{out}}\) is small for large \(r_{0}\), in particular, that one can make it smaller than \(\frac{1}{16}\delta\nu\) with respect to the quantities appearing in (3.8).
**Lemma 4.1**.: _We have_
\[\int_{\Omega_{\mathrm{out}}}|\nabla\psi_{\mathrm{out}}(x)|^{2} \mathrm{d}x\leq\int_{\Omega_{\mathrm{out}}}|\nabla\phi(x)|^{2}\chi_{\mathrm{ out}}^{2}(x)\,\mathrm{d}x\] \[\qquad+\int_{\Omega_{\mathrm{out}}}|\phi(x)|^{2}|\nabla\chi_{ \mathrm{out}}(x)|^{2}\mathrm{d}x+\mathcal{O}(r_{0}^{-1})\quad\text{as}\;\;r_{ 0}\to\infty. \tag{4.2}\]
Proof.: Since \(\psi_{\mathrm{out}}=\phi\chi_{\mathrm{out}}\) for \(x\in\Omega_{\mathrm{out}}\), we have to estimate the integral \(\int_{\Omega_{\mathrm{out}}}|\nabla\phi(x)\cdot\nabla\chi_{\mathrm{out}}(x)|\,\mathrm{d}x\) to deal with the cross-term. To this aim we first note that \(\chi_{\mathrm{out}}=1\) holds inside \(B_{r_{0}}(O)\) so we have to consider only the complement of the disk. In the conical sectors of \(\mathbb{R}^{2}\setminus B_{r_{0}}(O)\) referring to \([\pm\theta_{0}-\Delta\theta_{0},\pm\theta_{0}+\Delta\theta_{0}]\) the point of \(\Gamma\) nearest to \(x=(\rho,\theta)\) lies on the straight part of \(\Gamma\), as the distance to it is at most \(\rho\Delta\theta_{0}<\frac{1}{2}\rho\) while that to the curved part is at least \(\rho-\frac{1}{2}r_{0}>\frac{1}{2}\rho\). This implies that \(\nabla\phi\) is perpendicular to \(\nabla\chi_{\mathrm{out}}\) there, and the corresponding contribution to the integral vanishes too. Finally, in view of our definition of \(\chi_{\mathrm{out}}\) in combination with (3.1) we see that \(\nabla\chi_{\mathrm{out}}\) is bounded outside \(B_{r_{0}}(O)\) and the two sectors, and furthermore, we have \(|\nabla\phi(x)|\leq\xi\phi(x)\leq C\,\mathrm{e}^{-\rho/2}\) which yields
\[\int_{\Omega_{\mathrm{out}}}|\nabla\phi(x)\cdot\nabla\chi_{\mathrm{out}}(x)| \,\mathrm{d}x\leq C^{\prime}\int_{\Omega_{\mathrm{out}}}|\nabla\phi(x)|\, \mathrm{d}x=\mathcal{O}(r_{0}^{-1})\]
as \(r_{0}\to\infty\), which is what we set out to prove.
Let us turn to the second term on the right-hand side of (4.2).
**Lemma 4.2**.: _We have_
\[\int_{\Omega_{\rm out}}|\phi(x)|^{2}|\nabla\chi_{\rm out}(x)|^{2}{\rm d}x= \mathcal{O}(r_{0}^{-1})\quad\text{as}\;\;r_{0}\to\infty. \tag{4.3}\]
Proof.: The integral over the disk is again zero and using an argument analogous to that of the previous proof, one can check that the integral over the region outside the conical sectors is \(\mathcal{O}(r_{0}^{-1})\) as \(r_{0}\to\infty\). Inside the sectors we have
\[|\phi(x)|^{2}|\nabla\chi_{\rm out}(x)|^{2}=|\phi_{0}(t)|^{2}|\chi^{\prime}_{ \rm in}(s)|^{2}\leq|\chi^{\prime}_{\rm in}(s)|^{2}\]
with \(\chi_{\rm in}\) given by (3.1); recall that outside \(\Omega^{a}\) the function \(\phi_{0}\) decays exponentially with the distance from \(\Gamma\) and \(\phi_{0}(\pm a)=1\) holds by assumption. Hence the integral in (4.3) can be estimated by the squared norm of \(\chi^{\prime}_{\rm in}\), and since to a given \(r_{0}\) one can choose \(s^{*}=s^{*}(r_{0})\) in such a way that \(\ln\frac{s^{*}}{s_{0}}>Cr_{0}\) for some \(C>0\), the claim follows.
Combining the two lemmata, we see that choosing \(r_{0}\) sufficiently large one can achieve that the outer contribution to the first term of (2.3c) can be estimated by the first expression on the right-hand side of (4.2) with a small error, say
\[\int_{\Omega_{\rm out}}|\nabla\psi_{\rm out}(x)|^{2}{\rm d}x\leq\int_{\Omega_ {\rm out}}|\nabla\phi(x)|^{2}\chi^{2}_{\rm out}(x)\,{\rm d}x+\frac{|\mu|}{8} \delta\nu. \tag{4.4}\]
Next we note that in part (a) of Theorem 2.4 the bias is absent, \(V_{0}=0\), which means that the function (4.1) satisfies
\[|\nabla\phi|^{2}-\mu|\phi|^{2}=2|\nabla\phi|^{2}\]
almost everywhere in \(\Omega_{\rm out}\). This means that we can estimate the whole exterior contribution to the form \(Q[\psi]-\mu\|\psi\|^{2}\) by doubling the kinetic term and neglecting the one containing the eigenvalue \(\mu\); in combination with (3.8) this tells us that in order to prove the theorem it is sufficient to check that
\[2\int_{\Omega_{\rm out}}|\nabla\phi(x)|^{2}\chi^{2}_{\rm out}(x)\,{\rm d}x \leq 2|\mu|^{1/2}\|\chi_{\rm in}\|^{2}+\frac{|\mu|}{8}\delta\nu,\]
and that is in view of \(|\nabla\phi|^{2}=-\mu|\phi|^{2}\) further equivalent to
\[\int_{\Omega_{\rm out}}|\phi(x)\chi_{\rm out}(x)|^{2}\,{\rm d}x\leq|\mu|^{-1/ 2}\|\chi_{\rm in}\|^{2}+\frac{1}{16}\delta\nu. \tag{4.5}\]
The rest of the proof consists of verification of the inequality (4.5). To begin with, we estimate the contribution to its left-hand side from the parts of the plane adjacent to the straight parts of the waveguide; we choose them as conical sectors similar to those used in the proof of Lemma 4.1. We recall that for \(x=(\rho,\theta)\) with \(\rho\geq r_{0}\) and \(\theta\in[\pm\theta_{0}-\Delta\theta_{0},\pm\theta_{0}+\Delta\theta_{0}]\) we can use the \((s,t)\) coordinates simultaneously
with the polar ones. We choose an \(\hat{s}\geq r_{0}\) so that the parts of \(\Gamma\) with \(|s|\geq\hat{s}\) lie outside \(B_{r_{0}}(O)\), and at the same time we choose \(s_{0}\) of (3.1) in such a way that \(s_{0}>\hat{s}\). Then we define
\[K_{\pm}:=\big{\{}\,x:\,|s|\geq\hat{s},\,|t|\geq a,\,\theta\in[\pm\theta_{0}- \Delta\theta_{0},\pm\theta_{0}+\Delta\theta_{0}]\,\big{\}} \tag{4.6}\]
Within these sets, the closest points of \(\Gamma\) are those on the straight parts of the curve with the same coordinate \(s\). Then it is easy to see that
\[\int_{\Omega_{\mathrm{out}}\cap\{K_{+}\cup K_{-}\}}|\phi(x)\chi_{\mathrm{out}} (x)|^{2}\,\mathrm{d}x\leq|\mu|^{-1/2}\|\chi_{\mathrm{in}}\|_{L^{2}((-\infty,- \hat{s}]\cup[\hat{s},\infty))}^{2} \tag{4.7}\]
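This follows by separation of variables in the straight parts (our spelled-out check): within \(K_{\pm}\) the \((s,t)\) coordinates are Cartesian, \(\phi(x)=\mathrm{e}^{-\xi(|t|-a)}\) and \(\chi_{\mathrm{out}}(x)=\chi_{\mathrm{in}}(s)\), so that
\[\int_{\Omega_{\mathrm{out}}\cap\{K_{+}\cup K_{-}\}}|\phi\chi_{\mathrm{out}}|^{2}\,\mathrm{d}x\leq\int_{|s|\geq\hat{s}}|\chi_{\mathrm{in}}(s)|^{2}\,\mathrm{d}s\int_{|t|\geq a}\mathrm{e}^{-2\xi(|t|-a)}\,\mathrm{d}t=\frac{1}{\xi}\,\|\chi_{\mathrm{in}}\|^{2}_{L^{2}((-\infty,-\hat{s}]\cup[\hat{s},\infty))},\]
and \(\xi^{-1}=|\mu|^{-1/2}\).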
It remains to integrate the function \(|\phi\chi_{\mathrm{out}}|^{2}\) over \(\Omega_{\mathrm{out}}\setminus\{K_{+}\cup K_{-}\}\). Obviously, the integral will increase if we replace \(\chi_{\mathrm{out}}\) by one, hence to complete the proof, it is in view of (3.8) and (4.7) enough to check that
\[\int_{\Omega_{\mathrm{out}}\setminus\{K_{+}\cup K_{-}\}}|\phi(x)|^{2}\, \mathrm{d}x\leq 2\hat{s}\,|\mu|^{-1/2}+\frac{1}{16}\delta\nu\,; \tag{4.8}\]
we have used here the fact that \(\|\chi_{\mathrm{in}}\|_{L^{2}((-\hat{s},\hat{s}))}^{2}=2\hat{s}\).
To estimate the indicated integral we employ the additional assumption (s4); the two parts of \(\Gamma\) corresponding to \(|s|>\hat{s}\) will be considered as arcs of zero curvature, cf. Remark 4.4 below. First of all, we note that the function \(d_{x}:\mathbb{R}\to\mathbb{R}_{+}\), defined by \(d_{x}(s):=\mathrm{dist}(x,\Gamma(s))\), is \(C^{1}\) smooth for any \(x\in\mathbb{R}^{2}\), and under the assumption (s4) it is piecewise monotone because on each arc it can have at most one extremum. At the same time, \(d_{x}(s)\to\infty\) holds as \(s\to\pm\infty\), hence the function has a global minimum, positive as long as \(x\) does not lie on the curve, and in view of its continuity it may also have a finite number of local extrema which come in pairs, a minimum adjacent to a maximum. Let \(s_{x}^{0}\) be the coordinate of the global minimum and denote by \(s_{x}^{i}\) with \(i\) running over an appropriate finite set of integers the coordinates of all the extrema; we introduce the symbol \(M_{x}^{\uparrow}\) for the subset referring to the local maxima and \(M_{x}^{\downarrow}\) for the set of coordinates of the minima. Then it obviously holds
\[\exp\{-2\xi(d_{x}(s_{x}^{0})-a)\}\leq -\!\!\!\sum_{s_{x}^{i}\in M_{x}^{\uparrow}}\exp\{-2\xi(d_{x}(s_{x} ^{i})-a)\}\] \[+\!\!\!\sum_{s_{x}^{i}\in M_{x}^{\downarrow}}\exp\{-2\xi(d_{x}(s_ {x}^{i})-a)\} \tag{4.9}\]
for all \(x\in\Omega_{\mathrm{out}}\); indeed, each local maximum is adjacent to a local minimum with a smaller value of \(d_{x}\), so the paired terms on the right-hand side add up to a nonnegative quantity, while the global minimum contributes the left-hand side. To estimate the integral in (4.8), we have to integrate the right-hand side of (4.9) over \(\Omega_{\mathrm{out}}\setminus\{K_{+}\cup K_{-}\}\). To this aim, let us first collect several simple geometric statements easy to check:
**Proposition 4.3**.: _Let \(\Gamma_{j}\) be one of the arcs of \(\Gamma\) and denote by \(\omega_{1j},\omega_{2j},\omega_{3j}\) and \(\Omega_{j}^{a}\) the open regions shown in Fig. 2. Then the following holds true:_
1. _If_ \(x\in\omega_{1j}\cup\omega_{2j}\)_, then_ \(d_{x}(\cdot)\) _has a minimum in the interior of_ \(\Gamma_{j}\)_._
2. _If_ \(x\in\omega_{3j}\)_, then_ \(d_{x}(\cdot)\) _has a maximum in the interior of_ \(\Gamma_{j}\)_;_
3. _If_ \(x\not\in\bar{\omega}_{1j}\cup\bar{\omega}_{2j}\cup\bar{\omega}_{3j}\cup\bar{\Omega}_{j}^{a}\)_, then_ \(d_{x}(\cdot)\) _has no extremum on_ \(\Gamma_{j}\)_._
4. \(d_{x}(\cdot)\) _cannot have more than one critical point in the interior of_ \(\Gamma_{j}\)_._
5. _If_ \(x\in\omega_{kj}\) _for any of_ \(k=1,2,3\)_, then the one-sided derivative_ \(d_{x}^{\prime}(s)\neq 0\) _at the endpoints of_ \(\Gamma_{j}\)_._
**Remark 4.4**.: With an abuse of terminology we include into (s4) also situations when a \(\Gamma_{j}\) is a straight segment, that is, \(\kappa(s)=0\) holds on \(\Gamma_{j}\). In that case the wedge-shaped regions \(\omega_{1j}\) and \(\omega_{2j}\) become semi-infinite strips and \(\omega_{3j}\) does not exist. This concerns, in particular, the two straight parts of \(\Gamma\) corresponding to \(|s|>\hat{s}\).
Within the regions we introduced, the minimal and maximal distances are easily expressed; we have
\[\begin{split} d_{x}(s_{x}^{i})=\operatorname{dist}(x,\Gamma_{j}) &\text{if }\ s_{x}^{i}\in\Gamma_{j}\cap M_{x}^{\downarrow},\\ d_{x}(s_{x}^{i})=|\kappa_{j}|^{-1}+\operatorname{dist}(x,O_{j}) &\text{if }\ s_{x}^{i}\in\Gamma_{j}\cap M_{x}^{\uparrow},\end{split} \tag{4.10}\]
where \(O_{j}\) is the center of the corresponding circular arc.
Let \(\iota_{j}^{1,2}\) and \(\iota_{j}^{3}\) be the characteristic functions of the sets \(\omega_{1j}\cup\omega_{2j}\) and \(\omega_{3j}\), respectively. In view of Proposition 4.3 and relations (4.10), we can replace the first term on the right-hand side of (4.9), everywhere except the zero measure set referring to the boundaries of the regions \(\omega_{kj}\), \(k=1,2,3\), with
\[-\sum_{j}\exp\{-2\xi(|\kappa_{j}|^{-1}+\operatorname{dist}(x,O_{j})-a)\}\iota _{j}^{3}(x) \tag{4.11}\]
and the second term similarly by
\[\sum_{j}\exp\{-2\xi(\operatorname{dist}(x,\Gamma_{j})-a)\}\iota_{j}^{1,2}(x) \tag{4.12}\]
Figure 2. The regions used in Proposition 4.3

Integrating now (4.11) and (4.12) over \(\Omega_{\mathrm{out}}\setminus\{K_{+}\cup K_{-}\}\) and exchanging the order of integration over \(x\) with summation over \(j\), we can, using (4.9), estimate \(\int_{\Omega_{\mathrm{out}}\setminus\{K_{+}\cup K_{-}\}}\exp\{-2\xi(d_{x}(s_{x}^{0})-a)\}\,\mathrm{d}x\) from above by
\[\sum_{j}\int_{(\omega_{1j}\cup\omega_{2j})\cap\{\Omega_{\mathrm{out }}\setminus\{K_{+}\cup K_{-}\}\}}\exp\{-2\xi(\mathrm{dist}(x,\Gamma_{j})-a)\}\, \mathrm{d}x \tag{4.13}\] \[-\sum_{j}\int_{\omega_{3j}\cap\{\Omega_{\mathrm{out}}\setminus\{K _{+}\cup K_{-}\}\}}\exp\{-2\xi(|\kappa_{j}|^{-1}+\mathrm{dist}(x,O_{j})-a)\}\, \mathrm{d}x,\]
where the sums run over all the indices of \(\Gamma_{j}\) including those of the straight segments of the curve with \(|s|>\hat{s}\). Note that this estimate includes in general a double counting since the same \(x\) may belong to different \(\omega_{kj}\); this does not matter as long as we consider the contributions referring to a given \(\Gamma_{j}\) together.
Our next goal is to show that the expression (4.13) cannot decrease if we replace the integration domains by \((\omega_{1j}\cup\omega_{2j})\setminus\{K_{+}\cup K_{-}\}\) and \(\omega_{3j}\setminus\{K_{+}\cup K_{-}\}\), respectively. To this aim, consider a fixed arc \(\Gamma_{j_{0}}\) and the respective segment \(\Omega_{j_{0}}^{a}\) of the strip \(\Omega^{a}\) as indicated in Fig. 2. For a point \(x\in\Omega_{j_{0}}^{a}\) the function \(d_{x}(\cdot)\) has the global minimum on \(\Gamma_{j_{0}}\) with a coordinate \(s_{x}^{0}\) and all the local extrema, if they exist, come in pairs situated outside \(\Gamma_{j_{0}}\). This yields the estimate
\[0\leq-\!\!\sum_{s_{x}^{i}\in M_{x}^{\uparrow}}\exp\{-2\xi(d_{x}(s_{x}^{i})-a)\}+\!\!\sum_{s_{x}^{i}\in M_{x}^{\downarrow}\setminus\{s_{x}^{0}\}}\!\!\exp\{-2\xi(d_{x}(s_{x}^{i})-a)\},\qquad x\in\Omega_{j_{0}}^{a}, \tag{4.14}\]
and integrating this inequality over \(\Omega_{j_{0}}^{a}\), using Proposition 4.3 together with the relations (4.10), and summing the result over \(j_{0}\), we get
\[0\leq -\sum_{j}\int_{\omega_{3j}\cap\Omega^{a}}\exp\{-2\xi(|\kappa_{j}|^{- 1}+\operatorname{dist}(x,O_{j})-a)\}\,\mathrm{d}x\] \[+\sum_{j}\int_{(\omega_{1j}\cup\omega_{2j})\cap\Omega^{a}}\exp\{-2 \xi(\operatorname{dist}(x,\Gamma_{j})-a)\}\,\mathrm{d}x. \tag{4.15}\]
Combining now (4.13) and (4.15), we obtain
\[\int_{\Omega_{\operatorname{out}}\setminus\{K_{+}\cup K_{-}\}}| \phi(x)|^{2}\,\mathrm{d}x \tag{4.16}\] \[\leq\sum_{j}\int_{(\omega_{1j}\cup\omega_{2j})\setminus\{K_{+} \cup K_{-}\}}\exp\{-2\xi(\operatorname{dist}(x,\Gamma_{j})-a)\}\,\mathrm{d}x\] \[-\sum_{j}\int_{\omega_{3j}\setminus\{K_{+}\cup K_{-}\}}\exp\{-2 \xi(|\kappa_{j}|^{-1}+\operatorname{dist}(x,O_{j})-a)\}\,\mathrm{d}x.\]
The summation in (4.16) runs over all the curve segments including the straight ones. Let us first estimate the contribution of these infinite 'arcs' to the positive part of (4.16), having in mind that in accordance with Remark 4.4 the segments with \(\kappa=0\) do not contribute to the negative one. We denote by \(\Gamma_{+}\) the segment with \(s>\hat{s}\) and by \(\omega_{1+},\omega_{2+}\) the corresponding semi-infinite strips; then we have
\[\int_{(\omega_{1+}\cup\omega_{2+})\setminus\{K_{+}\cup K_{-}\}}\exp\{-2\xi(\operatorname{dist}(x,\Gamma_{+})-a)\}\,\mathrm{d}x \tag{4.17}\] \[\qquad\leq 2\int_{\omega_{1+}\setminus K_{+}}\exp\{-2\xi(\operatorname{dist}(x,\Gamma_{+})-a)\}\,\mathrm{d}x\] \[\qquad=2\int_{\rho(\hat{s})\cos\Delta\theta_{0}}^{\infty}\int_{s\sin\Delta\theta_{0}}^{\infty}\exp\{-2\xi(t-a)\}\,\mathrm{d}t\mathrm{d}s\] \[\qquad=\frac{\operatorname{e}^{2\xi a}}{4\xi^{2}\sin\Delta\theta_{0}}\operatorname{e}^{-\xi\sin(2\Delta\theta_{0})\cdot\rho(\hat{s})},\]
where \(\rho(\hat{s})\) is the distance of the point \(\Gamma(\hat{s})\) to the origin. In view of our choice of \(\hat{s}\) we have \(\rho(\hat{s})\geq r_{0}\) and the integral at the right-hand side of (4.17) can be made arbitrarily small by choosing \(r_{0}\) large enough. An analogous argument applies to the segment of \(\Gamma\) with \(s<-\hat{s}\).
Denote now by \(\sum_{j}^{*}\) the sum over all the \(\Gamma_{j}\) except \(\Gamma_{\pm}\). The conclusion just made allows us to replace the sum \(\sum_{j}\) in (4.16) by \(\sum_{j}^{*}\) with an error which can be made arbitrarily small by choosing an appropriately large \(r_{0}\). Furthermore, we note that the positive part of (4.16) cannot decrease if we enlarge the integration domain in all the integrals there replacing \((\omega_{1j}\cup\omega_{2j})\setminus\{K_{+}\cup K_{-}\}\) by \(\omega_{1j}\cup\omega_{2j}\).
Our next goal is to argue that we can do the same in the negative part of (4.16), replacing \(\omega_{3j}\setminus\{K_{+}\cup K_{-}\}\) by \(\omega_{3j}\). In such a case, of
course, the corresponding change of the integrals goes in the wrong way; our aim is to show that it again produces an error which can be made small if \(r_{0}\) is large. Indeed, regions \(\omega_{3j}\) exist only for the curved segments of \(\Gamma\) and those are by assumption inside \(B_{\frac{1}{2}r_{0}}(O)\), while the regions \(K_{\pm}\) are outside \(B_{r_{0}}(O)\). Consequently, the contributions from the extended integration domains are
\[\int_{\omega_{3j}\cap\{K_{+}\cup K_{-}\}}\exp\{-2\xi(|\kappa_{j}|^{ -1}+\operatorname{dist}(x,O_{j})-a)\}\,\mathrm{d}x \tag{4.18}\] \[\quad\leq\mathrm{e}^{2\xi a}\,|\Gamma_{j}|\int_{\rho\geq\sqrt{3}r _{0}/2}^{\infty}\mathrm{e}^{-\sqrt{3}\,\xi\rho}\,\rho\,\mathrm{d}\rho=|\Gamma_ {j}|\,\mathcal{O}(\mathrm{e}^{-3\xi r_{0}/2})\]
uniformly in \(j\), and since the length of the curved part is finite, the error coming from the extension of the integration domain is \(\mathcal{O}(\mathrm{e}^{-3\xi r_{0}/2})\). Combining (4.18) with (4.16) we get
\[\int_{\Omega_{\mathrm{out}}\setminus\{K_{+}\cup K_{-}\}}|\phi(x) |^{2}\,\mathrm{d}x \tag{4.19}\] \[\leq\sideset{}{{}^{*}}{\sum}_{j}\int_{\omega_{1j}\cup\omega_{2j}} \exp\{-2\xi(\operatorname{dist}(x,\Gamma_{j})-a)\}\,\mathrm{d}x\] \[\quad-\sideset{}{{}^{*}}{\sum}_{j}\int_{\omega_{3j}}\exp\{-2\xi( |\kappa_{j}|^{-1}+\operatorname{dist}(x,O_{j})-a)\}\,\mathrm{d}x+\mathcal{O}( \mathrm{e}^{-3\xi r_{0}/2}).\]
It is not difficult to evaluate the integrals appearing on the right-hand side of (4.19): we have
\[\int_{\omega_{2j}} \exp\{-2\xi(\operatorname{dist}(x,\Gamma_{j})-a)\}\,\mathrm{d}x=\Big{(}\frac{1}{2\xi}+\frac{a|\kappa_{j}|}{2\xi}+\frac{|\kappa_{j}|}{4\xi^{2}}\Big{)}|\Gamma_{j}|\] \[=\frac{|\Gamma_{j}|}{2\xi}+\frac{a}{2\xi}\int_{\Gamma_{j}}|\kappa(s)|\,\mathrm{d}s+\frac{1}{4\xi^{2}}\int_{\Gamma_{j}}|\kappa(s)|\,\mathrm{d}s \tag{4.20}\]
and
\[\int_{\omega_{1j}} \exp\{-2\xi(\operatorname{dist}(x,\Gamma_{j})-a)\}\,\mathrm{d}x=\frac{|\Gamma_{j}|}{2\xi}-\frac{a}{2\xi}\int_{\Gamma_{j}}|\kappa(s)|\,\mathrm{d}s\] \[-\frac{1}{4\xi^{2}}\int_{\Gamma_{j}}|\kappa(s)|\,\mathrm{d}s+\frac{1}{4\xi^{2}}\int_{\Gamma_{j}}\mathrm{e}^{-2\xi(|\kappa(s)|^{-1}-a)}|\kappa(s)|\,\mathrm{d}s.\]
for the positive part of the estimate, while in the negative one we use
\[\int_{\omega_{3j}}\exp\{-2\xi(|\kappa_{j}|^{-1}+\operatorname{dist}(x,O_{j})-a)\}\,\mathrm{d}x=\frac{1}{4\xi^{2}}\int_{\Gamma_{j}}\mathrm{e}^{-2\xi(|\kappa(s)|^{-1}-a)}|\kappa(s)|\,\mathrm{d}s.\]
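For instance, the first of these values, (4.20), is a one-line computation in polar coordinates centered at \(O_{j}\) (our verification): in \(\omega_{2j}\) one has \(\operatorname{dist}(x,\Gamma_{j})=\operatorname{dist}(x,O_{j})-|\kappa_{j}|^{-1}\) and the angular width of the sector is \(|\kappa_{j}|\,|\Gamma_{j}|\), so that
\[\int_{\omega_{2j}}\mathrm{e}^{-2\xi(\operatorname{dist}(x,\Gamma_{j})-a)}\,\mathrm{d}x=|\kappa_{j}|\,|\Gamma_{j}|\int_{|\kappa_{j}|^{-1}+a}^{\infty}\mathrm{e}^{-2\xi(r-|\kappa_{j}|^{-1}-a)}\,r\,\mathrm{d}r=|\Gamma_{j}|\Big{(}\frac{1}{2\xi}+\frac{a|\kappa_{j}|}{2\xi}+\frac{|\kappa_{j}|}{4\xi^{2}}\Big{)},\]
and the other two integrals are evaluated in the same way.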
Summing finally the contributions from a given \(\Gamma_{j}\) we get \(|\Gamma_{j}|\xi^{-1}\), hence the expression (4.13) is smaller than \(2|\mu|^{-1/2}\hat{s}+o(1)\) as \(r_{0}\to\infty\), which according to inequality (4.8) proves part (a) of Theorem 2.4 under the additional assumption (s4).
### Completing the proof
We will use the same trial function as before, in particular, its outer part will again be of the form \(\psi_{\mathrm{out}}=\phi\chi_{\mathrm{out}}\) with \(\phi\) given by (4.1). We have to show that (4.8) remains valid without the assumption (s4). The idea is to approximate the curve \(\Gamma\) satisfying (s1) by curves with a piecewise constant curvature, the same length and the same halfline asymptotes. Specifically, we are going to use the following result:
**Theorem 4.5** (Sabitov-Slovesnov [15]).: _Let \(\Gamma\) be a \(C^{3}\)-smooth curve consisting of a finite number of segments such that on each of them the monotonicity character of the signed curvature \(\kappa(\cdot)\) of \(\Gamma\) and its sign are preserved. Then \(\Gamma\) can be approximated by a \(C^{1}\)-smooth function \(\hat{\Gamma}\) of the same length, the curvature of which is piecewise constant having jumps at the points \(s_{1}<s_{2}<\cdots<s_{N}\), in the sense that the estimates_
\[\|\Gamma^{(m)}-\hat{\Gamma}^{(m)}\|_{\infty}\leq C\max_{1\leq k\leq N-1}(s_{k +1}-s_{k})^{3-m},\quad m=0,1,2, \tag{4.21}\]
_hold with some \(C>0\) for the function \(\Gamma\) and its first two derivatives._
The approximation bears a local character so we can refine it at a fixed part of \(\Gamma\) without changing anything at the rest. The length here means the arc length distance between a fixed pair of points, and naturally, the second derivative \(\hat{\Gamma}^{(2)}\) does not exist in general at the points \(\{s_{k}\}\). Note also that the result does not require \(\Gamma\) to be a unit-speed curve. It is obvious that the hypotheses of Theorem 4.5 are satisfied under our assumption (s1).
Following the construction of [15], we can, for any given \(\varepsilon>0\), divide \(\Gamma\) into a finite union of segments such that on each of the corresponding intervals of the arc-length coordinate \(s\) we approximate it by a pair of circular arcs with the following properties:
(i) the distance between \(\Gamma\) and the arcs does not exceed \(\varepsilon\),
(ii) the curvature of the arcs is in the interval \([\kappa_{-},\kappa_{+}]\), where \(\kappa_{\pm}\) is, respectively, the maximum and minimum of \(|\kappa(s)|\) over the interval in question,
(iii) the arcs are \(C^{1}\)-smoothly connected mutually and to the rest of the curve corresponding to \(s\) outside the interval,
(iv) the sum of the arc lengths is the same as the length of the approximated segment of \(\Gamma\).
We use therefore a family \(\{\Gamma_{n}\}\) of such arcwise curves approximating the non-straight part of \(\Gamma\) which corresponds to a decreasing sequence \(\{\varepsilon_{n}\}\) with \(\varepsilon_{n}\to 0\) as \(n\to\infty\); in view of Theorem 4.5 the corresponding sequence of partitions, \(\{s_{k}^{(n)}\}\), must be refining in the parts of the curve where its curvature is non-constant, so that \(s_{k+1}^{(n)}-s_{k}^{(n)}\to 0\) as \(n\to\infty\); without loss of generality we may suppose that \(\{s_{k}^{(n)}\}\subset\{s_{k}^{(n+1)}\}\). If \(\kappa(\cdot)\) is non-constant in the vicinity of a point \(s\), we can thus find a subsequence \(\{s_{k_{n}}^{(n)}\}\) such that \(s_{k_{n}}^{(n)}\to s-\) as \(n\to\infty\) and \(s_{k_{n}+1}^{(n)}\to s+\); by
the \(C^{3}\)-smoothness of \(\Gamma\) and the property (ii) above we then infer that \(\kappa_{k_{n}}^{(n)}\) and \(\kappa_{k_{n}+1}^{(n)}\), the curvatures of the arcs with the endpoints at \(\{s_{k_{n}}^{(n)}\}\) and \(s_{k_{n}+1}^{(n)}\), respectively, converge to \(\kappa(s)\) when \(n\to\infty\).
Denoting then the piecewise constant curvature of the approximating curve \(\Gamma_{n}\) as \(\kappa_{n}\) and putting
\[\delta_{n}:=2\int_{|t|\leq a}\phi_{0}^{\prime}(t)\chi_{\rm in}(s)g(s,t)\kappa_{ n}(s)\,{\rm d}s{\rm d}t,\]
we see that \(\delta_{n}\to\delta\), two times the integral in the second term on the right-hand side of (3.4), as \(n\to\infty\). Keeping the same parameter \(\nu\), we have also \(\delta_{n}\nu\to\delta\nu\) which means that the estimate (3.8) holds uniformly for all the approximating curves with \(n\) large enough, with the first term on its right-hand side replaced, say, by \(-\frac{1}{5}\delta\nu\).
Note further that the trial function we have constructed is supported in a disk of radius \(R_{\rm supp}\) which depends on the value of \(\delta\nu\) - it must be large enough to ensure that the estimate (4.8) is still valid - and on the radius \(\frac{1}{2}r_{0}\) of the disk containing the curved part of \(\Gamma\) and its approximants \(\Gamma_{n}\). Since \(\|\Gamma-\Gamma_{n}\|_{\infty}\to 0\) as \(n\to\infty\) by (4.21), we can choose \(R_{\rm supp}\) satisfying the requirements for all \(n\) large enough. Moreover, by construction
\[{\rm dist}(x,\Gamma)\geq{\rm dist}(x,\Gamma_{n})-\varepsilon_{n}\]
holds for \(x\in\Omega_{\rm out}\), and consequently,
\[\int_{\tilde{\Omega}_{\rm out}}\exp\{-2\xi({\rm dist}(x,\Gamma)-a)\}\,{\rm d}x\] \[\qquad\leq\int_{\tilde{\Omega}_{\rm out}}\exp\{-2\xi({\rm dist}(x,\Gamma_{n})-a)\}\,{\rm d}x+\big{(}{\rm e}^{2\xi\varepsilon_{n}}-1\big{)}\,{\rm e}^{2\xi a}\,\pi R_{\rm supp}^{2}.\]
Applying now the estimates of the previous section to the curves \(\Gamma_{n}\) which satisfy assumption (s4) and taking the limit \(\varepsilon_{n}\to 0\), we get the sought estimate for the curve \(\Gamma\) which concludes the proof.
## 5. Concluding proofs of Theorems 2.2 and 2.4
It remains to establish parts (b) of both the main results. Let us begin with Theorem 2.4 and prove it in the situation when \(\Omega_{+}\) is _convex_; by assumption (p3) we have \(\mu<0\). Inside \(\Omega^{a}\) we choose the trial function as in the previous proofs so that inequality (3.7) is valid for any \(V_{0}\geq 0\); recall that we derived it assuming the presence of a bias. Moreover, picking a suitable \(s^{*}\gg s_{0}\) on the right-hand side of (3.1), the last term in (3.7) can be made, as before, smaller than \(\frac{1}{4}\delta\nu\). Outside the strip \(\Omega^{a}\) we set
\[\phi(x):=\phi_{\pm}\exp\{-|\xi_{\pm}|({\rm dist}(x,\Gamma)-a)\}\quad\text{if} \;\;x\in\Omega_{\pm}^{\rm out}, \tag{5.1}\]
recalling that \(\phi_{-}=1\), which is a natural generalization of (4.1), and we employ the same mollifier \(\chi_{\rm out}\) as before, cf. Sec. 4.1. Repeating the
argument of that section, we arrive at the inequality (4.4), however, now with the function \(\phi\) given by (5.1). Let us split the outer contribution to the quadratic form into two parts referring, respectively, to \(\Omega^{\mathrm{out}}_{\pm}\), for which we have
\[Q^{(+)}_{\mathrm{out}}[\psi_{\mathrm{out}}]=\int_{\Omega^{\mathrm{ out}}_{+}}|\nabla\psi_{\mathrm{out}}(x)|^{2}\,\mathrm{d}x+\int_{\Omega^{\mathrm{ out}}_{+}}(V_{0}-\mu)|\psi_{\mathrm{out}}(x)|^{2}\,\mathrm{d}x\] \[\qquad\leq\int_{\Omega^{\mathrm{out}}_{+}}\big{\{}|\nabla\phi(x) |^{2}+(V_{0}-\mu)|\phi(x)|^{2}\big{\}}\chi_{\mathrm{out}}(x)^{2}\,\mathrm{d}x+ \frac{1}{16}\delta\nu, \tag{5.2a}\] \[Q^{(-)}_{\mathrm{out}}[\psi_{\mathrm{out}}]\leq\int_{\Omega^{ \mathrm{out}}_{-}}\big{\{}|\nabla\phi(x)|^{2}-\mu|\phi(x)|^{2}\big{\}}\chi_{ \mathrm{out}}(x)^{2}\,\mathrm{d}x+\frac{1}{16}\delta\nu \tag{5.2b}\]
in view of Lemmata 4.1 and 4.2, provided that \(r_{0}\) is chosen large enough. As in the proof of the first part of Theorem 2.2 we choose an \(\hat{s}\in[r_{0},s_{0})\) for which the parts of \(\Gamma\) with \(|s|\geq\hat{s}\) are outside \(B_{r_{0}}(O)\) and use the regions \(K_{\pm}\) defined by (4.6). By the definition (5.1) we have
\[|\nabla\phi|^{2}=\xi_{\pm}^{2}|\phi|^{2}\quad\text{for}\;\;x\in\Omega^{\mathrm{ out}}_{\pm}.\]
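This identity can be checked directly from (5.1); a brief verification, using the standard fact that the distance function to \(\Gamma\) is Lipschitz with gradient of modulus one almost everywhere:

\[\nabla\phi(x)=-|\xi_{\pm}|\,\phi(x)\,\nabla\operatorname{dist}(x,\Gamma),\qquad|\nabla\operatorname{dist}(x,\Gamma)|=1\ \text{a.e.},\]

so that \(|\nabla\phi(x)|^{2}=\xi_{\pm}^{2}|\phi(x)|^{2}\) almost everywhere in \(\Omega^{\rm out}_{\pm}\).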
Within \(\Omega_{\rm out}\cap\{K_{+}\cup K_{-}\}\) we may use the \((s,t)\) coordinates; noting that \(\phi\) is independent of \(s\) there, and that as a function of \(t\) it coincides with the eigenfunction \(\phi_{0}\) of \(h\), cf. (2.2), we get
\[\int_{\Omega^{\mathrm{out}}_{+}\cap\{K_{+}\cup K_{-}\}} \big{\{}|\nabla\phi(x)|^{2}+(V_{0}-\mu)|\phi(x)|^{2}\big{\}}\chi_{ \mathrm{out}}(x)^{2}\,\mathrm{d}x\] \[\leq|\xi_{+}|\,\phi_{+}^{2}\,\|\chi_{\mathrm{in}}\|^{2}_{L^{2}((- \infty,-\hat{s}]\cup[\hat{s},\infty))}\] (5.3a) and \[\int_{\Omega^{\mathrm{out}}_{-}\cap\{K_{+}\cup K_{-}\}} \big{\{}|\nabla\phi(x)|^{2}-\mu|\phi(x)|^{2}\big{\}}\chi_{\mathrm{ out}}(x)^{2}\,\mathrm{d}x\] \[\leq\xi_{-}\|\chi_{\mathrm{in}}\|^{2}_{L^{2}((-\infty,-\hat{s}] \cup[\hat{s},\infty))}. \tag{5.3b}\]
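The right-hand sides of (5.3) come from an explicit integration in the transverse variable; the following is a sketch which assumes \(\xi_{+}^{2}=V_{0}-\mu\) and \(\xi_{-}^{2}=-\mu\), as we read (3.5), and that \(\Omega^{\rm out}_{\pm}\) corresponds to \(\pm t>a\) in the strip coordinates:

\[\int_{a}^{\infty}\big(\xi_{+}^{2}+(V_{0}-\mu)\big)\phi_{+}^{2}\,\mathrm{e}^{-2|\xi_{+}|(t-a)}\,\mathrm{d}t=\frac{2\xi_{+}^{2}\phi_{+}^{2}}{2|\xi_{+}|}=|\xi_{+}|\,\phi_{+}^{2},\qquad\int_{-\infty}^{-a}\big(\xi_{-}^{2}-\mu\big)\,\mathrm{e}^{-2\xi_{-}(|t|-a)}\,\mathrm{d}t=\xi_{-},\]

while the remaining integration in \(s\), where the cut-off depends on \(s\) only, produces the indicated \(L^{2}\)-norms.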
So far we have not employed the convexity of \(\Omega_{+}\); we will need it from now on to estimate the integrals (5.3). As before we will first prove the second claim of Theorem 2.4 under the additional assumption (s4), using again the notation introduced in Fig. 2.
The part \(\Omega^{(-)}_{\mathrm{out}}\) consists then of a finite number of sectors \(\omega_{2j}\) which in view of the convexity assumption do not overlap mutually. Let \(\Gamma_{\pm}\) and \(\omega_{k\pm}\), \(k=1,2\), be the same as in part (a) of Theorem 2.4. By the same reasoning as in the proof of the latter, cf. (4.17), one can check that the contribution of the regions \(\omega_{k\pm}\setminus\{K_{+}\cup K_{-}\}\) to the integrals (5.3) can be made arbitrarily small by choosing \(r_{0}\) sufficiently large. Using
further the fact that \(|\chi_{\rm out}|\leq 1\) in combination with (4.20), we get
\[\int_{\Omega_{\rm out}^{(-)}\setminus\{K_{+}\cup K_{-}\}}\big{\{}| \nabla\phi(x)|^{2}-\mu|\phi(x)|^{2}\big{\}}\chi_{\rm out}(x)^{2}\,{\rm d}x \tag{5.4}\] \[\qquad\leq 2|\mu|\Big{[}\,\frac{2\hat{s}}{2\xi_{-}}+\frac{a}{2\xi_ {-}}\int_{-\hat{s}}^{\hat{s}}\kappa(s)\,{\rm d}s+\frac{1}{4\xi_{-}^{2}}\int_{- \hat{s}}^{\hat{s}}\kappa(s)\,{\rm d}s\Big{]}+\mathcal{O}({\rm e}^{-cr_{0}})\] \[\qquad=2\xi_{-}\hat{s}+a\xi_{-}\int_{-\hat{s}}^{\hat{s}}\kappa(s) \,{\rm d}s+\frac{1}{2}\int_{-\hat{s}}^{\hat{s}}\kappa(s)\,{\rm d}s+\mathcal{O }({\rm e}^{-cr_{0}})\]
for some \(c>0\), and since \(\kappa(s)=0\) for \(|s|>\hat{s}\) we can let the variable \(s\) in the above integrals run over the whole \(\mathbb{R}\). Comparing now the right-hand side of (5.4) with that of (3.7), we see that the terms containing \(\xi_{-}\) in the latter have their counterparts here with the opposite sign, hence they cancel mutually.
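The equality of the two lines in (5.4) is a direct substitution; a short check, assuming \(\xi_{-}^{2}=-\mu\) as we read (3.5):

\[2|\mu|\,\frac{2\hat{s}}{2\xi_{-}}=2\xi_{-}\hat{s},\qquad 2|\mu|\,\frac{a}{2\xi_{-}}=a\xi_{-},\qquad 2|\mu|\,\frac{1}{4\xi_{-}^{2}}=\frac{1}{2}.\]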
Next we estimate the contribution to (5.3a) coming from \(\Omega_{\rm out}^{(+)}\). We note that \(|\nabla\phi|^{2}=(-\mu+V_{0})|\phi|^{2}=|\xi_{+}|^{2}|\phi(x)|^{2}\) holds almost everywhere in \(\Omega_{+}^{\rm out}\) which means that the integral at the right-hand side of (5.2a) can be rewritten as \(2|\xi_{+}|^{2}\int_{\Omega_{+}^{\rm out}}|\phi(x)|^{2}\chi_{\rm out}(x)^{2}\, {\rm d}x\). In analogy with (4.9) we can estimate the function \(\phi\) using local extrema of the distance function, namely
\[|\phi(x)|^{2}=\phi_{+}^{2}\,\exp\{-2|\xi_{+}|(d_{x}(s_{x}^{0})-a)\} \tag{5.5}\] \[\leq\phi_{+}^{2}\Big{[}-\!\!\sum_{s_{x}^{i}\in M_{x}^{\uparrow}} \!\!\exp\{-2|\xi_{+}|(d_{x}(s_{x}^{i})\!-\!a)\}+\!\!\sum_{s_{x}^{i}\in M_{x}^{ \downarrow}}\!\!\exp\{-2|\xi_{+}|(d_{x}(s_{x}^{i})\!-\!a)\}\Big{]}.\]
As in part (a) of Theorem 2.4, we want to replace the integral of the expression at the right-hand side of (5.5) over \(\Omega_{\rm out}^{(+)}\setminus\{K_{+}\cup K_{-}\}\) by the sum of the integrals over the regions \(\omega_{1j}\) and \(\omega_{3j}\) corresponding to the partition of the curve segment with \(s\in[-\hat{s},\hat{s}]\) into circular arcs. In analogy with relation (4.13) we get
\[\int_{\Omega_{\rm out}^{(+)}\setminus\{K_{+}\cup K_{-}\}}|\phi(x) |^{2}\,{\rm d}x \tag{5.6}\] \[\leq\phi_{+}^{2}\sum_{j}\Big{\{}\int_{\omega_{1j}\cap\{\Omega_{\rm out }^{(+)}\setminus\{K_{+}\cup K_{-}\}\}}\exp\{-2|\xi_{+}|({\rm dist}(x,\Gamma_{j })-a)\}\,{\rm d}x\] \[\quad-\int_{\omega_{3j}\cap\{\Omega_{\rm out}^{(+)}\setminus\{K_ {+}\cup K_{-}\}\}}\exp\{-2|\xi_{+}|(|\kappa_{j}|^{-1}+{\rm dist}(x,O_{j})-a) \}\,{\rm d}x\Big{\}},\]
where in contrast to (4.13) the right-hand side of (5.6) does not involve integrals over \(\omega_{2j}\), because in view of the convexity assumption \(\Omega_{\rm out}^{(+)}\cap\omega_{2j}=\emptyset\) holds for any \(j\).
Following the strategy used in the proof of part (a) of Theorem 2.4, we want to replace integrals over \(\omega_{kj}\cap\{\Omega_{\rm out}^{(+)}\setminus\{K_{+}\cup K_{-}\}\}\), \(k=1,3\), with those over the extended regions \(\omega_{kj}\setminus\{K_{+}\cup K_{-}\}\), respectively. To this aim, we employ the following simple geometric result:
**Lemma 5.1**.: _Suppose that \(x\in\Omega_{-}\) does not belong to the boundaries of \(\omega_{kj},\,k=1,2,3\), for any \(j\), and let the distance function \(d_{x}(s)\) reach a minimum, which is not the global one, at a point of the curve belonging to an arc \(\Gamma_{j^{*}}\). Then \(x\in\omega_{1j^{*}}\)._
The lemma in fact says that if \(\Omega_{+}\) is convex, it cannot happen that \(x\in\omega_{2j^{*}}\), which is obviously equivalent to the following claim:
**Lemma 5.1'**.: _Let \(x\in\Omega_{-}\). For any distance function extremum, except the global minimum, the segment \(L^{i}_{x}\) connecting the points \(x\) and \(\Gamma(s^{i}_{x})\) approaches the curve from the side of \(\Omega_{+}\)._
Proof.: The point of global minimum is obviously approached from the region where \(x\) lies, that is, from \(\Omega_{-}\). The next two extrema on both sides of \(s^{0}_{x}\), provided they exist, are necessarily maxima, and in view of the assumed convexity of \(\Omega_{+}\) the segments \(L^{i}_{x}\) cannot approach \(\Gamma(s^{i}_{x})\) from the side of \(\Omega_{-}\). We denote by \(L(s)\) the segment connecting the point \(x\) with \(\Gamma(s)\). The side from which \(L(s)\) approaches the curve can change only at the points where the angle \(\beta(s)\) between the segment \(L(s)\) and \(L^{0}_{x}\) corresponding to the global minimum of \(d_{x}(\cdot)\) has, as a function of \(s\), a local maximum or minimum. Since the curve \(\Gamma\) is by assumption \(C^{1}\)-smooth, and so is \(\beta(\cdot)\), the lines connecting such points with \(x\) are tangent to \(\Omega_{+}\); however, a convex region cannot cross its own tangent, hence the extrema of the function \(\beta(\cdot)\) are global, one maximum and one minimum. The corresponding points \(s^{i}_{x}\), provided both of them exist, lie on both sides of \(s^{0}_{x}\) because a convex region can have only two tangents passing through an exterior point \(x\), and the point \(\Gamma(s^{i}_{x})\) lies between the two tangent points on the boundary of \(\Omega_{+}\). The same tangent argument shows that once \(L(s)\) switches the side from which it approaches \(\Gamma\), it can never switch back.
As before all the local extrema of \(d_{x}(\cdot)\) for \(x\in\Omega_{-}\) except the global minimum come in pairs, so in analogy with (4.13) we are able to estimate the expression \(\phi_{+}^{2}\int_{\Omega_{-}\setminus\{K_{+}\cup K_{-}\}}\exp\{-2|\xi_{+}|(d_ {x}(s^{0}_{x})-a)\}\,\mathrm{d}x\) from above by
\[\phi_{+}^{2}\sum_{j}\Big{\{}\int_{\omega_{1j}\cap\{\Omega_{-} \setminus\{K_{+}\cup K_{-}\}\}}\exp\{-2|\xi_{+}|(\mathrm{dist}(x,\Gamma_{j})-a )\}\,\mathrm{d}x \tag{5.7}\] \[-\int_{\omega_{3j}\cap\{\Omega_{-}\setminus\{K_{+}\cup K_{-}\}\}} \exp\{-2|\xi_{+}|(|\kappa_{j}|^{-1}+\mathrm{dist}(x,O_{j})-a)\}\,\mathrm{d}x \Big{\}},\]
where in view of Lemma 5.1 the first part does not include integration over \(\omega_{2j}\cap\{\Omega_{-}\setminus\{K_{+}\cup K_{-}\}\}\). Adding (5.7) to (5.6), we get
\[\int_{\Omega_{\mathrm{out}}^{(+)}\setminus\{K_{+}\cup K_{-}\}}| \phi(x)|^{2}\,\mathrm{d}x \tag{5.8}\] \[\leq\phi_{+}^{2}\sum_{j}\Big{\{}\int_{\omega_{1j}\cap\tilde{ \Omega}}\exp\{-2|\xi_{+}|(\mathrm{dist}(x,\Gamma_{j})-a)\}\,\mathrm{d}x\] \[\quad-\int_{\omega_{3j}\cap\tilde{\Omega}}\exp\{-2|\xi_{+}|(|\kappa _{j}|^{-1}+\mathrm{dist}(x,O_{j})-a)\}\,\mathrm{d}x\Big{\}},\]
where \(\tilde{\Omega}:=\Omega_{-}\cup\{\Omega_{\mathrm{out}}^{(+)}\setminus\{K_{+}\cup K_{-}\}\}\). Moreover, applying again the argument that led to (4.18), we infer that one can replace \(\omega_{1j}\cap\tilde{\Omega}\) and \(\omega_{3j}\cap\tilde{\Omega}\) in (5.8) by \(\omega_{1j}\setminus\{K_{+}\cup K_{-}\}\) and \(\omega_{3j}\setminus\{K_{+}\cup K_{-}\}\), respectively, with an error which can be made arbitrarily small by choosing \(r_{0}\) large enough. The rest of the proof of part (b) of Theorem 2.4 for a convex \(\Omega_{+}\) repeats the corresponding part of the proof of part (a); in the final step we take into account that a convex \(\Gamma\) can be approximated by convex curves of piecewise constant curvature.
To complete the proof of part (b) of Theorem 2.4, assume next that \(\Omega_{+}\) is _concave_. This case is already easy given the fact that in the first part of the proof we have not used the difference between \(|\xi_{+}|\) and \(|\xi_{-}|\), or between \(\phi_{+}\) and \(\phi_{-}\); the latter was set to one for convenience only. The role of the convexity was just to help us to distinguish the extrema of the distance function referring to the two outer parts of the trial function; if \(\Omega_{-}\) is convex, we can repeat the argument step by step interchanging the roles of \(\Omega_{-}\) and \(\Omega_{+}\) arriving thus at the sought claim.
It remains to prove part (b) of Theorem 2.2 where we have \(\mu=0\) by assumption and \(\Omega_{+}\) is again convex. Since \(V_{0}>0\), the equation \(h\phi=0\) has a resonance solution \(\phi_{0}\) which is constant for \(t\leq-a\) and decays exponentially for \(t>a\); as before we normalize it putting \(\phi_{-}=1\). We have to construct a trial function \(\psi\in H^{2}(\mathbb{R}^{2})\) which makes the quadratic form (2.3c), now containing the potential bias, negative. We use elements of the previous proofs. In particular, inside \(\Omega^{a}\) the function will be given by (3.2) and (3.1). Outside \(\Omega^{a}\) the trial function in \(\Omega_{-}\) will be the same as in the proof of part (a) of Theorem 2.2, cf. (3.9), while in \(\Omega_{+}\) we choose it as in the proof of part (b) of Theorem 2.4 discussed above, putting there \(\mu=0\), in other words, as (5.1) in which in view of (3.5) we set \(\xi_{+}=-\sqrt{V_{0}}\). Repeating then the estimates used to prove part (a) of Theorem 2.2 in \(\Omega_{-}\) and part (a) of Theorem 2.4 in \(\Omega_{+}\), we obtain
\[Q[\psi]=-\frac{1}{8}\delta\nu-\int_{\mathbb{R}}\kappa(s)\,\mathrm{d}s+o(\psi), \tag{5.9}\]
where the error term can be made arbitrarily small by choosing large \(r_{0}\) and \(s^{*}\) in (3.1). In view of the assumed convexity of \(\Omega_{+}\) we have
\(\int_{\mathbb{R}}\kappa(s)\,\mathrm{d}s>0\), hence choosing the parameters properly we can make the form negative; this concludes the proof of part (b) of Theorem 2.2.
**Remark 5.2**.: As we have noted in the introduction, the 'two-sided' validity of part (b) of Theorem 2.4 does not extend to the zero-energy resonance case. The above proof indicates the source of this difference. While for \(\mu<0\) we can use the trial function from the proof of part (b) of Theorem 2.2 and simply switch the roles of \(\Omega_{+}\) and \(\Omega_{-}\), a similar interchange does not work if \(\mu=0\) because it leads to the sign change of the second term on the right-hand side of (5.9) and we are obviously not free to choose \(\delta\nu\) to compensate this positive number.
### Data availability statement
Data are available in the article.
### Conflict of interest
The authors have no conflict of interest.
### Acknowledgments
The work of P.E. was supported by the Czech Science Foundation within the project 21-07129S. S.V. was funded by the Deutsche Forschungsgemeinschaft - Project-ID 258734477 - SFB-1173.
2305.04894 | The approximation property for locally compact quantum groups | Matthew Daws, Jacek Krajczok, Christian Voigt | 2023-05-08T17:35:29Z | http://arxiv.org/abs/2305.04894v2

We study the Haagerup--Kraus approximation property for locally compact quantum groups, generalising and unifying previous work by Kraus--Ruan and Crann. Along the way we discuss how multipliers of quantum groups interact with the $\mathrm{C}^*$-algebraic theory of locally compact quantum groups. Several inheritance properties of the approximation property are established in this setting, including passage to quantum subgroups, free products of discrete quantum groups, and duals of double crossed products. We also discuss a relation to the weak$^*$ operator approximation property. For discrete quantum groups, we introduce a central variant of the approximation property, and relate this to a version of the approximation property for rigid $\mathrm{C}^*$-tensor categories, building on work of Arano--De Laat--Wahl.

# The approximation property for locally compact quantum groups
###### Abstract.
We study the Haagerup-Kraus approximation property for locally compact quantum groups, generalising and unifying previous work by Kraus-Ruan and Crann. We establish some results about how multipliers of quantum groups interact with the C*-algebraic theory of locally compact quantum groups. Several inheritance properties of the approximation property are established in this setting, including passage to quantum subgroups, free products of discrete quantum groups, and duals of double crossed products. We also discuss a relation to the weak\({}^{\star}\) operator approximation property. For discrete quantum groups, we introduce a central variant of the approximation property, and relate this to a version of the approximation property for rigid C\({}^{\star}\)-tensor categories, building on work of Arano-De Laat-Wahl.
Key words and phrases: Locally compact quantum groups, approximation property.

2020 Mathematics Subject Classification: Primary 46L67, Secondary 22D55, 43A30.

This work was supported by EPSRC grants EP/T03064X/1 and EP/T030992/1. For the purpose of open access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising. No data were created or analysed in this study.
a very loose form of amenability. It is known that AP passes from locally compact groups to their lattices and vice versa, and that it has better permanence properties with respect to standard constructions like extensions and free products, in comparison to weak amenability.
It was an open problem for a long time to exhibit examples of exact groups without AP. In the remarkable paper [47], Lafforgue and de la Salle proved that \(\mathrm{SL}(3,\mathbb{R})\) fails to have AP, thus confirming a conjecture in [31]. Building on this result, it was shown later by Haagerup, Knudby and De Laat that a connected Lie group has AP if and only if all simple factors in its Levi decomposition have real rank at most one [28], [29], [30].
AP has a wide range of applications. As shown by Haagerup-Kraus [31], in the case of discrete groups there is a connection between AP and the slice map property (or equivalently, the operator approximation property) of the associated crossed products. It was recently proven in the full generality of locally compact groups that AP implies exactness [62] (see also [15]), which makes it relevant to a number of problems in operator algebras. Let us also mention that AP was shown to be equivalent to a non-commutative version of Fejer theorem [15], and used to prove results concerning convolution operators on \(\mathrm{L}^{p}(G)\)[12, 21].
Amenability, weak amenability and the Haagerup property have also been studied extensively in the broader setting of locally compact quantum groups, see [9] for a survey. An interesting new feature in the quantum setting is the interplay between discrete quantum groups, their Drinfeld doubles, and the associated \(\mathrm{C}^{*}\)-tensor categories [22]. In fact, the central versions of amenability, the Haagerup property, weak amenability and central property (T) for discrete quantum groups have been recast at the level of \(\mathrm{C}^{*}\)-tensor categories [56], thus building a natural bridge to the study of subfactors.
In the present paper we undertake a systematic study of the approximation property for locally compact quantum groups. Kraus and Ruan introduced a version of the approximation property for Kac algebras in [41], requiring the existence of a net in the Fourier algebra \(\mathrm{A}(\mathbb{G})\) such that the associated net of completely bounded operators on \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) converges to the identity in the stable point weak\({}^{*}\)-topology of \(\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\). Crann studied this property for general locally compact quantum groups, and showed for example that in the presence of this property, amenability is equivalent to coamenability of the dual quantum group [14, Corollary 7.4].
Our starting point is the original work by Haagerup and Kraus. We say that a locally compact quantum group has AP if it admits a net of elements in the Fourier algebra \(\mathrm{A}(\mathbb{G})\) which converges weak\({}^{*}\) to \(\mathbb{1}\) in the space of left CB multipliers \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\). We show that this definition is in fact equivalent to the definition of AP used in [41], [14], thereby verifying a conjecture in [41]. Along the way, we obtain a useful alternative description of the weak\({}^{*}\)-topology on the space of left CB multipliers. We discuss carefully that working with left or right multipliers does not change the theory, and that passing from a quantum group to its opposite or commutant preserves AP. We also show that if a quantum group has the AP exhibited by a net which is uniformly bounded in the norm of \(\mathrm{A}(\mathbb{G})\) (resp. \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\)), then it is amenable (resp. weakly amenable).
We then derive a number of permanence properties of AP in analogy to the classical setting. In particular, we show that AP passes to closed quantum subgroups of locally compact quantum groups, and to duals of double crossed products. This includes the passage to direct products of quantum groups as a special case. In the setting of discrete quantum groups we verify that AP is inherited by free products and direct limits of directed systems of discrete quantum groups with injective connecting maps. We also introduce a central version of the approximation property for discrete quantum groups and show that it is related to a natural notion of AP for rigid \(\mathrm{C}^{*}\)-tensor categories, building on work of Arano-De Laat-Wahl [2], [3].
Let us now briefly describe more of our results and explain how the paper is organised. In Section 2 we collect some background material on locally compact quantum groups and fix our notation. In Section 3 we review several characterisations of the space \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) of left cb-multipliers of the Fourier algebra \(\mathrm{A}(\mathbb{G})\) and its natural predual \(Q^{l}(\mathrm{A}(\mathbb{G}))\). By definition, \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) is a (in general not closed) subalgebra of \(\mathrm{L}^{\infty}(\mathbb{G})\), but also \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) is isomorphic to \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\), the space of normal left module maps on \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\). Any such map in \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\) restricts
to \(\mathrm{C}_{0}(\widehat{\mathbb{G}})\), and we provide a characterisation of the maps on \(\mathrm{C}_{0}(\widehat{\mathbb{G}})\) which so arise. This leads to a description of \(Q^{l}(\mathrm{A}(\mathbb{G}))\) as quotient of the projective tensor product \(\mathrm{C}_{0}(\widehat{\mathbb{G}})\widehat{\otimes}\,\mathrm{L}^{1}(\widehat {\mathbb{G}})\) which we were unable to locate in the literature even for classical groups.
In Section 4 we define the Haagerup-Kraus approximation property for locally compact quantum groups and verify that it passes to opposite and commutant quantum groups. We compare our definition to the version of AP given by Kraus and Ruan, showing that they are equivalent. We show that \(\mathrm{M}^{l}_{\mathrm{c}b}(\mathrm{A}(\mathbb{G}))\) admits an involution linked to the antipode of \(\mathbb{G}\), and to the fact that elements of \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\mathrm{C}\mathrm{B}^{\sigma}(\mathrm{L}^ {\infty}(\widehat{\mathbb{G}}))\) act boundedly on the Hilbert space \(\mathrm{L}^{2}(\mathbb{G})\). We finish the section by showing that AP is independent of working with left or right multipliers.
In Section 5 we discuss the relation of AP with weak amenability and coamenability.
Section 6 is devoted to the special case of discrete quantum groups. When \(\mathbb{T}\) is discrete, we have the notion of a finitely-supported function leading to the algebra \(\mathrm{c}_{00}(\mathbb{T})\). It suffices to work with \(\mathrm{c}_{00}(\mathbb{T})\) when considering AP, and we show further that the approximating net can be chosen to satisfy other properties. We introduce the central approximation property for discrete quantum groups and prove that central AP is equivalent to AP in the unimodular case. Building on the work of Kraus-Ruan [41] and Crann [14], we show that if a locally compact quantum group \(\mathbb{G}\) has AP then the von Neumann algebra \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) has \(\mathrm{W}^{*}\mathrm{OAP}.\) We study the relation between AP of \(\mathbb{T}\) and (strong) OAP of \(\mathrm{C}(\widehat{\mathbb{T}})\) or \(\mathrm{W}^{*}\mathrm{OAP}\) of \(\mathrm{L}^{\infty}(\widehat{\mathbb{T}})\). Finally, we introduce strengthenings of these concepts which take into account also the algebra \(\ell^{\infty}(\mathbb{T})\), and show that these are equivalent to AP even in the non-unimodular case.
In Section 7 we establish a number of permanence properties. We show that the AP is inherited by arbitrary closed quantum subgroups and by the duals of double crossed products. In particular, the direct product of two quantum groups with AP also has AP. For discrete quantum groups we investigate the passage to free products and direct unions, again showing that AP is preserved.
Finally, in Section 8 we define the approximation property for rigid \(\mathrm{C}^{*}\)-tensor categories and verify that the categorical AP is equivalent to the central AP for discrete quantum groups. This implies in particular that the central AP is invariant under monoidal equivalence. We also relate these properties to the AP of the Drinfeld double.
We conclude with some general remarks on notation. If \(\mathcal{A}\) is a \(\mathrm{C}^{*}\)-algebra we write \(\mathrm{M}(\mathcal{A})\) for its multiplier algebra. For a map \(\Phi\colon\mathcal{A}\to\mathcal{A}\), the symbol \(\Phi^{\dagger}\) stands for the map \(\mathcal{A}\ni a\mapsto\Phi(a^{*})^{*}\in\mathcal{A}\). If \(\omega:\mathcal{A}\to\mathbb{C}\) is a linear functional we write \(\overline{\omega}\) for the linear functional given by \(\overline{\omega}(x)=\overline{\omega(x^{*})}\).
We write \(\odot\) for the algebraic tensor product, \(\otimes\) for the tensor product of Hilbert spaces or the minimal tensor product of \(\mathrm{C}^{*}\)-algebras, \(\check{\otimes}\) for the injective tensor product of operator spaces and \(\bar{\otimes}\) for the spatial tensor product of von Neumann algebras. We denote by \(\chi\) the flip map for tensor products of algebras, and use the symbol \(\Sigma\) for the flip map of Hilbert spaces.
We freely use the basic theory of operator spaces, following [23], see also [53, 54] for example. When \(X,Y\) are operator spaces, \(\mathrm{CB}(X,Y)\) denotes the space of completely bounded (CB) linear maps \(X\to Y\). For dual operator spaces \(X,Y\) we write \(\mathrm{CB}^{\sigma}(X,Y)\) for the subset of \(\mathrm{CB}(X,Y)\) consisting of all maps which are weak\({}^{*}\)-weak\({}^{*}\)-continuous. In the case \(X=Y\) we simply write \(\mathrm{CB}(X)=\mathrm{CB}(X,X)\) and \(\mathrm{CB}^{\sigma}(X)=\mathrm{CB}^{\sigma}(X,X)\). If \(\mathrm{M}\) is a von Neumann algebra, then \(\mathrm{CB}^{\sigma}(\mathrm{M})\) can be equipped with the stable point-weak\({}^{*}\)-topology: \(T_{i}\xrightarrow[i\in I]{}T\) with respect to this topology if and only if \((T_{i}\otimes\mathrm{id})x\xrightarrow[i\in I]{}(T\otimes\mathrm{id})x\) in the weak\({}^{*}\)-topology, for any separable Hilbert space \(\mathsf{H}\) and \(x\in\mathrm{M}\,\bar{\otimes}\,\mathrm{B}(\mathsf{H})\), see [31]. Whenever we have a left N-module structure on an operator space \(X\), the space of left N-module CB maps is denoted by \({}_{\mathrm{N}}\mathrm{CB}(X)\). Similarly, if \(X\) is a right module or a bimodule, the corresponding spaces are denoted by \(\mathrm{CB}_{\mathrm{M}}(X)\) and \({}_{\mathrm{N}}\mathrm{CB}_{\mathrm{M}}(X)\), respectively. We denote the operator space projective tensor product by \(\widehat{\otimes}\), and recall that \((X\widehat{\otimes}Y)^{*}=\mathrm{CB}(X,Y^{*})\) completely isometrically. The canonical pairing between an operator space \(X\) and its dual \(X^{*}\) is denoted by \(\langle\omega,x\rangle_{X^{*},X}\) for \(\omega\in X^{*},x\in X\), or simply \(\langle\omega,x\rangle\) if there is no risk of confusion.
For a n.s.f. weight \(\theta\) on a von Neumann algebra \(\mathrm{M}\), we denote the GNS Hilbert space by \(\mathsf{H}_{\theta}\), and we use the notation \(\mathfrak{N}_{\theta}=\{x\in\mathrm{M}\mid\theta(x^{*}x)<+\infty\}\). We write \(\Lambda_{\theta}\colon\mathfrak{N}_{\theta}\to\mathsf{H}_{\theta}\) for the GNS map. Typically we then represent \(\mathrm{M}\) on \(\mathsf{H}_{\theta}\) and identify \(\mathrm{M}\subseteq\mathrm{B}(\mathsf{H}_{\theta})\).
## 2. Preliminaries
Throughout the paper we will work in the setting of locally compact quantum groups introduced by Kustermans and Vaes [45]. In this section we recall some fundamental constructions and results of the theory, more information can be found in [43, 46, 68]. For background on operator algebras and operator spaces we refer to [10, 23, 63].
By definition, a _locally compact quantum group_\(\mathbb{G}\) is given by a von Neumann algebra \(\mathrm{L}^{\infty}(\mathbb{G})\) together with a normal unital \(\star\)-homomorphism \(\Delta_{\mathbb{G}}\colon\,\mathrm{L}^{\infty}(\mathbb{G})\to\mathrm{L}^{ \infty}(\mathbb{G})\bar{\otimes}\,\mathrm{L}^{\infty}(\mathbb{G})\) called _comultiplication_, satisfying \((\Delta_{\mathbb{G}}\otimes\mathrm{id})\Delta_{\mathbb{G}}=(\mathrm{id} \otimes\Delta_{\mathbb{G}})\Delta_{\mathbb{G}}\), and _left_ resp. _right Haar integrals_\(\varphi\) and \(\psi\). These are normal, semifinite, faithful (n.s.f.) weights on \(\mathrm{L}^{\infty}(\mathbb{G})\) satisfying certain invariance conditions with respect to \(\Delta_{\mathbb{G}}\). In general, the von Neumann algebra \(\mathrm{L}^{\infty}(\mathbb{G})\) is non-commutative and will not be an algebra of function on a measure space. Following this notational convention, the predual of \(\mathrm{L}^{\infty}(\mathbb{G})\) is denoted by \(\mathrm{L}^{1}(\mathbb{G})\) and the GNS Hilbert space of \(\varphi\) is denoted by \(\mathrm{L}^{2}(\mathbb{G})\).
Every locally compact group \(G\) can be seen as a locally compact quantum group \(\mathbb{G}\) by taking \(\mathrm{L}^{\infty}(\mathbb{G})=\mathrm{L}^{\infty}(G)\), the algebra of classes of measurable, bounded functions on \(G\), and letting \(\Delta_{\mathbb{G}}\) be the pullback of multiplication in \(G\). The weights \(\varphi,\psi\) are given by integration with respect to left (right) Haar measure in this case.
Out of the axioms, one can construct a number of additional objects associated to a locally compact quantum group \(\mathbb{G}\). First of all, there is the _Kac-Takesaki operator_\(\mathrm{W}^{\mathbb{G}}\in\mathrm{L}^{\infty}(\mathbb{G})\bar{\otimes}\, \mathrm{L}^{\infty}(\widehat{\mathbb{G}})\), which is a unitary operator on \(\mathrm{L}^{2}(\mathbb{G})\otimes\mathrm{L}^{2}(\mathbb{G})\) defined via
\[((\omega\otimes\mathrm{id})\mathrm{W}^{\mathbb{G}*})\Lambda_{\varphi}(x)= \Lambda_{\varphi}((\omega\otimes\mathrm{id})\Delta_{\mathbb{G}}(x))\qquad( \omega\in\mathrm{L}^{1}(\mathbb{G}),x\in\mathfrak{N}_{\varphi}).\]
It implements the comultiplication via \(\Delta_{\mathbb{G}}(x)=\mathrm{W}^{\mathbb{G}*}(\mathbb{1}\otimes x)\mathrm{W }^{\mathbb{G}}\) for \(x\in\mathrm{L}^{\infty}(\mathbb{G})\). Tomita-Takesaki theory yields two groups of _modular automorphisms_\((\sigma_{t}^{\varphi})_{t\in\mathbb{R}},(\sigma_{t}^{\psi})_{t\in\mathbb{R}}\) and _modular conjugations_\(J_{\varphi},J_{\psi}\) associated with the weights \(\varphi,\psi\), respectively [63]. The left and right Haar integrals are linked by the _modular element_\(\delta_{\mathbb{G}}\), which is a strictly positive, self-adjoint operator affiliated with \(\mathrm{L}^{\infty}(\mathbb{G})\).
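For orientation, let us sketch how this looks in the classical case \(\mathbb{G}=G\) for a locally compact group \(G\) (a standard computation under the identification \(\mathrm{L}^{2}(\mathbb{G})=\mathrm{L}^{2}(G)\); it is not used in the sequel):

\[(\mathrm{W}^{G}\xi)(s,t)=\xi(s,s^{-1}t)\qquad(\xi\in\mathrm{L}^{2}(G\times G)),\]

and for \(f\in\mathrm{L}^{\infty}(G)\) one computes

\[\big(\mathrm{W}^{G*}(\mathbb{1}\otimes f)\mathrm{W}^{G}\xi\big)(s,t)=f(st)\,\xi(s,t),\]

which recovers the comultiplication \(\Delta_{G}(f)(s,t)=f(st)\).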
In the theory of quantum groups, the role of the inverse operation is played by two maps: the _antipode_\(S_{\mathbb{G}}\) and the _unitary antipode_\(R_{\mathbb{G}}\). The antipode is in general an unbounded (densely defined) map on \(\mathrm{L}^{\infty}(\mathbb{G})\) such that
\[(\mathrm{id}\otimes\omega)\mathrm{W}^{\mathbb{G}}\in\mathrm{Dom}(S_{\mathbb{G }})\ \text{ and }S_{\mathbb{G}}((\mathrm{id}\otimes\omega)\mathrm{W}^{\mathbb{G}})=( \mathrm{id}\otimes\omega)\mathrm{W}^{\mathbb{G}*}\qquad(\omega\in\mathrm{L}^{1 }(\widehat{\mathbb{G}})).\]
The unitary antipode, on the other hand, is a bounded, normal, \(\star\)-preserving, antimultiplicative map on \(\mathrm{L}^{\infty}(\mathbb{G})\) satisfying \(\Delta_{\mathbb{G}}R_{\mathbb{G}}=\chi(R_{\mathbb{G}}\otimes R_{\mathbb{G}}) \Delta_{\mathbb{G}}\). These maps are linked via \(S_{\mathbb{G}}=R_{\mathbb{G}}\tau_{-i/2}^{\mathbb{G}}=\tau_{-i/2}^{\mathbb{G}} R_{\mathbb{G}}\), where \((\tau_{t}^{\mathbb{G}})_{t\in\mathbb{R}}\) is the group of _scaling automorphisms_ of \(\mathrm{L}^{\infty}(\mathbb{G})\). The left and right Haar integrals are unique up to a scalar, and we shall fix normalisations such that \(\varphi=\psi\circ R_{\mathbb{G}}\).
With any locally compact quantum group \(\mathbb{G}\) one can associate the _dual_ locally compact quantum group \(\widehat{\mathbb{G}}\) in such a way that the correspondence between \(\mathbb{G}\) and \(\widehat{\mathbb{G}}\) extends Pontryagin duality. Furthermore, the Hilbert spaces \(\mathrm{L}^{2}(\mathbb{G}),\mathrm{L}^{2}(\widehat{\mathbb{G}})\) can be identified in a canonical way, and the Kac-Takesaki operators of \(\mathbb{G}\) and \(\widehat{\mathbb{G}}\) are linked via \(\mathrm{W}^{\widehat{\mathbb{G}}}=\chi(\mathrm{W}^{\mathbb{G}*})\). If there is no risk of confusion we will simply write \(\Delta\) for \(\Delta_{\mathbb{G}}\), \(\widehat{\Delta}\) for \(\Delta_{\widehat{\mathbb{G}}}\), and similarly \(R,S,\widehat{R},\widehat{S}\) for the (unitary) antipode of \(\mathbb{G}\) or \(\widehat{\mathbb{G}}\). Using the canonical identification of \(\mathrm{L}^{2}(\mathbb{G})\) and \(\mathrm{L}^{2}(\widehat{\mathbb{G}})\) one obtains a number of useful formulae. Let us mention in particular that the right regular representation \(\mathrm{V}^{\mathbb{G}}\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}})^{\prime} \bar{\otimes}\,\mathrm{L}^{\infty}(\mathbb{G})\) is given by \(\mathrm{V}^{\mathbb{G}}=(J_{\widehat{\varphi}}\otimes J_{\widehat{\varphi}}) \chi(\mathrm{W}^{\mathbb{G}})^{*}(J_{\widehat{\varphi}}\otimes J_{\widehat{ \varphi}})\).
We will also work with the weak\({}^{*}\)-dense \(\mathrm{C}^{*}\)-subalgebra \(\mathrm{C}_{0}(\mathbb{G})\subseteq\mathrm{L}^{\infty}(\mathbb{G})\). It is defined as the norm-closure of the space \(\{(\mathrm{id}\otimes\omega)\mathrm{W}^{\mathbb{G}}\,|\,\omega\in\mathrm{L}^{1 }(\widehat{\mathbb{G}})\}\). After restriction, the comultiplication becomes a non-degenerate \(\star\)-homomorphism \(\mathrm{C}_{0}(\mathbb{G})\to\mathrm{M}(\mathrm{C}_{0}(\mathbb{G})\otimes\mathrm{C }_{0}(\mathbb{G}))\). Similarly one defines \(\mathrm{C}_{0}(\widehat{\mathbb{G}})\), and then \(\mathrm{W}^{\mathbb{G}}\in\mathrm{M}(\mathrm{C}_{0}(\mathbb{G})\otimes\mathrm{C }_{0}(\widehat{\mathbb{G}}))\). Using the comultiplication of \(\mathrm{L}^{\infty}(\mathbb{G})\), we define a Banach algebra structure on \(\mathrm{L}^{1}(\mathbb{G})\) via \(\omega\star\nu=(\omega\otimes\nu)\Delta_{\mathbb{G}}\) for \(\omega,\nu\in\mathrm{L}^{1}(\mathbb{G})\). As \(\mathrm{L}^{\infty}(\mathbb{G})\) is the dual of \(\mathrm{L}^{1}(\mathbb{G})\), we have a canonical \(\mathrm{L}^{1}(\mathbb{G})\)-bimodule structure on \(\mathrm{L}^{\infty}(\mathbb{G})\), which is given by \(\omega\star x=(\mathrm{id}\otimes\omega)\Delta_{\mathbb{G}}(x)\) and \(x\star\omega=(\omega\otimes\mathrm{id})\Delta_{\mathbb{G}}(x)\). Treating \(\mathrm{L}^{1}(\mathbb{G})\) as the predual of the von Neumann algebra \(\mathrm{L}^{\infty}(\mathbb{G})\) gives, as usual, an \(\mathrm{L}^{\infty}(\mathbb{G})\)-bimodule structure on \(\mathrm{L}^{1}(\mathbb{G})\) defined via \(x\omega=\omega(\cdot\,x),\omega x=\omega(x\,\cdot)\) for \(x\in\mathrm{L}^{\infty}(\mathbb{G}),\omega\in\mathrm{L}^{1}(\mathbb{G})\).
Let us introduce the map \(\lambda_{\mathbb{G}}\colon\operatorname{L}^{1}(\mathbb{G})\to\operatorname{C}_{0}(\widehat{\mathbb{G}})\) by \(\lambda_{\mathbb{G}}(\omega)=(\omega\otimes\operatorname{id})\mathrm{W}^{\mathbb{G}}\), and similarly for \(\widehat{\mathbb{G}}\). Using this we define the _Fourier algebra of \(\mathbb{G}\)_ as \(\operatorname{A}(\mathbb{G})=\lambda_{\widehat{\mathbb{G}}}(\operatorname{L}^{1}(\widehat{\mathbb{G}}))\). One can check that \(\lambda_{\widehat{\mathbb{G}}}\) is multiplicative, so that \(\operatorname{A}(\mathbb{G})\) is a dense subalgebra of \(\operatorname{C}_{0}(\mathbb{G})\). As \(\lambda_{\widehat{\mathbb{G}}}\) is also injective, we can define an operator space structure on \(\operatorname{A}(\mathbb{G})\) by imposing the condition that \(\lambda_{\widehat{\mathbb{G}}}\colon\operatorname{L}^{1}(\widehat{\mathbb{G}})\to\operatorname{A}(\mathbb{G})\) is completely isometric.
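Again in the classical case \(\mathbb{G}=G\) these maps reduce to familiar objects; a sketch (with \(\lambda_{s}\) denoting the left translation operators; the precise formulas depend on the convention for \(\mathrm{W}^{\mathbb{G}}\)):

\[\lambda_{G}(f)=\int_{G}f(s)\,\lambda_{s}\,\mathrm{d}s\in\mathrm{C}^{*}_{r}(G)=\mathrm{C}_{0}(\widehat{G})\qquad(f\in\mathrm{L}^{1}(G)),\]

while \(\lambda_{\widehat{G}}\) sends \(\omega\in\mathrm{L}^{1}(\widehat{G})=\mathrm{VN}(G)_{*}\) to the coefficient function \(s\mapsto\omega(\lambda_{s^{-1}})\); in particular \(\mathrm{A}(\mathbb{G})\) is Eymard's Fourier algebra \(\mathrm{A}(G)\) in this case.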
In the text, we will use certain subspaces of \(\operatorname{L}^{1}(\mathbb{G})\), which consist of functionals having nice additional properties. Firstly, let us introduce
\[\begin{split}\operatorname{L}^{1}_{\sharp}(\mathbb{G})& =\{\omega\in\operatorname{L}^{1}(\mathbb{G})\,|\,\exists_{\theta \in\operatorname{L}^{1}(\mathbb{G})}\,\lambda_{\mathbb{G}}(\omega)^{*}=\lambda _{\mathbb{G}}(\theta)\}\\ &=\{\omega\in\operatorname{L}^{1}(\mathbb{G})\,|\,\exists_{\theta \in\operatorname{L}^{1}(\mathbb{G})}\,\forall_{x\in\operatorname{Dom}(S_{ \mathbb{G}})}\,\overline{\omega}(S_{\mathbb{G}}(x))=\theta(x)\}.\end{split} \tag{2.1}\]
For a given \(\omega\in\operatorname{L}^{1}_{\sharp}(\mathbb{G})\), the functional \(\theta\) is characterised uniquely by any of the properties in (2.1), hence we can write \(\theta=\omega^{\sharp}\). The mapping \(\omega\mapsto\omega^{\sharp}\), and the restriction of the multiplication from \(\operatorname{L}^{1}(\mathbb{G})\), turn \(\operatorname{L}^{1}_{\sharp}(\mathbb{G})\) into a \(\star\)-algebra. The second subspace we will use is
\[\mathscr{J}=\{\omega\in\operatorname{L}^{1}(\mathbb{G})\,|\,\exists_{M>0}\, \forall_{x\in\mathfrak{N}_{\varphi}}\,|\omega(x^{*})|\leq M\|\Lambda_{ \varphi}(x)\|\}. \tag{2.2}\]
This subspace appears in the construction of the left Haar integral \(\widehat{\varphi}\) for \(\widehat{\mathbb{G}}\). Indeed, for \(\omega\in\mathscr{J}\) we have
\[\lambda_{\mathbb{G}}(\omega)\in\mathfrak{N}_{\widehat{\varphi}}\quad\text{ and }\quad\forall_{x\in\mathfrak{N}_{\varphi}}\,\langle\Lambda_{\varphi}(x)\,|\, \Lambda_{\widehat{\varphi}}(\lambda_{\mathbb{G}}(\omega))\rangle=\omega(x^{*}).\]
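Classically this reduces to the familiar Plancherel construction; a sketch for \(\mathbb{G}=G\), where we write \(\omega_{f}(x)=\int_{G}f(t)x(t)\,\mathrm{d}t\): every \(f\in\mathrm{L}^{1}(G)\cap\mathrm{L}^{2}(G)\) gives \(\omega_{f}\in\mathscr{J}\), since by the Cauchy-Schwarz inequality

\[|\omega_{f}(x^{*})|=\Big|\int_{G}f(t)\overline{x(t)}\,\mathrm{d}t\Big|\leq\|f\|_{2}\,\|\Lambda_{\varphi}(x)\|\qquad(x\in\mathfrak{N}_{\varphi}),\]

and \(\Lambda_{\widehat{\varphi}}(\lambda_{G}(\omega_{f}))\) is then (up to the sesquilinearity convention for the inner product) the vector \(f\in\mathrm{L}^{2}(G)\), so that \(\widehat{\varphi}(\lambda_{G}(f)^{*}\lambda_{G}(f))=\|f\|_{2}^{2}\).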
In a couple of places we will need the following result, which says that there are "a lot" of functionals with desirable properties.
**Lemma 2.1**.: _The subspace_
\[\mathscr{J}_{0}=\{\omega\in\mathscr{J}\cap\operatorname{L}^{1}_{ \sharp}(\mathbb{G})\,|\,\overline{\omega},\overline{\omega}^{\sharp}\in \mathscr{J}\cap\operatorname{L}^{1}_{\sharp}(\mathbb{G})\text{ and }f\colon\mathbb{R} \ni t\mapsto(\omega\delta^{it}_{\mathbb{G}})\circ\tau^{\mathbb{G}}_{t}\in \operatorname{L}^{1}(\mathbb{G})\] \[\text{extends to an entire function with }\forall_{z\in\mathbb{C}}f(z)\in\mathscr{J}\cap \operatorname{L}^{1}_{\sharp}(\mathbb{G})\}\]
_is dense in \(\operatorname{L}^{1}(\mathbb{G})\), and \(\lambda_{\mathbb{G}}(\mathscr{J}_{0})\) forms a \(\sigma\text{-}\text{\sc{sot}}^{*}\times\|\cdot\|\) core for \(\Lambda_{\widehat{\varphi}}\)._
Proof.: Our approach is standard, compare for example [38, Lemma 14.5] for a similar result. Therefore we only give a sketch of the argument.
According to [46, Proposition 2.6], the space \(\mathscr{J}^{\sharp}=\{\omega\in\mathscr{J}\cap\operatorname{L}^{1}_{\sharp} (\mathbb{G})\,|\,\omega^{\sharp}\in\mathscr{J}\}\) is dense in \(\operatorname{L}^{1}(\mathbb{G})\) and \(\lambda_{\mathbb{G}}(\mathscr{J}^{\sharp})\) is a \(\sigma\text{-}\text{\sc{sot}}^{*}\times\|\cdot\|\) core of \(\Lambda_{\widehat{\varphi}}\). Let us introduce three mollifier operations: for \(n\in\mathbb{N}\) let
\[M^{\varphi}_{n}\colon\operatorname{L}^{1}(\mathbb{G})\ni\omega \mapsto\sqrt{\frac{n}{\pi}}\int_{\mathbb{R}}e^{-nt^{2}}\omega \circ\sigma^{\varphi}_{t}\,\mathrm{d}t\in\operatorname{L}^{1}(\mathbb{G}),\] \[M^{\tau}_{n}\colon\operatorname{L}^{1}(\mathbb{G})\ni\omega \mapsto\sqrt{\frac{n}{\pi}}\int_{\mathbb{R}}e^{-ns^{2}}\omega \circ\tau^{\mathbb{G}}_{s}\,\mathrm{d}s\in\operatorname{L}^{1}(\mathbb{G}),\] \[M^{\delta}_{n}\colon\operatorname{L}^{1}(\mathbb{G})\ni\omega \mapsto\sqrt{\frac{n}{\pi}}\int_{\mathbb{R}}e^{-np^{2}}\omega \delta^{ip}_{\mathbb{G}}\,\mathrm{d}p\in\operatorname{L}^{1}(\mathbb{G}).\]
Next, let \(\omega_{n}=M^{\tau}_{n}\circ M^{\varphi}_{n}\circ M^{\delta}_{n}(\omega)\) and set \(\mathscr{J}_{00}=\operatorname{span}\{\omega_{n}\,|\,n\in\mathbb{N},\omega\in \mathscr{J}^{\sharp}\}\). It suffices to show that \(\mathscr{J}_{00}\) is dense in \(\operatorname{L}^{1}(\mathbb{G})\), that \(\mathscr{J}_{00}\subseteq\mathscr{J}_{0}\), and that \(\lambda_{\mathbb{G}}(\mathscr{J}_{00})\) forms a \(\sigma\text{-}\text{\sc{sot}}^{*}\times\|\cdot\|\) core for \(\Lambda_{\widehat{\varphi}}\).
Choose \(n\in\mathbb{N},\omega\in\mathscr{J}^{\sharp},x\in\mathfrak{N}_{\varphi},y\in \operatorname{Dom}(S_{\mathbb{G}})\). It is elementary to check that \(\omega_{n}\in\mathscr{J}\cap\mathrm{L}^{1}_{\sharp}(\mathbb{G})\) and \(\mathbb{R}\ni t\mapsto(\omega_{n}\delta^{it}_{\mathbb{G}})\circ\tau^{\mathbb{G }}_{t}\in\mathrm{L}^{1}(\mathbb{G})\) extends to an entire function with the desired property. Since
\[|\overline{\omega_{n}}(x^{*})|=|\omega_{n}(x)|=(\tfrac{n}{\pi})^{3/2} \big{|}\int_{\mathbb{R}^{3}}e^{-n(t^{2}+s^{2}+p^{2})}\omega(\delta^{ip}_{ \mathbb{G}}\sigma^{\varphi}_{t}\circ\tau^{\mathbb{G}}_{s}(x))\,\mathrm{d}t\, \mathrm{d}s\,\mathrm{d}p\big{|}\] \[=(\tfrac{n}{\pi})^{3/2}\big{|}\omega\big{(}\int_{\mathbb{R}^{3}}e ^{-n(t^{2}+s^{2}+p^{2})}\delta^{ip}_{\mathbb{G}}\sigma^{\varphi}_{t}\circ\tau^ {\mathbb{G}}_{s}(x)\,\mathrm{d}t\,\mathrm{d}s\,\mathrm{d}p\big{)}\big{|}\] \[\leq(\tfrac{n}{\pi})^{3/2}\big{|}\Lambda_{\widehat{\varphi}}( \lambda_{\mathbb{G}}(\omega))\big{|}\big{|}\Lambda_{\varphi}\big{(}\int_{ \mathbb{R}^{3}}e^{-n(t^{2}+s^{2}+p^{2})}\sigma^{\varphi}_{t}\circ\tau^{ \mathbb{G}}_{s}(x^{*})\delta^{-ip}_{\mathbb{G}}\,\mathrm{d}t\,\mathrm{d}s\, \mathrm{d}p\big{)}\big{|}\] \[=(\tfrac{n}{\pi})^{3/2}\big{|}\Lambda_{\widehat{\varphi}}( \lambda_{\mathbb{G}}(\omega))\big{|}\big{|}\big{|}J_{\varphi}\nabla^{1/2}_{ \varphi}\Lambda_{\varphi}\big{(}\int_{\mathbb{R}^{3}}e^{-n(t^{2}+s^{2}+p^{2})} \delta^{ip}_{\mathbb{G}}\sigma^{\varphi}_{t}\circ\tau^{\mathbb{G}}_{s}(x)\, \mathrm{d}t\,\mathrm{d}s\,\mathrm{d}p\big{)}\big{|}\] \[=(\tfrac{n}{\pi})^{3/2}\big{|}\Lambda_{\widehat{\varphi}}( \lambda_{\mathbb{G}}(\omega))\big{|}\big{|}\Lambda_{\varphi}\big{(}\int_{ \mathbb{R}^{3}}e^{-n((t+i/2)^{2}+s^{2}+p^{2})}\nu^{p/2}_{\mathbb{G}}\delta^{ip }_{\mathbb{G}}\sigma^{\varphi}_{t}\circ\tau^{\mathbb{G}}_{s}(x)\,\mathrm{d}t \,\mathrm{d}s\,\mathrm{d}p\big{)}\big{|}\] \[\leq(\tfrac{n}{\pi})^{3/2}\big{|}\Lambda_{\widehat{\varphi}}( \lambda_{\mathbb{G}}(\omega))\big{|}\big{(}\int_{\mathbb{R}^{3}}|e^{-n((t+i/2) ^{2}+s^{2}+p^{2})}|\nu^{p/2-s/2}_{\mathbb{G}}\,\mathrm{d}t\,\mathrm{d}s\, \mathrm{d}p\big{)}\big{|}\Lambda_{\varphi}(x)\big{|},\]
where \(\nu_{\mathbb{G}}\) is the scaling constant of \(\mathbb{G}\), we have \(\overline{\omega_{n}}\in\mathscr{J}\). Here we used \(\sigma^{\varphi}_{t}(\delta^{ip}_{\mathbb{G}})=\nu^{itp}_{\mathbb{G}}\delta^{ip}\) and \(\varphi\circ\tau_{s}=\nu^{-s}_{\mathbb{G}}\varphi\). It is automatic that \(\overline{\omega_{n}}\in\mathrm{L}^{1}_{\sharp}(\mathbb{G})\). Indeed,
\[\overline{\overline{\omega_{n}}}(S_{\mathbb{G}}(y))=\omega_{n}(S _{\mathbb{G}}(y)) =\sqrt{\tfrac{n}{\pi}}\int_{\mathbb{R}}e^{-ns^{2}}M^{\varphi}_{n} \circ M^{\delta}_{n}(\omega)(\tau^{\mathbb{G}}_{s}\circ R_{\mathbb{G}}\circ \tau^{\mathbb{G}}_{-i/2}(y))\,\mathrm{d}s\] \[=\big{(}\sqrt{\tfrac{n}{\pi}}\int_{\mathbb{R}}e^{-n(s+i/2)^{2}}M^ {\varphi}_{n}\circ M^{\delta}_{n}(\omega)\circ\tau^{\mathbb{G}}_{s}\circ R_{ \mathbb{G}}\,\mathrm{d}s\big{)}(y).\]
The above calculation shows also that \(\overline{\omega_{n}}^{\sharp}=\sqrt{\tfrac{n}{\pi}}\int_{\mathbb{R}}e^{-n(s+i /2)^{2}}M^{\varphi}_{n}\circ M^{\delta}_{n}(\omega)\circ\tau^{\mathbb{G}}_{s} \circ R_{\mathbb{G}}\,\mathrm{d}s\). Moreover we have \(\overline{\omega_{n}}^{\sharp}\in\mathscr{J}\), which is a consequence of the following calculation:
\[|\overline{\omega_{n}}^{\sharp}(x^{*})|=(\tfrac{n}{\pi})^{3/2} \big{|}\int_{\mathbb{R}^{3}}e^{-n(t^{2}+(s+i/2)^{2}+p^{2})}\omega\big{(}\delta^{ ip}_{\mathbb{G}}\sigma^{\varphi}_{t}\circ\tau^{\mathbb{G}}_{s}\circ R_{ \mathbb{G}}(x^{*})\big{)}\,\mathrm{d}t\,\mathrm{d}s\,\mathrm{d}p\big{|}\] \[\leq(\tfrac{n}{\pi})^{3/2}\big{|}\Lambda_{\widehat{\varphi}}( \lambda_{\mathbb{G}}(\omega))\big{|}\big{|}\Lambda_{\varphi}\big{(}\int_{ \mathbb{R}^{3}}e^{-n(t^{2}+(s-i/2)^{2}+p^{2})}\sigma^{\varphi}_{t}\circ\tau^{ \mathbb{G}}_{s}\circ R_{\mathbb{G}}(x)\delta^{-ip}_{\mathbb{G}}\,\mathrm{d}t\, \mathrm{d}s\,\mathrm{d}p\big{)}\big{|}\] \[=(\tfrac{n}{\pi})^{3/2}\big{|}\Lambda_{\widehat{\varphi}}( \lambda_{\mathbb{G}}(\omega))\big{|}\big{|}\Lambda_{\psi}\big{(}\int_{ \mathbb{R}^{3}}e^{-n(t^{2}+(s-i/2)^{2}+(p+i/2)^{2})}\sigma^{\varphi}_{t}\circ \tau^{\mathbb{G}}_{s}\circ R_{\mathbb{G}}(x)\delta^{ip}_{\mathbb{G}}\,\mathrm{d}t \,\mathrm{d}s\,\mathrm{d}p\big{)}\big{|}\] \[=(\tfrac{n}{\pi})^{3/2}\big{|}\Lambda_{\widehat{\varphi}}( \lambda_{\mathbb{G}}(\omega))\big{|}\big{|}\Lambda_{\psi}\big{(}\int_{\mathbb{R}^{3}}e ^{-n(t^{2}+(s-i/2)^{2}+(p+i/2)^{2})}\delta^{-it}_{\mathbb{G}}\sigma^{\psi}_{t} \circ\tau^{\mathbb{G}}_{s}\circ R_{\mathbb{G}}(x)\delta^{i(t+p)}_{\mathbb{G}}\, \mathrm{d}t\,\mathrm{d}s\,\mathrm{d}p\big{)}\big{|}\] \[=(\tfrac{n}{\pi})^{3/2}\big{|}\Lambda_{\widehat{\varphi}}( \lambda_{\mathbb{G}}(\omega))\big{|}\big{|}\big{|}\Lambda_{\varphi}\big{(}\int_{ \mathbb{R}^{3}}e^{-n(t^{2}+(s+i/2)^{2}+(p-i/2)^{2})}\lambda^{-i(t+p)}_{ \mathbb{G}}\sigma^{\psi}_{t}\circ\tau^{\mathbb{G}}_{s}\circ R_{\mathbb{G}}(x^{* })\delta^{it}_{\mathbb{G}}\,\mathrm{d}t\,\mathrm{d}s\,\mathrm{d}p\big{)}\big{|}\] \[=(\tfrac{n}{\pi})^{3/2}\big{|}\Lambda_{\widehat{\varphi}}( \lambda_{\mathbb{G}}(\omega))\big{|}\big{|}\big{|}\Lambda_{\varphi}\big{(}\int_{ \mathbb{R}^{3}}e^{-n((t+i/2)^{2}+(s+i/2)^{2}+(p-i/2)^{2})}\lambda^{-p/2}_{ \mathbb{G}}\delta^{-i(t+p)}_{\mathbb{G}}\sigma^{\psi}_{t}\circ\tau^{\mathbb{G}}_{s} \circ R_{\mathbb{G}}(x^{*})\delta^{it}_{\mathbb{G}}\,\mathrm{d}t\,\mathrm{d}s\, \mathrm{d}p\big{)}\big{|}\] \[\leq(\tfrac{n}{\pi})^{3/2}\big{|}\Lambda_{\widehat{\varphi}}( \lambda_{\mathbb{G}}(\omega))\big{|}\int_{\mathbb{R}^{3}}|e^{-n((t+i/2)^{2}+(s+i/2)^ {2}+(p-i/2)^{2})}|\nu^{s/2-t/2-p/2}_{\mathbb{G}}\|\Lambda_{
\((\tau_{-t}^{\mathbb{G}}\otimes\sigma_{-t}^{\widehat{\varphi}})\mathrm{W}^{\mathbb{G}} =(\delta_{\mathbb{G}}^{it}\otimes\mathbb{1})\mathrm{W}^{\mathbb{G}}\) and
\[\lambda_{\mathbb{G}}((\omega\delta_{\mathbb{G}}^{ip})\circ\tau_{s} ^{\mathbb{G}}\circ\sigma_{t}^{\varphi})=\big{(}(\omega\delta_{\mathbb{G}}^{ip}) \circ\tau_{s}^{\mathbb{G}}\otimes\mathrm{id}\big{)}\big{(}(\mathrm{id} \otimes\tau_{-t}^{\widehat{\mathbb{G}}})(\mathrm{W}^{\mathbb{G}})(1\otimes \delta_{\widehat{\mathbb{G}}}^{it})\big{)}\] \[=(\omega\otimes\tau_{-t-s}^{\widehat{\mathbb{G}}}\circ\sigma_{-p} ^{\widehat{\varphi}})(\mathrm{W}^{\mathbb{G}})\delta_{\widehat{\mathbb{G}}}^{it} =\tau_{p-t-s}^{\widehat{\mathbb{G}}}\circ\sigma_{-p}^{\widehat{\varphi}}( \lambda_{\mathbb{G}}(\omega))\delta_{\widehat{\mathbb{G}}}^{it}.\]
Hence
\[\Lambda_{\widehat{\varphi}}(\lambda_{\mathbb{G}}(\omega_{n})) =(\tfrac{n}{\pi})^{3/2}\int_{\mathbb{R}^{3}}e^{-n(t^{2}+s^{2}+p^{ 2})}\Lambda_{\widehat{\varphi}}(\lambda_{\mathbb{G}}((\omega\delta_{\mathbb{ G}}^{ip})\circ\tau_{s}^{\mathbb{G}}\circ\sigma_{t}^{\varphi}))\,\mathrm{d}t\,\mathrm{d}s\, \mathrm{d}p\] \[=(\tfrac{n}{\pi})^{3/2}\int_{\mathbb{R}^{3}}e^{-n(t^{2}+s^{2}+p^{ 2})}\Lambda_{\widehat{\varphi}}\big{(}\tau_{p-t-s}^{\widehat{\mathbb{G}}}\circ \sigma_{-p}^{\widehat{\varphi}}(\lambda_{\mathbb{G}}(\omega))\delta_{\widehat{ \mathbb{G}}}^{it}\big{)}\,\mathrm{d}t\,\mathrm{d}s\,\mathrm{d}p\] \[=(\tfrac{n}{\pi})^{3/2}\int_{\mathbb{R}^{3}}e^{-n(t^{2}+s^{2}+p^{ 2})}\nu_{\mathbb{G}}^{s/2-p/2}J_{\widehat{\varphi}}\delta_{\widehat{\mathbb{G} }}^{-it}J_{\widehat{\varphi}}P^{i(p-t-s)}\nabla_{\widehat{\varphi}}^{-ip} \Lambda_{\widehat{\varphi}}(\lambda_{\mathbb{G}}(\omega))\,\mathrm{d}t\, \mathrm{d}s\,\mathrm{d}p,\]
where \(P\) is the positive self-adjoint operator implementing the scaling group via \(P^{it}\Lambda_{\varphi}(x)=\nu_{\mathbb{G}}^{t/2}\Lambda_{\varphi}(\tau_{t}^{\mathbb{G}}(x))\). The convergence \(\Lambda_{\widehat{\varphi}}(\lambda_{\mathbb{G}}(\omega_{n}))\xrightarrow[n\to\infty]{}\Lambda_{\widehat{\varphi}}(\lambda_{\mathbb{G}}(\omega))\) now follows by a standard argument.
## 3. Completely bounded multipliers
Unless stated otherwise, in this section \(\mathbb{G}\) is an arbitrary locally compact quantum group.
### Definitions and fundamental facts
We start by discussing the notions of (left/right) centralisers and multipliers. In the main part of the text we will focus on the left version of these objects - this is simply a matter of choice, compare Proposition 4.14.
Following [33], we say that a linear map \(T\colon\mathrm{L}^{1}(\widehat{\mathbb{G}})\to\mathrm{L}^{1}(\widehat{ \mathbb{G}})\) is a _left (respectively, right) centraliser_ if
\[T(\omega\star\omega^{\prime})=T(\omega)\star\omega^{\prime}\quad\big{(} \text{respectively}\;T(\omega\star\omega^{\prime})=\omega\star T(\omega^{\prime}) \big{)}\qquad(\omega,\omega^{\prime}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})).\]
We denote by \(C^{l}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) the space of completely bounded left centralisers. Together with the completely bounded norm and composition as product, \(C^{l}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) becomes a Banach algebra. Similarly, \(C^{r}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) stands for the space of completely bounded right centralisers, where now it is natural to use the opposite composition as product. We equip these spaces with an operator space structure by requiring that the embeddings \(C^{l}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}})),C^{r}_{cb}(\mathrm{L}^{1}( \widehat{\mathbb{G}}))\hookrightarrow\mathrm{CB}(\mathrm{L}^{1}(\widehat{ \mathbb{G}}))\) are completely isometric; both then become completely contractive Banach algebras.
An operator \(b\in\mathrm{L}^{\infty}(\mathbb{G})\) is said to be a _completely bounded left multiplier_ if \(b\,\mathrm{A}(\mathbb{G})\subseteq\mathrm{A}(\mathbb{G})\) and the associated map
\[\Theta^{l}(b)_{*}\colon\,\mathrm{L}^{1}(\widehat{\mathbb{G}})\to\mathrm{L}^{1} (\widehat{\mathbb{G}})\quad\text{satisfying}\quad b\widehat{\lambda}(\omega)= \widehat{\lambda}(\Theta^{l}(b)_{*}(\omega))\qquad(\omega\in\mathrm{L}^{1}( \widehat{\mathbb{G}}))\]
is completely bounded. As \(\widehat{\lambda}\) is injective, this definition makes sense. We follow here the notation of [13]; sometimes the notation \(m_{b}^{l}=\Theta^{l}(b)_{*}\) is used instead. As \(\widehat{\lambda}\) is multiplicative, for any \(b\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) we have that \(\Theta^{l}(b)_{*}\in C^{l}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\). We write \(\Theta^{l}(b)=(\Theta^{l}(b)_{*})^{*}\), and denote the space of CB left multipliers by \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\). Any Fourier algebra element \(\widehat{\lambda}(\omega)\in\mathrm{A}(\mathbb{G})\) is a CB left multiplier with \(\Theta^{l}(\widehat{\lambda}(\omega))_{*}\in\mathrm{CB}(\mathrm{L}^{1}( \widehat{\mathbb{G}}))\) being the left multiplication by \(\omega\) and \(\Theta^{l}(\widehat{\lambda}(\omega))=(\omega\otimes\mathrm{id})\widehat{ \Delta}\). Moreover, it holds that \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\subseteq\mathrm{M}(\mathrm{C}_{0}( \mathbb{G}))\), see [17, Theorem 4.2].
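The claim about Fourier algebra elements can be verified directly; a short computation (nothing beyond the multiplicativity of \(\widehat{\lambda}\) and the definition of \(\star\) is used):

\[\widehat{\lambda}(\omega)\widehat{\lambda}(\omega^{\prime})=\widehat{\lambda}(\omega\star\omega^{\prime})\qquad(\omega^{\prime}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})),\]

so \(\Theta^{l}(\widehat{\lambda}(\omega))_{*}(\omega^{\prime})=\omega\star\omega^{\prime}\), and dually, for \(x\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\),

\[\langle\omega^{\prime},\Theta^{l}(\widehat{\lambda}(\omega))(x)\rangle=\langle\omega\star\omega^{\prime},x\rangle=(\omega\otimes\omega^{\prime})\widehat{\Delta}(x)=\langle\omega^{\prime},(\omega\otimes\mathrm{id})\widehat{\Delta}(x)\rangle,\]

which gives \(\Theta^{l}(\widehat{\lambda}(\omega))=(\omega\otimes\mathrm{id})\widehat{\Delta}\).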
Conversely, if \(T\in C^{l}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) is a left centraliser, then its Banach space dual \(T^{*}\) is a normal CB map on \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) which is a left \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\)-module homomorphism, i.e. \(T^{*}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\mathrm{CB}^{\sigma}(\mathrm{L}^{ \infty}(\widehat{\mathbb{G}}))\). Then, by [34, Corollary 4.4], there exists a unique CB left multiplier \(b\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) satisfying \(\Theta^{l}(b)=T^{*}\), that is, \(\Theta^{l}(b)_{*}=T\). These constructions are mutually inverse, and so the map \(\Theta^{l}(\cdot)_{*}:\,\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\to C^{l}_{cb }(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) is bijective. We define the operator space structure on \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) so that these spaces become completely isometric.
The above notions have right counterparts. Recalling that \(\mathrm{V}^{\widehat{\mathbb{G}}}\in\mathrm{L}^{\infty}(\mathbb{G})^{\prime} \bar{\otimes}\,\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) is the right Kac-Takesaki operator, let us introduce the map \(\widehat{\rho}\colon\,\mathrm{L}^{1}(\widehat{\mathbb{G}})\ni\omega\mapsto( \mathrm{id}\otimes\omega)\mathrm{V}^{\widehat{\mathbb{G}}}\in\mathrm{L}^{\infty}( \mathbb{G})^{\prime}\). Its image \(\widehat{\rho}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) should be thought of as a right analogue of the Fourier algebra \(\mathrm{A}(\mathbb{G})=\widehat{\lambda}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\)
An operator \(b^{\prime}\in\mathrm{L}^{\infty}(\mathbb{G})^{\prime}\) is called a _completely bounded right multiplier_ if \(\widehat{\rho}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))b^{\prime}\in\widehat{\rho} (\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) and the associated map \(\Theta^{r}(b^{\prime})_{*}\colon\mathrm{L}^{1}(\widehat{\mathbb{G}})\to \mathrm{L}^{1}(\widehat{\mathbb{G}})\) is CB. Similarly as in the left version, we write \(\Theta^{r}(b^{\prime})\in\mathrm{CB}^{\sigma}_{\mathrm{L}^{1}(\widehat{ \mathbb{G}})}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\) for \((\Theta^{r}(b^{\prime})_{*})^{*}\) and \(\mathrm{M}^{r}_{cb}(\widehat{\rho}(\mathrm{L}^{1}(\widehat{\mathbb{G}})))\) for the space of CB right multipliers.
Any CB right centraliser \(T\in C^{r}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) is associated to a unique CB right multiplier \(b^{\prime}\in\mathrm{M}^{r}_{cb}(\widehat{\rho}(\mathrm{L}^{1}(\widehat{ \mathbb{G}})))\) via \(T=\Theta^{r}(b^{\prime})_{*}\) and this assignment is bijective. We similarly define an operator space structure on \(\mathrm{M}^{r}_{cb}(\widehat{\rho}(\mathrm{L}^{1}(\widehat{\mathbb{G}})))\) to make it completely isometric with \(C^{r}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\).
We will write e.g. \(\|b\|_{cb}=\|\Theta^{l}(b)\|_{cb}\) for \(b\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\). Observe that \(b\widehat{\lambda}(\omega)=\widehat{\lambda}(\Theta^{l}(b)_{*}(\omega))\) for each \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\) if and only if \((\mathbb{1}\otimes b)\mathrm{W}^{\widehat{\mathbb{G}}}=(\Theta^{l}(b)\otimes \mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}})\), and from this, it follows that \(\|b\|\leq\|b\|_{cb}\). Similarly we have \(\|b^{\prime}\|\leq\|b^{\prime}\|_{cb}\) for \(b^{\prime}\in\mathrm{M}^{r}_{cb}(\widehat{\rho}(\mathrm{L}^{1}(\widehat{ \mathbb{G}})))\).
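The final estimate can be seen as follows; a sketch of the argument (using only that \(\mathrm{W}^{\widehat{\mathbb{G}}}\) is unitary and that slice maps \(\omega\otimes\mathrm{id}\) with \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\) separate the points of \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\,\bar{\otimes}\,\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\)): applying \(\omega\otimes\mathrm{id}\) to both sides shows that the relation \(b\widehat{\lambda}(\omega)=\widehat{\lambda}(\Theta^{l}(b)_{*}(\omega))\) for all \(\omega\) is indeed equivalent to \((\mathbb{1}\otimes b)\mathrm{W}^{\widehat{\mathbb{G}}}=(\Theta^{l}(b)\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}})\), and then

\[\|b\|=\|(\mathbb{1}\otimes b)\mathrm{W}^{\widehat{\mathbb{G}}}\|=\|(\Theta^{l}(b)\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}})\|\leq\|\Theta^{l}(b)\|_{cb}=\|b\|_{cb}.\]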
As a consequence of the above discussion, we have a commutative diagram
(3.1) [commutative diagram, not reproduced here, relating \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\), \(\mathrm{A}(\mathbb{G})\), \(C^{l}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\), \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) and \(\mathrm{L}^{\infty}(\mathbb{G})\)]
The two diagonal maps to \(\mathrm{L}^{\infty}(\mathbb{G})\) are the canonical inclusions, as is the map \(\mathrm{A}(\mathbb{G})\to\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\), while the vertical map \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\to C^{l}_{cb}(\mathrm{L}^{1}(\widehat{ \mathbb{G}}))\) is given by left multiplication. A simple calculation shows that this diagram indeed commutes. We obtain an immediate corollary: the map \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\to C^{l}_{cb}(\mathrm{L}^{1}(\widehat{ \mathbb{G}}))\) is injective, equivalently, if \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\) with \(\omega\star\omega^{\prime}=0\) for all \(\omega^{\prime}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\), then \(\omega=0\).
There is a canonical way of moving between left and right CB multipliers using the extension of the unitary antipode \(\widehat{R}\) of \(\widehat{\mathbb{G}}\). Recall that it is implemented via \(\widehat{R}=J_{\varphi}(\cdot)^{*}J_{\varphi}\), and let us denote its canonical extension to a bounded linear map on \(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\) by \(\widehat{R}^{\sim}=J_{\varphi}(\cdot)^{*}J_{\varphi}\). The following result will be used in Proposition 4.14 to show that it does not matter if we use left CB multipliers, or right CB multipliers, when we introduce the approximation property (AP), see Definition 4.1 below.
**Lemma 3.1**.: _For \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\) we have \(\widehat{\rho}(\omega)=\widehat{R}^{\sim}(\widehat{\lambda}(\omega\circ \widehat{R}))\). Furthermore, \(\widehat{R}^{\sim}(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G})))=\mathrm{M}^{r }_{cb}(\widehat{\rho}(\mathrm{L}^{1}(\widehat{\mathbb{G}})))\) and for \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) we have \(\Theta^{r}(\widehat{R}^{\sim}(a))=\widehat{R}\circ\Theta^{l}(a)\circ\widehat{R}\)._
Proof.: Recall that \(\mathrm{V}^{\widehat{\mathbb{G}}}=(J_{\varphi}\otimes J_{\varphi})\chi( \mathrm{W}^{\widehat{\mathbb{G}}})^{*}(J_{\varphi}\otimes J_{\varphi})\), see [46, Proposition 2.15]. It follows that
\[\widehat{\rho}(\omega)=(\mathrm{id}\otimes\omega)\mathrm{V}^{\widehat{\mathbb{ G}}}=\widehat{R}^{\sim}\big{(}(\mathrm{id}\otimes\omega\circ\widehat{R})(\chi( \mathrm{W}^{\widehat{\mathbb{G}}}))\big{)}=\widehat{R}^{\sim}\big{(}(\omega \circ\widehat{R}\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}})\big{)}= \widehat{R}^{\sim}(\widehat{\lambda}(\omega\circ\widehat{R})). \tag{3.2}\]
Next, take \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) and \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\). We have \(\widehat{R}^{\sim}(a)=J_{\varphi}a^{*}J_{\varphi}\in\mathrm{L}^{\infty}( \mathbb{G})^{\prime}\), so by (3.2),
\[\widehat{\rho}(\omega)\widehat{R}^{\sim}(a) =\widehat{R}^{\sim}(\widehat{\lambda}(\omega\circ\widehat{R})) \widehat{R}^{\sim}(a)=\widehat{R}^{\sim}(a\widehat{\lambda}(\omega\circ \widehat{R}))\] \[=\widehat{R}^{\sim}\big{(}\widehat{\lambda}(\Theta^{l}(a)_{*}( \omega\circ\widehat{R}))\big{)}=\widehat{\rho}\big{(}\Theta^{l}(a)_{*}(\omega \circ\widehat{R})\circ\widehat{R}\big{)}.\]
Hence \(\widehat{R}^{\sim}(a)\in\mathrm{M}^{r}_{cb}(\widehat{\rho}(\mathrm{L}^{1}(\widehat{\mathbb{G}})))\) with \(\Theta^{r}(\widehat{R}^{\sim}(a))=\widehat{R}\circ\Theta^{l}(a)\circ\widehat{R}\); this map is indeed CB, compare Lemma 4.8. We have shown that \(\widehat{R}^{\sim}(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G})))\subseteq \mathrm{M}^{r}_{cb}(\widehat{\rho}(\mathrm{L}^{1}(\widehat{\mathbb{G}})))\); the converse inclusion is analogous.
We finish by recording a known result for which we have not found a convenient reference.
**Lemma 3.2**.: _Let \(b\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\). There is \(\beta\in\mathbb{C}\) with \(\Theta^{l}(b)(\mathbb{1})=\beta\mathbb{1}\)._
Proof.: It suffices to show that for \(T\in{}_{\mathrm{L}^{1}(\widehat{\mathbb{G}})}\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\) there is \(\beta\) with \(T(\mathbb{1})=\beta\mathbb{1}\). By definition, \(\widehat{\Delta}\circ T=(T\otimes\mathrm{id})\widehat{\Delta}\), and so \(\widehat{\Delta}(T(\mathbb{1}))=T(\mathbb{1})\otimes\mathbb{1}\). By [18, Theorem 2.1] (and the references therein) it follows that \(T(\mathbb{1})\in\mathbb{C}\mathbb{1}\), as required.
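To illustrate the lemma on the Fourier algebra: if \(b=\widehat{\lambda}(\omega)\in\mathrm{A}(\mathbb{G})\) for some \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\), then the associated centraliser is left multiplication by \(\omega\), so \(\Theta^{l}(b)=(\omega\otimes\mathrm{id})\widehat{\Delta}\) (see also Lemma 3.10 below), and hence
\[\Theta^{l}(b)(\mathbb{1})=(\omega\otimes\mathrm{id})\widehat{\Delta}(\mathbb{1})=(\omega\otimes\mathrm{id})(\mathbb{1}\otimes\mathbb{1})=\omega(\mathbb{1})\,\mathbb{1},\]
so that in this special case \(\beta=\omega(\mathbb{1})\).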
### Predual
Since the inclusion \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\hookrightarrow\mathrm{L}^{\infty}( \mathbb{G})\) is bounded (actually, contractive), we can consider the restriction of the Banach space adjoint of this map, giving a map \(\alpha^{l}\colon\mathrm{L}^{1}(\mathbb{G})\to\mathrm{M}^{l}_{cb}(\mathrm{A}( \mathbb{G}))^{*}\). Let us define the space \(Q^{l}(\mathrm{A}(\mathbb{G}))\) as the closure of the image of \(\alpha^{l}\), so that
\[Q^{l}(\mathrm{A}(\mathbb{G}))=\overline{\alpha^{l}(\mathrm{L}^{1}(\mathbb{G})) }\subseteq\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))^{*}.\]
According to [33, Theorem 3.4], the space \(Q^{l}(\mathrm{A}(\mathbb{G}))\) is a predual of \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\), i.e. we have
\[Q^{l}(\mathrm{A}(\mathbb{G}))^{*}\cong\mathrm{M}^{l}_{cb}(\mathrm{A}( \mathbb{G}))\]
completely isometrically. Whenever we speak about the weak\({}^{*}\)-topology on \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) we will have in mind this particular choice of predual.
Similarly, we can restrict functionals in \(\mathrm{L}^{1}(\mathbb{G}^{\prime})\) to \(\mathrm{M}^{r}_{cb}(\widehat{\rho}(\mathrm{L}^{1}(\widehat{\mathbb{G}})))\), and after taking the closure obtain the predual \(Q^{r}(\widehat{\rho}(\mathrm{L}^{1}(\widehat{\mathbb{G}})))\subseteq\mathrm{ M}^{r}_{cb}(\widehat{\rho}(\mathrm{L}^{1}(\widehat{\mathbb{G}})))^{*}\). From now on, we will restrict our discussion to the "left" setting.
**Proposition 3.3**.: \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) _is a dual Banach algebra, that is, the multiplication of \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) is separately weak\({}^{*}\)-continuous._
Proof.: We turn \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))^{*}\) into an \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\)-bimodule in the usual way. Let \(a,b\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\subseteq\mathrm{L}^{\infty}(\mathbb{G})\) and \(f\in\mathrm{L}^{1}(\mathbb{G})\). Writing \(\langle\cdot,\cdot\rangle\) for the pairing between \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) and \(Q^{l}(\mathrm{A}(\mathbb{G}))\), or between \(\mathrm{L}^{\infty}(\mathbb{G})\) and \(\mathrm{L}^{1}(\mathbb{G})\), we have
\[\langle ab,\alpha^{l}(f)\rangle=\langle ab,f\rangle =\langle a,bf\rangle=\langle a,\alpha^{l}(bf)\rangle\] \[=\langle b,fa\rangle=\langle b,\alpha^{l}(fa)\rangle.\]
This calculation shows that \(b\cdot\alpha^{l}(f)=\alpha^{l}(bf)\) and \(\alpha^{l}(f)\cdot a=\alpha^{l}(fa)\), so by continuity, it follows that \(Q^{l}(\mathrm{A}(\mathbb{G}))\) is a closed submodule of \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))^{*}\). It is now standard, see [59, Proposition 1.2] for example, that the product is separately weak\({}^{*}\)-continuous in \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\).
Our next goal is to obtain a characterisation of functionals in \(Q^{l}(\mathrm{A}(\mathbb{G}))\). We will do this by obtaining an alternative description of the weak\({}^{*}\)-topology on \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\). In the process, we also discuss CB maps on the \(\mathrm{C}^{*}\)-algebra \(\mathrm{C}_{0}(\widehat{\mathbb{G}})\) which are associated to left centralisers.
To start, we observe that the adjoint \(T^{*}\) of a CB left centraliser \(T\in C^{l}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) restricts to a CB map on \(\mathrm{C}_{0}(\widehat{\mathbb{G}})\). Indeed, we can write \(T^{*}=\Theta^{l}(a)\) for some \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) and then the claim follows from the equality \((1\otimes a)\mathrm{W}^{\widehat{\mathbb{G}}}=(T^{*}\otimes\mathrm{id})\mathrm{W}^{\widehat{\mathbb{G}}}\) and density of \(\mathrm{A}(\widehat{\mathbb{G}})\) in \(\mathrm{C}_{0}(\widehat{\mathbb{G}})\). We seek a characterisation of which CB maps on \(\mathrm{C}_{0}(\widehat{\mathbb{G}})\) occur in this way as restrictions of adjoints of left CB centralisers, in terms of a property similar to the characterisation \(C^{l}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\cong{}_{\mathrm{L}^{1}(\widehat{\mathbb{G}})}\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\).
In the following statement, recall that \((\mathrm{C}_{0}(\widehat{\mathbb{G}}),\widehat{\Delta})\) is bisimplifiable, and so elements of the form \(\widehat{\Delta}(a)(1\otimes b)\), for \(a,b\in\mathrm{C}_{0}(\widehat{\mathbb{G}})\), form a linearly dense subset of \(\mathrm{C}_{0}(\widehat{\mathbb{G}})\otimes\mathrm{C}_{0}(\widehat{\mathbb{G}})\). Hence the left-hand-side of (3.3) is contained in \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\otimes\mathrm{C}_{0}(\widehat{\mathbb{ G}})\subseteq\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\bar{\otimes}\mathrm{L}^{ \infty}(\widehat{\mathbb{G}})\), while the right-hand-side is in \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\bar{\otimes}\mathrm{L}^{\infty}( \widehat{\mathbb{G}})\). We also recall that, by Kaplansky density, \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\) is (completely) isometrically a subspace of \(\mathrm{C}_{0}(\widehat{\mathbb{G}})^{*}\).
**Lemma 3.4**.: _Let \(L\in\mathrm{CB}(\mathrm{C}_{0}(\widehat{\mathbb{G}}),\mathrm{L}^{\infty}( \widehat{\mathbb{G}}))\) be such that_
\[(L\otimes\mathrm{id})(\widehat{\Delta}(a)(1\otimes b))=\widehat{\Delta}(L(a))(1 \otimes b)\qquad(a,b\in\mathrm{C}_{0}(\widehat{\mathbb{G}})). \tag{3.3}\]
_Embedding \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\) into the duals of \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) and \(\mathrm{C}_{0}(\widehat{\mathbb{G}})\) in the usual way, we have that \(L^{*}\) maps \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\) to itself, and the resulting restriction \(T\in\mathrm{CB}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) is a left centraliser. Furthermore \(T^{*}\in\mathrm{CB}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\) restricts to \(L\), so consequently \(L\in\mathrm{CB}(\mathrm{C}_{0}(\widehat{\mathbb{G}}))\)._
Proof.: As \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\) is an essential \(\mathrm{C}_{0}(\widehat{\mathbb{G}})\)-module, by Cohen-Hewitt factorisation, given \(\omega_{2}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\) there are \(\omega_{3}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\) and \(b\in\mathrm{C}_{0}(\widehat{\mathbb{G}})\) with \(\omega_{2}=b\omega_{3}\). Then, for \(\omega_{1}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\) and \(a\in\mathrm{C}_{0}(\widehat{\mathbb{G}})\) we have
\[\langle L^{*}(\omega_{1}\star\omega_{2}),a\rangle_{\mathrm{C}_{0} (\widehat{\mathbb{G}})^{*},\mathrm{C}_{0}(\widehat{\mathbb{G}})}=\langle \widehat{\Delta}(L(a)),\omega_{1}\otimes\omega_{2}\rangle_{\mathrm{L}^{\infty} (\widehat{\mathbb{G}})\bar{\otimes}\,\mathrm{L}^{\infty}(\widehat{\mathbb{G}}),\mathrm{L}^{1}(\widehat{\mathbb{G}})\bar{\otimes}\,\mathrm{L}^{1}(\widehat{ \mathbb{G}})}\] \[=\langle\widehat{\Delta}(L(a))(1\otimes b),\omega_{1}\otimes \omega_{3}\rangle_{\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\bar{\otimes}\, \mathrm{L}^{\infty}(\widehat{\mathbb{G}}),\mathrm{L}^{1}(\widehat{\mathbb{G}}) \bar{\otimes}\,\mathrm{L}^{1}(\widehat{\mathbb{G}})}\] \[=\langle(L\otimes\mathrm{id})(\widehat{\Delta}(a)(1\otimes b)), \omega_{1}\otimes\omega_{3}\rangle_{\mathrm{L}^{\infty}(\widehat{\mathbb{G}}) \bar{\otimes}\,\mathrm{L}^{\infty}(\widehat{\mathbb{G}}),\mathrm{L}^{1}( \widehat{\mathbb{G}})\bar{\otimes}\,\mathrm{L}^{1}(\widehat{\mathbb{G}})}\] \[=\langle L^{*}(\omega_{1})\otimes\omega_{3},\widehat{\Delta}(a)( 1\otimes b)\rangle_{(\mathrm{C}_{0}(\widehat{\mathbb{G}})\otimes\mathrm{C}_{0 }(\widehat{\mathbb{G}}))^{*},\mathrm{C}_{0}(\widehat{\mathbb{G}})\otimes \mathrm{C}_{0}(\widehat{\mathbb{G}})}\] \[=\langle L^{*}(\omega_{1}),(\mathrm{id}\otimes\omega_{3})( \widehat{\Delta}(a)(1\otimes b))\rangle_{\mathrm{C}_{0}(\widehat{\mathbb{G}})^ {*},\mathrm{C}_{0}(\widehat{\mathbb{G}})}\] \[=\langle L^{*}(\omega_{1}),(\mathrm{id}\otimes\omega_{2}) \widehat{\Delta}(a)\rangle_{\mathrm{C}_{0}(\widehat{\mathbb{G}})^{*},\mathrm{C }_{0}(\widehat{\mathbb{G}})}\] \[=\langle L^{*}(\omega_{1})\star\omega_{2},a\rangle_{\mathrm{C}_{ 0}(\widehat{\mathbb{G}})^{*},\mathrm{C}_{0}(\widehat{\mathbb{G}})}.\]
It follows that \(L^{*}(\omega_{1}\star\omega_{2})=L^{*}(\omega_{1})\star\omega_{2}\) in \(\mathrm{C}_{0}(\widehat{\mathbb{G}})^{*}\). As \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\) is an ideal in \(\mathrm{C}_{0}(\widehat{\mathbb{G}})^{*}\) ([45, Proof of Proposition 8.3]), this shows that \(L^{*}(\omega_{1}\star\omega_{2})\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\), and as products have dense linear span in \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\) ([13, Section 3]), we conclude that \(L^{*}\) restricts to a map on \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\), say \(T\in\mathrm{CB}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\). Then \(T(\omega_{1}\star\omega_{2})=T(\omega_{1})\star\omega_{2}\) for all \(\omega_{1},\omega_{2}\), and so \(T\in C_{cb}^{l}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\). We finally calculate that, for \(a\in\mathrm{C}_{0}(\widehat{\mathbb{G}}),\omega\in\mathrm{L}^{1}(\widehat{ \mathbb{G}})\),
\[\langle T^{*}(a),\omega\rangle_{\mathrm{L}^{\infty}(\widehat{\mathbb{G}}), \mathrm{L}^{1}(\widehat{\mathbb{G}})}=\langle a,T(\omega)\rangle_{\mathrm{L}^ {\infty}(\widehat{\mathbb{G}}),\mathrm{L}^{1}(\widehat{\mathbb{G}})}= \langle L^{*}(\omega),a\rangle_{\mathrm{C}_{0}(\widehat{\mathbb{G}})^{*}, \mathrm{C}_{0}(\widehat{\mathbb{G}})}=\langle L(a),\omega\rangle_{\mathrm{L}^ {\infty}(\widehat{\mathbb{G}}),\mathrm{L}^{1}(\widehat{\mathbb{G}})},\]
and so \(T^{*}\) restricts to \(L\), as required.
We can now characterise what it means for an operator in \(\mathrm{CB}(\mathrm{C}_{0}(\widehat{\mathbb{G}}))\) to be a centraliser. Condition (2) in the following proposition should be thought of as a \(\mathrm{C}_{0}(\widehat{\mathbb{G}})\) variant of what it means to be a left \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\)-module homomorphism.
**Proposition 3.5**.: _For \(L\in\mathrm{CB}(\mathrm{C}_{0}(\widehat{\mathbb{G}}))\) the following are equivalent:_
1. _there is_ \(T\in C_{cb}^{l}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) _such that_ \(T^{*}\) _restricts to_ \(L\)_;_
2. \((L\otimes\mathrm{id})(\widehat{\Delta}(a)(1\otimes b))=\widehat{\Delta}(L(a))( 1\otimes b)\) _for each_ \(a,b\in\mathrm{C}_{0}(\widehat{\mathbb{G}})\)_;_
_Furthermore, the restriction map \(C_{cb}^{l}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\cong{}_{\mathrm{L}^{1}( \widehat{\mathbb{G}})}\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{ \mathbb{G}}))\to\mathrm{CB}(\mathrm{C}_{0}(\widehat{\mathbb{G}}))\) is a complete isometry; in particular, there is a bijection between \(C_{cb}^{l}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) and the space of all maps \(L\in\mathrm{CB}(\mathrm{C}_{0}(\widehat{\mathbb{G}}))\) satisfying (2)._
Proof.: If (1) holds then \(\widehat{\Delta}T^{*}=(T^{*}\otimes\mathrm{id})\widehat{\Delta}\) and so certainly the condition in (2) will hold for \(T^{*}\) and hence also for \(L\). Conversely, suppose that (2) holds. Then due to Lemma 3.4 we know that \(L^{*}\) restricts to a map \(T\in C_{cb}^{l}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) such that \(T^{*}\) restricts to \(L\), showing (1).
The restriction map \({}_{\mathrm{L}^{1}(\widehat{\mathbb{G}})}\mathrm{CB}^{\sigma}(\mathrm{L}^{ \infty}(\widehat{\mathbb{G}}))\to\mathrm{CB}(\mathrm{C}_{0}(\widehat{\mathbb{G}}))\) is clearly a complete contraction. With \(T,L\) as above, this restriction map is given by \(T^{*}\mapsto L\), and as \(T\) is the restriction of \(L^{*}\) and \(L\mapsto L^{*},T\mapsto T^{*}\) are completely isometric, it follows that \({}_{\mathrm{L}^{1}(\widehat{\mathbb{G}})}\mathrm{CB}^{\sigma}(\mathrm{L}^{ \infty}(\widehat{\mathbb{G}}))\to\mathrm{CB}(\mathrm{C}_{0}(\widehat{\mathbb{G}}))\) is a complete isometry.
**Proposition 3.6**.: _Equip the space \(\mathrm{CB}(\mathrm{C}_{0}(\widehat{\mathbb{G}}),\mathrm{L}^{\infty}(\widehat{ \mathbb{G}}))\) with the weak\({}^{*}\)-topology arising from the canonical predual \(\mathrm{C}_{0}(\widehat{\mathbb{G}})\widehat{\otimes}\,\mathrm{L}^{1}(\widehat{ \mathbb{G}})\). The restriction map_
\[C_{cb}^{l}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\cong{}_{\mathrm{L}^{1}( \widehat{\mathbb{G}})}\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{ \mathbb{G}}))\to\mathrm{CB}(\mathrm{C}_{0}(\widehat{\mathbb{G}}),\mathrm{L}^{ \infty}(\widehat{\mathbb{G}}))\]
_is a complete isometry which has weak\({}^{*}\)-closed image._
Proof.: Proposition 3.5 shows that the restriction map \({}_{\mathrm{L}^{1}(\widehat{\mathbb{G}})}\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\to\mathrm{CB}(\mathrm{C}_{0}(\widehat{\mathbb{G}}),\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\) is a complete isometry. Let \((T_{i})_{i\in I}\) be a net in \(C_{cb}^{l}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) such that the image of the net \((T_{i})_{i\in I}\) under this restriction map converges weak\({}^{*}\) to some \(L\in\mathrm{CB}(\mathrm{C}_{0}(\widehat{\mathbb{G}}),\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\); we must show that \(L\) also belongs to the image. Take \(a,b\in\mathrm{C}_{0}(\widehat{\mathbb{G}})\) and \(\omega_{1},\omega_{2}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\), and note that \((\mathrm{id}\otimes\omega_{2})(\widehat{\Delta}(a)(1\otimes b))\in\mathrm{C}_{0}(\widehat{\mathbb{G}})\). We now calculate that
\[\langle\widehat{\Delta}(L(a))(1\otimes b),\omega_{1}\otimes\omega_{2}\rangle= \lim_{i\in I}\langle T_{i}^{*}(a),\omega_{1}\star(b\omega_{2})\rangle=\lim_{i \in I}\langle a,T_{i}(\omega_{1})\star(b\omega_{2})\rangle\]
\[=\lim_{i\in I}\langle\widehat{\Delta}(a)(1\otimes b),T_{i}(\omega_{1})\otimes \omega_{2}\rangle=\lim_{i\in I}\langle T_{i}^{*}\big{(}(\mathrm{id}\otimes \omega_{2})(\widehat{\Delta}(a)(1\otimes b))\big{)},\omega_{1}\rangle\]
\[=\langle L\big{(}(\mathrm{id}\otimes\omega_{2})(\widehat{\Delta}(a)(1\otimes b ))\big{)},\omega_{1}\rangle=\langle(L\otimes\mathrm{id})(\widehat{\Delta}(a)( 1\otimes b)),\omega_{1}\otimes\omega_{2}\rangle.\]
All the above pairings are between a von Neumann algebra and its predual. It follows that we have \(\widehat{\Delta}(L(a))(1\otimes b)=(L\otimes\mathrm{id})(\widehat{\Delta}(a)(1\otimes b))\) in \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\,\bar{\otimes}\,\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\). By Lemma 3.4, \(L^{*}\) restricts to \(T\in C^{l}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) such that \(T^{*}\) restricts back to give \(L\). That is, \(T^{*}_{i}\xrightarrow[i\in I]{}T^{*}\) weak\({}^{*}\) in \(\mathrm{CB}(\mathrm{C}_{0}(\widehat{\mathbb{G}}),\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\), as required.
We now wish to show that the resulting weak\({}^{*}\)-topology on \(C^{l}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) given by Proposition 3.6 agrees with the weak\({}^{*}\)-topology on \(C^{l}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\cong\mathrm{M}^{l}_{cb}( \mathrm{A}(\mathbb{G}))\) given by the predual \(Q^{l}(\mathrm{A}(\mathbb{G}))\). In the following, for a Banach space \(E\), we denote by \(\kappa_{E}\colon E\to E^{**}\) the canonical map to the bidual.
**Lemma 3.7**.: _Let \(E,F\) be Banach spaces, and let \(\alpha:E^{*}\to F^{*}\) be a bounded linear map. Let \(D\subseteq F\) be a subset with dense linear span. Then \(\alpha\) is weak\({}^{*}\)-weak\({}^{*}\)-continuous if and only if \(\alpha^{*}\kappa_{F}(D)\subseteq\kappa_{E}(E)\). In this case, and when further \(\alpha\) is a bijection, the resulting preadjoint \(\alpha_{*}:F\to E\) is also an isomorphism of Banach spaces and \(\alpha\) is a weak\({}^{*}\)-weak\({}^{*}\)-homeomorphism._
Proof.: If \(\alpha\) is weak\({}^{*}\)-continuous, then there is a preadjoint operator \(\alpha_{*}:F\to E\) with \((\alpha_{*})^{*}=\alpha\), and so \(\alpha^{*}\kappa_{F}(D)=(\alpha_{*})^{**}\kappa_{F}(D)=\kappa_{E}\alpha_{*}(D) \subseteq\kappa_{E}(E)\), as claimed. Conversely, if \(\alpha^{*}\kappa_{F}(D)\subseteq\kappa_{E}(E)\) then by norm density of \(\mathrm{span}\,D\) in \(F\), and norm continuity of \(\alpha^{*}\), we have that \(\alpha^{*}\kappa_{F}(F)\subseteq\kappa_{E}(E)\). We could now directly apply [16, Lemma 10.1], but let us give the argument. There is a linear map \(T:F\to E\) with \(\alpha^{*}\kappa_{F}(x)=\kappa_{E}(T(x))\) for each \(x\in F\). As \(\kappa_{E},\kappa_{F}\) are isometries, \(T\) is bounded with \(\|T\|\leq\|\alpha^{*}\|=\|\alpha\|\). Then for \(x\in F,\mu\in E^{*}\),
\[\langle T^{*}(\mu),x\rangle=\langle\mu,T(x)\rangle=\langle\kappa_{E}(T(x)),\mu \rangle=\langle\alpha^{*}\kappa_{F}(x),\mu\rangle=\langle\alpha(\mu),x\rangle.\]
Hence \(T^{*}=\alpha\) and so \(\alpha\) is weak\({}^{*}\)-continuous, with preadjoint \(T\).
When \(\alpha\) is a bijection, by the Open Mapping Theorem, it is an isomorphism. Thus also \(\alpha^{*}\) is an isomorphism, and so as \(\alpha^{*}\kappa_{F}=\kappa_{E}\alpha_{*}\) it follows that \(\alpha_{*}\) is bounded below and so has closed image. If \(\mu\in(\alpha_{*}(F))^{\perp}\) then \(0=\langle\mu,\alpha_{*}(x)\rangle=\langle\alpha(\mu),x\rangle\) for all \(x\in F\), and so \(\alpha(\mu)=0\), hence \(\mu=0\). Thus \(\alpha_{*}\) has dense range; combined with the closedness of its image, this shows that \(\alpha_{*}\) is a surjection, and so an isomorphism.
**Theorem 3.8**.: _The weak\({}^{*}\)-topology on \(C^{l}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) given by the embedding into \(\mathrm{CB}(\mathrm{C}_{0}(\widehat{\mathbb{G}}),\mathrm{L}^{\infty}(\widehat{ \mathbb{G}}))\) agrees with the weak\({}^{*}\)-topology on \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) given by \(Q^{l}(\mathrm{A}(\mathbb{G}))\)._
Proof.: We use Lemma 3.7. Set \(E=Q^{l}(\mathrm{A}(\mathbb{G}))\). To avoid confusion, for this proof only, we shall write \(\theta:C^{l}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\to\mathrm{CB}(\mathrm{C} _{0}(\widehat{\mathbb{G}}),\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\) for the complete isometry \(T\mapsto T^{*}|_{\mathrm{C}_{0}(\widehat{\mathbb{G}})}\), given by Proposition 3.6. As the image of \(\theta\) is weak\({}^{*}\)-closed, it has canonical predual \(F\) which is a quotient of \(\mathrm{C}_{0}(\widehat{\mathbb{G}})\widehat{\otimes}\,\mathrm{L}^{1}(\widehat{ \mathbb{G}})\). Let \(\pi:\mathrm{C}_{0}(\widehat{\mathbb{G}})\widehat{\otimes}\,\mathrm{L}^{1}( \widehat{\mathbb{G}})\to F\) be the quotient map. We hence corestrict \(\theta\) to give an isomorphism \(\theta:C^{l}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\to F^{*}\). Let \(\alpha_{0}:E^{*}=\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\to C^{l}_{cb}( \mathrm{L}^{1}(\widehat{\mathbb{G}}))\) be the canonical bijection, and set \(\alpha=\theta\circ\alpha_{0}:E^{*}\to F^{*}\).
Given \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\), set \(T=\alpha_{0}(a)\), so by definition, \(a\widehat{\lambda}(\omega)=\widehat{\lambda}(T(\omega))\) for each \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\). Equivalently, \((1\otimes a)\mathrm{W}^{\widehat{\mathbb{G}}}=(T^{*}\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}})\). Given \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}}),f\in\mathrm{L}^{1}(\mathbb{G})\), set \(u=\pi((\mathrm{id}\otimes f)(\mathrm{W}^{\widehat{\mathbb{G}}})\otimes\omega)\in F\), and calculate that
\[\langle\kappa_{E}(\widehat{\lambda}(\omega)f),a\rangle_{E^{**},E^{*}}=\langle a,\widehat{\lambda}(\omega)f\rangle_{E^{*},E}=\langle a\widehat{\lambda}(\omega),f\rangle_{\mathrm{L}^{\infty}(\mathbb{G}),\mathrm{L}^{1}(\mathbb{G})}=\langle(T^{*}\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}}),\omega\otimes f\rangle\] \[=\langle T^{*}\big{(}(\mathrm{id}\otimes f)(\mathrm{W}^{\widehat{\mathbb{G}}})\big{)},\omega\rangle_{\mathrm{L}^{\infty}(\widehat{\mathbb{G}}),\mathrm{L}^{1}(\widehat{\mathbb{G}})}=\langle\theta(T)\big{(}(\mathrm{id}\otimes f)(\mathrm{W}^{\widehat{\mathbb{G}}})\big{)},\omega\rangle_{\mathrm{L}^{\infty}(\widehat{\mathbb{G}}),\mathrm{L}^{1}(\widehat{\mathbb{G}})}=\langle\alpha(a),u\rangle_{F^{*},F}.\]
It follows that \(\alpha^{*}(\kappa_{F}(u))=\kappa_{E}(\widehat{\lambda}(\omega)f)\in\kappa_{E}(E)\). As the linear span of such elements \(u\) is dense in \(F\), the conditions of the lemma are verified, and the result follows.
Using this result we can characterise functionals in the predual space \(Q^{l}(\mathrm{A}(\mathbb{G}))\).
**Proposition 3.9**.: _For any Hilbert space \(\mathsf{H}\) and \(x\in\mathrm{C}_{0}(\widehat{\mathbb{G}})\otimes\mathcal{K}(\mathsf{H})\), \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\widehat{\otimes}\,\mathrm{B}( \mathsf{H})_{*}\), the bounded linear functional_
\[\Omega_{x,\omega}\colon\,\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\ni a \mapsto\langle(\Theta^{l}(a)\otimes\mathrm{id})x,\omega\rangle\in\mathbb{C}.\]
_belongs to \(Q^{l}(\mathrm{A}(\mathbb{G}))\). Furthermore, all functionals in \(Q^{l}(\mathrm{A}(\mathbb{G}))\) are of this form for some separable Hilbert space._
This result was recorded without proof in [13, Proposition 3.2]. In the classical context of locally compact groups, an analogous result was proved by Haagerup and Kraus in [31, Proposition 1.5]. For the convenience of the reader, we give a proof using Theorem 3.8.
Proof.: We first show that \(\Omega_{x,\omega}\) is a member of \(Q^{l}(\mathrm{A}(\mathbb{G}))\). As \(Q^{l}(\mathrm{A}(\mathbb{G}))\subseteq\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{ G}))^{*}\) is norm closed, by first approximating \(x,\omega\) by sums of elementary tensors, and then collapsing the pairing between \(\mathcal{K}(\mathsf{H})\) and \(\mathrm{B}(\mathsf{H})_{*}\), we reduce the problem to the case when \(\mathsf{H}=\mathbb{C}\), when \(x=(\mathrm{id}\otimes\omega)\mathrm{W}^{\widehat{\mathbb{G}}}\) for some \(\omega\in\mathrm{L}^{1}(\mathbb{G})\), and when \(\widehat{\omega}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\). We then calculate
\[\langle\Theta^{l}(a)\big{(}(\mathrm{id}\otimes\omega)\mathrm{W}^{\widehat{ \mathbb{G}}}\big{)},\widehat{\omega}\rangle=\langle\widehat{\lambda}(\widehat {\omega}\circ\Theta^{l}(a)),\omega\rangle=\langle a\widehat{\lambda}(\widehat {\omega}),\omega\rangle=\langle a,\widehat{\lambda}(\widehat{\omega})\omega \rangle\quad(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))),\]
which shows that \(\Omega_{x,\omega}=\alpha^{l}(\widehat{\lambda}(\widehat{\omega})\omega)\in Q ^{l}(\mathrm{A}(\mathbb{G}))\), as required.
Now, take any functional in \(Q^{l}(\mathrm{A}(\mathbb{G}))\), which by Theorem 3.8 is represented by some element \(\rho\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\widehat{\otimes}\mathrm{C}_{0}( \widehat{\mathbb{G}})\) (note that the projective operator space tensor product is symmetric). By [23, Theorem 10.2.1], we can find infinite matrices
\[\alpha\in\mathrm{M}_{1,\infty\times\infty},\ \beta\in\mathsf{K}_{\infty}( \mathrm{L}^{1}(\widehat{\mathbb{G}})),\ \gamma\in\mathsf{K}_{\infty}(\mathrm{C}_{0}(\widehat{\mathbb{G}})),\ \alpha^{\prime}\in\mathrm{M}_{\infty\times\infty,1}\]
such that \(\rho=\alpha(\beta\otimes\gamma)\alpha^{\prime}\) (for the introduction to infinite matrices with entries in an operator space, see [23, Sections 10.1, 10.2]). Writing \(\alpha=[\alpha_{1,(i,j)}]_{(i,j)\in\mathbb{N}^{2}}\) etc., this means that
\[\langle\Theta^{l}(a),\rho\rangle=\sum_{i,j,k,l=1}^{\infty}\alpha_{1,(i,j)} \langle\Theta^{l}(a)(\gamma_{j,l}),\beta_{i,k}\rangle\alpha^{\prime}_{(k,l),1} \qquad(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))). \tag{3.4}\]
Let \(\mathsf{H}\) be an infinite dimensional, separable Hilbert space with orthonormal basis \(\{e_{n}\}_{n=1}^{\infty}\) and let \(e_{i,j}\,(i,j\in\mathbb{N})\) be the corresponding rank one operators. Write \(T(\mathsf{H})\) for the operator space of trace class operators, identified in a completely isometric way with \(\mathrm{B}(\mathsf{H})_{*}\). For any \(n\in\mathbb{N}\) we have \(\|[e_{j,i}]_{i,j=1}^{n}\|_{\mathrm{M}_{n}(T(\mathsf{H}))}=1\). Indeed, the matrix \([e_{j,i}]_{i,j=1}^{n}\) corresponds to the map \(E_{n}\in\mathrm{CB}(\mathrm{B}(\mathsf{H}),\mathrm{M}_{n})\) given by \(E_{n}(x)=[\mathrm{Tr}(e_{j,i}x)]_{i,j=1}^{n}\). If we denote by \(V_{n}\colon\mathbb{C}^{n}\to\mathsf{H}\) the canonical inclusion associated with the choice of basis, one easily sees that \(E_{n}(x)=V_{n}^{*}xV_{n}\,(x\in\mathrm{B}(\mathsf{H}))\) and \(\|E_{n}\|_{cb}=1\) follows. Consequently \([e_{j,i}]_{i,j=1}^{\infty}\) is a well defined matrix in \(\mathrm{M}_{\infty}(T(\mathsf{H}))\). Finally, define
\[\omega=\alpha(\beta\otimes[e_{j,i}]_{i,j=1}^{\infty})\alpha^{\prime}\in \mathrm{L}^{1}(\widehat{\mathbb{G}})\widehat{\otimes}T(\mathsf{H})=\mathrm{L}^ {1}(\widehat{\mathbb{G}})\widehat{\otimes}\,\mathrm{B}(\mathsf{H})_{*}.\]
A choice of basis gives us an isomorphism \(\mathsf{H}\cong\ell^{2}\) and consequently we can consider \(\gamma\) as an element of \(\mathrm{C}_{0}(\widehat{\mathbb{G}})\otimes\mathcal{K}(\mathsf{H})\) ([23, Equation 10.1.2]). Finally, using equation (3.4) we can show that the functional associated to \(\rho\) is of the form \(\Omega_{\gamma,\omega}\). Indeed, we have
\[\langle a,\Omega_{\gamma,\omega}\rangle=\langle(\Theta^{l}(a) \otimes\mathrm{id})\gamma,\omega\rangle =\sum_{i,j,k,l=1}^{\infty}\alpha_{1,(i,j)}\langle(\Theta^{l}(a) \otimes\mathrm{id})\gamma,\beta_{i,k}\otimes e_{l,j}\rangle\alpha^{\prime}_{(k, l),1}\] \[=\sum_{i,j,k,l=1}^{\infty}\alpha_{1,(i,j)}\langle\Theta^{l}(a)( \gamma_{j,l}),\beta_{i,k}\rangle\alpha^{\prime}_{(k,l),1}\]
for any \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\).
### Viewing multipliers as bimodule maps
In this section we provide another way of looking at CB multipliers and the associated weak\({}^{*}\)-topology which will be useful in later considerations.
Let us first introduce some terminology. As usual, let \(\mathrm{CB}^{\sigma}(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G})))\) be the space of normal CB maps on \(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\). Notice that \(\mathrm{CB}^{\sigma}(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G})))\cong\mathrm{CB}(\mathcal{K}(\mathrm{L}^{2}(\mathbb{G})),\mathrm{B}(\mathrm{L}^{2}(\mathbb{G})))\). This is an operator space which is equipped with the weak\({}^{*}\)-topology given by the predual \(\mathcal{K}(\mathrm{L}^{2}(\mathbb{G}))\widehat{\otimes}\,\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))_{*}\). Via left and right multiplication, \(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\) becomes an \(\mathrm{L}^{\infty}(\mathbb{G})^{\prime}\)-bimodule, hence we can consider normal
CB bimodule maps on \(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\). We can also look at those maps which leave \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\subseteq\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\) globally invariant. We will denote the set of CB normal \(\mathrm{L}^{\infty}(\mathbb{G})^{\prime}\)-bimodule maps on \(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\) which leave \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) globally invariant by \({}_{\mathrm{L}^{\infty}(\mathbb{G})^{\prime}}\mathrm{CB}^{\sigma,\mathrm{L}^{\infty}(\widehat{\mathbb{G}})}_{\mathrm{L}^{\infty}(\mathbb{G})^{\prime}}(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G})))\). One easily checks that this space is weak\({}^{*}\)-closed in \(\mathrm{CB}^{\sigma}(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G})))\), hence naturally inherits an operator space structure and a weak\({}^{*}\)-topology.
According to [34, Theorem 4.5] (and [18, Proposition 3.3] for the left version), for any \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) there exists a unique map \(\Phi(a)\in{}_{\mathrm{L}^{\infty}(\mathbb{G})^{\prime}}\mathrm{CB}^{\sigma,\mathrm{L}^{\infty}(\widehat{\mathbb{G}})}_{\mathrm{L}^{\infty}(\mathbb{G})^{\prime}}(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G})))\) which extends \(\Theta^{l}(a)\in\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\). This map satisfies
\[\mathbb{1}\otimes\Phi(a)(x)=\mathrm{W}^{\widehat{\mathbb{G}}}\big{(}(\Theta^{l}(a)\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}*}(\mathbb{1}\otimes x)\mathrm{W}^{\widehat{\mathbb{G}}})\big{)}\mathrm{W}^{\widehat{\mathbb{G}}*}\qquad(x\in\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))).\]
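One can check directly that this formula is compatible with \(\Phi(a)\) extending \(\Theta^{l}(a)\): using the standard implementation \(\widehat{\Delta}(y)=\mathrm{W}^{\widehat{\mathbb{G}}*}(\mathbb{1}\otimes y)\mathrm{W}^{\widehat{\mathbb{G}}}\) for \(y\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) together with the covariance relation \(\widehat{\Delta}\circ\Theta^{l}(a)=(\Theta^{l}(a)\otimes\mathrm{id})\circ\widehat{\Delta}\), one finds for \(x\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) that
\[\mathrm{W}^{\widehat{\mathbb{G}}}\big{(}(\Theta^{l}(a)\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}*}(\mathbb{1}\otimes x)\mathrm{W}^{\widehat{\mathbb{G}}})\big{)}\mathrm{W}^{\widehat{\mathbb{G}}*}=\mathrm{W}^{\widehat{\mathbb{G}}}\,\widehat{\Delta}(\Theta^{l}(a)(x))\,\mathrm{W}^{\widehat{\mathbb{G}}*}=\mathbb{1}\otimes\Theta^{l}(a)(x).\]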
Furthermore, the resulting map
\[\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\ni a\mapsto\Phi(a)\in{}_{\mathrm{L}^{\infty}(\mathbb{G})^{\prime}}\mathrm{CB}^{\sigma,\mathrm{L}^{\infty}(\widehat{\mathbb{G}})}_{\mathrm{L}^{\infty}(\mathbb{G})^{\prime}}(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))) \tag{3.5}\]
is a completely isometric isomorphism which is additionally a weak\({}^{*}\)-homeomorphism ([18, Theorem 6.2]).
When \(a\) arises from an element of the Fourier algebra, \(\Phi(a)\) takes a special form.
**Lemma 3.10**.: _For \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\) let \(a=\widehat{\lambda}(\omega)\in\mathrm{A}(\mathbb{G})\), so that \(\Theta^{l}(a)=(\omega\otimes\mathrm{id})\widehat{\Delta}\). The associated map \(\Phi(a)\) is_
\[\Phi(a)\colon\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\ni x\mapsto(\omega\otimes \mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}*}(\mathbb{1}\otimes x)\mathrm{W }^{\widehat{\mathbb{G}}})\in\mathrm{B}(\mathrm{L}^{2}(\mathbb{G})).\]
Proof.: As before, the left centraliser associated with \(a\) is simply left multiplication by \(\omega\), and so \(\Theta^{l}(a)\) has the given form. Then for \(x\in\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\), using that \((\widehat{\Delta}\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}})= \mathrm{W}^{\widehat{\mathbb{G}}}_{13}\mathrm{W}^{\widehat{\mathbb{G}}}_{23}\),
\[\mathbb{1}\otimes\Phi(a)(x) =\mathrm{W}^{\widehat{\mathbb{G}}}\big{(}((\omega\otimes\mathrm{ id})\widehat{\Delta}\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}*}( \mathbb{1}\otimes x)\mathrm{W}^{\widehat{\mathbb{G}}})\big{)}\mathrm{W}^{ \widehat{\mathbb{G}}*}\] \[=\mathrm{W}^{\widehat{\mathbb{G}}}\big{(}(\omega\otimes\mathrm{id }\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}*}_{23}\mathrm{W}^{ \widehat{\mathbb{G}}*}_{13}(\mathbb{1}\otimes\mathbb{1}\otimes x)\mathrm{W}^{ \widehat{\mathbb{G}}}_{13}\mathrm{W}^{\widehat{\mathbb{G}}}_{23})\big{)} \mathrm{W}^{\widehat{\mathbb{G}}*}\] \[=\mathrm{W}^{\widehat{\mathbb{G}}}\mathrm{W}^{\widehat{\mathbb{G}}* }(\omega\otimes\mathrm{id}\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{ G}}*}_{13}(\mathbb{1}\otimes\mathbb{1}\otimes x)\mathrm{W}^{\widehat{ \mathbb{G}}}_{13})\mathrm{W}^{\widehat{\mathbb{G}}}\mathrm{W}^{\widehat{ \mathbb{G}}*}\] \[=\mathbb{1}\otimes(\omega\otimes\mathrm{id})(\mathrm{W}^{\widehat{ \mathbb{G}}*}(\mathbb{1}\otimes x)\mathrm{W}^{\widehat{\mathbb{G}}}),\]
and so \(\Phi(a)\) has indeed the claimed form.
## 4. The approximation property
We define the approximation property for a locally compact quantum group \(\mathbb{G}\) (abbreviated AP) in a way completely analogous to the definition of AP for locally compact groups by Haagerup and Kraus in [31]. Recall that we fix the predual \(Q^{l}(\mathrm{A}(\mathbb{G}))\) of \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\), and we always refer to the corresponding weak\({}^{*}\)-topology on \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\).
**Definition 4.1**.: We say that a locally compact quantum group \(\mathbb{G}\) has the _approximation property (AP)_ if there is a net \((a_{i})_{i\in I}\) in \(\mathrm{A}(\mathbb{G})\) which converges to \(\mathbb{1}\) in the weak\({}^{*}\)-topology of \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\).
**Remark 4.2**.:
* We could call the above property "left AP" and introduce also a right variant of AP. However, in Proposition 4.14 we will show that these properties are equivalent, so that there is no need to distinguish between them.
* A variant of AP was considered by Kraus-Ruan [41] (for Kac algebras) and Crann in [14]. Their property is a priori stronger, but in Theorem 4.4 we show that this variant is in fact equivalent to Definition 4.1. This proves a conjecture by Kraus-Ruan [41, Remark 4.2].
Let us list some examples and counter-examples:
* In Section 5 we show that weak amenability implies AP; consequently all compact quantum groups and the discrete quantum groups \(\widehat{O^{+}_{F}},\widehat{U^{+}_{F}}\) have AP ([25, 22]). Furthermore, the locally compact quantum group \(\mathrm{SU}_{q}(1,1)_{ext}\) has AP, see [11].
* Permanence properties of AP with respect to quantum subgroups (Theorem 7.1), direct products (Proposition 7.21), free products (Theorem 7.7) and the Drinfeld double construction (Theorem 7.15) allow us to construct examples with and without AP. For instance, the Drinfeld double of \(\operatorname{SL}(3,\mathbb{R})\), or of any classical locally compact group without AP, see [47], gives rise to non-classical quantum groups without AP.
### Equivalent characterisations
We check first that the approximation property is preserved under taking the _commutant_ quantum group \(\mathbb{G}^{\prime}\), or the _opposite_ quantum group \(\mathbb{G}^{\mathrm{op}}\), for definitions see [46, Section 4].
**Proposition 4.3**.: _The following conditions are equivalent:_
1. \(\mathbb{G}\) _has AP,_
2. \(\mathbb{G}^{\prime}\) _has AP,_
3. \(\mathbb{G}^{\mathrm{op}}\) _has AP._
Proof.: Assume that \(\mathbb{G}\) has AP, i.e. there is a net \((\omega_{i})_{i\in I}\) in \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\) such that \(\lambda_{\widehat{\mathbb{G}}}(\omega_{i})\xrightarrow[i\in I]{}\mathbbm{1}\) weak\({}^{*}\) in \(\mathrm{M}^{l}_{cb}(\operatorname{A}(\mathbb{G}))\).
First we will prove that \(\mathbb{G}^{\prime}\) has AP. Recall that \(\widehat{\mathbb{G}^{\prime}}=\widehat{\mathbb{G}}^{\mathrm{op}}\) ([46, Proposition 4.2]). Using Proposition 3.9, choose an arbitrary functional \(\Omega_{y,\nu}\in Q^{l}(\operatorname{A}(\mathbb{G}^{\prime}))\), where \(\mathsf{H}\) is a separable Hilbert space, \(y\in\operatorname{C}_{0}(\widehat{\mathbb{G}})\otimes\mathcal{K}(\mathsf{H})\) and \(\nu\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\widehat{\otimes}\operatorname{B}(\mathsf{H})_{*}\). Pick some selfadjoint antiunitary \(\mathcal{J}\) on \(\mathsf{H}\) (for example, pick an orthonormal basis and let \(\mathcal{J}\) be coordinate-wise complex conjugation) and define \(j:\mathcal{K}(\mathsf{H})\to\mathcal{K}(\mathsf{H})\) by \(j(x)=\mathcal{J}x^{*}\mathcal{J}\), a \(\star\)-antihomomorphism with \(j^{2}=\mathrm{id}\). Then \(\widehat{R}\otimes j\) is a well-defined bounded map on the spatial tensor product \(\operatorname{C}_{0}(\widehat{\mathbb{G}})\otimes\mathcal{K}(\mathsf{H})\), and \(\widehat{R}_{*}\otimes j^{*}\) is well-defined on \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\widehat{\otimes}\operatorname{B}(\mathsf{H})_{*}\). Indeed, \(\widehat{R}\otimes j\) acts via \((\widehat{R}\otimes j)(X)=(J_{\varphi}\otimes\mathcal{J})X^{*}(J_{\varphi}\otimes\mathcal{J})\) for \(X\in\operatorname{C}_{0}(\widehat{\mathbb{G}})\otimes\mathcal{K}(\mathsf{H})\) and then we can define \(\widehat{R}_{*}\otimes j^{*}\) as the restriction of \((\widehat{R}\otimes j)^{*}\) to \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\widehat{\otimes}\operatorname{B}(\mathsf{H})_{*}\): \((\widehat{R}\otimes j)^{*}\) preserves this subspace as \((\widehat{R}\otimes j)^{*}(\omega_{\xi\otimes\eta})=\omega_{J_{\varphi}\xi\otimes\mathcal{J}\eta}\) for \(\xi\in\mathrm{L}^{2}(\mathbb{G}),\eta\in\mathsf{H}\). Since \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\widehat{\otimes}\mathcal{K}(\mathsf{H})^{*}\subseteq(\operatorname{C}_{0}(\widehat{\mathbb{G}})\otimes\mathcal{K}(\mathsf{H}))^{*}\) is closed, the claim follows. Furthermore, both these maps are isometric bijections.
Set \(x=(\widehat{R}\otimes j)(y),\omega=(\widehat{R}_{*}\otimes j^{*})(\nu)\). Then using Lemma 3.10
\[\langle\lambda_{\widehat{\mathbb{G}}}(\omega_{i}),\Omega_{x, \omega}\rangle =\langle((\omega_{i}\otimes\mathrm{id})\Delta_{\widehat{ \mathbb{G}}}\otimes\mathrm{id})(\widehat{R}\otimes j)(y),(\widehat{R}_{*} \otimes j^{*})(\nu)\rangle\] \[=\langle(\widehat{R}(\omega_{i}\otimes\mathrm{id})\Delta_{ \widehat{\mathbb{G}}}\widehat{R}\otimes\mathrm{id})(y),\nu\rangle=\langle(( \mathrm{id}\otimes\widehat{R}_{*}(\omega_{i}))\Delta_{\widehat{\mathbb{G}}} \otimes\mathrm{id})(y),\nu\rangle\] \[=\langle((\widehat{R}_{*}(\omega_{i})\otimes\mathrm{id})\Delta_{ \widehat{\mathbb{G}}^{\mathrm{op}}}\otimes\mathrm{id})(y),\nu\rangle=\Omega_{ y,\nu}(\lambda_{\widehat{\mathbb{G}}^{\mathrm{op}}}(\widehat{R}_{*}(\omega_{i})))\]
and since \(\lambda_{\widehat{\mathbb{G}}}(\omega_{i})\xrightarrow[i\in I]{}\mathbbm{1}\) weak\({}^{*}\), we conclude \(\lambda_{\widehat{\mathbb{G}}^{\mathrm{op}}}(\widehat{R}_{*}(\omega_{i})) \xrightarrow[i\in I]{}\mathbbm{1}\) weak\({}^{*}\). This shows that \(\mathbb{G}^{\prime}\) has AP.
Next we prove that \(\mathbb{G}^{\mathrm{op}}\) has AP. By [46, Proposition 4.2] we have \(\widehat{\mathbb{G}^{\mathrm{op}}}=\widehat{\mathbb{G}}^{\prime}\). Write \(R^{\sim}\) for the extension of the unitary antipode on \(\mathbb{G}\), \(\operatorname{B}(\mathrm{L}^{2}(\mathbb{G}))\ni x\mapsto J_{\widehat{\varphi}}x^{*}J_{\widehat{\varphi}}\in\operatorname{B}(\mathrm{L}^{2}(\mathbb{G}))\), so that \(\tilde{\omega}_{i}=\omega_{i}\circ R^{\sim}\in\mathrm{L}^{1}(\widehat{\mathbb{G}}^{\prime})\). We claim that the net \((\lambda_{\widehat{\mathbb{G}}^{\prime}}(\omega_{i}\circ R^{\sim}))_{i\in I}\) converges weak\({}^{*}\) to \(\mathbb{1}\) in \(\mathrm{M}^{l}_{cb}(\operatorname{A}(\mathbb{G}^{\mathrm{op}}))\). Take \(z\in\operatorname{C}_{0}(\widehat{\mathbb{G}}^{\prime})\otimes\mathcal{K}(\mathsf{H}),\theta\in\mathrm{L}^{1}(\widehat{\mathbb{G}}^{\prime})\widehat{\otimes}\operatorname{B}(\mathsf{H})_{*}\). Recall that \(\operatorname{C}_{0}(\widehat{\mathbb{G}}^{\prime})=J_{\widehat{\varphi}}\operatorname{C}_{0}(\widehat{\mathbb{G}})J_{\widehat{\varphi}}\) and
\[\Delta_{\widehat{\mathbb{G}}^{\prime}}\colon\,\mathrm{L}^{\infty}(\widehat{\mathbb{G}})^{\prime}\ni x\mapsto(J_{\widehat{\varphi}}\otimes J_{\widehat{\varphi}})\Delta_{\widehat{\mathbb{G}}}(J_{\widehat{\varphi}}xJ_{\widehat{\varphi}})(J_{\widehat{\varphi}}\otimes J_{\widehat{\varphi}})=(R^{\sim}\otimes R^{\sim})\Delta_{\widehat{\mathbb{G}}}(R^{\sim}(x))\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}})^{\prime}\,\bar{\otimes}\,\mathrm{L}^{\infty}(\widehat{\mathbb{G}})^{\prime}.\]
Using this, we obtain
\[\langle\lambda_{\widehat{\mathbb{G}}^{\prime}}(\omega_{i}\circ R^{ \sim}),\Omega_{z,\theta}\rangle=\langle(\Theta^{l}(\lambda_{\widehat{ \mathbb{G}}^{\prime}}(\omega_{i}\circ R^{\sim}))\otimes\mathrm{id})z,\theta \rangle=\langle((\omega_{i}\circ R^{\sim}\otimes\mathrm{id})\Delta_{ \widehat{\mathbb{G}}^{\prime}}\otimes\mathrm{id})z,\theta\rangle\] \[=\langle(R^{\sim}\otimes j)((\omega_{i}\otimes\mathrm{id}) \Delta_{\widehat{\mathbb{G}}}\otimes\mathrm{id})(R^{\sim}\otimes j)z, \theta\rangle=\langle\lambda_{\widehat{\mathbb{G}}}(\omega_{i}),\Omega_{(R^{\sim} \otimes j)z,\theta\circ(R^{\sim}\otimes j)}\rangle\] \[\xrightarrow[i\in I]{}\langle 1,\Omega_{(R^{\sim}\otimes j)z,\theta\circ(R^{\sim} \otimes j)}\rangle=\langle z,\theta\rangle=\langle 1,\Omega_{z,\theta}\rangle\]
and thus \(\mathbb{G}^{\mathrm{op}}\) has AP. The converse implications follow since \((\mathbb{G}^{\prime})^{\prime}=\mathbb{G}\) and \((\mathbb{G}^{\mathrm{op}})^{\mathrm{op}}=\mathbb{G}\).
The next result shows that the version of the approximation property considered in [41] and [14] is equivalent to AP as defined in Definition 4.1. Both [41, Definition 4.1] and [14, Page 1728] take condition (2) of the following theorem as their definition of AP.
**Theorem 4.4**.: _The following conditions are equivalent:_
1. \(\mathbb{G}\) _has AP,_
2. _there is a net_ \((a_{i})_{i\in I}\) _in the Fourier algebra_ \(\mathrm{A}(\mathbb{G})\)_, such that the corresponding net_ \((\Theta^{l}(a_{i}))_{i\in I}\) _converges to the identity in the stable point-weak_\({}^{*}\)_-topology of_ \(\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\)_._
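In concrete terms (this is the form in which condition (2) is verified in the proof below), the requirement in (2) is that for every separable Hilbert space \(\mathsf{H}\), every \(x\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\,\bar{\otimes}\operatorname{B}(\mathsf{H})\) and every normal functional \(\omega\) on \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\,\bar{\otimes}\operatorname{B}(\mathsf{H})\) we have
\[\langle(\Theta^{l}(a_{i})\otimes\mathrm{id})x,\omega\rangle\xrightarrow[i\in I]{}\langle x,\omega\rangle.\]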
In order to prove Theorem 4.4 we need to establish some preliminary results. Recall that \(\mathrm{L}^{\infty}(\mathbb{G})\) is a right \(\mathrm{L}^{1}(\mathbb{G})\)-module via \(x\star\omega=(\omega\otimes\mathrm{id})\Delta(x)\) for \(x\in\mathrm{L}^{\infty}(\mathbb{G}),\omega\in\mathrm{L}^{1}(\mathbb{G})\).
**Proposition 4.5**.: _Let \(a\in\mathrm{L}^{\infty}(\mathbb{G})\) and \(\omega\in\mathrm{L}^{1}(\mathbb{G})\)._
1. _If_ \(a\in\mathrm{A}(\mathbb{G})\) _then_ \(a\star\omega\in\mathrm{A}(\mathbb{G})\)_._
2. _If_ \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) _then_ \(a\star\omega\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) _with_ \(\|a\star\omega\|_{cb}\leq\|a\|_{cb}\|\omega\|\) _and_ \[\Theta^{l}(a\star\omega)(\widehat{x})=(\omega\otimes\mathrm{id})\big{(}( \mathrm{id}\otimes\Theta^{l}(a))((1\otimes\widehat{x})\mathrm{W}^{\mathbb{G}* })\mathrm{W}^{\mathbb{G}}\big{)}\qquad(\widehat{x}\in L^{\infty}(\widehat{ \mathbb{G}})).\]
Proof.: (1) Write \(a=\widehat{\lambda}(\widehat{\omega})\) for \(\widehat{\omega}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\). Then
\[a\star\omega=(\omega\otimes\mathrm{id})\Delta\big{(}(\widehat{ \omega}\otimes\mathrm{id})\mathrm{W}^{\widehat{\mathbb{G}}}\big{)}=(\widehat {\omega}\otimes\omega\otimes\mathrm{id})\big{(}\mathrm{W}^{\widehat{\mathbb{G }}}_{13}\mathrm{W}^{\widehat{\mathbb{G}}}_{12}\big{)}\] \[=(\widehat{\omega}\otimes\mathrm{id})\big{(}\mathrm{W}^{ \widehat{\mathbb{G}}}((\mathrm{id}\otimes\omega)\mathrm{W}^{\widehat{\mathbb{ G}}}\otimes\mathbb{1})\big{)}=\widehat{\lambda}\big{(}\widehat{\omega}\big{(} \cdot(\mathrm{id}\otimes\omega)\mathrm{W}^{\widehat{\mathbb{G}}}\big{)} \big{)}\in\mathrm{A}(\mathbb{G})\]
as required.
(2) As \(\mathrm{W}^{\mathbb{G}}\in\mathrm{L}^{\infty}(\mathbb{G})\bar{\otimes}\, \mathrm{L}^{\infty}(\widehat{\mathbb{G}})\), there is a well-defined linear map \(T\) on \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) given by
\[T(\widehat{x})=(\omega\otimes\mathrm{id})\big{(}(\mathrm{id}\otimes\Theta^{l }(a))((1\otimes\widehat{x})\mathrm{W}^{\mathbb{G}*})\mathrm{W}^{\mathbb{G}} \big{)}\qquad(\widehat{x}\in L^{\infty}(\widehat{\mathbb{G}})).\]
Clearly \(T\) is completely bounded with \(\|T\|_{cb}\leq\|a\|_{cb}\|\omega\|\) and weak\({}^{*}\)-continuous. We first show that \(T\) is the adjoint of a centraliser, equivalently, that \(\widehat{\Delta}T=(T\otimes\mathrm{id})\widehat{\Delta}\). If \(\widehat{x}\in L^{\infty}(\widehat{\mathbb{G}})\) then using \(\widehat{\Delta}\Theta^{l}(a)=(\Theta^{l}(a)\otimes\mathrm{id})\widehat{\Delta}\) gives
\[\widehat{\Delta}T(\widehat{x}) =(\omega\otimes\mathrm{id}\otimes\mathrm{id})\big{(}(\mathrm{id }\otimes\widehat{\Delta}\Theta^{l}(a))((1\otimes\widehat{x})\mathrm{W}^{ \mathbb{G}*})\mathrm{W}^{\mathbb{G}}_{13}\mathrm{W}^{\mathbb{G}}_{12}\big{)}\] \[=(\omega\otimes\mathrm{id}\otimes\mathrm{id})\big{(}(\mathrm{id }\otimes\Theta^{l}(a)\otimes\mathrm{id})\big{(}(1\otimes\widehat{\Delta}( \widehat{x}))\mathrm{W}^{\mathbb{G}*}_{12}\big{)}\mathrm{W}^{\mathbb{G}}_{12} \big{)}\] \[=(T\otimes\mathrm{id})\widehat{\Delta}(\widehat{x}).\]
Consequently \(T\) is the adjoint of a centraliser, and so there exists \(b\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) with \(T=\Theta^{l}(b)\). Then \(b\widehat{\lambda}(\widehat{\omega})=\widehat{\lambda}(\widehat{\omega} \circ T)\) for each \(\widehat{\omega}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\), equivalently, \((1\otimes b)\mathrm{W}^{\widehat{\mathbb{G}}}=(T\otimes\mathrm{id})(\mathrm{W} ^{\widehat{\mathbb{G}}})\). In other words, we have
\[(b\otimes 1)\mathrm{W}^{\mathbb{G}*}=(\mathrm{id}\otimes T)(\mathrm{W}^{ \mathbb{G}*})=(\mathrm{id}\otimes\omega\otimes\mathrm{id})\big{(}(\mathrm{id }\otimes\mathrm{id}\otimes\Theta^{l}(a))(\mathrm{W}^{\mathbb{G}*}_{13}\mathrm{W }^{\mathbb{G}*}_{23})\mathrm{W}^{\mathbb{G}}_{23}\big{)},\]
or equivalently
\[b\otimes 1 =(\mathrm{id}\otimes\omega\otimes\mathrm{id})\big{(}(\mathrm{id} \otimes\mathrm{id}\otimes\Theta^{l}(a))(\mathrm{W}^{\mathbb{G}*}_{13}\mathrm{W }^{\mathbb{G}*}_{23})\mathrm{W}^{\mathbb{G}}_{23}\mathrm{W}^{\mathbb{G}}_{13} \big{)}\] \[=(\omega\otimes\mathrm{id}\otimes\mathrm{id})\big{(}(\mathrm{id }\otimes\mathrm{id}\otimes\Theta^{l}(a))(\mathrm{W}^{\mathbb{G}*}_{23}\mathrm{W }^{\mathbb{G}*}_{13})\mathrm{W}^{\mathbb{G}}_{13}\mathrm{W}^{\mathbb{G}}_{23} \big{)}. \tag{4.1}\]
Now, \((1\otimes a)\mathrm{W}^{\widehat{\mathbb{G}}}=(\Theta^{l}(a)\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}})\), so \(a\otimes 1=(\mathrm{id}\otimes\Theta^{l}(a))(\mathrm{W}^{\mathbb{G}*})\mathrm{W}^{\mathbb{G}}\) and hence
\[a\star\omega\otimes 1 =(\omega\otimes\mathrm{id})\Delta(a)\otimes 1=((\omega\otimes\mathrm{id})\Delta\otimes\mathrm{id})\big{(}(\mathrm{id}\otimes\Theta^{l}(a))(\mathrm{W}^{\mathbb{G}*})\mathrm{W}^{\mathbb{G}}\big{)}\] \[=(\omega\otimes\mathrm{id}\otimes\mathrm{id})\big{(}(\mathrm{id}\otimes\mathrm{id}\otimes\Theta^{l}(a))(\mathrm{W}^{\mathbb{G}*}_{23}\mathrm{W}^{\mathbb{G}*}_{13})\mathrm{W}^{\mathbb{G}}_{13}\mathrm{W}^{\mathbb{G}}_{23}\big{)}. \tag{4.2}\]
As (4.1) and (4.2) agree, we conclude that \(b=a\star\omega\). Thus \(a\star\omega\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) with \(\Theta^{l}(a\star\omega)=T\) as required.
**Lemma 4.6**.: _For any locally compact quantum group \(\mathbb{G}\) and \(x\in\mathrm{L}^{\infty}(\mathbb{G}),a\in\mathrm{C}_{0}(\mathbb{G}),\omega\in \mathrm{L}^{1}(\mathbb{G})\) we have \(\omega\star(xa)\in\mathrm{C}_{0}(\mathbb{G})\)._
Proof.: As \(\mathrm{L}^{1}(\mathbb{G})\) is a closed \(\mathrm{C}_{0}(\mathbb{G})\)-submodule of \(\mathrm{C}_{0}(\mathbb{G})^{*}\), by [60, Lemma 2.1] we know that \(\omega=b\omega_{1}\) for some \(b\in\mathrm{C}_{0}(\mathbb{G})\) and \(\omega_{1}\in\mathrm{L}^{1}(\mathbb{G})\). Then
\[\omega\star(xa)=(\mathrm{id}\otimes\omega)\Delta(xa)=(\mathrm{id}\otimes \omega_{1})\big{(}\Delta(x)\Delta(a)(\mathbb{1}\,\otimes b)\big{)}.\]
As \(a,b\in\mathrm{C}_{0}(\mathbb{G})\) we know that \(\Delta(a)(\mathbb{1}\,\otimes b)\in\mathrm{C}_{0}(\mathbb{G})\otimes\mathrm{C }_{0}(\mathbb{G})\), the minimal \(C^{*}\)-algebraic tensor product. By continuity, it hence suffices to prove that
\[(\mathrm{id}\otimes\omega_{1})\big{(}\Delta(x)(c\otimes d)\big{)}\in\mathrm{C }_{0}(\mathbb{G})\]
for \(c,d\in\mathrm{C}_{0}(\mathbb{G})\). However, this equals \(\big{(}(\mathrm{id}\otimes d\omega_{1})\Delta(x)\big{)}c\) and by [60, Theorem 2.4] we know that \((\mathrm{id}\otimes d\omega_{1})\Delta(x)\in\mathrm{M}(\mathrm{C}_{0}( \mathbb{G}))\), and so the result follows.
Next we introduce certain functionals in \(Q^{l}(\mathrm{A}(\mathbb{G}))\) in analogy to [31, Proposition 1.3]. For a Hilbert space \(\mathsf{H}\), \(x\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\bar{\otimes}\operatorname{B}( \mathsf{H}),\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\bar{\otimes} \operatorname{B}(\mathsf{H})_{*}\) and \(f\in\mathrm{L}^{1}(\mathbb{G})\) define
\[\Omega_{x,\omega,f}\colon\,\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\ni a \mapsto\langle(\Theta^{l}(a\star f)\otimes\mathrm{id})x,\omega\rangle\in \mathbb{C}. \tag{4.3}\]
Note that \(\Omega_{x,\omega,f}\) is well-defined and bounded by Proposition 4.5.
**Proposition 4.7**.: _The linear functional \(\Omega_{x,\omega,f}\) is weak\({}^{*}\)-continuous, hence is contained in \(Q^{l}(\mathrm{A}(\mathbb{G}))\)._
Proof.: Clearly \(\Omega_{x,\omega,f}\) is bounded with \(\|\Omega_{x,\omega,f}\|\leq\|x\|\|f\|\|\omega\|\), so it suffices to prove the result when \(\omega\) is in the algebraic tensor product of \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\) with \(\operatorname{B}(\mathsf{H})_{*}\), and hence by linearity, we may suppose that \(\omega=\widehat{\omega}\otimes u\). Then
\[\Omega_{x,\omega,f}(a)=\langle(\Theta^{l}(a\star f)\otimes\mathrm{id})(x), \widehat{\omega}\otimes u\rangle=\langle\Theta^{l}(a\star f)((\mathrm{id} \otimes u)(x)),\widehat{\omega}\rangle\qquad(a\in\mathrm{M}^{l}_{cb}(\mathrm{ A}(\mathbb{G}))).\]
Thus, it suffices to show that for \(\widehat{\omega}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\) and \(\widehat{x}\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\)
\[\mu\colon\,\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\ni a\mapsto\langle \Theta^{l}(a\star f)(\widehat{x}),\widehat{\omega}\rangle\in\mathbb{C}\]
is weak\({}^{*}\)-continuous. By Proposition 4.5, given the form of \(\Theta^{l}(a\star f)\),
\[\mu(a)=\langle(\mathrm{id}\otimes\Theta^{l}(a))((\mathbb{1}\otimes\widehat{x} )\mathrm{W}^{\mathbb{G}*})\mathrm{W}^{\mathbb{G}},f\otimes\widehat{\omega} \rangle\qquad(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))).\]
As \(\mathrm{W}^{\mathbb{G}}\in\mathrm{L}^{\infty}(\mathbb{G})\bar{\otimes}\,\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\), also \(\mathrm{W}^{\mathbb{G}}(f\otimes\widehat{\omega})\in\mathrm{L}^{1}(\mathbb{G})\widehat{\otimes}\,\mathrm{L}^{1}(\widehat{\mathbb{G}})\), so again by approximation, it suffices to show that for \(f^{\prime}\in\mathrm{L}^{1}(\mathbb{G}),\widehat{\omega}^{\prime}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\) the map
\[\mu^{\prime}\colon\,\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\ni a\mapsto \langle(\mathrm{id}\otimes\Theta^{l}(a))((\mathbb{1}\,\otimes\widehat{x}) \mathrm{W}^{\mathbb{G}*}),f^{\prime}\otimes\widehat{\omega}^{\prime}\rangle= \langle\Theta^{l}(a)(\widehat{x}\widehat{y}),\widehat{\omega}^{\prime}\rangle\in \mathbb{C}\]
is weak\({}^{*}\)-continuous, where \(\widehat{y}=(f^{\prime}\otimes\mathrm{id})(\mathrm{W}^{\mathbb{G}*})\in \mathrm{C}_{0}(\widehat{\mathbb{G}})\).
By linear density of products, compare Lemma 5.3, it suffices to consider the case when \(\widehat{\omega}^{\prime}=\widehat{\omega}_{1}\star\widehat{\omega}_{2}\) for \(\widehat{\omega}_{1},\widehat{\omega}_{2}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\). As \(\Theta^{l}(a)_{*}(\widehat{\omega}_{1}\star\widehat{\omega}_{2})=\Theta^{l}(a)_ {*}(\widehat{\omega}_{1})\star\widehat{\omega}_{2}\) by the left centraliser property, we see that
\[\mu^{\prime}(a)=\langle\Theta^{l}(a)(\widehat{x}\widehat{y}),\widehat{\omega}_{1 }\star\widehat{\omega}_{2}\rangle=\langle\widehat{\omega}_{2}\star(\widehat{x} \widehat{y}),\Theta^{l}(a)_{*}(\widehat{\omega}_{1})\rangle\qquad(a\in\mathrm{M }^{l}_{cb}(\mathrm{A}(\mathbb{G}))).\]
As \(\widehat{y}\in\mathrm{C}_{0}(\widehat{\mathbb{G}})\), by Lemma 4.6 applied to \(\widehat{\mathbb{G}}\), we know that \(\widehat{\omega}_{2}\star(\widehat{x}\widehat{y})\in\mathrm{C}_{0}(\widehat{ \mathbb{G}})\). Thus \(\mu^{\prime}\in Q^{l}(\mathrm{A}(\mathbb{G}))\) by Proposition 3.9.
We are now ready to prove Theorem 4.4.
Proof of Theorem 4.4.: (2) \(\Rightarrow\) (1) follows directly from the characterisation of functionals in the predual \(Q^{l}(\mathrm{A}(\mathbb{G}))\) of \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) in Proposition 3.9.
(1) \(\Rightarrow\) (2) Assume that \((a_{i})_{i\in I}\) is a net in \(\mathrm{A}(\mathbb{G})\) which converges weak\({}^{*}\) to \(\mathbb{1}\) in \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\). Pick a state \(f\in\mathrm{L}^{1}(\mathbb{G})\), and for each \(i\in I\) set \(b_{i}=a_{i}\star f\). By Proposition 4.5 we have \(b_{i}\in\mathrm{A}(\mathbb{G})\). We now show that \((\Theta^{l}(b_{i}))_{i\in I}\) converges to the identity in the stable point-weak\({}^{*}\)-topology.
Given a separable Hilbert space \(\mathsf{H}\), \(x\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\bar{\otimes}\operatorname{B}( \mathsf{H})\), and \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\bar{\otimes}\operatorname{B}( \mathsf{H})_{*}\), using Proposition 4.7 we see that
\[\langle(\Theta^{l}(b_{i})\otimes\mathrm{id})x,\omega\rangle=\langle( \Theta^{l}(a_{i}\star f)\otimes\mathrm{id})x,\omega\rangle=\langle a_{i}, \Omega_{x,\omega,f}\rangle\] \[\xrightarrow[i\in I]{}\langle\mathbb{1},\Omega_{x,\omega,f}\rangle= \langle(\Theta^{l}(\mathbb{1}\star f)\otimes\mathrm{id})(x),\omega\rangle,\]
Since \(f\) is a state, we have \(\mathbb{1}\star f=(f\otimes\mathrm{id})\Delta(\mathbb{1})=\mathbb{1}\), hence \(\Theta^{l}(\mathbb{1}\star f)=\mathrm{id}\), and so \((\Theta^{l}(b_{i})\otimes\mathrm{id})(x)\xrightarrow[i\in I]{}x\) weak\({}^{*}\) as required.
### Further general properties
Let us start with an auxiliary technical result. Recall that for a von Neumann algebra \(\mathrm{M}\) and a linear map \(T\colon\mathrm{M}\to\mathrm{M}\) we define \(T^{\dagger}\colon\mathrm{M}\ni x\mapsto T(x^{*})^{*}\in\mathrm{M}\).
**Lemma 4.8**.: _If \(T\in\mathrm{CB}^{\sigma}(\mathrm{M})\), then \(T^{\dagger}\in\mathrm{CB}^{\sigma}(\mathrm{M})\) and \(\|T^{\dagger}\|_{cb}=\|T\|_{cb}\). If \(\mathrm{M}=\mathrm{L}^{\infty}(\mathbb{G})\) for a locally compact quantum group \(\mathbb{G}\) then \(R\circ T\circ R\in\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\mathbb{G}))\) and \(\|R\circ T\circ R\|_{cb}=\|T\|_{cb}\). Both operations \(T\mapsto T^{\dagger},T\mapsto R\circ T\circ R\) are continuous with respect to the stable point-weak\({}^{*}\)-topology._
Proof.: Let \(\mathrm{M}\subseteq\mathrm{B}(\mathsf{H})\). Using the normal version of Wittstock's Theorem (compare with the start of the proof of [32, Theorem 2.5] for example), we can find a Hilbert space \(\mathsf{K}\), bounded linear maps \(V,W\colon\mathsf{H}\to\mathsf{K}\) and a normal representation \(\pi:\mathrm{M}\to\mathrm{B}(\mathsf{K})\) such that \(T(x)=W^{*}\pi(x)V\) for \(x\in\mathrm{M}\), and \(\|T\|_{cb}=\|V\|W\|\). Consequently \(T^{\dagger}(x)=V^{*}\pi(x)W\) for \(x\in\mathrm{M}\), and it follows that \(T^{\dagger}\in\mathrm{CB}^{\sigma}(\mathrm{M})\) and \(\|T^{\dagger}\|_{cb}\leq\|T\|_{cb}\). As \((T^{\dagger})^{\dagger}=T\) we in fact have \(\|T^{\dagger}\|_{cb}=\|T\|_{cb}\). Assume now that \((T_{i})_{i\in I}\) is a net in \(\mathrm{CB}^{\sigma}(\mathrm{M})\) which converges to \(T\in\mathrm{CB}^{\sigma}(\mathrm{M})\) in the stable point-weak\({}^{*}\)-topology, and take \(x\in\mathrm{M}\mathbin{\bar{\otimes}}\mathrm{B}(\ell^{2}),\omega\in\mathrm{M}_{*} \mathbin{\bar{\otimes}}\mathrm{B}(\ell^{2})_{*}\). Then we have
\[\langle(T_{i}^{\dagger}\otimes\mathrm{id})x,\omega\rangle=\langle(T_{i}\otimes\mathrm{id})(x^{*})^{*},\omega\rangle=\overline{\langle(T_{i}\otimes\mathrm{id})(x^{*}),\overline{\omega}\rangle}\xrightarrow[i\in I]{}\overline{\langle(T\otimes\mathrm{id})(x^{*}),\overline{\omega}\rangle}=\langle(T^{\dagger}\otimes\mathrm{id})x,\omega\rangle,\]
where \(\overline{\omega}\) denotes the normal functional \(y\mapsto\overline{\omega(y^{*})}\). This calculation shows the stable point-weak\({}^{*}\)-continuity of the map \(T\mapsto T^{\dagger}\).
Now assume that \(\mathrm{M}=\mathrm{L}^{\infty}(\mathbb{G})\). If \(T\in\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\mathbb{G}))\), then clearly \(R\circ T\circ R\) is normal since the unitary antipode \(R\) is normal. Using again Wittstock's Theorem, write \(T^{\dagger}=W^{*}\pi(\cdot)V\) and choose an antiunitary \(\mathcal{J}\) on \(\mathsf{H}\) which satisfies \(\mathcal{J}^{*}=\mathcal{J}\). Then, for \(x\in\mathrm{L}^{\infty}(\mathbb{G})\),
\[R\circ T\circ R(x)=J_{\widehat{\varphi}}T(J_{\widehat{\varphi}}x^{*}J_{\widehat{\varphi}})^{*}J_{\widehat{\varphi}}=J_{\widehat{\varphi}}T^{\dagger}(J_{\widehat{\varphi}}xJ_{\widehat{\varphi}})J_{\widehat{\varphi}}=(J_{\widehat{\varphi}}W^{*}\mathcal{J})\big{(}\mathcal{J}\pi(J_{\widehat{\varphi}}xJ_{\widehat{\varphi}})\mathcal{J}\big{)}(\mathcal{J}VJ_{\widehat{\varphi}}).\]
As \(\mathrm{L}^{\infty}(\mathbb{G})\ni x\mapsto\mathcal{J}\pi(J_{\widehat{\varphi}}xJ_{\widehat{\varphi}})\mathcal{J}\in\mathrm{B}(\mathsf{H})\) is a \(\star\)-homomorphism, it follows that \(R\circ T\circ R\) is \(\mathrm{CB}\) with \(\|R\circ T\circ R\|_{cb}\leq\|T\|_{cb}\). Again, as \(R\circ(R\circ T\circ R)\circ R=T\), we have in fact \(\|R\circ T\circ R\|_{cb}=\|T\|_{cb}\).
Let \((T_{i})_{i\in I}\) be a net in \(\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\mathbb{G}))\) converging to \(T\) in the stable point-weak\({}^{*}\)-topology. Choose a self-adjoint antiunitary \(\mathcal{J}^{\prime}\) on \(\ell^{2}\) and define \(j=\mathcal{J}^{\prime}(\cdot)^{*}\mathcal{J}^{\prime}\), a normal \(\star\)-antiautomorphism of \(\mathrm{B}(\ell^{2})\). Then \(R\otimes j\) extends to a well-defined normal bounded linear map on \(\mathrm{L}^{\infty}(\mathbb{G})\mathbin{\bar{\otimes}}\mathrm{B}(\ell^{2})\). Indeed, in the proof of Proposition 4.3 we argued that \(R_{*}\otimes(j|_{\mathcal{K}(\ell^{2})})^{*}\) is a bounded linear map on \(\mathrm{L}^{1}(\mathbb{G})\mathbin{\bar{\otimes}}\mathrm{B}(\ell^{2})_{*}\), and we just need to take the dual map \(R\otimes j=(R_{*}\otimes(j|_{\mathcal{K}(\ell^{2})})^{*})^{*}\). For \(x\in\mathrm{L}^{\infty}(\mathbb{G})\mathbin{\bar{\otimes}}\mathrm{B}(\ell^{2} ),\omega\in\mathrm{L}^{1}(\mathbb{G})\mathbin{\bar{\otimes}}\mathrm{B}(\ell^{2} )_{*}\) we have
\[\langle(R\circ T_{i}\circ R\otimes\mathrm{id})x,\omega\rangle=\langle(R \circ T_{i}\circ R\otimes j^{2})x,\omega\rangle=\langle(T_{i}\otimes\mathrm{id}) \big{(}(R\otimes j)(x)\big{)},\omega\circ(R\otimes j)\rangle\] \[\xrightarrow[i\in I]{}\langle(T\otimes\mathrm{id})\big{(}(R\otimes j )(x)\big{)},\omega\circ(R\otimes j)\rangle=\langle(R\circ T\circ R\otimes \mathrm{id})x,\omega\rangle,\]
which concludes the proof.
We now show that \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) admits an interesting involution. Recall that \(S\) denotes the antipode of \(\mathbb{G}\).
**Proposition 4.9**.: _Let \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\). Then \(a^{*}\in\mathrm{Dom}(S)\) and \(S(a^{*})\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) with \(\Theta^{l}(S(a^{*}))=\Theta^{l}(a)^{\dagger}\)._
Proof.: By Lemma 4.8, we know that \(\Theta^{l}(a)^{\dagger}\in\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\). One easily checks that \(\Theta^{l}(a)^{\dagger}\) is a left \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\)-module map, hence \(\Theta^{l}(a)^{\dagger}=\Theta^{l}(b)\) for some \(b\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\). The claim follows now from [17, Theorem 5.9].
For the convenience of the reader let us also indicate a direct argument. As \(\Theta^{l}(a)^{\dagger}=\Theta^{l}(b)\) we have that
\[(\mathbb{1}\otimes b)\mathrm{W}^{\widehat{\mathbb{G}}}=(\Theta^{l}(b)\otimes \mathrm{id})\mathrm{W}^{\widehat{\mathbb{G}}}=(\Theta^{l}(a)^{\dagger}\otimes \mathrm{id})\mathrm{W}^{\widehat{\mathbb{G}}}=((\Theta^{l}(a)\otimes\mathrm{id}) \mathrm{W}^{\widehat{\mathbb{G}}*})^{*},\]
and hence
\[\mathrm{W}^{\widehat{\mathbb{G}}*}(\mathbb{1}\otimes b^{*})=(\Theta^{l}(a)\otimes\mathrm{id})\mathrm{W}^{\widehat{\mathbb{G}}*}\quad\Rightarrow\quad(\mathrm{id}\otimes\widehat{\omega})(\mathrm{W}^{\mathbb{G}})b^{*}=(\mathrm{id}\otimes\widehat{\omega}\circ\Theta^{l}(a))\mathrm{W}^{\mathbb{G}}\]
for each \(\widehat{\omega}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\). In what follows, we treat \(S\) as a densely-defined, closed operator on \(\mathrm{L}^{\infty}(\mathbb{G})\) equipped with the weak\({}^{*}\)-topology. From [68, Proposition 2.24], compare [45, Proposition 8.3], we know that for any \(\widehat{\omega}\) we get \((\mathrm{id}\otimes\widehat{\omega})\mathrm{W}^{\mathbb{G}}\in D(S)\) and \(S((\mathrm{id}\otimes\widehat{\omega})\mathrm{W}^{\mathbb{G}})=(\mathrm{id} \otimes\widehat{\omega})\mathrm{W}^{\mathbb{G}*}\). It follows that \((\mathrm{id}\otimes\widehat{\omega})(\mathrm{W}^{\mathbb{G}})b^{*}\in D(S)\) with
\[S\big{(}(\mathrm{id}\otimes\widehat{\omega})(\mathrm{W}^{\mathbb{G}})b^{*}\big{)} =(\mathrm{id}\otimes\widehat{\omega}\circ\Theta^{l}(a))\mathrm{W}^{\mathbb{G}*}=(\widehat{\omega}\circ\Theta^{l}(a)\otimes\mathrm{id})\mathrm{W}^{\widehat{\mathbb{G}}}=a\,(\widehat{\omega}\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{G}}}) \tag{4.4}\] \[=a\,(\mathrm{id}\otimes\widehat{\omega})\mathrm{W}^{\mathbb{G}*}=a\,S\big{(}(\mathrm{id}\otimes\widehat{\omega})\mathrm{W}^{\mathbb{G}}\big{)}.\]
Let \(C=\{(\mathrm{id}\otimes\widehat{\omega})\mathrm{W}^{\mathbb{G}}\,|\,\widehat {\omega}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\}\subseteq D(S)\). We shall show that \(C\) contains a net \((a_{i})_{i\in I}\) such that both \(a_{i}\xrightarrow[i\in I]{}\mathbb{1}\) and \(S(a_{i})\xrightarrow[i\in I]{}\mathbb{1}\) weak\({}^{*}\) in \(\mathrm{L}^{\infty}(\mathbb{G})\). It follows that \(a_{i}b^{*}\xrightarrow[i\in I]{}b^{*}\) and \(aS(a_{i})\xrightarrow[i\in I]{}a\) weak\({}^{*}\), and so as \(S\) is weak\({}^{*}\)-closed, it follows from (4.4) that \(b^{*}\in D(S)\) with \(S(b^{*})=a\). Hence \(a^{*}=S(b^{*})^{*}\in\mathrm{Dom}(S)\) and \(S(a^{*})=S(S(b^{*})^{*})=b\), as claimed.
We now show the claim about \(C\), using some standard "smearing" techniques, compare [42]. For \(a\in\mathrm{C}_{0}(\mathbb{G})\) and \(r>0,z\in\mathbb{C}\), define
\[a(r,z)=\frac{r}{\sqrt{\pi}}\int_{\mathbb{R}}\exp(-r^{2}(t-z)^{2})\tau_{t}(a)\, \mathrm{d}t.\]
Then \(a(r,z)\) is analytic for the one-parameter automorphism group \((\tau_{t})_{t\in\mathbb{R}}\) with \(\tau_{w}(a(r,z))=a(r,z+w)\) for \(w\in\mathbb{C}\). Similarly, for \(\widehat{\omega}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\), define
\[\widehat{\omega}(r,z)=\frac{r}{\sqrt{\pi}}\int_{\mathbb{R}}\exp(-r^{2}(t-z)^{ 2})\widehat{\omega}\circ\widehat{\tau}_{t}\,\mathrm{d}t.\]
Given \(\widehat{\omega}\), let \(a=(\mathrm{id}\otimes\widehat{\omega})(\mathrm{W}^{\mathbb{G}})\). As \((\tau_{t}\otimes\widehat{\tau}_{t})(\mathrm{W}^{\mathbb{G}})=\mathrm{W}^{ \mathbb{G}}\), it follows that \(\tau_{t}(a)=(\mathrm{id}\otimes\widehat{\omega}\circ\widehat{\tau}_{-t})( \mathrm{W}^{\mathbb{G}})\) and hence \(a(r,z)=(\mathrm{id}\otimes\widehat{\omega}(r,-z))(\mathrm{W}^{\mathbb{G}})\). Finally, as \(S=R\tau_{-i/2}\), it follows that \(S(a(r,z))=R(a(r,z-i/2))=(\mathrm{id}\otimes(\widehat{\omega}\circ\widehat{R}) (r,-z+i/2))(\mathrm{W}^{\mathbb{G}})\).
As \(C\) is norm dense in \(\mathrm{C}_{0}(\mathbb{G})\), we can find a net \((\widehat{\omega}_{i})_{i\in I}\) with \(a_{i}=(\mathrm{id}\otimes\widehat{\omega}_{i})(\mathrm{W}^{\mathbb{G}})\xrightarrow[i\in I]{}\mathbb{1}\) strictly. By [42, Proposition 2.25], the net \((a_{i}(r,z))_{i\in I}\) converges strictly to \(\mathbb{1}\,(r,z)=\mathbb{1}\), for any choice of \(r,z\). By the above discussion, \(a_{i}(r,z)\in C\). Then also \(S(a_{i}(r,z))=R(a_{i}(r,z-i/2))\xrightarrow[i\in I]{}R(\mathbb{1})=\mathbb{1}\) strictly. Cohen-Hewitt's Factorisation Theorem shows that strict convergence implies weak\({}^{*}\)-convergence in \(\mathrm{L}^{\infty}(\mathbb{G})\), hence we have constructed the required net.
**Corollary 4.10**.: Let \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\). Then \(\Theta^{l}(a)\in\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\) preserves the adjoint if and only if \(S(a^{*})=a\).
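As a small illustration of how Proposition 4.9 and Corollary 4.10 work together (and of the identity \(S(S(x)^{*})^{*}=x\) employed in the proof of Proposition 4.9), note that for any \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) the symmetrisation \(b=\tfrac{1}{2}(a+S(a^{*}))\) again belongs to \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\), and since \(a^{*}\) and \(S(a^{*})^{*}\) lie in \(\mathrm{Dom}(S)\) by two applications of Proposition 4.9, we get

\[S(b^{*})=\tfrac{1}{2}\big{(}S(a^{*})+S(S(a^{*})^{*})\big{)}=\tfrac{1}{2}\big{(}S(a^{*})+a\big{)}=b,\]

so that \(\Theta^{l}(b)\) preserves adjoints. A variant of this symmetrisation appears in the proof of Proposition 6.5 below.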
Next we check that \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) is globally invariant under the scaling and modular automorphism groups.
**Lemma 4.11**.: _Let \(\mathbb{G}\) be a locally compact quantum group and let \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\). For any \(t\in\mathbb{R}\),_
* \(\tau_{t}(a)\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) _with_ \(\Theta^{l}(\tau_{t}(a))=\hat{\tau}_{t}\circ\Theta^{l}(a)\circ\hat{\tau}_{-t}\)_._
* \(\sigma_{t}^{\varphi}(a)\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) _with_ \(\Theta^{l}(\sigma_{t}^{\varphi}(a))\colon x\mapsto\hat{\delta}^{it}\,\hat{\tau}_ {t}\circ\Theta^{l}(a)\big{(}\hat{\delta}^{-it}\hat{\tau}_{-t}(x)\big{)}\)_._
* \(\sigma_{t}^{\psi}(a)\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) _with_ \(\Theta^{l}(\sigma_{t}^{\psi}(a))\colon x\mapsto\hat{\tau}_{-t}\circ\Theta^{l}(a)\big{(}\hat{\tau}_{t}(x)\hat{\delta}^{-it}\big{)}\hat{\delta}^{it}\)_._
Proof.: Take \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\). Using \((\hat{\tau}_{t}\otimes\tau_{t})\mathrm{W}^{\widehat{\mathbb{G}}}=\mathrm{W}^{ \widehat{\mathbb{G}}}\) (see proof of [45, Proposition 6.10]) we obtain
\[\tau_{t}(a)\widehat{\lambda}(\omega)=\tau_{t}\big{(}a\,\tau_{-t}(\widehat{ \lambda}(\omega))\big{)}=\tau_{t}\big{(}a\widehat{\lambda}(\omega\circ\hat{ \tau}_{t})\big{)}=\widehat{\lambda}\big{(}\Theta^{l}(a)_{*}(\omega\circ\hat{ \tau}_{t})\circ\hat{\tau}_{-t}\big{)}\]
which implies \(\tau_{t}(a)\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) and \(\Theta^{l}(\tau_{t}(a))=\hat{\tau}_{t}\circ\Theta^{l}(a)\circ\hat{\tau}_{-t}\); notice that clearly the right-hand-side of this final expression gives a completely bounded map.
We now use the following facts. Firstly, by the definition of \(\mathrm{W}^{\mathbb{G}}\) we have \((\rho\otimes\mathrm{id})(\mathrm{W}^{\mathbb{G}*})\Lambda_{\varphi}(x)=\Lambda_{ \varphi}((\rho\otimes\mathrm{id})\Delta(x))\) for \(\rho\in\mathrm{L}^{1}(\mathbb{G}),x\in\mathfrak{R}_{\varphi}\). Secondly, \((\sigma_{t}^{\varphi}\otimes\sigma_{-t}^{\psi})\circ\Delta=\Delta\circ\tau_{t}\), see [45, Proposition 6.8]. Thus
\[(\rho\circ\sigma_{t}^{\varphi}\otimes\mathrm{id})(\mathrm{W}^{\mathbb{G}*})\Lambda_{\varphi}(x)=\Lambda_{\varphi}\big{(}(\rho\circ\sigma_{t}^{\varphi}\otimes\mathrm{id})\Delta(x)\big{)}=\Lambda_{\varphi}\big{(}(\rho\otimes\sigma_{t}^{\psi})\Delta(\tau_{t}(x))\big{)}\] \[=\nu^{\frac{t}{2}}\nabla_{\psi}^{it}\Lambda_{\varphi}\big{(}(\rho\otimes\mathrm{id})\Delta(\tau_{t}(x))\big{)}=\nu^{\frac{t}{2}}\nabla_{\psi}^{it}(\rho\otimes\mathrm{id})(\mathrm{W}^{\mathbb{G}*})\Lambda_{\varphi}(\tau_{t}(x))=\nabla_{\psi}^{it}(\rho\otimes\mathrm{id})(\mathrm{W}^{\mathbb{G}*})P^{it}\Lambda_{\varphi}(x),\]
which implies that \((\sigma_{t}^{\varphi}\otimes\mathrm{id})(\mathrm{W}^{\mathrm{G}*})=(1\otimes \nabla_{\psi}^{it})\mathrm{W}^{\mathrm{G}*}(1\otimes P^{it})\); here we have also used [68, Definition 5.1, Remark 5.2]. Next, since \(\mathrm{W}^{\widehat{\mathrm{G}}}=\chi(\mathrm{W}^{\mathrm{G}*})\) and \(\hat{\delta}^{-it}=\nabla_{\psi}^{it}P^{it}\) ([68, Theorem 5.17]), and using that \(\hat{\tau}_{-t}(y)=P^{-it}yP^{it}\) for \(y\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) see [45, Proposition 8.23], we arrive at
\[(\mathrm{id}\otimes\sigma_{t}^{\varphi})(\mathrm{W}^{\widehat{\mathrm{G}}})= (\nabla_{\psi}^{it}\otimes 1)\mathrm{W}^{\widehat{\mathrm{G}}}(P^{it} \otimes 1)=(\hat{\delta}^{-it}\otimes 1)(\hat{\tau}_{-t}\otimes\mathrm{id})( \mathrm{W}^{\widehat{\mathrm{G}}}). \tag{4.5}\]
Using this formula and \(\hat{\tau}_{t}(\hat{\delta})=\hat{\delta}\) ([68, Theorem 3.11]) we calculate
\[\sigma_{t}^{\varphi}(a)\widehat{\lambda}(\omega)=\sigma_{t}^{ \varphi}\big{(}a\sigma_{-t}^{\varphi}(\widehat{\lambda}(\omega))\big{)}= \sigma_{t}^{\varphi}\big{(}a(\omega\otimes\mathrm{id})\big{(}(\hat{\delta}^{it }\otimes 1)(\hat{\tau}_{t}\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathrm{G}}}) \big{)}\big{)}\] \[=\sigma_{t}^{\varphi}\big{(}a\widehat{\lambda}\big{(}(\omega \hat{\delta}^{it})\circ\hat{\tau}_{t}\big{)}\big{)}=\sigma_{t}^{\varphi}\big{(} \widehat{\lambda}\big{(}\Theta^{l}(a)_{*}((\omega\hat{\delta}^{it})\circ\hat{ \tau}_{t})\big{)}\big{)}\] \[=\widehat{\lambda}\big{(}\big{(}\big{(}\Theta^{l}(a)_{*}((\omega \hat{\delta}^{it})\circ\hat{\tau}_{t})\big{)}\hat{\delta}^{-it}\big{)}\circ \hat{\tau}_{-t}\big{)}.\]
The above shows that \(\sigma_{t}^{\varphi}(a)\in\mathrm{M}^{l}(\mathrm{A}(\mathbb{G}))\), and for \(x\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) we get
\[\langle\Theta^{l}(\sigma_{t}^{\varphi}(a))(x),\omega\rangle= \langle x,\Theta^{l}(\sigma_{t}^{\varphi}(a)_{*}(\omega))\rangle=\langle x, \big{(}\Theta^{l}(a)_{*}((\omega\hat{\delta}^{it})\circ\hat{\tau}_{t})\hat{ \delta}^{-it}\big{)}\circ\hat{\tau}_{-t}\rangle\] \[=\langle\hat{\delta}^{-it}\hat{\tau}_{-t}(x),\Theta^{l}(a)_{*}(( \omega\hat{\delta}^{it})\circ\hat{\tau}_{t})\rangle=\langle\hat{\delta}^{it} \hat{\tau}_{t}\circ\Theta^{l}(a)\big{(}\hat{\delta}^{-it}\hat{\tau}_{-t}(x) \big{)},\omega\rangle.\]
The final claim is shown in a similar way. We have \((\sigma_{t}^{\psi}\otimes\tau_{-t})\circ\Delta=\Delta\circ\sigma_{t}^{\psi}\) ([45, Proposition 8.23]) and hence for \(x\in\mathfrak{N}_{\varphi}\),
\[(\rho\circ\sigma_{t}^{\psi}\otimes\mathrm{id})(\mathrm{W}^{ \mathrm{G}*})\Lambda_{\varphi}(x)=\Lambda_{\varphi}((\rho\circ\sigma_{t}^{\psi }\otimes\mathrm{id})\Delta(x))=\Lambda_{\varphi}\big{(}(\rho\otimes\tau_{t}) \Delta(\sigma_{t}^{\psi}(x))\big{)}\] \[=\nu^{-\frac{t}{2}}P^{it}\Lambda_{\varphi}((\rho\otimes\mathrm{id })\Delta(\sigma_{t}^{\psi}(x)))=P^{it}(\rho\otimes\mathrm{id})(\mathrm{W}^{ \mathrm{G}*})\nabla_{\psi}^{it}\Lambda_{\varphi}(x),\]
consequently \((\sigma_{t}^{\psi}\otimes\mathrm{id})(\mathrm{W}^{\mathrm{G}*})=(1\otimes P^{ it})\mathrm{W}^{\mathrm{G}*}(1\otimes\nabla_{\psi}^{it})\) and so
\[(\mathrm{id}\otimes\sigma_{t}^{\psi})(\mathrm{W}^{\widehat{\mathrm{G}}})=(P^{ it}\otimes 1)\mathrm{W}^{\widehat{\mathrm{G}}}(\nabla_{\psi}^{it}\otimes 1)=(\hat{\tau}_{t} \otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathrm{G}}})(\hat{\delta}^{-it} \otimes 1).\]
Calculating as before,
\[\sigma_{t}^{\psi}(a)\widehat{\lambda}(\omega)=\sigma_{t}^{\psi} \big{(}a(\omega\otimes\mathrm{id})\big{(}(\hat{\tau}_{-t}\otimes\mathrm{id})( \mathrm{W}^{\widehat{\mathrm{G}}})(\hat{\delta}^{it}\otimes 1)\big{)}\big{)}\] \[=\sigma_{t}^{\psi}\big{(}a\widehat{\lambda}\big{(}(\hat{\delta}^ {it}\omega)\circ\hat{\tau}_{-t}\big{)}\big{)}=\widehat{\lambda}\big{(}\big{(} \big{(}\hat{\delta}^{-it}\,\Theta^{l}(a)_{*}((\hat{\delta}^{it}\omega)\circ \hat{\tau}_{-t})\big{)}\big{)}\circ\hat{\tau}_{t}\big{)}\]
and so
\[\langle\Theta^{l}(\sigma_{t}^{\psi}(a))(x),\omega\rangle=\langle\hat{\tau}_{-t }\circ\Theta^{l}(a)\big{(}\hat{\tau}_{t}(x)\hat{\delta}^{-it}\big{)}\hat{ \delta}^{it},\omega\rangle\qquad(x\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\]
as desired.
In the next proposition we show that for any \(a\in\mathrm{M}_{cb}^{l}(\mathrm{A}(\mathbb{G}))\) the map \(\Theta^{l}(a)\) is bounded on the Hilbert space level; the final claim should be compared with Proposition 4.9.
**Proposition 4.12**.: _Let \(a\in\mathrm{M}_{cb}^{l}(\mathrm{A}(\mathbb{G}))\). Then for \(b\in\mathfrak{N}_{\widehat{\varphi}}\) we have \(\Theta^{l}(a)(b)\in\mathfrak{N}_{\widehat{\varphi}}\), and the densely defined operator_
\[\mathrm{L}^{2}(\mathbb{G})\supseteq\Lambda_{\widehat{\varphi}}(\mathfrak{N}_{ \widehat{\varphi}})\ni\Lambda_{\widehat{\varphi}}(b)\mapsto\Lambda_{\widehat{ \varphi}}(\Theta^{l}(a)(b))\in\mathrm{L}^{2}(\mathbb{G})\]
_is bounded. In fact, we have_
\[\Lambda_{\widehat{\varphi}}(\Theta^{l}(a)(b))=S^{-1}(a)\Lambda_{\widehat{ \varphi}}(b)=S(a^{*})^{*}\Lambda_{\widehat{\varphi}}(b)\qquad(b\in\mathfrak{N}_ {\widehat{\varphi}}).\]
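Let us point out why the two operators appearing on the right-hand side of the statement coincide: by Proposition 4.9 we have \(a^{*}\in\mathrm{Dom}(S)\) with \(S(a^{*})\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\), and applying the identity \(S(S(x)^{*})^{*}=x\) to \(x=a^{*}\) yields \(S(S(a^{*})^{*})=a\), that is, \(S(a^{*})^{*}=S^{-1}(a)\).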
In order to prove Proposition 4.12 we start with a general lemma; recall (2.2) for the definition of \(\mathscr{J}\).
**Lemma 4.13**.: _Let \(\omega\in\mathscr{J}\subseteq\mathrm{L}^{1}(\mathbb{G})\), let \(a\in\mathrm{L}^{\infty}(\mathbb{G})\), and let \(b\in\mathrm{Dom}(\sigma_{-i/2}^{\varphi})\subseteq\mathrm{L}^{\infty}( \mathbb{G})\). Then \(a\omega b\in\mathscr{J}\) with \(\Lambda_{\widehat{\varphi}}(\lambda(a\omega b))=aJ_{\varphi}\sigma_{-i/2}^{ \varphi}(b)^{*}J_{\varphi}\Lambda_{\widehat{\varphi}}(\lambda(\omega))\)._
Proof.: Take \(x\in\mathfrak{N}_{\varphi}\). We have
\[\langle x^{*},a\omega b\rangle=\langle(a^{*}xb^{*})^{*},\omega \rangle=\langle\Lambda_{\varphi}(a^{*}xb^{*})\,|\,\Lambda_{\widehat{ \varphi}}(\lambda(\omega))\rangle\] \[=\langle a^{*}J_{\varphi}\sigma_{i/2}^{\varphi}(b^{*})^{*}J_{ \varphi}\Lambda_{\varphi}(x)\,|\,\Lambda_{\widehat{\varphi}}(\lambda(\omega)) \rangle=\langle\Lambda_{\varphi}(x)\,|\,aJ_{\varphi}\sigma_{i/2}^{\varphi}(b^{*})J_{ \varphi}\Lambda_{\widehat{\varphi}}(\lambda(\omega))\rangle\]
which proves the claim.
Proof of Proposition 4.12.: Take \(b=(\mathrm{id}\otimes\omega)\mathrm{W}^{\widehat{\mathbb{G}}}\) for \(\omega\in\mathrm{L}^{1}(\mathbb{G})\) such that \(\overline{\omega}\in\mathrm{L}^{1}_{\sharp}(\mathbb{G})\) and \(\overline{\omega}^{\sharp}\in\mathscr{J}\); that such an \(\omega\) exists follows from Lemma 2.1, for example. Then, for any \(\widehat{\omega}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\),
\[\langle\Theta^{l}(a)(b),\widehat{\omega}\rangle=\langle(\mathrm{id}\otimes \omega)\mathrm{W}^{\widehat{\mathbb{G}}},\Theta^{l}(a)_{*}(\widehat{\omega}) \rangle=\langle\widehat{\lambda}(\Theta^{l}(a)_{*}(\widehat{\omega})),\omega\rangle\]
\[=\langle a\widehat{\lambda}(\widehat{\omega}),\omega\rangle=\langle(\widehat {\omega}\otimes\mathrm{id})\mathrm{W}^{\widehat{\mathbb{G}}},\omega a\rangle= \langle(\mathrm{id}\otimes\omega a)\mathrm{W}^{\widehat{\mathbb{G}}},\widehat {\omega}\rangle,\]
which shows that
\[\Theta^{l}(a)(b)=(\mathrm{id}\otimes\omega a)\mathrm{W}^{\widehat{\mathbb{G}}}.\]
Observe also that
\[b=(\omega\otimes\mathrm{id})(\mathrm{W}^{\mathbb{G}*})=((\overline{\omega} \otimes\mathrm{id})\mathrm{W}^{\mathbb{G}})^{*}=(\overline{\omega}^{\sharp} \otimes\mathrm{id})\mathrm{W}^{\mathbb{G}}=\lambda(\overline{\omega}^{\sharp}), \tag{4.6}\]
in particular \(b\in\mathfrak{N}_{\widehat{\varphi}}\). Now for \(y\in\mathrm{Dom}(S)\), as \(a^{*}\in D(S)\) by Proposition 4.9, we have that \(S(y)^{*}a^{*}\in\mathrm{Dom}(S)\), and hence
\[\langle S(y),\overline{\omega a}\rangle=\langle aS(y),\omega\rangle=\overline{\langle S(y)^{*}a^{*},\overline{\omega}\rangle}=\overline{\langle S(y)^{*}a^{*},\overline{\omega}^{\sharp\sharp}\rangle}\]
\[=\overline{\langle S(S(y)^{*}a^{*}),\overline{\overline{\omega}^{\sharp}} \rangle}=\langle yS(a^{*})^{*},\overline{\omega}^{\sharp}\rangle=\langle yS^{- 1}(a),\overline{\omega}^{\sharp}\rangle=\langle y,S^{-1}(a)\overline{\omega}^ {\sharp}\rangle\]
hence \(\overline{\omega a}\in\mathrm{L}^{1}_{\sharp}(\mathbb{G})\) and \(\overline{\omega a}^{\sharp}=S^{-1}(a)\overline{\omega}^{\sharp}\). Consequently
\[\Theta^{l}(a)(b)=(\mathrm{id}\otimes\omega a)\mathrm{W}^{\widehat{\mathbb{G} }}=(\overline{\omega a}^{\sharp}\otimes\mathrm{id})\mathrm{W}^{\mathbb{G}}=(S ^{-1}(a)\overline{\omega}^{\sharp}\otimes\mathrm{id})\mathrm{W}^{\mathbb{G}}= \lambda(S^{-1}(a)\overline{\omega}^{\sharp}).\]
This calculation, combined with Lemma 4.13, shows that \(\Theta^{l}(a)(b)\in\mathfrak{N}_{\widehat{\varphi}}\) with
\[\Lambda_{\widehat{\varphi}}(\Theta^{l}(a)(b))=\Lambda_{\widehat{\varphi}}( \lambda(S^{-1}(a)\overline{\omega}^{\sharp}))=S^{-1}(a)\Lambda_{\widehat{ \varphi}}(\lambda(\overline{\omega}^{\sharp}))=S^{-1}(a)\Lambda_{\widehat{ \varphi}}(b).\]
Now let \(b\in\mathfrak{N}_{\widehat{\varphi}}\) be arbitrary. By Lemma 2.1 the space
\[\{(\mathrm{id}\otimes\omega)\mathrm{W}^{\widehat{\mathbb{G}}}\,|\, \omega\in\mathrm{L}^{1}(\mathbb{G})\colon\overline{\omega}\in\mathrm{L}^{1}_{ \sharp}(\mathbb{G}),\,\overline{\omega}^{\sharp}\in\mathscr{J}\}\] \[=\{\lambda(\overline{\omega}^{\sharp})\,|\,\omega\in\mathrm{L}^{1 }(\mathbb{G})\colon\overline{\omega}\in\mathrm{L}^{1}_{\sharp}(\mathbb{G}),\, \overline{\omega}^{\sharp}\in\mathscr{J}\}\]
is a \(\sigma\text{-}\textsc{sot}\times\|\cdot\|\) core for \(\Lambda_{\widehat{\varphi}}\). Hence there is a net \((\omega_{i})_{i\in I}\) of suitable functionals with \(b=\sigma\text{-}\textsc{sot}-\lim_{i\in I}\lambda(\omega_{i})\) and \(\Lambda_{\widehat{\varphi}}(b)=\lim_{i\in I}\Lambda_{\widehat{\varphi}}( \lambda(\omega_{i}))\). By \(\sigma\text{-}\textsc{sot}\) -continuity of \(\Theta^{l}(a)\) and the previous reasoning we obtain
\[\Theta^{l}(a)(\lambda(\omega_{i}))\xrightarrow[i\in I]{\sigma\text{-}\textsc{sot}}\Theta^{l}(a)(b)\]
and
\[\Lambda_{\widehat{\varphi}}(\Theta^{l}(a)(\lambda(\omega_{i})))=S^{-1}(a) \Lambda_{\widehat{\varphi}}(\lambda(\omega_{i}))\xrightarrow[i\in I]{}S^{-1}(a )\Lambda_{\widehat{\varphi}}(b).\]
Thus, since \(\Lambda_{\widehat{\varphi}}\) is \(\sigma\text{-}\textsc{sot}\times\|\cdot\|\) closed,
\[\Theta^{l}(a)(b)\in\mathrm{Dom}(\Lambda_{\widehat{\varphi}})=\mathfrak{N}_{ \widehat{\varphi}}\quad\text{and}\quad\Lambda_{\widehat{\varphi}}(\Theta^{l}( a)(b))=S^{-1}(a)\Lambda_{\widehat{\varphi}}(b)\]
as claimed.
### Left versus right
We have defined AP of a locally compact quantum group \(\mathbb{G}\) in Definition 4.1 using left CB multipliers. Let us verify that we would have obtained the same notion using right CB multipliers.
**Proposition 4.14**.: _The following conditions are equivalent:_
1. \(\mathbb{G}\) _has AP, i.e. there is a net in_ \(\mathrm{A}(\mathbb{G})\) _which converges to_ \(\mathbb{1}\) _in the weak_\({}^{*}\)_-topology of_ \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\)_,_
2. _there is a net in_ \(\widehat{\rho}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) _which converges to_ \(\mathbb{1}\) _in the weak_\({}^{*}\)_-topology of_ \(\mathrm{M}^{r}_{cb}(\widehat{\rho}(\mathrm{L}^{1}(\widehat{\mathbb{G}})))\)_._
Proof.: Assume that \((a_{i})_{i\in I}\) is a net in \(\mathrm{A}(\mathbb{G})\) which converges to \(\mathbb{1}\) in the weak\({}^{*}\)-topology of \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\). Let \(\widehat{R}^{\sim}\colon\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\to\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\) be the extension of the unitary antipode of \(\widehat{\mathbb{G}}\) given by \(\widehat{R}^{\sim}(x)=J_{\varphi}x^{*}J_{\varphi}\) \((x\in\mathrm{B}(\mathrm{L}^{2}(\mathbb{G})))\). For each \(i\) there is \(\omega_{i}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\) with \(a_{i}=\widehat{\lambda}(\omega_{i})\), and so by Lemma 3.1, if we define \(b^{\prime}_{i}=\widehat{R}^{\sim}(a_{i})\) then \(b^{\prime}_{i}=\widehat{\rho}(\omega_{i}\circ\widehat{R})\in\widehat{\rho}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\subseteq\mathrm{M}^{r}_{cb}(\widehat{\rho}(\mathrm{L}^{1}(\widehat{\mathbb{G}})))\) with \(\Theta^{r}(b^{\prime}_{i})=\widehat{R}\circ\Theta^{l}(a_{i})\circ\widehat{R}\in\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\).
Let us now argue that \(b^{\prime}_{i}\xrightarrow[i\in I]{w^{*}}1\): choose a functional \(\theta\in Q^{r}(\widehat{\rho}(\mathrm{L}^{1}(\widehat{\mathbb{G}})))\). We have to show that \(\theta\circ\widehat{R}^{\sim}\in Q^{l}(\mathrm{A}(\mathbb{G}))\) (this makes sense by Lemma 3.1). We can write \(\theta=\lim_{j\in J}\alpha^{r}(\nu_{j})\) for some \(\nu_{j}\in\mathrm{L}^{1}(\mathbb{G}^{\prime})\). Then, as \(\widehat{R}^{\sim}\colon\mathrm{M}^{l}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G }}))\to\mathrm{M}^{r}_{cb}(\mathrm{L}^{1}(\widehat{\mathbb{G}}))\) is an isometry (Lemma 4.8) we obtain
\[\|\theta\circ\widehat{R}^{\sim}-\alpha^{l}(\nu_{j}\circ\widehat{R}^{\sim})\|=\sup_{a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))_{1}}|\langle\theta\circ\widehat{R}^{\sim}-\alpha^{l}(\nu_{j}\circ\widehat{R}^{\sim}),a\rangle|=\sup_{a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))_{1}}|\langle\theta-\alpha^{r}(\nu_{j}),\widehat{R}^{\sim}(a)\rangle|\leq\|\theta-\alpha^{r}(\nu_{j})\|\xrightarrow[j\in J]{}0,\]
so \(\theta\circ\widehat{R}^{\sim}=\lim_{j\in J}\alpha^{l}(\nu_{j}\circ\widehat{R} ^{\sim})\in Q^{l}(\mathrm{A}(\mathbb{G}))\). Now we can calculate
\[\langle 1-b^{\prime}_{i},\theta\rangle=\langle 1-\widehat{R}^{\sim}(a_{i}), \theta\rangle=\langle 1-a_{i},\theta\circ\widehat{R}^{\sim}\rangle \xrightarrow[i\in I]{}0,\]
which shows that \((b^{\prime}_{i})_{i\in I}\) is a net giving us condition (2). The converse implication is analogous.
## 5. Relation to other approximation properties
In this section we discuss the relation between AP and other approximation properties for locally compact quantum groups which have been studied in the literature. Recall the following definitions, compare [7, Theorem 3.1], [9, Section 5.2].
**Definition 5.1**.: Let \(\mathbb{G}\) be a locally compact quantum group.
* \(\widehat{\mathbb{G}}\) is _coamenable_ if \(\mathrm{A}(\mathbb{G})\) has a bounded approximate identity,
* \(\mathbb{G}\) is _weakly amenable_ if \(\mathrm{A}(\mathbb{G})\) has a left approximate identity \((e_{i})_{i\in I}\) which is bounded in \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\). In this case, the smallest \(M\) such that we can choose \(\|e_{i}\|_{cb}\leq M\) for each \(i\) is the _Cowling-Haagerup constant_ of \(\mathbb{G}\), denoted \(\Lambda_{cb}(\mathbb{G})\).
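For orientation: when \(\mathbb{G}=G\) is a classical locally compact group, \(\mathrm{A}(\mathbb{G})\) is the Fourier algebra \(\mathrm{A}(G)\), so coamenability of \(\widehat{\mathbb{G}}\) amounts, by Leptin's theorem, to amenability of \(G\), while weak amenability of \(\mathbb{G}\) is the classical notion studied by Cowling and Haagerup, with the same constant.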
**Remark 5.2**.: If \((e_{i})_{i\in I}\) is a left approximate identity for \(\mathrm{A}(\mathbb{G})\) then \((R(e_{i}))_{i\in I}\) is a right approximate identity, where \(R\) is the unitary antipode on \(\mathrm{L}^{\infty}(\mathbb{G})\). Indeed, let \(\widehat{R}\) be the unitary antipode on \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\). As \(R\widehat{\lambda}=\widehat{\lambda}\widehat{R}_{*}\), each \(R(e_{i})\) is a member of \(\mathrm{A}(\mathbb{G})\), and as \(\widehat{R}_{*}\) is anti-multiplicative, it follows that \((R(e_{i}))_{i\in I}\) is indeed a right approximate identity. Thus it does not matter if we work in \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) or in \(\mathrm{M}^{r}_{cb}(\mathrm{A}(\mathbb{G}))\), see also Lemma 4.8.
We shall show that if \(\mathbb{G}\) has the AP, and the approximating net can be chosen in an appropriately bounded way, then \(\mathbb{G}\) will enjoy one of the stronger properties in Definition 5.1.
Let us first record some general results. For a proof of the following fact see for example [13, Section 3].
**Lemma 5.3**.: _For any locally compact quantum group \(\mathbb{G}\), the linear span of \(\{ab\,|\,a,b\in\mathrm{A}(\mathbb{G})\}\) is dense in \(\mathrm{A}(\mathbb{G})\)._
Next we recall a standard result in Banach algebra theory which follows from the Hahn-Banach Theorem and the fact that convex sets have the same norm and weak closures; see for example [52, Theorem 5.1.2(e)].
**Proposition 5.4**.: _Let \(A\) be a Banach algebra which has a weak bounded left approximate identity, meaning that there is a bounded net \((e_{i})_{i\in I}\) in \(A\) such that \(\mu(e_{i}a-a)\xrightarrow[i\in I]{}0\) for each \(a\in A,\mu\in A^{*}\). Then \(A\) has a bounded left approximate identity (of the same bound)._
Proposition 5.4 does not say that a weak bounded left approximate identity (blai) is itself a blai, but rather that the existence of a weak blai implies the existence of a possibly different net which forms a blai.
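In outline, the argument behind Proposition 5.4 runs as follows: given \(a_{1},\ldots,a_{n}\in A\) and \(\epsilon>0\), the tuple \((0,\ldots,0)\) lies in the weak closure of the convex set \(\{(ea_{k}-a_{k})_{k=1}^{n}\,|\,e\in\operatorname{conv}\{e_{i}\,|\,i\in I\}\}\subseteq A^{n}\), hence also in its norm closure; thus some convex combination \(e\) of the \(e_{i}\), which obeys the same norm bound, satisfies \(\|ea_{k}-a_{k}\|<\epsilon\) for all \(k\). Indexing over pairs (finite subset of \(A\), \(\epsilon\)) then produces the desired bounded left approximate identity.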
Let us denote by \(\mathrm{A}^{l}_{cb}(\mathbb{G})\) the closure of \(\mathrm{A}(\mathbb{G})\) inside \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) and use Proposition 5.4 to obtain the following result, compare [24, Proposition 1].
**Lemma 5.5**.: _The inclusion map \(\mathrm{A}(\mathbb{G})\to\mathrm{A}^{l}_{cb}(\mathbb{G})\) is an injective contraction. The locally compact quantum group \(\mathbb{G}\) is weakly amenable with Cowling-Haagerup constant at most \(K\) if and only if \(\mathrm{A}^{l}_{cb}(\mathbb{G})\) has a bounded left approximate identity of bound at most \(K\)._
Proof.: Let \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\) be an element of norm one. Then the map
\[\mathrm{L}^{1}(\widehat{\mathbb{G}})\to\mathrm{L}^{1}(\widehat{\mathbb{G}}) \widehat{\otimes}\,\mathrm{L}^{1}(\widehat{\mathbb{G}})\colon\nu\mapsto\omega \otimes\nu\]
is a complete isometry, and so applying \(\widehat{\Delta}_{*}\) shows that \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\to\mathrm{L}^{1}(\widehat{\mathbb{G}})\colon\nu \mapsto\omega\star\nu\) is a complete contraction. Thus \(\|\widehat{\lambda}(\omega)\|_{cb}\leq\|\widehat{\lambda}(\omega)\|\), here and below writing \(\|\cdot\|_{cb}\) for the norm on \(\mathrm{A}^{l}_{cb}(\mathbb{G})\) and \(\|\cdot\|\) for the norm on \(\mathrm{A}(\mathbb{G})\). The diagram (3.1) shows in particular that \(\mathrm{A}(\mathbb{G})\to\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) is injective, and hence \(\mathrm{A}(\mathbb{G})\to\mathrm{A}^{l}_{cb}(\mathbb{G})\) is injective.
Now suppose that \(\mathbb{G}\) is weakly amenable and that \((e_{i})_{i\in I}\) is a left approximate identity for \(\mathrm{A}(\mathbb{G})\) with \(\|e_{i}\|_{cb}\leq K\) for each \(i\). Then \((e_{i})_{i\in I}\) is a bounded net in \(\mathrm{A}^{l}_{cb}(\mathbb{G})\). For \(x\in\mathrm{A}^{l}_{cb}(\mathbb{G})\) and \(\epsilon>0\) there is \(a\in\mathrm{A}(\mathbb{G})\) with \(\|x-a\|_{cb}<\epsilon\), and there is \(i_{0}\) so that \(\|e_{i}a-a\|<\epsilon\) when \(i\geq i_{0}\). Thus
\[\|e_{i}x-x\|_{cb}\leq\|e_{i}x-e_{i}a\|_{cb}+\|e_{i}a-a\|_{cb}+\|a-x\|_{cb}<K \epsilon+\epsilon+\epsilon\qquad(i\geq i_{0}).\]
It follows that \(e_{i}x\xrightarrow[i\in I]{}x\) in \(\mathrm{A}^{l}_{cb}(\mathbb{G})\). Consequently \((e_{i})_{i\in I}\) is a blai for \(\mathrm{A}^{l}_{cb}(\mathbb{G})\) of bound \(\leq K\).
Conversely, suppose that \(\mathrm{A}^{l}_{cb}(\mathbb{G})\) has a bounded left approximate identity of bound \(K\), say \((f_{i})_{i\in I}\). For \((i,n)\in I\times\mathbb{N}\) pick \(e_{i,n}\in\mathrm{A}(\mathbb{G})\) with \(\|e_{i,n}-f_{i}\|_{cb}<\frac{1}{n}\). For \(x\in\mathrm{A}^{l}_{cb}(\mathbb{G})\) and \(\epsilon>0\) there is \(i_{0}\) so that \(\|f_{i}x-x\|_{cb}<\epsilon\) for \(i\geq i_{0}\). With \(n>\frac{1}{\epsilon}\),
\[\|e_{i,n}x-x\|_{cb}\leq\|e_{i,n}x-f_{i}x\|_{cb}+\|f_{i}x-x\|_{cb}<\varepsilon \|x\|_{cb}+\varepsilon,\]
and so \(e_{i,n}x\xrightarrow[(i,n)\in I\times\mathbb{N}]{}x\). We conclude that we may assume that \((f_{i})_{i\in I}\) was actually a net in \(\mathrm{A}(\mathbb{G})\) and \(\|f_{i}\|_{cb}\leq K\) for each \(i\). It remains to show that \((f_{i})_{i\in I}\) is a left approximate identity for \(\mathrm{A}(\mathbb{G})\). Given \(a\in\mathrm{A}(\mathbb{G})\) and \(\epsilon>0\), by Lemma 5.3, we can find elements \(a_{k},b_{k}\in\mathrm{A}(\mathbb{G})\) for \(k=1,\ldots,n\) for some \(n\) such that \(a_{0}=\sum_{k=1}^{n}a_{k}b_{k}\in\mathrm{A}(\mathbb{G})\) is within \(\epsilon\) distance of \(a\). Then
\[\|f_{i}a-a\| \leq\|f_{i}a-f_{i}a_{0}\|+\|f_{i}a_{0}-a_{0}\|+\|a_{0}-a\|\] \[\leq K\|a-a_{0}\|+\sum_{k=1}^{n}\|f_{i}a_{k}b_{k}-a_{k}b_{k}\|+\|a _{0}-a\|\] \[\leq(K+1)\epsilon+\sum_{k=1}^{n}\|f_{i}a_{k}-a_{k}\|_{cb}\|b_{k}\|.\]
Here we used that for \(x\in\mathrm{A}^{l}_{cb}(\mathbb{G})\) and \(y\in\mathrm{A}(\mathbb{G})\) we have \(\|xy\|\leq\|x\|_{cb}\|y\|\). As \((f_{i})_{i\in I}\) is a left approximate identity in \(\mathrm{A}^{l}_{cb}(\mathbb{G})\), if \(i\) is sufficiently large then \(\|f_{i}a_{k}-a_{k}\|_{cb}\|b_{k}\|<\frac{\epsilon}{n}\) for each \(k\), and so \(\|f_{i}a-a\|\leq(K+2)\epsilon\). Hence \((f_{i})_{i\in I}\) is a left approximate identity for \(\mathrm{A}(\mathbb{G})\).
The following is an improvement on [31, Theorem 1.13] in the classical situation; Haagerup and Kraus only consider the case where each element of the approximating net comes from a state.
**Proposition 5.6**.: _Assume that \(\mathbb{G}\) is a locally compact quantum group with the approximation property. If we can choose the approximating net \((e_{i})_{i\in I}\) in \(\mathrm{A}(\mathbb{G})\) to be bounded then \(\widehat{\mathbb{G}}\) is coamenable._
Proof.: By definition, \(e_{i}\xrightarrow[i\in I]{}\mathbbm{1}\) weak\({}^{*}\) in \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\). By Proposition 4.7, we consider elements of the form \(\Omega_{\widehat{x},\widehat{\omega},f}\in Q^{l}(\mathrm{A}(\mathbb{G}))\). Here we will just consider \(\widehat{x}\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) and \(\widehat{\omega}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\), with \(f\in\mathrm{L}^{1}(\mathbb{G})\) a state. According to equation (4.3), we get
\[\lim_{i\in I}\langle\Theta^{l}(e_{i}\star f)(\widehat{x}),\widehat{\omega} \rangle=\lim_{i\in I}\langle e_{i},\Omega_{\widehat{x},\widehat{\omega},f} \rangle=\langle 1,\Omega_{\widehat{x},\widehat{\omega},f}\rangle=\langle\Theta^{l}( \mathbbm{1}\star f)(\widehat{x}),\widehat{\omega}\rangle=\langle\widehat{x}, \widehat{\omega}\rangle,\]
using \(\mathbbm{1}\star f=\mathbbm{1}\) and \(\Theta^{l}(\mathbbm{1})=\mathrm{id}\) in the last step. For each \(i\in I\) let \(e_{i}\) be associated to \(\widehat{\omega}_{i}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\), so that \(e_{i}=\widehat{\lambda}(\widehat{\omega}_{i})\). Then
\[e_{i}\star f =(f\otimes\mathrm{id})\Delta(e_{i})=(f\otimes\mathrm{id})\Delta(( \mathrm{id}\otimes\widehat{\omega}_{i})(\mathrm{W}^{\mathbb{G}*}))=(f\otimes \mathrm{id}\otimes\widehat{\omega}_{i})(\mathrm{W}^{\mathbb{G}*}_{23}\mathrm{W }^{\mathbb{G}*}_{13})\] \[=(\mathrm{id}\otimes\widehat{\omega}_{i})\big{(}\mathrm{W}^{\mathbb{ G}*}(\mathbbm{1}\otimes(f\otimes\mathrm{id})(\mathrm{W}^{\mathbb{G}*}))\big{)}=\widehat{ \lambda}(\widehat{\omega}^{\prime}_{i}),\]
where \(\widehat{\omega}^{\prime}_{i}=(f\otimes\mathrm{id})(\mathrm{W}^{\mathbb{G}*}) \widehat{\omega}_{i}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\). Notice that \(\|\widehat{\omega}^{\prime}_{i}\|\leq\|f\|\|\widehat{\omega}_{i}\|=\|\widehat{ \omega}_{i}\|\).
As computed in Lemma 3.10, it follows that \(\Theta^{l}(e_{i}\star f)(\widehat{x})=(\widehat{\omega}^{\prime}_{i}\otimes \operatorname{id})\widehat{\Delta}(\widehat{x})\), so we see that
\[\lim_{i\in I}\langle\widehat{x},\widehat{\omega}^{\prime}_{i}\star\widehat{ \omega}\rangle=\lim_{i\in I}\langle(\widehat{\omega}^{\prime}_{i}\otimes \operatorname{id})\widehat{\Delta}(\widehat{x}),\widehat{\omega}\rangle= \langle\widehat{x},\widehat{\omega}\rangle\qquad(\widehat{x}\in\operatorname {L}^{\infty}(\widehat{\mathbb{G}}),\widehat{\omega}\in\operatorname{L}^{1}( \widehat{\mathbb{G}})).\]
Thus \((\widehat{\omega}^{\prime}_{i})_{i\in I}\) is a weak bounded left approximate identity for \(\operatorname{L}^{1}(\widehat{\mathbb{G}})\). By Proposition 5.4, we obtain that \(\operatorname{L}^{1}(\widehat{\mathbb{G}})\) has a bounded left approximate identity. This already implies that \(\widehat{\mathbb{G}}\) is coamenable, see [7, Theorem 3.1].
The following is [31, Theorem 1.12] in the classical situation.
**Proposition 5.7**.: _Let \(\mathbb{G}\) be a locally compact quantum group with the approximation property, and assume that we can choose the approximating net \((e_{i})_{i\in I}\) in \(\operatorname{A}(\mathbb{G})\) to be bounded with respect to \(\|\cdot\|_{cb}\). Then \(\mathbb{G}\) is weakly amenable, with Cowling-Haagerup constant at most the bound of \((e_{i})_{i\in I}\)._
Proof.: We proceed as in the previous proof, starting with a net \((e_{i})_{i\in I}\) in \(\operatorname{A}(\mathbb{G})\), but now only with \(\|e_{i}\|_{cb}\leq K\) for each \(i\). We set \(e_{i}=\widehat{\lambda}(\widehat{\omega}_{i})\) and \(\widehat{\omega}^{\prime}_{i}=(f\otimes\operatorname{id})(\operatorname{W}^{ \mathbb{G}*})\widehat{\omega}_{i}\), where \(f\) is some fixed state and we are considering the natural (left) \(\operatorname{L}^{\infty}(\widehat{\mathbb{G}})\)-module structure on \(\operatorname{L}^{1}(\widehat{\mathbb{G}})\). Then \((\widehat{\omega}^{\prime}_{i})_{i\in I}\) is a weak left approximate identity for \(\operatorname{L}^{1}(\widehat{\mathbb{G}})\). Furthermore, \(\|\widehat{\lambda}(\widehat{\omega}^{\prime}_{i})\|_{cb}=\|e_{i}\star f\|_ {cb}\leq\|e_{i}\|_{cb}\|f\|\leq K\) by Proposition 4.5.
For each \(i\) let \(f_{i}=\widehat{\lambda}(\widehat{\omega}^{\prime}_{i})\in\operatorname{A}( \mathbb{G})\). Let \(\theta\colon\operatorname{A}(\mathbb{G})\to\operatorname{A}^{l}_{cb}( \mathbb{G})\) be the inclusion map, and consider the adjoint, \(\theta^{*}\colon\operatorname{A}^{l}_{cb}(\mathbb{G})^{*}\to\operatorname{A}( \mathbb{G})^{*}\). For \(\mu\in\operatorname{A}^{l}_{cb}(\mathbb{G})^{*}\) and \(a\in\operatorname{A}(\mathbb{G})\) we see that
\[\lim_{i\in I}\langle\mu,\theta(f_{i})\theta(a)\rangle=\lim_{i\in I}\langle \theta^{*}(\mu),f_{i}a\rangle=\langle\theta^{*}(\mu),a\rangle=\langle\mu, \theta(a)\rangle.\]
As \(\theta\) has dense range, and \((\theta(f_{i}))_{i\in I}\) is bounded in \(\operatorname{A}^{l}_{cb}(\mathbb{G})\), it follows that \((\theta(f_{i}))_{i\in I}\) is a weak bounded left approximate identity. By Proposition 5.4, \(\operatorname{A}^{l}_{cb}(\mathbb{G})\) has a bounded left approximate identity, and so Lemma 5.5 shows that \(\mathbb{G}\) is weakly amenable, with Cowling-Haagerup constant at most \(K\).
## 6. Discrete quantum groups and operator algebraic approximation properties
If the quantum group being studied is discrete, we can obtain better results. In Proposition 6.5 we will show that the net exhibiting \(\operatorname{AP}\) can be chosen to have additional properties. We also relate \(\operatorname{AP}\) to approximation properties of the associated operator algebras, in both the locally compact case, Proposition 6.10, and the discrete case, Proposition 6.12.
For the rest of this section \(\mathbb{T}\) stands for an arbitrary discrete quantum group. Then \(\widehat{\mathbb{T}}\) is a compact quantum group, and we freely use the additional theory available in the compact case. We follow [50] as well as [48, 64], being aware that we use the "left" convention for multiplicative unitaries and corepresentations.
Every irreducible unitary representation of \(\widehat{\mathbb{T}}\) is finite-dimensional, and we denote by \(\operatorname{Irr}(\widehat{\mathbb{T}})\) the collection of equivalence classes of irreducibles. We write \(\overline{\alpha}\) for the conjugate of \(\alpha\in\operatorname{Irr}(\widehat{\mathbb{T}})\). For each \(\alpha\in\operatorname{Irr}(\widehat{\mathbb{T}})\) let \(U^{\alpha}\in\operatorname{C}(\widehat{\mathbb{T}})\otimes\operatorname{B}(\mathsf{H}_{\alpha})\) be a unitary corepresentation in the class of \(\alpha\). With respect to an orthonormal basis of \(\mathsf{H}_{\alpha}\) we regard \(U^{\alpha}\) as a matrix \([U^{\alpha}_{i,j}]_{1\leq i,j\leq\dim(\alpha)}\). The _matrix coefficients_ \(U^{\alpha}_{i,j}\) span a dense Hopf \(\star\)-algebra \(\operatorname{Pol}(\widehat{\mathbb{T}})\subseteq\operatorname{C}(\widehat{\mathbb{T}})\). We denote by \(h\) the Haar state on \(\operatorname{C}(\widehat{\mathbb{T}})\) and \(\operatorname{L}^{\infty}(\widehat{\mathbb{T}})\), and let \(\Lambda_{h}\colon\operatorname{C}(\widehat{\mathbb{T}})\to\operatorname{L}^{2}(\widehat{\mathbb{T}})\) be the GNS map for \(h\). As \(\operatorname{L}^{\infty}(\widehat{\mathbb{T}})\) is in standard position on \(\operatorname{L}^{2}(\widehat{\mathbb{T}})\), the set \(\{\omega_{\Lambda_{h}(a),\Lambda_{h}(b)}\,|\,a,b\in\operatorname{Pol}(\widehat{\mathbb{T}})\}\) is dense in \(\operatorname{L}^{1}(\widehat{\mathbb{T}})\). As each member of \(\operatorname{Pol}(\widehat{\mathbb{T}})\) is analytic for the modular automorphism group of \(h\), this agrees in fact with the set \(\{\omega_{\Lambda_{h}(a),\Lambda_{h}(1)}\,|\,a\in\operatorname{Pol}(\widehat{\mathbb{T}})\}\). Notice that \(\omega_{\Lambda_{h}(a),\Lambda_{h}(1)}\) is the functional \(h(a^{*}\cdot)\).
For each \(\alpha\in\operatorname{Irr}(\widehat{\mathbb{T}})\) there is a positive invertible operator \(\rho_{\alpha}\) related to the possible non-traciality of the Haar state \(h\), see [50, Section 1.7]. We choose and fix a basis of \(\mathsf{H}_{\alpha}\) such that \(\rho_{\alpha}\) is diagonal. We define the _Woronowicz characters_ \(\{f_{z}\,|\,z\in\mathbb{C}\}\) by the relation \((f_{z}\otimes\operatorname{id})(U^{\alpha})=\rho_{\alpha}^{z}\), valid for each \(\alpha\). The modular automorphism group is then implemented as
\[\sigma_{z}^{h}(a)=f_{iz}\star a\star f_{iz}\quad(a\in\operatorname{Pol}(\widehat{ \mathbb{T}}),z\in\mathbb{C}),\]
or equivalently, \((\sigma_{z}^{h}\otimes\mathrm{id})(U^{\alpha})=(\mathbb{1}\otimes\rho_{\alpha}^{ iz})U^{\alpha}(\mathbb{1}\otimes\rho_{\alpha}^{iz})\). Similarly, the scaling group is implemented as
\[\tau_{z}(a)=f_{-iz}\star a\star f_{iz}\quad(a\in\mathrm{Pol}(\widehat{\mathbb{T }}),z\in\mathbb{C}),\]
or equivalently, \((\tau_{z}\otimes\mathrm{id})(U^{\alpha})=(\mathbb{1}\otimes\rho_{\alpha}^{iz})U ^{\alpha}(\mathbb{1}\otimes\rho_{\alpha}^{-iz})\). As we assume that \(\rho_{\alpha}\) is diagonal, say with entries \(\rho_{\alpha,i}\,(1\leq i\leq\dim(\alpha))\), we get
\[\tau_{z}(U^{\alpha}_{i,j})=(\rho_{\alpha,i})^{iz}(\rho_{\alpha,j})^{-iz}U^{ \alpha}_{i,j}. \tag{6.1}\]
The algebra \(\mathrm{c}_{0}(\mathbb{T})\) is isomorphic to the direct sum of full matrix algebras \(\mathrm{M}_{\dim(\alpha)}\) indexed by \(\alpha\in\mathrm{Irr}(\widehat{\mathbb{T}})\), and \(\ell^{\infty}(\mathbb{T})\) is isomorphic to the direct product of these matrix algebras. Given \(a\in\ell^{\infty}(\mathbb{T})\) we write \(a=(a^{\alpha})_{\alpha\in\mathrm{Irr}(\widehat{\mathbb{T}})}\) where \(a^{\alpha}\in\mathrm{M}_{\dim(\alpha)}\), and similarly for \(\mathrm{c}_{0}(\mathbb{T})\). With respect to this isomorphism,
\[\mathrm{W}^{\widehat{\mathbb{T}}}=\bigoplus_{\alpha\in\mathrm{Irr}(\widehat{ \mathbb{T}})}\sum_{i,j=1}^{\dim(\alpha)}U^{\alpha}_{i,j}\otimes e^{\alpha}_{i,j}\in\mathrm{M}\,\big{(}\mathrm{C}(\widehat{\mathbb{T}})\otimes\mathrm{c}_{0 }(\mathbb{T})\big{)}, \tag{6.2}\]
where \(\{e^{\alpha}_{i,j}\}_{i,j=1}^{\dim(\alpha)}\) are the matrix units of the matrix algebra \(\mathrm{M}_{\dim(\alpha)}\subseteq\mathrm{c}_{0}(\mathbb{T})\).
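For example, if the discrete quantum group in question is a classical discrete group \(\Gamma\), then every irreducible representation of \(\widehat{\Gamma}\) is one-dimensional and \(\operatorname{Irr}(\widehat{\Gamma})\) may be identified (up to a choice of labelling) with \(\Gamma\); writing \(U^{g}\in\mathrm{C}(\widehat{\Gamma})\) for the corresponding one-dimensional corepresentations and \(\delta_{g}=e^{g}_{1,1}\in\mathrm{c}_{0}(\Gamma)\) for the minimal projections, (6.2) reads

\[\mathrm{W}^{\widehat{\Gamma}}=\sum_{g\in\Gamma}U^{g}\otimes\delta_{g}.\]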
We start with a result expressing the action of \(\Theta^{l}(a)\in\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{T}}))\), for \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{T}))\), on matrix elements.
**Lemma 6.1**.: _For any \(a=(a^{\alpha})_{\alpha\in\mathrm{Irr}(\widehat{\mathbb{T}})}\in\mathrm{M}^{l}_ {cb}(\mathrm{A}(\mathbb{T}))\subseteq\ell^{\infty}(\mathbb{T})\) with \(a^{\alpha}=[a^{\alpha}_{i,j}]_{i,j=1}^{\dim(\alpha)}\) we have_
\[\Theta^{l}(a)(U^{\alpha}_{i,j})=\sum_{k=1}^{\dim(\alpha)}a^{\alpha}_{i,k}U^{ \alpha}_{k,j}\qquad(\alpha\in\mathrm{Irr}(\widehat{\mathbb{T}}),1\leq i,j\leq \dim(\alpha)).\]
Proof.: Let \(x\in\mathrm{Pol}(\widehat{\mathbb{T}})\) and set \(\omega=h(x\cdot)\in\mathrm{L}^{1}(\widehat{\mathbb{T}})\). Recall that \(a\widehat{\lambda}(\omega)=\widehat{\lambda}(\Theta^{l}(a)_{*}(\omega))\), equivalently, \(a(\omega\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{T}}})=(\Theta^{l}(a)_ {*}(\omega)\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{T}}})\). Using the expression for \(\mathrm{W}^{\widehat{\mathbb{T}}}\) from (6.2), it follows that
\[\sum_{\alpha\in\mathrm{Irr}(\widehat{\mathbb{T}})}\sum_{i,j,k=1}^{\dim(\alpha)}\langle U^{\alpha}_{k,j},\omega\rangle a^{\alpha}_{i,k}e^{\alpha}_{i,j} =\sum_{\alpha\in\mathrm{Irr}(\widehat{\mathbb{T}})}\sum_{k,j=1}^{\dim(\alpha)}\langle U^{\alpha}_{k,j},\omega\rangle ae^{\alpha}_{k,j}\] \[=a(\omega\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{T}}})\] \[=(\Theta^{l}(a)_{*}(\omega)\otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{T}}})=\sum_{\alpha\in\mathrm{Irr}(\widehat{\mathbb{T}})}\sum_{i,j=1}^{\dim(\alpha)}\langle\Theta^{l}(a)(U^{\alpha}_{i,j}),\omega\rangle e^{\alpha}_{i,j}.\]
By density, this holds for all \(\omega\), and so we conclude \(\Theta^{l}(a)(U^{\alpha}_{i,j})=\sum_{k=1}^{\dim(\alpha)}a^{\alpha}_{i,k}U^{ \alpha}_{k,j}\), as claimed.
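In particular, continuing the classical example of a discrete group \(\Gamma\), where all blocks are one-dimensional, Lemma 6.1 reduces to

\[\Theta^{l}(a)(U^{g})=a(g)\,U^{g}\qquad(g\in\Gamma,\ a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\Gamma))),\]

so \(\Theta^{l}(a)\) acts on the group von Neumann algebra by rescaling the unitaries \(U^{g}\), recovering the familiar picture of (completely bounded) multipliers of the Fourier algebra.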
**Remark 6.2**.: Later, see Proposition 6.5, we shall consider \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{T}))\) with \(\Theta^{l}(a)\) unit preserving. Let \(e\) denote the trivial representation of \(\widehat{\mathbb{T}}\), so \(\dim(e)=1\) and \(U^{e}=\mathbb{1}\otimes\mathbb{1}\). From Lemma 6.1, for such an \(a\), we see that \(\mathbb{1}=\Theta^{l}(a)(\mathbb{1})=a^{e}_{1,1}\mathbb{1}\) and so \(a^{e}_{1,1}=1\). Further, as the Haar state \(h\) annihilates all coefficients of all irreps except \(e\), and as \(\mathrm{Pol}(\widehat{\mathbb{T}})\) is dense in \(\mathrm{C}(\widehat{\mathbb{T}})\), it follows that \(h\circ\Theta^{l}(a)=h\).
For discrete quantum groups we will also look at a central variation of AP. We denote by \(\mathrm{c}_{00}(\mathbb{T})\subseteq\mathrm{c}_{0}(\mathbb{T})\) the dense subspace of elements \(x=(x^{\alpha})_{\alpha\in\mathrm{Irr}(\widehat{\mathbb{T}})}\) such that \(x^{\alpha}=0\) for all but finitely many \(\alpha\). From the description of \(\mathrm{W}^{\widehat{\mathbb{T}}}\) in (6.2) it is clear that we have \(\mathrm{c}_{00}(\mathbb{T})\subseteq\mathrm{A}(\mathbb{T})\). Notice that the centre of \(\ell^{\infty}(\mathbb{T})\), denoted \(\mathcal{Z}(\ell^{\infty}(\mathbb{T}))\), consists of families of matrices \(x=(x^{\alpha})_{\alpha\in\mathrm{Irr}(\widehat{\mathbb{T}})}\) such that each \(x^{\alpha}\in\mathrm{M}_{\dim(\alpha)}\) is a scalar multiple of the identity.
**Definition 6.3**.: We say that a discrete quantum group \(\mathbb{T}\) has the _central approximation property (central AP)_ if there is a net \((a_{i})_{i\in I}\) in \(\mathrm{c}_{00}(\mathbb{T})\cap\mathcal{Z}(\ell^{\infty}(\mathbb{T}))\) which converges to \(\mathbb{1}\) in the weak\({}^{*}\)-topology of \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{T}))\).
It is clear from the definitions that central AP implies AP. At first sight, it might seem more natural to use \(\mathrm{A}(\mathbb{I})\) instead of \(\mathrm{c}_{00}(\mathbb{I})\) in Definition 6.3, and indeed, this alternative definition (for other approximation properties) is taken in [9, Definition 7.1]. However, in terms of applications, and also from the point of view of representation categories, working with \(\mathrm{c}_{00}(\mathbb{I})\) is in fact the most appropriate choice. Let us point out that the examples considered in [9] actually do end up working with \(\mathrm{c}_{00}(\mathbb{I})\). We will discuss the relation of central AP to properties of the representation category \(\mathrm{Rep}(\widehat{\mathbb{I}})\) in Section 8.
**Remark 6.4**.: We shall say that \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{I}))\) is _finitely supported_ if \(a\in\mathrm{c}_{00}(\mathbb{I})\). Of course, we have \(\mathrm{c}_{00}(\mathbb{I})\subseteq\mathrm{A}(\mathbb{I})\subseteq\mathrm{M} ^{l}_{cb}(\mathrm{A}(\mathbb{I}))\). For \(a\in\mathrm{c}_{00}(\mathbb{I})\), it follows from Lemma 6.1 that \(\Theta^{l}(a)(U^{\alpha}_{i,j})=0\) for all but finitely many \(\alpha\). Hence \(\Theta^{l}(a)\) restricted to \(\mathrm{Pol}(\widehat{\mathbb{I}})\) is a finite-rank map, and so by continuity, \(\Theta^{l}(a)\) restricted to \(\mathrm{C}(\widehat{\mathbb{I}})\) is finite-rank, and hence by normality, \(\Theta^{l}(a)\) is also finite-rank.
In the next result we show that whenever a discrete quantum group has AP, then this is implemented by a net of elements with convenient properties.
**Proposition 6.5**.: _Assume that \(\mathbb{I}\) is a discrete quantum group with AP. Then there is a net \((a_{i})_{i\in I}\) of elements in \(\mathrm{c}_{00}(\mathbb{I})\) such that_
* \(a_{i}\xrightarrow[i\in I]{}\mathbb{1}\) _in_ \((\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{I})),w^{*})\)_,_
* _every_ \(a_{i}\) _is invariant under the scaling group of_ \(\mathbb{I}\) _and modular automorphism groups of the left/right Haar integrals,_
* _every_ \(\Theta^{l}(a_{i})\) _is star and unit preserving._
_If \(\mathbb{I}\) has central AP then we can additionally assume that \(a_{i}\in\mathrm{c}_{00}(\mathbb{I})\cap\mathcal{Z}(\ell^{\infty}(\mathbb{I}))\)._
For the proof of Proposition 6.5 we shall need two lemmas. For any operator space \(X\), we denote by \(\kappa\colon X^{*}\widehat{\otimes}X\to\mathbb{C}\) the canonical completely contractive map \(\omega\otimes x\mapsto\langle\omega,x\rangle\).
**Lemma 6.6**.: _Let \(\mathsf{H}\) be a Hilbert space, let \(\mathrm{M},\mathrm{N}\) be von Neumann algebras, and let_
\[v\in\big{(}\mathrm{M}\,\bar{\otimes}\,\mathrm{L}^{\infty}(\widehat{\mathbb{ I}})\,\bar{\otimes}\,\mathrm{N}\,\bar{\otimes}\,\mathrm{B}(\mathsf{H})\big{)} \widehat{\otimes}\big{(}\mathrm{M}_{*}\,\widehat{\otimes}\,\mathrm{L}^{1}( \widehat{\mathbb{I}})\widehat{\otimes}\,\mathrm{N}_{*}\,\widehat{\otimes}\, \mathrm{B}(\mathsf{H})_{*}\big{)}.\]
_The bounded linear functional_
\[\Omega_{v}\colon\,\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{I}))\ni a\mapsto \kappa\big{(}\big{(}(\mathrm{id}\otimes\Theta^{l}(a)\otimes\mathrm{id}^{ \otimes 2})\otimes\mathrm{id}^{\otimes 4})v\big{)}\in\mathbb{C}\]
_belongs to \(Q^{l}(\mathrm{A}(\mathbb{I}))\), and we have \(\|\Omega_{v}\|\leq\|v\|\)._
Proof.: Since \(Q^{l}(\mathrm{A}(\mathbb{I}))\) is closed in \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{I}))^{*}\), it is enough to consider \(v=x\otimes(\omega_{1}\otimes\omega_{2}\otimes\omega_{3}\otimes\omega_{4})\) for \(x\in\mathrm{M}\,\bar{\otimes}\,\mathrm{L}^{\infty}(\widehat{\mathbb{I}})\, \bar{\otimes}\,\mathrm{N}\,\bar{\otimes}\,\mathrm{B}(\mathsf{H}),\omega_{1}\in \mathrm{M}_{*},\omega_{2}\in\mathrm{L}^{1}(\widehat{\mathbb{I}}),\omega_{3} \in\mathrm{N}_{*},\omega_{4}\in\mathrm{B}(\mathsf{H})_{*}\). Let \(\varepsilon\in\ell^{1}(\mathbb{I})\) be the counit of \(\ell^{\infty}(\mathbb{I})\). Define \(y=(\omega_{1}\otimes\mathrm{id}\otimes\omega_{3}\otimes\mathrm{id})x\in \mathrm{L}^{\infty}(\widehat{\mathbb{I}})\,\bar{\otimes}\,\mathrm{B}(\mathsf{H})\). For \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{I}))\) we have
\[\langle\Omega_{v},a\rangle=\kappa\big{(}(\mathrm{id}\otimes \Theta^{l}(a)\otimes\mathrm{id}^{\otimes 2})x\otimes(\omega_{1}\otimes\omega_{2} \otimes\omega_{3}\otimes\omega_{4})\big{)}\] \[=\langle(\mathrm{id}\otimes\Theta^{l}(a)\otimes\mathrm{id}^{ \otimes 2})x,\omega_{1}\otimes\omega_{2}\otimes\omega_{3}\otimes\omega_{4}\rangle= \langle(\Theta^{l}(a)\otimes\mathrm{id})y,\omega_{2}\otimes\omega_{4}\rangle\] \[=\langle(\Theta^{l}(a\star\varepsilon)\otimes\mathrm{id})y,\omega_ {2}\otimes\omega_{4}\rangle=\langle a,\Omega_{y,\omega_{2}\otimes\omega_{4}, \varepsilon}\rangle\]
hence \(\Omega_{v}=\Omega_{y,\omega_{2}\otimes\omega_{4},\varepsilon}\) and the claim follows from Proposition 4.7.
In the following, recall that a _mean_ on \(\mathbb{R}\) is a state \(m_{\mathbb{R}}\) on \(\mathrm{L}^{\infty}(\mathbb{R})\) which is invariant under the translation action of \(\mathbb{R}\). Such a state exists as the group \(\mathbb{R}\) is abelian and hence amenable.
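For instance, any weak\({}^{*}\) cluster point of the states

\[\mathrm{L}^{\infty}(\mathbb{R})\ni f\mapsto\frac{1}{2T}\int_{-T}^{T}f(t)\,\mathrm{d}t\qquad(T>0)\]

as \(T\to\infty\) is such a mean: translating \(f\) by \(s\in\mathbb{R}\) changes the above average by at most \(\frac{|s|}{T}\|f\|_{\infty}\), which tends to \(0\).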
**Lemma 6.7**.: _Let \(m_{\mathbb{R}}\) be a mean on \(\mathbb{R}\), let \(\mathsf{H}\) be a Hilbert space, and let \(x\in\mathrm{C}(\widehat{\mathbb{I}})\otimes\mathcal{K}(\mathsf{H}),\rho\in \mathrm{L}^{1}(\widehat{\mathbb{I}})\widehat{\otimes}\,\mathrm{B}(\mathsf{H})_ {*}\). Then_
\[\Omega_{x,\rho}^{\tau}\colon\,\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{I}))\ni a \mapsto m_{\mathbb{R}}\big{(}t\mapsto\langle(\Theta^{l}(a)\otimes\mathrm{id})( \hat{\tau}_{-t}\otimes\mathrm{id})(x),\rho\circ(\hat{\tau}_{t}\otimes\mathrm{ id})\rangle\big{)}\in\mathbb{C}\]
_defines a bounded functional on \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{I}))\). Furthermore, \(\Omega_{x,\rho}^{\tau}\in Q^{l}(\mathrm{A}(\mathbb{I}))\) and \(\|\Omega_{x,\rho}^{\tau}\|\leq\|x\|\|\rho\|\)._
Proof.: As \(t\mapsto\rho\circ(\hat{\tau}_{t}\otimes\mathrm{id})\) is norm continuous, it follows that the function \(t\mapsto\langle(\Theta^{l}(a)\otimes\mathrm{id})(\hat{\tau}_{-t}\otimes\mathrm{ id})(x),\rho\circ(\hat{\tau}_{t}\otimes\mathrm{id})\rangle\) is continuous, and bounded, and so we can indeed apply \(m_{\mathbb{R}}\) to it. The only nontrivial claim is that \(\Omega^{\tau}_{x,\rho}\) is a normal functional.
In order to verify this take \(t\in\mathbb{R}\). Using the canonical complete contraction \(\kappa\) we can write
\[\langle(\Theta^{l}(a)\otimes\mathrm{id})(\hat{\tau}_{-t}\otimes\mathrm{id})(x ),\rho\circ(\hat{\tau}_{t}\otimes\mathrm{id})\rangle=\kappa(\Theta^{l}(a) \otimes\mathrm{id}\otimes\mathrm{id}\otimes\mathrm{id})\big{(}(\hat{\tau}_{-t }\otimes\mathrm{id})(x)\otimes\rho\circ(\hat{\tau}_{t}\otimes\mathrm{id}) \big{)}.\]
Let \(x=U^{\alpha}_{i,j}\in\mathrm{Pol}(\widehat{\Gamma}),y=U^{\beta}_{k,l}\in\mathrm{Pol}(\widehat{\Gamma})\) and set \(\rho=h(y\cdot)\in\mathrm{L}^{1}(\widehat{\Gamma})\). As \(h\) is \((\hat{\tau}_{t})_{t\in\mathbb{R}}\) invariant, it follows that \((\rho\circ\hat{\tau}_{t})(z)=h(y\,\hat{\tau}_{t}(z))=h(\hat{\tau}_{t}(\hat{\tau}_{-t}(y)z))=h(\hat{\tau}_{-t}(y)z)\) for each \(z\in\mathrm{C}(\widehat{\Gamma})\), and so \(\rho\circ\hat{\tau}_{t}=h(\hat{\tau}_{-t}(y)\cdot)\). From (6.1) we hence see that
\[\hat{\tau}_{-t}(x)\otimes\rho\circ\hat{\tau}_{t}=(\rho_{\alpha,i})^{-it}(\rho _{\alpha,j})^{it}(\rho_{\beta,k})^{-it}(\rho_{\beta,l})^{it}x\otimes\rho.\]
It follows that \(m_{\mathbb{R}}\big{(}t\mapsto\hat{\tau}_{-t}(x)\otimes\rho\circ\hat{\tau}_{t} \big{)}\in\mathrm{Pol}(\widehat{\Gamma})\odot\mathrm{L}^{1}(\widehat{\Gamma})\), the algebraic tensor product. By linearity, this holds for any \(x,y\in\mathrm{Pol}(\widehat{\Gamma})\). By linearity again, given \(x\in\mathrm{Pol}(\widehat{\Gamma})\odot\mathcal{K}(\mathsf{H})\) and \(\rho\in h(\mathrm{Pol}(\widehat{\Gamma})\cdot)\odot\mathrm{B}(\mathsf{H})_{*}\), it follows that
\[v=m_{\mathbb{R}}\big{(}t\mapsto(\hat{\tau}_{-t}\otimes\mathrm{id})(x)\otimes \rho\circ(\hat{\tau}_{t}\otimes\mathrm{id})\big{)}\in\big{(}\mathrm{L}^{ \infty}(\widehat{\Gamma})\odot\mathrm{B}(\mathsf{H})\big{)}\odot\big{(} \mathrm{L}^{1}(\widehat{\Gamma})\odot\mathrm{B}(\mathsf{H})_{*}\big{)}\]
and we have
\[\Omega^{\tau}_{x,\rho}(a)=\kappa\big{(}(\Theta^{l}(a)\otimes\mathrm{id} \otimes\mathrm{id}\otimes\mathrm{id})v\big{)}=\Omega_{v}(a).\]
Consequently we have \(\Omega^{\tau}_{x,\rho}\in Q^{l}(\mathrm{A}(\mathbb{I}))\) by Lemma 6.6. Furthermore,
\[\|\Omega^{\tau}_{x,\rho}\|\leq\|v\|\leq\|x\|\|\rho\|.\]
General elements \(x\in\mathrm{C}(\widehat{\Gamma})\otimes\mathcal{K}(\mathsf{H}),\rho\in \mathrm{L}^{1}(\widehat{\Gamma})\widehat{\otimes}\,\mathrm{B}(\mathsf{H})_{*}\) can be approximated in norm by \(x,\rho\) as above, hence \(\Omega^{\tau}_{x,\rho}\) is a normal functional, using again that \(Q^{l}(\mathrm{A}(\mathbb{I}))\subseteq\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{ I}))^{*}\) is a closed subspace.
Proof of Proposition 6.5.: By assumption, there is a net \((a_{i})_{i\in I}\) in \(\mathrm{A}(\mathbb{I})\) which converges to \(1\) in the weak\({}^{*}\)-topology of \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{I}))\). For each \(i\in I\) there is \(\omega_{i}\in\mathrm{L}^{1}(\widehat{\Gamma})\) with \(a_{i}=\widehat{\lambda}(\omega_{i})\), and there are \(\xi_{i},\eta_{i}\in\mathrm{L}^{2}(\widehat{\Gamma})\) with \(\omega_{i}=\omega_{\xi_{i},\eta_{i}}\). Given \(n\in\mathbb{N}\) we may choose \(\xi_{i,n},\eta_{i,n}\in\Lambda_{h}(\mathrm{Pol}(\widehat{\Gamma}))\) with \(\|\xi_{i}-\xi_{i,n}\|\leq\epsilon_{1}\) and \(\|\eta_{i}-\eta_{i,n}\|\leq\epsilon_{2}\), where
\[\epsilon_{1}=\tfrac{1}{1+2n\|\eta_{i}\|},\quad\epsilon_{2}=\tfrac{1}{2n( \epsilon_{1}+\|\xi_{i}\|)}.\]
Set \(a_{i,n}=\widehat{\lambda}(\omega_{\xi_{i,n},\eta_{i,n}})\), so that \(a_{i,n}\in\mathrm{c}_{00}(\mathbb{I})\), and
\[\|a_{i}-a_{i,n}\|_{\mathrm{A}(\mathbb{I})} =\|\omega_{i}-\omega_{\xi_{i,n},\eta_{i,n}}\|\leq\|\omega_{\xi_{i, \eta_{i}}}-\omega_{\xi_{i,n},\eta_{i}}\|+\|\omega_{\xi_{i,n},\eta_{i}}-\omega_{ \xi_{i,n},\eta_{i,n}}\|\] \[\leq\epsilon_{1}\|\eta_{i}\|+\epsilon_{2}\|\xi_{i,n}\|\leq \epsilon_{1}\|\eta_{i}\|+\epsilon_{2}\big{(}\epsilon_{1}+\|\xi_{i}\|\big{)} \leq\tfrac{1}{n}.\]
Equipping \(I\times\mathbb{N}\) with the product order, it follows that \(a_{i,n}\xrightarrow[(i,n)\in I\times\mathbb{N}]{}\mathbb{1}\) in \((\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{I})),w^{*})\).
Fix \((i,n)\in I\times\mathbb{N}\) and choose \(m_{\mathbb{R}}\in\mathrm{L}^{\infty}(\mathbb{R})^{*}\), a mean on \(\mathbb{R}\). Let \(b_{i,n}\) be the unique element of \(\ell^{\infty}(\mathbb{I})\) with
\[\langle b_{i,n},\omega\rangle=m_{\mathbb{R}}(t\mapsto\langle\tau_{t}(a_{i,n}), \omega\rangle)\qquad(\omega\in\ell^{1}(\mathbb{I})).\]
As each \(\tau_{t}\) leaves each matrix block \(\mathrm{M}_{\mathrm{dim}(\alpha)}\subseteq\ell^{\infty}(\mathbb{I})\) invariant, it follows that \(b_{i,n}\in\mathrm{c}_{00}(\mathbb{I})\), because \(a_{i,n}\in\mathrm{c}_{00}(\mathbb{I})\). Next, we set
\[c_{i,n}=\tfrac{1}{2}(b_{i,n}+R(b_{i,n})^{*})\in\mathrm{c}_{00}(\mathbb{I}).\]
Clearly each \(b_{i,n}\), and consequently each \(c_{i,n}\), is invariant under \((\tau_{t})_{t\in\mathbb{R}}\). Thus \(c_{i,n}\) is analytic for \((\tau_{t})_{t\in\mathbb{R}}\), and so \(c_{i,n}\in D(S^{-1})\) and \(S^{-1}(c_{i,n})=R(c_{i,n})\). Then as \(\nabla^{it}_{\varphi}=\nabla^{-it}_{\psi}=P^{it}\), see [40, Lemma 6.2], each \(c_{i,n}\) is also invariant under the modular automorphism groups of the left and right Haar integrals. Since
\[c^{*}_{i,n}=\tfrac{1}{2}(b^{*}_{i,n}+R(b_{i,n}))=R(c_{i,n})=S^{-1}(c_{i,n}),\]
the operator \(\Theta^{l}(c_{i,n})\) is star preserving by Corollary 4.10.
We now show that \((c_{i,n})_{(i,n)\in I\times\mathbb{N}}\) converges to \(\mathbb{1}\) weak\({}^{*}\) in \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{I}))\). We will show this first for \((b_{i,n})_{(i,n)\in I\times\mathbb{N}}\). Choose \(\rho\in\ell^{1}(\mathbb{I}),\omega\in\mathrm{L}^{1}(\widehat{\mathbb{I}})\) and set \(y=(\mathrm{id}\otimes\rho)\mathrm{W}^{\widehat{\mathbb{I}}}\). We have
\[\langle\Theta^{l}(b_{i,n})(y),\omega\rangle=\langle(\mathrm{id} \otimes\rho)(\mathrm{W}^{\widehat{\mathbb{I}}}),\Theta^{l}(b_{i,n})_{*}(\omega )\rangle=\langle b_{i,n}(\omega\otimes\mathrm{id})(\mathrm{W}^{\widehat{ \mathbb{I}}}),\rho\rangle\] \[=m_{\mathbb{R}}\big{(}t\mapsto\langle\tau_{t}(a_{i,n})(\omega \otimes\mathrm{id})(\mathrm{W}^{\widehat{\mathbb{I}}}),\rho\rangle\big{)}=m_{ \mathbb{R}}\big{(}t\mapsto\langle\Theta^{l}(\tau_{t}(a_{i,n}))y,\omega\rangle \big{)}. \tag{6.3}\]
By continuity, the above equation holds for each \(y\in\mathrm{C}(\widehat{\mathbb{I}})\).
Let \(\mathsf{H}\) be a separable Hilbert space, and consider \(x\in\mathrm{C}(\widehat{\mathbb{I}})\odot\mathcal{K}(\mathsf{H})\) and \(\rho\in\mathrm{L}^{1}(\widehat{\mathbb{I}})\odot\mathrm{B}(\mathsf{H})_{*}\). Then using linearity and (6.3), it follows that
\[\langle b_{i,n},\Omega_{x,\rho}\rangle=\langle(\Theta^{l}(b_{i,n})\otimes \mathrm{id})x,\rho\rangle=m_{\mathbb{R}}\big{(}t\mapsto\langle(\Theta^{l}( \tau_{t}(a_{i,n}))\otimes\mathrm{id})x,\rho\rangle\big{)}.\]
Lemmas 4.11 and 6.7 imply

\[\langle b_{i,n},\Omega_{x,\rho}\rangle=m_{\mathbb{R}}\big{(}t\mapsto\langle(\Theta^{l}(a_{i,n})\otimes\mathrm{id})(\hat{\tau}_{-t}\otimes\mathrm{id})(x),\rho\circ(\hat{\tau}_{t}\otimes\mathrm{id})\rangle\big{)}=\langle a_{i,n},\Omega^{\tau}_{x,\rho}\rangle.\]
Both sides of the above equation are continuous with respect to \(x,\rho\), hence it holds also for \(x\in\mathrm{C}(\widehat{\mathbb{I}})\otimes\mathcal{K}(\mathsf{H})\) and \(\rho\in\mathrm{L}^{1}(\widehat{\mathbb{I}})\widehat{\otimes}\mathrm{B}( \mathsf{H})_{*}\). Since \(\Omega_{x,\rho}^{\tau}\) is a normal functional, it follows that
\[\lim_{(i,n)\in I\times\mathbb{N}}\langle b_{i,n},\Omega_{x,\rho}\rangle =\lim_{(i,n)\in I\times\mathbb{N}}\langle a_{i,n},\Omega_{x,\rho} ^{\tau}\rangle=\langle 1,\Omega_{x,\rho}^{\tau}\rangle\] \[=m_{\mathbb{R}}\big{(}t\mapsto\langle(\hat{\tau}_{-t}\otimes \mathrm{id})(x),\rho\circ(\hat{\tau}_{t}\otimes\mathrm{id})\rangle\big{)}= \langle x,\rho\rangle=\langle 1,\Omega_{x,\rho}\rangle\]
and so \(b_{i,n}\xrightarrow[(i,n)\in I\times\mathbb{N}]{}\mathbb{1}\), as the functionals \(\Omega_{x,\rho}\) give all of \(Q^{l}(\mathrm{A}(\mathbb{I}))\), by Proposition 3.9. Then using Proposition 4.9,
\[\langle R(b_{i,n})^{*},\Omega_{x,\rho}\rangle=\langle S(b_{i,n}^{* }),\Omega_{x,\rho}\rangle=\langle(\Theta^{l}(b_{i,n})^{\dagger}\otimes\mathrm{ id})x,\rho\rangle\] \[=\langle(\Theta^{l}(b_{i,n})\otimes\mathrm{id})(x^{*})^{*},\rho \rangle=\overline{\langle(\Theta^{l}(b_{i,n})\otimes\mathrm{id})(x^{*}), \overline{\rho}\rangle}\]
which converges to \(\overline{\langle x^{*},\overline{\rho}\rangle}=\langle x,\rho\rangle=\langle 1,\Omega_{x,\rho}\rangle\). We can conclude that \(c_{i,n}=\frac{1}{2}(b_{i,n}+R(b_{i,n})^{*})\) also converges weak\({}^{*}\) to \(\mathbb{1}\).
We have now shown all the properties required of the net \((c_{i,n})_{(i,n)\in I\times\mathbb{N}}\) except that each \(\Theta^{l}(c_{i,n})\) is unit preserving. By Lemma 3.2, we know that there is a family of scalars \((\alpha_{i,n})_{(i,n)\in I\times\mathbb{N}}\) with \(\Theta^{l}(c_{i,n})(\mathbb{1})=\alpha_{i,n}\mathbb{1}\) for each \((i,n)\). As \(\Theta^{l}(c_{i,n})\) is star-preserving, each \(\alpha_{i,n}\in\mathbb{R}\). As \(c_{i,n}\xrightarrow[(i,n)\in I\times\mathbb{N}]{}\mathbb{1}\) weak\({}^{*}\), \(\Theta^{l}(c_{i,n})(\mathbb{1})=\alpha_{i,n}\mathbb{1}\)\(\xrightarrow[(i,n)\in I\times\mathbb{N}]{}\mathbb{1}\) weak\({}^{*}\) and so \(\alpha_{i,n}\xrightarrow[(i,n)\in I\times\mathbb{N}]{}\mathbb{1}\). We may hence replace \(c_{i,n}\) by \(\alpha_{i,n}^{-1}c_{i,n}\).
Finally, when \(\mathbb{I}\) has central AP, then we can skip the first step (as \(a_{i}\in\mathrm{c}_{00}(\mathbb{I})\cap\mathcal{Z}(\ell^{\infty}(\mathbb{I}))\) by assumption), and proceed as above to form \(b_{i}\). It follows from (6.1) that \(\hat{\tau}_{t}(U^{\alpha}_{i,i})=U^{\alpha}_{i,i}\), and the equality \((\hat{\tau}_{t}\otimes\tau_{t})\mathrm{W}^{\widehat{\mathbb{I}}}=\mathrm{W}^{ \widehat{\mathbb{I}}}\), together with (6.2), shows \(\tau_{t}(a_{i})=a_{i}\) for each \(t,i\). Thus actually \(b_{i}=a_{i}\), and the final step of forming \(c_{i}\), and rescaling, will also give central elements.
In the unimodular case there is no difference between AP and central AP.
**Proposition 6.8**.: _Let \(\mathbb{G}\) be a unimodular discrete quantum group. Then \(\mathbb{G}\) has AP if and only if it has central AP._
Proof.: Since the Haar integral \(h\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\) is a trace there exists a unique state-preserving normal faithful conditional expectation \(E\colon\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\bar{\otimes}\,\mathrm{L}^{ \infty}(\widehat{\mathbb{G}})\to\widehat{\Delta}(\mathrm{L}^{\infty}(\widehat{ \mathbb{G}}))\subseteq\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\bar{\otimes}\, \mathrm{L}^{\infty}(\widehat{\mathbb{G}})\). Explicitly, we have
\[E(U^{\alpha}_{i,j}\otimes U^{\beta}_{k,l})=\frac{\delta_{\alpha\beta}\delta_{jk}}{ \dim(\alpha)}\widehat{\Delta}(U^{\alpha}_{i,l}),\]
compare [9, Section 6.3.2]. Set \(\widehat{\Delta}^{\sharp}=\widehat{\Delta}^{-1}E:\mathrm{L}^{\infty}(\widehat{ \mathbb{G}})\bar{\otimes}\,\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\to\mathrm{L}^ {\infty}(\widehat{\mathbb{G}})\). Given \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{G}))\) define
\[\Psi(a)=\widehat{\Delta}^{\sharp}(\mathrm{id}\otimes\Theta^{l}(a))\widehat{ \Delta}\in\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})).\]
That \(\Psi(a)\) is normal and completely bounded is clear, and note that we have \(\|\Psi(a)\|_{cb}\leq\|a\|_{cb}\). Moreover, if \(a\) is finitely supported then \(\Psi(a)\) has finite-rank, see Remark 6.4.
Define also \(A:\ell^{\infty}(\mathbb{G})\to\mathcal{Z}\ell^{\infty}(\mathbb{G})\) by
\[A(f)=\sum_{\alpha\in\operatorname{Irr}(\widehat{\mathbb{G}})}\frac{ \operatorname{Tr}_{\alpha}(f)}{\dim(\alpha)}p_{\alpha},\]
where \(p_{\alpha}\in\ell^{\infty}(\mathbb{G})\) is the central projection corresponding to \(\operatorname{B}(\mathsf{H}_{\alpha})\subseteq\ell^{\infty}(\mathbb{G})\) and \(\operatorname{Tr}_{\alpha}\in\ell^{1}(\mathbb{G})\) is the projection onto \(\operatorname{B}(\mathsf{H}_{\alpha})\) composed with the (non-normalised) trace on \(\operatorname{B}(\mathsf{H}_{\alpha})\). Then \(A\) is a contractive linear map. Given \(a\in\operatorname{M}^{l}_{cb}(\operatorname{A}(\mathbb{G}))\) and \(\omega\in h(\operatorname{Pol}(\widehat{\mathbb{G}})\cdot)\), we compute
\[\lambda_{\widehat{\mathbb{G}}}(\Psi(a)_{*}(\omega))=\sum_{\alpha \in\operatorname{Irr}(\widehat{\mathbb{G}})}\sum_{i,j=1}^{\dim(\alpha)} \langle\Psi(a)(U^{\alpha}_{i,j}),\omega\rangle e^{\alpha}_{i,j}\] \[=\sum_{\alpha\in\operatorname{Irr}(\widehat{\mathbb{G}})}\sum_{ i,j,k=1}^{\dim(\alpha)}\langle\widehat{\Delta}^{\sharp}(U^{\alpha}_{i,k}\otimes \Theta^{l}(a)(U^{\alpha}_{k,j})),\omega\rangle e^{\alpha}_{i,j}=\sum_{\alpha \in\operatorname{Irr}(\widehat{\mathbb{G}})}\sum_{i,j,k,l=1}^{\dim(\alpha)}a ^{\alpha}_{k,l}\langle\widehat{\Delta}^{\sharp}(U^{\alpha}_{i,k}\otimes U^{ \alpha}_{l,j}),\omega\rangle e^{\alpha}_{i,j}\] \[=\sum_{\alpha\in\operatorname{Irr}(\widehat{\mathbb{G}})}\sum_{ i,j,k=1}^{\dim(\alpha)}\frac{a^{\alpha}_{k,k}}{\dim(\alpha)}\langle U^{ \alpha}_{i,j},\omega\rangle e^{\alpha}_{i,j}=\sum_{\alpha\in\operatorname{ Irr}(\widehat{\mathbb{G}})}\sum_{i,j=1}^{\dim(\alpha)}\langle U^{\alpha}_{i,j}, \omega\rangle\frac{\operatorname{Tr}_{\alpha}(a)}{\dim(\alpha)}e^{\alpha}_{i,j }=A(a)\lambda_{\widehat{\mathbb{G}}}(\omega).\]
Here we used Lemma 6.1 to compute the action of \(\Theta^{l}(a)\), and notice that by the choice of \(\omega\), all the sums involved are finite. As such \(\omega\) are dense in \(\operatorname{L}^{1}(\widehat{\mathbb{G}})\), it follows that \(A(a)\) is a left \(\operatorname{CB}\) multiplier and \(\Theta^{l}(A(a))=\Psi(a)\). In particular, \(A(a)\in\mathcal{Z}\operatorname{M}^{l}_{cb}(\operatorname{A}(\mathbb{G}))\) and \(\|A(a)\|_{cb}\leq\|a\|_{cb}\).
Now assume that \(\mathbb{G}\) has AP and let \((f_{i})_{i\in I}\) in \(\operatorname{c}_{00}(\mathbb{G})\) be a net converging weak\({}^{*}\) to \(\mathbb{1}\) in \(\operatorname{M}^{l}_{cb}(\operatorname{A}(\mathbb{G}))\). In order to prove that \(\mathbb{G}\) has central AP, we shall show that \(A(f_{i})\xrightarrow[i\in I]{}1\) weak\({}^{*}\) in \(\operatorname{M}^{l}_{cb}(\operatorname{A}(\mathbb{G}))\). Take \(x\in\operatorname{C}(\widehat{\mathbb{G}})\otimes\mathcal{K}(\mathsf{H}), \omega\in\operatorname{L}^{1}(\widehat{\mathbb{G}})\widehat{\otimes} \operatorname{B}(\mathsf{H})_{*}\) for a separable Hilbert space \(\mathsf{H}\). Then we have
\[\langle A(f_{i}),\Omega_{x,\omega}\rangle =\langle(\Psi(f_{i})\otimes\operatorname{id})x,\omega\rangle\] \[=\langle(\widehat{\Delta}^{\sharp}\otimes\operatorname{id})( \operatorname{id}\otimes\Theta^{l}(f_{i})\otimes\operatorname{id})( \widehat{\Delta}\otimes\operatorname{id})x,\omega\rangle\] \[=\langle(\operatorname{id}\otimes\Theta^{l}(f_{i})\otimes \operatorname{id})(\widehat{\Delta}\otimes\operatorname{id})x,\omega\circ( \widehat{\Delta}^{\sharp}\otimes\operatorname{id})\rangle.\]
By applying Lemma 6.6, with \(\mathsf{M}=\operatorname{L}^{\infty}(\widehat{\mathbb{G}}),\operatorname{N}=\mathbb{C}\), it follows that
\[\lim_{i\in I}\langle A(f_{i}),\Omega_{x,\omega}\rangle=\langle(\operatorname{id} \otimes\operatorname{id}\otimes\operatorname{id})(\widehat{\Delta}\otimes \operatorname{id})x,\omega\circ(\widehat{\Delta}^{\sharp}\otimes\operatorname{ id})\rangle=\langle x,\omega\rangle=\langle 1,\Omega_{x,\omega}\rangle,\]
hence showing that \(A(f_{i})\xrightarrow[i\in I]{}1\) weak\({}^{*}\), as required.
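To orient the reader (purely as an illustration, not used in the sequel): if \(\mathbb{G}\) is an ordinary discrete group, which is automatically unimodular in the above sense, then every \(\alpha\in\operatorname{Irr}(\widehat{\mathbb{G}})\) is one-dimensional and corresponds to a group element \(g\), with \(U^{g}=\lambda_{g}\) the translation unitary in the group von Neumann algebra \(\operatorname{L}^{\infty}(\widehat{\mathbb{G}})\). The conditional expectation above is then determined by \(E(\lambda_{g}\otimes\lambda_{h})=\delta_{g,h}\,\lambda_{g}\otimes\lambda_{g}\), so \(\widehat{\Delta}^{\sharp}(\lambda_{g}\otimes\lambda_{h})=\delta_{g,h}\,\lambda_{g}\), and since all blocks are one-dimensional the averaging map \(A\) is simply the identity on \(\ell^{\infty}(\mathbb{G})\). This is consistent with the fact that \(\ell^{\infty}(\mathbb{G})\) is then commutative, so that AP and central AP coincide for trivial reasons.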
In the remainder of this section we relate AP to approximation properties of associated operator algebras. Let us start by recalling the appropriate von Neumann algebraic approximation property, see [31, Section 2].
**Definition 6.9**.: Let \(\operatorname{M}\) be a von Neumann algebra. Then \(\operatorname{M}\) has the _weak\({}^{*}\) operator approximation property (W\({}^{*}\)OAP)_ if there exists a net \((\Theta_{i})_{i\in I}\) of finite rank normal \(\operatorname{CB}\) maps on \(\operatorname{M}\) which converges to the identity in the stable point-weak\({}^{*}\)-topology, i.e. \((\Theta_{i}\otimes\operatorname{id})x\xrightarrow[i\in I]{}x\) for all separable Hilbert spaces \(\mathsf{H}\) and \(x\in\operatorname{M}\bar{\otimes}\operatorname{B}(\mathsf{H})\).
The following result was established by Kraus and Ruan for Kac algebras, [41, Theorem 4.15], using the formally stronger definition of AP (which by Theorem 4.4 is equivalent to the definition taken in this paper). A result of this type was also obtained in [13, Proposition 4.7], again with a formally stronger definition of AP, and under the assumption that \(\mathbb{G}\) is strongly inner amenable. We record a proof for the convenience of the reader.
**Proposition 6.10**.: _Let \(\mathbb{G}\) be a locally compact quantum group. If \(\mathbb{G}\) has AP, then \(\operatorname{L}^{\infty}(\widehat{\mathbb{G}})\) has W\({}^{*}\)OAP._
Proof.: Assume that \(\mathbb{G}\) has AP. By Theorem 4.4 there is a net \((\widehat{\lambda}(\omega_{i}))_{i\in I}\) in \(\mathrm{A}(\mathbb{G})\) such that \(\Theta^{l}(\widehat{\lambda}(\omega_{i}))\underset{i\in I}{\longrightarrow}\) id in the stable point-weak\({}^{*}\)-topology of \(\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\). Extend \(\omega_{i}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\) to \(\tilde{\omega}_{i}\in\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))_{*}\) with the same norm and define
\[\Psi_{i}\colon\,\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\ni T\mapsto (\tilde{\omega}_{i}\otimes\mathrm{id})\big{(}\mathrm{V}^{\widehat{\mathbb{G} }}(T\otimes 1)\mathrm{V}^{\widehat{\mathbb{G}}*}\big{)}\in\mathrm{B}(\mathrm{L}^{2 }(\mathbb{G})).\]
Clearly \(\Psi_{i}\) is a normal CB map on \(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\), and as \(\mathrm{V}^{\widehat{\mathbb{G}}}\in\mathrm{L}^{\infty}(\mathbb{G})^{\prime} \bar{\otimes}\,\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) the image of \(\Psi_{i}\) lies in \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\). Note that if \(x\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) then
\[\Psi_{i}(x)=(\tilde{\omega}_{i}\otimes\mathrm{id})\big{(}\mathrm{V}^{\widehat {\mathbb{G}}}(x\otimes 1)\mathrm{V}^{\widehat{\mathbb{G}}*}\big{)}=(\tilde{ \omega}_{i}\otimes\mathrm{id})\widehat{\Delta}(x)=\Theta^{l}(\widehat{\lambda }(\omega_{i}))(x).\]
Thus \(\Psi_{i}\) is an extension of \(\Theta^{l}(\widehat{\lambda}(\omega_{i}))\) to all of \(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\). As \(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\) has \(\mathrm{W}^{*}\mathrm{CPAP}\), see [10, Propositions 2.1.4, 2.2.7], there is a net \((\Upsilon_{\lambda})_{\lambda\in\Lambda}\) of finite rank normal unital CP maps on \(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}))\) which converges to the identity in the point-weak\({}^{*}\)-topology. Consider now the maps
\[\Psi_{i,\lambda}=\Psi_{i}\circ\Upsilon_{\lambda}|_{\mathrm{L}^{\infty}( \widehat{\mathbb{G}})}\colon\,\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\to \mathrm{L}^{\infty}(\widehat{\mathbb{G}}).\]
These maps are normal and CB, and since each \(\Upsilon_{\lambda}\) has finite rank, each \(\Psi_{i,\lambda}\) is of finite rank as well. Fix a separable Hilbert space \(\mathsf{H}\) and finite sets \(F\subseteq\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\bar{\otimes}\,\mathrm{B} (\mathsf{H}),\ G\subseteq\mathrm{L}^{1}(\widehat{\mathbb{G}})\widehat{\otimes} \,\mathrm{B}(\mathsf{H})_{*}\), and \(0<\varepsilon<1\). Given \(x\in F,\rho\in G\), we have
\[\langle(\Psi_{i}\otimes\mathrm{id})x,\rho\rangle=\langle(\Theta^{l}(\widehat {\lambda}(\omega_{i}))\otimes\mathrm{id})\,x,\rho\rangle\underset{i\in I}{ \longrightarrow}\langle x,\rho\rangle.\]
Thus there is \(i(F,G,\varepsilon)\in I\) such that
\[|\langle(\Psi_{i(F,G,\varepsilon)}\otimes\mathrm{id})x-x,\rho\rangle|\leq \tfrac{\varepsilon}{2}\qquad(x\in F,\rho\in G).\]
Next, since \(\Psi_{i(F,G,\varepsilon)}\) is normal, we have
\[|\langle(\Psi_{i(F,G,\varepsilon)}\circ\Upsilon_{\lambda}\otimes\mathrm{id})x -(\Psi_{i(F,G,\varepsilon)}\otimes\mathrm{id})x,\rho\rangle|\underset{\lambda \in\Lambda}{\longrightarrow}0\qquad(x\in F,\rho\in G),\]
hence there is \(\lambda(F,G,\varepsilon)\in\Lambda\) so that
\[|\langle(\Psi_{i(F,G,\varepsilon)}\circ\Upsilon_{\lambda(F,G,\varepsilon)} \otimes\mathrm{id})x-(\Psi_{i(F,G,\varepsilon)}\otimes\mathrm{id})x,\rho\rangle| \leq\tfrac{\varepsilon}{2}\qquad(x\in F,\rho\in G),\]
and by the triangle inequality
\[|\langle(\Psi_{i(F,G,\varepsilon)}\circ\Upsilon_{\lambda(F,G,\varepsilon)} \otimes\mathrm{id})x-x,\rho\rangle|\leq\varepsilon\qquad(x\in F,\rho\in G).\]
Consequently, the net \((\Psi_{i(F,G,\varepsilon),\lambda(F,G,\varepsilon)})_{(F,G,\varepsilon)}\) (indexed by triples consisting of finite subsets of \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\bar{\otimes}\,\mathrm{B}(\mathsf{H})\) and of \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\widehat{\otimes}\,\mathrm{B}(\mathsf{H})_ {*}\), together with numbers \(\varepsilon\in\,]0,1[\,\)) shows that \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) has the \(\mathrm{W}^{*}\mathrm{OAP}\).
**Remark 6.11**.: An analogous argument shows that if \(\widehat{\mathbb{G}}\) is coamenable then \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) has \(\mathrm{W}^{*}\mathrm{CPAP}\). In fact, a formally stronger result holds: \(\mathrm{W}^{*}\mathrm{CPAP}\) of \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}})\) follows from amenability of \(\mathbb{G}\) by [7, Theorem 3.3].
When the quantum group \(\mathbb{\Gamma}\) is discrete and has AP, we obtain approximation properties also for the associated \(\mathrm{C}^{*}\)-algebra. If \(\mathbb{\Gamma}\) is furthermore unimodular, the converse implications hold. These results are already known (see e.g. [41, Theorem 5.13]), hence we skip the proof. For the definition of OAP and strong OAP, see [23, Page 204] or [10, Section 12.4], for example.
**Proposition 6.12**.: _Let \(\mathbb{\Gamma}\) be a discrete quantum group. Consider the following conditions:_
1. \(\mathbb{\Gamma}\) _has AP,_
2. \(\mathrm{C}(\widehat{\mathbb{\Gamma}})\) _has strong OAP,_
3. \(\mathrm{C}(\widehat{\mathbb{\Gamma}})\) _has OAP,_
4. \(\mathrm{L}^{\infty}(\widehat{\mathbb{\Gamma}})\) _has W_\({}^{*}\mathrm{OAP}\)_._
_Then \((1)\Rightarrow(2)\Rightarrow(3)\) and \((1)\Rightarrow(4)\). If \(\mathbb{\Gamma}\) is unimodular then all the above conditions are equivalent._
**Remark 6.13**.: Combining Proposition 6.12 and Proposition 6.8, we see that when \(\mathbb{\Gamma}\) is unimodular and \(\mathrm{L}^{\infty}(\widehat{\mathbb{\Gamma}})\) has \(\mathrm{W}^{*}\mathrm{OAP}\), then \(\mathbb{\Gamma}\) has the central AP.
Following [66, Definition 1.27], we say that a discrete quantum group \(\mathbb{I}\) is exact when the _reduced crossed product functor_ \(\mathbb{I}\ltimes_{r}-\) preserves short exact sequences.
**Corollary 6.14**.: _Let \(\mathbb{I}\) be a discrete quantum group. If \(\mathbb{I}\) has AP then it is exact._
Proof.: By Proposition 6.12, the \(\mathrm{C}^{*}\)-algebra \(\mathrm{C}(\widehat{\mathbb{I}})\) has strong OAP. It is then exact by a combination of [23, Corollary 11.3.2] and [36, Theorem 1.1]. The result now follows from [66, Proposition 1.28].
We end this section with a result which shows that for a discrete quantum group \(\mathbb{I}\), AP is equivalent to a strengthening of \(\mathrm{W}^{*}\mathrm{OAP}\) of \(\mathrm{L}^{\infty}(\widehat{\mathbb{I}})\) which takes into consideration \(\ell^{\infty}(\mathbb{I})\). Let us introduce this strengthening in a general setting, compare [39, Definition 6.9].
**Definition 6.15**.: Let \((\mathrm{M},\theta)\) be a von Neumann algebra with a n.s.f. weight.
* Let \(\Phi\in\mathrm{CB}^{\sigma}(\mathrm{M})\) be a normal CB map satisfying \(\Phi(\mathfrak{N}_{\theta})\subseteq\mathfrak{N}_{\theta}\). We say that \(\Phi\) has an \(\mathrm{L}^{2}\)-implementation if there is \(T\in\mathrm{B}(\mathsf{H}_{\theta})\) such that \(\Lambda_{\theta}(\Phi(x))=T\Lambda_{\theta}(x)\) for \(x\in\mathfrak{N}_{\theta}\).
* Let \(\mathrm{N}\subseteq\mathrm{B}(\mathsf{H}_{\theta})\) be a von Neumann algebra. We say that \((\mathrm{M},\theta)\) has \(\mathrm{W}^{*}\mathrm{OAP}\) relative to \(\mathrm{N}\) if there is a net \((\Phi_{i})_{i\in I}\) such that:
* each \(\Phi_{i}\) is a normal, CB, finite rank map on \(\mathrm{M}\),
* each \(\Phi_{i}\) satisfies \(\Phi_{i}(\mathfrak{N}_{\theta})\subseteq\mathfrak{N}_{\theta}\) and has \(\mathrm{L}^{2}\)-implementation \(T_{i}\in\mathrm{N}\),
* the net \((\Phi_{i})_{i\in I}\) converges to the identity in the stable point-weak\({}^{*}\)-topology.
Note that an \(\mathrm{L}^{2}\)-implementation is unique. If it is clear from the context which weight on \(\mathrm{M}\) we choose, we will simply say that \(\mathrm{M}\) has \(\mathrm{W}^{*}\mathrm{OAP}\) relative to \(\mathrm{N}\).
**Theorem 6.16**.: _Let \(\mathbb{I}\) be a discrete quantum group. Consider the following conditions:_
1. \(\mathbb{I}\) _has AP._
2. \(\mathrm{L}^{\infty}(\widehat{\mathbb{I}})\) _has W_\({}^{*}\mathrm{OAP}\) _relative to_ \(\ell^{\infty}(\mathbb{I})\)_._
3. \(\mathrm{L}^{\infty}(\widehat{\mathbb{I}})\) _has W_\({}^{*}\mathrm{OAP}\) _relative to_ \(\ell^{\infty}(\mathbb{I})^{\prime}\)_._
4. \(\mathbb{I}\) _has central AP._
5. \(\mathrm{L}^{\infty}(\widehat{\mathbb{I}})\) _has W_\({}^{*}\mathrm{OAP}\) _relative to_ \(\mathcal{Z}(\ell^{\infty}(\mathbb{I}))\)_._
_Then (1) \(\Leftrightarrow\) (2) \(\Leftrightarrow\) (3) \(\Leftarrow\) (4) \(\Leftrightarrow\) (5)._
**Remark 6.17**.: This result is an analogue of Theorem 6.11 in [39], for (co)amenability and relative \(\mathrm{W}^{*}\mathrm{CPAP}\). Furthermore, it is similar in spirit to [61, Theorem 3] which is concerned with amenability and injectivity.
We start with an auxiliary result (compare [39, Proposition 6.12]). Recall that for a normal CB map \(\Phi\) on a von Neumann algebra, we denote by \(\Phi^{\dagger}\) the normal CB map given by \(x\mapsto\Phi(x^{*})^{*}\).
**Proposition 6.18**.: _Let \(\mathbb{G}\) be a locally compact quantum group with left Haar integral \(\varphi\) and let \(\Phi\in\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\) be a normal CB map. Assume that \(\Phi^{\dagger}\) satisfies \(\Phi^{\dagger}(\mathfrak{N}_{\widehat{\varphi}})\subseteq\mathfrak{N}_{ \widehat{\varphi}}\) and has \(\mathrm{L}^{2}\)-implementation \(T\colon\,\mathrm{L}^{2}(\mathbb{G})\to\mathrm{L}^{2}(\mathbb{G})\). We have:_
1. \(T\in\mathrm{L}^{\infty}(\mathbb{G})\) _if and only if_ \(\Phi_{*}(\omega\star\nu)=\Phi_{*}(\omega)\star\nu\) _for all_ \(\omega,\nu\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\)_,_
2. \(T\in\mathrm{L}^{\infty}(\mathbb{G})^{\prime}\) _if and only if_ \(\Phi_{*}(\omega\star\nu)=\omega\star\Phi_{*}(\nu)\) _for all_ \(\omega,\nu\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\)_,_
3. \(T\in\mathcal{Z}(\mathrm{L}^{\infty}(\mathbb{G}))\) _if and only if_ \(\Phi_{*}(\omega\star\nu)=\Phi_{*}(\omega)\star\nu=\omega\star\Phi_{*}(\nu)\) _for all_ \(\omega,\nu\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\)_._
Proof.: Using the biduality \(\mathbb{G}=\widehat{\widehat{\mathbb{G}}}\) and [68, Definition 4.6] (see also Section 2), we deduce that the subspace
\[\mathcal{N}=\{\widehat{\lambda}(\omega)\,|\,\omega\in\mathrm{L}^{1}(\widehat{ \mathbb{G}})\colon\exists_{\xi\in\mathrm{L}^{2}(\mathbb{G})}\forall_{x\in \mathfrak{N}_{\widehat{\varphi}}}\langle\Lambda_{\widehat{\varphi}}(x)\,|\, \xi\rangle=\omega(x^{*})\}\subseteq\mathrm{L}^{\infty}(\mathbb{G})\]
is a core for \(\Lambda_{\varphi}\), and that for \(\widehat{\lambda}(\omega)\in\mathcal{N}\) we have \(\Lambda_{\varphi}(\widehat{\lambda}(\omega))=\xi\). Before we proceed with the main proof, let us establish some preliminary results:
* For \(\widehat{\lambda}(\omega)\in\mathcal{N}\) we have \(\widehat{\lambda}(\Phi_{*}(\omega))\in\mathcal{N}\) and (6.4) \[T^{*}\Lambda_{\varphi}(\widehat{\lambda}(\omega))=\Lambda_{\varphi}(\widehat{ \lambda}(\Phi_{*}(\omega))).\]
Indeed, for \(x\in\mathfrak{N}_{\widehat{\varphi}}\),
\[\langle\Lambda_{\widehat{\varphi}}(x)\,|\,T^{*}\Lambda_{\varphi}( \widehat{\lambda}(\omega))\rangle =\langle T\Lambda_{\widehat{\varphi}}(x)\,|\,\Lambda_{\varphi}( \widehat{\lambda}(\omega))\rangle=\langle\Lambda_{\widehat{\varphi}}(\Phi^{ \dagger}(x))\,|\,\Lambda_{\varphi}(\widehat{\lambda}(\omega))\rangle\] \[=\omega(\Phi^{\dagger}(x)^{*})=\omega(\Phi(x^{*}))=\Phi_{*}( \omega)(x^{*}),\]
which proves that \(\widehat{\lambda}(\Phi_{*}(\omega))\in\mathcal{N}\) and that equation (6.4) holds.
* For \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}}),\widehat{\lambda}(\nu)\in \mathcal{N}\) we have \(\widehat{\lambda}(\omega\star\nu)\in\mathcal{N}\) and (6.5) \[\Lambda_{\varphi}(\widehat{\lambda}(\omega\star\nu))=\widehat{\lambda}( \omega)\Lambda_{\varphi}(\widehat{\lambda}(\nu)).\] Indeed, take \(x\in\mathfrak{N}_{\widehat{\varphi}}\). Using the left invariance of the Haar integral \(\varphi\) and the definition of \(\mathrm{W}^{\mathbb{G}}\) we obtain \[(\omega\star\nu)(x^{*})=\nu((\omega\otimes\mathrm{id})\Delta(x^{* }))=\nu((\overline{\omega}\otimes\mathrm{id})\Delta(x)^{*})=\langle\Lambda_{ \widehat{\varphi}}((\overline{\omega}\otimes\mathrm{id})\Delta(x))\,|\, \Lambda_{\varphi}(\widehat{\lambda}(\nu))\rangle\] \[=\langle(\overline{\omega}\otimes\mathrm{id})(\mathrm{W}^{ \widehat{\mathbb{G}}*})\Lambda_{\widehat{\varphi}}(x)\,|\,\Lambda_{\varphi}( \widehat{\lambda}(\nu))\rangle=\langle\Lambda_{\widehat{\varphi}}(x)\,|\, \widehat{\lambda}(\omega)\Lambda_{\varphi}(\widehat{\lambda}(\nu))\rangle,\] which proves the claim.
Let us now prove (1). If \(T\in\mathrm{L}^{\infty}(\mathbb{G})\) then (6.4) implies that \(\Lambda_{\varphi}(\widehat{\lambda}(\Phi_{*}(\omega)))=T^{*}\Lambda_{\varphi}( \widehat{\lambda}(\omega))=\Lambda_{\varphi}(T^{*}\widehat{\lambda}(\omega))\) and so \(T^{*}\widehat{\lambda}(\omega)=\widehat{\lambda}(\Phi_{*}(\omega))\), for each \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\) such that \(\widehat{\lambda}(\omega)\in\mathcal{N}\), and by density of such \(\omega\) (see Lemma 2.1) this equation holds for all \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\). Consequently
\[\widehat{\lambda}(\Phi_{*}(\omega\star\nu))=T^{*}\widehat{\lambda}(\omega \star\nu)=T^{*}\widehat{\lambda}(\omega)\widehat{\lambda}(\nu)=\widehat{ \lambda}(\Phi_{*}(\omega))\widehat{\lambda}(\nu)=\widehat{\lambda}(\Phi_{*}( \omega)\star\nu),\]
and so \(\Phi_{*}(\omega\star\nu)=\Phi_{*}(\omega)\star\nu\) for all \(\omega,\nu\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\).
For the converse, assume that \(\Phi_{*}(\omega\star\nu)=\Phi_{*}(\omega)\star\nu\) for all \(\omega,\nu\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\). Let us take \(\widehat{\lambda}(\omega),\widehat{\lambda}(\nu)\in\mathcal{N}\), and assume that the map \(\mathbb{R}\ni t\mapsto(\omega\hat{\delta}^{-it})\circ\hat{\tau}_{-t}\in \mathrm{L}^{1}(\widehat{\mathbb{G}})\) extends to an entire map; denoting by \(\rho\) the value of this extension at \(t=-\frac{i}{2}\), we assume furthermore that \(\widehat{\lambda}(\rho)\in\mathcal{N}\). We now use \(\mathrm{W}^{\widehat{\mathbb{G}}}=\chi(\mathrm{W}^{\mathbb{G}})^{*}\), together with \((\sigma_{t}^{\varphi}\otimes\mathrm{id})(\mathrm{W}^{\mathbb{G}})=(\tau_{t} \otimes\mathrm{id})(\mathrm{W}^{\mathbb{G}})(1\otimes\widehat{\delta}^{it})\), see [68, Proposition 5.15], and \((\tau_{t}\otimes\hat{\tau}_{t})(\mathrm{W}^{\mathbb{G}})=\mathrm{W}^{\mathbb{G}}\), see [45, Proposition 8.23]. It follows that for each \(t\in\mathbb{R}\),
\[\sigma_{t}^{\varphi}(\widehat{\lambda}(\omega))=(\omega\otimes \mathrm{id})((\mathrm{id}\otimes\sigma_{t}^{\varphi})\mathrm{W}^{\widehat{ \mathbb{G}}})=(\mathrm{id}\otimes\overline{\omega})((\sigma_{t}^{\varphi} \otimes\mathrm{id})(\mathrm{W}^{\mathbb{G}}))^{*}\] \[=(\mathrm{id}\otimes\overline{\omega})((\tau_{t}\otimes\mathrm{id })(\mathrm{W}^{\mathbb{G}})(1\otimes\hat{\delta}^{it}))^{*}=(\mathrm{id} \otimes\overline{\omega})((\mathrm{id}\otimes\hat{\tau}_{-t})(\mathrm{W}^{ \mathbb{G}})(1\otimes\hat{\delta}^{it}))^{*}\] \[=(\mathrm{id}\otimes\omega)((1\otimes\hat{\delta}^{-it})( \mathrm{id}\otimes\hat{\tau}_{-t})(\mathrm{W}^{\mathbb{G}})^{*})=(\omega \otimes\mathrm{id})((\hat{\delta}^{-it}\otimes 1)(\hat{\tau}_{-t}\otimes\mathrm{id})(\mathrm{W}^{ \widehat{\mathbb{G}}}))\] \[=\widehat{\lambda}((\omega\hat{\delta}^{-it})\circ\hat{\tau}_{-t}).\]
Consequently we obtain \(\widehat{\lambda}(\omega)\in\mathrm{Dom}(\sigma_{-i/2}^{\varphi})\) and \(\sigma_{-i/2}^{\varphi}(\widehat{\lambda}(\omega))=\widehat{\lambda}(\rho)\). Take \(x\in\mathfrak{N}_{\widehat{\varphi}}\). From (6.5), we know that \(\widehat{\lambda}(\nu\star\rho),\widehat{\lambda}(\Phi_{*}(\nu)\star\rho)\in \mathcal{N}\). By assumption, \(\Phi_{*}(\nu\star\rho)=\Phi_{*}(\nu)\star\rho\), and so we arrive at
\[\langle\Lambda_{\widehat{\varphi}}(x)\,|\,T^{*}J_{\varphi}\widehat{ \lambda}(\omega)^{*}J_{\varphi}\Lambda_{\varphi}(\widehat{\lambda}(\nu)) \rangle=\langle T\Lambda_{\widehat{\varphi}}(x)\,|\,\Lambda_{\varphi}(\widehat {\lambda}(\nu)\sigma_{-i/2}^{\varphi}(\widehat{\lambda}(\omega)))\rangle\] \[=\langle\Lambda_{\widehat{\varphi}}(\Phi^{\dagger}(x))\,|\, \Lambda_{\varphi}(\widehat{\lambda}(\nu\star\rho))\rangle=(\nu\star\rho)( \Phi^{\dagger}(x)^{*})=\Phi_{*}(\nu\star\rho)(x^{*})=(\Phi_{*}(\nu)\star\rho)( x^{*})\] \[=\langle\Lambda_{\widehat{\varphi}}(x)\,|\,\Lambda_{\varphi}( \widehat{\lambda}(\Phi_{*}(\nu)\star\rho))\rangle=\langle\Lambda_{\widehat{ \varphi}}(x)\,|\,\Lambda_{\varphi}\big{(}\widehat{\lambda}(\Phi_{*}(\nu)) \sigma_{-i/2}^{\varphi}(\widehat{\lambda}(\omega))\big{)}\rangle\] \[=\langle\Lambda_{\widehat{\varphi}}(x)\,|\,J_{\varphi}\widehat{ \lambda}(\omega)^{*}J_{\varphi}T^{*}\Lambda_{\varphi}(\widehat{\lambda}(\nu))\rangle.\]
By Lemma 2.1, we know that the collection of such \(\omega\) is dense in \(\mathrm{L}^{1}(\widehat{\mathbb{G}})\), and so the corresponding collection of operators \(J_{\varphi}\widehat{\lambda}(\omega)^{*}J_{\varphi}\) is dense in \(\mathrm{L}^{\infty}(\mathbb{G})^{\prime}\). Thus \(T^{*}\in\mathrm{L}^{\infty}(\mathbb{G})^{\prime\prime}=\mathrm{L}^{\infty}( \mathbb{G})\), as required.
Next we consider (2). Suppose that \(T\in\mathrm{L}^{\infty}(\mathbb{G})^{\prime}\). By equations (6.4) and (6.5), given \(\widehat{\lambda}(\omega),\widehat{\lambda}(\nu)\in\mathcal{N}\), we have
\[\Lambda_{\varphi}(\widehat{\lambda}(\Phi_{*}(\omega\star\nu)))=T^{*}\Lambda_{ \varphi}(\widehat{\lambda}(\omega\star\nu))=T^{*}\widehat{\lambda}(\omega) \Lambda_{\varphi}(\widehat{\lambda}(\nu))=\widehat{\lambda}(\omega)T^{*} \Lambda_{\varphi}(\widehat{\lambda}(\nu))=\Lambda_{\varphi}(\widehat{\lambda} (\omega\star\Phi_{*}(\nu))),\]
hence \(\Phi_{*}(\omega\star\nu)=\omega\star\Phi_{*}(\nu)\). As these functionals are dense in \(\mathrm{L}^{1}(\mathbb{G})\) by Lemma 2.1, the claim follows.
Conversely, suppose that \(\Phi_{*}(\omega\star\nu)=\omega\star\Phi_{*}(\nu)\) for all \(\omega,\nu\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\). For \(\widehat{\lambda}(\omega),\widehat{\lambda}(\nu)\in\mathcal{N}\) we then have
\[\widehat{\lambda}(\omega)T^{*}\Lambda_{\varphi}(\widehat{\lambda}(\nu))= \Lambda_{\varphi}(\widehat{\lambda}(\omega\star\Phi_{*}(\nu)))=\Lambda_{ \varphi}(\widehat{\lambda}(\Phi_{*}(\omega\star\nu)))=T^{*}\widehat{\lambda}( \omega)\Lambda_{\varphi}(\widehat{\lambda}(\nu)).\]
Again by density, it follows that \(T^{*}\in\mathrm{L}^{\infty}(\mathbb{G})^{\prime}\), as required.
Finally, (3) follows by combining (1) and (2).
Proof of Theorem 6.16.: If \(\mathbb{I}\) has AP then by Proposition 6.5 we have a net \((a_{i})_{i\in I}\) in \(\mathrm{c}_{00}(\mathbb{I})\) which converges in \((\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{I})),w^{*})\) to \(1\), and the associated maps \(\Theta^{l}(a_{i})\) satisfy \(\Theta^{l}(a_{i})^{\dagger}=\Theta^{l}(a_{i})\). Proposition 4.12 shows that each \(\Theta^{l}(a_{i})\) has an \(\mathrm{L}^{2}\)-implementation equal to \(S^{-1}(a_{i})\in\ell^{\infty}(\mathbb{I})\). Proposition 4.7, for \(f=\varepsilon\in\ell^{1}(\mathbb{I})\) being the counit of \(\mathbb{I}\), shows that \((\Theta^{l}(a_{i}))_{i\in I}\) converges to the identity in the stable point-weak\({}^{*}\)-topology, and consequently \(\mathrm{L}^{\infty}(\widehat{\mathbb{I}})\) has \(\mathrm{W}^{*}\mathrm{OAP}\) relative to \(\ell^{\infty}(\mathbb{I})\). This shows (1) \(\Rightarrow\) (2).
If \(\mathbb{I}\) has central AP then additionally \(S^{-1}(a_{i})\in\mathcal{Z}(\ell^{\infty}(\mathbb{I}))\) and so \(\mathrm{L}^{\infty}(\widehat{\mathbb{I}})\) has \(\mathrm{W}^{*}\mathrm{OAP}\) relative to \(\mathcal{Z}(\ell^{\infty}(\mathbb{I}))\). Thus (4) \(\Rightarrow\) (5).
Let us now show the equivalence of (2) and (3). Assume (2) and let \((\Phi_{i})_{i\in I}\) be a net giving \(\mathrm{W}^{*}\mathrm{OAP}\) of \(\mathrm{L}^{\infty}(\widehat{\mathbb{I}})\) relative to \(\ell^{\infty}(\mathbb{I})\). Define a net \((\Psi_{i})_{i\in I}\) by \(\Psi_{i}=\widehat{R}\circ\Phi_{i}^{\dagger}\circ\widehat{R}\). Lemma 4.8 shows that each \(\Psi_{i}\) is a normal, finite rank CB map on \(\mathrm{L}^{\infty}(\widehat{\mathbb{I}})\) and \(\Psi_{i}\underset{i\in I}{\longrightarrow}\mathrm{id}\) in the stable point-weak\({}^{*}\)-topology. Let \(T_{i}\in\ell^{\infty}(\mathbb{I})\) be the \(\mathrm{L}^{2}\)-implementation of \(\Phi_{i}\). By [46, Proposition 2.11] we know that \(J_{\varphi}\Lambda_{h}(x)=\Lambda_{h}(\widehat{R}(x)^{*})\) for each \(x\in\mathrm{L}^{\infty}(\widehat{\mathbb{I}})\), and so
\[\Lambda_{h}(\Psi_{i}(x))=\Lambda_{h}(\widehat{R}\big{(}\Phi_{i}^ {\dagger}(\widehat{R}(x))\big{)})=J_{\varphi}\Lambda_{h}(\Phi_{i}^{\dagger}( \widehat{R}(x))^{*})\] \[=J_{\varphi}\Lambda_{h}(\Phi_{i}(\widehat{R}(x)^{*}))=J_{\varphi} T_{i}\Lambda_{h}(\widehat{R}(x)^{*})=J_{\varphi}T_{i}J_{\varphi}\Lambda_{h}(x).\]
Hence \(\Psi_{i}\) has \(\mathrm{L}^{2}\)-implementation \(J_{\varphi}T_{i}J_{\varphi}\in\ell^{\infty}(\mathbb{I})^{\prime}\), showing (3). The converse is analogous.
Now assume (2), i.e. that \(\mathrm{L}^{\infty}(\widehat{\mathbb{I}})\) has \(\mathrm{W}^{*}\mathrm{OAP}\) relative to \(\ell^{\infty}(\mathbb{I})\). As before, let \((\Phi_{i})_{i\in I}\) be the corresponding net of normal, finite rank CB maps with \(\mathrm{L}^{2}\)-implementations \(T_{i}\in\ell^{\infty}(\mathbb{I})\). For \(i\in I\), let \(\Psi=\Phi_{i}^{\dagger}\), so that \(\Psi\) is a normal CB map such that \(\Psi^{\dagger}=\Phi_{i}\) has \(\mathrm{L}^{2}\)-implementation \(T_{i}\in\ell^{\infty}(\mathbb{I})\). By Proposition 6.18(1), \(\Psi_{*}\) is a left centraliser. Hence also \(\Phi_{i,*}\) is a left centraliser, compare with the proof of Proposition 4.9, and so there is \(a_{i}\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{I}))\) with \(\Theta^{l}(a_{i})=\Phi_{i}\). By Lemma 6.1, as \(\Phi_{i}\) is finite-rank, it must be that \(a_{i}\in\mathrm{c}_{00}(\mathbb{I})\). By definition, \((\Phi_{i})_{i\in I}=(\Theta^{l}(a_{i}))_{i\in I}\) converges to the identity in the stable point-weak\({}^{*}\)-topology, and so by Proposition 3.9, \(a_{i}\underset{i\in I}{\longrightarrow}\mathrm{1}\) in \((\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{I})),w^{*})\) and consequently \(\mathbb{I}\) has AP. Therefore (1) and (2) are equivalent.
Finally, suppose that \(\mathrm{L}^{\infty}(\widehat{\mathbb{I}})\) has \(\mathrm{W}^{*}\mathrm{OAP}\) relative to \(\mathcal{Z}(\ell^{\infty}(\mathbb{I}))\), and proceed as above. By definition, we have that \(T_{i}\Lambda_{h}(x)=\Lambda_{h}(\Phi_{i}(x))=\Lambda_{h}(\Theta^{l}(a_{i})(x))\) for each \(x\in\mathrm{C}(\widehat{\mathbb{I}})\). By Proposition 4.12, it follows that \(T_{i}=S(a_{i}^{*})^{*}\). As \(T_{i}\in\mathcal{Z}(\ell^{\infty}(\mathbb{I}))\), it follows that \(a_{i}\in\mathcal{Z}(\ell^{\infty}(\mathbb{I}))\cap\mathrm{c}_{00}(\mathbb{I})\), which shows that \(\mathbb{I}\) has central AP. This establishes that (4) and (5) are equivalent. The implication (4) \(\Rightarrow\) (1) is trivial.
## 7. Permanence properties
### Quantum subgroups
For classical locally compact groups, AP passes to closed subgroups, see [31, Proposition 1.14]. We shall show that an analogous property holds also in the quantum case.
Let us start by recalling the notion of a closed quantum subgroup of a locally compact quantum group, see [20]. In what follows we will use the universal \(\mathrm{C}^{*}\)-algebra \(\mathrm{C}^{u}_{0}(\mathbb{G})\) and the reducing map \(\Lambda_{\mathbb{G}}:\mathrm{C}^{u}_{0}(\mathbb{G})\to\mathrm{C}_{0}(\mathbb{G})\), see [43], along with the semi-universal and universal multiplicative unitaries, compare [20, Section 1.2]. We will also use the notion of a quantum homomorphism as explored in [49], see also [19, Section 2.1].
Let \(\mathbb{H},\mathbb{G}\) be locally compact quantum groups. Assume that there is a homomorphism \(\mathbb{H}\to\mathbb{G}\) exhibited by a _strong quantum homomorphism_ in the sense of [20, Section 1.3], i.e. a non-degenerate \(\star\)-homomorphism \(\pi:\mathrm{C}^{u}_{0}(\mathbb{G})\to\mathrm{M}(\mathrm{C}^{u}_{0}(\mathbb{H}))\) such that \(\Delta^{u}_{\mathbb{H}}\circ\pi=(\pi\otimes\pi)\circ\Delta^{u}_{\mathbb{G}}\). Then there is a dual strong quantum homomorphism \(\widehat{\pi}:\mathrm{C}^{u}_{0}(\widehat{\mathbb{H}})\to\mathrm{M}(\mathrm{C}^ {u}_{0}(\widehat{\mathbb{G}}))\) which is related to \(\pi\)
via \((\pi\otimes\mathrm{id})\mathds{W}^{\mathbb{G}}=(\mathrm{id}\otimes\widehat{\pi}) \mathds{W}^{\mathbb{H}}\), see [20, Section 1.3]. In this situation we say that \(\mathbb{H}\) is a _closed quantum subgroup_ of \(\mathbb{G}\) in the sense of Vaes if there is a normal unital injective \(\star\)-homomorphism \(\gamma\colon\mathrm{L}^{\infty}(\widehat{\mathbb{H}})\to\mathrm{L}^{\infty}( \widehat{\mathbb{G}})\) such that \(\gamma(\Lambda_{\widehat{\mathbb{H}}}(x))=\Lambda_{\widehat{\mathbb{G}}}( \widehat{\pi}(x))\) for all \(x\in\mathrm{C}^{u}_{0}(\widehat{\mathbb{H}})\), see [65, Definition 2.5], [20, Definition 3.1]. Notice that this condition implies that \(\Delta_{\widehat{\mathbb{G}}}\gamma=(\gamma\otimes\gamma)\Delta_{\widehat{ \mathbb{H}}}\).
**Theorem 7.1**.: _Let \(\mathbb{H},\mathbb{G}\) be locally compact quantum groups and assume that \(\mathbb{H}\) is a closed quantum subgroup of \(\mathbb{G}\) in the sense of Vaes. If \(\mathbb{G}\) has AP then so does \(\mathbb{H}\)._
Proof.: Since \(\mathbb{H}\) is a closed quantum subgroup of \(\mathbb{G}\) we obtain maps \(\pi,\widehat{\pi},\gamma\) as discussed above. If \(\mathbb{G}\) has AP, then according to Theorem 4.4 we can choose a net \((a_{i})_{i\in I}\) in \(\mathrm{A}(\mathbb{G})\subseteq\mathrm{C}_{0}(\mathbb{G})\) such that \((\Theta^{l}(a_{i}))_{i\in I}\) converges to the identity in the stable point-weak\({}^{*}\)-topology of \(\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}}))\). For each \(i\in I\) let \(\omega_{i}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\) be such that \(a_{i}=\lambda_{\widehat{\mathbb{G}}}(\omega_{i})\), and define \(b_{i}=\lambda_{\widehat{\mathbb{H}}}(\gamma_{*}(\omega_{i}))\). As \(\gamma_{*}(\omega_{i})\in\mathrm{L}^{1}(\widehat{\mathbb{H}})\), we see that \(b_{i}\in\mathrm{A}(\mathbb{H})\). Let \(\mathsf{H}\) be a separable Hilbert space and take \(x\in\mathrm{C}_{0}(\widehat{\mathbb{H}})\otimes\mathcal{K}(\mathsf{H}),\omega \in\mathrm{L}^{1}(\widehat{\mathbb{H}})\widehat{\otimes}\,\mathrm{B}( \mathsf{H})_{*}\). We have
\[\langle b_{i},\Omega_{x,\omega}\rangle=\langle(\Theta^{l}(b_{i})\otimes \mathrm{id})x,\omega\rangle=\langle\big{(}(\gamma_{*}(\omega_{i})\otimes \mathrm{id})\Delta_{\widehat{\mathbb{H}}}\otimes\mathrm{id}\big{)}x,\omega\rangle.\]
Since \(\gamma\) is a complete isometry, \(\gamma_{*}\) is a complete quotient map ([23, Corollary 4.1.9]). By [23, Proposition 7.1.7], \(\gamma_{*}\otimes\mathrm{id}\colon\mathrm{L}^{1}(\widehat{\mathbb{G}}) \widehat{\otimes}\,\mathrm{B}(\mathsf{H})_{*}\to\mathrm{L}^{1}(\widehat{ \mathbb{H}})\widehat{\otimes}\,\mathrm{B}(\mathsf{H})_{*}\) is also a complete quotient map, hence we can find \(\omega^{\prime}\in\mathrm{L}^{1}(\widehat{\mathbb{G}})\widehat{\otimes}\, \mathrm{B}(\mathsf{H})_{*}\) such that \(\omega=(\gamma_{*}\otimes\mathrm{id})\omega^{\prime}\).
Since \(\gamma\) intertwines the coproducts,
\[\langle b_{i},\Omega_{x,\omega}\rangle =\langle(\omega_{i}\otimes\mathrm{id}\otimes\mathrm{id})\big{(}( \gamma\otimes\gamma)\Delta_{\widehat{\mathbb{H}}}\otimes\mathrm{id})x,\omega ^{\prime}\rangle\] \[=\langle((\omega_{i}\otimes\mathrm{id})\Delta_{\widehat{\mathbb{G }}}\otimes\mathrm{id})(\gamma\otimes\mathrm{id})x,\omega^{\prime}\rangle= \langle(\Theta^{l}(a_{i})\otimes\mathrm{id})(\gamma\otimes\mathrm{id})x, \omega^{\prime}\rangle.\]
Using stable point-weak\({}^{*}\)-convergence we obtain
\[\langle b_{i},\Omega_{x,\omega}\rangle\underset{i\in I}{\longrightarrow}\langle( \gamma\otimes\mathrm{id})x,\omega^{\prime}\rangle=\langle x,\omega\rangle= \langle\mathbb{1},\Omega_{x,\omega}\rangle.\]
Hence \(b_{i}\overset{w^{*}}{\underset{i\in I}{\longrightarrow}}\mathbb{1}\) showing that \((b_{i})_{i\in I}\) witnesses that \(\mathbb{H}\) has the AP.
### Direct limits
In this section we show that AP is preserved by taking direct limits of discrete quantum groups obtained from directed systems with injective connecting maps. The corresponding fact for classical groups is certainly known, but we were not able to locate a reference.
Let us first recall some facts about quantum subgroups of discrete quantum groups, using the notation and terminology from Section 6. In the discrete setting there is no difference between closed quantum subgroups in the sense of Vaes, closed quantum subgroups in the sense of Woronowicz [20, Theorem 6.2], and open quantum subgroups in the sense of [35]. We will therefore simply speak of quantum subgroups of discrete quantum groups in the sequel.
Let \(\mathbb{I},\mathbb{A}\) be discrete quantum groups and assume that \(\mathbb{A}\) is a quantum subgroup of \(\mathbb{I}\). Then one can identify \(\mathrm{Irr}(\widehat{\mathbb{A}})\) with a subset of \(\mathrm{Irr}(\widehat{\mathbb{I}})\), and one obtains a corresponding identification of \(\mathrm{L}^{2}(\widehat{\mathbb{A}})\) with a subspace of \(\mathrm{L}^{2}(\widehat{\mathbb{I}})\). Let \(p\in\ell^{\infty}(\mathbb{I})\subseteq\mathrm{B}(\mathrm{L}^{2}(\widehat{ \mathbb{I}}))\) be the projection onto \(\mathrm{L}^{2}(\widehat{\mathbb{A}})\). Then \(p\) is a group-like projection (i.e. \(p\) is a central projection satisfying \(\Delta_{\mathbb{I}}(p)(\mathbb{1}\otimes p)=p\otimes p\), see [35, Definition 4.1]) and the strong quantum homomorphism \(\pi\colon\mathrm{c}_{0}(\mathbb{I})\to\mathrm{c}_{0}(\mathbb{A})\) associated with the inclusion of \(\mathbb{A}\) into \(\mathbb{I}\) is given by \(\pi(f)=fp\). Dually, we have an injective, normal, unital \(\star\)-homomorphism \(\iota\colon\mathrm{L}^{\infty}(\widehat{\mathbb{A}})\to\mathrm{L}^{\infty}( \widehat{\mathbb{I}})\) which respects the coproducts. The map \(\iota\) restricts to injective \(\star\)-homomorphisms \(\mathrm{C}(\widehat{\mathbb{A}})\to\mathrm{C}(\widehat{\mathbb{I}})\) and \(\mathrm{Pol}(\widehat{\mathbb{A}})\to\mathrm{Pol}(\widehat{\mathbb{I}})\).
The following fact is well-known, compare for instance [69, Section 2]. Using \(\iota\) we can view \(\mathrm{L}^{\infty}(\widehat{\mathbb{A}})\) as a subalgebra of \(\mathrm{L}^{\infty}(\widehat{\mathbb{I}})\), so that in the following it makes sense to speak of \(\mathbb{E}\) as a conditional expectation in the usual sense of a contractive projection onto a subalgebra.
**Lemma 7.2**.: _The formula \(\mathrm{L}^{\infty}(\widehat{\mathbb{I}})\ni x\mapsto pxp\in\mathrm{B}(p\, \mathrm{L}^{2}(\widehat{\mathbb{I}}))=\mathrm{B}(\mathrm{L}^{2}(\widehat{ \mathbb{A}}))\) defines a normal conditional expectation \(\mathbb{E}\colon\mathrm{L}^{\infty}(\widehat{\mathbb{I}})\to\mathrm{L}^{\infty}( \widehat{\mathbb{A}})\) satisfying \(\mathbb{E}(U^{\alpha}_{i,j})=0\) for \(\alpha\in\mathrm{Irr}(\widehat{\mathbb{I}})\setminus\mathrm{Irr}(\widehat{ \mathbb{A}})\), \(1\leq i,j\leq\dim(\alpha)\). Furthermore, \(\mathbb{E}\) restricts to a conditional expectation \(\mathrm{C}(\widehat{\mathbb{I}})\to\mathrm{C}(\widehat{\mathbb{A}})\)._
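Purely as an illustration (not needed below): if \(\mathbb{I}\) is an ordinary discrete group and \(\mathbb{A}\leq\mathbb{I}\) an ordinary subgroup, then \(\mathrm{L}^{2}(\widehat{\mathbb{I}})\) is \(\ell^{2}(\mathbb{I})\), the projection \(p\) is the orthogonal projection onto \(\ell^{2}(\mathbb{A})\), and \(\mathbb{E}\) is the usual trace-preserving conditional expectation from the group von Neumann algebra of \(\mathbb{I}\) onto that of \(\mathbb{A}\), sending \(\lambda_{g}\) to \(\lambda_{g}\) for \(g\in\mathbb{A}\) and to \(0\) for \(g\notin\mathbb{A}\).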
We shall be interested in directed systems of discrete quantum groups in the following sense.
**Definition 7.3**.: Let \(I\) be a directed set. A _directed system of discrete quantum groups with injective connecting maps_ is a family of discrete quantum groups \((\mathbb{T}_{i})_{i\in I}\) together with injective unital normal \(\star\)-homomorphisms
\[\iota_{j,i}\colon\,\mathrm{L}^{\infty}(\widehat{\mathbb{T}}_{i})\to\mathrm{L}^{ \infty}(\widehat{\mathbb{T}}_{j})\qquad(i,j\in I\colon i\leq j),\]
compatible with coproducts, such that
* \(\iota_{i,i}=\mathrm{id}\) for \(i\in I\),
* \(\iota_{k,j}\iota_{j,i}=\iota_{k,i}\) for all \(i,j,k\in I\) satisfying \(i\leq j\leq k\).
If \((\mathbb{T}_{i})_{i\in I}\) is a directed system of discrete quantum groups with injective connecting maps then \(\mathbb{T}_{i}\) is a quantum subgroup of \(\mathbb{T}_{j}\) for \(i\leq j\), and we have injective maps \(\mathrm{Pol}(\widehat{\mathbb{T}}_{i})\to\mathrm{Pol}(\widehat{\mathbb{T}}_{j})\). The algebraic direct limit \(\varinjlim_{i\in I}\mathrm{Pol}(\widehat{\mathbb{T}}_{i})\) becomes naturally a unital Hopf \(\ast\)-algebra, equipped with an invariant faithful state induced by the Haar integrals of \(\widehat{\mathbb{T}}_{i}\). We therefore have \(\varinjlim_{i\in I}\mathrm{Pol}(\widehat{\mathbb{T}}_{i})=\mathrm{Pol}( \widehat{\mathbb{T}})\) for a uniquely determined discrete quantum group \(\mathbb{T}\), see for example [37, Chapter 11, Theorem 27]. We denote \(\mathbb{T}=\varinjlim_{i\in I}\mathbb{T}_{i}\) and call this the _direct limit_ of the directed system \((\mathbb{T}_{i})_{i\in I}\).
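For example (again purely as an illustration), if each \(\mathbb{T}_{i}\) is an ordinary discrete group, this construction recovers the usual direct limit of groups: an increasing family of subgroups \(\mathbb{T}_{1}\leq\mathbb{T}_{2}\leq\cdots\) with the inclusion maps as connecting maps has the union \(\bigcup_{i}\mathbb{T}_{i}\) as its direct limit, as in \(S_{\infty}=\varinjlim_{n}S_{n}\) for the finite symmetric groups.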
**Proposition 7.4**.: _Let \((\mathbb{T}_{i})_{i\in I}\) be a directed system of discrete quantum groups with injective connecting maps and let \(\mathbb{T}\) be its associated direct limit. If \(\mathbb{T}_{i}\) has (central) AP for all \(i\in I\), then \(\mathbb{T}\) has (central) AP._
Proof.: By construction each \(\mathbb{T}_{i}\) is a quantum subgroup of \(\mathbb{T}\). Consequently we obtain injective normal \(\star\)-homomorphisms \(\iota_{i}:\mathrm{L}^{\infty}(\widehat{\mathbb{T}}_{i})\to\mathrm{L}^{\infty}( \widehat{\mathbb{T}})\), and normal conditional expectations \(\mathbb{E}_{i}\colon\,\mathrm{L}^{\infty}(\widehat{\mathbb{T}})\to\mathrm{L}^{ \infty}(\widehat{\mathbb{T}}_{i})\) for all \(i\in I\).
Identifying \(\mathrm{Irr}(\widehat{\mathbb{T}}_{i})\) with a subset of \(\mathrm{Irr}(\widehat{\mathbb{T}})\) gives us the extension by zero map \(\rho_{i}:\ell^{\infty}(\mathbb{T}_{i})\to\ell^{\infty}(\mathbb{T})\). For \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{T}})\) we have \(\omega\circ\iota_{i}\in\mathrm{L}^{1}(\widehat{\mathbb{T}}_{i})\) and so \(\lambda_{\widehat{\mathbb{T}}_{i}}(\omega\circ\iota_{i})\in\ell^{\infty}( \mathbb{T}_{i})\). We see that \(\rho_{i}\lambda_{\widehat{\mathbb{T}}_{i}}(\omega\circ\iota_{i})\) agrees with \(\lambda_{\widehat{\mathbb{T}}}(\omega)\in\ell^{\infty}(\mathbb{T})\) restricted to \(\ell^{\infty}(\mathbb{T}_{i})\) and set to zero in the remaining matrix blocks. Similarly, for \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{T}}_{i})\), by normality, \(\omega\circ\mathbb{E}_{i}\in\mathrm{L}^{1}(\widehat{\mathbb{T}})\), and as \(\mathbb{E}_{i}(U_{i,j}^{\alpha})=0\) for \(\alpha\not\in\mathrm{Irr}(\widehat{\mathbb{T}}_{i})\), see Lemma 7.2, it follows that \(\lambda_{\widehat{\mathbb{T}}}(\omega\circ\mathbb{E}_{i})=\rho_{i}\lambda_{ \widehat{\mathbb{T}}_{i}}(\omega)\).
We claim that \(\rho_{i}\) restricts to a contraction \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{T}_{i}))\to\mathrm{M}^{l}_{cb}(\mathrm{A }(\mathbb{T}))\). Indeed, take \(a\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{T}_{i}))\) and \(\omega\in\mathrm{L}^{1}(\widehat{\mathbb{T}})\). Using the observations from the previous paragraph,
\[\rho_{i}(a)\lambda_{\widehat{\mathbb{T}}}(\omega) =\rho_{i}\big{(}a\lambda_{\widehat{\mathbb{T}}_{i}}(\omega\circ \iota_{i})\big{)}\] \[=\rho_{i}\big{(}\lambda_{\widehat{\mathbb{T}}_{i}}(\Theta^{l}(a)_ {\ast}(\omega\circ\iota_{i}))\big{)}=\rho_{i}\big{(}\lambda_{\widehat{\mathbb{ T}}_{i}}(\omega\circ\iota_{i}\circ\Theta^{l}(a))\big{)}\] \[=\lambda_{\widehat{\mathbb{T}}}\big{(}\omega\circ\iota_{i}\circ \Theta^{l}(a)\circ\mathbb{E}_{i}\big{)}.\]
It follows that \(\rho_{i}(a)\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{T}))\) and
\[\Theta^{l}(\rho_{i}(a))=\iota_{i}\circ\Theta^{l}(a)\circ\mathbb{E}_{i}\in \mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{T}})),\]
which yields the claim. By the definition of \(\rho_{i}\) it is clear that \(\rho_{i}^{\ast}(\ell^{1}(\mathbb{T}))\subseteq\ell^{1}(\mathbb{T}_{i})\subseteq Q ^{l}(\mathrm{A}(\mathbb{T}_{i}))\), which shows that the induced map \(\rho_{i}:\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{T}_{i}))\to\mathrm{M}^{l}_{cb}( \mathrm{A}(\mathbb{T}))\) is weak\({}^{\ast}\)-weak\({}^{\ast}\)-continuous.
If \(\mathbb{T}_{i}\) has AP then the identity element \(\mathbb{1}\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{T}_{i}))\) is in the weak\({}^{\ast}\)-closure of \(\mathrm{c}_{00}(\mathbb{T}_{i})\) inside \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{T}_{i}))\). As \(\rho_{i}\) is weak\({}^{\ast}\)-weak\({}^{\ast}\)-continuous, it follows that \(\rho_{i}(\mathbb{1})\) is contained in the weak\({}^{\ast}\)-closure of \(\rho_{i}(\mathrm{c}_{00}(\mathbb{T}_{i}))\subseteq\mathrm{c}_{00}(\mathbb{T})\). So \(p_{i}=\rho_{i}(1)\), the projection corresponding to \(\mathrm{Irr}(\widehat{\mathbb{T}}_{i})\subseteq\mathrm{Irr}(\widehat{\mathbb{ T}})\), is contained in the weak\({}^{\ast}\)-closure of \(\mathrm{c}_{00}(\mathbb{T})\) inside \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{T}))\). Clearly we have \(\langle p_{i},\omega\rangle\xrightarrow[i\in I]{}\langle 1,\omega\rangle\) for all \(\omega\in\ell^{1}(\mathbb{T})\). Moreover we have \(\|p_{i}\|_{cb}=1\) since \(p_{i}\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{T}))\) with \(\Theta^{l}(p_{i})=\iota_{i}\circ\mathbb{E}_{i}\). Hence if all \(\mathbb{T}_{i}\) have AP we see that \(\mathbb{1}\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{T}))\) is contained in the weak\({}^{\ast}\)-closure of \(\mathrm{c}_{00}(\mathbb{T})\). This means that \(\mathbb{T}\) has AP.
If all \(\mathbb{T}_{i}\) have central AP then we additionally know that \(\mathbb{1}\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\mathbb{T}_{i}))\) is in the weak\({}^{\ast}\)-closure of \(\mathcal{Z}(\ell^{\infty}(\mathbb{T}_{i}))\cap\mathrm{c}_{00}(\mathbb{T}_{i})\), hence each \(p_{i}\) is in the weak\({}^{\ast}\)-closure of \(\mathcal{Z}(\ell^{\infty}(\mathbb{T}))\cap\mathrm{c}_{00}(\mathbb{T})\). It follows that \(\mathbb{T}\) has central AP.
### Free products
In this section we show that AP is preserved by the free product construction for discrete quantum groups. For classical groups this fact is probably known to experts, but we could not find a proof in the literature. Our proof is based on results of Ricard and Xu from [57] which we recall first.
#### 7.3.1. Ricard-Xu results
Let \((A_{i},\phi_{i})_{i\in I}\) be a family of unital \(\mathrm{C}^{*}\)-algebras with faithful states indexed by some set \(I\). Denote by \(\mathsf{H}_{i}\) the GNS Hilbert space for \(\phi_{i}\), and by \(\mathsf{H}_{i}^{\mathrm{op}}\) the Hilbert space obtained from \(A_{i}\) by completion with respect to the norm given by \(a\mapsto\phi_{i}(aa^{*})^{1/2}\). Then \(a\mapsto a^{*}\) extends to an antilinear isometry \(\mathsf{H}_{i}\to\mathsf{H}_{i}^{\mathrm{op}}\).
We write \(\mathcal{A}=\star_{i\in I}(A_{i},\phi_{i})\) for the _reduced unital free product_ of the family \((A_{i},\phi_{i})_{i\in I}\), and \(A\subseteq\mathcal{A}\) for its canonical dense unital \(\star\)-subalgebra (the algebraic unital free product), compare [5]. Next, for \(d\geq 0\), denote by \(\Sigma_{d}\subseteq A\) the subspace of length-\(d\) elements. That is, we have \(\Sigma_{0}=\mathbb{C}\mathbb{1}\), and if \(\hat{A}_{i}\) denotes the set of all \(a\in A_{i}\) with \(\phi_{i}(a)=0\), then \(\Sigma_{d}\) for \(d\geq 1\) is the subspace of \(A\) spanned by all elements of the form \(a_{1}\cdots a_{d}\) where \(a_{j}\in\hat{A}_{i_{j}}\) for each \(j\), and with \(i_{j}\neq i_{j+1}\) for \(1\leq j<d\). Moreover we let \(\mathcal{A}_{d}\subseteq\mathcal{A}\) be the norm closure of \(\Sigma_{d}\).
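A standard example, recorded here purely for orientation: take two copies of \(\mathrm{C}^{*}_{r}(\mathbb{Z})\) equipped with the canonical trace. Their reduced free product is \(\mathcal{A}\cong\mathrm{C}^{*}_{r}(\mathbb{F}_{2})\), and for a non-trivial reduced word \(w\) in \(\mathbb{F}_{2}=\langle a,b\rangle\) the unitary \(\lambda_{w}\) lies in \(\Sigma_{d}\), where \(d\) is the number of maximal blocks of the form \(a^{\pm k}\) or \(b^{\pm k}\) appearing in \(w\); for instance \(\lambda_{a^{2}b^{-3}a}\in\Sigma_{3}\).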
In the sequel we shall use two results from [57], the first one being the following.
**Lemma 7.5**.: _[_57_, Corollary 3.3]_ _For \(d\geq 0\), the natural projection \(A\to\Sigma_{d}\) onto length-\(d\) elements extends to a CB map \(\mathcal{P}_{d}\colon\mathcal{A}\to\mathcal{A}_{d}\) with \(\|\mathcal{P}_{d}\|_{cb}\leq\max(4d,1)\)._
The second fact which we need is a minor extension of [57, Lemma 4.10].
**Lemma 7.6**.: _Fix \(d\geq 1\). For \(i\in I\) and \(1\leq k\leq d\), let \(T_{i,k}\in\mathrm{CB}(A_{i})\) be linear maps which satisfy \(\phi_{i}\circ T_{i,k}=\lambda_{k}\phi_{i}\) for some \(\lambda_{k}\in\mathbb{C}\), and which extend to bounded maps \(\mathsf{H}_{i}\to\mathsf{H}_{i}\) and \(\mathsf{H}_{i}^{\mathrm{op}}\to\mathsf{H}_{i}^{\mathrm{op}}\). If_
\[K=(2d+1)\prod_{k=1}^{d}\sup_{i}\max\bigl{(}\|T_{i,k}\|_{cb},\|T_{i,k}\|_{ \mathrm{B}(\mathsf{H}_{i})},\|T_{i,k}\|_{\mathrm{B}(\mathsf{H}_{i}^{\mathrm{op }})}\bigr{)}<\infty\]
_then the natural map \(\Pi_{k}T_{i,k}\colon\Sigma_{d}\to\Sigma_{d}\) given by_
\[a_{1}\cdots a_{d}\mapsto T_{i_{1},1}(a_{1})\cdots T_{i_{d},d}(a_{d})\qquad(a_{ j}\in\hat{A}_{i_{j}},\;i_{j}\neq i_{j+1})\]
_extends to a CB map \(\mathcal{A}_{d}\to\mathcal{A}_{d}\) with CB norm bounded above by \(K\)._
Proof.: The only difference between this claim and [57, Lemma 4.10] is that [57, Lemma 4.10] has the stronger hypothesis that \(\phi_{i}\circ T_{i,k}=\phi_{i}\) for each \(i,k\). A close examination of the proof of [57, Lemma 4.10] shows that this hypothesis is only used to ensure that the map \(\Pi_{k}T_{i,k}\) is well-defined, because each \(T_{i,k}\) maps \(\hat{A}_{i}\) to itself. This condition remains true under our weaker hypothesis, and the rest of the proof of [57, Lemma 4.10] carries over without change.
#### 7.3.2. AP for free products
Let \(\mathbb{\Gamma}_{1},\mathbb{\Gamma}_{2}\) be discrete quantum groups and let \(\mathbb{\Gamma}=\mathbb{\Gamma}_{1}\star\mathbb{\Gamma}_{2}\) be their _free product_. Recall from [71] that this means in particular that \(\mathcal{A}=\mathrm{C}(\widehat{\mathbb{\Gamma}})\) is the unital reduced free product \(\mathrm{C}(\widehat{\mathbb{\Gamma}}_{1})\star\mathrm{C}(\widehat{\mathbb{ \Gamma}}_{2})\) with respect to Haar integrals, and \(h_{\widehat{\mathbb{\Gamma}}}\) is the free product state \(h_{\widehat{\mathbb{\Gamma}}_{1}}\star h_{\widehat{\mathbb{\Gamma}}_{2}}\). Moreover \(\mathrm{Pol}(\widehat{\mathbb{\Gamma}})\) is the algebraic unital free product of \(\mathrm{Pol}(\widehat{\mathbb{\Gamma}}_{1})\) and \(\mathrm{Pol}(\widehat{\mathbb{\Gamma}}_{2})\). The irreducible representations of \(\widehat{\mathbb{\Gamma}}\) are given as follows, see [71, Theorem 3.10]. Each \(\alpha\in\mathrm{Irr}(\widehat{\mathbb{\Gamma}})\) has a well-defined length \(\mathrm{len}(\alpha)\in\mathbb{Z}_{+}\). The trivial representation is the only representation of length \(0\), and for \(n\geq 1\) we have
\[\{\alpha\in\mathrm{Irr}(\widehat{\mathbb{\Gamma}})\,|\,\,\mathrm{len}(\alpha)=n \}=\{\alpha_{i_{1}}\boxtimes\cdots\boxtimes\alpha_{i_{n}}\,|\,\forall_{1\leq j \leq n}\,\alpha_{i_{j}}\in\mathrm{Irr}(\widehat{\mathbb{\Gamma}}_{i_{j}})\setminus \{e\},\;\forall_{1\leq j<n}\,i_{j}\neq i_{j+1}\}.\]
Again, here we denote by \(e\) the trivial representation of a compact quantum group. More explicitly, given \(\alpha\in\mathrm{Irr}(\widehat{\mathbb{\Gamma}}_{k})\) associated to the corepresentation matrix \(U^{\alpha}=[U^{\alpha}_{i,j}]_{i,j=1}^{\mathrm{dim}(\alpha)}\in\mathrm{M}_{ \mathrm{dim}(\alpha)}(\mathrm{C}(\widehat{\mathbb{\Gamma}}_{k}))\), by regarding \(\mathrm{C}(\widehat{\mathbb{\Gamma}}_{k})\) as a subalgebra of \(\mathrm{C}(\widehat{\mathbb{\Gamma}})\), we may regard \(U^{\alpha}\) as a corepresentation of \((\mathrm{C}(\widehat{\mathbb{\Gamma}}),\Delta_{\widehat{\mathbb{\Gamma}}})\). Then \(\boxtimes\) is just the usual tensor product of corepresentations.
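As an illustration, if \(\mathbb{\Gamma}_{1},\mathbb{\Gamma}_{2}\) are ordinary discrete groups, then all irreducible representations of \(\widehat{\mathbb{\Gamma}}\) are one-dimensional and are labelled by the elements of the free product group \(\mathbb{\Gamma}_{1}\ast\mathbb{\Gamma}_{2}\); the operation \(\boxtimes\) is simply multiplication of these group elements, and \(\operatorname{len}(\alpha)\) is the number of syllables of the corresponding reduced word.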
To ease notation, we will write \(h=h_{\widehat{\mathbb{\Gamma}}}\) in the sequel.
**Theorem 7.7**.: _Let \(\mathbb{\Gamma}_{1},\mathbb{\Gamma}_{2}\) be discrete quantum groups and let \(\mathbb{\Gamma}=\mathbb{\Gamma}_{1}\star\mathbb{\Gamma}_{2}\) be their free product. If \(\mathbb{\Gamma}_{1},\mathbb{\Gamma}_{2}\) have (central) AP, then \(\mathbb{\Gamma}\) has (central) AP._
Before we can prove Theorem 7.7 we need to establish some auxiliary results. For \(d\in\mathbb{N}\) let us define the (non-linear) map
\[\tilde{\Psi}_{d}\colon\,\bigoplus_{k=1}^{d}\ell^{\infty}(\mathbb{\Gamma}_{1})\oplus_{\infty}\ell^{\infty}(\mathbb{\Gamma}_{2})\to\ell^{\infty}(\mathbb{\Gamma}) \tag{7.1}\]
(here \(\oplus\) is the \(\ell^{\infty}\)-direct sum) via
\[\tilde{\Psi}_{d}((g_{1,k},g_{2,k})_{k=1}^{d}) =\big{(}\tilde{\Psi}_{d}((g_{1,k},g_{2,k})_{k=1}^{d})_{\alpha} \big{)}_{\alpha\in\operatorname{Irr}(\widehat{\mathbb{\Gamma}})},\qquad\text{where}\] \[\tilde{\Psi}_{d}((g_{1,k},g_{2,k})_{k=1}^{d})_{\alpha} =\begin{cases}0,&\text{len}(\alpha)\neq d,\\ g_{i_{1},1,\alpha_{1}}\otimes\cdots\otimes g_{i_{d},d,\alpha_{d}},&\alpha= \alpha_{1}\boxtimes\cdots\boxtimes\alpha_{d}\colon\alpha_{j}\in\operatorname{Irr}(\widehat{\mathbb{\Gamma}}_{i_{j}}).\end{cases}\]
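In words, for \(d=1\) the definition says that \(\tilde{\Psi}_{1}((g_{1,1},g_{2,1}))\) is the element of \(\ell^{\infty}(\mathbb{\Gamma})\) whose block at \(\alpha\in\operatorname{Irr}(\widehat{\mathbb{\Gamma}}_{i})\setminus\{e\}\) is the corresponding block \(g_{i,1,\alpha}\) of \(g_{i,1}\) (for \(i=1,2\)), and which vanishes on the block of the trivial representation as well as on all blocks of length at least \(2\).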
We consider
\[\mathcal{V}=\big{\{}(g_{1,k},g_{2,k})_{k=1}^{d}\in\bigoplus_{k=1}^{d}\mathrm{M }_{cb}^{l}(\mathrm{A}(\mathbb{\Gamma}_{1}))\oplus_{\infty}\mathrm{M}_{cb}^{l}( \mathrm{A}(\mathbb{\Gamma}_{2}))\,|\,\forall_{1\leq k\leq d}\,g_{1,k,e}=g_{2,k,e} \big{\}},\]
and write \(\Psi_{d}\) for the restriction of \(\tilde{\Psi}_{d}\) to \(\mathcal{V}\). Recall that \(\mathcal{P}_{d}:\mathcal{A}\to\mathcal{A}_{d}\) is induced by the projection onto elements of length \(d\).
**Lemma 7.8**.: _The image of \(\Psi_{d}\) is a subset of \(\mathrm{M}_{cb}^{l}(\mathrm{A}(\mathbb{\Gamma}))\), that is, we can regard \(\Psi_{d}\) as a map \(\mathcal{V}\to\mathrm{M}_{cb}^{l}(\mathrm{A}(\mathbb{\Gamma}))\). Furthermore, \(\mathcal{P}_{d}\) extends to a weak\({}^{*}\)-weak\({}^{*}\)-continuous CB map \(\mathrm{L}^{\infty}(\widehat{\mathbb{\Gamma}})\to\overline{\mathcal{A}_{d}}^{w^{*}}\)._
Proof.: Fix \((g_{1,k},g_{2,k})_{k=1}^{d}\in\mathcal{V}\). Proposition 4.12 shows that each \(\Theta^{l}(g_{i,k})\) extends to a bounded linear map on \(\mathrm{L}^{2}(\widehat{\mathbb{\Gamma}}_{i})\) with norm \(\|S_{\mathbb{\Gamma}_{i}}^{-1}(g_{i,k})\|\). Next, using Proposition 4.9 and Proposition 4.12, for \(x\in\mathrm{L}^{\infty}(\widehat{\mathbb{\Gamma}}_{i})\), we have
\[\|\Theta^{l}(g_{i,k})(x)\|_{\mathrm{L}^{2}(\widehat{\mathbb{\Gamma}}_{i})^{\mathrm{op}}}=\|\Theta^{l}(g_{i,k})(x)^{*}\|_{2}=\|\Theta^{l}(g_{i,k})^{\dagger}(x^{*})\|_{2}=\|\Theta^{l}(S_{\mathbb{\Gamma}_{i}}(g_{i,k}^{*}))(x^{*})\|_{2}\] \[\leq\|S_{\mathbb{\Gamma}_{i}}^{-1}(S_{\mathbb{\Gamma}_{i}}(g_{i,k}^{*}))\|\|x^{*}\|_{2}=\|g_{i,k}^{*}\|\,\|x\|_{\mathrm{L}^{2}(\widehat{\mathbb{\Gamma}}_{i})^{\mathrm{op}}}=\|g_{i,k}\|\,\|x\|_{\mathrm{L}^{2}(\widehat{\mathbb{\Gamma}}_{i})^{\mathrm{op}}}.\]
Hence \(\Theta^{l}(g_{i,k})\) extends to a bounded linear map on \(\mathrm{L}^{2}(\widehat{\mathbb{\Gamma}}_{i})^{\mathrm{op}}\) with norm bounded by \(\|g_{i,k}\|\).
Let us consider \(T_{i,k}=\Theta^{l}(g_{i,k})\in\mathrm{CB}(\mathrm{C}(\widehat{\mathbb{\Gamma}}_{i}))\) for \(1\leq k\leq d\). Then, according to Lemma 7.5 and Lemma 7.6, we obtain a CB map \(\Upsilon\) on \(\mathrm{C}(\widehat{\mathbb{\Gamma}})\) acting by \(0\) on elements of length \(d^{\prime}\neq d\), and on elements of length \(d\) by
\[a_{1}\cdots a_{d}\mapsto\Theta^{l}(g_{i_{1},1})(a_{1})\cdots\Theta^{l}(g_{i_{d},d})(a_{d}),\]
where \(a_{j}\in\mathrm{C}(\widehat{\mathbb{\Gamma}}_{i_{j}})\), and the CB norm of \(\Upsilon\) is bounded above by
\[4d(2d+1)\prod_{k=1}^{d}\max_{i\in\{1,2\}}\max(\|g_{i,k}\|_{cb},\|S_{\mathbb{\Gamma}_{i}}^{-1}(g_{i,k})\|,\|g_{i,k}\|).\]
Since \(\|\cdot\|\leq\|\cdot\|_{cb}\) on \(\mathrm{M}_{cb}^{l}(\mathrm{A}(\mathbb{\Gamma}_{i}))\) we have, using Lemma 4.8 and Proposition 4.9,
\[\|S_{\mathbb{\Gamma}_{i}}^{-1}(g_{i,k})\|=\|S_{\mathbb{\Gamma}_{i}}^{-1}(g_{i,k})^{*}\|\leq\|S_{\mathbb{\Gamma}_{i}}^{-1}(g_{i,k})^{*}\|_{cb}=\|g_{i,k}\|_{cb},\]
and hence we get in fact
\[\|\Upsilon\|_{cb}\leq 4d(2d+1)\prod_{k=1}^{d}\max_{i\in\{1,2\}}\|g_{i,k}\|_{cb}. \tag{7.2}\]
We claim that \(\Upsilon\) extends to a normal map on \(\mathrm{L}^{\infty}(\widehat{\mathbb{\Gamma}})\). For this it suffices to show that \(\Upsilon^{*}\) preserves \(\mathrm{L}^{1}(\widehat{\mathbb{\Gamma}})\subseteq\mathrm{C}(\widehat{\mathbb{\Gamma}})^{*}\). Indeed, if this is the case, then the extension may be defined as \((\Upsilon^{*}|_{\mathrm{L}^{1}(\widehat{\mathbb{\Gamma}})})^{*}\in\mathrm{CB}^{\sigma}(\mathrm{L}^{\infty}(\widehat{\mathbb{\Gamma}}))\).
Thus take \(\rho\in\mathrm{L}^{1}(\widehat{\mathbb{\Gamma}})\). Since \(\Upsilon\) is bounded and \(\mathrm{L}^{1}(\widehat{\mathbb{\Gamma}})\subseteq\mathrm{C}(\widehat{\mathbb{\Gamma}})^{*}\) is norm-closed, it is enough to consider \(\rho=h(a\cdot)\) for \(a\in\mathrm{L}^{\infty}(\widehat{\mathbb{\Gamma}})\). Take \(b\in\mathrm{C}(\widehat{\mathbb{\Gamma}})\) and denote by \(\Upsilon_{2}\) the extension of \(\Upsilon\) to a
bounded linear map on \(\mathrm{L}^{2}(\widehat{\Gamma})\). Note that this extension exists since the GNS Hilbert space for \(h\) is
\[\mathrm{L}^{2}(\widehat{\Gamma})=\mathbb{C}\Omega\oplus\bigoplus_{d=1}^{\infty} \bigoplus_{i_{1}\neq\cdots\neq i_{d}}\mathrm{L}^{2}(\widehat{\Gamma}_{i_{1}})^ {\circ}\otimes\cdots\otimes\mathrm{L}^{2}(\widehat{\Gamma}_{i_{d}})^{\circ},\]
where \(\mathrm{L}^{2}(\widehat{\Gamma}_{i})^{\circ}\) is the subspace of \(\mathrm{L}^{2}(\widehat{\Gamma}_{i})\) orthogonal to \(\Lambda_{h_{1}}(1)\) (see [5, Section 2]), and \(\Theta^{l}(g_{i,k})\) has bounded extension to \(\mathrm{L}^{2}(\widehat{\Gamma}_{i})\) by Proposition 4.12. We have
\[\Upsilon^{*}(\rho)(b)=h(a\Upsilon(b))=\langle\Lambda_{h}(a^{*})\,|\,\Lambda_{h} (\Upsilon(b))\rangle=\langle\Lambda_{h}(a^{*})\,|\,\Upsilon_{2}\Lambda_{h}(b) \rangle=\langle\Upsilon_{2}^{*}\Lambda_{h}(a^{*})\,|\,\Lambda_{h}(b)\rangle.\]
Hence \(\Upsilon^{*}(\rho)=\omega_{\Upsilon_{2}^{*}\Lambda_{h}(a^{*}),\Lambda_{h}(1)} \in\mathrm{L}^{1}(\widehat{\Gamma})\). Let us denote the resulting normal extension of \(\Upsilon\) to \(\mathrm{L}^{\infty}(\widehat{\Gamma})\) with the same symbol.
In particular, taking \(g_{i,k}=1\) for all \(1\leq k\leq d\) in the above discussion shows that the projection \(\mathcal{P}_{d}:\mathrm{C}(\widehat{\Gamma})\to\mathcal{A}_{d}\) extends to a normal CB map \(\mathrm{L}^{\infty}(\widehat{\Gamma})\to\overline{\mathcal{A}_{d}}^{w^{*}}\).
We finally identify \(\Upsilon\) with the adjoint of a centraliser, namely we claim that \(\Psi_{d}((g_{1,k},g_{2,k})_{k=1}^{d})\in\mathrm{M}^{l}_{cb}(\Lambda(\Gamma))\) and \(\Upsilon=\Theta^{l}(\Psi_{d}((g_{1,k},g_{2,k})_{k=1}^{d}))\). For \(\alpha=\alpha_{1}\boxtimes\cdots\boxtimes\alpha_{d}\in\mathrm{Irr}(\widehat{ \Gamma})\) let us write \(i_{j}\in\{1,2\}\) for indices such that \(\alpha_{j}\in\mathrm{Irr}(\widehat{\Gamma}_{i_{j}})\setminus\{e\},\,i_{j} \neq i_{j+1}\). Furthermore, write each matrix block as \(g_{i,k,\alpha}=[g_{i,k,\alpha,m,n}]_{m,n=1}^{\mathrm{dim}(\alpha)}=\sum_{m,n= 1}^{\mathrm{dim}(\alpha)}g_{i,k,\alpha,m,n}e_{m,n}^{\alpha}\), where \(\{e_{m,n}^{\alpha}\}_{m,n=1}^{\mathrm{dim}(\alpha)}\) are the matrix units in \(\mathrm{B}(\mathsf{H}_{\alpha})\). Choose arbitrary \(\omega\in h(\mathrm{Pol}(\widehat{\Gamma})\cdot)\subseteq\mathrm{L}^{1}( \widehat{\Gamma})\). We can calculate \(\Psi_{d}((g_{1,k},g_{2,k})_{k=1}^{d})\lambda_{\widehat{\Gamma}}(\omega)\) as follows:
\[\Psi_{d}((g_{1,k},g_{2,k})_{k=1}^{d})\lambda_{\widehat{\Gamma}}(\omega)\] \[=\sum_{d^{\prime}=0}^{\infty}\sum_{\alpha=\alpha_{1}\boxtimes\cdots\boxtimes\alpha_{d^{\prime}}}\sum_{m_{1},n_{1}=1}^{\mathrm{dim}(\alpha_{1})}\cdots\sum_{m_{d^{\prime}},n_{d^{\prime}}=1}^{\mathrm{dim}(\alpha_{d^{\prime}})}\langle U_{(m_{1},\ldots,m_{d^{\prime}}),(n_{1},\ldots,n_{d^{\prime}})}^{\alpha},\omega\rangle\,\Psi_{d}((g_{1,k},g_{2,k})_{k=1}^{d})(e_{m_{1},n_{1}}^{\alpha_{1}}\otimes\cdots\otimes e_{m_{d^{\prime}},n_{d^{\prime}}}^{\alpha_{d^{\prime}}})\] \[=\sum_{\alpha=\alpha_{1}\boxtimes\cdots\boxtimes\alpha_{d}}\sum_{m_{1},n_{1}=1}^{\mathrm{dim}(\alpha_{1})}\cdots\sum_{m_{d},n_{d}=1}^{\mathrm{dim}(\alpha_{d})}\langle U_{(m_{1},\ldots,m_{d}),(n_{1},\ldots,n_{d})}^{\alpha},\omega\rangle\,(g_{i_{1},1,\alpha_{1}}e_{m_{1},n_{1}}^{\alpha_{1}}\otimes\cdots\otimes g_{i_{d},d,\alpha_{d}}e_{m_{d},n_{d}}^{\alpha_{d}})\] \[=\sum_{\alpha=\alpha_{1}\boxtimes\cdots\boxtimes\alpha_{d}}\sum_{m_{1},k_{1},n_{1}=1}^{\mathrm{dim}(\alpha_{1})}\cdots\sum_{m_{d},k_{d},n_{d}=1}^{\mathrm{dim}(\alpha_{d})}\langle U_{m_{1},n_{1}}^{\alpha_{1}}\cdots U_{m_{d},n_{d}}^{\alpha_{d}},\omega\rangle\,g_{i_{1},1,\alpha_{1},k_{1},m_{1}}\cdots g_{i_{d},d,\alpha_{d},k_{d},m_{d}}\,(e_{k_{1},n_{1}}^{\alpha_{1}}\otimes\cdots\otimes e_{k_{d},n_{d}}^{\alpha_{d}}).\]
so that \(\mathcal{V}=\bigoplus_{k=1}^{d}\mathcal{V}_{0}\); recall that \(\mathcal{V}_{0}\subseteq\mathrm{M}^{l}_{cb}(\mathrm{A}(\Gamma_{1}))\oplus_{\infty}\mathrm{M}^{l}_{cb}(\mathrm{A}(\Gamma_{2}))\) consists of the pairs \((g_{1},g_{2})\) whose components at the trivial representation agree, \(g_{1,e}=g_{2,e}\).
Note that \(\bigoplus_{k=1}^{d}\mathrm{M}^{l}_{cb}(\mathrm{A}(\Gamma_{1}))\oplus_{\infty} \mathrm{M}^{l}_{cb}(\mathrm{A}(\Gamma_{2}))\) is a dual Banach space with predual given by the \(\ell^{1}\)-direct sum \(\bigoplus_{k=1}^{d}Q^{l}(\mathrm{A}(\Gamma_{1}))\oplus_{1}Q^{l}(\mathrm{A}( \Gamma_{2}))\). We can restrict the corresponding weak\({}^{*}\)-topology to \(\mathcal{V}\), and similarly for \(\mathcal{V}_{0}\).
**Lemma 7.9**.: \(\mathcal{V}\) _is weak\({}^{*}\)-closed, and \(\Psi_{d}\) is separately weak\({}^{*}\)-weak\({}^{*}\)-continuous._
Proof.: We show that \(\Psi_{d}\) is separately weak\({}^{*}\)-weak\({}^{*}\)-continuous. We need to show that for any \(1\leq k\leq d\) and fixed elements \((g_{1,k^{\prime}},g_{2,k^{\prime}})\in\mathcal{V}_{0}\) for \(1\leq k^{\prime}\leq d\) and \(k^{\prime}\neq k\), the map
\[\mathcal{V}_{0}\ni(g_{1,k},g_{2,k})\mapsto\Psi_{d}((g_{1,k^{\prime}},g_{2,k^{\prime}})_{k^{\prime}=1}^{d})\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\Gamma))\]
is weak\({}^{*}\)-weak\({}^{*}\)-continuous.
Assume that
\[(g_{1,k}^{\lambda},g_{2,k}^{\lambda})\xrightarrow[\lambda\in\Lambda]{w^{*}}(g _{1,k},g_{2,k})\quad\text{ in }\quad\mathcal{V}_{0}.\]
By definition of the restricted topology on \(\mathcal{V}_{0}\), we have
\[(g_{1,k}^{\lambda},g_{2,k}^{\lambda})\xrightarrow[\lambda\in\Lambda]{w^{*}}(g _{1,k},g_{2,k})\quad\text{ in }\quad\mathrm{M}^{l}_{cb}(\mathrm{A}(\Gamma_{1}))\oplus_{\infty} \mathrm{M}^{l}_{cb}(\mathrm{A}(\Gamma_{2}))\]
and so in particular,
\[(g_{1,k}^{\lambda},g_{2,k}^{\lambda})\xrightarrow[\lambda\in\Lambda]{w^{*}}(g _{1,k},g_{2,k})\quad\text{ in }\quad\ell^{\infty}(\Gamma_{1})\oplus_{\infty}\ell^{\infty}(\Gamma_{2}).\]
Take \(\Omega\in Q^{l}(\mathrm{A}(\Gamma))\). Since \(Q^{l}(\mathrm{A}(\Gamma))\) is the norm closure of \(\ell^{1}(\Gamma)\) in \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\Gamma))^{*}\), we can find a sequence \((\Omega_{n})_{n\in\mathbb{N}}\) in \(\ell^{1}(\Gamma)\subseteq Q^{l}(\mathrm{A}(\Gamma))\) which converges in norm to \(\Omega\). By the description (7.1) of \(\tilde{\Psi}_{d}\), we see that the map
\[\ell^{\infty}(\Gamma_{1})\oplus_{\infty}\ell^{\infty}(\Gamma_{2})\ni(g_{1,k}^{\prime},g_{2,k}^{\prime})\mapsto\tilde{\Psi}_{d}\big{(}(g_{1,k^{\prime}},g_{2,k^{\prime}})_{k^{\prime}=1}^{k-1},(g_{1,k}^{\prime},g_{2,k}^{\prime}),(g_{1,k^{\prime}},g_{2,k^{\prime}})_{k^{\prime}=k+1}^{d}\big{)}\in\ell^{\infty}(\Gamma)\]
is weak\({}^{*}\)-weak\({}^{*}\)-continuous, hence the linear functional
\[\Omega_{n}\circ\tilde{\Psi}_{d}\big{(}(g_{1,k^{\prime}},g_{2,k^{\prime}})_{k^ {\prime}=1}^{k-1},\ \cdot\,,(g_{1,k^{\prime}},g_{2,k^{\prime}})_{k^{\prime}=k+1}^{d}\big{)}\]
is contained in \(\ell^{1}(\Gamma_{1})\oplus_{1}\ell^{1}(\Gamma_{2})\) for each \(n\in\mathbb{N}\). Consider the projection
\[\mathbb{P}\colon\mathrm{M}^{l}_{cb}(\mathrm{A}(\Gamma_{1}))\oplus_{\infty}\mathrm{M}^{l}_{cb}(\mathrm{A}(\Gamma_{2}))\to\mathcal{V}_{0}\]
given by
\[\mathbb{P}(g_{1},g_{2})=(g_{1}^{\prime},g_{2}^{\prime})\ \text{ where }\ g_{i,\alpha_{i}} ^{\prime}=g_{i,\alpha_{i}}\,(\alpha_{i}\in\mathrm{Irr}(\widehat{\Gamma}_{i}) \setminus\{e\}),\ g_{1,e}^{\prime}=g_{2,e}^{\prime}=\tfrac{g_{1,e}+g_{2,e}}{2}.\]
This is linear and continuous since the linear functional \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\Gamma_{i}))\ni g\mapsto g_{e}\in\mathbb{C}\) is continuous, and the image of \(\mathbb{P}\) is contained in the space \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\Gamma_{1}))\oplus_{\infty}\mathrm{M}^{l}_{cb}(\mathrm{A}(\Gamma_{2}))\) because we alter each \(g_{i}\) only in one entry. Furthermore, \(\mathbb{P}\) is weak\({}^{*}\)-weak\({}^{*}\)-continuous. Indeed, its adjoint is given by an analogous formula on \(\ell^{1}(\Gamma_{1})\oplus_{1}\ell^{1}(\Gamma_{2})\). The existence of \(\mathbb{P}\) shows that \(\mathcal{V}_{0}\) is weak\({}^{*}\)-closed, and hence also \(\mathcal{V}\) is weak\({}^{*}\)-closed.
Consider the functional
\[\Omega_{n}\circ\Psi_{d}\big{(}(g_{1,k^{\prime}},g_{2,k^{\prime}})_{k^{\prime}= 1}^{k-1},\ \cdot\,,(g_{1,k^{\prime}},g_{2,k^{\prime}})_{k^{\prime}=k+1}^{d}\big{)}\circ \mathbb{P}\]
living in \(\mathrm{M}^{l}_{cb}(\mathrm{A}(\Gamma_{1}))^{*}\oplus_{1}\mathrm{M}^{l}_{cb}(\mathrm{A}(\Gamma_{2}))^{*}\); in fact, by the preceding discussion it lies in \(\ell^{1}(\Gamma_{1})\oplus_{1}\ell^{1}(\Gamma_{2})\). Because of the bound (7.2) we can estimate
\[\begin{split}&\big{\|}(\Omega-\Omega_{n})\circ\Psi_{d}\big{(}(g_{1, k^{\prime}},g_{2,k^{\prime}})_{k^{\prime}=1}^{k-1},\ \cdot\,,(g_{1,k^{\prime}},g_{2,k^{\prime}})_{k^{\prime}=k+1}^{d}\big{)}\circ \mathbb{P}\big{\|}\\ &=\sup_{(g_{1,k}^{\prime},g_{2,k}^{\prime})\in(\mathrm{M}^{l}_{cb}( \mathrm{A}(\mathbb{T}_{1}))\oplus_{\infty}\mathrm{M}^{l}_{cb}(\mathrm{A}( \mathbb{T}_{2})))_{1}}\\ &\qquad\big{|}(\Omega-\Omega_{n})\circ\Psi_{d}\big{(}(g_{1,k^{ \prime}},g_{2,k^{\prime}})_{k^{\prime}=1}^{k-1},\ \cdot\,,(g_{1,k^{\prime}},g_{2,k^{\prime}})_{k^{\prime}=k+1}^{d}\big{)}\circ \mathbb{P}(g_{1,k}^{\prime},g_{2,k}^{\prime})\big{|}\\ &\leq\sup_{(g_{1,k}^{\prime},g_{2,k}^{\prime})\in(\mathcal{V}_{0})_{ |\mathbb{P}|}}\big{|}(\Omega-\Omega_{n})\big{(}\Psi_{d}\big{(}(g_{1,k^{ \prime}},g_{2,k^{\prime}})_{k^{\prime}=1}^{k-1},(g_{1,k}^{\prime},g_{2,k}^{ \prime}),(g_{1,k^{\prime}},g_{2,k^{\prime}})_{k^{\prime}=k+1}^{d}\big{)} \big{)}\big{|}\\ &\leq\|\Omega-\Omega_{n}\|\ \|\mathbb{P}\|4d(2d+1)\prod_{k^{\prime}=1,k^{ \prime}\neq k}^{d}\max_{i\in\{1,2\}}\|g_{i,k^{\prime}}\|_{cb}\xrightarrow[n\to\infty]{}0. \end{split}\]
This shows
\[\Omega\circ\Psi_{d}\big{(}(g_{1,k^{\prime}},g_{2,k^{\prime}})_{k^{\prime}=1}^{k-1},\ \cdot,(g_{1,k^{\prime}},g_{2,k^{\prime}})_{k^{\prime}=k+1}^{d}\big{)}\circ\mathbb{P}\in Q^{l}(\mathrm{A}(\Gamma_{1}))\oplus_{1}Q^{l}(\mathrm{A}(\Gamma_{2})),\]
and hence
\[\Omega\circ\Psi_{d}\big{(}(g_{1,k^{\prime}},g_{2,k^{\prime}})_{k^{\prime}=1}^{ k-1},(g_{1,k}^{\lambda},g_{2,k}^{\lambda}),(g_{1,k^{\prime}},g_{2,k^{\prime}})_{k^ {\prime}=k+1}^{d}\big{)}\xrightarrow[]{}_{\lambda\in\Lambda}\Omega\circ\Psi_{d }\big{(}(g_{1,k^{\prime}},g_{2,k^{\prime}})_{k^{\prime}=1}^{d}\big{)}\]
as desired. This proves that \(\Psi_{d}\) is separately weak\({}^{*}\)-weak\({}^{*}\)-continuous.
For \(d\geq 1\) consider \(p_{d}\in\ell^{\infty}(\Gamma)\) defined via \(p_{d}=(p_{d,\alpha})_{\alpha\in\operatorname{Irr}(\widehat{\Gamma})}\) where
\[p_{d,\alpha}=\begin{cases}0,&\text{length of }\alpha\neq d,\\ \mathbb{1},&\text{length of }\alpha=d.\end{cases}\]
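For orientation (this illustration is not needed for the arguments), suppose that \(\Gamma_{1}\) and \(\Gamma_{2}\) are ordinary discrete groups, so that \(\operatorname{Irr}(\widehat{\Gamma})\) consists of one-dimensional representations labelled by the elements of the free product \(\Gamma=\Gamma_{1}\star\Gamma_{2}\). Then \(p_{d}\) is simply the indicator function of the words of block length \(d\),
\[p_{d}(\gamma)=\begin{cases}1,&\gamma=g_{1}g_{2}\cdots g_{d}\text{ in reduced form, consecutive letters from different factors},\\ 0,&\text{otherwise},\end{cases}\]
and in particular \(p_{0}=\delta_{e}\).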
**Lemma 7.10**.: \(p_{d}\in\operatorname{M}_{cb}^{l}(\mathrm{A}(\Gamma))\) _and \(\Theta^{l}(p_{d})=\mathcal{P}_{d}\)._
Proof.: We already know that \(\mathcal{P}_{d}\) is a weak\({}^{*}\)-continuous map on \(\mathrm{L}^{\infty}(\widehat{\Gamma})\). Take a linear functional \(\omega\in h(\operatorname{Pol}(\widehat{\Gamma})\cdot)\subseteq\operatorname{L}^{1}(\widehat{\Gamma})\). Then we get
\[(\omega\otimes\operatorname{id})\big{(}(\mathbb{1}\otimes p_{d})\mathrm{W}^{\widehat{\Gamma}}\big{)}=\sum_{d^{\prime}=1}^{\infty}\sum_{\alpha\in\operatorname{Irr}(\widehat{\Gamma})\colon\operatorname{len}(\alpha)=d^{\prime}}\sum_{i,j=1}^{\dim(\alpha)}(\omega\otimes\operatorname{id})\big{(}U_{i,j}^{\alpha}\otimes p_{d}\,e_{i,j}^{\alpha}\big{)}\] \[=\sum_{\alpha\in\operatorname{Irr}(\widehat{\Gamma})\colon\operatorname{len}(\alpha)=d}\sum_{i,j=1}^{\dim(\alpha)}(\omega\otimes\operatorname{id})\big{(}U_{i,j}^{\alpha}\otimes e_{i,j}^{\alpha}\big{)}\] \[=\sum_{d^{\prime}=1}^{\infty}\sum_{\alpha\in\operatorname{Irr}(\widehat{\Gamma})\colon\operatorname{len}(\alpha)=d^{\prime}}\sum_{i,j=1}^{\dim(\alpha)}(\omega\otimes\operatorname{id})\big{(}\mathcal{P}_{d}(U_{i,j}^{\alpha})\otimes e_{i,j}^{\alpha}\big{)}=(\omega\circ\mathcal{P}_{d}\otimes\operatorname{id})\mathrm{W}^{\widehat{\Gamma}},\]
noting that all sums in this calculation are finite, because of the form of \(\omega\). As such \(\omega\) are dense, this yields \((\mathbb{1}\otimes p_{d})\mathrm{W}^{\widehat{\Gamma}}=(\mathcal{P}_{d}\otimes\operatorname{id})\mathrm{W}^{\widehat{\Gamma}}\) as required.
Finally, we are ready to prove that \(\mathrm{AP}\) is preserved by taking free products of discrete quantum groups.
Proof of Theorem 7.7.: Assume that \(\Gamma_{1},\Gamma_{2}\) have \(\mathrm{AP}\) and choose families \((f_{i,\lambda})_{\lambda\in\Lambda_{i}}\) in \(\mathrm{c}_{00}(\Gamma_{i})\) converging to \(\mathbb{1}\) in \((\operatorname{M}_{cb}^{l}(\mathrm{A}(\Gamma_{i})),w^{*})\). Due to Proposition 6.5 we may assume without loss of generality that each \(\Theta^{l}(f_{i,\lambda})\) is unit preserving. As in Remark 6.2, it then follows that \(\Theta^{l}(f_{i,\lambda})\) preserves the Haar integral on \(\widehat{\Gamma}_{i}\), and that \(f_{i,\lambda,e}=1\) for all \(i\in\{1,2\},\lambda\in\Lambda_{i}\).
Fix \(d\in\mathbb{N}\). We shall first show that \(p_{d}\in\overline{\mathrm{c}_{00}(\Gamma)}^{w^{*}}\subseteq\operatorname{M}_{cb}^{l}(\mathrm{A}(\Gamma))\). To do this, we will consider a net of the form
\[(f_{1,\lambda_{1,k}},f_{2,\lambda_{2,k}})_{k=1}^{d}\in\mathcal{V}.\]
where each \(\lambda_{i,k}\in\Lambda_{i}\) for \(i=1,2\) and \(1\leq k\leq d\). Lemma 7.9 gives us
\[\Psi_{d}\big{(}(f_{1,\lambda_{1,k}},f_{2,\lambda_{2,k}})_{k=1}^{d}\big{)}\in\operatorname{M}_{cb}^{l}(\mathrm{A}(\Gamma)).\]
In fact, using the definition of \(\tilde{\Psi}_{d}\), we see that these multipliers are in \(\mathrm{c}_{00}(\Gamma)\) since all of the \(f_{i,\lambda}\) are finitely supported.
We first consider the case when we keep \(\lambda_{i,k}\) fixed, for \(k\geq 2\). Since \(\Psi_{d}\) is separately weak\({}^{*}\)-weak\({}^{*}\)-continuous, by Lemma 7.9, we have
\[\Psi_{d}\big{(}(f_{1,\lambda_{1,1}},f_{2,\lambda_{2,1}}),(f_{1,\lambda_{1,k}},f_{2,\lambda_{2,k}})_{k=2}^{d}\big{)}\xrightarrow[\lambda_{1,1},\lambda_{2,1}]{w^{*}}\Psi_{d}\big{(}(\mathbb{1},\mathbb{1}),(f_{1,\lambda_{1,k}},f_{2,\lambda_{2,k}})_{k=2}^{d}\big{)}.\]
We now repeat this argument in the second variable, and so forth, and using that \(\overline{\overline{\mathrm{c}_{00}(\Gamma)}^{w^{*}}}^{\,w^{*}}=\overline{\mathrm{c}_{00}(\Gamma)}^{w^{*}}\), we obtain
\[p_{d}=\Psi_{d}((\mathbb{1},\mathbb{1})_{k=1}^{d})\in\overline{\mathrm{c}_{00}(\Gamma)}^{w^{*}}\subseteq\operatorname{M}_{cb}^{l}(\mathrm{A}(\Gamma)).\]
Clearly we also have \(p_{0}=(\delta_{\alpha,e}\mathbb{1})_{\alpha\in\operatorname{Irr}(\widehat{\Gamma})}\in\mathrm{c}_{00}(\Gamma)\subseteq\overline{\mathrm{c}_{00}(\Gamma)}^{w^{*}}\).
We now show that \(\mathbb{1}\in\overline{\mathrm{c}_{00}(\Gamma)}^{w^{*}}\). Consider
\[T_{n}=\sum_{d=0}^{n}(1-\tfrac{1}{\sqrt{n}})^{d}p_{d}\in\overline{\mathrm{c}_{00}( \Gamma)}^{w^{*}}\subseteq\mathrm{M}_{cb}^{l}(\mathrm{A}(\Gamma))\qquad(n\in \mathbb{N}).\]
According to [57, Proposition 3.5], \(\lim_{n\to\infty}\|T_{n}\|_{cb}=1\) and \((\Theta^{l}(T_{n}))_{n\in\mathbb{N}}\) converges pointwise to the identity on \(\mathrm{C}(\widehat{\Gamma})\). Take \(x\in\mathrm{C}(\widehat{\Gamma})\odot\mathcal{K}(\mathsf{H}),\omega\in \mathrm{L}^{1}(\widehat{\Gamma})\odot\mathrm{B}(\mathsf{H})_{*}\) for a separable Hilbert space \(\mathsf{H}\). Then we obtain
\[\langle T_{n}-\mathbb{1},\Omega_{x,\omega}\rangle=\langle\big{(}(\Theta^{l}(T_ {n})-\mathrm{id})\otimes\mathrm{id}\big{)}x,\omega\rangle\xrightarrow[n\to\infty ]{}0,\]
and since \((T_{n})_{n\in\mathbb{N}}\) is uniformly bounded in CB norm, the same holds for general \(x\in\mathrm{C}(\widehat{\Gamma})\otimes\mathcal{K}(\mathsf{H}),\omega\in \mathrm{L}^{1}(\widehat{\Gamma})\widehat{\otimes}\mathrm{B}(\mathsf{H})_{*}\). By Proposition 3.9, this shows that \(T_{n}\xrightarrow[n\to\infty]{}\mathbb{1}\) weak\({}^{*}\) in \(\mathrm{M}_{cb}^{l}(\mathrm{A}(\Gamma))\). As each \(p_{d}\) is in the weak\({}^{*}\)-closure of \(\mathrm{c}_{00}(\Gamma)\), the same is true of each \(T_{n}\), and hence we conclude that \(\mathbb{1}\) is in the weak\({}^{*}\)-closure of \(\mathrm{c}_{00}(\Gamma)\), showing that \(\Gamma\) has AP.
If \(f_{i,\lambda}\in\mathcal{Z}(\ell^{\infty}(\Gamma_{i}))\cap\mathrm{c}_{00}(\Gamma_{i})\) for each \(i,\lambda\), then \(\Psi_{d}((f_{1,\lambda_{1,k}},f_{2,\lambda_{2,k}})_{k=1}^{d})\) is also central. Consequently, if \(\Gamma_{1},\Gamma_{2}\) have central AP then so does \(\Gamma=\Gamma_{1}\star\Gamma_{2}\).
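As an aside, and purely for orientation, it may help to note what the elements \(T_{n}\) appearing in the proof look like in the classical situation: if \(\Gamma_{1},\Gamma_{2}\) are ordinary discrete groups and \(|\gamma|\) denotes the block length of a reduced word \(\gamma\in\Gamma=\Gamma_{1}\star\Gamma_{2}\), then
\[T_{n}(\gamma)=\Big{(}1-\tfrac{1}{\sqrt{n}}\Big{)}^{|\gamma|}\ \text{ for }|\gamma|\leq n,\qquad T_{n}(\gamma)=0\ \text{ for }|\gamma|>n,\]
that is, the \(T_{n}\) are truncated radial functions of the type familiar from multiplier theory on free products of groups; the quoted result [57, Proposition 3.5] may be regarded as the quantum counterpart of the classical complete boundedness estimates for such radial multipliers.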
**Corollary 7.11**.: Let \((\Gamma_{i})_{i\in I}\) be a family of discrete quantum groups with (central) AP. Then the free product \(\Gamma=\star_{i\in I}\Gamma_{i}\) has (central) AP.
Proof.: If \(I\) is finite the claim follows from Theorem 7.7 by induction. In the general case, for any finite (nonempty) set \(F\subseteq I\), the free product \(\star_{i\in F}\Gamma_{i}\) is a quantum subgroup of \(\star_{i\in I}\Gamma_{i}\) in a natural way. Moreover \((\star_{i\in F}\Gamma_{i})_{F\subseteq I}\) forms a directed system of discrete quantum groups with injective connecting maps over the directed set of finite subsets of \(I\), compare Definition 7.3, and \(\star_{i\in I}\Gamma_{i}=\varinjlim_{F\subseteq I}\star_{i\in F}\Gamma_{i}\). Since \(\star_{i\in F}\Gamma_{i}\) has (central) AP, the claim follows from Proposition 7.4.
### Double crossed products
In this section we study how the approximation property behaves with respect to the double crossed product construction. This contains the Drinfeld double of a locally compact quantum group as a special case.
#### 7.4.1. Preliminaries
We start by recalling some definitions, following the conventions in [6].
A _matching_ between two locally compact quantum groups \(\mathbb{G}_{1},\mathbb{G}_{2}\) is a faithful normal \(\star\)-homomorphism \(\mathrm{m}\colon\mathrm{L}^{\infty}(\mathbb{G}_{1})\bar{\otimes}\,\mathrm{L}^ {\infty}(\mathbb{G}_{2})\to\mathrm{L}^{\infty}(\mathbb{G}_{1})\bar{\otimes}\, \mathrm{L}^{\infty}(\mathbb{G}_{2})\) satisfying
\[(\Delta_{1}\otimes\mathrm{id})\,\mathrm{m}=\mathrm{m}_{23}\,\mathrm{m}_{13}( \Delta_{1}\otimes\mathrm{id})\quad\text{ and }\quad(\mathrm{id}\otimes\Delta_{2})\,\mathrm{m}= \mathrm{m}_{13}\,\mathrm{m}_{12}(\mathrm{id}\otimes\Delta_{2}).\]
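As a quick sanity check on these relations (and anticipating the direct products discussed at the end of this section), note that the identity map is always a matching: for \(\mathrm{m}=\mathrm{id}\) all the maps \(\mathrm{m}_{12},\mathrm{m}_{13},\mathrm{m}_{23}\) are the identity, so both conditions collapse to the trivial identities
\[(\Delta_{1}\otimes\mathrm{id})\,\mathrm{m}=\Delta_{1}\otimes\mathrm{id}=\mathrm{m}_{23}\,\mathrm{m}_{13}(\Delta_{1}\otimes\mathrm{id}),\qquad(\mathrm{id}\otimes\Delta_{2})\,\mathrm{m}=\mathrm{id}\otimes\Delta_{2}=\mathrm{m}_{13}\,\mathrm{m}_{12}(\mathrm{id}\otimes\Delta_{2}).\]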
Given this data one defines the _double crossed product_\(\mathbb{G}_{\mathrm{m}}\) of \(\mathbb{G}_{1},\mathbb{G}_{2}\) as follows. The von Neumann algebra of functions on \(\mathbb{G}_{\mathrm{m}}\) and its comultiplication are given by
\[\mathrm{L}^{\infty}(\mathbb{G}_{\mathrm{m}})=\mathrm{L}^{\infty}(\mathbb{G}_{1 })\bar{\otimes}\,\mathrm{L}^{\infty}(\mathbb{G}_{2}),\quad\Delta_{\mathrm{m}}=( \mathrm{id}\otimes\chi\,\mathrm{m}\otimes\mathrm{id})(\Delta_{1}^{\mathrm{op}} \otimes\Delta_{2}).\]
To ease notation, we will decorate objects related to \(\mathbb{G}_{1}\) (resp. \(\mathbb{G}_{2},\mathbb{G}_{\mathrm{m}}\)) with \(1\) (resp. \(2,\mathrm{m}\)) in the sequel, e.g. \(\mathrm{W}_{1}=\mathrm{W}^{\mathbb{G}_{1}}\). We will also denote the unit in \(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}_{\mathrm{m}}))\) by \(\mathbb{1}_{m}\).
Let \(J\) (resp. \(\hat{J}\)) be the modular conjugation of the left Haar integral on the bicrossed product of \(\mathbb{G}_{1},\mathbb{G}_{2}\) and its dual, see [6, Section 2.4]. Define a unitary
\[Z=J\hat{J}(\hat{J}_{1}J_{1}\otimes\hat{J}_{2}J_{2}).\]
It implements \(\mathrm{m}\) in the sense that \(\mathrm{m}(z)=ZzZ^{*}\) for all \(z\in\mathrm{L}^{\infty}(\mathbb{G}_{1})\bar{\otimes}\,\mathrm{L}^{\infty}( \mathbb{G}_{2})\). The Kac-Takesaki operator of \(\mathbb{G}_{\mathrm{m}}\) is given by
\[\mathrm{W}_{\mathrm{m}}=(\Sigma V_{1}^{*}\Sigma)_{13}Z_{34}^{*}\mathrm{W}_{2,24} Z_{34}.\]
One can describe structure of \(\mathbb{G}_{\mathrm{m}}\) and its dual, see in particular [6, Theorem 5.3]. For example, we have
\[\mathrm{L}^{\infty}(\widehat{\mathbb{G}_{\mathrm{m}}}) =\big{(}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}_{1}})^{\prime} \otimes\mathbb{1})\cup Z^{*}(\mathbb{1}\otimes\mathrm{L}^{\infty}(\widehat{ \mathbb{G}_{2}}))Z\big{)}^{\prime\prime},\] \[\mathrm{L}^{\infty}(\widehat{\mathbb{G}_{\mathrm{m}}})^{\prime} =\big{(}Z^{*}(\mathrm{L}^{\infty}(\widehat{\mathbb{G}_{1}}) \otimes\mathbb{1})Z\cup(\mathbb{1}\otimes\mathrm{L}^{\infty}(\widehat{\mathbb{G}_{ 2}})^{\prime})\big{)}^{\prime\prime}.\]
A special case of this construction is the (generalised) Drinfeld double. Let \(\mathbb{G}_{1},\mathbb{G}_{2}\) be locally compact quantum groups and assume that \(\mathcal{Z}\in\mathrm{L}^{\infty}(\mathbb{G}_{1})\bar{\otimes}\,\mathrm{L}^{ \infty}(\mathbb{G}_{2})\) is a bicharacter. That is, \(\mathcal{Z}\) is a unitary satisfying
\[(\Delta_{1}\otimes\mathrm{id})\mathcal{Z}=\mathcal{Z}_{23}\mathcal{Z}_{13} \quad\text{and}\quad(\mathrm{id}\otimes\Delta_{2})\mathcal{Z}=\mathcal{Z}_{13 }\mathcal{Z}_{12}.\]
Then one obtains an inner \(\star\)-automorphism
\[\mathrm{m}\colon\mathrm{L}^{\infty}(\mathbb{G}_{1})\bar{\otimes}\,\mathrm{L}^ {\infty}(\mathbb{G}_{2})\ni x\mapsto\mathcal{Z}x\mathcal{Z}^{*}\in\mathrm{L}^{ \infty}(\mathbb{G}_{1})\bar{\otimes}\,\mathrm{L}^{\infty}(\mathbb{G}_{2}),\]
and it is easy to check that this defines a matching between \(\mathbb{G}_{1}\) and \(\mathbb{G}_{2}\). Consequently, one can form the double crossed product \(\mathbb{G}_{\mathrm{m}}\), and this is called the _generalised Drinfeld double_ of \(\mathbb{G}_{1},\mathbb{G}_{2}\) with respect to \(\mathcal{Z}\).
In particular, if \(\mathbb{H}\) is a locally compact quantum group then we can consider \(\mathbb{G}_{1}=\mathbb{H}^{\mathrm{op}},\mathbb{G}_{2}=\widehat{\mathbb{H}}\) together with the bicharacter \(\mathcal{Z}=\mathrm{W}^{\mathbb{H}}\). The corresponding double crossed product \(\mathbb{G}_{\mathrm{m}}\) is called the _Drinfeld double_ of \(\mathbb{H}\).
#### 7.4.2. \(\mathbb{G}_{1}^{\mathrm{op}},\mathbb{G}_{2}\) are quantum subgroups of \(\mathbb{G}_{\mathrm{m}}\)
Let us return to the general situation of locally compact quantum groups \(\mathbb{G}_{1},\mathbb{G}_{2}\) with a matching \(\mathrm{m}\). It is stated in [6, Theorem 5.3], see also the introduction to [6, Section 6], that \(\mathbb{G}_{1}^{\mathrm{op}}\) and \(\mathbb{G}_{2}\) are closed quantum subgroups of \(\mathbb{G}_{\mathrm{m}}\). We give a quick argument for the convenience of the reader.
**Lemma 7.12**.: \(\mathbb{G}_{1}^{\mathrm{op}}\) _and \(\mathbb{G}_{2}\) are closed quantum subgroups of \(\mathbb{G}_{\mathrm{m}}\) in the sense of Vaes._
Proof.: Note first that \(\widehat{\mathbb{G}_{1}^{\mathrm{op}}}=\widehat{\mathbb{G}_{1}}^{\prime}\), compare [46, Proposition 5.4]. We have natural normal, injective \(\star\)-homomorphisms
\[\gamma_{1}\colon\mathrm{L}^{\infty}(\widehat{\mathbb{G}_{1}})^{ \prime}\to\mathrm{L}^{\infty}(\widehat{\mathbb{G}_{\mathrm{m}}})\colon \widehat{x}^{\prime}\mapsto\widehat{x}^{\prime}\otimes 1,\] \[\gamma_{2}\colon\mathrm{L}^{\infty}(\widehat{\mathbb{G}_{2}})\to \mathrm{L}^{\infty}(\widehat{\mathbb{G}_{\mathrm{m}}})\colon\widehat{x}\mapsto Z ^{*}(1\otimes\widehat{x})Z,\]
hence by [20, Theorem 3.3] it is enough to show that both maps respect coproducts.
First, take \(\widehat{x}^{\prime}\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}_{1}})^{\prime}\). Then \(\widehat{x}^{\prime}\otimes 1\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}_{\mathrm{m}}})\) and
\[\widehat{\Delta}_{\mathrm{m}}(\widehat{x}^{\prime}\otimes 1)=\Sigma\mathrm{W}_{ \mathrm{m}}((\widehat{x}^{\prime}\otimes 1)\otimes 1_{\mathrm{m}})\mathrm{W}_{ \mathrm{m}}^{*}\Sigma=\Sigma(\Sigma\mathrm{V}_{1}^{*}\Sigma)_{13}(\widehat{x}^ {\prime}\otimes 1\otimes 1\otimes 1)(\Sigma\mathrm{V}_{1}\Sigma)_{13}\Sigma,\]
noting that all the other parts of \(\mathrm{W}_{\mathrm{m}}\) cancel out. By slight abuse of notation, we write here \(\Sigma\) both for the swap map on \(\mathrm{L}^{2}(\mathbb{G}_{\mathrm{m}})\otimes\mathrm{L}^{2}(\mathbb{G}_{ \mathrm{m}})\), which is identified with \(\Sigma_{13}\Sigma_{24}\), and for the swap map on \(\mathrm{L}^{2}(\mathbb{G}_{1})\otimes\mathrm{L}^{2}(\mathbb{G}_{1})\). From the proof of [46, Proposition 4.2] we find that the coproduct on \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}_{1}})^{\prime}\) is given by \(\Delta_{\widehat{\mathbb{G}_{1}}^{\prime}}(\widehat{x}^{\prime})=\mathrm{V}_{1 }^{*}(1\otimes\widehat{x}^{\prime})\mathrm{V}_{1}\), and so
\[\widehat{\Delta}_{\mathrm{m}}(\widehat{x}^{\prime}\otimes 1)=\Sigma(\Sigma\mathrm{V}_{1 }^{*}(1\otimes\widehat{x}^{\prime})\mathrm{V}_{1}\Sigma)_{13}\Sigma=\Sigma_{24 }\Delta_{\widehat{\mathbb{G}_{1}}^{\prime}}(\widehat{x}^{\prime})_{13}\Sigma_{2 4}=\Delta_{\widehat{\mathbb{G}_{1}}^{\prime}}(\widehat{x}^{\prime})_{13}.\]
Since the inclusion \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}_{1}})^{\prime}\bar{\otimes}\,\mathrm{L}^ {\infty}(\widehat{\mathbb{G}_{1}})^{\prime}\to\mathrm{L}^{\infty}(\widehat{ \mathbb{G}_{\mathrm{m}}})\bar{\otimes}\,\mathrm{L}^{\infty}(\widehat{\mathbb{G }_{\mathrm{m}}})\) is given by \(a\mapsto a_{13}\), this concludes the proof that \(\mathbb{G}_{1}^{\mathrm{op}}\) is a closed quantum subgroup of \(\mathbb{G}_{\mathrm{m}}\).
Take now \(\widehat{x}\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}_{2}})\) so that \(Z^{*}(1\otimes\widehat{x})Z\in\mathrm{L}^{\infty}(\widehat{\mathbb{G}_{\mathrm{m} }})\). Then, following exactly the proof of [6, Proposition 3.5],
\[\widehat{\Delta}_{\mathrm{m}}(Z^{*}(1\otimes\widehat{x})Z)=\Sigma\mathrm{W}_{ \mathrm{m}}(Z^{*}(1\otimes\widehat{x})Z\otimes 1_{\mathrm{m}})\mathrm{W}_{\mathrm{m}}^{*}\Sigma=(Z^{*} \otimes Z^{*})\widehat{\Delta}_{2}(\widehat{x})_{24}(Z\otimes Z),\]
which is exactly the embedding of \(\Delta_{\widehat{\mathbb{G}_{2}}}(\widehat{x})\in\mathrm{L}^{\infty}(\widehat{ \mathbb{G}_{2}})\bar{\otimes}\,\mathrm{L}^{\infty}(\widehat{\mathbb{G}_{2}})\) into \(\mathrm{L}^{\infty}(\widehat{\mathbb{G}_{\mathrm{m}}})\bar{\otimes}\,\mathrm{L}^ {\infty}(\widehat{\mathbb{G}_{\mathrm{m}}})\).
As a consequence of Theorem 7.1 and Proposition 4.3 we therefore obtain the following fact.
**Corollary 7.13**.: Suppose that \(\mathbb{G}_{\mathrm{m}}\) has AP. Then both \(\mathbb{G}_{1}\) and \(\mathbb{G}_{2}\) have AP.
**Remark 7.14**.: In view of the close analogy between Drinfeld doubles of \(q\)-deformations of compact semisimple Lie groups and the corresponding complex Lie groups [1], [70], it is natural to speculate that the converse to Corollary 7.13 does not hold, see also Remark 7.4 in [2]. Specifically, the Drinfeld double of \(\mathrm{SU}_{q}(3)\) might be an example of a locally compact quantum group which does not have AP.
#### 7.4.3. AP for \(\widehat{\mathbb{G}_{\rm m}}\)
We now aim to prove the following result.
**Theorem 7.15**.: _Let \(\mathbb{G}_{1},\mathbb{G}_{2}\) be locally compact quantum groups with a matching \({\rm m}\)._
* _If_ \(\widehat{\mathbb{G}_{1}}\) _and_ \(\widehat{\mathbb{G}_{2}}\) _have AP then so does_ \(\widehat{\mathbb{G}_{\rm m}}\)_._
* _If_ \(\widehat{\mathbb{G}_{1}}\) _and_ \(\widehat{\mathbb{G}_{2}}\) _are weakly amenable with Cowling-Haagerup constants_ \(\Lambda_{cb}(\widehat{\mathbb{G}_{1}}),\Lambda_{cb}(\widehat{\mathbb{G}_{2}})\) _then_ \(\widehat{\mathbb{G}_{\rm m}}\) _is weakly amenable with_ \(\Lambda_{cb}(\widehat{\mathbb{G}_{\rm m}})\leq\Lambda_{cb}(\widehat{\mathbb{G}_ {1}})\,\Lambda_{cb}(\widehat{\mathbb{G}_{2}})\)_._
* _If_ \(\mathbb{G}_{1},\mathbb{G}_{2}\) _are coamenable then so is_ \(\mathbb{G}_{\rm m}\)_._
An analogous result for the Haagerup property was obtained in [58], but we note that the terminology used in [58] is different. We also note that for generalised Drinfeld doubles the statement on coamenability in Theorem 7.15 can be shown quite easily using standard properties of bicharacters, as discussed in a preprint version of [58].
Before we prove Theorem 7.15 we need to establish a number of auxiliary results. Recall from Section 3.3 the construction of a normal CB map \(\Theta^{l}(a)\) on \({\rm L}^{\infty}(\widehat{\mathbb{H}})\) and its extension \(\Phi(a)\in{\rm CB}^{\sigma}({\rm B}({\rm L}^{2}(\mathbb{H})))\) for any locally compact quantum group \(\mathbb{H}\) and \(a\in{\rm M}^{l}_{cb}({\rm A}(\mathbb{H}))\). Moreover, in the proof of Lemma 7.12 we introduced injective, normal \(\star\)-homomorphisms \(\gamma_{1}\colon{\rm L}^{\infty}(\widehat{\mathbb{G}_{1}})^{\prime}\to{\rm L }^{\infty}(\widehat{\mathbb{G}_{\rm m}})\) and \(\gamma_{2}\colon{\rm L}^{\infty}(\widehat{\mathbb{G}_{2}})\to{\rm L}^{\infty }(\widehat{\mathbb{G}_{\rm m}})\). We will now use these maps to transport elements of the Fourier algebras \({\rm A}(\widehat{\mathbb{G}_{1}}^{\rm op}),{\rm A}(\widehat{\mathbb{G}_{2}})\) to left CB multipliers of \({\rm A}(\widehat{\mathbb{G}_{\rm m}})\).
**Lemma 7.16**.: _For \(\omega\in{\rm L}^{1}(\mathbb{G}_{1})\) we have \(a=\gamma_{1}(\lambda_{1}^{\rm op}(\omega))\in{\rm M}^{l}_{cb}({\rm A}( \widehat{\mathbb{G}_{\rm m}}))\). The associated maps are_
\[\Theta^{l}(a)=\Theta^{l}(\lambda_{1}^{\rm op}(\omega))\otimes{\rm id}\in{\rm CB }^{\sigma}({\rm L}^{\infty}(\mathbb{G}_{\rm m}))={\rm CB}^{\sigma}({\rm L}^{ \infty}(\mathbb{G}_{1})\bar{\otimes}\,{\rm L}^{\infty}(\mathbb{G}_{2}))\]
_and_
\[\Phi(a)=\Phi(\lambda_{1}^{\rm op}(\omega))\otimes{\rm id}\in{\rm CB}^{\sigma} ({\rm B}({\rm L}^{2}(\mathbb{G}_{\rm m})))={\rm CB}^{\sigma}({\rm B}({\rm L}^{ 2}(\mathbb{G}_{1}))\bar{\otimes}\,{\rm B}({\rm L}^{2}(\mathbb{G}_{2}))).\]
Proof.: Take \(\omega_{1}\otimes\omega_{2}\in{\rm L}^{1}(\mathbb{G}_{1}^{\rm op})\widehat{\otimes}\,{\rm L}^{1}(\mathbb{G}_{2})={\rm L}^{1}(\mathbb{G}_{m})\). Using \({\rm W}_{1}^{\rm op}=\Sigma{\rm V}_{1}^{*}\Sigma\), see [46, Section 4], we get
\[\begin{split}\lambda_{\rm m}(\omega_{1}\otimes\omega_{2})&=(\omega_{1}\otimes\omega_{2}\otimes{\rm id}\otimes{\rm id})\big{(}(\Sigma{\rm V}_{1}^{*}\Sigma)_{13}Z_{34}^{*}{\rm W}_{2,24}Z_{34}\big{)}\\ &=\big{(}(\omega_{1}\otimes{\rm id})(\Sigma{\rm V}_{1}^{*}\Sigma)\otimes\mathbb{1}\big{)}Z^{*}\big{(}1\otimes(\omega_{2}\otimes{\rm id})({\rm W}_{2})\big{)}Z\\ &=\gamma_{1}(\lambda_{1}^{\rm op}(\omega_{1}))\gamma_{2}(\lambda_{2}(\omega_{2})),\end{split} \tag{7.3}\]
and consequently, writing \(\star\) for the product on \({\rm L}^{1}(\mathbb{G}_{1}^{\rm op})\),
\[\begin{split} a\lambda_{\rm m}(\omega_{1}\otimes\omega_{2})& =\gamma_{1}(\lambda_{1}^{\rm op}(\omega))\lambda_{m}(\omega_{1} \otimes\omega_{2})=\gamma_{1}\big{(}\lambda_{1}^{\rm op}(\omega)\lambda_{1}^{ \rm op}(\omega_{1})\big{)}\gamma_{2}(\lambda_{2}(\omega_{2}))\\ &=\gamma_{1}\big{(}\lambda_{1}^{\rm op}(\omega\star\omega_{1}) \big{)}\gamma_{2}(\lambda_{2}(\omega_{2}))=\lambda_{\rm m}\big{(}(\omega\star \omega_{1})\otimes\omega_{2}\big{)}.\end{split}\]
By linearity and continuity, \(a\) maps \({\rm A}(\widehat{\mathbb{G}_{\rm m}})\) into itself, and \(\Theta^{l}(a)\) has the given form.
The second assertion is verified using a direct calculation. Indeed, if \(x\in{\rm B}({\rm L}^{2}(\mathbb{G}_{\rm m}))={\rm B}({\rm L}^{2}(\mathbb{G}_ {1}))\bar{\otimes}\,{\rm B}({\rm L}^{2}(\mathbb{G}_{2}))\) then
\[\begin{split}\mathbb{1}_{\rm m}\otimes\Phi(a)(x)&={ \rm W}_{\rm m}\big{(}\big{(}((\omega\otimes{\rm id})\Delta_{1}^{\rm op}\otimes{ \rm id})\otimes{\rm id})({\rm W}_{\rm m}^{*}(\mathbb{1}_{\rm m}\otimes x){\rm W }_{\rm m})\big{)}{\rm W}_{\rm m}^{*}\\ &={\rm W}_{\rm m}\big{(}((\omega\otimes{\rm id})\Delta_{1}^{\rm op }\otimes{\rm id}^{\otimes 3})(Z_{34}^{*}{\rm W}_{2,24}^{*}Z_{34}{\rm W}_{1,13}^{\rm op}{}_{34}^{*}{ \rm W}_{1,13}^{\rm op}Z_{34}^{*}{\rm W}_{2,24}Z_{34})\big{)}{\rm W}_{\rm m}^{*} \\ &={\rm W}_{\rm m}Z_{34}^{*}{\rm W}_{2,24}^{*}Z_{34}\big{(}(( \omega\otimes{\rm id})\Delta_{1}^{\rm op}\otimes{\rm id}^{\otimes 3}){\rm W}_{1,13}^{\rm op }{}_{34}{\rm W}_{1,13}^{\rm op}\big{)}Z_{34}^{*}{\rm W}_{2,24}Z_{34}{\rm W}_{\rm m }^{*}\\ &={\rm W}_{\rm m}Z_{34}^{*}{\rm W}_{2,24}^{*}Z_{34}\big{(}(\omega \otimes{\rm id}^{\otimes 4})({\rm W}_{1,24}^{\rm op}{\rm W}_{1,14}^{\rm op}{\rm W}_{1,14}^{\rm op}{ \rm W}_{1,24}^{\rm op})\big{)}Z_{34}^{*}{\rm W}_{2,24}Z_{34}{\rm W}_{\rm m}^{*} \\ &={\rm W}_{\rm m}Z_{34}^{*}{\rm W}_{2,24}^{*}Z_{34}{\rm W}_{1,13}^{ \rm op}\big{(}(\omega\otimes{\rm id}\otimes{\rm id})({\rm W}_{1,12}^{\rm op }{\rm W}_{2,23}^{\rm op}{\rm W}_{1,12}^{\rm op})\big{)}_{34}{\rm W}_{1,13}^{ \rm op}Z_{34}^{*}{\rm W}_{2,24}Z_{34}{\rm W}_{\rm m}^{*}\\ &=\mathbb{1}_{\rm m}\otimes(\omega\otimes{\rm id}\otimes{\rm id})({ \rm W}_{1,12}^{\rm op}{\rm W}_{2,12}^{\rm op}).\end{split}\]
Here we use that \((\Delta_{1}^{\rm op}\otimes{\rm id}){\rm W}_{1}^{\rm op}={\rm W}_{1,13}^{\rm op }{\rm W}_{1,23}^{\rm op}\). The claim follows from Lemma 3.10.
**Lemma 7.17**.: _For \(\omega\in{\rm L}^{1}(\mathbb{G}_{2})\) we have \(b=\gamma_{2}(\lambda_{2}(\omega))\in{\rm M}_{cb}^{l}({\rm A}(\widehat{\mathbb{G}_{\rm m}}))\). The associated maps are_
\[\Theta^{l}(b)={\rm m}^{-1}\big{(}{\rm id}\otimes(\omega\otimes{\rm id})\Delta_{2}\big{)}\,{\rm m}\in{\rm CB}^{\sigma}({\rm L}^{\infty}(\mathbb{G}_{\rm m}))\]
_and_
\[\Phi(b)(x)=Z^{*}\big{(}{\rm id}\otimes\Phi(\lambda_{2}(\omega))\big{)}(ZxZ^{*})Z\qquad(x\in{\rm B}({\rm L}^{2}(\mathbb{G}_{\rm m}))).\]
Proof.: Using equation (7.3), we get for \(\omega_{0}\in\mathrm{L}^{1}(\mathbb{G}_{2})\) and \(\omega_{1}\otimes\omega_{2}\in\mathrm{L}^{1}(\mathbb{G}_{\mathrm{m}})\) the relation
\[\lambda_{\mathrm{m}}(\omega_{1}\otimes\omega_{2})\gamma_{2}( \lambda_{2}(\omega_{0}))=\gamma_{1}(\lambda_{1}^{\mathrm{op}}(\omega_{1})) \gamma_{2}(\lambda_{2}(\omega_{2}))\gamma_{2}(\lambda_{2}(\omega_{0}))\] \[=\gamma_{1}(\lambda_{1}^{\mathrm{op}}(\omega_{1}))\gamma_{2}( \lambda_{2}(\omega_{2}\star\omega_{0}))=\lambda_{\mathrm{m}}(\omega_{1}\otimes( \omega_{2}\star\omega_{0})),\]
here with \(\star\) the product on \(\mathrm{L}^{1}(\mathbb{G}_{2})\). Thus, if \(T_{0}\colon\mathrm{L}^{\infty}(\mathbb{G}_{\mathrm{m}})\to\mathrm{L}^{\infty}( \mathbb{G}_{\mathrm{m}})\) is the map given by \(T_{0}=\mathrm{id}\otimes(\mathrm{id}\otimes\omega_{0})\Delta_{2}\), then \(T_{0}\) is normal, and the pre-adjoint \((T_{0})_{\ast}\) satisfies
\[\lambda_{\mathrm{m}}(\omega_{1}\otimes\omega_{2})\gamma_{2}( \lambda_{2}(\omega_{0}))=\lambda_{\mathrm{m}}((T_{0})_{\ast}(\omega_{1} \otimes\omega_{2})).\]
As \(\gamma_{2}\) intertwines the coproducts, it automatically intertwines the unitary antipodes ([49, Proposition 3.10]). The same is true for \(\lambda_{\mathrm{m}}\) and \(\lambda_{2}\), and hence we get
\[\lambda_{\mathrm{m}}\big{(}(R_{\mathrm{m}}\circ T_{0}\circ R_{ \mathrm{m}})_{\ast}(R_{\mathrm{m}\,\ast}(\omega_{1}\otimes\omega_{2}))\big{)}= \lambda_{\mathrm{m}}((T_{0}\circ R_{\mathrm{m}})_{\ast}(\omega_{1}\otimes \omega_{2}))=\widehat{R}_{\mathrm{m}}(\lambda_{\mathrm{m}}((T_{0})_{\ast}( \omega_{1}\otimes\omega_{2})))\] \[=\widehat{R}_{\mathrm{m}}\big{(}\gamma_{2}(\lambda_{2}(\omega_{ 0}))\big{)}\widehat{R}_{\mathrm{m}}(\lambda_{\mathrm{m}}(\omega_{1}\otimes \omega_{2}))=\gamma_{2}(\lambda_{2}(R_{2\ast}(\omega_{0})))\lambda_{\mathrm{m} }(R_{\mathrm{m}\,\ast}(\omega_{1}\otimes\omega_{2})).\]
Now set \(\omega_{0}=\omega\circ R_{2}\) for our given \(\omega\). As the set of functionals of the form \(R_{\mathrm{m}\,\ast}(\omega_{1}\otimes\omega_{2})\) is linearly dense, we obtain from Lemma 4.8 that \(b=\gamma_{2}(\lambda_{2}(\omega))\in\mathrm{M}^{l}_{cb}(\mathrm{A}(\widehat{ \mathbb{G}_{\mathrm{m}}}))\) and \(\Theta^{l}(b)=R_{\mathrm{m}}\circ T_{0}\circ R_{\mathrm{m}}\). By [6, Theorem 5.3] we know that \(R_{\mathrm{m}}=\mathrm{m}^{-1}(R_{1}\otimes R_{2})=(R_{1}\otimes R_{2})\, \mathrm{m}\). Therefore
\[R_{\mathrm{m}}\circ T_{0}\circ R_{\mathrm{m}} =\mathrm{m}^{-1}(R_{1}\otimes R_{2})(\mathrm{id}\otimes(\mathrm{ id}\otimes\omega\circ R_{2})\Delta_{2})(R_{1}\otimes R_{2})\mathrm{m}\] \[=\mathrm{m}^{-1}(\mathrm{id}\otimes R_{2}\circ(\mathrm{id} \otimes\omega\circ R_{2})\Delta_{2}\circ R_{2})\,\mathrm{m}\] \[=\mathrm{m}^{-1}(\mathrm{id}\otimes(\mathrm{id}\otimes\omega) \Delta_{2}^{\mathrm{op}})\,\mathrm{m}\] \[=\mathrm{m}^{-1}(\mathrm{id}\otimes(\omega\otimes\mathrm{id}) \Delta_{2})\,\mathrm{m},\]
and this yields the stated formula for \(\Theta^{l}(b)\).
In order to verify the formula for \(\Phi(b)\), recall that the unitary operator \(Z\) implements \(\mathrm{m}\) by \(\mathrm{m}(\cdot)=Z\cdot Z^{\ast}\), and hence \(\mathrm{m}^{-1}(\cdot)=Z^{\ast}\cdot Z\). Moreover, from [6, Proposition 3.5] we know that \((\mathrm{m}\otimes\mathrm{id})(\mathrm{W}_{\mathrm{m}})=Z_{34}^{\ast}\mathrm{W }_{2,24}Z_{34}\mathrm{W}_{1,13}^{\mathrm{op}}\). For \(x\in\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}_{\mathrm{m}}))\), by applying the expression for \(\Theta^{l}(b)\) just obtained, we get
\[(\Theta^{l}(b)\otimes\mathrm{id})(\mathrm{W}_{\mathrm{m}}^{\ast} (\mathbb{1}_{\mathrm{m}}\otimes x)\mathrm{W}_{\mathrm{m}})\] \[=(\mathrm{m}^{-1}\otimes\mathrm{id}\otimes\mathrm{id})(\mathrm{ id}\otimes(\omega\otimes\mathrm{id})\Delta_{2}\otimes\mathrm{id}\otimes\mathrm{id})\] \[\big{(}\mathrm{W}_{1,13}^{\mathrm{op}\,\ast}Z_{34}^{\ast}\mathrm{W }_{2,24}^{\ast}Z_{34}(\mathbb{1}\otimes\mathbb{1}\otimes x)Z_{34}^{\ast} \mathrm{W}_{2,24}Z_{34}\mathrm{W}_{1,13}^{\mathrm{op}}\big{)}.\]
Now using \((\Delta_{2}\otimes\mathrm{id})\mathrm{W}_{2}=\mathrm{W}_{2,13}\mathrm{W}_{2,23}\), this expression becomes
\[=(\mathrm{m}^{-1}\otimes\mathrm{id}\otimes\mathrm{id})\big{(} \mathrm{W}_{1,13}^{\mathrm{op}\,\ast}Z_{34}^{\ast}(\mathrm{id}\otimes\omega \otimes\mathrm{id}^{\otimes 3})(\mathrm{W}_{2,35}^{\ast}\mathrm{W}_{2,25}^{\ast}(ZxZ^{\ast})_{45} \mathrm{W}_{2,25}\mathrm{W}_{2,35})Z_{34}\mathrm{W}_{1,13}^{\mathrm{op}}\big{)}\] \[=(\mathrm{m}^{-1}\otimes\mathrm{id}\otimes\mathrm{id})\big{(} \mathrm{W}_{1,13}^{\mathrm{op}\,\ast}Z_{34}^{\ast}\mathrm{W}_{2,24}^{\ast}( \omega\otimes\mathrm{id}\otimes\mathrm{id})(\mathrm{W}_{2,13}^{\ast}(Zx^{\ast})_ {23}\mathrm{W}_{2,13})_{34}\mathrm{W}_{2,24}Z_{34}\mathrm{W}_{1,13}^{\mathrm{ op}}\big{)}.\]
Finally, we use the form of \(\Phi(\lambda_{2}(\omega))\in{\rm CB}^{\sigma}({\rm B}({\rm L}^{2}(\mathbb{G}_{2})))\), as in Lemma 3.10, to get
\[=(\mathrm{m}^{-1}\otimes\mathrm{id}\otimes\mathrm{id})\big{(} \mathrm{W}_{1,13}^{\mathrm{op}\,\ast}Z_{34}^{\ast}\mathrm{W}_{2,24}^{\ast}( \mathrm{id}\otimes\Phi(\lambda_{2}(\omega)))(ZxZ^{\ast})_{34}\mathrm{W}_{2,24} Z_{34}\mathrm{W}_{1,13}^{\mathrm{op}}\big{)}.\]
Using again \((\mathrm{m}\otimes\mathrm{id})(\mathrm{W}_{\mathrm{m}})=Z_{34}^{\ast}\mathrm{W}_{2,2 4}Z_{34}\mathrm{W}_{1,13}^{\mathrm{op}}\), we continue the calculation as
\[=(\mathrm{m}^{-1}\otimes\mathrm{id}\otimes\mathrm{id})\big{(}( \mathrm{m}\otimes\mathrm{id})(\mathrm{W}_{\mathrm{m}}^{\ast})Z_{34}^{\ast}( \mathrm{id}\otimes\Phi(\lambda_{2}(\omega)))(ZxZ^{\ast})_{34}Z_{34}(\mathrm{m} \otimes\mathrm{id})(\mathrm{W}_{\mathrm{m}})\big{)}\] \[=\mathrm{W}_{\mathrm{m}}^{\ast}(\mathrm{m}^{-1}\otimes\mathrm{id} \otimes\mathrm{id})(Z_{34}^{\ast}(\mathrm{id}\otimes\Phi(\lambda_{2}(\omega)))(ZxZ^{ \ast})_{34}Z_{34})\mathrm{W}_{\mathrm{m}}\] \[=\mathrm{W}_{\mathrm{m}}^{\ast}Z_{12}^{\ast}Z_{34}^{\ast}( \mathrm{id}\otimes\Phi(\lambda_{2}(\omega)))(ZxZ^{\ast})_{34}Z_{34}Z_{12} \mathrm{W}_{\mathrm{m}},\]
where at the end we used that \(\mathrm{m}^{-1}(\cdot)=Z^{\ast}\cdot Z\). It follows that
\[\mathbb{1}_{\mathrm{m}}\otimes\Phi(b)(x)=\mathrm{W}_{\mathrm{m}}(\Theta^{l}(b)\otimes\mathrm{id})(\mathrm{W}_{\mathrm{m}}^{\ast}(\mathbb{1}_{\mathrm{m}}\otimes x)\mathrm{W}_{\mathrm{m}})\mathrm{W}_{\mathrm{m}}^{\ast}=\mathbb{1}_{\mathrm{m}}\otimes Z^{\ast}\big{(}\mathrm{id}\otimes\Phi(\lambda_{2}(\omega))\big{)}(ZxZ^{\ast})Z,\]
and hence \(\Phi(b)(x)=Z^{\ast}(\mathrm{id}\otimes\Phi(\lambda_{2}(\omega)))(ZxZ^{\ast})Z\), which is the stated formula.
In the next step, we shall establish continuity properties of the maps \(\mathrm{A}(\widehat{\mathbb{G}_{1}^{\mathrm{op}}})\to\mathrm{M}_{cb}^{l}(\mathrm{A}( \widehat{\mathbb{G}_{m}}))\) and \(\mathrm{A}(\widehat{\mathbb{G}_{2}})\to\mathrm{M}_{cb}^{l}(\mathrm{A}(\widehat{ \mathbb{G}_{m}}))\) described in Lemmas 7.16, 7.17. For this we need the following general fact.
**Lemma 7.18**.:
* _Let_ \(E,F\) _be operator spaces. The map_ \((E\breve{\otimes}F)\widehat{\otimes}F^{*}\to E\) _given on simple tensors by_ \((x\otimes y)\otimes f\mapsto\langle f,y\rangle x\) _is completely contractive._
* _Let_ \(A,B\) _be_ \(C^{*}\)_-algebras. The map_ \((A\otimes B)\widehat{\otimes}(A^{*}\widehat{\otimes}B^{*})\to A\widehat{ \otimes}A^{*}\) _given on simple tensors by_ \((a\otimes b)\otimes(\mu\otimes\omega)\mapsto\langle\omega,b\rangle a\otimes\mu\) _is completely contractive._
Proof.: Due to [23, Theorem 8.1.10] the "tensor interchange" map \(F^{*}\widehat{\otimes}(F\breve{\otimes}E)\to(F^{*}\widehat{\otimes}F)\breve{ \otimes}E\), which is the formal identity on simple tensors, is a complete contraction. Since \((F^{*}\widehat{\otimes}F)^{*}\cong\mathrm{CB}(F^{*})\), the identity map \(\mathrm{id}_{F^{*}}\) induces the (completely) contractive linear functional \(F^{*}\widehat{\otimes}F\to\mathbb{C}\colon f\otimes y\mapsto\langle f,y\rangle\). Composing these complete contractions shows that \(F^{*}\widehat{\otimes}(F\breve{\otimes}E)\to E\colon f\otimes(y\otimes x) \mapsto\langle f,y\rangle x\) is a complete contraction. By commutativity of the projective and the injective tensor products of operators spaces, respectively, the first claim follows.
For the second part recall that the injective operator space tensor product agrees with the spatial tensor product on \(C^{*}\)-algebras. Using the re-bracketing isomorphism
\[(A\otimes B)\widehat{\otimes}(A^{*}\widehat{\otimes}B^{*})\cong((A\otimes B)\widehat{\otimes}B^{*})\widehat{\otimes}A^{*},\]
the assertion hence follows by applying the first part to \(E=A,F=B\) and tensoring with \(A^{*}\).
**Lemma 7.19**.: _Let \((\omega_{i})_{i\in I}\) be a net in \(\mathrm{L}^{1}(\mathbb{G}_{1}^{\mathrm{op}})\) such that \(\lambda_{1}^{\mathrm{op}}(\omega_{i})\underset{i\in I}{\longrightarrow}1\) weak\({}^{*}\) in \(\mathrm{M}_{cb}^{l}(\mathrm{A}(\widehat{\mathbb{G}_{1}^{\mathrm{op}}}))\). Consider \(\gamma_{1}(\lambda_{1}^{\mathrm{op}}(\omega_{i}))\in\mathrm{M}_{cb}^{l}(\mathrm{A}(\widehat{\mathbb{G}_{m}}))\) and \(\Phi(\gamma_{1}(\lambda_{1}^{\mathrm{op}}(\omega_{i})))\in\mathrm{CB}^{\sigma}(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}_{m})))\). Then \(\Phi(\gamma_{1}(\lambda_{1}^{\mathrm{op}}(\omega_{i})))\underset{i\in I}{\longrightarrow}\mathrm{id}\) weak\({}^{*}\), and thus \(\gamma_{1}(\lambda_{1}^{\mathrm{op}}(\omega_{i}))\underset{i\in I}{\longrightarrow}\mathbb{1}_{\mathrm{m}}\) weak\({}^{*}\) in \(\mathrm{M}_{cb}^{l}(\mathrm{A}(\widehat{\mathbb{G}_{m}}))\)._
Proof.: By Lemma 7.16 we have \(\Phi(\gamma_{1}(\lambda_{1}^{\mathrm{op}}(\omega_{i})))=\Phi(\lambda_{1}^{ \mathrm{op}}(\omega_{i}))\otimes\mathrm{id}\) for each \(i\in I\). Applying Lemma 7.18 with \(A=\mathcal{K}(\mathrm{L}^{2}(\mathbb{G}_{1}))\) and \(B=\mathcal{K}(\mathrm{L}^{2}(\mathbb{G}_{2}))\) we obtain a completely contractive map
\[T\colon\big{(}\mathcal{K}(\mathrm{L}^{2}(\mathbb{G}_{1}))\otimes\mathcal{K}( \mathrm{L}^{2}(\mathbb{G}_{2}))\big{)}\widehat{\otimes}(\mathrm{B}(\mathrm{L} ^{2}(\mathbb{G}_{1}))_{*}\widehat{\otimes}\,\mathrm{B}(\mathrm{L}^{2}(\mathbb{ G}_{2}))_{*}\big{)}\to\mathcal{K}(\mathrm{L}^{2}(\mathbb{G}_{1}))\widehat{ \otimes}\,\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}_{1}))_{*}\]
given by \(T((a\otimes b)\otimes(\omega_{1}\otimes\omega_{2}))=\langle\omega_{2},b\rangle a \otimes\omega_{1}\) on simple tensors. Then
\[\langle\Phi(\gamma_{1}(\lambda_{1}^{\mathrm{op}}(\omega_{i}))),(a \otimes b)\otimes(\omega_{1}\otimes\omega_{2})\rangle =\langle\Phi(\lambda_{1}^{\mathrm{op}}(\omega_{i}))(a),\omega_{1 }\rangle\langle b,\omega_{2}\rangle\] \[=\langle\Phi(\lambda_{1}^{\mathrm{op}}(\omega_{i})),T((a\otimes b) \otimes(\omega_{1}\otimes\omega_{2}))\rangle,\]
and hence \(\Phi(\gamma_{1}(\lambda_{1}^{\mathrm{op}}(\omega_{i})))=T^{*}(\Phi(\lambda_{1}^{\mathrm{op}}(\omega_{i})))\). As \(\lambda_{1}^{\mathrm{op}}(\omega_{i})\underset{i\in I}{\longrightarrow}1\) weak\({}^{*}\), we know that \(\Phi(\lambda_{1}^{\mathrm{op}}(\omega_{i}))\underset{i\in I}{\longrightarrow}\mathrm{id}\) weak\({}^{*}\), and so \(\Phi(\gamma_{1}(\lambda_{1}^{\mathrm{op}}(\omega_{i})))\underset{i\in I}{\longrightarrow}T^{*}(\mathrm{id})=\mathrm{id}\) weak\({}^{*}\), as required.
**Lemma 7.20**.: _Let \((\omega_{i})_{i\in I}\) be a net in \(\mathrm{L}^{1}(\mathbb{G}_{2})\) with \(\lambda_{2}(\omega_{i})\underset{i\in I}{\longrightarrow}1\) weak\({}^{*}\) in \(\mathrm{M}_{cb}^{l}(\mathrm{A}(\widehat{\mathbb{G}_{2}}))\). Then \(\Phi(\gamma_{2}(\lambda_{2}(\omega_{i})))\underset{i\in I}{\longrightarrow}\mathrm{id}\) weak\({}^{*}\) in \(\mathrm{CB}^{\sigma}(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}_{m})))\), and consequently \(\gamma_{2}(\lambda_{2}(\omega_{i}))\underset{i\in I}{\longrightarrow}\mathbb{1}_{\mathrm{m}}\) weak\({}^{*}\) in \(\mathrm{M}_{cb}^{l}(\mathrm{A}(\widehat{\mathbb{G}_{m}}))\)._
Proof.: According to Lemma 7.17 we have \(\Phi(\gamma_{2}(\lambda_{2}(\omega_{i})))(x)=Z^{*}(\mathrm{id}\otimes\Phi( \lambda_{2}(\omega_{i})))(ZxZ^{*})Z\). We can now argue exactly as in the proof of Lemma 7.19. Explicitly, for \(x\otimes u\) in \(\mathcal{K}(\mathrm{L}^{2}(\mathbb{G}_{m}))\widehat{\otimes}\,\mathrm{B}( \mathrm{L}^{2}(\mathbb{G}_{m}))_{*}\), the predual of \(\mathrm{CB}^{\sigma}(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}_{m})))\), consider
\[\langle\Phi(\gamma_{2}(\lambda_{2}(\omega_{i}))),x\otimes u\rangle= \langle Z^{*}(\mathrm{id}\otimes\Phi(\lambda_{2}(\omega_{i})))(ZxZ^{*})Z,u\rangle\] \[=\langle(\mathrm{id}\otimes\Phi(\lambda_{2}(\omega_{i})))(ZxZ^{*}), ZuZ^{*}\rangle=\langle(\mathrm{id}\otimes\Phi(\lambda_{2}(\omega_{i}))), ZxZ^{*}\otimes ZuZ^{*}\rangle.\]
Since \(x\mapsto ZxZ^{*}\) is a complete isometry on \(\mathcal{K}(\mathrm{L}^{2}(\mathbb{G}_{m}))\) and \(u\mapsto ZuZ^{*}\) is a complete isometry on \(\mathrm{B}(\mathrm{L}^{2}(\mathbb{G}_{m}))_{*}\), we obtain a complete isometry \(T\) on \(\mathcal{K}(\mathrm{L}^{2}(\mathbb{G}_{m}))\widehat{\otimes}\,\mathrm{B}( \mathrm{L}^{2}(\mathbb{G}_{m}))_{*}\) given on simple tensors by \(T(x\otimes u)=ZxZ^{*}\otimes ZuZ^{*}\). Thus
\[\Phi(\gamma_{2}(\lambda_{2}(\omega_{i})))=T^{*}(\mathrm{id}\otimes\Phi(\lambda_{2}(\omega_{i})))\underset{i\in I}{\longrightarrow}T^{*}(\mathrm{id})=\mathrm{id}\]
weak\({}^{*}\), as required.
Proof of Theorem 7.15.: Assume that \(\widehat{\mathbb{G}_{1}}\) and \(\widehat{\mathbb{G}_{2}}\) have AP. Due to Proposition 4.3 it follows that \((\mathbb{G}_{1}^{\rm op})^{\wedge}=(\widehat{\mathbb{G}_{1}})^{\prime}\) also has AP. Choose nets \((\omega_{i}^{(1)})_{i\in I}\) in \(\mathrm{L}^{1}(\mathbb{G}_{1})\) with \(a_{i}=\lambda_{1}^{\rm op}(\omega_{i}^{(1)})\xrightarrow[i\in I]{}\mathbb{1}\) weak\({}^{*}\) in \(\mathrm{M}_{cb}^{l}(\mathrm{A}(\widehat{\mathbb{G}_{1}^{\rm op}}))\), and similarly \((\omega_{j}^{(2)})_{j\in J}\) in \(\mathrm{L}^{1}(\mathbb{G}_{2})\) with \(b_{j}=\lambda_{2}(\omega_{j}^{(2)})\xrightarrow[j\in J]{}\mathbb{1}\) weak\({}^{*}\) in \(\mathrm{M}_{cb}^{l}(\mathrm{A}(\widehat{\mathbb{G}_{2}}))\). Then, by Lemmas 7.16, 7.17, we have \(\gamma_{1}(a_{i}),\gamma_{2}(b_{j})\in\mathrm{M}_{cb}^{l}(\mathrm{A}(\widehat{ \mathbb{G}_{m}}))\), and (7.3) gives
\[c_{i,j}=\gamma_{1}(a_{i})\gamma_{2}(b_{j})=\lambda_{\rm m}(\omega_{i}^{(1)} \otimes\omega_{j}^{(2)})\in\mathrm{A}(\widehat{\mathbb{G}_{m}}).\]
Since \(\mathrm{M}_{cb}^{l}(\mathrm{A}(\widehat{\mathbb{G}_{m}}))\) is a dual Banach algebra, see Proposition 3.3, it follows from Lemma 7.19 that \(\lim_{i\in I}c_{i,j}=\gamma_{2}(b_{j})\) weak\({}^{*}\) for each \(j\in J\). Hence \(\gamma_{2}(b_{j})\) is contained in the weak\({}^{*}\)-closure of \(\mathrm{A}(\widehat{\mathbb{G}_{m}})\) in \(\mathrm{M}_{cb}^{l}(\mathrm{A}(\widehat{\mathbb{G}_{m}}))\). Taking the limit in \(j\) and using Lemma 7.20 we see that \(\mathbb{1}_{\rm m}\) is contained in the weak\({}^{*}\)-closure of \(\mathrm{A}(\widehat{\mathbb{G}_{m}})\) in \(\mathrm{M}_{cb}^{l}(\mathrm{A}(\widehat{\mathbb{G}_{m}}))\), as required.
The remaining statements regarding weak amenability and coamenability are verified in a similar way: if \(\widehat{\mathbb{G}_{1}},\widehat{\mathbb{G}_{2}}\) are weakly amenable with Cowling-Haagerup constants \(\Lambda_{cb}(\widehat{\mathbb{G}_{1}}),\Lambda_{cb}(\widehat{\mathbb{G}_{2}})\) then we additionally know that
\[\|a_{i}\|_{cb}=\|\Phi(a_{i})\|_{cb}\leq\Lambda_{cb}(\widehat{\mathbb{G}_{1}}), \quad\|b_{j}\|_{cb}=\|\Phi(b_{j})\|_{cb}\leq\Lambda_{cb}(\widehat{\mathbb{G}_ {2}}),\]
and consequently \(\|c_{i,j}\|_{cb}\leq\Lambda_{cb}(\widehat{\mathbb{G}_{1}})\,\Lambda_{cb}( \widehat{\mathbb{G}_{2}})\). The result then follows from Proposition 5.7.
If \(\mathbb{G}_{1},\mathbb{G}_{2}\) are coamenable then we can choose \(a_{i},b_{j}\) such that \(\sup_{i\in I,j\in J}\|c_{i,j}\|_{\mathrm{A}(\widehat{\mathbb{G}_{m}})}=\sup_{i\in I,j\in J}\|\omega_{i}^{(1)}\otimes\omega_{j}^{(2)}\|<+\infty\). The result then follows from Proposition 5.6.
### Direct products
The double crossed product construction contains as a special case the direct product of locally compact quantum groups. More precisely, assume that \(\mathbb{H}_{1},\mathbb{H}_{2}\) are locally compact quantum groups and let \(\mathrm{m}=\mathrm{id}\) be the trivial matching between \(\mathbb{G}_{1}=\mathbb{H}_{1}^{\rm op}\) and \(\mathbb{G}_{2}=\mathbb{H}_{2}\). In this case we write \(\mathbb{G}_{\mathrm{m}}=\mathbb{H}_{1}\times\mathbb{H}_{2}\) and call this the _direct product_ of \(\mathbb{H}_{1}\) and \(\mathbb{H}_{2}\). Note that this definition agrees with the usual one since
\[\mathrm{L}^{\infty}(\mathbb{H}_{1}\times\mathbb{H}_{2}) =\mathrm{L}^{\infty}(\mathbb{G}_{1}^{\rm op})\bar{\otimes} \,\mathrm{L}^{\infty}(\mathbb{G}_{2})=\mathrm{L}^{\infty}(\mathbb{H}_{1})\bar {\otimes}\,\mathrm{L}^{\infty}(\mathbb{H}_{2}),\] \[\Delta_{\mathbb{H}_{1}\times\mathbb{H}_{2}} =(\mathrm{id}\otimes\chi\otimes\mathrm{id})(\Delta_{\mathbb{G}_{1 }^{\rm op}}\otimes\Delta_{\mathbb{G}_{2}})=(\mathrm{id}\otimes\chi\otimes \mathrm{id})(\Delta_{\mathbb{H}_{1}}\otimes\Delta_{\mathbb{H}_{2}}),\]
and we have \(\widehat{\mathbb{H}_{1}\times\mathbb{H}_{2}}=\widehat{\mathbb{H}_{1}}\times \widehat{\mathbb{H}_{2}}\). Consequently, Corollary 7.13 and Theorem 7.15 immediately give the following result.
**Proposition 7.21**.: _The direct product \(\mathbb{H}_{1}\times\mathbb{H}_{2}\) of two locally compact quantum groups \(\mathbb{H}_{1},\mathbb{H}_{2}\) has AP if and only if \(\mathbb{H}_{1}\) and \(\mathbb{H}_{2}\) have AP._
## 8. Categorical AP
In this section we discuss the approximation property in the setting of rigid \(\mathrm{C}^{*}\)-tensor categories, building on [2], [3], [4], [56]. As an application we show in particular that the central approximation property for discrete quantum groups is invariant under monoidal equivalence.
Let us first fix some notation and terminology regarding \(\mathrm{C}^{*}\)-tensor categories, referring to [50] for more details and background. If \(\mathbf{T}\) is a \(\mathrm{C}^{*}\)-category and \(X,Y\in\mathbf{T}\) are objects we write \(\mathbf{T}(X,Y)\) for the space of morphisms from \(X\) to \(Y\). We denote by \(\mathrm{id}_{X}\) or \(\mathrm{id}\) the identity morphism in \(\mathbf{T}(X,X)\). By definition, a \(\mathrm{C}^{*}\)-tensor category is a \(\mathrm{C}^{*}\)-category \(\mathbf{T}\) together with a bilinear \(*\)-functor \(\otimes:\mathbf{T}\times\mathbf{T}\to\mathbf{T}\), a distinguished object \(\openone\in\mathbf{T}\) and unitary natural isomorphisms
\[\openone\otimes X\cong X\cong X\otimes\openone,\qquad(X\otimes Y)\otimes Z \cong X\otimes(Y\otimes Z)\]
satisfying certain compatibility conditions. For simplicity we shall always assume that \(\mathbf{T}\) is _strict_, which means that these unitary natural isomorphisms are identities, and we also assume that the tensor unit \(\openone\) is simple.
A \(\mathrm{C}^{*}\)-tensor category \(\mathbf{T}\) is called _rigid_ if all objects of \(\mathbf{T}\) are dualisable. This means that for every object \(X\in\mathbf{T}\) there exists an object \(X^{\vee}\in\mathbf{T}\) and morphisms \(s_{X}\in\mathbf{T}(X\otimes X^{\vee},\openone),t_{X}\in\mathbf{T}(X^{\vee} \otimes X,\openone)\) which form a standard solution of the conjugate equations. That is, we have
\[(t_{X}\otimes\mathrm{id}_{X^{\vee}})(\mathrm{id}_{X^{\vee}}\otimes s_{X}^{*})= \mathrm{id}_{X^{\vee}},\qquad(s_{X}\otimes\mathrm{id}_{X})(\mathrm{id}_{X} \otimes t_{X}^{*})=\mathrm{id}_{X},\]
and \(s_{X}(f\otimes\mathrm{id})s_{X}^{*}=t_{X}(\mathrm{id}\otimes f)t_{X}^{*}\) for all \(f\in\mathbf{T}(X,X)\). The _quantum trace_\(\mathrm{Tr}_{q}:\mathbf{T}(X,X)\to\mathbb{C}\) of \(X\) is defined by \(\mathrm{Tr}_{q}(f)=s_{X}(f\otimes\mathrm{id})s_{X}^{*}=t_{X}(\mathrm{id} \otimes f)t_{X}^{*}\), and the _quantum dimension_ of \(X\) is \(\mathrm{dim}_{q}(X)=\mathrm{Tr}_{q}(\mathrm{id}_{X})\). Every rigid \(\mathrm{C}^{*}\)-tensor category \(\mathbf{T}\) is semisimple, that is, every object of \(\mathbf{T}\) is isomorphic to a finite direct sum of simple objects. We write \(\mathrm{Irr}(\mathbf{T})\) for the set of isomorphism classes of simple objects in \(\mathbf{T}\), and choose representatives \(X_{i}\in\mathbf{T}\) for elements \(i=[X_{i}]\in\mathrm{Irr}(\mathbf{T})\), with the convention that we also write \(i=0\) for the class \([\openone]\).
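To keep a standard example in mind (this is purely illustrative and not used in the arguments below), one may take \(\mathbf{T}=\operatorname{Rep}(SU_{q}(2))\) with \(0<q\leq 1\); its simple objects \(U_{n}\), \(n\geq 0\), satisfy
\[\dim(U_{n})=n+1,\qquad\dim_{q}(U_{n})=\frac{q^{-(n+1)}-q^{n+1}}{q^{-1}-q},\]
so the quantum dimension strictly exceeds the vector space dimension as soon as \(q<1\), while for \(q=1\) one recovers the representation category of the classical group \(SU(2)\) with \(\dim_{q}=\dim\). The fusion rules, which enter the fusion algebra recalled next, are the Clebsch--Gordan rules \(U_{m}\otimes U_{n}\cong U_{|m-n|}\oplus U_{|m-n|+2}\oplus\cdots\oplus U_{m+n}\).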
The _fusion algebra_\(\mathbb{C}[\mathbf{T}]\) is the vector space with basis \(\mathrm{Irr}(\mathbf{T})\) equipped with the fusion product
\[[X_{i}]\cdot[X_{j}]=\sum_{k\in\mathrm{Irr}(\mathbf{T})}N_{ij}^{k}[X_{k}],\]
where \(N_{ij}^{k}=\mathrm{dim}(\mathbf{T}(X_{i}\otimes X_{j},X_{k}))\), and the \(*\)-structure determined by \([X_{i}]^{*}=[X_{i}^{\vee}]\). We will follow the usual abuse of notation and identify \(X\in\mathrm{Irr}(\mathbf{T})\) with its class \([X]\). The regular representation \(\lambda:\mathbb{C}[\mathbf{T}]\to\mathrm{B}(\ell^{2}(\mathrm{Irr}(\mathbf{T})))\) is defined by \(\lambda(X)(Y)=X\cdot Y\). The _tube algebra_\(\mathrm{Tub}(\mathbf{T})\) is
\[\mathrm{Tub}(\mathbf{T})=\bigoplus_{i,j,k\in\mathrm{Irr}(\mathbf{T})}\mathbf{ T}(X_{k}\otimes X_{i},X_{j}\otimes X_{k}) \tag{8.1}\]
equipped with a suitable multiplication and \(\star\)-structure, see [26, Definition 3.3]. Let us only note that \(\mathrm{Tub}(\mathbf{T})\) is a non-unital \(\star\)-algebra with local units, which are given by the projections \(p_{i}=\mathrm{id}\in\mathbf{T}(\openone\otimes X_{i},X_{i}\otimes\openone )=\mathbf{T}(X_{i},X_{i})\subseteq\mathrm{Tub}(\mathbf{T})\) for \(i\in\mathrm{Irr}(\mathbf{T})\). Moreover, the corner \(p_{0}\,\mathrm{Tub}(\mathbf{T})p_{0}\) corresponding to \([\openone]\in\mathrm{Irr}(\mathbf{T})\) is canonically isomorphic to the fusion algebra \(\mathbb{C}[\mathbf{T}]\), see [26, Proposition 3.1].
There is a natural faithful positive trace \(\mathrm{Tr}:\mathrm{Tub}(\mathbf{T})\to\mathbb{C}\) which vanishes on \(\mathbf{T}(X_{k}\otimes X_{i},X_{j}\otimes X_{k})\) for \(i\neq j\) or \(X_{k}\neq\openone\), and is given by the quantum trace \(\mathrm{Tr}_{q}\) on \(\mathbf{T}(\openone\otimes X_{i},X_{i}\otimes\openone)\) for all \(i\in\mathrm{Irr}(\mathbf{T})\). We write \(\mathrm{L}^{2}(\mathrm{Tub}(\mathbf{T}))\) for the associated GNS-Hilbert space. This yields the regular representation \(\lambda\): \(\mathrm{Tub}(\mathbf{T})\to\mathrm{B}(\mathrm{L}^{2}(\mathrm{Tub}(\mathbf{T})))\) of \(\mathrm{Tub}(\mathbf{T})\). The reduced \(\mathrm{C}^{*}\)-algebra \(\mathrm{C}^{*}_{\mathsf{red}}(\mathrm{Tub}(\mathbf{T}))\) and the von Neumann algebra \(\mathcal{L}(\mathrm{Tub}(\mathbf{T}))\) are defined as the closure of \(\lambda(\mathrm{Tub}(\mathbf{T}))\) in the operator norm and the weak operator topology, respectively. The map \(\mathrm{Tr}\) extends to a n.s.f trace on \(\mathcal{L}(\mathrm{Tub}(\mathbf{T}))\) by [55, Proposition 3.10].
Let us next recall the definition of multipliers on rigid \(\mathrm{C}^{*}\)-tensor categories from the work of Popa-Vaes [56, Section 3].
**Definition 8.1**.: Let \(\mathbf{T}\) be a rigid \(\mathrm{C}^{*}\)-tensor category. A _multiplier_ on \(\mathbf{T}\) is a family \(\theta=(\theta_{X,Y})\) of linear maps \(\theta_{X,Y}:\mathbf{T}(X\otimes Y,X\otimes Y)\to\mathbf{T}(X\otimes Y,X \otimes Y)\) for \(X,Y\in\mathbf{T}\) such that
\[\theta_{X_{2},Y_{2}}(gfh^{*})=g\theta_{X_{1},Y_{1}}(f)h^{*},\] \[\theta_{X_{2}\otimes X_{1},Y_{1}\otimes Y_{2}}(\mathrm{id}_{X_{2} }\otimes f\otimes\mathrm{id}_{Y_{2}})=\mathrm{id}_{X_{2}}\otimes\theta_{X_{1},Y_{1}}(f)\otimes\mathrm{id}_{Y_{2}},\]
for all \(X_{i},Y_{i}\in\mathbf{T}\), \(f\in\mathbf{T}(X_{1}\otimes Y_{1},X_{1}\otimes Y_{1})\) and \(g,h\in\mathbf{T}(X_{1},X_{2})\otimes\mathbf{T}(Y_{1},Y_{2})\subseteq\mathbf{ T}(X_{1}\otimes Y_{1},X_{2}\otimes Y_{2})\). A multiplier \(\theta=(\theta_{X,Y})\) on \(\mathbf{T}\) is said to be completely positive (or a _CP multiplier_) if all the maps \(\theta_{X,Y}\) are completely positive. A multiplier \(\theta=(\theta_{X,Y})\) on \(\mathbf{T}\) is said to be completely bounded (or a _CB multiplier_) if all the maps \(\theta_{X,Y}\) are completely bounded and \(\|\theta\|_{cb}=\sup_{X,Y\in\mathbf{T}}\|\theta_{X,Y}\|_{cb}<\infty\).
It is shown in [56, Proposition 3.6] that multipliers on \(\mathbf{T}\) are in canonical bijection with functions \(\mathrm{Irr}(\mathbf{T})\to\mathbb{C}\). We will often identify a multiplier \(\theta=(\theta_{X,Y})\) with its associated function \(\theta=(\theta(k))_{k\in\mathrm{Irr}(\mathbf{T})}\). Note that we have \(\|(\theta(k))_{k\in\mathrm{Irr}(\mathbf{T})}\|_{\infty}\leq\|\theta\|_{cb}\).
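As an orientational aside (using only standard facts about discrete duals), this identification of multipliers with functions on \(\operatorname{Irr}(\mathbf{T})\) connects directly with the quantum group setting: for \(\mathbf{T}=\operatorname{Rep}(\mathbb{G})\) with \(\mathbb{G}\) a compact quantum group, a bounded function \(\theta\colon\operatorname{Irr}(\mathbf{T})=\operatorname{Irr}(\mathbb{G})\to\mathbb{C}\) is the same data as a central element
\[c_{\theta}=(\theta(\alpha)\mathbb{1}_{\alpha})_{\alpha\in\operatorname{Irr}(\mathbb{G})}\in\mathcal{Z}\big{(}\ell^{\infty}(\widehat{\mathbb{G}})\big{)},\]
where \(\ell^{\infty}(\widehat{\mathbb{G}})\) is the \(\ell^{\infty}\)-direct sum of the matrix algebras \(\mathrm{B}(\mathsf{H}_{\alpha})\); this is the mechanism behind the relation between multipliers on \(\mathbf{T}\) and the central approximation property mentioned at the beginning of this section.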
Let us write \(\mathrm{M}_{cb}(\mathbf{T})\) for the space of CB multipliers on \(\mathbf{T}\). Via composition of maps and the CB norm this becomes naturally a Banach algebra. From the definition of the correspondence between functions on \(\mathrm{Irr}(\mathbf{T})\) and multipliers, compare [56, Formula (3.5)], it follows that the product on \(\mathrm{M}_{cb}(\mathbf{T})\) corresponds to pointwise multiplication of functions. It is shown in [2, Corollary 5.3] that \(\mathrm{M}_{cb}(\mathbf{T})\) is a dual Banach algebra, whose predual \(Q(\mathbf{T})\) can be constructed using the tube algebra of \(\mathbf{T}\). More specifically, if \(\theta\in\mathrm{M}_{cb}(\mathbf{T})\) is a CB multiplier, define \(M_{\theta}\colon\mathrm{Tub}(\mathbf{T})\to\mathrm{Tub}(\mathbf{T})\) by
\[M_{\theta}(f)=\theta(k)f,\qquad f\in\mathbf{T}(X_{k}\otimes X_{i},X_{j}\otimes X_ {k})\subseteq\mathrm{Tub}(\mathbf{T}).\]
Due to [4, Proposition 5.1] the map \(M\) gives an isometric embedding of \(\mathrm{M}_{cb}(\mathbf{T})\) into the space \(\mathrm{CB}^{\sigma}(\mathcal{L}(\mathrm{Tub}(\mathbf{T})))\) of normal CB maps on \(\mathcal{L}(\mathrm{Tub}(\mathbf{T}))\), and also an isometric embedding with weak\({}^{*}\)-closed image \(\mathrm{M}_{cb}(\mathbf{T})\to\mathrm{CB}(\mathrm{C}^{*}_{\mathsf{red}}( \mathrm{Tub}(\mathbf{T})),\mathcal{L}(\mathrm{Tub}(\mathbf{T})))\). Then the predual \(Q(\mathbf{T})\) of \(\mathrm{M}_{cb}(\mathbf{T})\) obtained in [2] is constructed as the resulting quotient of the predual \(\mathrm{C}^{*}_{\mathsf{red}}(\mathrm{Tub}(\mathbf{T}))\widehat{\otimes} \mathcal{L}(\mathrm{Tub}(\mathbf{T}))_{*}\) of \(\mathrm{CB}(\mathrm{C}^{*}_{\mathsf{red}}(\mathrm{Tub}(\mathbf{T})),\mathcal{L }(\mathrm{Tub}(\mathbf{T})))\).
We can approximate elements of \(Q(\mathbf{T})\) by taking tensor products of elements in \(\mathrm{Tub}(\mathbf{T})\) and vector functionals associated to vectors from \(\mathrm{L}^{2}(\mathrm{Tub}(\mathbf{T}))\). Noting that \(\mathrm{Tub}(\mathbf{T})\) is dense in \(\mathrm{L}^{2}(\mathrm{Tub}(\mathbf{T}))\), a linearly dense collection of functionals in \(\mathcal{L}(\mathrm{Tub}(\mathbf{T}))_{*}\) is given by \(\mathcal{L}(\mathrm{Tub}(\mathbf{T}))\ni T\mapsto\left\langle f\,\middle|\,Tg \right\rangle=\mathrm{Tr}(f^{*}Tg)=\mathrm{Tr}(Tgf^{*})\) for \(f,g\in\mathrm{Tub}(\mathbf{T})\). As \(\mathrm{Tub}(\mathbf{T})\) has local units, it suffices to look at functionals of the form \(T\mapsto\mathrm{Tr}(Tf)\) for \(f\in\mathrm{Tub}(\mathbf{T})\). Under this identification, the canonical pairing of \(f\otimes g\in\mathrm{Tub}(\mathbf{T})\odot\mathrm{Tub}(\mathbf{T})\subseteq \mathrm{C}^{*}_{\mathsf{red}}(\mathrm{Tub}(\mathbf{T}))\widehat{\otimes} \mathcal{L}(\mathrm{Tub}(\mathbf{T}))_{*}\) with \(\theta\in\mathrm{M}_{cb}(\mathbf{T})\) becomes
\[\left\langle\theta,f\otimes g\right\rangle=\mathrm{Tr}(gM_{\theta}(f)). \tag{8.2}\]
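For orientation, this pairing can be evaluated directly on elementary tensors. If \(f\in\mathbf{T}(X_{k}\otimes X_{i},X_{j}\otimes X_{k})\subseteq\mathrm{Tub}(\mathbf{T})\) and \(g\in\mathrm{Tub}(\mathbf{T})\), then the definition of \(M_{\theta}\) gives

\[\left\langle\theta,f\otimes g\right\rangle=\mathrm{Tr}(gM_{\theta}(f))=\theta(k)\,\mathrm{Tr}(gf),\]

so a functional of this form only probes the single value \(\theta(k)\), weighted by \(\mathrm{Tr}(gf)\). This is the computation underlying Lemma 8.2 below.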
Let us define a weighted \(\ell^{1}\)-norm on \(\mathrm{c}_{00}(\mathrm{Irr}(\mathbf{T}))\) by
\[\|f\|_{1}=\sum_{k\in\mathrm{Irr}(\mathbf{T})}\mathrm{dim}_{q}(X_{k})|f(k)|,\]
and denote by \(\ell^{1}(\mathrm{Irr}(\mathbf{T}))\) the corresponding completion, compare [56, Remark 10.4]. The weighting by quantum dimensions ensures that admissible \(*\)-representations of \(\mathbb{C}[\mathbf{T}]\) in the sense of [56, Definition 4.1] extend to contractive \(*\)-representations of \(\ell^{1}(\mathrm{Irr}(\mathbf{T}))\). Note that there is a contractive embedding \(\iota\colon\ell^{1}(\mathrm{Irr}(\mathbf{T}))\to\mathrm{M}_{cb}(\mathbf{T})^{*}\) given by
\[\iota(\omega)(\theta)=\sum_{k\in\mathrm{Irr}(\mathbf{T})}\mathrm{dim}_{q}(X_{ k})\omega(k)\theta(k)\qquad(\omega\in\ell^{1}(\mathrm{Irr}(\mathbf{T})),\theta\in \mathrm{M}_{cb}(\mathbf{T})).\]
**Lemma 8.2**.: _Let \(\mathbf{T}\) be a rigid \(\mathrm{C}^{*}\)-tensor category. Then the Banach space \(Q(\mathbf{T})\) can be identified with the closure of \(\ell^{1}(\mathrm{Irr}(\mathbf{T}))\) in \(\mathrm{M}_{cb}(\mathbf{T})^{*}\) under the embedding \(\iota\)._
Proof.: It follows from the explicit formulas that the image of the subspace \(\mathrm{Tub}(\mathbf{T})\odot\mathrm{Tub}(\mathbf{T})\subseteq\mathrm{C}^{*}_{ \mathsf{red}}(\mathrm{Tub}(\mathbf{T}))\widehat{\otimes}\mathcal{L}(\mathrm{Tub }(\mathbf{T}))_{*}\) in \(\mathrm{M}_{cb}(\mathbf{T})^{*}\) agrees with the image of \(\mathrm{c}_{00}(\mathrm{Irr}(\mathbf{T}))\) under the map \(\iota\). Indeed, that we obtain all of \(\mathrm{c}_{00}(\mathrm{Irr}(\mathbf{T}))\) in this way can be seen by considering \(f,g\in p_{0}\,\mathrm{Tub}(\mathbf{T})p_{0}\cong\mathbb{C}[\mathbf{T}]\) in (8.2). The claim therefore follows from density of the former space inside \(Q(\mathbf{T})\).
Remembering that the weak\({}^{*}\)-topology on \(\mathrm{M}_{cb}(\mathbf{T})\) means the one induced by the predual \(Q(\mathbf{T})\), we shall now give the following definition.
**Definition 8.3**.: Let \(\mathbf{T}\) be a rigid \(\mathrm{C}^{*}\)-tensor category. We say that \(\mathbf{T}\) has the _approximation property (AP)_ if there exists a net of finitely supported CB multipliers of \(\mathbf{T}\) converging to \(\mathbb{1}\) in the weak\({}^{*}\)-topology of \(\mathrm{M}_{cb}(\mathbf{T})\).
Comparing with [56, Definition 5.1] we see that every weakly amenable rigid \(\mathrm{C}^{*}\)-tensor category has AP. Indeed, a uniformly bounded net of finitely supported CB multipliers converging pointwise to \(\mathbb{1}\) converges also in the weak\({}^{*}\)-topology since \(\mathrm{c}_{00}(\mathrm{Irr}(\mathbf{T}))\) is dense in \(Q(\mathbf{T})\) by Lemma 8.2.
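For completeness, here is a sketch of the density argument, using only Lemma 8.2 and the bound \(\|(\theta(k))_{k\in\mathrm{Irr}(\mathbf{T})}\|_{\infty}\leq\|\theta\|_{cb}\). Suppose \(C=\sup_{i}\|\theta_{i}\|_{cb}<\infty\) and \(\theta_{i}(k)\to 1\) for every \(k\in\mathrm{Irr}(\mathbf{T})\). For \(z\in\mathrm{c}_{00}(\mathrm{Irr}(\mathbf{T}))\) we then have

\[|\langle\theta_{i}-\mathbb{1},\iota(z)\rangle|\leq\sum_{k\in\operatorname{supp}z}\mathrm{dim}_{q}(X_{k})\,|z(k)|\,|\theta_{i}(k)-1|\xrightarrow[i]{}0,\]

and for a general \(\omega\in Q(\mathbf{T})\) one picks \(z\) with \(\|\omega-\iota(z)\|\leq\varepsilon\) and estimates \(|\langle\theta_{i}-\mathbb{1},\omega\rangle|\leq(C+1)\varepsilon+|\langle\theta_{i}-\mathbb{1},\iota(z)\rangle|\).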
Next recall the definition of the central approximation property for discrete quantum groups from Definition 6.3. We aim to show that the central approximation property for a discrete quantum group \(\mathbb{I}\) is equivalent to the approximation property for the rigid \(\mathrm{C}^{*}\)-tensor category \(\mathsf{Corep}(\mathbb{I})\) of finite dimensional unitary corepresentations of \(\mathbb{I}\) (i.e. representations of \(\widehat{\mathbb{I}}\)). To make the notation more coherent, we will write in this section \(\mathbb{C}[\mathbb{I}]=\mathrm{Pol}(\widehat{\mathbb{I}})\) and \(\mathrm{C}^{*}_{\mathsf{red}}(\mathbb{I})=\mathrm{C}(\widehat{\mathbb{I}})\), \(\mathcal{L}(\mathbb{I})=\mathrm{L}^{\infty}(\widehat{\mathbb{I}})\), and use the same conventions for the Drinfeld double \(D(\mathbb{I})\).
We shall first discuss the relation between categorical AP for \(\mathsf{Corep}(\mathbb{I})\) and AP for the Drinfeld double \(D(\mathbb{I})\) of \(\mathbb{I}\). Recall from Section 7.4.1 that \(\mathrm{L}^{\infty}(D(\mathbb{I}))=\ell^{\infty}(\mathbb{I})\widehat{\otimes} \mathcal{L}(\mathbb{I})\) with the coproduct
\[\Delta_{D(\mathbb{I})}=(\mathrm{id}\otimes\chi\otimes\mathrm{id})(\mathrm{id }\otimes\mathrm{ad}(\mathrm{W})\otimes\mathrm{id})(\Delta\otimes\widehat{\Delta}),\]
where \(\mathrm{W}\in\ell^{\infty}(\mathbb{I})\bar{\otimes}\mathcal{L}(\mathbb{I})\) is the Kac-Takesaki operator of \(\mathbb{I}\). Note also that we have a canonical identification of the center \(\mathcal{Z}\ell^{\infty}(\mathbb{I})\) of \(\ell^{\infty}(\mathbb{I})\) with \(\ell^{\infty}(\mathrm{Irr}(\mathsf{Corep}(\mathbb{I})))=\ell^{\infty}(\mathrm{ Irr}(\widehat{\mathbb{I}}))\). In particular, every multiplier \(\theta\in\mathrm{M}_{cb}(\mathsf{Corep}(\mathbb{I}))\) can be viewed as a (central) element of \(\ell^{\infty}(\mathbb{I})\).
In what follows we will use the notion of an _algebraic quantum group_, see [67], [70, Section 3.2]. By definition, an algebraic quantum group is a multiplier Hopf \(\star\)-algebra for which there exists a positive left invariant functional and a positive right invariant functional. For example, if \(\mathbb{\Gamma}\) is a discrete quantum group then \(\mathrm{c}_{00}(\mathbb{\Gamma})\) and \(\mathbb{C}[\mathbb{\Gamma}]\) equipped with their respective comultiplications and Haar integrals are examples of algebraic quantum groups. Every algebraic quantum group gives rise to a locally compact quantum group in the sense of Kustermans-Vaes via an appropriate completion procedure, see [44]. Moreover, one finds that all elements of the underlying multiplier Hopf \(\star\)-algebra are contained in the Fourier algebra of the locally compact quantum group, see e.g. the end of Section 1 in [44].
The Drinfeld double \(D(\mathbb{\Gamma})\) of a discrete quantum group \(\mathbb{\Gamma}\) and its dual \(\widehat{D(\mathbb{\Gamma})}\) are also algebraic quantum groups. The corresponding multiplier Hopf \(\ast\)-algebras are \(\mathrm{c}_{00}(\mathbb{\Gamma})\odot\mathbb{C}[\mathbb{\Gamma}]\subseteq \mathrm{L}^{\infty}(D(\mathbb{\Gamma}))\) and
\[\mathcal{D}(D(\mathbb{\Gamma}))=\mathbb{C}[\mathbb{\Gamma}]\bowtie\mathrm{c }_{00}(\mathbb{\Gamma})=\mathrm{span}\{\gamma_{1}(\widehat{x})\gamma_{2}(x) \,|\,\widehat{x}\in\mathbb{C}[\mathbb{\Gamma}],x\in\mathrm{c}_{00}(\mathbb{ \Gamma})\}\subseteq\mathcal{L}(D(\mathbb{\Gamma})),\]
where
\[\gamma_{1} \colon\mathcal{L}(\mathbb{\Gamma})\to\mathcal{L}(D(\mathbb{\Gamma }))\colon\widehat{x}\mapsto\widehat{x}\otimes\mathbb{1},\] \[\gamma_{2} \colon\ell^{\infty}(\mathbb{\Gamma})\to\mathcal{L}(D(\mathbb{ \Gamma}))\colon x\mapsto Z^{\ast}(\mathbb{1}\otimes x)Z\]
are the maps introduced in Lemma 7.12. We will write \(\gamma_{1}(\widehat{x})\gamma_{2}(x)=\widehat{x}\bowtie x\) for \(\widehat{x}\in\mathbb{C}[\mathbb{\Gamma}],x\in\mathrm{c}_{00}(\mathbb{\Gamma})\).
**Lemma 8.4**.: _Let \(\mathbb{\Gamma}\) be a discrete quantum group and let \(D(\mathbb{\Gamma})\) be its Drinfeld double. There is an isometric embedding_
\[N\colon\,\mathrm{M}_{cb}(\mathsf{Corep}(\mathbb{\Gamma}))\to\mathrm{CB}^{ \sigma}(\mathcal{L}(D(\mathbb{\Gamma})))\colon\theta\mapsto N_{\theta}\]
_given by_
\[N_{\theta}(U^{\alpha}_{i,j}\bowtie x_{\beta})=\theta(\alpha)U^{\alpha}_{i,j} \bowtie x_{\beta}\]
_for \(U^{\alpha}_{i,j}\in\mathbb{C}[\mathbb{\Gamma}],x_{\beta}\in\mathrm{B}(\mathsf{ H}_{\beta})\subseteq\mathrm{c}_{00}(\mathbb{\Gamma})\). If \(\theta\in\mathrm{M}_{cb}(\mathsf{Corep}(\mathbb{\Gamma}))\) then \(\theta\otimes\mathbb{1}\in\mathrm{M}_{cb}^{l}(\mathrm{A}(D(\mathbb{\Gamma}))) \subseteq\mathrm{L}^{\infty}(D(\mathbb{\Gamma}))\) and \(N_{\theta}=\Theta^{l}(\theta\otimes\mathbb{1})\)._
Proof.: We wish to apply the results of [51, Section 3]. For this we need to work with the annular algebra
\[\mathrm{Tub}(\mathbb{\Gamma})=\bigoplus_{\alpha,\beta\in\mathrm{Irr}(\mathbb{ \Gamma})}\Big{(}\bigoplus_{\gamma\in\mathrm{Irr}(\mathbb{\Gamma})}\mathrm{Mor }(\gamma\otimes\alpha,\beta\otimes\gamma)\Big{)}\otimes\mathrm{B}(\mathsf{H}_{ \overline{\alpha}},\mathsf{H}_{\overline{\beta}}),\]
which is equipped with the multiplication given by the product from \(\mathrm{Tub}(\mathsf{Corep}(\mathbb{\Gamma}))\) in (8.1) and the composition of operators between the Hilbert spaces \(\mathsf{H}_{\gamma}\). We refer to [26, Section 3] for the general definition of annular algebras associated with full weight sets.
We again obtain a trace on \(\mathrm{Tub}(\mathbb{\Gamma})\) and so can perform the GNS construction, and construct the associated von Neumann algebra \(\mathcal{L}(\mathrm{Tub}(\mathbb{\Gamma}))\). Furthermore, [4, Proposition 5.1] applies, and we obtain a map \(\widetilde{M}\colon\,\mathrm{M}_{cb}(\mathsf{Corep}(\mathbb{\Gamma}))\to \mathrm{CB}^{\sigma}(\mathcal{L}(\mathrm{Tub}(\mathbb{\Gamma})))\) given by \(\widetilde{M}_{\theta}(f)=\theta(\gamma)f\) for \(f=f^{\prime}\otimes T\in\mathrm{Mor}(\gamma\otimes\alpha,\beta\otimes\gamma )\otimes\mathrm{B}(\mathsf{H}_{\overline{\alpha}},\mathsf{H}_{\overline{\beta} })\subseteq\mathrm{Tub}(\mathbb{\Gamma})\subseteq\mathcal{L}(\mathrm{Tub}( \mathbb{\Gamma}))\), which is a well-defined isometric embedding.
It is shown in [51, Theorem 3.5] that there is a \(\star\)-isomorphism between \(\mathrm{Tub}(\mathbb{\Gamma})\) and the algebraic convolution algebra \(\mathcal{D}(D(\mathbb{\Gamma}))=\mathbb{C}[\mathbb{\Gamma}]\bowtie\mathrm{c}_{00 }(\mathbb{\Gamma})\) of the Drinfeld double of \(\mathbb{\Gamma}\). Under this isomorphism, the trace \(\mathrm{Tr}\) on \(\mathrm{Tub}(\mathbb{\Gamma})\) does not correspond to the left invariant functional on \(\mathcal{D}(D(\mathbb{\Gamma}))\) on the nose, but both functionals can be obtained from one another by multiplication with a positive invertible element in the algebraic multiplier algebra of \(\mathrm{Tub}(\mathbb{\Gamma})\cong\mathcal{D}(D(\mathbb{\Gamma}))\). It follows that the regular representations of \(\mathrm{Tub}(\mathbb{\Gamma})\cong\mathcal{D}(D(\mathbb{\Gamma}))\) on \(\mathrm{L}^{2}(\mathrm{Tub}(\mathbb{\Gamma}))\) and \(\mathrm{L}^{2}(D(\mathbb{\Gamma}))\) are unitarily equivalent, which means that the isomorphism in [51, Theorem 3.5] induces a normal \(\star\)-isomorphism \(\mathcal{L}(\mathrm{Tub}(\mathbb{\Gamma}))\cong\mathcal{L}(D(\mathbb{\Gamma}))\), which restricts to a \(\star\)-isomorphism \(\mathrm{C}^{\ast}_{\mathsf{red}}(\mathrm{Tub}(\mathbb{\Gamma}))\cong\mathcal{C} ^{\ast}_{\mathsf{red}}(D(\mathbb{\Gamma}))\).
Inspecting the formulas in [51] one checks that \(\widetilde{M}_{\theta}\colon\mathrm{Tub}(\mathbb{\Gamma})\to\mathrm{Tub}( \mathbb{\Gamma})\) identifies under the isomorphism \(\mathrm{Tub}(\mathbb{\Gamma})\cong\mathcal{D}(D(\mathbb{\Gamma}))\) with the map \(N_{\theta}\colon\mathcal{D}(D(\mathbb{\Gamma}))\to\mathcal{D}(D(\mathbb{\Gamma}))\) in the statement of the lemma. Consequently, we see that \(N_{\theta}\) extends to a normal CB map on \(\mathcal{L}(D(\mathbb{\Gamma}))\).
An explicit formula for the multiplication in \(\mathcal{D}(D(\mathbb{\Gamma}))\) is given in [70, page 219], though be aware that there the factors \(\mathbb{C}[\mathbb{\Gamma}]\) and \(\mathrm{c}_{00}(\mathbb{\Gamma})\) are swapped. In particular, for \(x\in\mathrm{c}_{00}(\mathbb{\Gamma})\) we have that \(\gamma_{2}(x)\gamma_{1}(U^{\alpha}_{i,j})\in\mathrm{span}\{\gamma_{1}(U^{ \alpha}_{k,l})\gamma_{2}(y)\,|\,1\leq k,l\leq\dim(\alpha),y\in\mathrm{c}_{00}( \mathbb{\Gamma})\}\) and so \(N_{\theta}(\gamma_{2}(x)\gamma_{1}(U^{\alpha}_{i,j}))=\theta(\alpha)\gamma_{2}(x) \gamma_{1}(U^{\alpha}_{i,j})\).
From Section 7.4.1, we find that
\[\mathrm{W}^{D(\Gamma)*}=Z_{34}^{*}\widehat{\mathrm{W}}_{24}^{*}Z_{34}\mathrm{W}_ {13}^{*}=(\mathrm{id}\otimes\gamma_{2})(\widehat{\mathrm{W}}^{*})_{234}( \mathrm{id}\otimes\gamma_{1})(\mathrm{W}^{*})_{134},\]
where \(\mathrm{W}=\mathrm{W}^{\mathbb{\Gamma}}\) denotes the Kac-Takesaki operator of \(\mathbb{\Gamma}\), as before.
closed quantum subgroup of \(D(\mathbb{T})\), see Lemma 7.12. As \(\gamma_{1}(\widehat{x})=\widehat{x}\otimes\mathbb{1}\), it follows from Lemma 8.4 that \(\Theta^{l}(\theta\otimes\mathbb{1})=N_{\theta}\) leaves the image of \(\gamma_{1}\) invariant, and so we obtain a map \(L_{\theta}\in\operatorname{CB}^{\sigma}(\mathcal{L}(\mathbb{T}))\) with \(\gamma_{1}L_{\theta}=N_{\theta}\gamma_{1}\) and \(\|L_{\theta}\|_{cb}\leq\|N_{\theta}\|_{cb}=\|\theta\|_{cb}\). In particular, \(L_{\theta}(U^{\alpha}_{i,j})=\theta(\alpha)U^{\alpha}_{i,j}\) for each \(\alpha,i,j\). Thus \(\theta\in\operatorname{M}^{l}_{cb}(\operatorname{A}(\mathbb{T}))\), with \(\|\theta\|_{\operatorname{M}_{cb}(\operatorname{Corep}(\mathbb{T}))}\geq\| \theta\|_{\operatorname{M}^{l}_{cb}(\operatorname{A}(\mathbb{T}))}\).
As these identifications are mutual inverses, we have shown that \(\operatorname{M}_{cb}(\operatorname{Corep}(\mathbb{T}))\cong\mathcal{Z} \operatorname{M}^{l}_{cb}(\operatorname{A}(\mathbb{T}))\) isometrically. Let \(\gamma:\operatorname{M}_{cb}(\operatorname{Corep}(\mathbb{T}))\to \mathcal{Z}\operatorname{M}^{l}_{cb}(\operatorname{A}(\mathbb{T}))\) be the resulting isometric isomorphism, which we claim is weak\({}^{*}\)-weak\({}^{*}\)-continuous. We again use Lemma 3.7, with \(E=Q(\operatorname{Corep}(\mathbb{T}))\) and \(F\) the predual of \(\mathcal{Z}\operatorname{M}^{l}_{cb}(\operatorname{A}(\mathbb{T}))\), with \(D\subseteq F\) to be constructed. As the predual of \(\mathcal{Z}\operatorname{M}^{l}_{cb}(\operatorname{A}(\mathbb{T}))\) is a quotient of \(Q^{l}(\operatorname{A}(\mathbb{T}))\), it suffices to take \(D\) to be the image under the quotient map of a linearly dense subset \(D^{\prime}\) of \(Q^{l}(\operatorname{A}(\mathbb{T}))\). We take \(D^{\prime}\subseteq\ell^{1}(\mathbb{T})\subseteq Q^{l}(\operatorname{A}( \mathbb{T}))\) to consist of all linear functionals \(\omega\) constructed by choosing \(x\in\operatorname{c}_{00}(\mathbb{T})\) and defining \(\langle y,\omega\rangle=\sum_{\alpha\in\operatorname{Irr}(\widehat{\mathbb{T} })}\dim_{q}(\alpha)\operatorname{Tr}_{\alpha}(y_{\alpha}x_{\alpha})\) for \(y\in\ell^{\infty}(\mathbb{T})\). Given \(\theta\in\operatorname{M}_{cb}(\operatorname{Corep}(\mathbb{T}))\) and \(\omega\in D\) induced by \(x\in\operatorname{c}_{00}(\mathbb{T})\), we see that
\[\langle\gamma(\theta),\omega\rangle=\sum_{\alpha\in\operatorname{Irr}( \widehat{\mathbb{T}})}\dim_{q}(\alpha)\theta(\alpha)\operatorname{Tr}_{\alpha} (x_{\alpha}),\]
where the sum is finite. Hence if we set \(z=\left(\operatorname{Tr}_{\alpha}(x_{\alpha})\right)_{\alpha\in\operatorname {Irr}(\widehat{\mathbb{T}})}\in\operatorname{c}_{00}(\operatorname{Irr}( \widehat{\mathbb{T}}))\) then \(\langle\gamma(\theta),\omega\rangle=\langle\theta,\iota(z)\rangle\), where \(\iota\) is the embedding of \(\ell^{1}(\operatorname{Irr}(\widehat{\mathbb{T}}))\) into \(Q(\operatorname{Corep}(\mathbb{T}))\) as in Lemma 8.2. It follows that \(\omega\circ\gamma\in Q(\operatorname{Corep}(\mathbb{T}))\), and hence \(\gamma^{*}\kappa_{F}(D)\subseteq\kappa_{E}(E)\). Now Lemma 3.7 yields the claim.
Let us now compare the categorical approximation property of \(\operatorname{Corep}(\mathbb{T})\) with the central approximation property of \(\mathbb{T}\).
**Proposition 8.7**.: _A discrete quantum group \(\mathbb{T}\) has central AP if and only if \(\operatorname{Corep}(\mathbb{T})\) has AP._
Proof.: The claim follows from Lemma 8.6, as the isomorphism \(\operatorname{M}_{cb}(\operatorname{Corep}(\mathbb{T}))\simeq\mathcal{Z} \operatorname{M}^{l}_{cb}(\operatorname{A}(\mathbb{T}))\) is a unital weak\({}^{*}\)-weak\({}^{*}\)-homeomorphism which restricts to \(\operatorname{c}_{00}(\operatorname{Irr}(\widehat{\mathbb{T}}))\simeq\mathcal{Z} \operatorname{c}_{00}(\mathbb{T})\).
As a consequence of Proposition 8.7 we see that central AP is invariant under monoidal equivalence.
**Corollary 8.8**.: Let \(\mathbb{T}\) and \(\mathbb{A}\) be discrete quantum groups such that \(\widehat{\mathbb{T}}\) and \(\widehat{\mathbb{A}}\) are monoidally equivalent. Then \(\mathbb{T}\) has central AP if and only if \(\mathbb{A}\) has central AP.
Proof.: According to the definition of monoidal equivalence [8], the \(\operatorname{C}^{*}\)-tensor categories \(\operatorname{Corep}(\mathbb{T})\) and \(\operatorname{Corep}(\mathbb{A})\) are unitarily monoidally equivalent. This means that \(\operatorname{Corep}(\mathbb{T})\) has AP if and only if \(\operatorname{Corep}(\mathbb{A})\) has AP. Due to Proposition 8.7 this yields the claim.
Finally, let us relate AP of \(\operatorname{Corep}(\mathbb{T})\) and \(D(\mathbb{T})\).
**Proposition 8.9**.: _Let \(\mathbb{T}\) be a discrete quantum group such that \(\operatorname{Corep}(\mathbb{T})\) has AP. Then the Drinfeld double \(D(\mathbb{T})\) has AP. If \(\mathbb{T}\) is unimodular, then the converse also holds: AP of \(D(\mathbb{T})\) implies AP of \(\operatorname{Corep}(\mathbb{T})\)._
Proof.: Due to Lemma 8.4 we have an isometric embedding \(\operatorname{M}_{cb}(\operatorname{Corep}(\mathbb{T}))\to\operatorname{M}^{l} _{cb}(\operatorname{A}(D(\mathbb{T})))\) given by \(\theta\mapsto\theta\otimes\mathbb{1}\). As \(\operatorname{Corep}(\mathbb{T})\) has AP, there is a net \((\theta_{i})_{i\in I}\) of finitely supported elements in \(\operatorname{M}_{cb}(\operatorname{Corep}(\mathbb{T}))\) with \(\theta_{i}\xrightarrow[i\in I]{}\mathbb{1}\) in the weak\({}^{*}\)-topology. By Lemma 8.5, it follows that the net \((N_{\theta_{i}})_{i\in I}\) converges weak\({}^{*}\) to the inclusion map in \(\operatorname{CB}\bigl{(}C^{*}_{\mathsf{red}}(D(\mathbb{T})),\mathcal{L}(D( \mathbb{T}))\bigr{)}\). As \(N_{\theta_{i}}=\Theta^{l}(\theta_{i}\otimes\mathbb{1})\) for each \(i\), by Theorem 3.8 this means that \(\theta_{i}\otimes\mathbb{1}\xrightarrow[i\in I]{}\mathbb{1}\otimes\mathbb{1}\) weak\({}^{*}\) in \(\operatorname{M}^{l}_{cb}(\operatorname{A}(D(\mathbb{T})))\). Since the elements \(\theta_{i}\otimes\mathbb{1}\) belong to the multiplier Hopf \(\star\)-algebra, they also belong to the Fourier algebra \(\operatorname{A}(D(\mathbb{T}))\) and we conclude that \(D(\mathbb{T})\) has AP.
If \(D(\mathbb{T})\) has AP then by Theorem 7.1 the same is true for \(\mathbb{T}\) since \(\mathbb{T}\) is a closed quantum subgroup of \(D(\mathbb{T})\). If \(\mathbb{T}\) is in addition unimodular, then AP of \(\mathbb{T}\) implies central AP of \(\mathbb{T}\) by Proposition 6.8, and consequently AP of \(\operatorname{Corep}(\mathbb{T})\) due to Proposition 8.7.
2301.08581 | Tilted discs in six poorly studied cataclysmic variables | In this work, we search for negative superhumps (nSHs) in poorly studied
cataclysmic variables using TESS data. We find three eclipsing binaries with
nSH signatures: HBHA 4204-09, Gaia DR3 5931071148325476992, and SDSS
J090113.51+144704.6. The last one exhibits IW And-like behaviour in archival
ZTF data, and appears to have shallow, grazing eclipses. In addition, we detect
nSH signatures in two non-eclipsing systems: KQ Mon and Gaia DR3
4684361817175293440, by identifying the orbital period from the
superorbital-dependent irradiation of the secondary. We discover nSH signatures
in one more system, [PK2008] HalphaJ103959, by using an orbital period from
another work. An improved mass ratio - nSH deficit relation $q(\varepsilon_-)$
is suggested by us, which agrees with independent measurements on nova-like
variables. With this relation, we estimate the mass ratios of all systems in
our sample, and determine the orbital inclinations for the three that are
eclipsing. All systems with discovered nSHs in this work are excellent targets
for follow-up spectroscopic studies. | Stefan Y. Stefanov, Atanas K. Stefanov | 2023-01-20T13:52:14Z | http://arxiv.org/abs/2301.08581v1 | # Tilted discs in six poorly studied cataclysmic variables
###### Abstract
In this work, we search for negative superhumps (nSHs) in poorly studied cataclysmic variables using TESS data. We find three eclipsing binaries with nSH signatures: HBHA 4204-09, Gaia DR3 5931071148325476992, and SDSS J090113.51+144704.6. The last one exhibits IW And-like behaviour in archival ZTF data, and appears to have shallow, grazing eclipses. In addition, we detect nSH signatures in two non-eclipsing systems: KQ Mon and Gaia DR3 4684361817175293440, by identifying the orbital period from the superorbital-dependent irradiation of the secondary. We discover nSH signatures in one more system, [PK2008] HalphaJ103959, by using an orbital period from another work. An improved mass ratio - nSH deficit relation \(q(\varepsilon_{-})\) is suggested by us, which agrees with independent measurements on nova-like variables. With this relation, we estimate the mass ratios of all systems in our sample, and determine the orbital inclinations for the three that are eclipsing. All systems with discovered nSHs in this work are excellent targets for follow-up spectroscopic studies.
keywords: stars: activity - binaries: close - novae, cataclysmic variables - stars: individual: HBHA 4204-09, Gaia DR3 4684361817175293440, KQ Mon, SDSS J090113.51+144704.6, Gaia DR3 5931071148325476992, [PK2008] HalphaJ103959
## 1 Introduction
Cataclysmic variables (CVs) are binary systems that consist of a white-dwarf (WD) primary and a Roche-lobe filling secondary. Matter from the secondary flows through the first Lagrangian point and accretes on to the primary. In the case of a non-magnetic or a very weakly magnetic primary, this mass transfer happens through an accretion disc (Hellier, 2001). In systems with mass-transfer rates of \(\dot{\rm M}\simeq 1-5\times 10^{-9}\) M\({}_{\odot}\)yr\({}^{-1}\), thermal instabilities arise in the accretion disc and cause repeating quasi-periodic outbursts. These outbursts usually occur once about every few months, last several days, and can increase the system brightness with up to \(\sim 5\) mag. CVs with recorded outbursts are termed dwarf novae (DNe), whereas CVs with no recorded outbursts are termed nova-likes (NLs). In NLs, most of the flux originates from the accretion disc, which is in a hot steady state and is much brighter than the two system components. The orbital periods of this type of variables can range from \(\sim 1\) h to more than 10 h. Not many CVs, however, are observed in the period range of 2-3 h. This phenomenon is called the "period gap" and is explained by transitions in evolutionary stages of this type of variables (see Warner, 1995 for an encyclopedic description of CVs).
NLs can change their brightness on time-scales from seconds to millennia. Some systems have drops in brightness of several magnitudes, which can last from months to years. This behaviour is most commonly observed in systems with orbital periods (\(P_{\rm orb}\)) near the upper edge of the period gap. Such drops in brightness are categorised as a low state of type VY Scl (King and Cannizzo, 1998) and can also be displayed by magnetic CVs. VY Scl low states are likely caused by the reduction or the complete cessation of mass transfer in the system, which significantly decreases the flux coming from the disc. They are believed to be associated with the magnetic activity of the secondary. Star spots emerging near the first Lagrangian point may suppress mass transfer in the system (Livio and Pringle, 1994), and the radius of the secondary itself can be affected by magnetic activity (Howell, 2004). Yet, the exact mechanism of mass-transfer cessation during VY Scl episodes remains unknown.
Apart from low states and other long-term trends in brightness, CVs display an abundance of photometric variability on shorter time-scales. Roche-lobe geometry requires that the secondary takes a characteristic teardrop-like shape. As it orbits the barycentre, it presents different projections of itself to the observer, which introduces a photometric variability of period \(P_{\rm orb}/2\). A similar effect can occur when the secondary is strongly irradiated by the accretion disc. In that case, the visibility of the irradiated side of the secondary is dependent on the orbital phase of the system, and a light-curve modulation of period \(P_{\rm orb}\) takes place.
Some CVs exhibit variations in brightness that have periods slightly shorter or slightly longer than \(P_{\rm orb}\). These variations are called "superhumps" and are believed to be caused by a precessing accretion disc. Superhumps can be of either positive (pSH) or negative (nSH) type, depending on the sign of \(P_{\rm SH}-P_{\rm orb}\). They are well-studied and commonly seen in SU UMa stars, a DN subclass (e.g. Kato et al., 2009, 2017); as well as in NLs (Bruch, 2023). For NLs in particular, Bruch gave a sample of 46 systems, 13 have had
pSHs, 16 have had nSHs and 17 have had superhumps of both types at some point in the past (but not necessarily at the same time).
Each superhump type is associated with processes of different nature. pSHs are believed to be caused by an apsidally precessing accretion disc. In this case, the 3:1 resonance induces tidal deformations, the heat from which causes periodic changes in disc brightness (Whitehurst, 1988; Hirose & Osaki, 1990; Lubow, 1991). On the other hand, nSHs can be explained by a retrograde nodal precession of a tilted accretion disc. The tilt allows the mass-transfer stream to go deeper in the gravitational well of the primary, and thus to release more energy upon impact. The point of impact on the disc is commonly referred to as the "bright spot". The sweeping of the bright spot across the disc faces introduces an additional photometric variability that has a period equal to the beating of \(P_{\rm orb}\) and the disc precession period \(P_{\rm prec}\)(Wood et al., 2009; Montgomery, 2009), i.e.
\[\frac{1}{P_{\rm nSH}}=\frac{1}{P_{\rm orb}}+\frac{1}{P_{\rm prec}}. \tag{1}\]
Superhumps of both types can be used to estimate some physical properties of these systems. The nSH deficit \(\varepsilon_{-}\) is defined as
\[\varepsilon_{-}=\frac{P_{\rm nSH}-P_{\rm orb}}{P_{\rm orb}} \tag{2}\]
and has been shown to correlate with the mass ratio of the system \(q=M_{2}/M_{1}\) in several works (e.g. Wood et al., 2009; Montgomery, 2009). A detailed study of nSHs can be found in Kimura et al. (2020); Kimura & Osaki (2021), where Kepler photometry of the NL system KIC 9406652 was analysed. In the light curve of this particular object, \(P_{\rm orb}\) and \(P_{\rm nSH}\) signals were identified, as well as superorbital ones (i.e. \(P_{\rm prec}\) signatures).
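As a quick numerical illustration of Equations (1) and (2), the short sketch below (Python; variable names are ours) uses the HBHA 4204-09 periods from Table 1 and recovers the superorbital period and the nSH deficit quoted there.

```python
# Consistency check of Eqs. (1) and (2) for HBHA 4204-09 (periods from Table 1).
P_orb = 0.14128   # orbital period [d]
P_nsh = 0.13657   # negative-superhump period [d]

# Eq. (1): 1/P_nSH = 1/P_orb + 1/P_prec  ->  P_prec = 1 / (1/P_nSH - 1/P_orb)
P_prec = 1.0 / (1.0 / P_nsh - 1.0 / P_orb)
print(f"P_prec = {P_prec:.2f} d")   # ~4.1 d, compare 4.11(18) d in Table 1

# Eq. (2): negative-superhump deficit
eps = (P_nsh - P_orb) / P_orb
print(f"eps_- = {eps:.4f}")         # ~ -0.033, |eps_-| matches 0.0333(22)
```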
In this work, we present our results from a search for nSHs in poorly studied CVs that are similar to KIC 9406652. Section 2 presents our methods for searching and data reduction, and gives a list of objects with discovered nSH signatures. Section 3 contains a literature review and discussion of each system we found to have nSH behaviour. In Section 4, we attempt to estimate some physical parameters in said systems, and in Section 5, we summarise the findings of this work.
## 2 Analysis
### Data from TESS
The Transiting Exoplanet Survey Satellite (TESS; Ricker et al., 2015) mission is an all-sky survey in the red-infrared that continues to provide with long-term measurements of remarkable photometric precision. The TESS Science Processing Operations Center pipeline (SPOC; Jenkins et al., 2016) offers light curves from two different reduction techniques: Simple Aperture Photometry (SAP) and Pre-Search Data Conditioning Simple Aperture Photometry (PDCSAP). A comprehensive comparison between the two is given in Kinemuchi et al. (2012). PDCSAP tries to reduce effects of instrumental nature, but can sometimes introduce systematics in periodograms, and analysis should proceed with care. Bruch (2022) found in particular that the additional conditioning in PDCSAP may distort DNe light curves, and chose to use the simpler SAP technique in order to search for periodic variations in CVs. We use SAP light curves too in all analysis to follow.
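The SPOC light curves used in this work can be retrieved programmatically. The following minimal sketch assumes the lightkurve package and its search_lightcurve/download interface; the target and sector are illustrative, and this is not the exact pipeline employed here.

```python
# Minimal sketch: fetch a TESS SPOC 120-s light curve and keep the SAP flux.
# Assumes the `lightkurve` package; target and sector below are illustrative.
import numpy as np
import lightkurve as lk

search = lk.search_lightcurve("KQ Mon", mission="TESS", author="SPOC",
                              exptime=120, sector=34)
lc = search.download()                      # SPOC product with SAP and PDCSAP columns

time = lc.time.value                        # BTJD [d]
sap = lc["sap_flux"].value                  # simple aperture photometry [e-/s]
good = np.isfinite(sap)
time, sap = time[good], sap[good]
sap -= sap.mean()                           # residual (mean-subtracted) flux, as in Figs. 2-9
```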
### Photometric features of tilted accretion discs
Negative superhumps are direct evidence for a tilted accretion disc, but finding their signatures is only possible in systems of known \(P_{\rm orb}\). This is a strong restriction, since not many CVs have had their orbital periods measured. To expand the population of stars with known \(P_{\rm orb}\), we searched for systems with several significant peaks in the power spectrum, in a frequency region above the period gap. In the case of two neighbouring prominent peaks, it could be that those are signatures of \(P_{\rm pSH}\), \(P_{\rm orb}\) and not \(P_{\rm nSH},P_{\rm orb}\). Nevertheless, this degeneracy can be lifted with the following rationale.
In systems with a precessing tilted accretion disc, the disc orientation changes with respect to the secondary for different orbital phases \(\varphi_{\rm orb}\) and different disc precession phases \(\varphi_{\rm prec}\). The former is defined such that the secondary is at inferior conjunction at \(\varphi_{\rm orb}=0.0\); the latter is defined1 such that the light maximum in the disc precession cycle is at \(\varphi_{\rm prec}=0.0\). The observed irradiation of the secondary by the bright disc varies with both \(\varphi_{\rm prec}\) and \(\varphi_{\rm orb}\). Consider a non-eclipsing system at \(\varphi_{\rm orb}=0.5\) (Figure 1). The orbital plane of the system divides space into two half-spaces, one of which the observer finds themselves in. One part of the system resides in the same half-space as the observer, and the other part is in the opposite half-space (i.e. on the other side of the orbital plane with respect to the observer). We shall refer to those as the "near half-space" and the "far half-space". As an example, in Figure 1, the part of the accretion disc that lies in the near half-space is: its rear side at \(\varphi_{\rm prec}=0\), its right side at \(\varphi_{\rm prec}=0.25\) and so on.
Footnote 1: These definitions are consistent with Kimura et al. (2020).
For an observer, the near half-space of a system is more photometrically accessible than the far half-space.2 At \(\varphi_{\rm prec}=0\), the luminous disc reveals the most of itself to the observer, and the average system brightness across \(\varphi_{\rm orb}\) is the greatest. However, the irradiated region of the secondary is in the far half-space, and thus the \(\varphi_{\rm orb}\) variation in brightness is minimal in amplitude. In the opposite case of \(\varphi_{\rm prec}=0.5\), the observer sees the smallest possible projection of the disc, and the average system brightness across \(\varphi_{\rm orb}\) is the smallest - but the irradiated region of the secondary is now in the near half-space, and the \(\varphi_{\rm orb}\) variation in brightness is maximal in amplitude.
Figure 1: A model CV system at an orbital phase \(\varphi_{\rm orb}=0.5\), as it would be seen by an observer. Four precession phases \(\varphi_{\rm prec}\) of a disc with tilt \(\theta=6^{\circ}\) are illustrated. The orbital plane of the system is defined by dotted lines. It divides space into two half-spaces: the near (above the plane) and the far one (below the plane), with respect to the observer. The precession phase is defined such that the system is the brightest at \(\varphi_{\rm prec}=0\). In this orientation, the disc has the largest projected area at \(\varphi_{\rm prec}=0\). Conversely, at \(\varphi_{\rm prec}=0.5\), it has the smallest projection, but faces towards the secondary and irradiates it the most. Kimura & Osaki (2021), Figure 9 gives a full description of CV configurations in titled-disc regimes.
At the same time, nSHs introduce additional complexities in variability that need to be accounted for. Kimura and Osaki (2021) discuss this issue and carry out the following procedure. A given light curve is initially split into subsets of different time intervals. Then, for each subset, they: (1) fold by \(P_{\rm nSH}\) and construct an average light-curve profile of the nSH, (2) subtract said profile from each subset, (3) split the subset into different \(\varphi_{\rm prec}\) windows, (4) fold each window by \(P_{\rm orb}\). This technique results in multiple orbital phase curves, each corresponding to a different \(\varphi_{\rm prec}\) window. If these phase curves show a \(\varphi_{\rm prec}\)-dependent irradiation of the secondary, the system has a precessing tilted accretion disc and the observed superhump is negative. It is this consideration that could lift the pSH-nSH degeneracy in the power spectrum.
In order to address the nSH contamination, we use a variant of the nSH-subtraction technique by Kimura and Osaki (2021) with the following adjustments: all data is smoothed by a fourth-order Savitzky-Golay filter (Savitzky and Golay, 1964) of window size 10 d, and no separate subsets are considered; in (1), nSH light-curve profiles are constructed with median filters of window size 1101,3 in (3), four \(\varphi_{\rm prec}\) intervals are considered with centres at \(\varphi_{\rm prec}=0.00,0.25,0.50,0.75\) and of width 0.1.4
Footnote 3: We found that this window size worked generally well for all systems.
Footnote 4: That is, the intervals \(0.95-0.05\), \(0.20-0.30\), \(0.45-0.55\), \(0.70-0.80\).
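The adjusted procedure can be summarised in the sketch below (Python). The array names, the phase zero-points `t0_*`, and the reading of the 1101-point median filter as a running median in nSH phase are our assumptions, not part of the original prescription.

```python
# Sketch of the nSH subtraction and phase windowing used in Sect. 2.2.
# Assumes arrays `time`, `flux` (days, e-/s), the periods P_nsh, P_orb, P_prec,
# and reference epochs t0_nsh, t0_orb, t0_prec defining the phase origins.
import numpy as np
from scipy.signal import savgol_filter, medfilt

def phase_fold(time, period, t0=0.0):
    return ((time - t0) / period) % 1.0

def running_median_profile(phase, flux, kernel=1101):
    """Mean nSH profile: running median of the flux sorted by nSH phase."""
    order = np.argsort(phase)
    prof = np.empty_like(flux)
    prof[order] = medfilt(flux[order], kernel_size=kernel)
    return prof

# (0) remove the long-term trend with a 4th-order Savitzky-Golay filter (~10 d window)
cadence = np.median(np.diff(time))                  # ~120 s expressed in days
window = int(round(10.0 / cadence)) | 1             # force an odd window length
trend = savgol_filter(flux, window_length=window, polyorder=4)
resid = flux - trend

# (1)-(2) build and subtract the mean nSH profile
phi_nsh = phase_fold(time, P_nsh, t0_nsh)
resid_clean = resid - running_median_profile(phi_nsh, resid)

# (3)-(4) select four disc-precession windows of width 0.1 and fold each by P_orb
phi_prec = phase_fold(time, P_prec, t0_prec)
phi_orb = phase_fold(time, P_orb, t0_orb)
windows = {}
for centre in (0.00, 0.25, 0.50, 0.75):
    d = np.abs(((phi_prec - centre) + 0.5) % 1.0 - 0.5)   # cyclic phase distance
    sel = d < 0.05
    windows[centre] = (phi_orb[sel], resid_clean[sel])    # bin these for panels (d)-(g)
```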
### Target selection
The International Variable Star Index (VSX; Watson et al., 2006, accessed 2022 June) is perhaps the most extensive catalogue of known variable stars. We took all objects from the VSX labelled as CV or as NL (\(n=1249\)), and then sought all for which there were available TESS SPOC light curves of 120-second cadence (\(n=180\)). Lomb-Scargle periodograms (LS periodogram; Lomb, 1976; Scargle, 1982) of range between \(0.125-16.000\) d\({}^{-1}\) were constructed for those systems. Periodograms were then manually searched for the simultaneous presence of at least two neighbouring periodicities in the region above the period gap, as well as for one periodicity near their expected beat period. This was done to select NLs with signatures of all \(P_{\rm prec}\), \(P_{\rm orb}\), \(P_{\rm nSH}\). For most stars, long-term photometry from the All-Sky Automated Survey for Supernovae (ASAS-SN; Shappee et al., 2014; Kochanek et al., 2017) and from the Catalina Sky Survey (CSS, Drake et al., 2009) was available. We attempted to construct LS periodograms using photometry from said surveys, but data was found to be sparse and of too long cadence to be usable.
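A minimal version of this periodogram step, assuming astropy's LombScargle implementation and generic `time`/`flux` arrays (names ours), is sketched below.

```python
# Sketch of the periodogram search of Sect. 2.3: Lomb-Scargle power over 0.125-16 1/d.
import numpy as np
from astropy.timeseries import LombScargle

freq = np.linspace(0.125, 16.0, 200_000)        # frequency grid [1/d], heavily oversampled
power = LombScargle(time, flux).power(freq)

best_period = 1.0 / freq[np.argmax(power)]
print(f"strongest periodicity: {best_period:.5f} d")
# In practice the periodogram is inspected by eye for the three related
# peaks (P_prec, P_orb, P_nSH) rather than for a single maximum.
```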
We report on the discovery of nSH behaviour in six poorly studied CVs. Three of them are eclipsing systems, which enabled us to directly determine \(P_{\rm orb}\). For two other systems, \(P_{\rm orb}\) was identified with the use of \(\varphi_{\rm prec}\)-dependent irradiation of the secondary. The last CV was found to have \(P_{\rm prec}\) and \(P_{\rm nSH}\) signatures, but not a \(P_{\rm orb}\) one. Our derived value of \(P_{\rm orb}\) by Equation (1), however, agrees well with the spectroscopic measurement of Pretorius and Knigge (2008). All six objects are discussed individually in Section 3. During inspection, we also found five new eclipsing CVs with no superhump behaviour. Their measured orbital periods are provided in Table 10, and their orbital phase curves are shown in Figure 11.
## 3 Review and results
The following sections provide literature review, discussion and interpretation of data for all CVs with discovered nSH behaviour. Each CV system has an associated figure containing: (1) available sky-survey data, (2) TESS photometry from sectors with prominent nSH behaviour together with corresponding LS periodograms, (3) orbital phase plots of data in the four \(\varphi_{\rm prec}\) regions discussed in Section 2.2. Measured periodicities of each system are given in Table 1. All measurements agree well with Equation (1) within uncertainty.
### HBHA 4204-09
HBHA 4204-095 (Figure 2) was discovered by ASAS-SN. It was classified as a CV by Jayasinghe et al. (2018) and by ALeRCE (Forster et al., 2021) in data from the Zwicky Transient Facility (ZTF; Bellm et al., 2019). This object is part of the "Catalogue of Stars in the Northern Milky Way Having H-alpha in Emission" (Kohoutek and Wehmeyer, 1999). The Gaia DR3 distance estimate is \(478\pm 3\) pc and ASAS-SN photometry gives a mean brightness of \(m_{V}=16.19\) mag.
Footnote 5: The VSX identifier of this source is ASASSN-V J210752.24+440542.0.
We report the presence of previously unknown V-shaped eclipses in HBHA 4204-09. Using them, we identify the periodogram peaks corresponding to \(P_{\rm orb}\), \(P_{\rm nSH}\), \(P_{\rm prec}\) (Table 1). Aside from these periodicities, the power spectrum contains a strong signal at 0.070655(58) d, which matches \(P_{\rm orb}/2\). A collection of peaks at around 0.155 d is observed, which may be indicative of a pSH signature. Additional photometry of HBHA 4204-09 can be found in TESS Sectors 55 and 56, but no superhumps are present in those data sources. Due to its high orbital inclination, the near and the far half-spaces defined by the orbital plane are comparably accessible to the observer. The portion of the secondary in the far half-space is most irradiated at \(\varphi_{\rm prec}=0.00\), while the portion in the near half-space is most irradiated at \(\varphi_{\rm prec}=0.50\). The orbital profiles in panels (d) and (f) of Figure 2 show stronger secondary irradiation at the aforementioned \(\varphi_{\rm prec}\), which is expected.
### Gaia DR3 4684361817175293440
Gaia DR3 4684361817175293440, hereinafter Gaia-4684366 (Figure 3) was discovered and classified as an NL-type CV by Bajer (2019). The Gaia DR3 distance estimate is \(1062^{+59}_{-50}\) pc. On the long-term ASAS-SN curve, a 1-mag fall in brightness can be observed around BTJD 800 - BTJD 1700. A panel with ASAS-SN photometry in this time period is shown in Figure 4. The observed drop in brightness has a smaller amplitude than what is expected in classic VY Scl low states. Quasi-cyclic variations of \(P\sim 20\) d resembling Z Cam outbursts appear after the start of the low state. The system later returns to normal brightness and outbursts are replaced with a standstill lasting for \(\sim 300\) days. This standstill is followed by another Z Cam outburst episode, after which no more outbursts of this type are observed.
Our LS periodogram of Gaia-468436 shows three peaks with periods matching Equation (1). We interpret them as \(P_{\rm orb}\), \(P_{\rm nSH}\) and \(P_{\rm prec}\) in a system with a tilted precessing disc (Section 2.2). We find two additional peaks at 0.07372(14) d and 0.07702(15) d that match \(P_{\rm nSH}/2\) and \(P_{\rm orb}/2\) respectively. In Figure 3(d)-(g), it can be seen that the light maximum of orbital-phase curves gradually shifts to earlier \(\varphi_{\rm orb}\) as the disc precession cycle advances. This is direct evidence for a retrogradely precessing tilted disc (Kimura et al., 2020).
### KQ Mon
KQ Mon (Figure 5) was classified as a NL-type CV by Bond (1979) using low-resolution spectra in the optical. Its orbital period was measured in Schmidtobreick et al. (2005) to be \(P_{\rm orb}=0\aas@@fstack{\rm d}1283(17)\) by analysing two nights of time-resolved spectroscopy. Later, Wolfe et al. (2013) examined far-ultraviolet spectra of KQ Mon from the International Ultraviolet Explorer. The mass of the primary was estimated to be \(M_{1}\sim 0.6\)\(M_{\odot}\) with the use of synthetic spectra. The same work argued that the primary contributes little to the total system flux, and is overwhelmed by the flux of a steady-state accretion disc. It was concluded that the system is located at a distance of 144-159 pc, with an inclination of \(i\leq 60\degr\) and an accretion rate in the order of \(\dot{M}\sim 10^{-9}\)\(M_{\odot}\) yr\({}^{-1}\). The Gaia DR3 distance is 628\(\pm\)8 pc, which disagrees with their estimates.
Our measured value for \(P_{\rm nSH}\) matches the \(P_{\rm orb}\) given in Schmidtobreick et al. (2005). What we measure as \(P_{\rm orb}=0\aas@@fstack{\rm d}13456(40)\) would have corresponded to a pSH signal in their interpretation. But then, no other signals in the periodogram would have been expected. We, however, measure a strong third signal at \(3\aas@@fstack{\rm d}12(24)\), which is self-consistent with the other two by Equation (1). In addition, we observe a \(\varphi_{\rm prec}\)-dependent amplitude of the orbital phase curve, which could be explained by a varying irradiation of the secondary. In Figure 3 of Schmidtobreick et al. (2005), a strong aliasing pattern can be seen. The authors chose an orbital period \(P_{\rm orb}\) among four possible signals, two of which agree with our measurements of \(P_{\rm orb}\), \(P_{\rm nSH}\). With all this in mind, we think that the correct value of \(P_{\rm orb}\) is \(0\aas@@fstack{\rm d}13456(40)\), and that there is presence of a tilted accretion disc in this system.
\begin{table}
\begin{tabular}{l c c c c c c} Name & RA & Dec & TESS Sector & \(P_{\rm orb}\) & \(P_{\rm nSH}\) & \(P_{\rm prec}\) & \(|\varepsilon_{-}|\) \\ \hline HBHA 4204-09 & \(21^{\rm h}07^{\rm m}52\aas@@fstack{s}24\) & \(+44\degr 05\arcmin 42\aas@@fstack{\prime\prime}0\) & 15, 16 & \(0\aas@@fstack{\rm d}14128(22)\) & \(0\aas@@fstack{\rm d}13657(22)\) & \(4\aas@@fstack{\rm d}11(18)\) & 0.0333(22) \\ Gaia DR3 4684361817175293440 & \(00^{\rm h}49\aas@@fstack{s}593\) & \(-76\degr 08\arcmin 27\aas@@fstack{\prime\prime}5\) & 28 & \(0\aas@@fstack{\rm d}15401(53)\) & \(0\aas@@fstack{\rm d}14750(52)\) & \(3\aas@@fstack{\rm d}40(27)\) & 0.0423(48) \\ KQ Mon & & \(0\aas@@fstack{\rm d}1321\aas@@fstack{s}13\) & \(-10\arcmin 21\aas@@fstack{s}49\) & 34 & \(0\aas@@fstack{\rm d}13456(40)\) & \(0\aas@@fstack{\rm d}12894(38)\) & \(3\aas@@fstack{\rm d}12(22)\) & 0.0418(41) \\ SDSS 0909113.51+144704.6 & \(0^{\rm h}011^{\rm m}15\aas@@fstack{s}51\) & \(+14\degr 47\degr 04\arcmin 47\) & \(44-46\) & \(0\aas@@fstack{\rm d}1463(17)\) & \(0\aas@@fstack{\rm d}13991(17)\) & \(3\aas@@fstack{\rm d}198(70)\) & 0.0437(16) \\ Gaia DR3 5931071148325476992 & \(10^{\rm h}36\aas@@fstack{s}03\aas@@fstack{s}63\) & \(-52\degr 33\degr 2\aas@@fstack{\prime\prime}6\) & 39 & \(0\aas@@fstack{\rm d}14827(46)\) & \(0\aas@@fstack{\rm d}14248(43)\) & \(3\aas@@fstack{\rm d}57(30)\) & 0.0391(42) \\
[PK2008] HalphalJ103959 & \(10^{\rm h}39\aas@@fstack{s}59\) & \(-47\degr 01\arcmin 26\arcmin 3\) & 36, 37 & \(0\aas@@fstack{\rm d}1577(2)\) & \(0\aas@@fstack{\rm d}15285(29)\) & \(4\aas@@fstack{\rm d}94(26)\) & 0.0308(22) \\ \hline \end{tabular} \({}^{\dagger}\)Orbital period measured spectroscopically by Pretorius & Knigge (2008).
\end{table}
Table 1: List of CVs with discovered nSHs using the methods described in Section 2. All periodicities in this table were measured on a Lomb-Scargle periodogram of range 0.125–16 d\({}^{-1}\) and of ten-fold oversampling. All measured \(P_{\rm prec}\) in this table agree within uncertainty with the expected values by Equation (1) using measured \(P_{\rm orb}\) and \(P_{\rm nSH}\). Equatorial coordinates are from Gaia DR3 and are given in the J2000 epoch.
Figure 2: Photometry and analysis of HBHA 4204-09. (a) Long-term photometry from: ASAS-SN \(\delta\) (purple downward triangles), ASAS-SN \(V\) (green upward triangles). Temporal coverage of TESS: Sector 15 (light blue), Sector 16 (yellow). (b) Residual (mean-subtracted) SAP flux from TESS data. (c) Associated LS periodogram of (b), with indications of \(P_{\rm prec}\), \(P_{\rm orb}\), \(P_{\rm nSH}\) signals from Table 1 (blue dashes). (d)–(g) Binned orbital profiles around disc precession phases \(\varphi_{\rm prec}=0.00,0.25,0.50,0.75\) with nSH subtraction (blue squares) and without one (black circles). The typical standard deviation of data in bins is 25 e\({}^{-}\) s\({}^{-1}\). This system has a high inclination, and the parts of the secondary in both half-spaces are accessible to the observer. The secondary is expected to be the most irradiated in panels (d) and (f), where the observed out-of-eclipse profile is non-flat. In panels (e) and (g), where no irradiation of the secondary is expected, the out-of-eclipse profile is mostly flat.
### SDSS J090113.51+144704.6
SDSS J090113.51+144704.6, hereinafter SDSS-090113 (Figure 6) first appeared in Szkody et al. (2009) where it was classified as a CV due to accretion disc features in its spectrum. This system was included in the catalogue of bright WDs of Raddi et al. (2017). Gaia DR3 estimated the distance to SDSS-090113 to be \(1482^{+100}_{-116}\) pc. Later, Mosenelchner et al. (2022) included this system in their time-series analysis study of subdwarf A-type stars using Kepler K2 data. They discovered a periodicity of \(0\aas@@fstack{\prime}146\), which was suggested to be the orbital period \(P_{\rm orb}\).
SDSS-090113 has no recorded low states and its brightness varies around \(m_{V}=16.2\) mag. Between BTJD 1600 and BTJD 2300, we recognise an episode of anomalous Z Cam-type outbursts repeating once about every 25 days (Figure 7). These outbursts begin after a brightening, which is one of the defining features of IW And-type systems (Kato, 2019). This can be explained by a tilted disc that causes the accretion stream to enter inner disc regions, and thus to disrupt the accretion cycle. In this new type of accretion, the inner disc is in a hot state, while the outer disc repeats outbursts (Kimura et al., 2020).
Figure 6(d)-(g) shows what seems to be the presence of grazing eclipses in the orbital curve of SDSS-090113. They vary in depth and width, and for some phases of \(P_{\rm prec}\) they disappear completely, similar to ES Dra (Kato, 2022). Through them, we identify \(P_{\rm orb}\), \(P_{\rm nSH}\), \(P_{\rm prec}\) (Table 1). Two additional peaks are found at 0.071531(52) d and 0.073161(40) d that match \(P_{\rm nSH}/2\) and \(P_{\rm orb}/2\) respectively.
Figure 4: Z Cam-like episodes of Gaia-468436 in ASAS-SN \(g\) (blue downward triangles) and ASAS-SN \(V\) (green upward triangles). The Z Cam behaviour begins after a \(\sim 0.5\) mag fall in brightness. Emerging oscillations have a variable amplitude of \(\sim 0.8\) mag and are quasi-periodic with \(P\sim 20\) days. At about BTJD 1500, a brightening takes place, which is followed by a standstill at a level of 15.0 mag. At about BTJD 1760, the standstill is replaced with another oscillatory episode that has outbursts of similar period and amplitude as the former ones. No more Z Cam episodes were observed in Gaia-468436 in this data.
Figure 3: Photometry and analysis of Gaia-468436. (a) Long-term photometry from: ASAS-SN \(g\) (purple downward triangles), ASAS-SN \(V\) (green upward triangles). Temporal coverage of TESS: Sector 28 (light blue). (b) Residual (mean-subtracted) SAP flux from TESS data. (c) Associated LS periodogram of (b), with indications of \(P_{\rm prec}\), \(P_{\rm orb}\), \(P_{\rm sBH}\) signals from Table 1 (blue dashes). (d)–(g) Binned orbital profiles around disc precession phases \(\varphi_{\rm prec}=0.00\), \(0.25\), \(0.50\), \(0.75\) with nSH subtraction (blue squares) and without one (black circles). The typical standard deviation of data in bins is 4 e\({}^{-}\)s\({}^{-1}\). The effect of variable irradiation in panels (d)–(g) is similar to the one observed in KIC 9406652 (Kimura and Osaki, 2021).
### Gaia DR3 5931071148325476992
Gaia DR3 5931071148325476992, hereinafter Gaia-5931077 (Figure 8) is a poorly studied CV that was discovered in plates by Prestgard (2020) from the Digitized Sky Survey8 and the SuperCOSMOS H\(\alpha\) survey (Parker et al., 2005). The NOMAD catalogue (Zacharias et al., 2004) gives an apparent magnitude of \(m_{V}=16\) mag. No ASAS-SN photometry is available for this system. Its TESS brightness reads \(m_{\rm TESS}=16.02\) mag. There is an X-ray source (1RXS J163605.9-523335) at a distance of 20.8 arcsec, which is likely associated with Gaia-593107. In addition, we find two bright sources of brightness \(m_{\rm TESS}=13.44\) and \(m_{\rm TESS}=15.16\) mag in the aperture mask, that are expected to severely contaminate the light curve. This issue, however, is resolved by the apparent variability of Gaia-593107 in the discovery images9 of Prestgard, on the basis of which we attribute the tilted-disc behaviour to this specific system.
Footnote 7: The VSX identifier of this source is USNO-A2.0 0300-28957281.
Footnote 8: ESO Online Digitized Sky Survey: [http://archive.eso.org/dss/dss](http://archive.eso.org/dss/dss) (accessed 2022 October).
Our analysis of TESS light curves shows Gaia-593107 to be an eclipsing variable with an orbital period of \(P_{\rm orb}=0.14827(46)\) d. A peak at \(P_{\rm orb}/2\) is present as well. This allows us to locate \(P_{\rm orb}\), \(P_{\rm nSH}\) and \(P_{\rm prec}\) (Table 1). A change in eclipse depth is observed in different phases of the determined \(\varphi_{\rm prec}\).
### [PK2008] HalphaJ103959
[PK2008] HalphaJ103959, hereinafter PK-103959 (Figure 9) was classified as a CV in Pretorius & Knigge (2008), where spectroscopic and photometric analyses of the system were carried out. The orbital period of PK-103959 was measured to be \(P_{\rm orb}=0.1577(2)\) d in the same work. Catalina and ASAS-SN photometry has a mean brightness of \(m_{V}=\)15.7 mag, with no low states. A gradual increase in brightness can be seen in the period between -1000 and 2000 BTJD.
We find signatures of \(P_{\rm nSH}\) and \(P_{\rm prec}\) (Table 1), but no peaks at the \(P_{\rm orb}\) by Pretorius & Knigge. However, there are two other visible peaks at 0.07885(14) d and 0.07639(13) d, which correspond to \(P_{\rm orb}/2\) by Pretorius & Knigge and \(P_{\rm nSH}/2\) respectively.
## 4 Discussion
### Mass-ratio estimates
By using smoothed particle hydrodynamic (SPH) simulations of tilted accretion discs, Wood et al. (2009) found that the relation between the mass ratio and the nSH deficit is well-represented by
\[q(\varepsilon_{-})=-0.192|\varepsilon_{-}|^{0.5}+10.37|\varepsilon_{-}|-99.83| \varepsilon_{-}|^{1.5}+451.1|\varepsilon_{-}|^{2}. \tag{3}\]
This result has been supported by other works using related SPH simulations (Montgomery, 2009a; Thomas & Wood, 2015). To compare Equation (3) with observations, we searched for NL objects in
Figure 5: Photometry and analysis of KQ Mon. (a) Long-term photometry from: ASAS-SN \(g\) (purple downward triangles), ASAS-SN \(V\) (green upward triangles). Temporal coverage of TESS: Sector 34 (light blue). (b) Residual (mean-subtracted) SAP flux from TESS data. (c) Associated LS periodogram of (b), with indications of \(P_{\rm prec}\), \(P_{\rm orb}\), \(P_{\rm aSH}\) signals from Table 1 (blue dashes). (d)–(g) Binned orbital profiles around disc precession phases \(\varphi_{\rm prec}=0.00\), \(0.25\), \(0.50\), \(0.75\) with nSH subtraction (blue squares) and without one (black circles). The typical standard deviation of data in bins is 16 e\({}^{-}\)s\({}^{-1}\). The blue and the black curves differ due to the significant nSH contribution to the observed system flux. In blue curves of different \(\varphi_{\rm prec}\), there seems to be a change of shape and amplitude near \(\varphi_{\rm orb}=0.5\), which is expected, but could be also due to noise. Kimura & Osaki (2021) provide models for such orbital curves that could explain these observations.
literature for which superhump deficits and mass ratios were measured independently from one another. Our reasoning is that NLs share three main similarities with our discovered CVs: (1) their \(P_{\rm orb}\) are of the same order, (2) they have steady, hot and luminous discs, (3) samples of both populations exhibit VY-Sel behaviour. Using the sample of NLs with nSH signatures, given in Bruch (2023), we were able to find twelve such objects, which we list in Table 2. Figure 10(a) compares their measurements with the \(q(\varepsilon_{-})\) relations provided by Montgomery (2009a) and Wood et al. (2009). We share the concern that both works underestimate \(q(\varepsilon_{-})\) with respect to past measurements in literature.
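For reference, Equation (3) is trivial to evaluate numerically; the sketch below (Python, names ours) applies it to the HBHA 4204-09 deficit from Table 1 and returns \(q\approx 0.2\).

```python
# Eq. (3): the SPH-calibrated q(eps_-) relation of Wood et al. (2009),
# evaluated for HBHA 4204-09 (|eps_-| = 0.0333 from Table 1).
def q_wood(eps_abs):
    e = abs(eps_abs)
    return -0.192 * e**0.5 + 10.37 * e - 99.83 * e**1.5 + 451.1 * e**2

print(f"q = {q_wood(0.0333):.3f}")   # ~0.20
```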
There exists a different approach that could estimate mass ratios using nSHs. By making some assumptions, a \(q(\varepsilon_{-})\) relation can be derived in the following manner. Through linear perturbation theory, Papaloizou and Terquem (1995) derived the precession rate \(\omega_{\rm prec}\) of a differentially rotating fluid disc with a mass profile \(\Sigma(r)\):
\[\omega_{\rm prec}=-\frac{3}{4}\frac{GM_{2}}{a^{3}}\frac{\int\Sigma r^{3}{\rm d }r}{\int\Sigma\Omega r^{3}{\rm d}r}\cos\theta, \tag{4}\]
where \(a\) is the orbital separation, \(\Omega(r)=\sqrt{GM_{1}/r^{3}}\) is the Keplerian angular velocity profile of the disc, and \(\theta\) is the disc tilt with respect to the orbital plane. For a power-law mass profile \(\Sigma(r)\propto r^{n}\), Osaki
Figure 6: Photometry and analysis of SDSS-090113. (a) Long-term photometry from: ASAS-SN \(g\) (purple downward triangles), ASAS-SN \(V\) (green upward triangles), Catalina \(V\) (pink squares). Temporal coverage of TESS: Sector 44 (light blue), Sector 45 (yellow), Sector 46 (light blue). (b) Residual (mean-subtracted) SAP flux from TESS data. (c) Associated LS periodogram of (b), with indications of \(P_{\rm prec}\), \(P_{\rm orb}\), \(P_{\rm sH}\) signals from Table 1 (blue dashes). Data from (b) was smoothed by a fourth-order Savitzky-Golay filter of window size 10 d before constructing the periodogram. This was done solely for the sake of clear identification of \(P_{\rm prec}\) by the reader, and not for periodicity measurements. (d)–(g) Bimed orbital profiles around disc precession phases \(\varphi_{\rm prec}=0.00\), 0.25, 0.50, 0.75 with nSH subtraction (blue squares) and without one (black circles). The typical standard deviation of data in bins is 4 e\({}^{-}\) s\({}^{-}\). SDSS-090113 appears to have shallow grazing eclipses that are barely detectable for some \(\psi_{\rm prec}\). Observed eclipses vary in depth and width for different \(\psi_{\rm prec}\). This could be explained by a secondary that partially covers the tilted disc only when the projected area of the disc is large.
Figure 7: IW And episodes of SDSS-090113 in ZTF \(g\) (teal circles) and ZTF \(r\) (magenta diamonds). Three seasons of photometry are shown. The first season starts with oscillations that are terminated by brightening. This is one of the defining features of the IW And phenomenon (Kato, 2019; Kato et al., 2022). The second season shows the beginning of a new oscillatory episode that is variable in amplitude. The episode continues in the third season and abruptly ends at BTJD 2315.
& Kato (2013) derived that
\[\frac{\nu_{\rm prec}}{\nu_{\rm orb}}=-\frac{3}{4}\frac{2.5+n}{4+n}\frac{q}{\sqrt{1 +q}}\left(\frac{R_{d}}{a}\right)^{1.5}\cos\theta, \tag{5}\]
where \(\nu=\omega/2\pi\) and \(R_{d}\) is the disc radius.10 Using Equations (1), (2) and a mass profile of a steady-state disc given by \(n=-0.75\)(Shakura & Sunyaev, 1973), Equation (5) can be reduced to
Footnote 10: We note that precession is retrograde, which implies \(\omega_{\rm prec}\), \(\nu_{\rm prec}<0\).
\[\frac{\varepsilon_{-}}{1+\varepsilon_{-}}=-\frac{21}{52}\frac{q}{\sqrt{1+q}} \left(\frac{R_{d}}{a}\right)^{1.5}\cos\theta. \tag{6}\]
A similar derivation can be found in Montgomery (2009b). This shows that \(\varepsilon_{-}\) depends on three parameters: the mass ratio \(q\), the disc tilt \(\theta\) and the fractional disc radius \(R_{d}/a\). The third can be reasoned to be a function of \(q\) as follows. Suppose that in our systems with discovered nSHs, accretion discs are in steady state most of the time. Then, \(R_{d}\) approaches the tidal truncation radius \(r_{\rm tidal}\). Paczynski (1977, Table 1) provided a functional dependence \(r_{\rm tidal}(q)\). Later, Warner (1995) proposed the approximation
\[r_{\rm tidal}=\frac{0.60a}{1+q} \tag{7}\]
for \(0.03<q<1\). This is a good approximation in all regions but near \(q=0.7\), where \(r_{\rm tidal}(q)\) is underestimated. Using it would reduce Equation (6) to
\[\frac{\varepsilon_{-}}{1+\varepsilon_{-}}=-\frac{0.188q}{(1+q)^{2}}\cos\theta, \tag{8}\]
which does not describe well observational data for \(q>0.4\) (see dotted line in Figure 10(b)). Because of this, we do not use Equation (8). Instead, we linearly interpolate between data given in Paczynski (1977, Table 1) in order to evaluate \(R_{d}/a\) in Equation (6).
The other independent variable in Equation (6) is the disc tilt \(\theta\). Smak (2009) predicts that disc tilts should not exceed \(\theta_{\rm max}=7^{\circ}\) for CVs. In their photometric analysis of KIC 9406652, Kimura et al. (2020b) concluded that \(\theta\) varies between \(0\)-\(3^{\circ}\) over the course of 1500 days. Such a range of \(\theta\) allows the assumption \(\cos\theta\simeq 1\), which is accurate to within one per cent. This motivates us to compute a \(q(\varepsilon_{-})\) curve for \(\cos\theta=1\) and compare it against measurements of Table 2 objects (Figure 10(b); Table 2). The \(q(\varepsilon_{-})\) relation becomes two-fold degenerate in \(q\) from about \(|\varepsilon_{-}|>0.048\).
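The numerical evaluation of Equation (6) is straightforward; the sketch below (not taken from the paper's analysis code) shows one way to compute \(\varepsilon_{-}(q)\) for \(\cos\theta=1\) and to invert the relation on a grid. The tabulated \(r_{\rm tidal}/a\) support points are placeholders standing in for Paczynski (1977, Table 1), and the `scale` factor mirrors the \(R_{d}=0.9\), \(1.0\), \(1.1\,r_{\rm tidal}\) cases considered later.

```python
import numpy as np

# Placeholder support points for r_tidal/a as a function of q; replace with the
# values from Paczynski (1977, Table 1) before any real use.
Q_TAB = np.array([0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 1.0])
RTID_TAB = np.array([0.57, 0.55, 0.52, 0.49, 0.46, 0.44, 0.41, 0.38])

def r_disc_over_a(q, scale=1.0):
    """Fractional disc radius R_d/a = scale * r_tidal(q)/a (linear interpolation)."""
    return scale * np.interp(q, Q_TAB, RTID_TAB)

def eps_minus(q, scale=1.0, cos_theta=1.0):
    """nSH deficit from Equation (6): eps/(1+eps) = x  =>  eps = x/(1-x)."""
    x = -(21.0 / 52.0) * q / np.sqrt(1.0 + q) * r_disc_over_a(q, scale) ** 1.5 * cos_theta
    return x / (1.0 - x)

def q_from_eps(eps, scale=1.0):
    """Grid inversion of eps_minus; may return two roots on the degenerate branch."""
    qs = np.linspace(0.03, 1.0, 2000)
    es = eps_minus(qs, scale)
    crossings = np.where(np.diff(np.sign(es - eps)) != 0)[0]
    return qs[crossings]

print(q_from_eps(-0.035))   # mass ratio(s) consistent with a 3.5 per cent deficit
```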
Figure 8: Photometry and analysis of Gaia-593107. (a) Residual (mean-subtracted) SAP flux from TESS data. (b) Associated LS periodogram of (a), with indications of \(P_{\rm prec}\), \(P_{\rm orb}\), \(P_{\rm nSH}\) signals from Table 1 (blue dashes). (c)–(f) Binned orbital profiles around disc precession phases \(\varphi_{\rm prec}=0.00\), \(0.25\), \(0.50\), \(0.75\) with nSH subtraction (blue squares) and without one (black circles). The typical standard deviation of data in bins is \(8\) e\({}^{-}\)s\({}^{-1}\). Similar to the other eclipsing binaries in our sample, the secondary is expected to be the most irradiated in panels (c) and (e), where the observed out-of-eclipse profile is non-flat. In panels (d) and (f), these profiles are mostly flat, and the system brightness does not seem to increase near \(\varphi_{\rm orb}=0.5\).
Figure 9: Photometry and analysis of PK-103959. (a) Long-term photometry from: ASAS-SN \(g\) (purple downward triangles), ASAS-SN \(V\) (green upward triangles), Catalina \(V\) (pink squares). Temporal coverage of TESS: Sector 36 (light blue), Sector 37 (yellow). (b) Residual (mean-subtracted) SAP flux from TESS data. (c) Associated LS periodogram of (b), with indications of \(P_{\rm prec}\), \(P_{\rm orb}\), \(P_{\rm nSH}\) signals from Table 1 (blue dashes).
### Orbital inclination estimates
The eclipse width at half depth \(\Delta\varphi_{\rm orb}\) is a reasonable measure of the primary-eclipse duration under the assumption of an axisymmetric disc (Warner, 1995, Section 2.6.2). This duration can be used to determine the orbital inclination \(i\) by the relationship
\[\sin^{2}i\approx\frac{1-R_{L}(2)^{2}}{\cos^{2}(2\pi\varphi_{p})}, \tag{9}\]
where \(R_{L}(2)\) is the Roche-lobe radius of the secondary star in units of the orbital separation, and \(\pm\varphi_{p}\) are the phases of mid-immersion and mid-emergence of the primary star (Horne, 1985; Warner, 1995).11 The Roche-lobe radius approximation by Eggleton (1983)
Footnote 11: From here, \(\varphi_{p}\equiv\Delta\varphi_{\rm orb}/2\).
\[R_{L}(2)=\frac{0.49q^{2/3}}{0.6q^{2/3}+\ln(1+q^{1/3})} \tag{10}\]
can be used to substitute \(R_{L}(2)\) in Equation (9), such that the relation can retrieve \(i\) using only \(\Delta\varphi_{\rm orb}\) and the superhump-derived \(q\). We use this relation to constrain \(i\) for HBHA 4204-09, SDSS-090113 and Gaia-593107, which are eclipsing systems. We measured \(\Delta\varphi_{\rm orb}\) on nSH-subtracted data.
HBHA 4204-09 has a deep eclipse, with \(\Delta\varphi_{\rm orb}=0.049(4)\). Using our value of \(q\approx 0.29(3)\), we compute \(i=77(1)^{\circ}\). On the other hand, SDSS-090113 shows grazing eclipses of variable depth and width that disappear completely for some \(\varphi_{\rm prec}\). For this reason, we can assume \(\Delta\varphi_{\rm orb}\approx 0\). With our value of \(q\approx 0.47(4)\), we compute \(i=71.6(4)^{\circ}\). For the last eclipsing system in our sample, Gaia-593107, we measure \(\Delta\varphi_{\rm orb}=0.06(2)\). This eclipse width, combined with \(q\approx 0.38(8)\), gives an orbital inclination \(i=76(3)^{\circ}\).
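For reference, the inclination estimate of Equations (9) and (10) is easy to reproduce; the following sketch (which ignores uncertainty propagation) recovers a value close to the quoted \(i=77(1)^{\circ}\) for HBHA 4204-09 when fed \(\Delta\varphi_{\rm orb}=0.049\) and \(q=0.29\).

```python
import numpy as np

def roche_lobe_radius(q):
    """Eggleton (1983) Roche-lobe radius of the secondary in units of a, Eq. (10)."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + np.log(1.0 + q ** (1.0 / 3.0)))

def inclination(delta_phi_orb, q):
    """Orbital inclination in degrees from Eq. (9), with phi_p = delta_phi_orb / 2."""
    phi_p = delta_phi_orb / 2.0
    rl2 = roche_lobe_radius(q)
    sin2_i = (1.0 - rl2 ** 2) / np.cos(2.0 * np.pi * phi_p) ** 2
    return np.degrees(np.arcsin(np.sqrt(sin2_i)))

print(inclination(0.049, 0.29))   # ~76-77 degrees for HBHA 4204-09
```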
## 5 Conclusions
In this work we present results from a search for nSHs in poorly studied CVs. We initially cross-matched TESS light-curve data with the VSX catalogue for objects labelled as CV or NL. We manually inspected LS periodograms of objects from this query (\(n=180\)), and then selected targets with at least two neighbouring periodicities above the period gap, including one large-period signal that matches the beating of the former two. This resulted in six systems with nSHs, which we list in Table 1.
Spectroscopic measurements of \(P_{\rm orb}\) were available for only one of the six aforementioned systems. For the rest, we used a couple of methods to recognise \(P_{\rm orb}\) signatures in their LS periodograms. Three systems had their orbital period determined by the presence of eclipses. For the last two, \(P_{\rm orb}\) was identified using the \(\varphi_{\rm prec}\)-dependent irradiation of the secondary caused by the precessing tilted disc (see Section 2.2). For all systems, the light maximum in the orbital phase profile was found to shift to earlier \(\varphi_{\rm orb}\) as \(\varphi_{\rm prec}\) advances. This is strong evidence of nSHs (Kimura et al., 2020; Kimura and Osaki, 2021) and supports our findings. For SDSS-090113, a peculiar behaviour was observed in ZTF photometry (Figure 7), which is similar to what is observed in IW And stars (Kato, 2019). For Gaia-46843, two Z Cam-like episodes in ASAS-SN data were found, which take place after drops in brightness of about 0.5 mag (Figure 4).
Determined periodicities from TESS photometry can constrain some physical parameters of CV systems. The dependence between the nSH deficit \(\varepsilon_{-}\) and the system mass ratio \(q\) has been explored in several works already (e.g. Wood et al., 2009; Montgomery, 2009a). Referenced \(q(\varepsilon_{-})\) relations, however, underestimate independent measurements of \(q\) and \(\varepsilon_{-}\), which gives some ground for concern (Figure 10(a)). We tried to come to a better \(q(\varepsilon_{-})\) relation by using the precession rate of a differentially rotating steady-state disc that extends to the maximum tidal truncation radius \(r_{\rm tidal}\). This, in combination with the \(r_{\rm tidal}(q)\) relation given in Paczynski (1977, Table 1), resulted in better agreement with independent observations (Figure 10(b)). Mass-ratio estimates using this method are given in Table 3 for different disc radii \(R_{d}=0.9r_{\rm tidal},1.0r_{\rm tidal},1.1r_{\rm tidal}\). For the three systems that are eclipsing, we used the eclipse-geometry relations described by Horne (1985) and Warner (1995) to compute the orbital inclination \(i\) with our values of \(q\).
There are several subtle points that bear discussion. SAP light curves from TESS-SPOC may contain instrumental effects that could affect LS periodogram measurements; and photometry itself may be contaminated by nearby sources. To address the former, we repeated our methods on PDCSAP data, and found small differences in comparison to Table 1 measurements. Regarding the latter, we used mean-subtracted fluxes in all analysis, which mitigates effects by non-variable contaminating objects. None of our systems have bright sources in their vicinity, except the case of Gaia-593107, which is discussed in Section 3.5.
Our variant of the nSH subtraction method of Kimura and Osaki (2021) shares the same shortcomings. If the nSH profile is time-dependent, it cannot be fully subtracted. In addition, the mass-transfer stream could obstruct some parts of the disc, causing the light maximum to occur at earlier orbital phases \(\varphi_{\rm orb}\) (Kimura et al., 2020). Inhomogeneities in the stellar surface brightness of the secondary can produce similar effects, shifting the light maximum to earlier or to later \(\varphi_{\rm orb}\).
The technique of using irradiation of the secondary in order to determine \(P_{\rm orb}\) is entirely based on photometry, and spectroscopic measurements of \(P_{\rm orb}\) could support its feasibility. In addition, radial-velocity analysis would put constraints on mass ratios, and would test the \(q(\varepsilon_{-})\) relation we consider in Section 4.1. The newly discovered systems in this work are therefore strongly encouraged for follow-up spectroscopic observations.
## Acknowledgements
This work includes data collected by the TESS mission and made use of lightkurve, a Python package for Kepler and TESS data analysis (Lightkurve Collaboration et al., 2018). Funding for the TESS mission is provided by NASA's Science Mission Directorate. This work has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology.
The CSS survey is funded by the National Aeronautics and Space Administration under Grant No. NNG05GF22G issued through the Science Mission Directorate Near-Earth Objects Observations Program. The CRTS survey is supported by the U.S. National Science Foundation under grants AST-0909182 and AST-1313422.
We used the following Python packages for data analysis and visualisation: NumPy (Harris et al., 2020), SciPy (Virtanen et al., 2020), pandas (McKinney, 2010; The Pandas Development Team, 2022), Matplotlib (Hunter, 2007) and uncertainties (Lebigot, 2010).
We are grateful to R. K. Zamanov and to A. A. Kurtenkov for their advice during the preparation of this work. We thank the anonymous referee for their time and their effort. We acknowledge the grants KII-06-H28/2 and KII-06-M58/2 from the Bulgarian National Science Fund. Both authors contributed equally to this work. It is appreciated that our last names considerably simplified the issue of author ordering.
## Data Availability
This work contains publicly available data from the sky surveys TESS, ASAS-SN, CSS, CRTS, and ZTF, all of which can be found in their corresponding databases.
|
2307.14226 | Explore the possibility of advancing climate negotiations on the basis
of regional trade organizations: A study based on RICE-N | Climate issues have become more and more important now. Although global
governments have made some progress, we are still facing the truth that the
prospect of international cooperation is not clear at present. Due to the
limitations of Integrated Assessment Models (IAMs), it is difficult
to simulate the dynamic negotiation process. Therefore, using deep learning to
build a new agent-based model (ABM) might provide new theoretical support
for climate negotiations. Building on the RICE-N model, this work proposed an
approach to climate negotiations based on existing trade groups. Simulation
results show that the scheme has a good prospect. | Wubo Dai | 2023-07-26T14:48:25Z | http://arxiv.org/abs/2307.14226v1 | Explore the possibility of advancing climate negotiations on the basis of regional trade organizations: A study based on RICE-N
###### Abstract
Climate issues have become more and more important now. Although global governments have made some progress, we are still facing the truth that the prospect of international cooperation is not clear at present. Due to the limitations of Integrated Assessment Models (IAMs), it is difficult to simulate the dynamic negotiation process. Therefore, using deep learning to build a new agent-based model (ABM) might provide new theoretical support for climate negotiations. Building on the RICE-N model, this work proposed an approach to climate negotiations based on existing trade groups. Simulation results show that the scheme has a good prospect.
## 1 Introduction
Climate change has significant impacts on the natural environment and human society. Moreover, the greenhouse gases causing climate change do not respect national boundaries; their effects are felt worldwide. Therefore, collective action is required to address this global issue effectively. Although we have made some progress, such as the Paris Agreement and the recent COP27 meeting, according to the latest IPCC Sixth Assessment Report [1], we are not doing enough. The report [1] also notes that international cooperation is a critical enabler for accelerated climate action. However, the prospect of international cooperation is not clear at present. According to Nordhaus's research [2], the main obstacle is "free-riding", that is, countries have an incentive to rely on the emissions reductions of others without taking proportionate domestic abatement.
In order to use AI to provide a solution to this problem, the organizers used Nordhaus's RICE model [3] as the climate and economic dynamics framework, combined it with an agent-based model (ABM), and created the RICE-N model, which includes 27 agents and a negotiation mechanism [4]. Building on the official model, we adopt a form similar to customs unions, with an added requirement on mitigation rates. Simulation results show that the scheme has a good prospect.
## 2 Methodology
Previous researchers have proposed many solutions for climate negotiations. The Nordhaus Climate Club [2] is considered to be a very promising climate negotiation mechanism, so our research is based on this theory and on real-world situations. Figure 1 illustrates the framework of the default RICE-N model [4] and the model of this work.
**Clubs.** The first thing to do is to build clubs. We set a new parameter (an integer) to represent the group state of each agent at each step, and allow agents to choose their group state freely when a new step begins, including setting up, joining, changing, or quitting a group. For example, assume that the grouping states of agents A and B in time step one are 1 and 2, which means they are in different groups. If agent A stays in group 1 in time step two, then agent B can set its grouping state to 1, which means they will be in the same group in this time step. Alternatively, B could set its grouping state to 0 (do not join any group), 2 (stay in the previous group), or 3 (create a new group, assuming no agent's state was 3 in the last step).
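The bookkeeping behind this grouping state can be illustrated with a short sketch; this is a hypothetical illustration of the action semantics, not the actual RICE-N API.

```python
from collections import defaultdict

def update_groups(group_choices):
    """group_choices: dict agent_id -> int (0 = no group, k > 0 = group label).
    Returns a dict mapping each group label to its member set for this step."""
    groups = defaultdict(set)
    for agent, label in group_choices.items():
        if label > 0:              # 0 means the agent stays outside all clubs
            groups[label].add(agent)
    return dict(groups)

# Time step two of the example above: A stays in group 1 and B joins it.
print(update_groups({"A": 1, "B": 1}))   # {1: {'A', 'B'}}
```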
In Nordhaus's theory [2], the foundation of the club is a comparable carbon-pricing mechanism. Considering that countries cannot currently reach a consensus on a carbon-pricing mechanism, we choose to use the default tariff-mitigation rate negotiation mechanism here. However, the negotiation is only carried out among members of the same group.
**Motivation.** The motivation of the club has been described as sanctions on non-members and tariff-free borders between members [2]. One of the sanction ways is a countervailing duty on the carbon content of imports, just like the Carbon Border Adjustment Mechanism of the EU [5]; however, this might be difficult to implement in the model now, so we chose another way Nordhaus mentioned: an extra tariff on all imports from non-members.
As for tariff-free borders between members, after referring to the situation of some trade organizations, such as the North American Free Trade Area (NAFTA) and the China-ASEAN Free Trade Area (CAFTA) [6], we found that their zero-tariff arrangements were realized gradually, so we adopt the same approach: tariffs between members decrease to zero over time.
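A minimal tariff rule combining both incentives could look as follows; the rates and the length of the linear phase-out are assumptions for illustration, not values taken from the RICE-N code.

```python
def import_tariff(same_club, step, base_rate=0.10, extra_rate=0.10, phase_out_steps=10):
    """Tariff an importer applies to an exporter at a given time step."""
    if same_club:
        # Tariff-free border reached gradually, as in NAFTA/CAFTA-style schedules.
        return base_rate * max(0.0, 1.0 - step / phase_out_steps)
    # Sanction on non-members: the negotiated tariff plus an extra levy.
    return base_rate + extra_rate

print(import_tariff(True, 5), import_tariff(False, 5))   # 0.05 0.2
```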
**Critical mass.** Another important principle in climate clubs is called critical mass, which means that participation of the major economic players is needed, as they represent 61% of global gross domestic product and 43% of goods imports [7]. This means we need to set an initial group state that puts these countries in one club at the beginning. However, the real situation is that it is difficult for them to reach an agreement in one step, considering the complex international relations. For example, China and the US remain ambivalent toward the EU's Carbon Border Adjustment Mechanism [8].
Based on these situations, we propose a concept: the major economic players do need to participate, but will do so separately at the beginning; and the initial group state will be based on existing trade cooperation to make the negotiation easier. Hopefully, this will eventually lead to a climate club of global scope.
To simulate this scenario, we ranked the 27 agents according to their carbon emissions and production. The initial data comes from the previous work of Tianyu et al. [4]. Then we divided the agents into high carbon emitting countries (HC) and low carbon emitting countries (LC). See **Supplementary Material 1** for the detailed calculation. Finally, we have two different initial group states:
For **HCs**: we put the top five agents into one group and randomly assign the other agents to four other groups, to simulate the ideal state in which the HCs reach an agreement at the beginning.
Figure 1: Model structure of RICE-N and this work.
For **HC with LC**: we put the top five agents into five different groups and distribute the other agents among them evenly based on the rank, to simulate the state mentioned above.
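Both initial group states can be generated from the emission ranking, for example as in the sketch below; the ranking itself comes from the calibration data of [4] and is represented here only by agent indices.

```python
import random

def initial_groups(ranked_agents, scenario, n_groups=5, seed=0):
    """ranked_agents: agent ids sorted from highest to lowest carbon emissions."""
    random.seed(seed)
    state = {}
    top, rest = ranked_agents[:n_groups], ranked_agents[n_groups:]
    if scenario == "HCs":                 # top emitters start in one common group
        for a in top:
            state[a] = 1
        for a in rest:                    # remaining agents spread over groups 2..5
            state[a] = random.randint(2, n_groups)
    else:                                 # "HC with LC": one top emitter per group
        for i, a in enumerate(top):
            state[a] = i + 1
        for i, a in enumerate(rest):      # fill the groups evenly, following the rank
            state[a] = i % n_groups + 1
    return state

print(initial_groups(list(range(27)), "HC with LC"))
```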
**Real-world situation.** There are some other settings that bring the model closer to the real-world situation; for example, the choices for the saving rate and the maximum export are limited based on World Bank global average data. See **Supplementary Material 2** for a detailed explanation.
## 3 Results and Discussion
The temperature rise and total gross output during 100 years in the simulation under the two scenarios are shown in Figure 2. From the figure, we find that the temperature rise of the HC with LC group is lower than that of the HCs by more than 0.8 \({}^{\circ}\)C, while the gross output shows almost no difference.
To find out how this discrepancy arose, we averaged the mitigating rates of all agents in each stage and display the data in Figure 3. According to the average mitigating rate of all regions, we found that in HCs the average mitigating rate was lower than in HC with LC during some periods, which led to a higher temperature rise.
Another thing we found was that, according to the final group state, agents in HC with LC tended to cluster together at the end more than in the other scenario. The final grouping also included more high carbon emitting agents. That was closer to the ideal eventuality of a globally unified climate club. The results are shown in Figure 4. However, there are still many aspects of this research to explore and improve.
Figure 2: Temperature rise and total gross output.
## 4 Conclusions
In general, based on Nordhaus's Climate Club theory [2] and real-world situations, this work proposed an approach to climate negotiations based on existing trade groups. Current results suggest that the model can inform real-world climate negotiations, but the underlying reasons remain to be discovered.
## Supplementary Material
For the implementation of the above settings, please refer to the relevant files in the Supplementary Materials.
## Acknowledgments
This work reused the AI4GCC code from MILA and Salesforce.
|
2307.13549 | A Planning Ontology to Represent and Exploit Planning Knowledge for
Performance Efficiency | Ontologies are known for their ability to organize rich metadata, support the
identification of novel insights via semantic queries, and promote reuse. In
this paper, we consider the problem of automated planning, where the objective
is to find a sequence of actions that will move an agent from an initial state
of the world to a desired goal state. We hypothesize that given a large number
of available planners and diverse planning domains; they carry essential
information that can be leveraged to identify suitable planners and improve
their performance for a domain. We use data on planning domains and planners
from the International Planning Competition (IPC) to construct a planning
ontology and demonstrate via experiments in two use cases that the ontology can
lead to the selection of promising planners and improving their performance
using macros - a form of action ordering constraints extracted from planning
ontology. We also make the planning ontology and associated resources available
to the community to promote further research. | Bharath Muppasani, Vishal Pallagani, Biplav Srivastava, Raghava Mutharaju, Michael N. Huhns, Vignesh Narayanan | 2023-07-25T14:51:07Z | http://arxiv.org/abs/2307.13549v2 | # A Planning Ontology to Represent and Exploit Planning Knowledge for Performance Efficiency
###### Abstract
Ontologies are known for their ability to organize rich metadata, support the identification of novel insights via semantic queries, and promote reuse. In this paper, we consider the problem of automated planning, where the objective is to find a sequence of actions that will move an agent from an initial state of the world to a desired goal state. We hypothesize that given a large number of available planners and diverse planning domains; they carry essential information that can be leveraged to identify suitable planners and improve their performance for a domain. We use data on planning domains and planners from the International Planning Competition (IPC) to construct a planning ontology and demonstrate via experiments in two use cases that the ontology can lead to the selection of promising planners and improving their performance using macros - a form of action ordering constraints extracted from planning ontology. We also make the planning ontology and associated resources available to the community to promote further research.
Keywords:Ontology Automated Planning Planner Improvement. **Resource Type:** Ontology, Knowledge Graph
**Licence:** Creative Commons Attribution 4.0 License
**URL:**[https://github.com/BharathMuppasani/AI-Planning-Ontology](https://github.com/BharathMuppasani/AI-Planning-Ontology)
## 1 Introduction
Automated planning, where the objective is to find a sequence of actions that will transition an agent from the initial state of the world to a desired goal state, is an active sub-field of Artificial Intelligence (AI). The ability to generate plans and make decisions in complex domains, such as robotics, logistics, and manufacturing, has led to significant progress in the automation of planning. Currently, there are numerous planning domains, planners, search algorithms, and associated heuristics in the field of automated planning. Each planner, in conjunction with a search algorithm and heuristic, generates plans with varying degrees of quality, cost, and optimality. The empirical results available for various planning
problems, ranked by planner performance and the heuristics used, as available from the International Planning Competition (IPC), can provide valuable information to identify various tunable parameters to improve planner performance. Traditionally, improving planner performance involves manually curating potential combinations to identify the optimal planner configuration. However, there has been limited effort to model the available information in a structured knowledge representation, such as an ontology, to facilitate efficient reasoning and enhance planner performance.
To address the challenge of representing planning problems and associated information in a structured manner, we propose an ontology for AI planning. An ontology formally represents concepts and their relationships [7], which enables systematic analysis of planning domains and planners. The proposed ontology captures the features of a domain and the capabilities of planners, facilitating reasoning with existing planning problems, identifying similarities, and suggesting different planner configurations. Planning ontology can also be a useful resource for the creation of new planners as it captures essential information about planning domains and planners, which can be leveraged to design more efficient planning algorithms. Furthermore, ontology can promote knowledge sharing and collaboration within the planning community.
In the field of planning, several attempts have been made to create ontologies to enhance the understanding of planners' capabilities. For instance, Plan-Taxonomy [2] introduced a taxonomy that aimed to explain the functionality of planners. Additionally, authors in [6] present a comprehensive ontology called PLANET, which represents plans in real-world domains and can be leveraged to construct new applications. Nonetheless, the reusability of PLANET is limited as it is not open-sourced. Consequently, researchers face difficulty in extending or replicating the ontology.
This paper outlines our methodology for constructing an ontology to represent AI planning domains, leveraging information obtained from the IPC. In our current work, we extended our initial research [11]. Specifically, we have enhanced the ontology to more accurately depict the various concepts within the planning domain. Furthermore, we include additional use cases of our ontology and provide experimental evaluations to support our findings further. Building a planning ontology using data from IPC offers several benefits, such as comprehensive coverage of planning domains, a rich source for various benchmark evaluation metrics, and documentation for planners. However, the ontology is not limited to the PDDL representation or domains in IPC and can easily be extended to any. Our contributions are at the intersection of ontologies and AI planning and can be summarized as follows.
* **Planning Ontology**: We developed an ontology for AI planning that can be used to represent and organize knowledge related to planning problems. Our ontology provides a structured way to capture the relationships between different planning concepts, enabling more efficient and effective knowledge sharing and reuse.
* **Usecase 1: Identifying Most Promising Planner for Performance**: We demonstrate the ontology's usage for identifying the best-performing planner for a specific planning domain using data from IPC-2011.
* **Usecase 2: Improving Planner Performance using Macro Operators**: We demonstrate the ontology's usage for extracting macro operators - which are action orderings - and show that they can improve planner performance drastically.
In the remainder of the paper, we start with preliminaries about automated planning and IPC. We then give an overview of the existing literature on ontologies for planning. Following this, we present a detailed description of the ontology construction process and its usage. We then discuss the proposed planning ontology and conclude with future research directions.
## 2 Preliminaries
In this section, we describe the necessary background for automated planning and the significance of the International Planning Competition.
### Automated Planning
Automated planning, also known as AI planning, is the process of finding a sequence of actions that will transform an initial state of the world into a desired goal state [5]. It involves constructing a plan or a sequence of actions that will achieve a specified objective while respecting any constraints or limitations that may be present. Formally, automated planning can be defined as a tuple \((S,A,T,I,G)\), where:
Figure 1: Demonstration of automated planning problem with blocksworld domain example
* \(S\) is the set of possible states of the world
* \(A\) is the set of possible actions that can be taken
* \(T\) is the transition function that describes the effects of taking an action on the current state of the world
* \(I\) is the initial state of the world
* \(G\) is the desired goal state
Using this notation, the problem of automated planning can be framed as finding a sequence of actions \(\prec a_{1},a_{2},...,a_{k}\succ\) that will transform the initial state \(I\) into the goal state \(G\), while respecting any constraints or limitations on the actions. A problem is defined in terms of a domain and a problem instance. The domain defines the possible actions that can be taken and the effects of each action, while the problem instance specifies the initial state of the world and the desired goal state. Various techniques can be used to solve the planning problem, such as search algorithms, constraint-based reasoning, and optimization methods. These techniques involve exploring the space of possible plans and selecting the one that satisfies the objective and any constraints. Figure 1 illustrates an automated planning scenario for the blocksworld domain, where an initial state can be transformed into a goal state by executing a sequence of actions.
**Attributes modeled about a domain.**
1. **Requirements:** A list of requirements that the planner must satisfy in order to solve the domain. Requirements include durative actions, conditional effects, or negative preconditions. For example, in blocksworld domain with types involved, one of the requirements is _typing_.
2. **Predicates:** Predicates are fundamental elements in the planning domain that define the properties of the world. They are used to describe the initial and goal states, as well as the preconditions and effects of actions. Predicates are usually defined as logical expressions over a set of variables, where each variable can take on a finite number of values. In the context of planning, predicates are typically used to represent facts about the world that can be true or false, such as the location of an object or the status of a machine. For example, in blocksworld domain, the predicate (on b1 b2) could indicate that block 'b2' is on top of block 'b1'.
3. **Actions:** Actions are the basic units of change in the planning domain. They represent atomic operations that can be performed to transform the world from one state to another. Each action has a name, a set of parameters, preconditions that must be satisfied before the action can be executed, and effects that describe the changes that the action makes to the world. Actions can be used to model a wide variety of operations, ranging from simple movements or transformations to complex processes such as planning or decision-making. For example, in blocksworld domain, the action unstack b2 b1 can be used to unstack block 'b2' from block 'b1'.
4. **Preconditions:** Preconditions are the conditions that must be true before an action can be executed. They are usually defined using predicates and can involve multiple variables. Preconditions can also be negative, which means that a certain condition must not be true for an action to be executed.
In planning, preconditions ensure that actions are only executed when the necessary conditions have been met, such as ensuring that a machine is turned off before it is serviced. For example, in blocksworld domain, the action unstack b2 b1 has a precondition of (on b1 b2), meaning that for the action to be valid, the block 'b2' should be on top of block 'b1'.
5. **Effects:** Effects describe the changes that an action makes to the world. They are usually defined using predicates and can involve multiple variables. Effects can be positive, which means that a certain condition becomes true after the action is executed, or negative, which means that a certain condition becomes false after the action is executed. In the context of planning, effects are used to model the changes that result from executing an action, such as moving an object from one location to another or turning a machine on. For example, in blocksworld domain, when the action unstack b2 b1 is executed, one of its effects is (not (on b1 b2)), indicating that block 'b2' is no longer on top of block 'b1'.
6. **Constants:** Constants are values that are fixed and do not change during the execution of the planning problem. They are used to represent objects or entities in the world that have a fixed value, such as the speed limit on a road. Constants can be used to simplify the planning problem by reducing the number of variables that need to be considered and by providing a fixed set of values that can be used in predicates and actions. For example, in blocksworld domain, the constant _table_ could represent the surface on which the blocks are initially placed.
7. **Types:** Types are used to classify objects or entities in the world based on their attributes or properties. They are used to define the domain of values that a variable can take on and can be used to constrain the values that are assigned to variables. In the context of planning, types are typically used to group related objects or entities together, such as cars or bicycles, and to specify the properties that are common to all members of a type, such as their color or size. For example, in blocksworld domain with types involved, one can represent the predicate as (on?x - block?y - block) stating that the parameters in the predicate are of type _block_.
**Attributes modeled about a problem instance from a domain.**
1. **Name:** The name of the planning problem.
2. **Domain:** The name of the planning domain that the problem belongs to.
3. **Objects:** A list of objects that are present in the planning problem. Objects are typically defined in terms of their type and name. In the example shown in Figure 1, objects are b1, b2, and b3.
4. **Initial State:** A description of the initial state of the world, including the values of all relevant predicates. Figure 1 represents an example initial state.
5. **Goal State:** A description of the desired goal state of the world, including the values of all relevant predicates. Figure 1 represents an example goal state.
Each of these attributes is essential for defining a complete planning problem that an automated planner can solve.
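To make the attributes above concrete, the snippet below encodes the blocksworld example from Figure 1 as plain Python dictionaries. The field names mirror PDDL, but the initial and goal states shown here are only illustrative, and the layout is not necessarily the intermediary JSON format used by our mapping scripts.

```python
blocksworld_domain = {
    "name": "blocksworld",
    "requirements": [":strips"],
    "predicates": ["(on ?x ?y)", "(ontable ?x)", "(clear ?x)",
                   "(handempty)", "(holding ?x)"],
    "actions": {
        # unstack ?x from ?y, following the convention used in the text,
        # where (on ?y ?x) means block ?x sits on top of block ?y
        "unstack": {
            "parameters": ["?x", "?y"],
            "precondition": ["(on ?y ?x)", "(clear ?x)", "(handempty)"],
            "effect": ["(holding ?x)", "(clear ?y)", "(not (on ?y ?x))",
                       "(not (clear ?x))", "(not (handempty))"],
        },
        # pick-up, put-down and stack are defined analogously
    },
}

blocksworld_problem = {
    "name": "blocks-example",              # illustrative instance, not from IPC
    "domain": "blocksworld",
    "objects": ["b1", "b2", "b3"],
    "init": ["(ontable b1)", "(on b1 b2)", "(on b2 b3)", "(clear b3)", "(handempty)"],
    "goal": ["(ontable b3)", "(on b3 b2)", "(on b2 b1)", "(clear b1)"],
}
```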
### International Planning Competition (IPC)
IPC serves as a significant means of assessing and comparing various planning systems. By presenting new planners and benchmark problems each year, the competitions aim to stimulate the advancement of new planning methodologies and reflect current trends and challenges in the field. The competition comprises multiple tracks, each covering various planning problems such as classical, temporal, and probabilistic planning. These tracks include benchmark problems that evaluate the performance of planners concerning parameters such as plan quality, plan length, and run time. The results of these competitions provide insights into the current state-of-the-art in planning and help identify the strengths and weaknesses of different planning systems. IPC can serve as an excellent starting point for building a planning-related ontology as the benchmark problems used in these competitions can provide a comprehensive overview of the domain and the types of problems that planners need to solve.
## 3 Related Work
The use of ontology-based knowledge representation and reasoning has been extensively studied in various domains, including automated planning. This section focuses on the applications of ontology-based knowledge representation and reasoning in the context of planning and related domains. In [13], an ontology is constructed for the Joint Forces Air Component Commander (JFACC) to represent knowledge from the air campaign domain. The ontology is modularized to facilitate data organization and maintenance, but its applicability is domain-specific, unlike our approach. In [15], the authors automate the knowledge discovery workflow using ontology and AI planning, creating a Knowledge Discovery (KD) ontology to represent the KD domain and converting its variables to a Planning Domain Definition Language (PDDL) format to obtain the PDDL domain. The ontology's objects represent initial and goal states, forming the KD task, which represents a specific problem. The authors use the Fast-Forward (FF) planning system to generate the required plans. In a survey of ontology-based knowledge representation and reasoning in the planning domain, [4] suggest that knowledge reasoning approaches can draw new conclusions in non-deterministic contexts and assist with dynamic planning. In [6], a reusable ontology, PLANET, is proposed for representing plans. PLANET includes representations for planning problem context, goal specification, plan, plan task, and plan task description. However, PLANET does not include representations for some entities commonly associated with planning domains, such as resources and time. Our planning ontology draws inspiration from PLANET and appends more metadata for planner improvement. In [1], a domain-independent approach is presented that advances the state of the art by augmenting the knowledge of a planning task with pertinent goal opportunities. The authors demonstrate that incorporating knowledge obtained from an ontology can aid in producing better-valued plans, highlighting the potential for planner enhancement using
more tuning parameters, which are captured in our planning ontology. In general, these studies demonstrate the potential of ontology-based knowledge representation and reasoning in the planning domain, including applications such as representing plans, aiding in air campaign planning, automating knowledge discovery workflows, and developing context-aware planning services.
## 4 Planning Ontology
This section covers the construction of planning ontology to capture the essential details of automated planning. We will discuss the considerations, challenges, benefits, and limitations of using ontologies for automated planning, to provide a better understanding of how they can improve the efficiency and effectiveness of automated planning systems.
### Competency Questions
Competency questions for an ontology are focused on the needs of the users who will be querying the ontology. These questions are designed to help users explore and understand the concepts and relationships within the ontology, and to find the information they need within the associated knowledge base. By answering these questions, the ontology can be better scoped and tailored to meet the needs of its users.
Figure 2: Planning ontology capturing different concepts of automated planning domain, problem, plan, and planner performance separated into categories (shown as colored rectangles)
* C1: What are the different types of planners used in automated planning?
* C2: What is the relevance of planners in a given problem domain?
* C3: What are the available actions for a given domain?
* C4: What problems in a domain satisfy a given condition?
* C5: What are all requirements a given domain has?
* C6: What is the cost associated with generating a plan for a given problem?
* C7: How many parameters does a specific action have?
* C8: What planning type a specific planner belongs to?
* C9: What requirements does a given planner support?
* C10: What are the different types present in a domain?
### Design
An ontology is a formal and explicit representation of concepts, entities, and their relationships in a particular domain. In this case, ontology is concerned with the domain of automated planning, which refers to the process of generating a sequence of actions to achieve a particular goal within a given set of constraints. The ontology aims to provide a structured framework for organizing and integrating knowledge about this domain, which can be useful in various applications, such as designing planning algorithms, extracting best-performing planners given a domain, or learning domain-specific macros.
Figure 2 shows an ontology that aims to encompass the various concepts of automated planning separated into categories of Domain, Problem, Plan, and Planner. The ontology for automated planning is composed of 19 distinct classes and 25 object properties. These classes and properties are designed to represent the various elements of the automated planning domain and its associated problems.
#### 4.2.1 Domain
The Domain category of the ontology comprises classes that represent the general characteristics of the planning domain. These classes include the Domain - Requirements, Types, Predicates, Constraints, and Actions that can be used to solve problems in that domain. The classes in the Domain category are designed to provide a structured framework for organizing and integrating knowledge about the problem domain, which can be useful in various applications such as designing planning algorithms, extracting best-performing planners given a domain, or learning domain-specific macros.
#### 4.2.2 Problem
The Problem category of the ontology includes classes that represent specific problems within a given domain. These classes are designed to capture the details of a particular problem, such as the Objects defined in the problem, which are instances of the 'types' defined in the planning domain, the Initial State of the problem, and the Goal State, which are subclasses of the parent class State that describes the current state of the problem domain.
#### 4.2.3 Plan
The Plan category of the ontology includes classes that represent the sequence of actions that must be taken to solve a given problem. The Plan class is used to store the knowledge about the plans that planners generate for specific problems. The plan cost for each plan is stored as a data property (non-negative integer) of the Plan class. This enables planners to be compared based on the quality of the plans they generate, as well as the cost of those plans.
#### 4.2.4 Planner
The Planner category of the ontology includes classes that capture the details of planner performance from previous IPCs. Specifically, Planning Domain relevance to a Planner is classified based on the percentage of problems they have successfully solved, which is then categorized into three levels of relevance to the planner: _low_, _medium_, and _high_. By incorporating this information into the ontology, planners can be evaluated based on their performance in different problem domains, and more informed decisions can be made about which planners to use for a given problem. In addition, this information can be used to guide the development of new planners and to evaluate their performance against established benchmarks.
### Accessing Planning Ontology
We have taken various measures to ensure that our planning ontology follows the FAIR principles [14] of being Findable, Accessible, Interoperable, and Reusable. To assist users in exploring and utilizing our ontology, we have made it accessible through a persistent URL1 and our GitHub repository2. Our repository contains ontology model files, mapping scripts, and utility scripts that extract information from PDDL domains and problems into intermediary JSON format and add the extracted data as triples using our model ontology, creating a knowledge graph. We provide sample SPARQL queries that address the ontology's competency questions mentioned earlier. Moreover, our ontology documentation, which is accessible through the GitHub repository, provides a comprehensive overview of the ontology's structure, concepts, and relations, including ontology visualization. This documentation serves as a detailed guide for users to comprehend the ontology's applications in the automated planning domain. We also provide the scripts and results from the ontology evaluation, which are presented as use cases of our ontology in later sections, in our repository, along with accompanying documentation.
Footnote 1: PURL - [https://purl.org/ai4s/ontology/planning](https://purl.org/ai4s/ontology/planning)
Footnote 2: [https://github.com/BharathMuppasani/AI-Planning-Ontology](https://github.com/BharathMuppasani/AI-Planning-Ontology)
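A minimal sketch of the mapping step described above, using `rdflib`, is shown below. The namespace IRI is derived from the persistent URL above and the class and property names follow Figure 2, but both should be checked against the released ontology files; the file names here are assumptions.

```python
from rdflib import Graph, Namespace, RDF

PLAN = Namespace("https://purl.org/ai4s/ontology/planning#")   # assumed namespace form

g = Graph()
g.parse("planning_ontology.owl", format="xml")   # model ontology (assumed file name)

domain = PLAN["blocksworld"]
g.add((domain, RDF.type, PLAN.PlanningDomain))
g.add((domain, PLAN.hasRequirement, PLAN["strips"]))
for name in ["pick-up", "put-down", "stack", "unstack"]:
    action = PLAN[name]
    g.add((action, RDF.type, PLAN.DomainAction))
    g.add((domain, PLAN.hasAction, action))

g.serialize("blocksworld_kg.ttl", format="turtle")   # resulting knowledge graph
```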
## 5 Usage of Planning Ontology
In the following section, we show the evaluation of a few competency questions and discuss two use cases of our planning ontology.
**Evaluation of Competency questions:** For the evaluation of the competency questions, we have considered a sample knowledge graph, shown in Figure 3, for blocksworld from IPC-2000 domain created using planning ontology shown in Figure 2. SPARQL queries for each of these questions can be found at our GitHub Repository[2].
1. C1: What are the different types of planners used in automated planning? **Question Type:** Extracting planner information. **Sufficiency Condition:** There should exist at least one individual for Planner class. **Result:** Shown in Table 1.
2. C2: What is the relevance of planners in 'blocksworld' domain? **Question Type:** Extracting best planner for a domain. **Sufficiency Condition:** There should exist at least one Planner individual having either of the relevance properties with 'blocksworld' individual of PlanningDomain class. **Result:** Shown in Table 2.
3. C3: What are the available actions for 'blocksworld' domain? **Question Type:** Extracting domain information. **Sufficiency Condition:** For the 'blocksworld' individual of PlanningDomain, there must be at least one DomainAction individual with the relation hasAction. **Result:** Shown in Table 3.
4. C4: Which problems in 'blocksworld' have problems with the goal state of 'b1' being on the table? **Question Type:** Extracting problem information **Sufficiency Condition:** For the 'blocksworld' individual of PlanningDomain, there must be at least one PlanningProblem individual with the relation hasProblem and the problem should have '(ontable b1)' GoalState. **Result:** Shown in Table 4.
| S.No | Domain | Relation | Problem |
| --- | --- | --- | --- |
| 1 | blocksworld | hasProblem | problem_3_1 |

Table 4: Results for C4 with knowledge graph in Figure 3
| S.No | Domain | Relation | Planner |
| --- | --- | --- | --- |
| 1 | blocksworld | hasLowRelevancePlanner | CPT4 |
| 2 | blocksworld | hasLowRelevancePlanner | SHOP2 |
| 3 | blocksworld | hasHighRelevancePlanner | FF |
| 4 | blocksworld | hasHighRelevancePlanner | FastDownward |
| 5 | blocksworld | hasMediumRelevancePlanner | LPG |

Table 2: Results for C2 with knowledge graph in Figure 3
| S.No | Domain | Relation | Action |
| --- | --- | --- | --- |
| 1 | blocksworld | hasAction | put-down |
| 2 | blocksworld | hasAction | pick-up |
| 3 | blocksworld | hasAction | stack |
| 4 | blocksworld | hasAction | unstack |

Table 3: Results for C3 with knowledge graph in Figure 3
5. C5: What are all requirements a given domain has? **Question Type:** Extracting domain information **Sufficiency Condition:** For the 'blocksworld' individual of PlanningDomain, there must exist at least one DomainRequirement individual with the relation hasRequirement. **Result:** Shown in Table 5.
#### Usecase 1: Identifying Most Promising Planner
One of the major challenges in the field of artificial intelligence (AI) is the automated selection of the best-performing planner for a given planning domain. This challenge arises due to the vast number of available planners and the diversity of planning domains. To address this challenge, our approach proposes the use of an ontology to represent the features of the planning domain and the capabilities of planners.
The ontology for planning aims to capture the connection between the Planning Domain and the Planner by indicating the relevance of a planner to a specific domain. We made use of data acquired from International Planning Competitions (IPCs) to furnish specific details regarding the relevance of planners. The IPC results provide us with relevant details on the planners that took part in the competition and the domains that were evaluated during that particular year. This information includes specifics on how each planner performed against all the domains that participated.
To show the usage of extracting the most promising planners for a given domain, we have used IPC-2011 data3 (optimal track). The ontology was populated with data acquired from IPC-2011, which provided relevant details on the planners that took part in the competition and the domains that were evaluated that year. A relevance relation of either _low_, _medium_, or _high_ was assigned to each planner based on the percentage of problems it solved in a given domain (_low_: below 35%, _medium_: 35% to 70%, _high_: 70% and above). In this experiment, we consider that the experimental environment has four planners available: Fast Downward Stone Soup 14, LM-Cut4, Merge and Shrink4, and BJOLP4. We evaluate 3 problem instances of each domain from IPC-2011 with 2 policies for selecting planners to generate plans for each of these problem instances -
Footnote 3: [http://www.plg.inf.uc3m.es/ipc2011-deterministic/](http://www.plg.inf.uc3m.es/ipc2011-deterministic/)
Footnote 4: [https://www.fast-downward.org/IpcPlanners](https://www.fast-downward.org/IpcPlanners)
1. **Random Policy:** To solve each problem instance, this policy selects a random planner from the available planners.
| S.No | Domain | Relation | Requirement |
| --- | --- | --- | --- |
| 1 | blocksworld | hasRequirement | :strips |

Table 5: Results for C5 with knowledge graph in Figure 3
2. **Ontology Policy:** To solve each problem instance, this policy extracts the information on the best planner for the problem domain from the ontology populated with IPC-2011 data.
Table 6 presents the results of our evaluation, indicating the average number of nodes expanded and plan cost for each policy in a given domain. The table provides a comprehensive summary of the performance of different planners in terms of their efficiency and effectiveness. An ideal planner is expected to generate a solution with low values for both these metrics. The _Ontology Policy_, designed to select the best-performing planner for a given domain, outperformed the _Random Policy_ in terms of the average number of nodes expanded to find a solution. Moreover, the _Random Policy_ failed to solve problems in the 'parking' (1 out of 3), 'floortile' (2 out of 3), and 'tidybot' (2 out of 3) domains, which highlights the limitations of choosing a planner randomly. However, if a domain is easily solved by all of the relevant planners, the _Random Policy_ may still do well.
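The lookup behind the _Ontology Policy_ amounts to a single query over the populated knowledge graph. The sketch below shows the idea with `rdflib`; the prefix IRI and file name are assumptions, while the property name follows the relevance relations shown earlier (e.g., Table 2).

```python
from rdflib import Graph

g = Graph()
g.parse("ipc2011_kg.ttl")    # knowledge graph populated from IPC-2011 results (assumed file name)

QUERY = """
PREFIX plan: <https://purl.org/ai4s/ontology/planning#>
SELECT ?planner WHERE {
    plan:parking plan:hasHighRelevancePlanner ?planner .
}
"""
for row in g.query(QUERY):
    print(row.planner)       # planners with high relevance to the 'parking' domain
```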
#### Usecase 2: Extracting Macro Operators
While automated planning has been successful in many domains, it can be computationally expensive, especially for complex problems. One approach to improve efficiency is by using macro-operators, which are sequences of primitive actions that can be executed as a single step. However, identifying useful macro-operators manually can be time-consuming and challenging. Authors in [3] introduce a novel method for improving the efficiency of planners by generating macro-operators. The proposed approach involves analyzing the inter-dependencies between actions in plans and extracting macro-operators that can replace primitive actions without
| Domain | Ontology Policy: Avg. Exp. | Ontology Policy: Avg. Plan Cost | Random Policy: Avg. Exp. | Random Policy: Avg. Plan Cost |
| --- | --- | --- | --- | --- |
| scanalyzer | **8588** | 20 | 8706 | 20 |
| elevators | **1471** | 52 | 64541 | 52 |
| transport | 165263 | 491 | **132367** | 491 |
| parking* | **367910** | 18 | 488830 | 17 |
| woodworking | **1988** | 211 | 19844 | 211 |
| floortile** | 283724 | 54 | **2101** | 49 |
| barman | **1275078** | 90 | 5816476 | 90 |
| openstacks | **132956** | 4 | 139857 | 4 |
| nomystery | 1690 | 13 | 1690 | 13 |
| pegsol | **89246** | 6 | 101491 | 6 |
| visitall | 5 | 4 | 5 | 4 |
| tidybot** | **1173** | 17 | 3371 | 33 |
| parcprinter | 541 | 441374 | **417** | 441374 |
| sokoban | **9653** | 25 | 156600 | 25 |

Table 6: Demonstrating the effectiveness of two different policies employed to choose a planner for problem-solving.
losing the completeness of the problem domain. The soundness and complexity of the method are assessed and compared to other existing techniques. The paper asserts that the generated macro-operators are valuable and can be seamlessly integrated into planning domains without losing the completeness of the problem.
Based on the ontology depicted in Figure 2, we extract macro-operators that can enhance the efficiency of planners. To demonstrate this, we have considered three different domains: blocksworld (bw), driverlog (dl), and grippers (gr), presented in IPC-2000, IPC-2002, and IPC-1998, respectively. We initially developed a knowledge graph using the ontology represented in Figure 2 for the three domains of interest. Subsequently, we employed a SPARQL query to retrieve the stored plans for these domains. We then examined these plans to identify the sequences of action pairs and ranked them based on their frequency of occurrence. To improve the effectiveness of this technique, it is essential to consider both the frequency of occurrence of action pairs and the properties of the domain. Specifically, the precondition and effect of actions should be analyzed to ensure that the first action leads to the precondition of the second action in the pair. We employed another SPARQL query to extract the preconditions and effects associated with each of these actions. We analyzed the resulting action pairs to verify their validity of occurrence, thereby filtering out pairs that did not have a combined effect. The results of this extraction process are shown in Table 7. These action relations are stored back into the knowledge graph in the MacroAction class and can be utilized by planners to enhance their efficiency.
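The pair-extraction step can be summarised by the following sketch; it works at the level of predicate names and ignores parameter bindings, so it is a simplification of the precondition/effect analysis described above.

```python
from collections import Counter
from itertools import pairwise          # Python 3.10+

def candidate_macros(plans, actions, top_k=10):
    """plans: list of plans, each a list of action names (e.g. retrieved from the KG).
    actions: dict name -> {"pre": set of predicate names, "add": set of add-effect names}."""
    counts = Counter(pair for plan in plans for pair in pairwise(plan))
    valid = []
    for (a1, a2), freq in counts.most_common():
        # keep the pair only if a1 adds something that a2 requires
        if actions[a1]["add"] & actions[a2]["pre"]:
            valid.append(((a1, a2), freq))
    return valid[:top_k]

# e.g. in blocksworld, ("unstack", "put-down") survives the filter because
# unstack adds 'holding', which put-down requires.
```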
| Domains | Extracted Action Relations |
| --- | --- |
| blocksworld | unstack * put-down; pick-up * stack; put-down * unstack; stack * pick-up; unstack * stack; put-down * pick-up; stack * unstack |
| driverlog | drive-truck * unload-truck; drive-truck * load-truck; board-truck * drive-truck; walk * board-truck |
| grippers | pick * move; move * drop |

Table 7: Extracted action relations, ordered based on their frequency, for domains blocksworld, driverlog, and grippers.
| Domain | Original: Avg. Exp. | Original: Avg. Eval. | Original: Avg. Gen. | With Macros: Avg. Exp. | With Macros: Avg. Eval. | With Macros: Avg. Gen. |
| --- | --- | --- | --- | --- | --- | --- |
| blocksworld | 20219 | 59090 | 106321 | 18 | 310 | 359 |
| gripper | 2672 | 10660 | 30871 | 510 | 3974 | 11468 |
| driverlog | 3753 | 17849 | 45753 | 14888 | 720008 | 209760 |

Table 8: Comparison of planner performance between original and macro-enabled versions of three planning domains, showing the average number of nodes expanded, evaluated, and generated.
Table 8 shows the comparison of planner performance given the original domain and the macro-enabled version of the domain. For this evaluation, we have considered the FastDownward planner [8] with the LM-Cut heuristic [9] to generate plans for 20 problems of varying complexity for each domain. We evaluate the performance on each domain based on the average number of nodes expanded, evaluated, and generated to find a solution. This study demonstrates that macro operators can enhance planner performance in most of the domains tested, with the exception of the driverlog domain. In this domain, the planner performs worse when macro operators are included, as they increase the average number of nodes expanded, evaluated, and generated. This is due to the fact that the macro operators introduce more actions to the domain, which increases the branching factor and challenges the heuristic to select the optimal action at each step. Hence, the applicability of macro operators depends on the features of the domain and the planner. Macro operators can facilitate the planning process by decreasing the search depth, but they can also hinder it by increasing the search width. A potential improvement is to use a more informative heuristic that guides the planner to choose the best action at each step.
## 6 Conclusion
In this work, we build and share a planning ontology that provides a structured representation of concepts and relations for planning, allowing for efficient extraction of domain, problem, and planner properties. The ontology's practical utility is demonstrated in identifying the best-performing planner for a given domain and extracting macro operators using plan statistics and domain properties. Standardized benchmarks from IPC domains and planners offer an objective and consistent approach to evaluating planner performance, enabling rigorous comparisons in different domains to identify the most suitable planner. The planning ontology can aid researchers and practitioners in automated planning, and its use can simplify planning tasks and boost efficiency. As the field of AI planning continues to evolve, planning ontology can play a crucial role in advancing the state-of-the-art while leveraging the past.
Future work could explore the use of mixed reasoning strategy with both ontologies (top-down) and Large Language Models (LLMs) (bottom-up) knowledge [10]. For instance, one could leverage the planning ontology in the context of LLMs, which have recently shown promise for automated planning [12]. Moreover, the application of this mixed reasoning approach could be extended to complex domains, such as multi-agent systems, where coordinating actions between multiple agents is crucial.
|
2305.05552 | Buoyancy enabled autonomous underwater construction with cement blocks | We present the first free-floating autonomous underwater construction system
capable of using active ballasting to transport cement building blocks
efficiently. It is the first free-floating autonomous construction robot to use
a paired set of resources: compressed air for buoyancy and a battery for
thrusters. In construction trials, our system built structures of up to 12
components and weighing up to 100Kg (75Kg in water). Our system achieves this
performance by combining a novel one-degree-of-freedom manipulator, a novel
two-component cement block construction system that corrects errors in
placement, and a simple active ballasting system combined with compliant
placement and grasp behaviors. The passive error correcting components of the
system minimize the required complexity in sensing and control. We also explore
the problem of buoyancy allocation for building structures at scale by defining
a convex program which allocates buoyancy to minimize the predicted energy cost
for transporting blocks. | Samuel Lensgraf, Devin Balkcom, Alberto Quattrini Li | 2023-05-09T15:43:47Z | http://arxiv.org/abs/2305.05552v1 | # Buoyancy enabled autonomous underwater construction with cement blocks
###### Abstract
We present the first free-floating autonomous underwater construction system capable of using active ballasting to transport cement building blocks efficiently. It is the first free-floating autonomous construction robot to use a paired set of resources: compressed air for buoyancy and a battery for thrusters. In construction trials, our system built structures of up to 12 components and weighing up to \(100\,\mathrm{Kg}\) (\(75\,\mathrm{Kg}\) in water). Our system achieves this performance by combining a novel one-degree-of-freedom manipulator, a novel two-component cement block construction system that corrects errors in placement, and a simple active ballasting system combined with compliant placement and grasp behaviors. The passive error correcting components of the system minimize the required complexity in sensing and control. We also explore the problem of buoyancy allocation for building structures at scale by defining a convex program which allocates buoyancy to minimize the predicted energy cost for transporting blocks.
## I Introduction
Near coast underwater infrastructure plays an important role in many of the most basic aspects of society. The United Nations estimates that about half of the world's seafood comes from aquaculture [1]. Offshore wind energy currently produces \(42\,\mathrm{MW}\) of electricity in the U.S. alone with numerous projects expected to expand that capacity [2]. While autonomous underwater vehicles (AUVs) have been widely explored for aiding with inspection and exploration tasks [3], little work has been done to explore using them to directly aid in constructing underwater infrastructure.
Autonomous construction in water presents the unique opportunity of controlling the construction vehicle's buoyancy, which allows an AUV to build heavier and larger structures on limited battery capacity than drone-based systems. To exploit this opportunity, we developed the first free-floating autonomous construction system that actively tunes its buoyancy, allowing it to manipulate cement building blocks efficiently. Figure 1 shows our system placing a cement block on top of a 2D pyramid.
Our AUV system is wholly designed around the task of constructing cement block structures. It consists of several novel components: a novel one degree-of-freedom manipulator that allows simple grasp behaviors which align the AUV, a novel two-component cement building block system designed specifically to accept large amounts of placement error, and a simple active ballasting system and associated control behaviors that offset variable amounts of weight.
While it is common to design long term autonomous systems with backup energy sources, the balancing of two complementary and distinct resources during a manipulation task is, to our knowledge, unexplored. We defined a convex program which captures the trade off between battery power and compressed air. The convex program can be used to plan buoyancy allocations for large structures. We use this convex programming formulation to explore the problem of scale more deeply than possible in our physical experiments.
To alleviate positioning and localization errors, our system builds structures with slightly modified cement blocks combined with molded cement interlocking elements, referred to as _cones_. Structures are built of alternating layers of cement blocks and cones which ensure that each layer helps the next slide into place. Our AUV's novel manipulator is fitted with two phases of error correction which when combined with our novel compliant grasping strategies allows the AUV to slide into the proper alignment as it grasps a component.
This work represents a first step towards large scale construction. A number of future challenges, including robust perception and adaptation to external disturbances, will be study of future work, as discussed at the end of the paper.
Fig. 1: Placing the final block on a pyramid while releasing excess buoyancy.
## II Related Work
In our previous work [4], we developed the basis used for controlling and localizing our AUV. The previous iteration of our system created a structure of eight smooth, uniformly shaped, nearly neutrally buoyant blocks. Our current system built a structure of twelve components of two different shapes that are significantly negatively buoyant and have high friction.
We presented preliminary work on our system at the ICRA 2022 construction robotics workshop [5]. We presented preliminary results using the buoyancy allocation convex program but did not apply it to the question of scale. The system contained basic versions of several of the components described here, but the control software and hardware were not capable of stacking blocks.
Other autonomous underwater construction systems have primarily focused on tele-operation, such as the haptic feedback system for operating a backhoe developed by Hirabayashi _et al._[6]. Augmented reality has been explored as a way to manipulate large objects underwater using waterproofed construction equipment [7]. Surface-level self-propelled blocks have also been explored for building bridge-like structures [8]. One of our goals in designing the blocks the robot builds with was to keep them simple and similar to easily available construction materials. Self-assembly of robotic systems in water has also been explored [9, 10, 11].
**Land-based construction systems.** Land-based robotic construction systems have seen more development than air and water-based systems [12]. Land-based systems have used a variety of mobility designs including wheeled robots [13, 14, 15] and tracked robots [16, 17]. Robotic systems for autonomously laying brick walls have been explored since the birth of autonomous construction research [18, 19]. Mobile-base 3D printing robots are currently being explored both in industry and in research [20, 21]. Land-based robotic construction systems typically assume easy access to a power source or spare batteries, limiting the need for explicitly considering energy use during construction.
**Drone-based construction systems.** Latteur _et al._ explored using drones to stack interlocking cement blocks [22, 23]. While the problems of designing easy to assemble cement blocks and localizing the drone were explored, this work was tested using human pilots. The problem of battery capacity's effect on scale was left as future work.
The largest structure assembled by drones appears in the Flight Assembled Architecture Installation [24] in which a team of drones manipulated 90 gram foam blocks. Because the construction process centers around using a large team of UAVs, the UAVs can be easily swapped out. This eliminated the need for explicitly considering energy usage. The construction of truss structures using teams of quadrotor drones was explored by Lindsey _et al._[25] and in simulation by Santos _et al._[26]. Using drones as the base of an aerial 3D printing system has also been explored [27, 28].
**Autonomous underwater manipulation.** Autonomous underwater manipulation systems, referred to as intervention AUVs, are often designed to be general purpose agents fitted with complex, high degree-of-freedom manipulators for performing tasks such as manipulating a panel or collecting samples [29, 30]. Problems such as station keeping while manipulating an object using a high-degree-of-freedom manipulator require complex control strategies [31, 32]. Manipulating objects using teams of AUVs has also been explored [33]. Most underwater manipulators are prohibitively expensive. Even simple models with more than one degree of freedom can cost tens of thousands of US dollars [34] and range up to millions of US dollars.
To overcome the cost and system complexity associated with most general purpose underwater manipulators, we designed our system to be as simple as possible while still achieving the task at hand. The most closely related autonomous underwater manipulation system to ours is the system by Palomeras _et al._[35] in which the AUV docks in a specially designed mount before executing a manipulation task by forcing rods into cones mounted above the panel. This work in part inspired the compliant plunging procedures used to grasp objects with our AUV.
**Buoyant robots.** Active ballasting has long been used by autonomous and remotely operated underwater vehicles. Compressed air coming from an above-water source was used to offset the weight of a payload during a recovery procedure [36]. Other AUVs have exploited compressed air based active ballast to accommodate changes in salinity and thereby buoyancy in estuary environments [37].
Piston-tank and pump-based active ballasting systems are commonly used in underwater gliders and submarines [38, 39]. Detweiler _et al._[40] designed a robotic platform for repeatedly accommodating dynamic payloads using a piston-tank-based active ballasting system and compensated for changing centers of mass by moving the robot's battery. Their system could accommodate up to \(1\,\mathrm{kg}\) of payload. They explored the trade-off between using buoyancy and thrusters; however, their model is not used for planning.
## III AUV construction system
Our AUV construction system is a low-cost AUV designed specifically for construction with cement blocks. Its hardware and software are co-designed to achieve robust assembly while keeping complexity low.
### _Error correcting cement blocks_
Our AUV builds structures using a novel two-component process in which layers of error correcting cone inserts provide passive error correction when inserted between layers of standard, commercially available rectangular cement blocks. To facilitate sliding error correction, we ground slight chamfers into the sides of the internal holes of the cement blocks. The cone inserts weigh \(3.9\,\mathrm{Kg}\) (\(3.2\,\mathrm{Kg}\) in water) and the rectangular blocks weigh \(12.9\,\mathrm{kg}\) (\(9.5\,\mathrm{Kg}\) in water). Figure 2 shows the cones and blocks.
The cone inserts are made of two part molded cement. The top half (yellow in Figure 2) is made of a 30% by volume perlite mix and the bottom half (orange in Figure 2) is embedded with bolts in the tip. This creates an asymmetric
weighting that helps the cones fall through the water in the proper orientation. The base of the cones is slightly wider than the cement blocks, which allows them to be easily grasped when resting in the cement blocks.
Based on the CAD design, the cones can correct for up to \(5\,\mathrm{cm}\) of position error on the y-axis and \(2.5\,\mathrm{cm}\) of error on the x-axis where the y-axis is along the long dimension of the block and the x-axis is along the shorter side looking down.
### _Manipulator_
The construction AUV is fitted with a purpose designed one degree-of-freedom (DOF) manipulator. The primary linkage of the manipulator is inspired by centuries-old stone grabber designs in which a pair of jaws exploit the weight of the item they grasp to draw the claw more strongly closed.
To drive the jaws of the manipulator, a high power, depth rated servo1 drives a lead screw nut which is forced against thrust bearings between two spaced plates (blue in Figure 3). Enough space is left between the two plates that the nut can travel without spinning, allowing gravity to draw the jaws closed. When the manipulator reaches the end of its motion, a relay is switched off to prevent the stalled servo from drawing additional current.
Footnote 1: [https://www.bluetrailengineering.com/servos](https://www.bluetrailengineering.com/servos)
When opened, the shape of the manipulator forms a triangle with two end stop plates (horizontal white plates in Figure 3). The \(26\,\mathrm{cm}\) wide triangular opening can be forced down on an object, pushing the end stop plates against its flat upper surface to correct error along the z-axis. The triangle opens wide enough that about \(6\,\mathrm{cm}\) of error along the block's x-axis can be tolerated.
The end stop plates are shaped to fit around the error correcting cones, allowing improved error correction. The end stop plates can accept up to \(6\,\mathrm{cm}\) of error on the x-axis and \(7\,\mathrm{cm}\) of error on the y-axis and still slide on the proper surface of the cone. See Figure 6 for an example of how the end stop plates are used. The manipulator is not capable of correcting grasp error for the cement blocks along their y-axis. This problem is mitigated by the high error tolerance of the cones on that axis.
### _Active Ballasting_
To allow our AUV to manipulate the heavy cement blocks without stressing its electronics, we designed a simple active ballasting system that offsets large amounts of weight using compressed air. To lift a single cement block weighing \(9.5\,\mathrm{kg}\) in water with thrusters alone, the vehicle must pull about 470 watts from its 230 watt-hour battery. At this rate, the vehicle would be able to operate for about thirty minutes. This would be impractical for building structures of any size.
Servo-actuated inlet and outlet valves allow the AUV to release compressed air from a 3-liter SCUBA tank
Fig. 4: Active ballasting system.
Fig. 3: CAD rendering of error correcting 1DOF manipulator.
Fig. 2: Interlocking cone inserts (yellow and orange) and rectangular cement block.
pressurized at \(3000\,\mathrm{PSI}\). The SCUBA tank stores enough compressed air to offset roughly \(600\,\mathrm{kg}\) of water at 5 meters deep. Four ballast chambers made of four inch PVC pipe store air at the ambient pressure to increase the vehicle's buoyancy. The ballast chambers are oriented vertically to limit sloshing when partially filled and are spaced far apart to limit the pendulum effect of carrying a large mass below the AUV. Figure 4 shows the components of the system.
To change its buoyancy, the AUV translates a scalar tank level, \(b\in[0.0,1.0]\), into a fixed downward thrust. \(b=0.0\) corresponds to enough thrust to lift the block without buoyancy and \(b=1.0\) corresponds to nearly no assistance from the thrusters.
As the pressure in the SCUBA tank decreases, so does the flow rate of air into the ballast chambers. To give \(b\) a uniform meaning as pressure decreases, we use a pulsing strategy to release air into the ballast chambers. After grasping a block, the air inlet valve is pulsed until the vehicle ascends with the combined force of positive buoyancy and thrusters.
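A minimal sketch of this pulsed fill behaviour is given below. The AUV interface (open_inlet_valve, close_inlet_valve, vertical_velocity) and the timing constants are hypothetical placeholders, not the vehicle's actual driver API.

```python
import time

# Sketch of the pulsed ballast-fill behaviour described above.  The AUV
# interface (open_inlet_valve, close_inlet_valve, vertical_velocity) and the
# timing constants are hypothetical placeholders, not the real driver API.

PULSE_OPEN_S = 0.5    # inlet valve open time per pulse (assumed)
PULSE_SETTLE_S = 1.0  # settling time before re-checking the vehicle's motion (assumed)

def fill_ballast_until_ascending(auv, max_pulses=40):
    """Pulse the SCUBA inlet valve until positive buoyancy plus thrust makes the
    vehicle ascend, so the ballast level keeps a uniform meaning as tank pressure drops."""
    for _ in range(max_pulses):
        auv.open_inlet_valve()
        time.sleep(PULSE_OPEN_S)
        auv.close_inlet_valve()
        time.sleep(PULSE_SETTLE_S)
        if auv.vertical_velocity() > 0.0:  # vehicle has started to rise
            return True
    return False                           # safety cut-off after too many pulses
```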
### _Construction process_
The construction process is encoded as a set of behaviors paired with way points specified in a global coordinate frame. Blocks and cones are grasped from known global coordinates and placed on a foundation of half cones which guide the first layer of blocks. The construction area used in our experiments is shown in Figure 5. Each waypoint sets a goal location for a mixed set of PID controllers that manage the vehicle's position and rotation.
The AUV exploits its relatively small weight compared to the blocks and cones in water using a "plunging grasp" behavior. Because the cement blocks are heavy relative to the AUV, it would require large grasp forces and complex counter-steering to force the blocks to comply with the AUV's manipulator while it grasps. Instead, we allow the AUV to comply with what it grasps. This setup allows a simple upward thrust combined with the strong closing action of the jaws to bring the AUV into a reliable alignment with the object it grasps.
After positioning above its target, the AUV disengages its PID controllers for all but its z-axis. After disengaging the controllers, the vehicle turns on a fixed upward thrust. It forces the end stop plates against the upper surface of its target while the manipulator is fully open. The tip of the cones slides into the hole in the end stop plate, adding extra error correction. The end stop plates are made of a slippery plastic material, allowing the vehicle to freely rotate on its yaw axis. As the jaws close, the vehicle is rotated to align with the block or cone. After the manipulator is fully closed, the PID controllers are re-engaged and the vehicle begins a buoyancy change behavior as described in III-C. Figure 6 shows the plunging grasp behavior as the end stop plates slide along the top surface of both a cone and a block.
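The grasp sequence can be summarized in the following Python-style outline; every controller and manipulator call here (disable_pid, set_vertical_thrust, close_jaws, and so on) is a hypothetical placeholder standing in for the AUV's real software interface.

```python
# Outline of the "plunging grasp" behaviour; all controller and manipulator
# calls are hypothetical placeholders for the AUV's real software interface.

def plunging_grasp(auv, upward_thrust=0.4):
    # Keep only the z-axis controller so the vehicle can comply in x, y and yaw.
    auv.disable_pid(axes=("x", "y", "yaw"))
    auv.open_jaws()

    # Apply a fixed upward thrust while the open jaws and end stop plates rest on
    # the target's top surface; the slippery plates let the vehicle slide and
    # rotate into alignment as the jaws close.
    auv.set_vertical_thrust(upward_thrust)
    auv.close_jaws(stop_on_stall=True)  # relay cuts servo current once the jaws stall

    # Restore full position control and start the buoyancy-change behaviour.
    auv.set_vertical_thrust(0.0)
    auv.enable_pid(axes=("x", "y", "yaw"))
    auv.start_buoyancy_change()
```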
For the relatively heavy blocks, the vehicle executes a "bailing release" maneuver in which the negative buoyancy of the robot-block system is exploited. The ballasts are fully emptied and the PID controllers of the vehicle are disengaged. The weight of the system draws the block down onto the cones below. The vehicle uses a slight upward thrust to speed the process and help prevent jamming of the block on the cones. The bailing release maneuver limits the forces placed on the structure as the block slides into place and allows the AUV to disengage without having to fight the excess positive buoyancy.
The cones are grasped with a higher degree of error correction than the blocks which allows a simpler strategy to place them onto the structure. To release the cones, the vehicle hovers above the structure and begins opening its manipulator. As the cone falls from the jaws of the manipulator, it releases the stored air in its ballast chambers. The cones are relatively light compared to the cement blocks, so the shock on the structure as they fall into place is not destabilizing. Instead, keeping the AUV distant from the structure as the cone is dropped allows the momentum of the cone to help fight jamming and prevents possible interference between the AUV and the structure.
## IV Scaling buoyancy enabled construction
Planning the construction of large scale structures using two resources presents a novel problem for construction planning. The energy efficiency of our thrusters is nonlinear in their speed. This means that the problem of allocating
Fig. 5: Blocks and cones arrayed in a pre-construction configuration.
Fig. 6: Plunging grasp on a block (a) and on a cone (b). The closing action of the jaws of the manipulator orients the AUV parallel with the block.
the amount of buoyancy, \(b\), for each block is nontrivial. In this section, we explore an idealized construction problem in which buoyancy is allocated to minimize a model of the amount of battery power consumed to hold the vehicle at the desired depth. We use this model to approximate the number of times the vehicle's battery must be recharged for increasingly large structures with and without buoyancy. The convex program formulation and results in Figure 7 appeared in our previous workshop paper [5].
Positive buoyancy changes consume compressed air and holding the vehicle at depth using thrusters consumes battery power. A change in buoyancy at the beginning of a motion reduces the battery cost of holding depth while carrying a block. Keeping excess positive buoyancy after releasing a block increases the battery cost of navigating to the next pickup location. For construction tasks where blocks are picked up far away from where they are placed on the structure, the cost of holding the vehicle at depth while navigating to and from the structure dominates.
If we assume that the time required to navigate between points is proportional to the distance, then the energy cost of holding the vehicle at depth over the motion is as well. In this case, a buoyancy change can be represented by a scalar that reduces the energy cost of traversing between points.
To explore the trade off between compressed air use and battery cost, we conducted an experiment in which the AUV lifted a block using varying levels of the parameter \(b\in[0.0,1.0]\) during its positive buoyancy change procedure. Figure 7 shows that the cost trade-off of changing buoyancy is roughly quadratic both while holding a block and not. We use a quadratic fit of this trade off data to define a convex program for allocating buoyancy.
Let the instantaneous cost to hold depth with a block at a buoyancy level \(b\) be \(f_{+}(b)\) and \(f_{-}(b)\) without. \(f_{+}(b)\) and \(f_{-}(b)\) are polynomial approximations as shown in Figure 7. The energy cost to transport a block \(d\) meters can then be approximated as \(f_{+}(b)d\frac{1}{v}\) with average velocity \(v\).
We can idealize the construction process as traversing a set of distances \(\hat{d}=d_{1},\dots,d_{n}\) where \(d_{i}\) is transporting a block to the structure for odd \(i\) and returning to the pallet without a block otherwise. Now, let \(\Delta=[\delta_{1},\dots,\delta_{n}]\) be the changes in buoyancy after each action. When picking up a block at position \(i\), \(\delta_{i}\) corresponds to a positive buoyancy change, and negative otherwise.
To constrain our convex program, we can use a lower triangular matrix, \(\mathbf{M}\), filled with alternating positive and negative ones to total the buoyancy level at every location. By removing the even-numbered columns from \(\mathbf{M}\) we get \(\mathbf{M}^{\prime}\), which can be used to total the amount of compressed air used. We represent the SCUBA tank capacity by the constraint \(\mathbf{M}^{\prime}\Delta\leq C\), \(C\in(0,\infty)\). The constraint \(0\leq\mathbf{M}\Delta\leq 1\) ensures that the ballast is never filled past \(b=1\) or depleted past \(b=0\).
Finally, \(\mathbf{E}(\mathbf{M}\Delta)\) can be defined to predict the total energy cost: \(E(\mathbf{M}\Delta)=\sum_{i=1}^{n}f_{i}((\mathbf{M}\Delta)_{i})\frac{d_{i}}{v}\) where \(f_{i}=f_{+}\) if \(i\) is odd and \(f_{-}\) otherwise. Because \(\mathbf{E}\) takes the form of a linear combination of convex polynomials, it is itself a convex function.
\[\begin{split}\underset{\Delta}{\text{min}}&\mathbf{E} (\mathbf{M}\Delta)\\ \text{subject to}& 0\leq\Delta\leq 1\\ &\mathbf{M}^{\prime}\Delta\leq C\\ & 0\leq\mathbf{M}\Delta\leq 1\end{split} \tag{1}\]
To solve this problem, and verify the formulation, we used the disciplined convex programming solver CVXPY [41].
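A minimal CVXPY sketch of program (1) is shown below. The quadratic functions f_+ and f_- are rough stand-ins chosen only to be consistent with the roughly 470 W thruster-only figure and the roughly 50 W prediction at b = 0.8 quoted in the text, not the actual fits of Fig. 7; the leg lengths and air budget are assumed; and M' is implemented by zeroing the venting columns of M, which yields the same cumulative air totals as removing them.

```python
import cvxpy as cp
import numpy as np

# Minimal CVXPY sketch of program (1).  Leg lengths, air budget and the
# quadratic "fits" are illustrative assumptions, not the measured data.

n = 20                                  # 10 blocks: even Python index = carrying leg
d = np.full(n, 15.0)                    # leg lengths in metres (assumed)
v = 0.5                                 # average speed in m/s
C = 50.0                                # air budget: ~50 full buoyancy changes per tank

signs = np.array([1.0 if i % 2 == 0 else -1.0 for i in range(n)])  # +: add air, -: vent
M = np.tril(np.ones((n, n))) * signs            # row i -> buoyancy level on leg i
M_air = np.tril(np.ones((n, n))) * (signs > 0)  # cumulative compressed air used

def f_plus(b):   # W to hold depth while carrying a block (rough stand-in fit)
    return 440.0 * cp.square(1.0 - b) + 30.0

def f_minus(b):  # W to hold depth with excess buoyancy and no block (assumed fit)
    return 200.0 * cp.square(b) + 30.0

delta = cp.Variable(n, nonneg=True)     # magnitudes of the buoyancy changes
b = M @ delta                           # buoyancy level b_i on each leg
energy = sum((f_plus(b[i]) if i % 2 == 0 else f_minus(b[i])) * d[i] / v
             for i in range(n))

problem = cp.Problem(cp.Minimize(energy),
                     [delta <= 1, b >= 0, b <= 1, M_air @ delta <= C])
problem.solve()
print("predicted battery energy (J):", round(problem.value, 1))
print("buoyancy per leg:", np.round(b.value, 2))
```

The same formulation extends to the wall-building scenario of Table I by replacing d with the actual leg lengths and tightening C to the available air.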
Consider the case where the AUV is tasked with building the base row of a wall using blocks from a single pallet located near one side. Assuming the vehicle moves at about 0.5 meters per second when transporting a block, and that we can use one fully charged 3 liter tank which can offset about 50 blocks, how many times must we charge the vehicle's battery to build the base row starting near the pallet and moving farther away? Table I shows the result of using our convex program for increasing lengths of the row.
These results show that adding the option to allocate buoyancy significantly reduces the number of times the vehicle's battery must be recharged throughout a construction process. The advantage of battery charges versus compressed air refills depends on the specific deployment scenario.
## V Construction Trials
To validate our system, we deployed it in an indoor swimming pool at about \(4\,\mathrm{m}\) deep. While the deployment area was controlled, periodically harsh caustics from sunlight tested our system's robustness to sub-optimal lighting.
A fiducial marker provides global position information, and a second fiducial marker rigidly fixed to an aluminum and cement foundation provides the local position information for the construction area: the location of the slots on the foundation and the pickup locations. The foundations and block pickup locations are placed on a \(2.5\times 2\) meter platform to provide a flat work surface.
\begin{table}
\begin{tabular}{|c c c|} \hline Blocks long & Charges with buoyancy & Charges without \\ \hline
10 & 0 & 0 \\
50 & 0 & 0 \\
250 & 4 & 16 \\
500 & 21 & 64 \\ \hline \end{tabular}
\end{table} TABLE I: Approximate number of times the vehicle’s \(230\,\mathrm{Wh}\) battery must be charged to place a row of blocks of increasing length.
Fig. 7: Energy cost of maintaining depth with increasing positive buoyancy while holding a block (a) and decreasing positive buoyancy without a block (b).
Our AUV completed three test structures: a seven-component column, a nine-component pyramid base, and a twelve-component pyramid. The column and pyramid are shown in Figure 8. The column shows that the robot is able to place the heavy blocks without pulling down the structure. The two pyramid-like structures mimic the internal areas of a wall and show that the error correction allows the AUV to place adjacent cement blocks without jamming.
For each manipulation, we set \(b=0.8\). The seven-piece column took about 30 minutes and used \(51\,\mathrm{Wh}\), the nine-piece pyramid base took 38 minutes and used \(50\,\mathrm{Wh}\), and the twelve-piece pyramid took 45 minutes and used \(67\,\mathrm{Wh}\). Manipulating each cinder block took about \(60\,\mathrm{PSI}\) from the onboard SCUBA tank to achieve the \(b=0.8\) ballast level, and each cone took about \(10\,\mathrm{PSI}\), out of the \(3000\,\mathrm{PSI}\) maximum pressure.
We recorded the power use of the electronics system, including onboard sensing and thrusters. On average, each placement took 2 minutes 38 seconds and consumed 1.6% of the battery. Table II shows the breakdown of the average amount of time and power used during the construction of three trial structures.
Of 40 manipulations attempted during construction trials, one failed while grasping a cinder block, another failed from a missed drop of a cone, and a third failed by missing the slots during a bailing release action. This left us with a 92.5% success rate for component manipulation.
Figure 9 shows the behaviors the AUV proceeds through to move a block from the pickup location to the structure. The polynomial approximation of the cost to hold the AUV at depth predicts that the average cost during the "transport" phase would be \(50\,\mathrm{W}\). The real value was \(81\,\mathrm{W}\). The spikes of current draw during the transit phase result from the response of the AUV's controllers as the set point is moved.
## VI Conclusions and Future Work
This paper presents a first exploration into the construction of cement blocks structures using active ballasting. Our system is the first free-floating autonomous construction system to build with cement blocks in air or water. Our system is able to build structures of up to 12 components weighing \(100\,\mathrm{Kg}\) (\(75\,\mathrm{Kg}\) in water). To improve our system towards large scale, real world utility, we plan to address the following key challenges.
**Reliance on computer vision.** To globally localize in the platform reference frame, our AUV relies on computer vision to sense fiducial markers. While this strategy works well in controlled, clear waters, using vision to sense distant targets in real bodies of water is unreliable. In future work, we will explore hybrid visual and acoustic strategies for sensing the vehicle's position.
**Sensing structure state.** Our current system has no direct sense of the state of the structure. This fundamentally limits the scale of structures the system can achieve. Even with a high probability of success for each manipulation action, probability of success for the structure rapidly becomes low without error recovery. In the next iteration of our system, we will explore ways to simply and reliably sense whether placement tasks are successful and recover if possible.
**More expressive building materials.** Our current building system works only in two dimensions. This is both due to the specific way we localize the AUV and the lack of right angled joining pieces. In future work, we plan to adapt both our AUV system and our building materials to allow right angles in the structures.
**Adapting design and control to external disturbances.** In shallow-water marine deployments, external disturbances from waves and swells buffet the AUV. Model predictive control techniques that limit the effects of waves on an AUV can help alleviate position uncertainty, but recent experimental work by Walker _et al._[42] shows up to a \(0.6\,\mathrm{m}\) root-mean-squared error when attempting to remain in a fixed position. This error is larger than the \(2.5\,\mathrm{cm}\) error correction capacity of our building components. To harden our system against real-world external disturbances, we plan to increase the acceptance area of our construction components and employ rapid release strategies to quickly place components opportunistically as the AUV's position oscillates.
\begin{table}
\begin{tabular}{|c c c|} \hline Behavior & Percent time & Percent power use \\ \hline Grasping block & 5.8\% & 9.9\% \\ Adding buoyancy & 8.1\% & 7.5\% \\ Placing block & 6.2\% & 5.2\% \\ Transporting block & 44.5\% & 47.0\% \\ Returning to pallet & 35.5\% & 30.3\% \\ \hline \end{tabular}
\end{table} TABLE II: Average ratio of time and power use for executing each behavior during construction trials.
Fig. 8: Two structures completed by our AUV weighing \(54\,\mathrm{kg}\) and \(100\,\mathrm{kg}\) (\(41\,\mathrm{kg}\) and \(75\,\mathrm{kg}\) in water).
Fig. 9: Battery power used while transitioning through the phases to place a block. |
2304.12307 | Optimization of chemical mixers design via tensor trains and quantum
computing | Chemical component design is a computationally challenging procedure that
often entails iterative numerical modeling and authentic experimental testing.
We demonstrate a novel optimization method, Tensor train Optimization
(TetraOpt), for the shape optimization of components focusing on a Y-shaped
mixer of fluids. Due to its high parallelization and more extensive global
search, TetraOpt outperforms commonly used Bayesian optimization techniques in
accuracy and runtime. Besides, our approach can be used to solve general
physical design problems and has linear complexity in the number of optimized
parameters, which is highly relevant for complex chemical components.
Furthermore, we discuss the extension of this approach to quantum computing,
which potentially yields a more efficient approach. | Nikita Belokonev, Artem Melnikov, Maninadh Podapaka, Karan Pinto, Markus Pflitsch, Michael Perelshtein | 2023-04-24T17:56:56Z | http://arxiv.org/abs/2304.12307v1 | # Optimization of chemical mixers design via tensor trains and quantum computing
###### Abstract
Chemical component design is a computationally challenging procedure that often entails iterative numerical modeling and authentic experimental testing. We demonstrate a novel optimization method, Tensor train Optimization (TetraOpt), for the shape optimization of components focusing on a Y-shaped mixer of fluids. Due to its high parallelization and more extensive global search, TetraOpt outperforms commonly used Bayesian optimization techniques in accuracy and runtime. Besides, our approach can be used to solve general physical design problems and has linear complexity in the number of optimized parameters, which is highly relevant for complex chemical components. Furthermore, we discuss the extension of this approach to quantum computing, which potentially yields a more efficient approach.
## I Introduction
The development of new reactors and components is opening up entirely new opportunities for the chemical industry to optimize the geometry of its plants and processes [1]. These opportunities include improving the plants' performance, reducing costs and decreasing their environmental impact. However, the design of various components (e.g., fluid mixers) is a computationally challenging process [2].
The underlying problem in designing these components is searching for the geometry (shape) that satisfies certain criteria; for instance, maximizing the efficiency of a chemical process or minimizing the mechanical tension in a component. Such design is usually done iteratively, combining numerical modeling with real experimental testing [3]. For complex systems, even numerical modeling takes a substantial amount of time and computational resources, so it is not feasible to perform many of these simulations. The only efficient way to solve such a problem is to sample the objective at various geometries and decide on the optimal set of geometrical parameters while keeping the number of samples as low as possible. For such high-dimensional parameter systems with multi-objective optimization, black-box optimization techniques can be very effective [4].
Black-box optimization deals with problems where the structure of the objective function and/or of the constraints defining the feasible set is unknown. For example, in our case, there is no known analytical solution for the partial differential equations that govern the dynamics. The most naive approach is to sample the objective on an equidistant grid in the parameter space; however, this scales exponentially with the number of parameters and is clearly not feasible. More advanced approaches include Bayesian optimization [5], where a probability distribution over the objective is updated after each sample and the next sampling point is provided by the optimization routine. While Bayesian optimization is widely used in academia and industry, its scaling is unclear and the structure of the algorithm does not allow for highly parallel processing or hardware acceleration, e.g., the use of Graphics Processing Units (GPUs).
In this work, we consider the tensor-train black-box optimization technique, TetraOpt (see Ref. [6] and [7]), and apply it to a specific component design, a Y-mixer used for the mixing of two fluids [8]. We introduce a set of geometrical parameters and utilize TetraOpt to find the geometry that provides the most efficient mixing of liquids. We demonstrate that such an optimization method can be implemented in parallel so that it reduces the time of the component design. This speeds up the prototyping and the full development cycle. In addition, this optimizer finds a better optimum in comparison to Bayesian optimization and requires much less work with hyperparameter tuning, since it has only three hyperparameters, which are very intuitive.
## II Problem description
The Y-mixer that is used to mix two liquids consists of two inlets that are connected symmetrically at a certain angle, guiding the liquids to a single outlet as shown in Fig. 1. In this work, we consider the mixing of water with ethanol with the property data given below.
\begin{tabular}{|l|l|l|} \hline
**Parameter** & **Ethanol** & **Water** \\ \hline Density, \(kg/m^{3}\) & 789 & 990 \\ \hline Molar weight, \(g/mol\) & 46 & 18 \\ \hline Dynamic viscosity, \(mPa\cdot s\) & 1.18 & 1.0 \\ \hline \end{tabular}
In order to simulate the flow of the liquids, we utilize an open-source Computational Fluid Dynamics (CFD) software package, OpenFoam[9], using the PyMesh utility for mesh generation [10]. The simulation is performed using the reactingFoam utility from OpenFoam. The case is simulated using a kEpsilon turbulence model [9]. The
volumetric flow rate is fixed for both liquids at 8 mL/s.
The shape of a Y-mixer includes a round section and different diameters along the tubes, as shown in Fig. 1 (left). The inlet tubes have three different diameters along their lengths, while the outlet tube has a constant diameter. For a Y-mixer with narrow channels and a long outlet tube, a detailed simulation with a fine mesh (a few million elements) can take a few hours to converge. In order to benchmark optimization methods, we simplify the shape, as shown in Fig. 1 (right), which reduces the runtime to 20 seconds. Such a simplification affects both the dynamics inside the mixer and the objective function, but without loss of generality we can use it, since our key goal is to analyze and benchmark TetraOpt.
In order to numerically characterize the quality of the mixing, we calculate the coefficient of variation (CoV) of the phase fraction of the liquids at a horizontal section, which is \(2.5\,\mathrm{mm}\) below the mixing chamber. The CoV is calculated in the following way:
\[CoV=\frac{\sigma(m_{1}/(m_{1}+m_{2}))}{\langle m_{1}/(m_{1}+m_{2})\rangle}, \tag{1}\]
where \(m_{1}(x)\), \(m_{2}(x)\) are phase fractions of the corresponding liquids on the section surface, which we obtain by solving the Navier-Stokes equations with OpenFoam. The standard deviation of \(x\) is \(\sigma(x)\) and the mean value of \(x\) is \(\langle x\rangle\). For homogeneous mixing, the coefficient of variation is close to zero (\(CoV\to 0\)), which constitutes the optimization problem that we solve here.
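For concreteness, evaluating Eq. (1) from sampled phase fractions takes only a few lines of NumPy; the arrays below are stand-ins for the values OpenFOAM returns on the cutting plane.

```python
import numpy as np

# Evaluating Eq. (1) from sampled phase fractions; m1 and m2 stand in for the
# values OpenFOAM returns on the cutting plane 2.5 mm below the mixing chamber.

def coefficient_of_variation(m1, m2):
    fraction = m1 / (m1 + m2)
    return np.std(fraction) / np.mean(fraction)

# A perfectly homogeneous section gives CoV -> 0.
m1 = np.full(1000, 0.5)
m2 = np.full(1000, 0.5)
print(coefficient_of_variation(m1, m2))  # 0.0
```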
Here, we consider a set of four parameters to optimize:
1. **y-angle**, the angle between the inlet tube and the outlet tube (from \(0^{\circ}\) to \(30^{\circ}\));
2. **connection radius**, the effective radius of the closest to the outlet part of the inlet tube (from 0.2 to 0.5 mm);
3. **connection length**, the length of the closest to the outlet part of the inlet tube (from 0.5 to 1.5 mm);
4. **inlet radius**, the radius of the inlet (from 0.2 to 0.6 mm).
Due to the nature of the problem, it is impossible to write down the CoV as an explicit function of these four parameters, since its evaluation involves solving the Navier-Stokes equations. Therefore, the problem is considered a black box: it is possible to sample the CoV at arbitrary parameter values, with the goal of finding the minimum of the CoV as fast as possible.
## III Bayesian optimization
One of the most common ways to solve black-box optimization problems is to use Bayesian optimization [11]. It is successfully used in hyperparameter tuning tasks in machine learning [12; 13] and in shape optimization in CFD problems [14]. The only assumption we make is that the cost function is continuous and its value can be estimated at a given point in a specified search area, which is true for the given shape optimization problem.
The Bayesian algorithm works in an iterative way - it leverages obtained information about the function (such as the function values at several points) to approximate it. At each iteration, it provides a new point at which the function should be estimated and updates the approximation using the new point. The process is repeated until it converges.
In the beginning, the algorithm receives the initial function values at several points. A single iteration of the algorithm consists of three steps:
1. Based on all the available data about previously estimated points, an approximation is built (a surrogate model), which is usually done via Gaussian Processes [12]. A Gaussian process (GP) is defined by its mean function \(m:\mathbf{x}\rightarrow\mathbb{R}\) and its covariance function \(k:\mathbf{x}\times\mathbf{x}\rightarrow\mathbb{R}\), which we denote as \[f(\mathbf{x})\sim\mathrm{GP}\left(m(\mathbf{x}),k\left(\mathbf{x},\mathbf{x}^ {\prime}\right)\right).\] In the one-dimensional case, it finds the mean \(m(x)\) and the dispersion \(\sigma(x)\) functions as shown in Fig. 2 (top).
2. The second step is to build the acquisition function, which shows how likely a point should be chosen as the next estimate in terms of the exploration-exploitation trade-off. Exploitation means sampling at points where the surrogate model predicts a high objective and exploration means sampling at locations where the prediction uncertainty is high. The simplest acquisition function in the case of maximization is the following: \[u\left(\mathbf{x}\mid\mathcal{D}_{1:t-1}\right)=m(\mathbf{x})+\sigma(\mathbf{x }),\] which corresponds to the upper blue covering line in Fig. 2 (bottom). However, the more popular ones are the maximum probability of improvement (MPI), expected improvement (EI) and upper confidence bound (UCB) [5]. Here \(\mathcal{D}_{1:t-1}=\left(\mathbf{x}_{1},y_{1}\right),\ldots,\left(\mathbf{x}_ {t-1},y_{t-1}\right)\) are the \(t-1\) samples drawn from \(f\) so far.
3. The next sampling point \(\mathbf{x}_{t}\) is determined according to \(\mathbf{x}_{t}=\operatorname*{argmax}_{\mathbf{x}}u\left(\mathbf{x}\mid \mathcal{D}_{1:t-1}\right)\). Then the function value at this point is obtained as \(y_{t}=f\left(\mathbf{x}_{t}\right)\). Finally, the sample is added to previous samples \(\mathcal{D}_{1:t}=\mathcal{D}_{1:t-1},\left(\mathbf{x}_{t},y_{t}\right)\) and the GP is updated.
The main disadvantages of the algorithm are that it is poorly parallelizable and struggles to work with non-continuous variables. The poor parallelization is due to
the fact that the algorithm works sequentially: at each step, the cost function is estimated at only a single point. Even though multiple runs of the algorithm can be performed in parallel, this usually fails to significantly improve the efficiency [11].
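A schematic one-dimensional version of this loop, using a Gaussian process surrogate and the simple m(x) + sigma(x) acquisition, is sketched below; it is a generic illustration with an artificial objective, not the optimizer of Ref. [21].

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Schematic 1D Bayesian optimization loop (maximization) with a GP surrogate
# and the simple m(x) + sigma(x) acquisition; the objective is artificial.

def objective(x):                        # stand-in for an expensive CFD evaluation
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

grid = np.linspace(-1.0, 2.0, 500).reshape(-1, 1)
X = np.array([[-0.9], [1.1]])            # initial samples
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=0.3), normalize_y=True)
for _ in range(15):
    gp.fit(X, y)                         # step 1: surrogate model
    mean, std = gp.predict(grid, return_std=True)
    acquisition = mean + std             # step 2: exploration/exploitation trade-off
    x_next = grid[np.argmax(acquisition)].reshape(1, -1)
    X = np.vstack([X, x_next])           # step 3: evaluate and add the new sample
    y = np.append(y, objective(x_next).ravel())

print("best x:", X[np.argmax(y), 0], "best value:", y.max())
```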
## IV Tensor train optimization
Here, we propose to use a completely different black-box optimization algorithm, which is based on Tensor Train (TT) decompositions [15; 16]. Tensor Trains [17] represent multi-dimensional tensors in a compressed form as a product of small tensors:
\[A\left(i_{1},\ldots,i_{d}\right)=\sum_{\alpha_{0},\ldots,\alpha_{d-1},\alpha_{d}}G_{1}\left(\alpha_{0},i_{1},\alpha_{1}\right)G_{2}\left(\alpha_{1},i_{2},\alpha_{2}\right)\cdots G_{d}\left(\alpha_{d-1},i_{d},\alpha_{d}\right),\]
where \(G_{j}\) is the 3-dimensional tensor called the TT-core. The indices \(i_{j}\) run through values from 1 to \(n\). The main characteristic of such a representation is the rank, \(r\), which is equal to the maximum size among the indices \(\alpha_{0},\alpha_{1},\ldots,\alpha_{d}\) and which expresses the correlations between variables/indices.
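To make the format concrete, the following NumPy sketch compresses the value tensor of a separable three-dimensional function with a plain TT-SVD and checks the reconstruction error; note that TetraOpt itself relies on the sampling-based TT-cross routine described later in this section rather than on a full-tensor SVD.

```python
import numpy as np

# Compressing the value tensor of a separable 3D function with a plain TT-SVD,
# just to illustrate the TT format above.  TetraOpt itself uses the sampling-
# based TT-cross routine instead of a full-tensor SVD.

def tt_svd(tensor, max_rank):
    """Return TT-cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims, cores, r_prev = tensor.shape, [], 1
    mat = tensor.copy()
    for k in range(len(dims) - 1):
        mat = mat.reshape(r_prev * dims[k], -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        mat = np.diag(s[:r]) @ vt[:r]
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract the cores back into the full tensor."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

n = 32
x = np.linspace(0.0, 1.0, n)
grid = np.exp(-np.add.outer(np.add.outer(x, 2 * x), 3 * x))  # separable -> TT-rank 1
cores = tt_svd(grid, max_rank=2)
print("core shapes:", [c.shape for c in cores])
print("max reconstruction error:", np.abs(tt_full(cores) - grid).max())
```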
As in a grid search, the optimization algorithm (TetraOpt) requires discretization of the search space on a uniform grid. Let \(d\) be the number of variables and \(n\) be the grid size in one dimension. However, unlike grid search, TetraOpt does not estimate the cost function at all \(n^{d}\) points of the grid but instead dynamically provides the next set of evaluating points in the search space based on the knowledge accumulated during all previous evaluations, as in Bayesian optimization. The main advantage of this algorithm in comparison to Bayesian optimization is its ability to be run in a parallel way and provide a better ("more global") search for the optimum.
The main hyperparameters of the TetraOpt algorithm are the _number of variables_, \(d\), the _grid-size_ in one dimension, \(n\), and the _rank_, \(r\), with which we try to approximate a tensor of discretized function values via tensor train decomposition. The larger \(r\) is, the better the optimum is but this requires more time. The last hyperparameter is the _number of iterations_, \(I\). TetraOpt requires \(O(Idnr^{2})\) function calls and performs \(O(Idnr^{3})\) calculation operations. Since black-box functions are usually hard to estimate, the runtime of the algorithm can be neglected in comparison to the time of the function calls.
Figure 1: Left: The original Y-mixer considered for the mixing of two fluids. Right: The simplified geometry of the Y-mixer with the parameters considered for the optimization.
Figure 2: Bayesian optimization. The blue curve represents the target function, the red dots depict the evaluated points, the dashed line shows the mean function \(m(x)\) and the blue region covers the \([m(x)-\sigma(x),m(x)+\sigma(x)]\) area. The acquisition function is plotted below.
TetraOpt is built upon the cross-approximation technique (TT-cross) [18], which enables the approximation of a given tensor in the Tensor Train format by referencing only a subset of its elements. The TT-cross algorithm in turn is based on the MaxVol routine [19], which finds an \(r\times r\) submatrix of maximum volume (i.e. a submatrix with a maximum determinant module) in an \(n\times r\) matrix. It can be shown that the maximum element of a submatrix with maximal volume is a good approximation of the maximal element of the whole matrix [19]:
\[\hat{J}_{\text{max}}\cdot r^{2}\geq J_{\text{max}},\]
where, in terms of modulus, \(\hat{J}_{\text{max}}\) is the maximal element of an \(r\times r\) submatrix with maximal volume and \(J_{\text{max}}\) is the maximal element of the whole matrix. For more intuition and the technical aspects of tensor-based algorithms for optimization, we refer the reader to Ref. [15].
The overall scheme of the TetraOpt workflow can be found in Fig. 3: TetraOpt iteratively requires estimating the values of the optimization function at several grid points (see Fig. 4), which we count using the OpenFoam simulation. In terms of finding the optimum, the algorithm remembers the best points estimated during the TT-cross algorithm and updates them if superior points are found. In addition, the algorithm is parallelizable because, at each step, it requires an estimation of the cost function at \(nr^{2}\) points, which can be done in parallel. However, there is no rigorous proof that it finds a better optimum than Bayesian optimization - each task requires a separate analysis.
## V Results
In this section, the results of the shape optimization using the Tensor Train optimization technique and its comparison with Bayesian optimization are presented. The grid-search was utilized to find a global optima but, of course, such a method is computationally inefficient. All computations are performed using the QMware system [20], including CFD simulations and optimization. The CoV dependency on the given variables is complicated and non-obvious near the optimal value, as can be seen in Fig. 5.
Based on the simplified CFD model, a single simulation of OpenFOAM takes 16 seconds. As was stated, the TetraOpt algorithm is highly parallelizable, thus we run multiple CFD simulations in parallel so the effective runtime was reduced to 1.1 seconds per simulation, as shown in Fig. 6(a).
The results of the comparison between TetraOpt and the Bayesian optimization are shown in Fig. 6(b), where
Figure 4: The sequence of steps in TetraOpt using a 3D function example. (a) Demonstration of the function discretization on a uniform 3D-grid; the figures (b)-(f) indicate the points in which the function is estimated during one iteration of the TetraOpt algorithm. The next step is shown in a bright color and the previous steps are transparent.
Figure 3: Overall scheme of shape-optimization via TetraOpt. Firstly, TetraOpt generates a list of geometrical parameter sets at which the simulation should be performed. Then, the meshes for corresponding geometries are generated in Python using the PyMesh utility. After that, at each set of parameters, the simulations are run in parallel utilizing the previously generated meshes and OpenFoam (this is the most computationally difficult step). The cost values are calculated from each simulation using Python and inner OpenFoam functions. Then the data is passed to TetraOpt and the cycle is repeated until convergence.
we average the results by running both algorithms 10 times (solid curves). We compare two optimizers in terms of the runtime and demonstrate that TetraOpt converges faster to a more optimal value - on average TetraOpt obtains an approximately 2.33 times better optimum at the end of optimization than Bayesian optimization: 0.051 vs 0.12. Moreover, the tensor-based optimizer finds the best possible shape with a 0.027 cost function value (0% gap), while the Bayesian approach is able to find the shape with a 0.059 cost value (\(\sim\) 118% gap).
The behavior of the curves in Fig. 6(b) has a stepwise character due to the fact that CFD simulations are iteratively performed, which takes a considerable time (plateaus), and the optimum is updated while simulations are done (drops). The sharp jumps of the TetraOpt curve (e.g., at 75 and 125 seconds) are due to the fact that a large number of points were simultaneously estimated, which significantly updated the optimum.
The optima found are shown in Table 1. Remarkably, despite the fact that TetraOpt requires an order of magnitude more function calls (number of carried-out simulations), it finishes the optimization in less time due to parallelization. It is worth noting that the grid was chosen such that the minimal value on the grid was close to the minimal value over the whole domain, since otherwise Bayesian optimization, which is not restricted to the grid, would have an advantage.
In order to maximize the performance of both algorithms, we tune the hyperparameters. The following values are used for TetraOpt and Bayesian optimization, respectively:
For the Bayesian optimization, the parameter \(\kappa\), which controls the weight of the exploration term in the acquisition function, is set to 2.576 and kept constant at each iteration. The Bayesian optimizer is taken from an open-source library [21].
Fig. 7 shows a visual comparison between the flow behaviors in optimized and non-optimized geometries. Figs. 7(a) and 7(b) represent the water fraction: while frame (a) shows overall profile of water fraction along
Figure 5: The cost function landscape as a function of two parameters: connection length and y-angle. The darker the colour, the better the mixing is. The fixed coordinates are the inlet radius = 0.275 mm and the connection radius = 0.3 mm. We sample 100 points on each axis.
Figure 6: (a) Execution time per simulation dependent on the number of parallel simulations. The effective time of each simulation time decreases but after 50 parallel simulations, it does not change (or even increases) due to imperfect parallelization methods and hardware. (b) Comparison between TetraOpt and Bayesian optimization. TetraOpt finds, on average, an approximately 2.33 times better optimum in the same time than the Bayesian approach. Here, TetraOpt performs CFD simulations in parallel, while Bayesian optimization requires only running one simulation at each iteration. Shadowed areas denote the variations in the behavior of the optimizers during 10 launches: dotted lines represent the best and worst scenarios for each optimizer. The red line represents the minimum found by a grid search.
the walls, frame (b) shows the water fraction distribution on the outlet section where the cost function is evaluated. It is clear that the water fraction in the optimized mixer case is more uniform and the absolute value is significantly smaller on average in comparison to the non-optimized mixer case. Figs. 7(c) and 7(d) show the pressure and absolute magnitudes of the velocity.
## VI Extension to quantum computing
Remarkably, a slightly modified version of TetraOpt can be improved with a quantum algorithmic part. As stated in the description of the optimizer (Sec. IV), the algorithm tries to approximate the tensor of function values on a grid but it obtains the optimum only as a by-product of the approximation algorithm. Therefore, the next step is to use the approximation to obtain new, even better, optima.
Let us denote a tensor of cost-function values on the uniform grid as \(x\) (for later convenience, it is worth mentioning that we can always reshape \(x\) into a vector), which we try to approximate via a Tensor Train \(x_{TT}\) of rank \(r\) using the TT-cross algorithm. The optimization task can now be redefined as the problem of finding the maximum (minimum) element of the tensor \(x\). Here, we assume that \(x_{TT}\) approximates \(x\) with good precision, or at least that its optimal values are close to the optima of \(x\).
Thus, we can implement the power method [22]. That is, we find the maximal element of \(x^{n}\) instead of \(x\), which is much easier since the largest element in \(x^{n}\) is much larger, in relative terms, compared to the largest element of \(x\). Since we assume that the optima of \(x_{TT}\) are close to the optima of \(x\) and since it is much faster to operate with \(x_{TT}\) (for example, the squaring operation \(x_{TT}^{2}\) costs \(O(dnr^{4})\) in TT format as compared to the complexity \(O(n^{d})\) for classical squaring \(x^{2}\)), the idea is to realize the power method via Tensor Trains. However, the problem with this approach is that the ranks increase dramatically as \(r^{n}\), leading to a significant increase in complexity. As a result, implementing this algorithm on a classical computer may not be as efficient as on a quantum computer, which does not have any complexity
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**N** & **Parameter** & **TetraOpt** & **BayesOpt** \\ \hline
1 & Y-angle, angle & 26.1\({}^{\circ}\) & 7.5\({}^{\circ}\) \\ \hline
2 & Inlet radius, mm & 0.27 & 0.275 \\ \hline
3 & Connection length, mm & 0.85 & 0.75 \\ \hline
4 & Connection radius, mm & 0.28 & 0.3 \\ \hline \hline \multicolumn{3}{|l|}{**Resulting values**} & **TetraOpt** & **BayesOpt** \\ \hline Cost function, a.u. & 0.0274 & 0.0593 \\ \hline Average fun. calls & 228.5 & 35 \\ \hline Runtime, s & 325 & 440 \\ \hline \end{tabular}
\end{table}
Table 1: (1-4) Optimal parameters of the Y-mixer found by TetraOpt and Bayesian optimization. We show the final cost function value, the total number of cost function calls (number of performed simulations) averaged by 10 optimization runs and the total runtime of the optimization launched using QMware.
Figure 7: The top row shows the data before optimization and the bottom row displays the data after optimization. (a) The water fraction with the cutting plane where the cost function is calculated (in green). (b) The water fraction at the specified cutting plane. In the non-optimized case, the value of cost function (CoV) is 0.56, while after the optimization it is 0.05. The color represents the phase fraction of the water at each point. (c) The pressure before and after the optimization and (d) velocity distribution before and after the optimization.
dependence on the ranks. To use a quantum computer for this purpose, we only need efficient preparation of \(x_{TT}\) and efficient multiplication by \(x_{TT}\). Fortunately, several algorithms exist for encoding Tensor Trains into a quantum computer [23, 24, 25, 26]. Thus, a quantum implementation of this algorithm will be able to perform optimization in a more efficient way.
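The amplification idea behind the power method can be illustrated classically in a few lines; the toy example below forms the full tensor explicitly, which is exactly what the TT (and, prospectively, quantum) implementation is meant to avoid.

```python
import numpy as np

# Classical toy version of the amplification behind the power method: repeated
# element-wise squaring (with renormalization) concentrates the tensor on its
# maximal element.  Here the full tensor is formed explicitly, which is exactly
# what the TT / quantum implementation is meant to avoid.

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=4096)   # flattened tensor of cost-function values
p = x / np.linalg.norm(x)
for _ in range(15):                    # after k steps, p is proportional to x**(2**k)
    p = p ** 2
    p /= np.linalg.norm(p)

print("true argmax     :", np.argmax(x))
print("argmax after pow:", np.argmax(p), "  weight on it:", round(float(p.max()), 3))
```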
## VII Conclusion
In this work, we utilized TetraOpt to solve a shape optimization problem of a Y-mixer used for the mixing of two fluids. This problem is considered a black-box optimization, i.e. there is no explicit expression for the cost function and its estimation at a single point requires significant computational resources. We demonstrated that compared to Bayesian optimization, TetraOpt finds a much better optimum in less time. We concluded that such an improvement comes from the fact that TetraOpt is a parallel technique and performs better exploration during the optimization compared to Bayesian methods.
It is worth emphasizing that the application of this method is not limited to the task at hand; it can be applied to any optimization problem, since it requires only objective-function values evaluated at points of the search domain. Moreover, the method is straightforward to use because it has only three hyperparameters with intuitive settings.
Furthermore, we outlined an extension of this method to quantum hardware, namely the realization of the power method via quantum circuits, which could make the optimization more efficient. The implementation of the quantum part and its application to more complex problems and geometries is the subject of future work.
|
2307.04035 | A novel framework for Shot number minimization in Quantum Variational
Algorithms | Variational Quantum Algorithms (VQAs) have gained significant attention as a
potential solution for various quantum computing applications in the near term.
However, implementing these algorithms on quantum devices often necessitates a
substantial number of measurements, resulting in time-consuming and
resource-intensive processes. This paper presents a generalized framework for
optimization algorithms aiming to reduce the number of shot evaluations in
VQAs. The proposed framework combines an estimator and an optimizer. We
investigate two specific case studies within this framework. In the first case,
we pair a sample mean estimator with a simulated annealing optimizer, while in
the second case, we combine a recursive estimator with a gradient descent
optimizer. In both instances, we demonstrate that our proposed approach yields
notable performance enhancements compared to conventional methods. | Seyed Sajad Kahani, Amin Nobakhti | 2023-07-08T19:14:01Z | http://arxiv.org/abs/2307.04035v1 | # A novel framework for Shot number minimization in Quantum Variational Algorithms
###### Abstract
Variational Quantum Algorithms (VQAs) have gained significant attention as a potential solution for various quantum computing applications in the near term. However, implementing these algorithms on quantum devices often necessitates a substantial number of measurements, resulting in time-consuming and resource-intensive processes. This paper presents a generalized framework for optimization algorithms aiming to reduce the number of shot evaluations in VQAs. The proposed framework combines an estimator and an optimizer. We investigate two specific case studies within this framework. In the first case, we pair a sample mean estimator with a simulated annealing optimizer, while in the second case, we combine a recursive estimator with a gradient descent optimizer. In both instances, we demonstrate that our proposed approach yields notable performance enhancements compared to conventional methods.
## 1 Introduction
Variational Quantum Algorithms [1] have emerged as a promising solution for near-term applications of quantum computers. These versatile algorithms offer the capability to tackle a diverse range of complex problems, including but not limited to quantum chemistry [12], combinatorial optimization [2], and machine learning [15]. Despite their potential for near-term applications, variational algorithms often require a large number of measurements. This makes implementation of those algorithms on quantum devices extremely time and resource-intensive [4, 3], even when performed on shallow and low-width circuits.
Various research efforts have sought to employ optimizers to reduce the computational burden of VQAs. These include the application of both existing and novel optimization techniques [7; 11; 8]. Such approaches are related to a well-studied and rich literature on the optimization of noisy functions in various fields such as signal processing and control theory (see for example [6] and [10]). Sweke et al.[16] introduced a quantum stochastic gradient descent optimizer that relies on a gradient estimator with a limited number of shots. They proved that, under some simplifying assumptions, this approach converges to the optimal values. However, the convergence rate depends on the error of the estimator. In another study, Polloreno et al.[13] studied the robustness of a double simulated annealing optimizer against inherent quantum noise, even when only a few shots are available and the noise is noticeable.
Another approach to solve this problem has been to employ a nested optimization framework in which a high-level optimizer is used to improve the performance of a low-level optimizer by tuning its parameters. For example, Tamiya et al.[17] employed Bayesian optimization on stochastic measurement results to determine the optimal step size through a line search. Inspired by stochastic gradient descent, this method incorporates an adaptive shot technique to reduce the number of measurements required during the line search. Similarly, Mueller et al.[9] proposed a technique to identify a suitable initial value set using Gaussian Processes. Subsequently, they utilized ImFil as the optimizer in their approach.
In this work we propose a generalized framework for optimization algorithms which seek to reduce shot-number evaluations in VQAs. The key performance-improving novelties in our approach are twofold. First, we devise a framework that incorporates powerful estimation techniques to achieve near-true parameter estimates with far fewer data samples. Second, by utilizing a sensitivity analysis of the optimizers, we ensure that the error level of the estimators (and, as a result, the number of shots) is suitably chosen. This is made possible by breaking the problem into two separate estimation and optimization problems, and deriving theoretical results on the sufficient number of shots. We explore two specific case studies within this framework. For the first case, a sample mean estimator is paired with a simulated annealing optimizer, and in the second case, a recursive estimator is paired with a gradient descent optimizer.
The remainder of the paper is organized as follows. In Section 2, background material on quantum variational circuits and estimation theory is presented. In Section 3, we develop the proposed error control strategy and discuss the resulting optimization framework. In Section 4, we present two case studies together with numerical results. Finally, in Section 5, we conclude our work.
## 2 Basic Concepts
### Quantum Variational Algorithms
In the theory of quantum variational algorithms, the required data is the expected value of an observable \(O\) over the state generated by applying the parameterized quantum circuit \(U(\mathbf{\theta})\) to the initial state \(\left|0\right\rangle\). This value is passed to a cost function \(\mathcal{C}\), which is minimized with respect to the parameters \(\mathbf{\theta}\in\mathbb{R}^{m}\). Accordingly, the class of algorithms such as VQE, QAOA and QNN can be formulated as [1],
\[\mathbf{\theta}^{*}=\min_{\mathbf{\theta}\in\mathbb{R}^{m}}\mathcal{C}\big{(}\left\langle 0 \right|U(\mathbf{\theta})^{\dagger}OU(\mathbf{\theta})\left|0\right\rangle\big{)}. \tag{1}\]
Specific details of these algorithms are available in [1]. Here we would like to focus on the underlying operation of these algorithms. Let,
\[f^{U,O}(\mathbf{\theta})=\left\langle 0\right|U(\mathbf{\theta})^{\dagger}OU(\mathbf{ \theta})\left|0\right\rangle, \tag{2}\]
in which \(U\) and \(O\) may be omitted when the discussion does not depend on the specific choice of \(U\) and \(O\). One of the simplest and most widely used parameter-shift rules for computing the derivatives of \(f\) is given in Lemma 1.
**Lemma 1** (Parameter-shift rule [18]).: _Under the assumption that the dependence of \(f\) on each parameter \(\mathbf{\theta}_{k}\) is of the form \(e^{\mathbf{i}\mathbf{\theta}_{k}P_{k}}\), where \(P_{k}\) is a Pauli operator, we have,_
\[\partial_{k}f(\mathbf{\theta})=\frac{f(\mathbf{\theta}+\mathbf{\hat{e}}_{k}\pi/2)-f( \mathbf{\theta}-\mathbf{\hat{e}}_{k}\pi/2)}{2}. \tag{3}\]
Here \(\partial_{k}\) denotes \(\frac{\partial}{\partial\mathbf{\theta}_{k}}\) and \(\mathbf{\hat{e}}_{k}\) is the vector with \(1\) in the \(k\)-th position and \(0\) elsewhere. Lemma 1 is not only useful for calculating the derivative of \(f\); it can also be used to bound higher derivatives of \(f\), as shown in Lemma 2.
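As a quick numerical illustration, the following sketch checks the parameter-shift rule for a single-qubit circuit \(U(\theta)=R_{x}(\theta)\) and observable \(O=Z\) (the same setting as the benchmark Problem 2 below); the test angle is an arbitrary choice.

```
import numpy as np

# Numerical check of the parameter-shift rule for f(theta) = <0| R_x(theta)^dag Z R_x(theta) |0>.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def f(theta):
    Rx = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X   # R_x(theta) = exp(-i theta X / 2)
    psi = Rx @ np.array([1, 0], dtype=complex)
    return float(np.real(psi.conj() @ Z @ psi))                       # equals cos(theta)

theta = 0.7
shift_rule = (f(theta + np.pi / 2) - f(theta - np.pi / 2)) / 2
finite_diff = (f(theta + 1e-6) - f(theta - 1e-6)) / 2e-6
print(shift_rule, finite_diff)   # both approximate d/dtheta cos(theta) = -sin(theta)
```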
**Lemma 2**.: _For any \(\mathbf{\theta}\in\mathbb{R}^{m}\), we have,_
\[\left\|\mathrm{Hess}f\right\|_{2}\leq m\|O\|_{2}. \tag{4}\]
Proof.: From the definition we know that \(|f|\leq\|O\|_{2}\) for all \(\mathbf{\theta}\in\mathbb{R}^{m}\). Applying the parameter-shift rule of Lemma 1 twice, for any \(i\) and \(j\) there always exist values of \(\mathbf{\theta}_{1},\mathbf{\theta}_{2},\mathbf{\theta}_{3},\mathbf{\theta}_{4}\) for which,
\[\mathrm{Hess}f_{ij}=\frac{f(\mathbf{\theta}_{1})-f(\mathbf{\theta}_{2})-f(\mathbf{\theta} _{3})+f(\mathbf{\theta}_{4})}{4}\leq\left\|O\right\|_{2}. \tag{5}\]
Accordingly, since \(\mathrm{Hess}f\) is an \(m\times m\) matrix whose entries are all bounded by \(\|O\|_{2}\), its spectral norm is bounded by its Frobenius norm,

\[\left\|\mathrm{Hess}f\right\|_{2}\leq\left\|\mathrm{Hess}f\right\|_{F}\leq m\|O\|_{2}. \tag{6}\]
### Estimation and Error Analysis
Contrary to the simple definition of \(f^{U,O}\), evaluating such an expected value at each sample point may involve measurements with respect to \(\ell\) different bases. Accordingly, the observable \(O\) is decomposed into \(\ell\) observables, each of which is diagonal in a different basis,
\[O=\sum_{j=1}^{\ell}V_{j}^{\dagger}D_{j}V_{j}. \tag{7}\]
For each \(j\), it is necessary to perform \(r_{j}\) repeated measurements on a quantum circuit. The \(l\)-th (out of \(r_{j}\)) measurement outcome is considered a sample from a random variable \(\chi_{j,l}\sim X(UV_{j},D_{j},\mathbf{\theta})\). We know that \(\mathbb{E}[\chi_{j,l}]=f^{UV_{j},D_{j}}(\mathbf{\theta})\), which is why we typically define an estimator for \(f^{U,O}(\mathbf{\theta})\) as follows.
**Definition 1** (Sample Mean Estimator).: _A sample mean estimator for \(f\) is defined as,_
\[\tilde{f}^{U,O}(\mathbf{\theta})=\sum_{j=1}^{\ell}\frac{1}{r_{j}}\sum_{l=1}^{r_{j} }\chi_{j,l}. \tag{8}\]
_And for any of \(\partial_{k}f\)s,_
\[\hat{\partial}_{k}f^{U,O}(\mathbf{\theta})=\sum_{j=1}^{\ell}\frac{1}{2r_{j+}}\sum _{l=1}^{r_{j+}}\chi_{j+,l}-\frac{1}{2r_{j-}}\sum_{l=1}^{r_{j-}}\chi_{j-,l}. \tag{9}\]
_where \(\chi_{j+,l}\sim X(UV_{j},D_{j},\mathbf{\theta}+\mathbf{\hat{e}}_{k}\pi/2)\) and \(\chi_{j-,l}\sim X(UV_{j},D_{j},\mathbf{\theta}-\mathbf{\hat{e}}_{k}\pi/2)\)._
The performance of such an estimator can be bounded with the aid of Hoeffding's inequality, which provides confidence intervals for estimators of bounded random variables.
**Lemma 3** (Hoeffding's inequality [5]).: _For \(n\) random variables \(\xi_{1},\xi_{2},\ldots,\xi_{n}\) with \(a_{i}\leq\xi_{i}\leq b_{i}\) for all \(i\), and any \(t>0\), we have,_
\[\Pr\left(\left|\sum_{i=1}^{n}\xi_{i}-\sum_{i=1}^{n}\mathbb{E}[\xi_{i}]\right| \geq t\right)\leq 2e^{\frac{-2t^{2}}{\sum_{i=1}^{n}(b_{i}-a_{i})^{2}}}. \tag{10}\]
Based on this, the following bounds are obtained for the MSE (mean square error) and confidence interval (CI) of the sample mean estimator.
**Theorem 1** (Sample mean estimator bounds).: _By defining,_
\[\epsilon_{f}=\sum_{j=1}^{\ell}\frac{\|D_{j}\|_{2}^{2}}{r_{j}}, \tag{11}\]
_and,_
\[\epsilon_{\partial_{k}f}=\sum_{j=1}^{\ell}\frac{\|D_{j}\|_{2}^{2}}{4}\bigg{(} \frac{1}{r_{j+}}+\frac{1}{r_{j-}}\bigg{)}. \tag{12}\]
_When \(\hat{s}\) is \(\hat{f}^{U,O}\) or \(\hat{\partial}_{k}f^{U,O}\), its mean square error can be bounded by \(\epsilon_{f}\) or \(\epsilon_{\partial_{k}f}\), respectively; for any \(\boldsymbol{\theta}\) and \(\kappa>0\),_
\[\mathrm{MSE}[\hat{s}(\boldsymbol{\theta})]\leq\epsilon,\quad\Pr\big{(}|\hat{ s}(\boldsymbol{\theta})-s(\boldsymbol{\theta})|>\kappa\sqrt{\epsilon}\big{)} \leq 2e^{-\frac{\kappa^{2}}{2}}. \tag{13}\]
Proof.: To prove the bounds for \(f\), we start by setting the \(\xi\)s in Hoeffding's inequality to \(\frac{\chi_{j,l}}{r_{j}}\) for the different \(j\) and \(l\)s. These are bounded as \(-\frac{\|D_{j}\|}{r_{j}}\leq\frac{\chi_{j,l}}{r_{j}}\leq\frac{\|D_{j}\|}{r_{j}}\); it thus follows that,
\[\Pr\Big{(}\Big{|}\hat{f}(\boldsymbol{\theta})-f(\boldsymbol{\theta})\Big{|}> t\Big{)}\leq 2e^{-\frac{2t^{2}}{4\epsilon_{f}}}. \tag{14}\]
It is now only required to replace \(t\) with \(\kappa\sqrt{\epsilon_{f}}\). From Popoviciu's inequality [14] it is evident that \(\mathrm{Var}[\xi_{i}]\leq\frac{(b_{i}-a_{i})^{2}}{4}\), which yields the stated bound on the MSE of this bounded (and unbiased) estimator. The same results hold for the partial derivatives if we set the \(\xi\)s to \(\frac{\chi_{j\pm,l}}{2r_{j\pm}}\) for the different \(j\) and \(l\) and the \(+\) and \(-\) signs.
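A small Monte Carlo check of Theorem 1 in the simplest case \(\ell=1\), \(\|D_{1}\|_{2}=1\) (so that \(\epsilon_{f}=1/r_{1}\)) is sketched below; the angle, shot count and number of trials are arbitrary illustrative values.

```
import numpy as np

# Empirical MSE of the sample mean estimator versus the bound eps_f = 1/r of Theorem 1,
# for a single-qubit circuit R_x(theta) measured in the Z basis (outcomes +/- 1).
rng = np.random.default_rng(0)
theta, r, trials = 0.9, 50, 20000
p_plus = np.cos(theta / 2) ** 2
samples = rng.choice([1.0, -1.0], size=(trials, r), p=[p_plus, 1 - p_plus])
estimates = samples.mean(axis=1)
mse = np.mean((estimates - np.cos(theta)) ** 2)
print(f"empirical MSE = {mse:.4f}, bound eps_f = {1 / r:.4f}")
```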
## 3 Main Results
### Error Control Strategy
As mentioned in the introduction, a key performance-improving novelty of our work is the means to control the error level, and through it the number of shots. This is made possible by connecting the number of shots to the error level of an estimator via the problem below. Contrary to standard estimators, which often use a constant number of shots without further analysis, we intend to find sufficient values of the \(r_{j}\)s such that the resulting estimation error is bounded by a specified amount.
**Problem 1** (Sufficient Number of Shots).: _Given an estimator \(\hat{s}\), find the values of \(r_{j}\)s which satisfy the following constraints,_
\[\mathrm{MSE}[\hat{s}]\leq E_{s}. \tag{15}\]
For the sample mean estimator discussed previously, solving Problem 1, for \(f^{U,O}\) and \(\partial_{k}f^{U,O}\) is equivalent to the following optimisation problems,
\[\operatorname*{argmin}_{r_{j}\in\mathbb{N}}\sum_{j=1}^{\ell}r_{j}\quad\text{s. t.}\quad\mathrm{MSE}[\hat{f}]\leq E_{f}. \tag{16}\]
\[\operatorname*{argmin}_{r_{j\pm}\in\mathbb{N}}\sum_{j=1}^{\ell}r_{j+}+r_{j-} \quad\text{s. t.}\quad\mathrm{MSE}[\hat{\partial}_{k}f]\leq E_{\partial_{k} f}. \tag{17}\]
Optimization problems 16 and 17 can be approximately solved using Algorithm 1. The algorithm solves them by relaxing the MSE values to the bounds \(\epsilon_{f}\) and \(\epsilon_{\partial_{k}f}\) defined in Theorem 1 and by allowing the \(r_{j}\)s and \(r_{j\pm}\)s to take real values.
```
a) Sufficient shots for \(\hat{f}\); the function returns the resulting error bound (\(\epsilon_{f}\)) as well as the numbers of shots (the \(r_{j}\)s).

function ShotsForSMEstimatorF(\(E_{f}\))
  Decompose \(f\) into \(\ell\) terms (as in Eq. 7)
  \(\nu\leftarrow E_{f}\big/\sum_{j=1}^{\ell}\|D_{j}\|_{2}\)
  for \(j=1\) to \(\ell\) do
    \(r_{j}\leftarrow\lceil\|D_{j}\|_{2}/\nu\rceil\)
  end for
  \(\epsilon_{f}\leftarrow\sum_{j=1}^{\ell}\|D_{j}\|_{2}^{2}/r_{j}\)
  return (\(r_{j}\)s, \(\epsilon_{f}\))
end function

b) Sufficient shots for \(\hat{\partial}_{k}f\); the function returns similar outputs.

function ShotsForSMEstimatorDF(\(E_{\partial_{k}f}\))
  \(\nu\leftarrow 2E_{\partial_{k}f}\big/\sum_{j=1}^{\ell}\|D_{j}\|_{2}\)
  for \(j=1\) to \(\ell\) do
    for \(\sigma\) in \(\{+,-\}\) do
      \(r_{j\sigma}\leftarrow\lceil\|D_{j}\|_{2}/\nu\rceil\)
    end for
  end for
  \(\epsilon_{\partial_{k}f}\leftarrow\sum_{j=1}^{\ell}\big(\|D_{j}\|_{2}^{2}/4\big)\big(1/r_{j+}+1/r_{j-}\big)\)
  return (\(r_{j\pm}\)s, \(\epsilon_{\partial_{k}f}\))
end function
```
**Algorithm 1** Error control of sample mean estimators
We can easily verify the algorithm by substituting the returned values into the formulas of Theorem 1 and deducing that it not only bounds the MSE but also provides a CI for the estimated values.
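For concreteness, a minimal Python sketch of function (a) of Algorithm 1 is given below; the target MSE and the operator norms \(\|D_{j}\|_{2}\) used in the example call are arbitrary illustrative values.

```
from math import ceil

def shots_for_sm_estimator_f(E_f, D_norms):
    """Sketch of Algorithm 1(a): allocate shots r_j proportionally to ||D_j||_2 so that
    the sample mean estimator of f satisfies MSE <= E_f."""
    nu = E_f / sum(D_norms)
    r = [ceil(d / nu) for d in D_norms]
    eps_f = sum(d ** 2 / r_j for d, r_j in zip(D_norms, r))
    return r, eps_f

# Example: three measurement bases with different norms and a target MSE of 0.01.
r, eps = shots_for_sm_estimator_f(E_f=0.01, D_norms=[1.0, 0.5, 2.0])
print(r, eps)   # eps <= 0.01 by construction
```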
### Optimizing Agent
Regardless of technical detail, the function of all variational algorithms can be considered as that of an agent which interacts with a quantum computer, as shown in Figure 1. Such a high-level conceptualization permits the development of a unified framework for the evaluation of \(f\), \(\partial_{k}f\) and higher derivatives.
Most general purpose optimizers do not aim to control the number of shots, which is often taken as a constant during the optimization. There have been attempts to develop adaptive algorithms such as [17], but the scope of their application is limited. Any optimizing agent will ultimately utilize the available data by calculating a set of estimators. Statistically, it is possible to reduce the number of estimators to a sufficient set of estimators. For most typical optimizers, those estimates will be limited to \(\hat{f}^{U,O}(\theta_{i})\) and \(\hat{\partial}_{k}f^{U,O}(\theta_{i})\), where \(f^{U,O}\) is the function that is being optimized.
However, by applying the sufficient-shots problem proposed earlier, it is possible to control the optimization error instead of the number of shots. In our view, this is a more natural way of looking at the problem. In such an improved strategy, the optimizer is provided with the errors \(E_{f}\) and \(E_{\partial_{k}f}\) instead of \(r_{j}\), and works with \(\hat{f}\), \(\hat{\partial}_{k}f\) instead of \(\chi_{j,l}\). This is illustrated in Figure 2.
For the sake of simplicity we shall henceforth refer to \(f^{U,O}(\theta_{i})\) and \(\partial_{k}f^{U,O}(\theta_{i})\) as \(f_{i}\) and \(\partial_{k}f_{i}\), respectively. The same notation is also used for the sample mean estimators \(\hat{f}_{i}\) and \(\hat{\partial}_{k}f_{i}\) defined in Definition 1.
In the proposed framework the main problem is broken down into two separate problems. These are,
1. An optimization problem of uncertain values, with a sensitivity analysis
2. An estimation problem, with the question of sufficient shots for the estimator.
In the proposed framework one is not limited to the sample mean estimator defined in Definition 1 and can make use of any static or dynamic estimator. Dynamic estimators also have an internal state, which is shown by a gray arrow in Figure 2.
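One possible way to organize the two roles, sketched below, is a pair of interfaces in which the optimizer requests estimates together with error budgets, and the estimator decides internally how many shots are sufficient; the class and method names are our own illustrative assumptions rather than part of the original framework.

```
from abc import ABC, abstractmethod

class Estimator(ABC):
    """Estimation side of Figure 2: return estimates whose MSE does not exceed the
    requested budget, deciding internally how many shots are sufficient (Problem 1).
    Dynamic estimators may additionally keep an internal state."""

    @abstractmethod
    def estimate_f(self, theta, target_mse):
        ...

    @abstractmethod
    def estimate_df(self, theta, k, target_mse):
        ...

class Optimizer(ABC):
    """Optimization side: perform one update of theta, choosing the error budgets
    passed to the estimator via its own sensitivity analysis."""

    @abstractmethod
    def step(self, theta, estimator):
        ...
```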
We will demonstrate the profound effectiveness of this approach by introducing a few examples of estimators and optimizers in the following section. For the sake of illustrating the methodology, we shall make use of existing, standard and rather simple optimization and estimation techniques. Evidently, the eventually obtainable performance improvements can be much greater with a well-matched and individually powerful optimizer and estimator.
## 4 Case Studies
### Example I: Error-Aware Simulated Annealing
A simple simulated annealing algorithm is a stochastic process that starts from a random point in the search space and iteratively moves to a new point with a transition probability \(P\) based on the function values and the temperature \(T_{i}\) at step \(i\). In order to introduce the uncertainty, we only need to redefine the transition probability \(\hat{P}\) in terms of the estimator as follows,
\[\hat{P}(\boldsymbol{\theta}_{i+1}|\boldsymbol{\theta}_{i})=\begin{cases}1& \text{if }\hat{f}_{i+1}<\hat{f}_{i}\\ e^{-\frac{\hat{f}_{i+1}-\hat{f}_{i}}{T_{i}}}&\text{otherwise.}\end{cases} \tag{18}\]
Then, the sensitivity can be analyzed as follows. In order to maintain an accuracy for \(\hat{P}(\boldsymbol{\theta}_{i+1}|\boldsymbol{\theta}_{i})\) we seek,
\[\mathbb{E}\Big{[}D_{KL}(P\parallel\hat{P})\Big{]}\leq\eta, \tag{19}\]
where \(D_{KL}\) is the Kullback-Leibler divergence. We know that this equation will hold if,
\[\mathbb{E}\Bigg{[}\Bigg{|}\text{log}\,\frac{P(\boldsymbol{\theta}_{i+1}| \boldsymbol{\theta}_{i})}{\hat{P}(\boldsymbol{\theta}_{i+1}|\boldsymbol{ \theta}_{i})}\Bigg{|}\Bigg{]}\leq\eta\qquad\forall\boldsymbol{\theta}_{i+1}. \tag{20}\]
Figure 2: Schematic diagram of an optimizer with sensitivity analysis and an estimator with a sufficient shot algorithm.
The expectation on the left-hand side of Eq. (20) can be bounded using \(\mathbb{E}[|x-\mathbb{E}[x]|]\leq\sqrt{\text{Var}[x]}\), the independence of \(\hat{f}_{i+1}\) and \(\hat{f}_{i}\), and the assumption of a monotonically decreasing temperature \(T_{i+1}<T_{i}\),
\[\mathbb{E}\Big{[}\Big{|}\log P(\mathbf{\theta}_{i+1}|\mathbf{\theta}_{i}) -\log\hat{P}(\mathbf{\theta}_{i+1}|\mathbf{\theta}_{i})\Big{|}\Big{]} \leq\frac{1}{T_{i}}\mathbb{E}\Big{[}\Big{|}\hat{f}_{i+1}-\hat{f} _{i}-f_{i+1}+f_{i}\Big{|}\Big{]}, \tag{21}\] \[\leq\frac{1}{T_{i}}\sqrt{\text{Var}\Big{[}\hat{f}_{i+1}-\hat{f}_{ i}\Big{]}},\] \[\leq\frac{1}{T_{i}}\sqrt{\text{Var}\Big{[}\hat{f}_{i+1}\Big{]}+ \text{Var}\Big{[}\hat{f}_{i}\Big{]}}.\]
Note that the estimators should be unbiased; otherwise, the equation above will not hold. Finally, we introduce the condition below, which is sufficient for the equation above and, furthermore, bounds the KL divergence by \(\eta\),
\[\text{MSE}[f_{i+1}]\leq\frac{\eta^{2}T_{i}^{2}}{2}. \tag{22}\]
This is a more efficient condition for the estimator than simply requiring \(\text{MSE}[f_{i+1}]\leq E\). In order to compare the performance of simulated annealing with and without the sensitivity analysis, we conducted three experiments as follows,
* **Simple Optimizer (1)**: A simulated annealing optimizer with the condition \(\text{MSE}[f_{i+1}]\leq E\) with a high value for \(E\).
* **Simple Optimizer (2)**: A simulated annealing optimizer with the condition \(\text{MSE}[f_{i+1}]\leq E\) with a low value for \(E\).
* **Error-Aware Optimizer**: A simulated annealing optimizer with Equation 22 as the condition.
For the experimental studies, consider the benchmark problem defined in Problem 2.
**Problem 2** (Benchmark problem).: _Assume a variational task with one qubit, \(U(\theta)=R_{x}(\theta)\) and \(O=Z\) with \(\mathcal{C}=I\), which implies \(\ell=1\) and \(m=1\). Then \(C(\theta)=\langle 0|R_{x}^{\dagger}(\theta)ZR_{x}(\theta)|0\rangle\) can be simplified further to \(\cos\theta\)._
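A minimal sketch of the error-aware simulated annealing on this benchmark is given below. It draws \(Z\)-measurement outcomes on \(R_{x}(\theta)\left|0\right\rangle\) and chooses the number of shots per evaluation from condition (22), using \(\mathrm{MSE}\leq 1/\text{shots}\) for \(\|D\|_{2}=1\); the step size, temperature schedule and value of \(\eta\) are illustrative assumptions.

```
import numpy as np

rng = np.random.default_rng(1)

def estimate_f(theta, shots):
    """Sample mean of Z-measurements on R_x(theta)|0>; its expectation is cos(theta)."""
    p_plus = np.cos(theta / 2) ** 2
    return rng.choice([1.0, -1.0], size=shots, p=[p_plus, 1 - p_plus]).mean()

eta, theta, total_shots = 0.3, 0.1, 0
for i in range(50):
    T = 0.95 ** i                               # monotonically decreasing temperature
    shots = int(np.ceil(2.0 / (eta * T) ** 2))  # sufficient shots: 1/shots <= eta^2 T^2 / 2
    proposal = theta + rng.normal(scale=0.3)
    f_cur, f_new = estimate_f(theta, shots), estimate_f(proposal, shots)
    total_shots += 2 * shots
    if f_new < f_cur or rng.random() < np.exp(-(f_new - f_cur) / T):
        theta = proposal                        # transition rule of Eq. (18)

print(f"final theta = {theta:.3f}, exact f = {np.cos(theta):.3f}, total shots = {total_shots}")
```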
We start with an ensemble of \(\theta\)s near \(0\) and compare the distribution of the exact value of the function \(f\) through the optimization (with respect to the number of shots conducted) for each optimizer. The results are shown in Figure 3.
To more clearly highlight the difference between the distributions, we have also plotted the distribution of data points after 7000 shots for each optimizer in Figure 4.
Note that the error bound for different optimizers as a function of the number of shots is shown in Figure 5 which is just a visualisation of condition 22.
The results show that the error-aware simulated annealing is able to find a better solution with a smaller number of shots.
### Example II: Recursive Estimator for Gradient Descent
To illustrate the flexibility of the framework with respect to the choice of estimators and optimizers, in this section we perform experiments with a standard gradient descent algorithm and a novel recursive estimator for the function and its derivative. The proposed recursive estimator works on the assumption that the distance between two function evaluations required by the optimizer at two consecutive iterations is not great. That is, the value of the function (and possibly its gradient) at the next point \(\boldsymbol{\theta}_{i+1}\) does not differ drastically from its value at \(\boldsymbol{\theta}_{i}\). This assumption allows the update rule of the optimizer to be written in the form \(\boldsymbol{\theta}_{i+1}=\boldsymbol{\theta}_{i}+\delta\boldsymbol{\theta}_{i}\), where \(\delta\boldsymbol{\theta}_{i}\) is a vector with bounded norm. The proposed recursive estimation methodology is formally defined in Definition 2.
**Definition 2** (Recursive Estimators).: \[\begin{cases}\hat{f}_{i}^{*}=\alpha_{i}(\hat{f}_{i-1}^{*}+\delta\boldsymbol{ \theta}_{i-1}\cdot\hat{\boldsymbol{\nabla}}f_{i-1}^{*})+(1-\alpha_{i})\hat{f}_ {i}\\ \hat{\partial}_{k}f_{i}^{*}=\beta_{i}\hat{\partial}_{k}f_{i-1}^{*}+(1-\beta_{ i})\hat{\partial}_{k}f_{i}\end{cases},\quad\begin{cases}\hat{f}_{0}^{*}=\hat{f}_{0}\\ \hat{\partial}_{k}f_{0}^{*}=\hat{\partial}_{k}f_{0}\end{cases}\] (23)
Note that the \(\alpha_{i}\)s and \(\beta_{i}\)s are values between 0 and 1 and act as hyperparameters which control the relative weight given to prior knowledge.
Figure 4: Distribution of datapoints after 7000 shots for each optimizer.
Figure 3: Comparison of the performance of the error-aware simulated annealing with the simpler ones.
The optimal values of these parameters are derived later. First, we present Theorem 2, which provides theoretical bounds for the bias and variance of the estimates so obtained.
**Theorem 2** (Recursive estimator bounds).: _For any \(i\),_
\[\begin{cases}\mathrm{Bias}[\hat{f}_{i}^{*}]\leq B_{i}\\ \mathrm{Bias}[\hat{\partial}_{k}f_{i}^{*}]\leq B_{\partial_{k},i}.\end{cases} \tag{24}\]
_Where \(B_{i}\) and \(B_{\partial_{k},i}\) are calculated recursively as follows,_
\[\begin{cases}B_{i}=\alpha_{i}\Big{(}B_{i-1}+\sum_{k=1}^{m}|\left(\delta \boldsymbol{\theta}_{i-1}\right)_{k}|B_{\partial_{k},i-1}+\frac{m}{2}\| \delta\boldsymbol{\theta}_{i-1}\|_{2}^{2}\|O\|_{2}\Big{)}\\ B_{\partial_{k},i}=\beta_{k,i}(B_{\partial_{k},i-1}+\|\delta\boldsymbol{ \theta}_{i-1}\|_{2}\|O\|_{2})\end{cases},\quad\begin{cases}B_{0}=0\\ B_{\partial_{k},0}=0.\end{cases} \tag{25}\]
_and similarly for the variance,_
\[\begin{cases}\mathrm{Var}[\hat{f}_{i}^{*}]\leq A_{i}^{2}\\ \mathrm{Var}[\hat{\partial}_{k}f_{i}^{*}]\leq A_{\partial_{k},i}^{2}.\end{cases} \tag{26}\]
_Using the notation of Theorem 1,_
\[\begin{cases}A_{i}^{2}=\alpha_{i}^{2}\Big{(}A_{i-1}^{2}+\sum_{k=1}^{m}|\left( \delta\boldsymbol{\theta}_{i-1}\right)_{k}|^{2}A_{\partial_{k},i-1}^{2}\Big{)} +(1-\alpha_{i})^{2}\epsilon_{f_{i}}^{2}\\ A_{\partial_{k},i}^{2}=\beta_{k,i}^{2}A_{\partial_{k},i-1}^{2}+(1-\beta_{k,i}) ^{2}\epsilon_{\partial_{k}f_{i}}^{2},\end{cases} \tag{27}\]
Proof.: Defining the drift term \(d_{i}=f_{i-1}+\delta\boldsymbol{\theta}_{i-1}\cdot\boldsymbol{\nabla}f_{i-1}- f_{i}\), we can write the bias and variance of \(\hat{f}_{i}^{*}\) as,
\[\mathrm{Bias}\Big{[}\hat{f}_{i}^{*}\Big{]} =\alpha_{i}\Big{(}\mathrm{Bias}\Big{[}\hat{f}_{i-1}^{*}\Big{]}+ \delta\boldsymbol{\theta}_{i-1}\cdot\mathrm{Bias}\Big{[}\hat{\boldsymbol{ \nabla}}f_{i-1}^{*}\Big{]}+d_{i}\Big{)} \tag{28}\] \[\mathrm{Var}\Big{[}\hat{f}_{i}^{*}\Big{]} =\alpha_{i}^{2}\Big{(}\mathrm{Var}\Big{[}\hat{f}_{i-1}^{*}\Big{]} +\delta\boldsymbol{\theta}_{i-1}^{2}\cdot\mathrm{Var}\Big{[}\hat{\boldsymbol{ \nabla}}f_{i-1}^{*}\Big{]}\Big{)}+(1-\alpha_{i})^{2}\mathrm{Var}\Big{[}\hat{f} _{i}\Big{]}. \tag{29}\]
Figure 5: Error bound for different optimizers as a function of the number of shots.
In an abuse of notation, \(\delta\mathbf{\theta}_{i-1}^{2}\) represents a vector of squared elements and \(\text{Var}\Big{[}\hat{\mathbf{\nabla}}f_{i-1}^{*}\Big{]}\) represents a vector of variances. This facilitates a more compact proof, as shall be seen. With the same objective, we define another drift term for the derivatives of \(f\), \(d_{\partial_{k},i}=\partial_{k}f_{i-1}-\partial_{k}f_{i}\), which helps us to write the bias and variance of \(\hat{\partial_{k}}f_{i}^{*}\) as,
\[\text{Bias}\Big{[}\hat{\partial_{k}}f_{i}^{*}\Big{]} =\beta_{k,i}\Big{(}\text{Bias}\Big{[}\hat{\partial_{k}}f_{i-1}^{* }\Big{]}+d_{\partial_{k},i}\Big{)} \tag{30}\] \[\text{Var}\Big{[}\hat{\partial_{k}}f_{i}^{*}\Big{]} =\beta_{k,i}^{2}\text{Var}\Big{[}\hat{\partial_{k}}f_{i-1}^{*} \Big{]}+(1-\beta_{k,i})^{2}\text{Var}\Big{[}\hat{\partial_{k}}f_{i}\Big{]}. \tag{31}\]
Combining Lemma 2 with the mean value theorem, we have,
\[\begin{cases}|d_{i}|\leq\frac{1}{2}\|\delta\mathbf{\theta}_{i-1}\|_{2}^{2}m\|O\|_ {2}\\ |d_{\partial_{k},i}|\leq\|\delta\mathbf{\theta}_{i-1}\|_{2}\|O\|_{2}.\end{cases} \tag{32}\]
Finally, combining the above equations with the fact that \(\text{Var}[\hat{f}_{i}]\leq\epsilon_{f_{i}}^{2}\) and \(\text{Var}[\hat{\partial_{k}}f_{i}]\leq\epsilon_{\partial_{k}f_{i}}^{2}\) completes the proof.
For the confidence interval of recursive estimator, we can prove the following result,
**Corollary 1** (Confidence Interval).: _As a result of Theorem 2, the following holds when \(s^{*}\) is any of the \(f_{i}\)s or \(\partial_{k}f_{i}\)s, simply by setting the corresponding \(A\) and \(B\)s._
\[\text{MSE}[\hat{s}^{*}]\leq B^{2}+A^{2},\quad\Pr(|\hat{s}^{*}-s|>\kappa A+B) \leq 2e^{-\frac{\kappa^{2}}{2}}. \tag{33}\]
Proof.: While the expression for the MSE is trivial, for the confidence interval we have,
\[\Pr\Big{(}\Big{|}\hat{f}_{i}^{*}-\mathbb{E}[\hat{f}_{i}^{*}]\Big{|}>\kappa \sqrt{A_{i}}\Big{)}\leq 2e^{-\frac{\kappa^{2}}{2}}. \tag{34}\]
This is true because \(\hat{f}_{i}^{*}\) is a linear combination of \(\chi\)s that are from bounded distributions. Accordingly, Hoeffding's inequality applies. Moreover, there is a one-to-one correspondence between bounds from Hoeffding's and Popoviciu's inequalities (see the proof of Theorem 1), which obviously validates the equation above. Since \(\Big{|}\hat{f}_{i}^{*}-f_{i}\Big{|}>\kappa\sqrt{A_{i}}+B_{i}\Rightarrow\Big{|} \hat{f}_{i}^{*}-\mathbb{E}[\hat{f}_{i}^{*}]\Big{|}>\kappa\sqrt{A_{i}}\),
\[\Pr\Big{(}\Big{|}\hat{f}_{i}^{*}-f_{i}\Big{|}>\kappa\sqrt{A_{i}}+B_{i}\Big{)} \leq\Pr\Big{(}\Big{|}\hat{f}_{i}^{*}-\mathbb{E}[\hat{f}_{i}^{*}]\Big{|}>\kappa \sqrt{A_{i}}\Big{)}\leq 2e^{-\frac{\kappa^{2}}{2}}. \tag{35}\]
Finally, we need to solve the sufficient shots problem (Problem 1) for the recursive estimator. The actual objective is to solve,
\[\begin{split}\underset{r_{j,i},r_{j\pm,i}\in\mathbb{N},\alpha_{i },\beta_{k,i}}{\operatorname{argmin}}&\sum_{i=1}^{\infty}\sum_{j =1}^{\ell}r_{j,i}+\sum_{k=1}^{m}r_{j+,k,i}+r_{j-,k,i}\\ \text{s. t.}&\forall i\quad\text{MSE}[\hat{f}_{i}^{* }]\leq E_{f}\\ \text{s. t.}&\forall i,k\quad\text{MSE}[\hat{\partial }_{k}f_{i}^{*}]\leq E_{\partial_{k}f}.\end{split} \tag{36}\]
However, we solve an iterative version as in Algorithm 1,
\[\min_{r_{j}\in\mathbb{N},\alpha_{i}}\sum_{j=1}^{\ell}r_{j}\quad\text{s. t.}\quad\text{MSE}[\hat{f}_{i}^{*}]\leq E_{f}. \tag{37}\]
\[\min_{r_{j,\pm}\in\mathbb{N},\beta_{k,i}}\sum_{j=1}^{\ell}r_{j+}+r_{j-}\quad \text{s. t.}\quad\text{MSE}[\hat{\partial}_{k}f_{i}^{*}]\leq E_{\partial_{k}f}. \tag{38}\]
Combining the two leads to Algorithm 2.
**Remark 1**.: _Note that, with this algorithm, for the same error bound, the number of shots for the recursive estimator of a function will be at most equal to the number of shots for the naive estimator of that function._
To illustrate the performance of Algorithm 2, we first apply the estimator to the variational Problem 2 with a random (zero-mean) initial point and a simple gradient-descent optimizer. Figure 6 shows the estimated values (with CIs) of the loss function, for different estimators, as a function of the number of shots used to evaluate the function.
It is evident that the proposed recursive estimator outperforms the sample mean estimator by a significant margin. Another comparison, visualizing the number of shots per GD iteration, is shown in Figure 7.
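For reference, a minimal sketch of the recursive estimator of Definition 2 driving a plain gradient-descent optimizer on Problem 2 is shown below. For brevity the weights \(\alpha\), \(\beta\) and the shot count per step are fixed constants rather than being chosen adaptively by Algorithm 2, and the learning rate and number of iterations are illustrative assumptions.

```
import numpy as np

rng = np.random.default_rng(2)

def measure(theta, shots):
    """Sample mean of Z on R_x(theta)|0>, an unbiased estimate of cos(theta)."""
    p_plus = np.cos(theta / 2) ** 2
    return rng.choice([1.0, -1.0], size=shots, p=[p_plus, 1 - p_plus]).mean()

def grad_estimate(theta, shots):
    """Parameter-shift estimate of the derivative (Definition 1)."""
    return 0.5 * (measure(theta + np.pi / 2, shots) - measure(theta - np.pi / 2, shots))

alpha, beta, lr, shots = 0.5, 0.5, 0.4, 32
theta = 0.3
f_rec, g_rec = measure(theta, shots), grad_estimate(theta, shots)   # i = 0 initialization

for i in range(50):
    d_theta = -lr * g_rec                       # gradient-descent update
    theta += d_theta
    f_new, g_new = measure(theta, shots), grad_estimate(theta, shots)
    # recursive updates of Eq. (23): blend the drifted previous estimate with the new one
    f_rec = alpha * (f_rec + d_theta * g_rec) + (1 - alpha) * f_new
    g_rec = beta * g_rec + (1 - beta) * g_new

print(f"theta = {theta:.3f}, recursive estimate = {f_rec:.3f}, exact = {np.cos(theta):.3f}")
```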
To verify the theoretical results derived earlier, the bounds on MSE and CI are compared with the actual values of the MSE and CI of the estimators in Figures 8 and 9 respectively.
For further experimental verification, the same experiment has also been carried out on the more complex MaxCut problem for a square graph (\(|V|=4\) and \(|E|=4\)). The results are shown in Figure 10 and Figure 11.
Figure 6: estimated loss value vs. number of shots, for a simple GD optimizer equipped with each estimator
```
a) Sufficient shots for \(\hat{f}_{i}^{*}\)

function ShotsForREstimatorF(\(E_{f}\))
  using \(A_{i-1}\) and \(B_{i-1}\) from the previous evaluations
  using the \(B_{\partial_{k},i-1}\)s and \(A_{\partial_{k},i-1}\)s from ShotsForREstimatorDF
  \(b\leftarrow B_{i-1}+\|\delta\boldsymbol{\theta}_{i-1}\|_{2}B_{\partial,i-1}+\frac{m}{2}\|\delta\boldsymbol{\theta}_{i-1}\|_{2}^{2}\|O\|_{2}\)
  \(a\leftarrow A_{i-1}+\|\delta\boldsymbol{\theta}_{i-1}\|_{2}^{2}A_{\partial,i-1}\)
  if \(b^{2}+a>E_{f}\) then
    \(E^{\prime}\leftarrow(b^{2}+a)E_{f}/(b^{2}+a-E_{f})\)
    (\(r_{j}\)s, \(\epsilon\)) \(\leftarrow\) ShotsForSMEstimatorF(\(E^{\prime}\))
  else
    \(\epsilon\leftarrow\infty\)
    for \(j=1\) to \(\ell\) do
      \(r_{j}\leftarrow 0\)
    end for
  end if
  \(\alpha_{i}\leftarrow(b^{2}+a)/(b^{2}+a-E_{f})\)
  \(A_{i}\leftarrow\alpha_{i}^{2}a+(1-\alpha_{i})^{2}\epsilon\)
  \(B_{i}\leftarrow\alpha_{i}b\)
  return (\(r_{j}\)s, \(B_{i}^{2}+A_{i}\))
end function

b) Sufficient shots for \(\hat{\partial}_{k}f_{i}^{*}\)

function ShotsForREstimatorDF(\(E_{\partial_{k}f}\))
  using \(A_{\partial_{k},i-1}\) and \(B_{\partial_{k},i-1}\) from the previous evaluations
  \(b\leftarrow B_{\partial_{k},i-1}+m\|\delta\boldsymbol{\theta}_{i-1}\|_{2}\|O\|_{2}\)
  \(a\leftarrow A_{\partial_{k},i-1}\)
  if \(b^{2}+a>E_{\partial_{k}f}\) then
    \(E^{\prime}\leftarrow(b^{2}+a)E_{\partial_{k}f}/(b^{2}+a-E_{\partial_{k}f})\)
    (\(r_{j\pm}\)s, \(\epsilon\)) \(\leftarrow\) ShotsForSMEstimatorDF(\(E^{\prime}\))
  else
    \(\epsilon\leftarrow\infty\)
    for \(j=1\) to \(\ell\) do
      for \(\sigma\) in \(\{+,-\}\) do
        \(r_{j\sigma}\leftarrow 0\)
      end for
    end for
  end if
  \(\beta_{k,i}\leftarrow(b^{2}+a)/(b^{2}+a-E_{f})\)
  \(A_{\partial_{k},i}\leftarrow\beta_{k,i}^{2}a+\beta_{k,i}^{2}\epsilon\)
  \(B_{\partial_{k},i}\leftarrow\beta_{k,i}b\)
  return (\(r_{j\sigma}\)s, \(B_{\partial_{k},i}^{2}+A_{\partial_{k},i}\))
end function
```
**Algorithm 2** Error control of recursive estimators
## 5 Concluding remarks
In this paper, a generalized framework for optimization algorithms which seek to reduce shot-number evaluations in VQAs was proposed. In its general form, the proposed framework entails the combination of an estimator with a numerical optimization algorithm. We introduced the sufficient shots problem and proposed an algorithm for it to be used with the sample mean estimator. This concept, together with the sensitivity analysis of the optimizers, allows us to control the number of shots, leading to a more natural and effective optimization process.
Two specific case studies of this framework were subjected to extensive experiments. In the first case, a sample mean estimator is coupled with a simulated annealing optimizer, and in the second case, a recursive estimator is coupled with a gradient descent optimizer. In both cases we demonstrated that the proposed approach achieves significant performance improvements over conventional methods.
Our results highlight the importance of considering error control strategies and incorporating them into the design of optimizers for variational quantum algorithms.
Figure 8: Exact MSE values vs Bounded MSE values
Figure 7: number of shots per each GD iteration for each of those estimators
By leveraging estimators with error control and integrating them with interactive optimization processes, we can achieve better optimization performance and reduce the resource requirements for quantum computations.
Overall, this work contributes to advancing the field of variational quantum algorithms by providing a systematic framework for designing error-aware optimizers. The presented approaches and results open up new possibilities for improving the efficiency and effectiveness of quantum computing research in various domains, such as quantum chemistry, combinatorial optimization, and machine learning. Future directions could explore further extensions and applications of the proposed framework, as well as experimental validations on quantum devices.
Figure 10: loss function vs. number of shots, for a simple GD optimizer equipped with each estimator
Figure 9: CI bounds and the difference between exact value and the estimated value of the function |
2302.01793 | Self-Supervised In-Domain Representation Learning for Remote Sensing
Image Scene Classification | Transferring the ImageNet pre-trained weights to the various remote sensing
tasks has produced acceptable results and reduced the need for labeled samples.
However, the domain differences between ground imageries and remote sensing
images cause the performance of such transfer learning to be limited. Recent
research has demonstrated that self-supervised learning methods capture visual
features that are more discriminative and transferable than the supervised
ImageNet weights. We are motivated by these facts to pre-train the in-domain
representations of remote sensing imagery using contrastive self-supervised
learning and transfer the learned features to other related remote sensing
datasets. Specifically, we used the SimSiam algorithm to pre-train the
in-domain knowledge of remote sensing datasets and then transferred the
obtained weights to the other scene classification datasets. Thus, we have
obtained state-of-the-art results on five land cover classification datasets
with varying numbers of classes and spatial resolutions. In addition, By
conducting appropriate experiments, including feature pre-training using
datasets with different attributes, we have identified the most influential
factors that make a dataset a good choice for obtaining in-domain features. We
have transferred the features obtained by pre-training SimSiam on remote
sensing datasets to various downstream tasks and used them as initial weights
for fine-tuning. Moreover, we have linearly evaluated the obtained
representations in cases where the number of samples per class is limited. Our
experiments have demonstrated that using a higher-resolution dataset during the
self-supervised pre-training stage results in learning more discriminative and
general representations. | Ali Ghanbarzade, Hossein Soleimani | 2023-02-03T15:03:07Z | http://arxiv.org/abs/2302.01793v1 | # Self-Supervised In-Domain Representation Learning for Remote Sensing Image Scene Classification
###### Abstract
Transferring the ImageNet pre-trained weights to the various remote sensing tasks has produced acceptable results and reduced the need for labeled samples. However, the domain differences between ground imageries and remote sensing images cause the performance of such transfer learning to be limited. Recent research has demonstrated that self-supervised learning methods capture visual features that are more discriminative and transferable than the supervised ImageNet weights. We are motivated by these facts to pre-train the in-domain representations of remote sensing imagery using contrastive self-supervised learning and transfer the learned features to other related remote sensing datasets. Specifically, we used the SimSiam algorithm to pre-train the in-domain knowledge of remote sensing datasets and then transferred the obtained weights to the other scene classification datasets. Thus, we have obtained state-of-the-art results on five land cover classification datasets with varying numbers of classes and spatial resolutions. In addition, By conducting appropriate experiments, including feature pre-training using datasets with different attributes, we have identified the most influential factors that make a dataset a good choice for obtaining in-domain features. We have transferred the features obtained by pre-training SimSiam on remote sensing datasets to various downstream tasks and used them as initial weights for fine-tuning. Moreover, we have linearly evaluated the obtained representations in cases where the number of samples per class is limited. Our experiments have demonstrated that using a higher-resolution dataset during the self-supervised pre-training stage results in learning more discriminative and general representations.
Transfer Learning, Deep Learning, Remote Sensing, Self-Supervised Learning, Representation Learning, Scene Classification
## 1 Introduction
Remote sensing imageries are acquired via imaging satellites, airplanes, etc. [1]. These devices are capable of monitoring various aspects of the earth's surface. Unlike natural images, which are captured using digital cameras and often contain a limited number of objects, remote sensing imageries can encompass vast geographical areas and hold numerous contents with varying dimensions and sizes. Remote sensing images, in contrast to ground images, are not object-centric. Therefore, they can be used for various applications, including land cover classification, road network extraction, disaster prevention, and monitoring [2, 3]. Only artificial intelligence and machine learning systems can process this volume of data. Fortunately, recent advances in computer vision have made it easier to process and analyze visual data points[4]. With the recent advancements in deep learning for computer vision applications, the supervised learning approaches for land cover classification in remote sensing images have performed exceptionally well. However, the main drawback of supervised learning is that it needs a tremendous amount of labeled samples. Providing this volume of remote-sensing images is very costly and time-consuming. In addition, it requires experts to annotate data points carefully. When obtaining large labeled datasets is exhaustive, the general solution is to transfer the learned weights from the ImageNet dataset to these tasks [1, 5-13]. While this transfer learning has produced acceptable results for remote sensing tasks, It has the following drawbacks:
1. If there are significant domain differences between the remote sensing and the ImageNet datasets, this type of transfer learning will fail. As a result, it can perform poorly in some cases, such as hyperspectral and multispectral images.
2. Transferring ImageNet pre-trained weights directly to non-RGB remote sensing datasets is impossible [3, 14-19].
Domain differences between natural and remote sensing images stimulate researchers to find alternative solutions. To do so, some researchers used supervised or unsupervised methods to pre-train models on remote sensing datasets. The learned weights are then transferred to other remote-sensing tasks[5, 19]. However, the disadvantage of supervised pre-training is that it requires a large number of in-domain labeled samples to learn general representations from remote sensing images. Self-supervised learning has emerged to overcome all of the previously mentioned drawbacks. It aims to learn effective representations of the input data without relying on human-provided labels. Recent advances in self-supervised learning have demonstrated that their pre-trained features have a higher transferability than ImageNet models. This branch of artificial intelligence is advantageous when acquiring labeled data is time-consuming and expensive, such as for medical and satellite images[14]. Additionally, these methods are more practical in the real world because different sensors generate millions or even billions of data samples, and labeling all of them is practically impossible. Recently, contrastive self-supervised learning [20] outperformed other feature learning methods. These methods significantly narrowed the gap between supervised and unsupervised approaches in representation learning [21]. Currently, the most effective contrastive self-supervised algorithms[22] employ data augmentation techniques to generate positive samples. In
other words, they use data augmentation techniques such as image cropping, rotation, and so on to create multiple views of the same image. The objective function tries to bring positive samples as close to each other as possible in the feature space. In most of these methods, positive and negative pairs compete with each other [23]. Since these methods do not require labeled data, we can use a large amount of unlabeled data to learn the features in an unsupervised way and then transfer the weights to other remote sensing tasks.
Selecting an appropriate dataset for visual representation learning from remote sensing images, either supervised or self-supervised, is one of the influencing factors in learning highly generalizable features, which have huge effects on the performance of the final model on downstream tasks. In recent works, such as [6] and [17], researchers have determined the influencing factors on the datasets used for pre-training features from satellite images in a supervised manner. Unlike supervised learning, which uses accurate human-provided labels as supervisory signals, self-supervised learning methods extract supervisory signals from the data itself. This difference makes it necessary to carefully investigate the vital factors that make a dataset an ideal option for self-supervised pre-training in remote sensing. One of our goals is to investigate the effect of the selected dataset for pre-training visual features from satellite images using the SimSiam algorithm. To achieve this goal, in the pre-training stage, we used datasets with different characteristics in terms of the number of samples, spatial resolution, and the number of classes. Our other goal is to investigate the generalizability of self-supervised learned features using SimSiam for land cover classification. We have examined the transferability of the in-domain pre-trained weights by conducting extensive experiments. In the SimSiam algorithm, we used ResNet50 with ImageNet weights as the backbone. In this setting, we have pre-trained the features in a self-supervised manner on the MLRSNet, PatternNet, and Resisc45 datasets. Finally, we have fine-tuned the obtained models on the target datasets under different conditions, such as fine-tuning all layers and linear evaluation using a limited number of samples. The results demonstrate that by selecting a suitable medium-sized remote sensing dataset, we can pre-train features that produce the best results for various land-cover classification tasks. Our main contributions are as follows:
1. We have investigated the generalizability of the SimSiam algorithm for learning visual representations in remote sensing images by conducting detailed and exhaustive experiments on six land cover classification datasets with different characteristics.
2. During pre-training in-domain representations with the SimSiam algorithm, we used ImageNet weights as initial weights to reduce the need for training data.
3. By conducting detailed experiments, we have discussed the factors that make the dataset a good reference for self-supervised pre-training of features. The obtained results have demonstrated that the pre-training dataset should have a high spatial resolution.
The remainder of this paper is organized as follows:
In section 2, we have reviewed the related works. Section 3 explores the SimSiam algorithm used for pre-training in-domain features from remote sensing images. Section 4 presents the statistics of the selected datasets for each step. In section 5, we solved the downstream tasks and demonstrated the results. Finally, we have concluded the paper in section 6.
## 2 Related Works
### Visual Representation Learning in Remote Sensing Imagery
Where large labeled datasets are unavailable, the general solution is to use pre-trained models on large-scale datasets such as ImageNet. The referred models can be used to extract features from new datasets or as a starting point for fine-tuning weights on the new tasks and datasets. However, this type of transfer learning is only directly applicable to RGB remote sensing datasets[1, 5-12]. For pre-training the in-domain general representations for overhead imagery, we can use either supervised or unsupervised methods. We can refer to [5] as an example of supervised learning-based methods. Here, the initial steps for supervised in-domain visual representation learning from remote sensing images are described. The learned features have been evaluated using fine-tuning on land cover classification datasets. In most cases, it has been demonstrated that in-domain features learned from remote sensing datasets perform better than their ImageNet counterparts. Additionally, in the case of supervised learning, they have investigated the characteristics that make a dataset a good reference for learning visual representations. The features learned from multi-resolution datasets have demonstrated higher generalizability and better performance. The researchers in [26] combined the Resisc45, PatternNet, and RSI-CB datasets, trained a model on them, and then fine-tuned it on the UC-Merced dataset. Compared to the ImageNet model, this model is more accurate. In comparison, the analysis in [27] demonstrated that models pre-trained on ImageNet perform better than models pre-trained on PatternNet when transferred to the target AID [28] and UCM [29] datasets. Both studies [5] and [6] are very similar and examined the performance of ImageNet pre-trained models and models pre-trained on in-domain datasets. They conducted experiments in [6] using two high-resolution and three medium-resolution datasets. The results indicated that fine-tuning in-domain pre-trained weights on remote sensing datasets performs better than fine-tuning ImageNet weights. However, the mentioned work used only two high-resolution datasets for pre-training in-domain features, and the influential factors in learning highly generalizable representations from remote sensing datasets are not yet fully determined. Additionally, the mentioned works have examined the effect of a pre-training dataset for supervised representation learning. In contrast, by conducting detailed experiments on the PatternNet and Resisc45 datasets, we have examined the impact of the pre-training dataset for the SimSiam algorithm, which is a contrastive self-supervised learning method.
Self-supervised learning is a highly practical subset of unsupervised learning that aims to learn general visual features from images without the use of human labels. In general, self-supervised learning methods consist of two steps. In the first step, a pretext task is designed, and by solving this proxy task, visual representations are learned. The second step is to transfer the pre-trained features from the previous step to other downstream tasks. The resulting model from the first step can be used as a starting point for further fine-tuning or feature extraction. Such techniques will be advantageous for difficult-to-label aerial and medical images. In the following section, we have classified the self-supervised visual representation learning methods into three groups and discussed each one.
**Representation learning by solving pretext tasks:**
Researchers in computer vision have defined many pretext tasks to date. We refer readers to [30] for a review of self-supervised learning methods based on pretext tasks. The authors have classified all pretext tasks into four categories:
1. Generative-based methods such as image-colorization, super-resolution, etc
2. Context-based tasks, such as image jigsaw puzzles, geometric transformations, etc.
3. Free semantic label-based tasks, such as contour detection, depth estimation, etc.
4. Cross Modal-based methods such as optical flow estimation, visual-audio correspondence, etc.
Moreover, the researchers in [31] and [32] combined several pretext tasks for capturing highly generalizable features. These studies demonstrated that different pretext tasks are complementary and that combining them results in the acquisition of more generalizable features. The visual representations learned by solving most of the pretext tasks have limited generalizability and performance on downstream tasks compared to the ImageNet pre-trained models.
**Clustering:**
Clustering-based methods are another type of unsupervised method for learning visual representations. They alternate between clustering the representations and learning to predict the cluster assignment. Instead of directly comparing features as in contrastive learning, SwAV[21] clusters the data while enforcing consistency between cluster assignments produced for different augmentations (or views) of the same image. Researchers in [33] demonstrated that k-means assignments could be used as pseudo labels to learn visual representations. In [34], how to cast the pseudo-label assignment problem as an instance of the optimal transport problem have been demonstrated. Despite the fact that clustering-based methods have been very effective for learning visual representations, due to the need to alternate between clustering and feature learning, they require high computing resources.
**Contrastive self-supervised learning**:
Most contrastive self-supervised learning methods have emerged in the last two years. These methods minimized the gap between supervised and unsupervised feature learning. We refer readers to [35] and [36] to learn more about contrastive self-supervised learning algorithms. The main idea of contrastive learning is to bring pairs of positive samples closer together and pairs of negative samples further apart in the feature space. In practice, the performance of contrastive learning methods is highly dependent on a large number of negative samples [35]. As a result, a large number of negative samples must be provided. For instance, the PIRL [37] algorithm stores negative samples in a memory bank, while the MoCo [38] algorithm maintains a queue of negative samples encoded by a momentum encoder. In contrast, the SimCLR [22] algorithm generates negative examples with large batch sizes, necessitating the use of significant computational resources. Unlike other contrastive self-supervised learning methods, the SimSiam algorithm does not need a memory bank or a large batch size. Therefore, it requires fewer computational resources.
### _Self-supervised learning in remote sensing_
Recently, some researchers have attempted to apply self-supervised learning algorithms and concepts to remote sensing images. In [16], multi-scale spatial features from high-resolution remote sensing images are captured by multiple-layer feature-matching generative adversarial networks (MARTA GANs) and are used for solving land cover classification tasks. In another work, a pretext task is defined that predicts RGB channel information from high-frequency channel information[18]. Additionally, [15] employs image colorization, relative position prediction, and instance discrimination as pretext tasks for learning in-domain representations from remote sensing images. The learned features are evaluated by transferring them to other land cover classification datasets with very few labeled samples. In [39], the MoCo v2 algorithm has been modified by introducing a geography-aware cost function to learn the visual features of remote sensing images. Rather than using regular data augmentation techniques to generate positive samples, the aforementioned work utilized geographic information about the locations that satellites frequently pass over to generate positive samples. The research in [14] has demonstrated that hierarchical pre-training, first on natural images and then on remote sensing images, improves accuracy in downstream tasks. In [40], the effect of different data augmentation methods for contrastive self-supervised learning algorithms on remote sensing datasets has been studied. In [41], a self-supervised learning approach for pre-training weights from remote sensing images is proposed. This approach makes simultaneous use of the correspondence between remote sensing images and geo-tagged audio recordings. Therefore, it pre-trains the features using both the image information and the corresponding audio of each image.
The method we used in this paper is divided into two sections:
1. Self-supervised pre-training using the SimSiam algorithm.
2. Transferring the pre-trained weights in the previous step to downstream tasks and evaluating the generalizability of features by fine-tuning on various land cover classification datasets.
The features learned using most contrastive learning algorithms are highly generalizable. However, the main drawback of these methods is their high computational requirements. In general, this requirement for plenty of computational resources is motivated by the following three main items: 1. negative samples; 2. large batch size; 3. momentum encoder. Unlike other contrastive learning methods, the SimSiam algorithm does not require any of the three items mentioned above, making it significantly more computational resource-efficient. Therefore, we employ this algorithm to learn the visual features of remote sensing images [24]. The schematic of this algorithm is shown in Figure 1:
After applying different data augmentation techniques to the image x, two distinct views, \(x_{1}\) and \(x_{2}\), are generated. Both of these two distinct perspectives are entered into both sides of the Siamese architecture and then processed using the encoder f. This encoder utilizes a ResNet50 backbone, and an MLP projection head to extract the features from the input image. Therefore, each image is converted from pixel space to a smaller feature space. The projection head in the encoder f consists of three layers of MLP with batch normalization layers applied to each fully connected layer, including the output layer. Following the encoder(f), the architecture only has a top-side prediction head module. The prediction head MLP(h) merges the encoder outputs for the \(x_{1}\) view and then matches its dimensions to the encoder output on the bottom side of the architecture. This MLP (h) is composed of two fully connected layers and a hidden layer that has been subjected to a batch normalization layer. The trained weights are shared on both sides of the model. [28] demonstrated that copying weights on both sides of Siamese architectures produces poor results. They have provided a momentum update to avoid this issue. The proposed solution has the disadvantage of requiring a large number of computational resources. The SimSiam employs a stop-gradient operator on one side of the architecture to overcome the requirement for high computational resources as the BYOL[23] algorithm does. When the stop-gradient operator is applied to any side of the network, the gradients on that side are not updated via backpropagation. The proposed cost function is simple and is defined as a function of the cosine similarity of two vectors. If we illustrate the output vectors of two views with \(p_{1}\triangleq h\big{(}f(x_{1})\big{)}\) and \(z_{2}\triangleq f(x_{2})\), where \(h\big{(}f(x_{1})\big{)}\) represents the output of the prediction head applied to f(x\({}_{1}\)) and f(.) is the backbone which is applied to both views, then the objective function can be defined as follows:
\[D(p_{1},z_{2})=-\frac{p_{1}}{\|p_{1}\|_{2}}\cdot\frac{z_{2}}{\|z_{2}\|_{2}}\]
In this equation, \(\|.\|\) represents the \(L_{2}\) norm. Finally, the total cost function is a symmetric function, which is defined as follows:
\[L=\frac{1}{2}\;D(p_{1},z_{2})+\frac{1}{2}\;D(p_{2},z_{1})\,,p_{2}\;\triangleq h \big{(}f(x_{2})\big{)}\]
The cost obtained for all images in the batch is averaged and taken as the total loss. The stop-gradient operator is a critical component that makes this algorithm work well. This operator is applied to the features extracted from each view; therefore, the final cost function is defined as follows:
\[L=\frac{1}{2}\;D(p_{1},stopgrad(z_{2}))+\frac{1}{2}\;D(p_{2},stopgrad(z_{1}))\]
This relationship demonstrates that the defined cost function is perfectly symmetric. Additionally, gradients are updated only when a corresponding view enters the network from the top side of the architecture [24].
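To make the loss concrete, the following is a minimal PyTorch-style sketch of the symmetric objective above (not the authors' released code); `p1`, `p2`, `z1`, `z2` are assumed to be batches of prediction-head and encoder outputs for the two views.

```python
import torch.nn.functional as F

def negative_cosine_similarity(p, z):
    # D(p, z): negative cosine similarity; z.detach() realizes the stop-gradient operator.
    z = z.detach()
    return -(F.normalize(p, dim=1) * F.normalize(z, dim=1)).sum(dim=1).mean()

def simsiam_loss(p1, z1, p2, z2):
    # L = 0.5 * D(p1, stopgrad(z2)) + 0.5 * D(p2, stopgrad(z1)), averaged over the batch.
    return 0.5 * negative_cosine_similarity(p1, z2) + \
           0.5 * negative_cosine_similarity(p2, z1)
```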
## 4 Datasets
We conducted our experiments using two sets of remote sensing datasets. The first category contains datasets selected for self-supervised pre-training with the SimSiam algorithm, while the second category contains datasets used to evaluate the features learned.
### _Self-supervised pre-training datasets_
We have used MLRSNet, NWPU-RESISC45, and PatternNet to pre-train general representations using SimSiam. These datasets have different characteristics, of which the number of classes, number of samples, and spatial resolution are the most noticeable discrepancies. The variety of attributes in the pre-training datasets can help us identify the vital factors that make a dataset a good choice for acquiring general representations.
**MLRSNet:** MLRSNet is a multi-label high spatial resolution remote sensing dataset for semantic scene understanding. It contains 109,161 remote sensing images that are annotated into 46 categories, and the number of sample images in a category varies from 1,500 to 3,000. The images have a fixed size of 256\(\times\)256 pixels with various pixel resolutions (\(\sim\)0.1m to 10m). Moreover, the number of labels associated with each image varies from 1 to 13. The dataset can be used for multi-label based image classification, multi-label based image retrieval, and image segmentation.
**NWPU-RESISC45[10]:** This dataset contains 31.5k images classified into 45 classes. The images have relatively high spatial resolution, ranging from about 0.2m to 30m per pixel for many samples. During the pre-training phase, we used all of the images in the dataset. During the transfer learning phase, when fine-tuning the learned weights, we used 60% of the data for training, 20% for validation, and 20% for testing.
Figure 1: SimSiam architecture
**PatternNet[25]:** PatternNet has a higher spatial resolution than Resisc45 and consists of 38 classes with 800 images per class; therefore, there are 30.4k samples in total. The image size is 256x256 pixels, and the spatial resolution is between 0.06m and 4.96m.
### _Downstream Datasets_
In addition to the Resisc45 and PatternNet, we have evaluated the pre-trained representations using three distinct datasets with the following characteristics:
**AID[28]**: The dataset contains 10,000 RGB images with a resolution of 600x600 pixels divided into 30 classes. The spatial resolution of images is about 0.5m to 8m.
**EuroSAT[9]**: The dataset contains 27,000 images with 64x64 pixel dimensions classified into ten classes. It is available in two versions: one with 13 spectral channels and one with three RGB channels. The spatial resolution is about 10m to 30m, which is relatively low. We conducted our experiments using the three-channel version.
**UC_Merced[29]:** The dataset has 2,100 images divided into 21 classes with a resolution of 0.3m and image sizes 256x256.
The following table summarizes the characteristics of the datasets used in this article.
## 5 Experiments
Our primary objective is to obtain meaningful representations from aerial imagery in an unsupervised manner and use them to tackle the domain difference issue. As a result, we use the obtained weights either as initial weights or as feature extractors. Some of our experiments are inspired by [17]; however, that work considered a supervised approach to learning visual representations. Additionally, through extensive experiments, we examine the effect of the pre-training dataset used with the SimSiam algorithm. We conducted our experiments using the PyTorch and PyTorch Lightning [42] frameworks on an Ubuntu system equipped with a Quadro P6000 GPU. We repeated each experiment five times and report the average results.
### _Self-supervised pre-training using SimSiam_
In the first phase, we performed in-domain self-supervised pre-training using the SimSiam algorithm on all instances of the MLRSNet, Resisc45, and PatternNet datasets. As previously described, the SimSiam algorithm utilizes an encoder (f) that consists of a backbone and a projection head. In our experiments, we used ResNet50 as the backbone and applied slight changes to the number of neurons in the projection and prediction heads. The projection head is a three-layer MLP with 1024 and 512 neurons in its hidden layers. The predictor module (h) follows the encoder module; it consists of a two-layer MLP with 256 neurons in its hidden layer. We trained this SimSiam model on MLRSNet, Resisc45, and PatternNet for 100k iterations. We used the SGD optimizer with a batch size of 128 and a base learning rate of 0.05 during training, as well as the MultiStepLR scheduler. We set the weight decay and SGD momentum to \(10^{-5}\) and 0.9, respectively. We also initialized from ImageNet pre-trained weights during self-supervised pre-training; according to the results of [14], this leads to higher accuracy on downstream tasks and decreases the time required for convergence. By performing this experiment, we obtained three distinct models pre-trained on datasets with different characteristics.
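A rough sketch of this pre-training loop is given below, assuming a `model` whose forward pass returns the predictor output `p` and the projector output `z` for one view, a two-view `train_loader`, and the `simsiam_loss` sketched in Section 3; the scheduler milestones are illustrative placeholders, since only the scheduler type is stated above.

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.05,
                            momentum=0.9, weight_decay=1e-5)
# MultiStepLR as described; the milestone iterations are placeholders, not reported values.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[60_000, 80_000], gamma=0.1)

data_iter = iter(train_loader)
for step in range(100_000):                 # 100k iterations, batch size 128
    (x1, x2), _ = next(data_iter)           # two augmented views of each image
    p1, z1 = model(x1)
    p2, z2 = model(x2)
    loss = simsiam_loss(p1, z1, p2, z2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                        # (re-create data_iter when exhausted)
```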
### _Transfer Learning to downstream tasks_
In this experiment, we fine-tuned the resulting models on five remote sensing datasets with different characteristics and report the overall accuracy for each dataset to evaluate the quality of the pre-trained representations. We used 60% of each dataset as a training set, 20% as a validation set, and the remaining 20% as a test set to solve the downstream tasks. We also used the Adam optimizer with a batch size of 64 and the ReduceLROnPlateau scheduler. We fine-tuned all of the models for 100 epochs.
Our data augmentation pipelines are as follows:
We first resize all images to 256x256 pixels for all downstream datasets except EuroSAT and then apply random horizontal or vertical flips. We crop 224x224 pixels from the center of the resulting image. Finally, each dataset is normalized using the mean and standard deviation of the pixel intensities. We repeated each experiment five times and reported the average results.
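Assuming torchvision is used (the paper only names PyTorch), the described fine-tuning pipeline could be expressed roughly as follows; `dataset_mean` and `dataset_std` are the per-dataset statistics mentioned above.

```python
from torchvision import transforms

fine_tune_transform = transforms.Compose([
    transforms.Resize((256, 256)),        # skipped for EuroSAT (64x64 images)
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=dataset_mean, std=dataset_std),
])
```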
In Table II, we compare our results to those reported in [17].
In [17], the ResNet50 model is pre-trained in a supervised manner on various remote sensing datasets. The model's final parameters are then fine-tuned using the Resisc45, UCM, EuroSAT, and other datasets. As shown in Table II, the model self-supervised pre-trained on the high-resolution
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Dataset & Image size & Size & Classes & Resolution \\ & & & & (m) \\ \hline MLRSNet & 256x256 & 109.16k & 46 & 0.1-10m \\ \hline Resisc45 & 256x256 & 31.5k & 45 & 0.2-30m \\ \hline PatternNet & 256x256 & 30.4k & 38 & 0.06-4.96m \\ \hline AID & 600x600 & 10k & 30 & 0.5-8m \\ \hline UCM & 256x256 & 2.1k & 21 & 0.3m \\ \hline EuroSat & 64x64 & 27k & 10 & 10-30m \\ \hline \end{tabular}
\end{table} TABLE I: General characteristics of the selected datasets.
PatternNet dataset outperforms the ImageNet pre-trained model and other in-domain supervised models.
In Table III, we have compared our best results obtained by Sim-PatternNet to some of the best available models. The results indicate that self-supervised pre-training using the SimSiam algorithm produced the best results across different land cover classification datasets.
**Table III:** Comparison of results on selected remote sensing datasets. Our best results were obtained by fine-tuning in-domain representations captured by the SimSiam algorithm (Averaged over 5 runs).
\begin{tabular}{|l|c|l|l|} \hline
**Dataset** & **Reference** & **Description** & **Acc** \\ & & & **(\%)** \\ \hline \multirow{4}{*}{AID} & [15] & Unsupervised & 78.83 \\ & [41] & Unsupervised & 84.44 \\ & [9] & Supervised & 94.38 \\ & [19] & Supervised & 95.58 \\ & [6] & Supervised & 97.30 \\ & **Ours** & Unsupervised & **97.83** \\ \hline \multirow{4}{*}{EuroSAT} & [15] & Unsupervised & 76.37 \\ & [5] & Supervised & 99.20 \\ & [43] & Unsupervised & 98.91 \\ & [9] & Supervised & 98.57 \\ & **Ours** & Unsupervised & **99.26** \\ \hline \multirow{4}{*}{UCM} & [41] & Unsupervised & 89.71 \\ & [9] & Supervised & 96.42 \\ & [5] & Supervised & 99.61 \\ & [44] & Supervised & 92.40 \\ & [45] & Supervised & 97.10 \\ & [12] & Supervised & 99.41 \\ & [7] & Supervised & 98.50 \\ & **Ours** & Unsupervised & **99.90** \\ \hline \multirow{4}{*}{PatternNet} & [25] & Supervised & 96.65 \\ & [6] & Supervised & 99.84 \\ & **Ours** & Unsupervised & **99.90** \\ \hline \multirow{4}{*}{Resisc45} & [41] & Unsupervised & 84.88 \\ & [43] & Unsupervised & 96.28 \\ & [5] & Supervised & 96.83 \\ \cline{1-1} & [6] & Supervised & 97.03 \\ \cline{1-1} & **Ours** & Unsupervised & **97.20** \\ \hline \end{tabular}
### _Choosing the appropriate dataset for self-supervised pre-training_
**A. Fine-tuning all layers**
In this section, by conducting detailed experiments, we have examined the effect of the pre-training dataset on the final accuracy of downstream tasks to determine the effective characteristics for selecting the pre-training dataset using the SimSiam algorithm. For this purpose, we have used MLRSNet, Resisc45, and PatternNet for pre-training using the SimSiam. These datasets have different attributes in the number of samples, class diversity, and spatial resolutions. The similarity of the pre-training or source dataset to the downstream or target dataset is a critical factor affecting the accuracy of land cover classification tasks, as has been discussed for the supervised approach [17]. However, for representation learning from remote sensing images using a contrastive self-supervised learning approach, potentially influential factors must be examined through deliberate experiments.
The class similarity is a proxy for the similarity between the source and target datasets. We calculated it by comparing the number of identical classes in the pre-training and downstream datasets. Table IV indicates that the downstream datasets used in our experiments are, on average, more similar to Resisc45 than to PatternNet and MLRSNet. Another factor that promotes the learning of global features and, as a result, high performance on target datasets is the class diversity of the pre-training dataset [17]; the higher the class diversity of the source dataset, the higher the generalizability of the pre-trained features on target tasks. Although Resisc45 has higher class diversity and more similarity to the target datasets, the features pre-trained on PatternNet generalize better on all downstream tasks. A vital factor that comes into view is the spatial resolution of the source datasets. It is between 0.06m and 4.96m for PatternNet, 0.1m and 10m for MLRSNet, and 0.2m and 30m for Resisc45; the spatial resolution of PatternNet is therefore higher than that of Resisc45 and MLRSNet. High spatial resolution makes the edges of objects in remote sensing images sharper, and because self-supervised learning methods derive their supervisory signals from the data itself, these sharper edges accentuate the differences between objects in the images. As a result, the SimSiam model can better learn the differences between objects in the dataset. This suggests that the importance of other factors, such as class similarity, class diversity, and the number of samples, for learning general features from remote sensing images with SimSiam becomes most apparent when the pre-training dataset already has a high spatial resolution. MLRSNet is much larger than PatternNet, yet the features pre-trained on PatternNet generalize better than those from MLRSNet. In other words, although PatternNet has lower class diversity, lower class similarity to the target datasets, and fewer samples than Resisc45 and MLRSNet, these factors still contribute for PatternNet, while its higher spatial resolution appears to be decisive. All these factors make PatternNet an appropriate source for pre-training visual features using the SimSiam algorithm.
We compare the results of fine-tuning the different pre-trained models in Table V. These results demonstrate that, despite Resisc45's high similarity to the downstream datasets and the diversity of its classes, the model pre-trained on PatternNet performs significantly better when solving land cover classification tasks.
\begin{table}
\end{table} TABLE IV: Class similarity of MLRSNet, Resisc45 and PatternNet to the downstream tasks. PD and DS stand for Pretraining Dataset and DownStream task, respectively.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(\mathbf{PD}\downarrow\) \(\backslash\) \(\mathbf{DS}\rightarrow\) & AID & EuroSAT & UCM \\ \hline Resisc45 & 97.62 & 97.75 & 98.24 \\ \hline MLRSNet & 97.78 & 98.45 & 98.85 \\ \hline PatternNet & **97.83** & **99.26** & **99.90** \\ \hline \end{tabular}
\end{table} TABLE V: Fine-tuning results (overall accuracy, \%) of the models pre-trained on different datasets (Averaged over 5 runs). The pre-trained model on the PatternNet performs better than other models. PD and DS stand for Pretraining Dataset and DownStream task, respectively.
According to Table IV, the similarity of AID to MLRSNet, Resisc45, and PatternNet is 66.6%, 60%, and 30%, respectively. Nevertheless, fine-tuning the weights obtained from the PatternNet dataset on AID performs better than the other models. These results indicate that, when using the SimSiam algorithm to learn visual representations from remote sensing images, choosing a higher-resolution pre-training dataset is a critical factor with a large impact on the final performance of downstream tasks. However, these conclusions are based on only three datasets, and additional experiments with more diverse datasets are required to make more precise generalizations.
**B. Linear evaluation with a limited number of samples**
In this section, by using linear evaluation, we further examined the quality of the pre-trained features. In the figure below, we have shown the general outline of the linear evaluation.
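As a sketch of this linear evaluation protocol (the backbone is frozen and used purely as a feature extractor, and only a linear classifier is trained on the few labeled samples per class), with `pretrained_backbone`, `feature_dim`, `num_classes`, and `labeled_loader` as placeholders rather than names from the original implementation:

```python
import torch
import torch.nn as nn

backbone = pretrained_backbone            # e.g., the pre-trained ResNet50 without its fc layer
for param in backbone.parameters():
    param.requires_grad = False           # frozen feature extractor

classifier = nn.Linear(feature_dim, num_classes)
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for images, labels in labeled_loader:     # e.g., 5/10/20/50 labeled images per class
    with torch.no_grad():
        features = backbone(images)       # features from the frozen backbone
    loss = criterion(classifier(features), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```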
## 6 Conclusions
Recently, contrastive learning, a subset of self-supervised learning, has made significant progress in learning general visual representations of natural images. The available remote-sensing datasets differ in the number of samples and channels, spatial resolution, and image size. Therefore, it is necessary to examine the transferability of self-supervised pre-trained
\begin{table}
\begin{tabular}{|c|cccc|cccc|cccc|} \hline \multirow{3}{*}{\(\mathbf{PD}\downarrow\) \(\backslash\) \(\mathbf{DS}\rightarrow\)} & \multicolumn{4}{c|}{AID} & \multicolumn{4}{c|}{EuroSAT} & \multicolumn{4}{c|}{UCM} \\ & \multicolumn{4}{c|}{Images per class} & \multicolumn{4}{c|}{Images per class} & \multicolumn{4}{c|}{Images per class} \\ & 5 & 10 & 20 & 50 & 5 & 10 & 20 & 50 & 5 & 10 & 20 & 50 \\ \hline ImageNet & 45.45 & 52.36 & 63.14 & 70.17 & 39.36 & 46.45 & 51.22 & 59.71 & 40.43 & 50.33 & 56.72 & 63.21 \\ \hline Resisc45 & 72.32 & 75.44 & 81.74 & 86.56 & 77.50 & 80.12 & 85.16 & 90.93 & 77.89 & 82.11 & 87.95 & 92.15 \\ \hline MLRSNet & 73.34 & 77.10 & 82.52 & **89.52** & 79.31 & 83.27 & 88.87 & 92.58 & 80.92 & 84.60 & 90.37 & **94.85** \\ \hline PatternNet & **73.89** & **78.25** & **85.13** & 89.33 & **80.02** & **84.19** & **89.55** & **92.31** & **81.65** & **85.87** & **91.70** & 94.66 \\ \hline \end{tabular}
\end{table} TABLE VI: Results of linear evaluation under a limited number of samples (Averaged over 5 runs). The pre-trained model on the PatternNet performs better than other models. PD and DS stand for Pretraining Dataset and DownStream task, respectively.
Figure 2: Schematic of a linear evaluator. The pre-trained model serves as a feature extractor.
features from remote sensing images and determine the right factors that make a dataset a good choice for feature pre-training. In this paper, we utilized SimSiam for in-domain general feature learning from three remote-sensing datasets with different characteristics. The pre-trained weights were then evaluated by fine-tuning and linear evaluation on other land cover classification datasets, achieving state-of-the-art results. Our deliberate experiments demonstrate that, for contrastive self-supervised pre-training of remote-sensing images, higher-resolution datasets lead to better performance on downstream tasks.
**Data Availability Statement (DAS)**
The data that support the findings of this study are available from the corresponding author, [H.S.], upon reasonable request.
|
2305.09368 | Unsupervised sequence-to-sequence learning for automatic signal quality
assessment in multi-channel electrical impedance-based hemodynamic monitoring | This study proposes an unsupervised sequence-to-sequence learning approach
that automatically assesses the motion-induced reliability degradation of the
cardiac volume signal (CVS) in multi-channel electrical impedance-based
hemodynamic monitoring. The proposed method attempts to tackle shortcomings in
existing learning-based assessment approaches, such as the requirement of
manual annotation for motion influence and the lack of explicit mechanisms for
realizing motion-induced abnormalities under contextual variations in CVS over
time. By utilizing long-short term memory and variational auto-encoder
structures, an encoder--decoder model is trained not only to self-reproduce an
input sequence of the CVS but also to extrapolate the future in a parallel
fashion. By doing so, the model can capture contextual knowledge lying in a
temporal CVS sequence while being regularized to explore a general relationship
over the entire time-series. A motion-influenced CVS of low-quality is
detected, based on the residual between the input sequence and its neural
representation with a cut--off value determined from the two-sigma rule of
thumb over the training set. Our experimental observations validated two
claims: (i) in the learning environment of label-absence, assessment
performance is achievable at a competitive level to the supervised setting, and
(ii) the contextual information across a time series of CVS is advantageous for
effectively realizing motion-induced unrealistic distortions in signal
amplitude and morphology. We also investigated the capability as a
pseudo-labeling tool to minimize human-craft annotation by preemptively
providing strong candidates for motion-induced anomalies. Empirical evidence
has shown that machine-guided annotation can reduce inevitable human-errors
during manual assessment while minimizing cumbersome and time-consuming
processes. | Chang Min Hyun, Tae-Geun Kim, Kyounghun Lee | 2023-05-16T11:52:06Z | http://arxiv.org/abs/2305.09368v2 | [
###### Abstract
This study proposes an unsupervised sequence-to-sequence learning approach that automatically assesses the motion-induced reliability degradation of the cardiac volume signal (CVS) in multi-channel electrical impedance-based hemodynamic monitoring. The proposed method attempts to tackle shortcomings in existing learning-based assessment approaches, such as the requirement of manual annotation for motion influence and the lack of explicit mechanisms for realizing motion-induced abnormalities under contextual variations in CVS over time. By utilizing long-short term memory and variational auto-encoder structures, an encoder-decoder model is trained not only to self-reproduce an input sequence of the CVS but also to extrapolate the future in a parallel fashion. By doing so, the model can capture contextual knowledge lying in a temporal CVS sequence while being regularized to explore a general relationship over the entire time-series. A motion-influenced CVS of low-quality is detected, based on the residual between the input sequence and its neural representation with a cut-off value determined from the two-sigma rule of thumb over the training set. Our experimental observations validated two claims: (i) in the learning environment of label-absence, assessment performance is achievable at a competitive level to the supervised setting, and (ii) the contextual information across a time series of CVS is advantageous for effectively realizing motion-induced unrealistic distortions in signal amplitude and morphology. We also investigated the capability as a pseudo-labeling tool to minimize human-craft annotation by preemptively providing strong candidates for motion-induced anomalies. Empirical evidence has shown that machine-guided annotation can reduce inevitable human-errors during manual assessment while minimizing cumbersome and time-consuming processes. The proposed method has a significance especially in the industrial field, where it is unavoidable to gather and utilize a large amount of CVS data to achieve high accuracy and robustness in real-world applications.
## 1 Introduction
Multi-channel electrical impedance (MEI)-based cardiopulmonary monitoring has emerged as a promising alternative to conventional technologies (e.g., mechanical ventilation and cardiac catheterization), whose invasive nature causes discomfort and inconvenience to the subject [4, 8, 10, 17, 20, 47]. MEI measurement relies entirely on several electrodes attached around the human chest and samples temporal data with a fine time resolution of approximately 0.01-0.02s, so that it is capable of non-invasive, real-time, and long-term continuous monitoring [1, 36]. MEI techniques for lung ventilation tracking have reached a level suitable for commercial and practical use [33, 42]; however, their use for hemodynamic monitoring remains in question in terms of accuracy, reliability, and long-term persistence [31]. By virtue of recent endeavors in the bio-impedance field, accurate and continuous extraction of the feeble cardiogenic component of the MEI measurement, known as the cardiac volume signal (CVS), allows a reliable long-term trace of pivotal hemodynamic quantities, such as stroke volume and cardiac output, in a fully non-invasive manner [16, 23].
However, motion changes the picture entirely: it is a critical troublemaker that considerably interferes with MEI-based measurements [1, 7, 43, 45]. Owing to the relatively weak cardiogenic impedance change [5, 25], the hemodynamic monitoring system is notably vulnerable, resulting in a significant loss of accuracy and trustworthiness of the extracted CVS [13]. Although several studies [7, 24, 39, 43] have attempted to overcome this hurdle by recovering normal MEI measurements under motion influence, their effectiveness appears uncertain in real-world situations with a colossal diversity of motion. Accordingly, as a clinical and industrial compromise within the current MEI technology, there is a preferential demand for sieving out motion-influenced CVS of low quality [23]. This filtering is intended to warn a device operator of motion corruption in order to minimize negative ripple effects, such as a waste of clinical resources and misapprehension regarding health conditions, to the maximum extent [6, 13].
To this end, an accurate CVS quality-indexing method that assesses motion-induced reliability degradation needs to
be developed. Automation and timeliness are crucial as well for the real-time monitoring. To satisfy these requirements, a data-driven solution using machine learning (ML) can be a good fit [3, 12, 14, 21, 44]. Hyun _et al._[13] recently paved the way for the development of ML-based CVS assessment approaches in a supervised learning framework. Their idea was based on the construction of a barcode-like feature from labeled data, differentiating anomalies from normal CVSs in an individual cardiac cycle.
Despite their remarkable performance, there is still room for further improvement. The first issue is the requirement for manual annotation of motion influence. To enhance model generalization or stability for real-world use, a large amount of data collection and labeling is required; however, this is expensive, time-consuming, and cumbersome [19, 29]. Moreover, labeling is prone to inevitable human errors, which may cause biased learning such that the model performance or robustness is limited in practical circumstances [9, 41, 46]. The second issue is the lack of explicit mechanisms for realizing motion-induced anomalies under contextual variations in CVS over time. Because of the strong regularity and periodicity induced by the heartbeat, as highlighted in Figure 1, contextual perception across a time series of CVS, in addition to individual values, is vital for identifying motion influence, even for bio-signal experts.
To tackle these shortcomings, this study hence proposes an unsupervised sequence-to-sequence learning approach. By utilizing long short-term memory (LSTM) and variational auto-encoder (VAE) structures [11, 18], an encoder-decoder model is trained not only to self-reproduce an input sequence of CVS but also to extrapolate the future in a parallel fashion. By doing so, the model can capture contextual knowledge lying in a temporal sequence while being regularized to explore a general relationship over a time-series [26, 40]. The sequence is defined such that its point is either a value of the CVS (point-to-point) or its group during a certain heartbeat interval (cycle-to-cycle), the timing of which is identified from a synchronized electrocardiography (ECG) signal. A motion-influenced CVS of low-quality is detected, based on the residual between an input sequence and its neural representation with a cut-off value determined from the two-sigma rule of thumb [15, 32] over the training set.
The experimental observations validated the following: In a label-absence learning environment, the assessment is achievable at a level competitive with the supervised setting. The best model achieved an accuracy of 0.9566, true positive rate of 0.9743, true negative rate of 0.7001, and AUC of 0.9484, which were comparable to those in the supervised setting, with an accuracy of 0.9672, true positive rate of 0.9847, true negative rate of 0.7151, and AUC of 0.9503. Contextual knowledge across a time series of CVS is advantageous for effectively realizing motion-induced unrealistic distortions in signal amplitude and morphology. In particular, the enriched time context significantly improved the true negative rate and AUC. Between the two proposed approaches, the cycle-to-cycle model outperforms the point-to-point model. The former achieved an accuracy of 0.9566, true positive rate of 0.9743, true negative rate of 0.7001, and AUC of 0.9484, whereas the latter achieved an accuracy of 0.9439, true positive rate of 0.9775, true negative rate of 0.4520, and AUC of 0.8338.
We also investigated the capability as a pseudo-labeling tool [22, 35] to minimize human-craft annotation by preemptively providing strong candidates for motion-induced anomalies. Empirical evidence has shown that machine-guided annotation can reduce inevitable human-errors during manual assessment. This shows that the proposed method can synergize with supervised learning as an aide means, not only an alternative branch.
## 2 Method
The main objective of this study is to find a CVS quality assessment map \(\mathbf{f}:\mathbf{x}_{t}\mapsto\mathbf{y}_{t}\), where \(\mathbf{x}_{t}\) is an extracted CVS value from the MEI-based hemodynamic monitoring device [13, 23] at a certain time \(t\) and \(\mathbf{y}_{t}\) is the corresponding quality index defined by
\[\mathbf{y}_{t}=\mathbf{f}(\mathbf{x}_{t})=\left\{\begin{array}{rl}1&\text{if }\mathbf{x}_{t} \text{ is normal,}\\ 0&\text{if }\mathbf{x}_{t}\text{ is motion-influenced.}\end{array}\right. \tag{1}\]
Here, 0 and 1 are numeric values representing low- and high-quality classes, respectively. Considering the previous analysis in [13], the transient CVS value \(\mathbf{x}_{t}\) can be decomposed as
\[\mathbf{x}_{t}=\mathbf{x}_{t}^{\text{normal}}+\mathbf{x}_{t}^{\text{motion}}, \tag{2}\]
where \(\mathbf{x}_{t}^{\text{motion}}\) is the motion artifact. See Appendix A for more details. The map \(\mathbf{f}\) in (1) can be viewed as the identification of \(\mathbf{x}_{t}^{\text{motion}}\) in \(\mathbf{x}_{t}\). To construct \(\mathbf{f}\), ML can be directly used in the following supervised learning framework:
\[\mathbf{f}=\underset{\mathbf{f}}{\text{argmin}}\ \sum_{i}\|\mathbf{f}(\mathbf{x}^{(i)})-\mathbf{y}^{(i)}\|, \tag{3}\]
where \(\{(\mathbf{x}^{(i)},\mathbf{y}^{(i)})\}_{i}\) is a paired dataset of CVS values and corresponding quality indices.
However, this approach has two main limitations. The first is a manual annotation. Labeling a large amount of CVS
Figure 1: Context knowledge from past is a basis for identifying motion-induced abnormal variations of CVS in real-time monitoring.
data is extremely costly in terms of human resources and economics. In addition, they are easily exposed to inevitable human errors. One particular source is the ambiguity in the CVS data annotation. For instance, when determining a critical point between normal and motion-influenced time regions, its perfect annotation is almost impossible using only CVS data. Second, point-level identification (1) is not practically advisable despite relation (2). Even for bio-impedance specialists, the realization of the motion influence is based on contextual knowledge associated with periodicity across repeated cardiac cycles and the time-series recognition of CVS variations.
### Unsupervised sequence-to-sequence learning for CVS quality assessment
The proposed method addresses the aforementioned hurdles. An unsupervised framework is used to learn a barcode-like feature, which plays a key role in sieving the influence of motion, from an unlabeled dataset. The network architecture is a recurrent neural network-style VAE, where LSTM is used as a base building block to provide an explicit mechanism of information propagation that enriches time contextuality. This is motivated by [26, 40, 2].
The proposed method is two-fold: (i) point-to-point and (ii) cycle-to-cycle sequence learning (see Figure 2). Their main difference lies in the definition of an input sequence sampled from the time-series CVS data. A sequence point is regarded in the former as a CVS value at a certain time and in the latter as a vector gathering all CVS data during a heartbeat interval. Cardiac cycle timing is obtained with the additional use of a synchronized ECG signal.
#### 2.1.1 Point-to-point
The point-to-point model \(\mathbf{f}^{\text{pt}}\) aims to provide
\[\mathbf{f}^{\text{pt}}(\mathcal{X}_{t+r})=\mathbf{y}_{t+r}, \tag{4}\]
Figure 2: Unsupervised sequence-to-sequence learning models for automatic signal quality assessment in multi-channel impedance-based hemodynamic monitoring: (a) point-to-point and (b) cycle-to-cycle.
where \(\mathcal{X}_{t+r}\) is a consecutive sequence of CVS of length \(r\), defined by
\[\mathcal{X}_{t+r}=\left[\mathbf{x}_{t+1},\mathbf{x}_{t+2},\cdots,\mathbf{x}_{t+r}\right]. \tag{5}\]
Here, \(t+r\) is set as the current time step for convenience of notation. The overall process is illustrated in Figure 3. The assessment is performed based on histories of CVS values (\(\mathcal{X}_{t+r}\)) to leverage contextual knowledge.
The map \(f^{\text{pt}}\) can be expressed as follows:
\[\mathbf{f}^{\text{pt}}=\mathcal{T}\circ(\mathbf{D}\circ\mathcal{E}-\mathcal{P}), \tag{6}\]
where
* \(\circ\) is the composition operation of functions.
* \(\mathcal{P}\) is an operator that reverses the order of a vector.
* \(\mathcal{D}\circ\mathcal{E}\) is a VAE-LSTM model.
* \(\mathcal{T}\) is an assessment function with a cut-off value \(\tau\).
Here, a learning process is required only for \(\mathcal{D}\circ\mathcal{E}\).
Schematically, the encoder \(\mathcal{E}\) and decoder \(\mathcal{D}\) are trained to satisfy
\[\mathbf{D}\circ\mathcal{E}(\mathcal{X}_{t+r})-\mathcal{P}(\mathcal{ X}_{t+r})\approx\mathbf{0}, \tag{7}\] \[\mathbf{D}_{\text{pred}}\circ\mathcal{E}(\mathcal{X}_{t+r})\approx \mathcal{X}_{t+r,p}, \tag{8}\]
where \(\mathbf{D}_{\text{pred}}\) is another decoder used only for training purposes and \(\mathcal{X}_{t+r,p}\) is the consecutive future sequence of the CVS with a length of \(p\), defined by
\[\mathcal{X}_{t+r,p}=\left[\mathbf{x}_{t+r+1},\mathbf{x}_{t+r+2},\cdots,\mathbf{x}_{t+r+p} \right]. \tag{9}\]
Here, the decoder \(\mathbf{D}\) reproduces the input sequence \(\mathcal{X}_{t+r}\) in reverse order, as shown in Figure 2 (a). The condition (7) causes \(\mathcal{E}\) and \(\mathbf{D}\) to learn a self-representation of the temporal sequence \(\mathcal{X}_{t+r}\). The condition (8) applies a regularization force to explore a general relation over the time-series CVS.
To avoid confusion, we clarify the abuse of notation: \(\mathcal{E}(\mathcal{X}_{t+r})\) denotes \(\mathbf{z}_{t+r}\) in (7) and \(\mathbf{h}_{t+r}\) in (8), where \(\mathbf{z}_{t+r}\) and \(\mathbf{h}_{t+r}\) are defined by
\[\mathbf{z}_{t+r}=[\text{FC}^{\mathbf{Z}}(\mathbf{Z}_{t+r}),\mathbf{C}_{t+r}]\text{ and }\mathbf{h}_{t+r}=[\mathbf{H}_{t+r},\mathbf{C}_{t+r}]. \tag{10}\]
Here, \(\mathbf{H}_{t+r}\) and \(\mathbf{C}_{t+r}\) are outputs (hidden and cell states) in the encoder \(\mathcal{E}\) of LSTM, and \(\mathbf{Z}_{t+r}\) is given by
\[\mathbf{Z}_{t+r}\sim\mathcal{N}(\mathbf{\mu}_{t+r},\text{diag}(\mathbf{\sigma }_{t+r})), \tag{11}\] \[\mathbf{\mu}_{t+r}=\text{FC}^{\mathbf{\mu}}(\mathbf{h}_{t+r}),\mathbf{\sigma}_{t+ r}=\text{FC}^{\mathbf{\sigma}}(\mathbf{h}_{t+r}), \tag{12}\]
where FC is a fully-connected layer with reshaping, \(\text{diag}(\mathbf{\sigma})\) is a matrix whose diagonal entries are given by the components of \(\mathbf{\sigma}\), and \(\mathcal{N}(\mathbf{\mu},\Sigma)\) is a Gaussian distribution with a mean of \(\mathbf{\mu}\) and covariance of \(\mathbf{\Sigma}\). Appendix B explains more details.
A training objective \(\mathbf{J}\) is defined as
\[\mathbf{J}(\mathcal{X}_{t+r},\mathcal{X}_{t+r,p}) =\left\lVert\mathbf{D}(\mathbf{z}_{t+r})-\mathcal{P}(\mathcal{X}_{t+r}) \right\rVert_{\ell_{2}}^{2}\] \[+\left\lVert\mathbf{D}_{\text{pred}}(\mathbf{h}_{t+r})-\mathcal{X}_{t+r, p}\right\rVert_{\ell_{2}}^{2}\] \[+\text{KL}(\mathcal{N}(\mathbf{\mu}_{t+r},\text{diag}(\mathbf{\sigma}_{t+ r}))|\mathcal{N}(\mathbf{0},\mathbf{I})), \tag{13}\]
where KL is the Kullback-Leibler divergence. The encoder \(\mathcal{E}\) and decoder \(\mathcal{D}\) are optimized in the following sense:
\[(\mathcal{E},\mathcal{D},\mathcal{D}_{\text{pred}})=\underset{(\mathcal{E},\mathcal{D},\mathcal{D}_{\text{pred}})}{\text{argmin}}\ \mathbb{E}\left[\mathbf{J}(\mathcal{X}_{t+r},\mathcal{X}_{t+r,p})\right], \tag{14}\]
where \(\mathbb{E}\) denotes the empirical expectation over the training pairs of input and future sequences. Note that the optimization (14) does not involve any loss term associated with the corresponding label \(\mathbf{y}\).
The remaining section explains the assessment function \(\mathcal{T}\). We define \(\mathcal{T}\) as
\[\mathcal{T}(\mathbf{a})=\left\{\begin{array}{ll}1&\text{if}\ \ \|\mathbf{a}\|\leq\tau\\ 0&\text{if}\ \ \|\mathbf{a}\|>\tau\end{array}\right., \tag{15}\]
where \(\|\cdot\|\) is either \(\ell_{1}\) or \(\ell_{2}\) norm. In \(\mathbf{f}^{\text{pt}}\), \(\mathbf{a}\) corresponds to the residual between the original CVS sequence \(\mathcal{P}(\mathcal{X})\) and its neural representation \(\mathbf{D}\circ\mathcal{E}(\mathcal{X})\), which is known as an excellent abnormality estimator [2, 13].
The lingering question is how to determine the cut-off value \(\tau\) in (15). We define a set \(\mathbb{T}_{\tau}\) as
\[\mathbb{T}_{\tau}=\left\{a^{(i)}=\left\lVert\mathcal{P}(\mathcal{X}^{(i)})-\widehat{\mathcal{X}}^{(i)}\right\rVert\ \Big{|}\ a^{(i)}\leq\tau,\ i=1,\cdots,N\ \right\}, \tag{16}\]
where \(\{\mathcal{X}^{(i)}\}_{i=1}^{N}\) is a training dataset of \(N\) sequences and \(\widehat{\mathcal{X}}^{(i)}=\mathbf{D}\circ\mathcal{E}(\mathcal{X}^{(i)})\). The cut-off value \(\tau\) is determined such that it satisfies the following \(2\sigma\)-rule:
\[\begin{array}{l}\underset{\tau\in\mathbb{R}}{\min}\ \tau\\ \text{subject to}\ \ \dfrac{|\mathbb{T}_{\tau}|}{N}\geq 0.9545.\end{array} \tag{17}\]
Here, \(|\cdot|\) is the set cardinality, and \(0.9545\) can be viewed as the probability that an observation sampled from a normal distribution lies within two standard deviations of the mean.
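For illustration, a minimal NumPy sketch of this rule is given below (not part of the original implementation); `residuals` is assumed to be an array of residual norms \(a^{(i)}\) computed over the training sequences.

```python
import numpy as np

def two_sigma_cutoff(residuals, coverage=0.9545):
    """Smallest tau such that at least `coverage` of the training residuals
    fall below it, i.e., the empirical quantile required by (17)."""
    residuals = np.sort(np.asarray(residuals))
    k = int(np.ceil(coverage * len(residuals))) - 1
    return residuals[k]

def assess(residual_norm, tau):
    # T in (15): 1 = normal (high quality), 0 = motion-influenced (low quality).
    return 1 if residual_norm <= tau else 0
```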
The point-to-point model assesses the motion influence at each time point, based on the context formed by the learned relationships between CVS values. However, there is still a gap with the experts' perception. Owing to the
Figure 3: Overall process for the point-to-point model.
characteristics of hemodynamic monitoring, a meaningful and salient context is created according to the heartbeat. Although such a context can ideally be learned with large \(r\) and \(p\), it may be practically restrictive owing to training hurdles associated with very long-term connections and high learning complexity. The cycle-to-cycle model is then modeled to more similarly mimic the heuristic perception of bio-signal experts, who strongly take advantage of contexts associated with heartbeat-related regularity and periodicity.
#### 2.1.2 Cycle-to-cycle
The cycle-to-cycle model \(\mathbf{f}^{\text{cyc}}\) aims to provide
\[\mathbf{f}^{\text{cyc}}(\mathbf{\lambda}_{T+R}^{\text{cyc}})=\mathbf{Y}_{T+R}, \tag{18}\]
where \(\mathbf{\lambda}_{T+R}^{\text{cyc}}\) is a consecutive CVS sequence with a length of \(R\), defined by
\[\mathbf{\lambda}_{T+R}^{\text{cyc}}=\left[\overline{\mathbf{X}}_{T+1},\overline{\mathbf{X} }_{T+2},...,\overline{\mathbf{X}}_{T+R}\right], \tag{19}\]
and \(\mathbf{Y}_{T+R}\in\{0,1\}\) is the corresponding assessment labels. Here, \(\overline{\mathbf{X}}_{T}\) represents a vector gathering all the CVS values during the \(T\)-th cardiac cycle. The assessment is based on accumulated cardiac cycle histories.
However, there are two difficulties: (i) identifying cardiac cycles from consecutive time samples and (ii) handling inconsistent point dimensions caused by heart-rate variability [13, 37]. Definition (19) is informal because the dimension of \(\overline{\mathbf{X}}_{T}\) does not usually match that of \(\overline{\mathbf{X}}_{T^{\prime}}\) for \(T^{\prime}\neq T\).
The structure of \(\mathbf{f}^{\text{cyc}}\) is conceptually equivalent to that of point-to-point, whereas an additional preprocessing \(\mathcal{C}\) is employed to resolve the aforementioned issues. \(\mathbf{f}^{\text{cyc}}\) can be expressed as follows:
\[\mathbf{f}^{\text{cyc}}=\mathcal{T}\circ(\mathbf{D}\circ\mathcal{E}-\mathbf{P})\circ \mathcal{C}. \tag{20}\]
Here, the pre-processing \(\mathcal{C}\) provides an input CVS sequence \(\mathbf{\lambda}_{T+R}^{\text{cyc}}\) from ECG and CVS data (see Figure 4).
The detailed procedures of \(\mathcal{C}\) are as follows: From the synchronized ECG signal data, we first identify the timing of the \((T+R)\)-th cardiac cycle through R-wave peak detection [28]. Thereafter, we obtain \(\overline{\mathbf{X}}_{T+R}\) and interpolate it so that it has a fixed dimension. Denoting the interpolated vector as \(\mathbf{X}_{T+R}\), we obtain
\[\mathbf{\lambda}_{T+R}^{\text{cyc}}=\left[\mathbf{X}_{T+1},\mathbf{X}_{T+2},...,\mathbf{X}_{T +R}\right], \tag{21}\]
In our implementation, linear interpolation was used and the embedding dimension was 150.
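A rough sketch of the pre-processing \(\mathcal{C}\) is shown below, assuming NumPy/SciPy; the R-peak detection thresholds are illustrative only (the paper relies on a dedicated detector [28]), and `fs` denotes the ECG sampling rate.

```python
import numpy as np
from scipy.signal import find_peaks

def cycle_embed(cvs, ecg, fs, dim=150):
    """Split the CVS by cardiac cycles located from ECG R-peaks and linearly
    interpolate each cycle to a fixed embedding dimension."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                          height=np.percentile(ecg, 90))     # crude R-peak detector
    cycles = []
    for a, b in zip(peaks[:-1], peaks[1:]):
        segment = cvs[a:b]
        grid = np.linspace(0, len(segment) - 1, dim)
        cycles.append(np.interp(grid, np.arange(len(segment)), segment))
    return np.stack(cycles)                                   # shape: (#cycles, dim)
```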
The encoder-decoder model \(\mathbf{D}\circ\mathcal{E}\) is trained with the aid of \(\mathcal{D}_{\text{pred}}\) in the same manner as in (14), where \(\mathcal{D}_{\text{pred}}\) is trained to estimate a future sequence \(\mathcal{X}_{T+R,P}^{\text{cyc}}\) with a length of \(P\), defined by
\[\mathcal{X}_{T+R,P}^{\text{cyc}}=\left[\mathbf{X}_{T+R+1},\mathbf{X}_{T+R+2},...,\mathbf{X }_{T+R+P}\right]. \tag{22}\]
The final assessment \(\mathcal{T}\) is in the same manner as (15).
## 3 Experiments and Results
### Experimental Set-up
To evaluate the performance of the proposed method, we used a labeled dataset sourced from [13], in which CVS and synchronized ECG data were obtained from 19 healthy subjects using a commercial MEI-based hemodynamic monitoring device (HemoVista, BiLab, Republic of Korea). At the cardiac-cycle level, the dataset comprises a total of 12,928 normal and 3,212 motion-influenced cycles. We emphasize that the annotated labels were used only for model performance comparison.
We conducted ML experiments in a computing environment with four GeForce RTX 3080 Ti devices, two Intel Xeon CPUs E5-2630 v4, and 128GB DDR4 RAM. Our implementation was based on PyTorch [30] and PyTorch-lightning. See Appendix B for network and training details.
For quantitative analysis, the following evaluation metrics (ACC, TPR, TNR, and AUC) were used, where
\[\text{ACC}=\frac{N_{\text{TP}}+N_{\text{TN}}}{N_{\text{TP}}+N_{\text{TN}}+N_{\text{FP}}+N_{\text{FN}}}, \tag{23}\] \[\text{TPR}=\frac{N_{\text{TP}}}{N_{\text{TP}}+N_{\text{FN}}},\quad\text{TNR}=\frac{N_{\text{TN}}}{N_{\text{TN}}+N_{\text{FP}}}, \tag{24}\]
and AUC is area under the curve of receiver operating characteristic (ROC). Here, \(N_{\text{TP}}\), \(N_{\text{TN}}\), \(N_{\text{FP}}\), and \(N_{\text{FN}}\) are the numbers of true positives, true negatives, false positives, and false negatives, respectively. Because our dataset was highly imbalanced, TNR and AUC were emphasized much more than ACC and TPR.
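Under the standard definitions used above, these metrics can be computed from 0/1 predictions as in the short sketch below, where the positive class (1) denotes a normal CVS.

```python
def evaluation_metrics(y_true, y_pred):
    # Confusion counts and the resulting ACC, TPR, and TNR.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / max(tp + tn + fp + fn, 1)
    tpr = tp / max(tp + fn, 1)
    tnr = tn / max(tn + fp, 1)
    return acc, tpr, tnr
```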
### Results
#### 3.2.1 Point-to-point
This subsection demonstrates experimental results for the point-to-point approach.
_Model performance._ Table 1 summarizes the quantitative evaluation results for varying the reconstruction sequence length \(r\) and prediction \(p\). The empirically best performance
Figure 4: Overall process for the cycle-to-cycle model.
was achieved for \(r=p=200\), which included points of approximately \(1.5\) heartbeat cycles in the reconstruction and prediction sequences. For a given \(r\), \(p\approx r\) tends to produce a higher performance than \(p=0\) in terms of all statistical metrics. In the case of \(r=200\), as \(p\) increased, the model provided improved outcomes owing to enriched contextual information, whereas the model performance degraded for \(p=300\) because of the increased learning complexity or over-regularization. In our empirical implementation, the prediction decoder played a key role in stabilizing the model training and improving the ability to assess the motion influence.
Figure 5 shows the qualitative evaluation. This point-to-point approach has several limitations. In the left case, the model appeared to predict abnormal regions well; however, point-level mis-identifications were frequently observed, which did not occur from a heuristic perspective (see the red dotted boxes). The case shown on the right presents a similar problem. These results motivated us to introduce the cycle-to-cycle approach.
_About the cut-off value \(\tau\)._ We quantitatively compared the model performance by varying the cut-off value \(\tau\) (see Figure 6). We introduced Youden's \(J\)-statistic as an additional evaluation metric, which is defined as
\[J=\text{TPR}+\text{TNR}-1 \tag{25}\]
Because \(J\) is known to be an excellent indicator for determining an optimal cut-off value in a class-imbalanced dataset [34], the proposed method was compared to the model at the cut-off value \(\tau\) that maximizes \(J\). Here, we clarify that \(J\) cannot be obtained in an unsupervised setting. The empirical results in Figure 6 support the claim that the two-sigma rule of thumb in (17) produces a competitive choice.
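For reference, such a label-based sweep could be sketched as follows (NumPy assumed); `residuals` are the per-sequence residual norms and `labels` the 0/1 quality indices, with 1 denoting a normal CVS.

```python
import numpy as np

def best_cutoff_by_youden(residuals, labels, num=200):
    """Pick the cut-off maximizing J = TPR + TNR - 1; only available when labels exist."""
    best_tau, best_j = None, -1.0
    for tau in np.linspace(residuals.min(), residuals.max(), num):
        pred = (residuals <= tau).astype(int)               # 1 = predicted normal
        tp = np.sum((labels == 1) & (pred == 1)); fn = np.sum((labels == 1) & (pred == 0))
        tn = np.sum((labels == 0) & (pred == 0)); fp = np.sum((labels == 0) & (pred == 1))
        j = tp / max(tp + fn, 1) + tn / max(tn + fp, 1) - 1
        if j > best_j:
            best_tau, best_j = tau, j
    return best_tau, best_j
```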
_Model performance._ Table 2 summarizes the quantitative evaluation results for varying \(R\) and \(P\). The overall performance was significantly better than that of the point-to-point model. The empirically best model has \(R=P=2\). Similar to the point-to-point case, the use of prediction was empirically advantageous for enhancing the training stability and the final CVS assessment performance. The qualitative evaluation results are exhibited in Figure 7. The cycle-to-cycle model successfully addressed the limitations of the point-to-point model.
_About the cut-off value \(\tau\)._ In Figure 8, the model performance is quantitatively compared by varying \(\tau\). For the cycle-to-cycle model, the two-sigma rule of thumb provided an outcome very close to the optimum in terms of the \(J\) statistic.
Figure 9 visualizes CVS qualitative assessment results of the cycle-to-cycle model for all the time-series data from one test subject.
#### 3.2.3 Comparison with supervised learning setting
This subsection examines the performance gap between the supervised and unsupervised settings. By utilizing only positive samples from the labeled dataset, the proposed models were trained to learn an underlying low-dimensional distribution from motion-free CVS sequence data, which is desired to be implicitly realized in the unsupervised set-up.
Figure 10 exhibits the corresponding quantitative and qualitative evaluation results for the (a) point-to-point and (b) cycle-to-cycle models. Even though supervised learning was superior, unsupervised learning provided comparable outcomes in a label-absence environment.
### Investigation as a pseudo-labeling tool
This section investigates our method as a pseudo-labeling tool that can not only reduce time-consuming processes but also minimize inevitable errors during manual annotation. Here, the proposed method is considered as an auxiliary means to supervised learning rather than an alternative branch, in which strong candidates for motion-induced anomalies are provided in advance of human annotation.
We found interesting observations that can be viewed as empirical evidence of the capability to reduce inevitable errors in the manual annotation for the motion influence of CVS. Figure 11 (a) shows cases in which the proposed method successfully estimated abnormalities (blue dots), whereas some mistakes were present in the human-craft labels (non-red regions with blue dots).
For quantitative examination, the following pilot study was conducted: A ten-year biosignal expert (Lee) was asked to reannotate one subject's CVS data with knowledge of abnormality predictions using the proposed method. Thereafter, we qualitatively measured the dissimilarity between the original and machine-guided annotations. There was inter-observer variability between them, and the machine-guided annotation approached the machine prediction, as shown in Figure 11 (b) and (c). This observation somewhat demonstrates the capability of the proposed method to reduce human errors during manual annotation as well as minimizing cumbersome and time-consuming processes.
## 4 Conclusion and Discussion
In this study, we propose a novel ML-based CVS quality assessment method for a real-time hemodynamic monitoring system using MEI measurements, during which deliberate or inevitable motions cause significant loss of functionality for the extracted biophysical quantity. Motivated by minimizing the labor, time, and economic burdens associated with manual annotation, the proposed method is an unsupervised learning approach that provides a competitive alternative to supervised learning or, at least, an auxiliary means of labeling support. Its significance can be emphasized in the industrial field, where it is unavoidable to gather and utilize a large amount of CVS data to achieve high accuracy and robustness in real-world applications. The other core is to incorporate the heuristic perception of bioimpedance professionals into the model architecture, where the time context is the key to realizing motion influence in CVS. Two models, point-to-point and cycle-to-cycle, were presented, which were designed to have an explicit mechanism to reflect the time context of CVS data. Recognizing a group of CVSs during a cardiac cycle as a sequence point, the cycle-to-cycle model is advantageous to capture a significant heartbeat context and, thus, shows better performance.
The cycle-to-cycle model requires the use of complementary information from ECG. Fortunately, hemodynamic monitoring systems typically acquire ECG signals simultaneously because of their significance as vital signs. Thus, the use of ECG is practically reasonable.
In practice, one strategy to utilize the developed method is as follows: when a large amount of label-absent CVS data is provided, the method is applied to obtain pseudo-labels. The labeler subsequently annotates the dataset with a reduced workflow, possibly minimizing human errors. The cut-off value \(\tau\) in (15) is initially selected using the two-sigma
Figure 8: Quantitative evaluation by changing \(\tau\) for the cycle-to-cycle model with \(R=P=2\).
Figure 9: Qualitative assessment results using the cycle-to-cycle method with \(R=P=2\) for all the time-series data from one test subject.
rule of thumb and can thereafter be adjusted at the labeler's discretion. If a small amount of paired data is available, \(\tau\) may be determined by maximizing the \(J\) statistic over the given small paired data.
By utilizing the proposed method as a pseudo-labeling tool, it may be favorable to increase the TNR (small false positives) and even sacrifice the TPR (large false negatives). The main reason for this is to reduce missing true negatives when restricting manual annotation to parts around abnormalities estimated by the ML algorithm. Although the \(J\) statistic is a good option for determining \(\tau\), it may not be optimal. A solid strategy to optimize \(\tau\) in terms of pseudo-labeling is an open question yet to be developed in our future studies.
## CRediT authorship contribution statement
C.M. Hyun: conceptualization, formal analysis, investigation, methodology, visualization, software, supervision, writing (original draft and review). T.-G. Kim: investigation, methodology, software, validation, visualization, and writing (Appendix B). K. Lee: data curation, formal analysis, validation, writing (review), and funding.
## Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Data availability
The data that support the findings of this study are available from one of the corresponding authors (K. Lee) upon reasonable request.
## Acknowledgments
We sincerely express our deep gratitude to BiLab company (Seongnam, Republic of Korea) for their help and collaboration. This work was supported by the Ministry of Trade, Industry and Energy (MOTIE) in Korea through the Industrial Strategic Technology Development Program under Grant 20006024.
## Appendix A Motion Artifact in Cardiac Volume Signal
The hemodynamic monitoring system used in this study (Hemovista, BiLab, Republic of Korea) is a 16-channel thoracic electrical impedance device, which measures voltages
Figure 11: Qualitative and quantitative comparisons among original human-craft annotation (red regions), machine-driven prediction (blue dots), and machine-guided annotation (green regions).
Figure 10: Quantitative and qualitative comparison in supervised and unsupervised settings: (a) point-to-point (\(r=p=200\)) and (b) cycle-to-cycle (\(R=P=2\)) models.
using 16 electrodes attached around the human chest. See Figure 12. Once an alternating current of \(I\) mA is injected from the \(i\)-th to the \((i+1)\)-th electrode, let \(V_{t}^{i,j}\) be the voltage response between the \(j\)-th and \((j+1)\)-th electrodes. At sampling time \(t\), the following trans-conductance \(\mathbf{G}_{t}\) is obtained:
\[\mathbf{G}_{t}=I\ \left[G_{t}^{1,3},\cdots,G_{t}^{1,15},\cdots,G_{t}^{16,2}, \cdots,G_{t}^{16,14}\right]^{T}, \tag{26}\]
where \(G_{t}^{i,j}\) is the reciprocal of the real part of \(V_{t}^{i,j}\) and \(T\) represents the transpose operator. From \(\mathbf{G}_{t}\), a CVS value \(\mathbf{x}_{t}\) is obtained as follows. For some reference time \(t_{0}\),
\[\mathbf{x}_{t}=\mathbf{w}^{T}\hat{\mathbf{G}}_{t}\text{ and }\hat{\mathbf{G}}_{t}=\mathbf{G}_{t}- \mathbf{G}_{t_{0}}, \tag{27}\]
where the weighting vector \(\mathbf{w}\) is designed to extract a cardiogenic component from \(\mathbf{G}_{t}\) that is comprehensively affected by multiple sources including lungs and heart. Kindly refer to [23] for more details.
Based on the complete electrode model and Reynolds transport theorem, the trans-conductance \(\mathbf{G}_{t}\) can be decomposed as follows [13, 24]:
\[\hat{\mathbf{G}}_{t}\approx\hat{\mathbf{G}}_{t}^{\text{normal}}+\hat{\mathbf{G}}_{t}^{ \text{motion}}, \tag{28}\]
where
\[|\hat{G}_{t}^{\text{normal},i,j}|\propto\int_{\Omega}\dot{\gamma}_{t}(\mathbf{\xi})\nabla u_{t}^{i}(\mathbf{\xi})\cdot\nabla u_{t}^{j}(\mathbf{\xi})\,d\mathbf{\xi}, \tag{29}\] \[|\hat{G}_{t}^{\text{motion},i,j}|\propto\int_{\partial\Omega}v_{n}(\mathbf{\xi},t)\gamma_{t}(\mathbf{\xi})\nabla u_{t}^{i}(\mathbf{\xi})\cdot\nabla u_{t}^{j}(\mathbf{\xi})\,d\mathbf{s}. \tag{30}\]
Here, \(\Omega\subset\mathbb{R}^{3}\) is a time-varying human chest domain, \(v_{n}\) is an outward-normal directional velocity of \(\partial\Omega\), \(d\mathbf{s}\) is a surface measure, and \(u_{t}^{i}\) and \(\gamma_{t}\) are electric potential and conductivity distributions in \(\Omega\), respectively. The relations (27) and (28) yield
\[\mathbf{x}_{t}\approx\mathbf{x}_{t}^{\text{normal}}+\mathbf{x}_{t}^{\text{motion}}, \tag{31}\]
where \(\mathbf{x}_{t}^{\text{normal}}=\mathbf{w}^{T}\hat{\mathbf{G}}_{t}^{\text{normal}}\) and \(\mathbf{x}_{t}^{\text{motion}}=\mathbf{w}^{T}\hat{\mathbf{G}}_{t}^{\text{motion}}\). In the case of motion absence, \(\mathbf{x}_{t}=\mathbf{x}_{t}^{\text{normal}}\) follows from \(v_{n}=0\) in (30). In the presence of a large motion (i.e., \(|v_{n}|\) grows), motion artifacts in CVS (\(\mathbf{x}_{t}^{\text{motion}}\)) become significant.
## Appendix B Network and Training Details
This appendix provides details for network architectures and training procedures.
_LSTM._ An LSTM model with a stack depth of \(L\) can be represented as follows: for \(k=1,\cdots,L\) and \(\mathfrak{T}=t,\cdots,t+r\),
\[F_{\mathfrak{T}}^{k} =\text{sig}(W_{F}^{k}\mathbf{x}_{\mathfrak{T}}^{k}+U_{F}^{k}C_{\mathfrak{T}-1}^{k}+b_{F}^{k})\] \[I_{\mathfrak{T}}^{k} =\text{sig}(W_{I}^{k}\mathbf{x}_{\mathfrak{T}}^{k}+U_{I}^{k}C_{\mathfrak{T}-1}^{k}+b_{I}^{k})\] \[O_{\mathfrak{T}}^{k} =\text{sig}(W_{O}^{k}\mathbf{x}_{\mathfrak{T}}^{k}+U_{O}^{k}C_{\mathfrak{T}-1}^{k}+b_{O}^{k})\] \[C_{\mathfrak{T}}^{k} =F_{\mathfrak{T}}^{k}\odot C_{\mathfrak{T}-1}^{k}+I_{\mathfrak{T}}^{k}\odot\text{hyptan}(W_{C}^{k}\mathbf{x}_{\mathfrak{T}}^{k}+b_{C}^{k})\] \[H_{\mathfrak{T}}^{k} =O_{\mathfrak{T}}^{k}\odot\text{hyptan}(C_{\mathfrak{T}}^{k}), \tag{32}\]
where \(W\) and \(U\) are weight matrices for a fully connected layer, \(b\) is a bias vector, sig is the sigmoid function, \(\odot\) is the Hadamard product, and hyptan is the hyperbolic tangent function. Here, \(\mathbf{x}_{\mathfrak{T}}^{k}\) is defined by
\[\mathbf{x}_{\mathfrak{T}}^{1}=\mathbf{x}_{\mathfrak{T}}\text{ and }\mathbf{x}_{\mathfrak{T}}^{k}=H_{\mathfrak{T}}^{k-1}\text{ for }k\geq 2, \tag{33}\]
where \(\mathbf{x}_{\mathfrak{T}}\) is a point of an input sequence at time \(\mathfrak{T}\).
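A minimal NumPy sketch of the stacked recursion (32)-(33) for a single, unbatched sequence is given below; the parameter shapes, the initialization, and the toy input are illustrative assumptions only, and the gates take the previous cell state as argument, exactly as written in (32).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x, C_prev, W, U, b):
    """One step of (32); W, U, b are dicts keyed by the gate labels 'F', 'I', 'O', 'C'."""
    F = sigmoid(W['F'] @ x + U['F'] @ C_prev + b['F'])
    I = sigmoid(W['I'] @ x + U['I'] @ C_prev + b['I'])
    O = sigmoid(W['O'] @ x + U['O'] @ C_prev + b['O'])
    C = F * C_prev + I * np.tanh(W['C'] @ x + b['C'])
    H = O * np.tanh(C)
    return H, C

def stacked_lstm(xs, params, units=32):
    """Run the stack over a sequence xs; layer k receives H^{k-1} as input, cf. (33)."""
    L = len(params)
    cells = [np.zeros(units) for _ in range(L)]
    hidden = []
    for x in xs:
        inp = x
        for k in range(L):
            H, cells[k] = lstm_cell(inp, cells[k], *params[k])
            inp = H
        hidden.append(inp)                     # top-layer hidden state
    return np.array(hidden), cells

def init_params(in_dim, units=32, L=2, seed=0):
    rng = np.random.default_rng(seed)
    params = []
    for k in range(L):
        d = in_dim if k == 0 else units
        W = {g: 0.1 * rng.normal(size=(units, d)) for g in 'FIOC'}
        U = {g: 0.1 * rng.normal(size=(units, units)) for g in 'FIOC'}
        b = {g: np.zeros(units) for g in 'FIOC'}
        params.append((W, U, b))
    return params

# toy usage: a scalar CVS sequence fed through a 2-layer, 32-unit stack
xs = [np.array([v]) for v in np.sin(np.linspace(0, 3, 201))]
H_seq, _ = stacked_lstm(xs, init_params(in_dim=1))
```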
_VAE-LSTM Model_ In our implementation, we used an LSTM with two stacked layers (\(L=2\)), each with 32 units. The latent dimension was set to 32. The overall structure of VAE-LSTM is illustrated in Figure 13.
For an input sequence \(\mathcal{X}_{t+r}\), the model proceeds as follows: an LSTM encoder \(\mathcal{E}\) ingests a sequence of CVS data and outputs an encapsulated vector, which can be expressed as \(\mathbf{h}_{t+r}=[\mathbf{H}_{t+r},\mathbf{C}_{t+r}]\in\mathbb{R}^{32\times 4}\), where
\[\mathbf{H}_{t+r}=[H_{t+r}^{1},H_{t+r}^{2}]\text{ and }\mathbf{C}_{t+r}=[C_{t+r}^{1},C_{t+r}^{2}]. \tag{34}\]
By duplicating \(\mathbf{h}_{t+r}\), one copy is used for the reconstruction stage (\(D\)) and the other for the prediction stage (\(D_{\text{pred}}\)).
In the reconstruction stage, we input \(\mathbf{h}_{t+r}\) to two fully connected layers with reshaping and then generate \(\mathbf{\mu}_{t+r}\)
Figure 12: 16-channel thoracic electrical impedance system for hemodynamic monitoring.
Figure 13: Network architecture details: (a) point-to-point and (b) sequence-to-sequence models. Here, \(N\) is a symbol for the number of training sequence data.
and \(\mathbf{\sigma}_{t+r}\). Afterwards, a latent vector \(\mathbf{Z}_{t+r}\) is stochastically sampled in the sense of (11) through the reparametrization trick [18] and passes through a fully connected layer with reshaping (FC\({}^{\mathbf{Z}}\)), restoring the original data dimensionality. By employing FC\({}^{\mathbf{Z}}(\mathbf{Z}_{t+r})\) and \(\mathbf{C}_{t+r}\) as initial hidden and cell states, the LSTM decoder \(\mathcal{D}\) generates a sequence, estimating the mirror of \(\mathcal{X}_{t+r}\), from a dummy value (\(\mathbf{0}\)). The output sequence can be expressed as
\[\mathcal{D}(\mathbf{Z}_{t+r})=[H_{t}^{2},H_{t+1}^{2},\cdots,H_{t+r}^{2}]. \tag{35}\]
For the prediction stage, in the same manner as in (35), the LSTM decoder \(\mathcal{D}_{\text{pred}}\) produces a sequence, predicting future \(\mathcal{X}_{t+r,p}\), from a dummy value (\(\mathbf{0}\)). Here, \(\mathbf{H}_{t+r}\) and \(\mathbf{C}_{t+r}\) in \(\mathbf{h}_{t+r}\) are used as initial hidden and cell states.
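The flow just described can be summarized in a PyTorch sketch with the dimensions given in the text (two layers, 32 units, latent size 32); the linear read-out layer, the input dimensionality, and the handling of the dummy decoder inputs are assumptions of this sketch rather than details taken from the implementation.

```python
import torch
import torch.nn as nn

class VAELSTM(nn.Module):
    """Sketch of the VAE-LSTM of Figure 13: encoder E, reconstruction decoder D, prediction decoder D_pred."""
    def __init__(self, in_dim=1, hidden=32, latent=32, layers=2):
        super().__init__()
        self.layers, self.hidden = layers, hidden
        self.encoder = nn.LSTM(in_dim, hidden, num_layers=layers, batch_first=True)
        flat = layers * hidden
        self.to_mu = nn.Linear(flat, latent)
        self.to_logvar = nn.Linear(flat, latent)
        self.fc_z = nn.Linear(latent, flat)          # FC^Z, restores the hidden-state shape
        self.decoder = nn.LSTM(in_dim, hidden, num_layers=layers, batch_first=True)
        self.decoder_pred = nn.LSTM(in_dim, hidden, num_layers=layers, batch_first=True)
        self.readout = nn.Linear(hidden, in_dim)

    def forward(self, x, pred_len):
        B, T, d = x.shape
        _, (H, C) = self.encoder(x)                  # h_{t+r} = [H, C], each of shape (layers, B, hidden)
        flat_H = H.permute(1, 0, 2).reshape(B, -1)
        mu, logvar = self.to_mu(flat_H), self.to_logvar(flat_H)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparametrization trick [18]
        H0 = self.fc_z(z).reshape(B, self.layers, self.hidden).permute(1, 0, 2).contiguous()
        rec, _ = self.decoder(torch.zeros(B, T, d), (H0, C))            # reconstruct X_{t+r} from a dummy input
        pred, _ = self.decoder_pred(torch.zeros(B, pred_len, d), (H, C))  # predict X_{t+r,p}
        return self.readout(rec), self.readout(pred), mu, logvar

# toy usage: a batch of 4 sequences of length 201, predicting 100 future samples
model = VAELSTM()
x_hat, x_pred, mu, logvar = model(torch.randn(4, 201, 1), pred_len=100)
```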
_Training Details_ For network training, the AdamW optimizer [27] was consistently used, which is an extension of the Adam optimizer including weight decay regularization to enhance the precision of model parameter updates. We also used a learning rate scheduling strategy known as the one-cycle learning rate policy [38]. In our implementation, Weights & Biases was utilized as a tool for logging the training process and optimizing hyperparameters. For the point-to-point models, we used a batch size of 1024, a learning rate of 0.01, a weight decay rate of 0.01, and a maximum of 100 epochs. For the cycle-to-cycle models, we used a batch size of 128, a learning rate of 0.001, a weight decay rate of 0.01, and a maximum of 100 epochs.
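A minimal sketch of the quoted optimization setup (AdamW with weight decay 0.01 and the one-cycle learning-rate policy); how the quoted learning rates map onto the scheduler's max_lr, and the batch-size handling in the data loader, are assumptions.

```python
import torch

def make_optimizer(model, steps_per_epoch, cycle_to_cycle=False, epochs=100):
    """AdamW [27] + one-cycle LR schedule [38] with the hyperparameters quoted in the text."""
    lr = 0.001 if cycle_to_cycle else 0.01          # cycle-to-cycle vs. point-to-point models
    opt = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=0.01)
    sched = torch.optim.lr_scheduler.OneCycleLR(
        opt, max_lr=lr, epochs=epochs, steps_per_epoch=steps_per_epoch)
    return opt, sched

# batch sizes quoted in the text: 1024 (point-to-point) and 128 (cycle-to-cycle)
```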
|
2305.01734 | Spectrum of Massive and Massless Ambitwistor Strings | Inspired by recent work arXiv:2301.11227 on massive ambitwistor strings this
paper examines the spectrum of such models using oscillator expansions. The
spectrum depends heavily on the constant related to the normal ordering of the
zero mode operator ${L_0}$ of the Virasoro algebra. The supergravity model is
investigated in more detail, and two anomaly-free variations are presented,
both with a rich spectrum and with tree scattering amplitudes that include a
kinematic Parke-Taylor factor for particles other than gravitons without a need
for an external current algebra. The spectrum of some of the models can also be
interpreted as containing three generations of the Pati-Salam model. | Christian Kunz | 2023-05-02T19:06:04Z | http://arxiv.org/abs/2305.01734v2 | # Spectrum of Massive and Massless Ambitwistor Strings
###### Abstract
Inspired by recent work arXiv:2301.11227 on massive ambitwistor strings this paper examines the spectrum of such models using oscillator expansions. The spectrum depends heavily on the constant related to the normal ordering of the zero mode operator \(L_{0}\) of the Virasoro algebra. The supergravity model is investigated in more detail, and two anomaly-free variations are presented, both with a rich spectrum and with tree scattering amplitudes that include a kinematic Parke-Taylor factor for particles other than gravitons without a need for an external current algebra. The spectrum of some of the models can also be interpreted as containing three generations of the Pati-Salam model.
## 1 Introduction
The authors in [1, 2, 3, 4, 5] examined dimensional reductions to 5 and 4 dimensions of a 6-dimensional massless ambitwistor string model. The resulting models in 4 dimensions contain two twistors describing massive particles. When equipped with maximal supersymmetry, the models in 5 and 4 dimensions exhibit anomaly cancellation for their little group SL(2, \(\mathbb{C}\)) and also zero central charge for the Virasoro algebra if one assumes a central charge contribution of \(c_{5d}=10\) or \(c_{6d}=12\) arising from 5 or 6 compactified dimensions, respectively. Using the vertex operators provided in the above references, the genus-zero worldsheet correlation functions lead to the expected manifestly supersymmetric n-particle tree amplitudes.
These ambitwistor string models use worldsheet spinors and, therefore, the strings are either in the Neveu-Schwarz (NS) or Ramond (R) sector, depending on their oscillator expansion. The actual spectrum then depends on the value \(a\) of the zero mode \(L_{0}\) of the Virasoro algebra when applied to a physical state. \(a\) is determined by the Virasoro commutator \([L_{n},L_{-n}]\) when all fields, including ghosts and antighosts, are taken into account:
\[[L_{n},L_{m}]=(n-m)L_{n+m}+(\frac{c}{12}n^{3}-2an)\delta_{m+n,0} \tag{1.1}\]
It will be shown that the vertex operators in [3, 4, 5] implicitly assume that in (1.1) \(a=0\) in the R sector and \(a=1\) in the NS sector. It follows that in the R sector only the zero modes of the twistor fields and in the NS sector only the \(-\frac{1}{2}\) modes can contribute to physical states. For instance, this implicit assumption can be confirmed to happen when the extra dimensions are compactified with help of scalar fields (see appendix A of [5]).
On the other hand, one could also compactify extra components of the supertwistor and it will be shown that this can be done in such a way that in the massless limit it results in a supersymmetric ambitwistor string model with complete anomaly cancellation and a value of \(a=1\) in (1.1) for both the R and NS sector. This model seems to have a richer spectrum than the previous model (in section 5 it will be shown that this is actually not really true), with non-zero twistor modes contributing to physical states in both the NS and R sector1. The spectrum from modes in the NS sector leads to the same vertex operators as in the massless limit of the previous model, but the spectrum from modes in the R sector has some resemblance to the spectrum of the Berkovits-Witten twistor string with N=4 supersymmetry [6, 7], the main difference being that the new model has the additional SL(2,\(\mathbb{C}\)) little group symmetry, and it does not contain the 'dipole' states thought
of being responsible for the lack of unitarity in the Berkovits-Witten model of conformal supergravity[6]. One important feature of the new model is that tree scattering amplitudes of spin \(\leq 1\) particles contain a kinematic Parke-Taylor factor, without having to introduce an external current algebra. The model also has a non-supersymmetric variation with basically the same spectrum, except that the little group representation does not need to be the same for all the components of the multiplet. This non-supersymmetric version does not use auxiliary fields and was actually previously found independently by the author in [8, 9].
Unfortunately, because of the non-zero twistor modes in the R sector of the twistor components, these two models have complicated vertex operators, with unappealing scattering amplitudes. This motivated to find a model extension that keeps the Parke-Taylor factor for spin \(\leq 1\) particle scattering and stays anomaly-free but now with a vanishing \(L_{0}\) constant in the R sector. Then the spectrum nearly stays the same, as can be verified with a special interpretation of the spectrum of the \({\cal N}=8\) supersymmetry. Actually, the same interpretation can be applied to the spectrum of the original model of [4, 5] showing that it already includes the same rich spectrum.
The paper is organized as follows.
In section 2 the notation used in [5] is reviewed and supplemented with the oscillator expansion for all the supertwistor component fields.
In section 3 it is shown that the vertex operators in [3, 4, 5] indicate a spectrum that only involves the zero supertwistor modes in the R sector and the \(-\frac{1}{2}\) modes in the NS sector. There is also some discussion about different 'polarization pictures' in the quantization of twistor space one has to be careful about.
In section 4 the massless limit of the model is extended by changing the auxiliary fermionic fields to an auxiliary supertwistor with additional gauging such that the total model has all anomalies cancelled with \(c=0\) and \(a=1\) in (1.1). The spectrum is examined. When the twistor modes are in the NS sector, fixed vertex operators look like in the massless limit of the previous model, but when the modes are in the R sector, they rather resemble the ones of the Berkovits-Witten twistor string [6, 7], up to the little group symmetry and the absence of 'dipole' fields. It turns out that tree scattering of spin \(\leq 1\) particles leads automatically to a Parke-Taylor factor.
Motivated to simplify the vertex operators and scattering amplitudes of the model in section 4, it is extended once more in section 5 to get a vanishing \(L_{0}\) constant while staying anomaly-free and keeping the rich spectrum in the R sector. This is achieved mainly by reducing the bosonic auxiliary worldsheet spinors introduced in the previous section and treating the fermionic twistor modes and their duals in a more symmetric fashion. It is again a massless model without, unfortunately, a known relation to a massive version. The fixed and integrated vertex operators are determined and tree scattering amplitudes are computed, still containing a Parke-Taylor factor for spin \(\leq 1\) particle scattering.
Section 6 contains summary and discussion, including arguments that the spectrum of the
original massive model and the model of section 5 in the R sector includes the spin 1/2 spectrum of 3 generations of the Pati-Salam model [10], and also that the (truncated) spectrum in the NS sector can be re-interpreted as arising from spectral flow.
In appendix A the auxiliary supertwistor fields in the model of section 4 are replaced with the actual twistor fields, breaking the worldsheet supersymmetry in manifest fashion. The resulting model has the same spectrum as the section 4 model, but there is no need for the little group representation to be the same across the spectrum. When adjusting the notation, this model turns out to be the same as in [8, 9]. Like the model in section 4 it also has vertex operators leading to scattering amplitudes that are unappealing. This makes the model in section 5 more favorable.
## 2 Notation and Oscillator Expansion
The little group of the two twistor representation of a massive particle that keeps its timelike momentum \(k_{\alpha\dot{\alpha}}\) invariant includes SU(2) as a subgroup. In the following it will be referred to as 'the little group'. Dealing in this work with complexified twistor space, SU(2) will be regarded as extended to SL(2, \(\mathbb{C}\)) whose algebra is the complexification of the SU(2) algebra. \(k_{\alpha\dot{\alpha}}\) can be written using 2-spinors as
\[k_{\alpha\dot{\alpha}}=\kappa_{a\alpha}\kappa^{a}_{\dot{\alpha}}\,, \tag{2.1}\]
where a = 1, 2 is an SL(2, \(\mathbb{C}\)) little group index raised and lowered by \(\varepsilon^{ab}=\varepsilon^{[ab]},\varepsilon_{ab}=\varepsilon_{[ab]}, \varepsilon_{12}=1=\varepsilon^{12},\varepsilon_{ac}\varepsilon^{cb}=\delta^ {b}_{a}\). Little group contractions are denoted by
\[(v_{1}v_{2}):=v_{1a}v_{2b}\varepsilon^{ab}\,.\]
The dimensionally reduced action in 4 dimensions of [4, 5] is
\[S=\int_{\Sigma}{\cal Z}^{a}\cdot\bar{\partial}{\cal Z}_{a}+A_{ab}{\cal Z}^{a} \cdot{\cal Z}^{b}+a(\lambda^{2}-j^{H})+\tilde{a}(\tilde{\lambda}^{2}-j^{H})+ S_{m}\,, \tag{2.2}\]
where
* \(a=1,2\) is the little group index and the supertwistor fields \({\cal Z}_{a}=\varepsilon_{ab}{\cal Z}^{b}\) are worldsheet spinors, repackaged into _Dirac_ supertwistors of the form2 Footnote 2: There is some slight modification to the notation used in [4, 5] with regard to the position of spinor indices. \[{\cal Z}=(\lambda_{A},\mu^{A},\eta^{\cal I})\,:\qquad\lambda_{A}=(\lambda_{ \alpha},\tilde{\lambda}_{\dot{\alpha}})\,,\quad\mu^{A}=(\mu^{\dot{\alpha}}, \tilde{\mu}^{\alpha})\,,\quad\eta^{\cal I}=(\eta^{I},\tilde{\eta}_{I})\,,\] where \(\lambda_{A}\) and \(\mu^{A}\) are Dirac spinors made up of the homogeneous chiral and antichiral components of the twistor \(Z=(\lambda_{\alpha},\mu^{\dot{\alpha}})\) and dual twistor \(\tilde{Z}=(\tilde{\lambda}_{\dot{\alpha}},\tilde{\mu}^{\alpha})\). In the
fermionic components \(\eta^{\mathcal{I}}=(\eta^{I},\tilde{\eta}_{I})\) with \(I=1,\ldots,\frac{\mathcal{N}}{2}\), the index \(\mathcal{I}=1,\ldots,\mathcal{N}\) is the R-symmetry index, with \(\mathcal{N}=4\) for maximal Super-Yang-Mills (SYM) and \(\mathcal{N}=8\) for maximal supergravity. Indices are raised and lowered with a symmetric form for Grassmann even entities \(\epsilon^{AB},\epsilon_{AB},\epsilon^{AB}\epsilon_{AC}=\delta^{B}_{C}\), for example \(\lambda^{A}=\epsilon^{AB}\lambda_{B}=(\tilde{\lambda}^{\dot{\alpha}},\lambda^ {\alpha})\), and a skew form for Grassmann odd entities \(\Omega^{IJ},\Omega_{IJ},\Omega^{IJ}\Omega_{IK}=\delta^{J}_{K}\), for example \(\eta_{I}=\Omega_{IJ}\eta^{J}\). Also note the Dirac \(\gamma_{5}\) matrix defined here by \(\gamma_{5}^{AB}\rho_{B}=(-\rho^{\dot{\alpha}},\rho^{\alpha}),\gamma_{5}^{AB} \gamma_{5AC}=\delta^{B}_{C}\). For Grassmann odd entities it can be used to raise and lower indices, for Grassmann even identities it defines a 'dual' version. This will become important later. The inner product is defined as \(\mathcal{Z}_{1}\cdot\mathcal{Z}_{2}=\frac{1}{2}(\tilde{Z}_{1}\cdot Z_{2}+ \tilde{Z}_{2}\cdot Z_{1}+\tilde{\eta}_{1I}\eta_{2}^{I}+\tilde{\eta}_{2I}\eta_{ 1}^{I})\), \(\tilde{Z}_{1}\cdot Z_{2}=\tilde{\mu}_{1}^{\alpha}\lambda_{2\alpha}+\tilde{ \lambda}_{1\dot{\alpha}}\mu_{2}^{\dot{\alpha}}\,,\) with special treatment of \(\bar{\partial}Z\) when taking the dual: \(\widetilde{\bar{\partial}Z}=-\bar{\partial}\tilde{Z}\,\). The only non-trivial OPEs are \(\mathcal{Z}_{a}^{A}(z)\cdot\mathcal{Z}_{Bb}(0)=\frac{\delta^{A}_{B}\epsilon_ {ab}}{z}+\cdots\).
* little group transformations are gauged by the fields \(A_{ab}=A_{(ab)}\). \(a\) and \(\tilde{a}\) are worldsheet \((0,1)\)-forms that act as Lagrange multipliers to constrain the mass operators \(\lambda^{2}=\frac{1}{2}(\lambda_{\alpha}\lambda^{\alpha})=\det(\lambda^{a}_{ \alpha})\), \(\tilde{\lambda}^{2}=\frac{1}{2}(\tilde{\lambda}_{\dot{\alpha}}\tilde{\lambda }^{\dot{\alpha}})=\det(\tilde{\lambda}^{a}_{\dot{\alpha}})\) to be the same as the current \(j^{H}\) associated to the element \(h\in\mathcal{G}\) living in the Cartan subalgebra of some symmetry of the system. This article will not be concerned about the particularity of particle masses. Therefore, there will be no further discussion of \(j^{H}\).
* the action \(S_{m}\) represents worldsheet matter and different choices for \(S_{m}\) construct a variety of physically interesting models. For supergravity \(S_{m}\) is equal to \(S_{\rho_{1}}+S_{\tilde{\rho}_{2}}\) with \[S_{\rho}=\int_{\Sigma}\tilde{\rho}^{A}\bar{\partial}\rho_{A}+b_{a}\lambda^{ Aa}\rho_{A}+\tilde{b}_{a}\lambda^{a}_{A}\tilde{\rho}^{A}\,,\] where \((\rho_{A},\tilde{\rho}^{A})\) are fermionic worldsheet spinors raised and lowered with \(\gamma_{5}\) and \((b_{a},\tilde{b}_{a})\) are \((0,1)\)-forms on the worldsheet acting as fermionic Lagrange multipliers for the constraints \(\lambda^{Aa}\rho_{A}=0=\lambda^{a}_{A}\tilde{\rho}^{A}\).
In ambitwistor space vertex operators are typically built with help of plane wave representatives. For the current supersymmetric system this looks like
\[\mathcal{V}=\qquad\int\!\!d^{2}u\;d^{2}v\;\;\mathcal{W}(u)\;\; \bar{\delta}^{4}((u\lambda_{A})-(v\kappa_{A}))\;\;\bar{\delta}((\epsilon v)-1 )e^{u_{a}\left(\mu^{Aa}\epsilon_{A}+\eta^{\mathcal{I}a}q_{\mathcal{I}}\right) -\frac{1}{2}(\xi v)q^{2}}\;,\] \[V=\int_{\Sigma}\!\!d\sigma\!\!\int\!\!d^{2}u\;d^{2}v\;\;w(u)\;\; \bar{\delta}^{4}((u\lambda_{A})-(v\kappa_{A}))\;\;\bar{\delta}((\epsilon v)-1 )e^{u_{a}\left(\mu^{Aa}\epsilon_{A}+\eta^{\mathcal{I}a}q_{\mathcal{I}}\right) -\frac{1}{2}(\xi v)q^{2}}\;, \tag{2.3}\]
where \({\cal V}\) and \(V\) stand for a fixed and integrated vertex operator, respectively, polarization data \(\epsilon_{A}\) is defined by \(\epsilon_{A}=\epsilon_{a}\kappa_{A}^{a}\) with \(\kappa_{A}^{a}=(\kappa_{\alpha}^{a},\kappa_{\dot{\alpha}}^{a})\) being the momentum of the wave according to (2.1), super polarization data \(q_{\cal I}\) is defined by \(q_{\cal I}=\epsilon_{a}q_{\cal I}^{a}\) with \(q_{\cal I}^{a}\) being the supermomentum, and \((\epsilon_{a},\xi_{a})\) with \((\epsilon\xi)=1\) form a basis of the fundamental representation of SL(2, \(\mathbb{C}\)) so that the supersymmetry generators can be defined as \(Q_{a{\cal I}}=(\xi_{a}q_{\cal I}+\epsilon_{a}\Omega_{{\cal I}{\cal J}}\frac{ \partial}{\partial q_{\cal J}})\)[2].
The quadratic differentials \({\cal W}\) and \(w\) in (2.3) are theory dependent and are allowed to depend on the parameter \(u\) as well as the worldsheet matter systems. For a fixed vertex operator in supergravity \({\cal W}\) is just the product of fermionic ghost fields and delta functions of bosonic ghosts. For an integrated vertex operator an additional integration over the worldsheet is applied in (2.3) to take care of gauge fixing the worldsheet diffeomorphisms and \(w\) is [4]
\[w(u)=\delta\big{(}{\rm Res}_{\sigma}(\lambda^{2}-j^{H})\big{)}\ \delta\big{(}{\rm Res}_{\sigma}(\tilde{\lambda}^{2}-j^{H})\big{)}\,\cdots\]
Setting \(\mathcal{N}=8\) from now on, the \(L_{0}\) constant \(a\) is given by
\[24a=0_{\mathcal{Z}}-2_{bc}-6_{M\!N}-2_{mn}-2_{\tilde{m}\tilde{n}}+2(a_{\rho \tilde{\rho}}+4_{\beta\gamma}+4_{\tilde{\beta}\tilde{\gamma}})=4+2a_{\rho\tilde {\rho}}+a_{6d}\,,\]
where \(a_{\rho\tilde{\rho}}=-8\) in the R sector and \(a_{\rho\tilde{\rho}}=4\) in the NS sector, i.e. \(a=-\frac{1}{2}+\frac{1}{24}a_{6d}\) in the R sector and \(a=\frac{1}{2}+\frac{1}{24}a_{6d}\) in the NS sector.
The actual spectrum of the model can be determined by considering oscillator expansions of the supertwistor fields, given here on the Riemann sphere:
\[\lambda_{aA}=\sum_{n}\!\lambda_{aAn}\sigma^{-n-\frac{1}{2}},\quad\mu^{A}_{a}= \sum_{n}\!\mu^{A}_{an}\sigma^{-n-\frac{1}{2}},\quad\eta^{\mathcal{I}}_{a}= \sum_{n}\!\eta^{\mathcal{I}}_{an}\sigma^{-n-\frac{1}{2}}\,, \tag{2.5}\]
with \(n\,\epsilon\,\mathbb{Z}\) in the R sector and \(n\,\epsilon\,\mathbb{Z}+\frac{1}{2}\) in the NS sector. The expansions of the energy momentum \(T\) and its zero mode \(L_{0}\) are after normal ordering
\[T=\sum_{n\in\mathbb{Z}}\!L_{n}\sigma^{-n-2},\quad L_{0}=\!\sum_{n\in\mathbb{Z} }\!n\,:\mu^{A}_{an}\lambda^{a}_{A-n}:+\sum_{n\in\mathbb{Z}}\!n\,:\tilde{\eta} ^{a}_{I-n}\,\eta^{I}_{an}:+\dots\,, \tag{2.6}\]
where \(\dots\) denotes contributions from non-twistor fields.
## 3 Spectrum of the Supergravity Model
With the notation of the previous section in place, the spectrum can now be analyzed. It is clear from looking at the fixed vertex operator in (2.3) and the expansion of \(L_{0}\) in (2.6) that the spectrum is either generated by appropriate homogeneous functions of just the zero modes of \(\lambda^{a}_{A}\) and \(\eta^{\mathcal{I}}_{a}\) in the R sector or by products of exactly two \(-\frac{1}{2}\) modes of \(\lambda^{a}_{A}\) and \(\eta^{\mathcal{I}}_{a}\) in the NS sector; otherwise the vertex operator would be required to contain derivatives of the twistor fields4. The \(\mu^{A}_{an}\) modes are excluded from the spectrum because BRST cohomology requires that the non-negative modes of the \(\lambda^{Aa}\rho_{A}\) current annihilate physical states.
Footnote 4: For instance such vertex operators appear for the Berkovits-Witten twistor string [7].
Therefore, the model must have an \(L_{0}\) constant that is \(a=0\) with \(a_{6d}=12\) in the R sector and \(a=1\) with the same \(a_{6d}=12\) in the NS sector. But there is also the need to give a central charge contribution of exactly 12. The requirement of \(c_{6d}=12=a_{6d}\) can easily be achieved by adding 6 bosonic scalars (see appendix A in [5]). On a side note, in the NS sector, \(a_{6d}=c_{6d}\) always holds independently of the number of fermionic or bosonic worldsheet scalar or spinor fields added for compactification, i.e. always \(a=1\) in the NS sector. This is not true for the R sector.
The conclusion is that the model of [5] operates in both the R and NS sector, using the same vertex operators (2.3). One important remark concerning the lowest modes of bosonic twistor components needs to be added. When taking the massless limit, the vertex operator (2.3) can be made to degenerate into a vertex operator of positive helicity and one of negative helicity [2, 5]. A vertex operator for positive helicity uses the twistor components \(\lambda^{a}_{\alpha 0}\) or \(\lambda^{a}_{\alpha-\frac{1}{2}}\) as creation operators, and a vertex operator for negative helicity uses the dual twistor components \(\tilde{\lambda}^{a}_{\dot{\alpha}0}\) or \(\tilde{\lambda}^{a}_{\alpha-\frac{1}{2}}\) as creation operators. But they cannot co-exist in the same picture, otherwise a combination of them could be used to generate physical states without being able to assign a helicity (the 'googly' problem). This is similar to being unable to choose a Kahler polarization simultaneously on both twistor and dual twistor space [11] (see also Table 1 in next section).
In other words, the vertex operator (2.3) hides the fact of potentially operating in two different pictures. In the R sector one picture has the first two components in \(\lambda^{a}_{A0}\) and \(\mu^{aA}_{0}\) as creation operators and the last two as annihilation operators and the other picture has the nature of the components reversed. In the NS sector it is even more complicated: in one picture \(\tilde{\lambda}^{a}_{\dot{\alpha}-\frac{1}{2}}\) is an annihilation operator interchanging with \(\mu^{a\dot{\alpha}}_{\frac{1}{2}}\) as creation operator and in the other picture \(\lambda^{a}_{\alpha-\frac{1}{2}}\) is an annihilation operator swapped with \(\tilde{\mu}^{a\alpha}_{\frac{1}{2}}\) as creation operator5. The scattering equations seem to be able to interpolate between the two pictures. This becomes more evident in the massless limit and the next sections will focus more on this limit, although in section 5 it will be seen that with maximal supersymmetry the vertex operators in the massive model can stay in one picture and still generate the full \({\cal N}\!=\!8\) supergravity particle spectrum, i.e. there is no inconsistency in using the vertex operators in (2.3). For the remainder of the article the two pictures are referred to as 'polarization pictures'.
Footnote 5: This is a well-known fact already observed for the original 4-dimensional ambitwistor[12]. Note that, depending on the polarization, the index of some oscillators in (2.5) might get shifted by 1.
One additional observation in the NS sector is that only two \(-\frac{1}{2}\) modes can appear in the spectrum in order for \(L_{0}=1\) to be valid, i.e. the supersymmetric spectrum is truncated and only contains a single graviton and some spin \(\frac{3}{2}\) and spin 1 particles but no scalars or spin \(\frac{1}{2}\) fermions. This also implies that all particles have to remain massless in the NS sector. This issue will be discussed more, with a possible solution, in the last section 6 reserved for summary and discussion.
## 4 Intermediate Modification of the Supergravity Model
If one is on the lookout for a self-contained supergravity model with a spectrum that includes particles that can be interpreted as gluons or quarks, one requirement would be that the scattering of such particles leads to a Parke-Taylor factor. The model in the previous sections does not fulfill this condition without an external current algebra. This section introduces a modification that keeps the model anomaly-free but allows for such a Parke-Taylor factor. On the other hand, the model picks up some undesired features. In the next section it will get an additional extension that makes it more satisfactory. Nevertheless, it is worthwhile to not skip the intermediate step because it relates closely to other models in the literature.
The modification of the model consists in not fixing the masses of the two twistors and insisting on massless particles only, in doubling the number of fermionic components of the supertwistor, and in gauging them, in a similar fashion to the bosonic components, with the help of auxiliary fields:
\[S=\int_{\Sigma}{\cal Z}^{a}\cdot\bar{\partial}{\cal Z}_{a}+A_{ab}{\cal Z}^{a} \cdot{\cal Z}^{b}+S_{\rho_{1}}+S_{\bar{\rho}_{2}}+S_{\tau_{1}}+S_{\tau_{2}}\,,\]
with
\[S_{\rho}=\int_{\Sigma}\tilde{\rho}^{A}\bar{\partial}\rho_{A}+b_{ a}\lambda^{aA}\rho_{A}+\tilde{b}_{a}\lambda^{a}_{A}\tilde{\rho}^{A}\,,\] \[S_{\tau_{1}}=\int_{\Sigma}\tilde{\tau}_{1}^{\cal I}\bar{\partial }\tau_{1{\cal I}}+d_{1a}\eta^{aI}\tau_{1I}+\tilde{d}_{1a}\eta^{a}_{I}\tilde{ \tau}_{1}^{I}\,,\] \[S_{\tau_{2}}=\int_{\Sigma}\tilde{\tau}_{2}^{\cal I}\bar{\partial }\tau_{2{\cal I}}+d_{2a}\tilde{\eta}^{a}_{I}\tau_{2}^{\prime I}+\tilde{d}_{2a }\tilde{\eta}^{aI}\tilde{\tau}_{2I}^{\prime}\,,\]
where supertwistor fields \({\cal Z}\) have been extended to
\[{\cal Z}=(\lambda_{A},\mu^{A},\eta^{\iota})\,: \lambda_{A}=(\lambda_{\alpha},\tilde{\lambda}_{\dot{\alpha}})\,, \quad\mu^{A}=(\mu^{\dot{\alpha}},\tilde{\mu}^{\alpha})\,,\quad\eta^{\iota}=( \eta^{\cal I},\tilde{\eta}_{\cal I})\,,\;\eta^{\cal I}=(\eta^{I},\tilde{\eta} _{I})\,,\] \[\tilde{\eta}_{\cal I}=(\tilde{\eta}^{\prime}_{I},\eta^{\prime I })\,,\;I=1\ldots\frac{{\cal N}}{2},\,{\cal I}=1,\ldots,{\cal N},\,\iota=1, \ldots,2{\cal N}\,,\,{\cal N}=8\,,\]
\((\tau_{\cal I},\tilde{\tau}^{\cal I})=((\tau_{I},\tau^{\prime I}),(\tilde{\tau}^{I},\tilde{\tau}^{\prime}_{I}))\) are bosonic worldsheet spinors, and \((b_{ra},\tilde{b}_{ra}),(d_{ra},\tilde{d}_{ra})\) are \((0,1)\)-forms on the worldsheet acting as fermionic Lagrange multipliers for the constraints \(\lambda^{aA}\rho_{1A}=\lambda^{a}_{A}\tilde{\rho}^{A}_{1}=\lambda^{aA}\tilde{\rho}_{2A}=\lambda^{a}_{A}\rho^{A}_{2}=\eta^{aI}\tau_{1I}=\eta^{a}_{I}\tilde{\tau}^{I}_{1}=\tilde{\eta}^{a}_{I}\tau^{\prime I}_{2}=\tilde{\eta}^{aI}\tilde{\tau}^{\prime}_{2I}=0\).
During BRST quantization the additional fermionic fields \(\{d_{ra},\tilde{d}_{ra}\}\) lead to new bosonic ghosts \(\{(\beta^{\prime}_{ra},\gamma^{\prime}_{ra}),(\tilde{\beta}^{\prime}_{ra}, \tilde{\gamma}^{\prime}_{ra})\}\). The SL(2,\(\mathbb{C}\)) anomaly coefficient becomes:
\[a_{sl2}=\frac{3}{2}(\frac{1}{2}(8-2{\cal N}))_{\cal Z}+\frac{3}{2}(2_{\beta \gamma}+2_{\tilde{\beta}\tilde{\gamma}}+2_{\beta^{\prime}\gamma^{\prime}}+2_ {\tilde{\beta}^{\prime}\tilde{\gamma}^{\prime}})-6_{M\!N}=\frac{3}{2}(8-{\cal N })=0\,.\]
It is still zero. The central charge vanishes as well:
\[c=(-8+2{\cal N})_{\cal Z}-26_{bc}-6_{M\!N}+2(4_{\rho\bar{\rho}}+4_{\beta\gamma}+4 _{\tilde{\beta}\tilde{\gamma}})+2(-{\cal N}_{\tau\bar{\tau}}+4_{\beta^{\prime} \gamma^{\prime}}+4_{\tilde{\beta}^{\prime}\gamma^{\prime}})=0\,.\]
As mentioned earlier, the \(L_{0}\) constant \(a\) in the NS sector does not change and stays equal to 1 and in the R sector:
\[24a=(16-4{\cal N})_{\cal Z}-2_{bc}-6_{M\!N}+2(-8_{\rho\bar{\rho}}+4_{\beta \gamma}+4_{\tilde{\beta}\tilde{\gamma}})+2(2{\cal N}_{\tau\bar{\tau}}+4_{\beta ^{\prime}\gamma^{\prime}}+4_{\tilde{\beta}^{\prime}\tilde{\gamma}^{\prime}})=2 4\,.\]
i.e. \(a=1\) in the R sector as well.
Although the fermionic modes in the supertwistors are doubled, similarly to the exclusion of \(\mu_{n}^{A}\) modes from the spectrum because of BRST cohomology, now the \(\tilde{\eta}_{{\cal I}n}\) cannot contribute to the spectrum because of the requirement that the non-negative modes of the \(\eta^{aI}\tau_{1I}\) and \(\tilde{\eta}_{I}^{a}\tau_{2}^{II}\) currents annihilate physical states. This leaves the spectrum in the NS sector unchanged but the one in the R sector gets modified considerably, because there always has to be a single \(-1\) twistor mode of \(\lambda_{A}^{a}\) or \(\eta_{a}^{\cal I}\) in addition to an appropriate homogeneous function of just zero modes.
The internal little group representation is assumed to be in a singlet, making sure that there is only one graviton-like excitation. The R-symmetry is then an SU(4) group in each of the two pictures, one with polarization \(\tilde{\lambda}_{\dot{\alpha}}\!\sim\!\partial/\partial\mu^{\dot{\alpha}}, \tilde{\mu}^{\alpha}\!\sim\!\partial/\partial\lambda_{\alpha},\tilde{\eta}_{I }\!\sim\!\partial/\partial\eta^{I},\tilde{\eta}^{\prime}_{I}\!\sim\!\partial/ \partial\eta^{I}\) for the zero modes, the other one with polarization \(\lambda_{\alpha}\!\sim\!\partial/\partial\tilde{\mu}^{\alpha},\mu^{\dot{\alpha }}\!\sim\!\partial/\partial\tilde{\lambda}_{\dot{\alpha}},\eta^{I}\!\sim\! \partial/\partial\tilde{\eta}^{\prime}_{I},\eta^{\prime I}\!\sim\!\partial/ \partial\tilde{\eta}_{I}\). Table 1 displays the spectrum, using the standard notation
\[\langle\lambda g\rangle=\varepsilon^{\alpha\beta}\lambda_{\alpha}g_{\beta}=- \,\langle g\lambda\rangle\qquad[\tilde{\lambda}f]=\varepsilon^{\dot{\alpha} \dot{\beta}}\tilde{\lambda}_{\dot{\alpha}}f_{\dot{\beta}}=-[f\tilde{\lambda}]\,.\]
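In these conventions the brackets are plain \(\varepsilon\)-contractions of 2-spinors; for numerical experiments they can be encoded as in the following sketch, where the component ordering is an assumption.

```python
import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])      # epsilon^{12} = 1

def angle(l, g):
    """<l g> = eps^{alpha beta} l_alpha g_beta = -<g l>."""
    return l @ eps @ g

def square(lt, f):
    """[lt f] = eps^{alphadot betadot} lt_alphadot f_betadot = -[f lt]."""
    return lt @ eps @ f
```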
All helicity states have double occurrence, reflecting the symmetry between twistor fields and their duals, and being related to each other through a Fourier transformation [6]. The table also shows a strong resemblance with part of the conformal supergravity spectrum of the Berkovits-Witten twistor string with N=4 supersymmetry [6, 7]. However, because of the supersymmetric gauging it does not contain the 'dipole' states thought of being responsible for the lack of unitarity in the Berkovits-Witten model [6].
In the massless limit vertex operators are more conveniently distinguished by the polarization picture they are operating in, and the polarization data can be chosen in a special little group gauge such that only one of the two twistors is non-zero, different per picture, and such that the effect of one of the two \(S_{\rho}\) actions is swallowed up during the transition from the integration measure for scattering amplitudes of the massive model to the simplified integration measure in the massless limit [2, 5], thus providing vertex operators as in [13]. Further, the two actions of auxiliary fields \(S_{\tau_{1}}\) and \(S_{\tau_{2}}\) can be 'distributed' among the
two pictures, meaning that the \(\tau_{2}\) fields do not contribute to vertex operators involving \(\tau_{1}\) fields and the other way around. Then the vertex operators appear like
\[\begin{array}{ll}{\cal V}=&\int\!\frac{du}{u^{3}}\ \ {\cal W}(u)\ \vec{ \delta}^{2}(u\lambda_{\alpha}-\epsilon_{\alpha}\,)\ e^{u\left(\mu^{\dot{\alpha}} \tilde{\epsilon}_{\dot{\alpha}}+\eta^{\prime I}q_{I}+\eta^{I}q^{\prime}_{I} \right)}\,\\ \tilde{\cal V}=&\int\!\frac{du}{u^{3}}\ \ \tilde{\cal W}(u)\ \vec{\delta}^{2}(u \tilde{\lambda}_{\dot{\alpha}}-\tilde{\epsilon}_{\dot{\alpha}})\ e^{u\left( \tilde{\mu}^{\alpha}\epsilon_{\alpha}+\tilde{\eta}^{\prime}_{I}\tilde{q}^{I}+ \tilde{\eta}_{I}\tilde{q}^{\prime I}\right)}\,\\ V=&\int\!\!\frac{du}{u^{3}}\ \ w(u)\ \vec{\delta}^{2}(u \lambda_{\alpha}-\epsilon_{\alpha}\,)\ e^{u\left(\mu^{\dot{\alpha}}\tilde{ \epsilon}_{\dot{\alpha}}+\eta^{\prime I}q_{I}+\eta^{I}q^{\prime}_{I}\right)} \,\\ \tilde{V}=&\int\!\!\frac{du}{u^{3}}\ \ \tilde{w}(u)\ \vec{\delta}^{2}(u \tilde{\lambda}_{\dot{\alpha}}-\tilde{\epsilon}_{\dot{\alpha}})\ e^{u\left( \tilde{\mu}^{\alpha}\epsilon_{\alpha}+\tilde{\eta}^{\prime I}\tilde{q}^{I}+ \tilde{\eta}_{I}\tilde{q}^{\prime I}\right)}\,\end{array} \tag{4.1}\]
where for fixed vertex operators \({\cal W}(u)\) and \(\tilde{\cal W}(u)\) are products of fermionic ghost fields and delta functions of bosonic ghost fields and in the R-sector with an additional factor of the form \(\,u[\tilde{\lambda}\tilde{\epsilon}]\,\) or \(\,u\,\tilde{\eta}_{I}\tilde{q}^{I}\,\) in \({\cal W}(u)\) and \(\,u\langle\lambda\epsilon\rangle\,\) or \(\,u\,\eta^{I}q_{I}\,\) in \(\tilde{\cal W}(u)\), and for integrated vertex
operators \(w(u)\) and \(\tilde{w}(u)\) are
\[w(u) = \left(u[\tilde{\lambda}\tilde{\epsilon}]-u^{2}\,[\tilde{\epsilon} \rho_{2}]\,[\tilde{\epsilon}\tilde{\rho}_{2}]\right)\left(u\,\tilde{\eta}_{I}\, \tilde{q}^{I}-u^{2}\,\,\tilde{q}_{I}\tau_{2}^{\prime I}\,\,\tilde{q}^{J}\! \tilde{\tau}_{2J}^{\prime}\right)_{\tilde{q}^{I}\neq 0}\] \[\tilde{w}(u) = \left(u\langle\lambda\epsilon\rangle\!-u^{2}\!\langle\epsilon \rho_{1}\rangle\langle\epsilon\tilde{\rho}_{1}\rangle\right)\left(u\,\eta^{I}q _{I}\,-u^{2}\,\,q^{I}\tau_{1I}\,\,q_{J}\tilde{\tau}_{1}^{J}\right)_{q_{I}\neq 0 }\,, \tag{4.2}\]
with the same additional factor in the R sector as for fixed vertex operators. The distinctions between the \(q^{\prime}\) and \(q\) parameters and between \(\tilde{q}^{\prime}\) and \(\tilde{q}\) are actually not justified because they would lift the R-symmetry from SU(4) to SU(4)\(\,\otimes\,\)SU(4) (see next section). Also, by setting \(q_{I}\)=0(=\(q_{I}^{\prime}\)) and \(\tilde{q}^{I}\)=0(=\(\tilde{q}^{\prime I}\)) and omitting the new factors in (4.2) involving the \(\eta\) and \(\tau\) fields one obtains simpler vertex operators representing gravitons. Note that integrated vertex operators \(V\) and \(\tilde{V}\) containing the factor with \(\tau\) fields automatically have at least one fermionic twistor component and, therefore, stand for a particle with spin\(\leq\frac{3}{2}\). Further, for every second row in Table 1 the corresponding integrated vertex operators are not covered by (4.2) but they are taken care of by the ones for the equivalent Fourier-transformed states in the other picture.
Tree scattering amplitudes of these vertex operators will contain the reduced determinants of Hodges matrices like in references [13, 1, 2, 3, 4, 5], and also the supersymmetric exponential factor
\[e^{F_{\cal N}}=\exp\!\left[\,\sum_{j\in-\atop k\in+}\frac{u_{j}u_{k}}{\sigma_ {j}\!-\!\sigma_{k}}\tilde{q}_{j}^{I}q_{kI}\right]\,,\,\,q_{I}\!=\!\Omega_{IJ} \tilde{q}^{J}. \tag{4.3}\]
What is new here are the factors involving the \(\tau\) fields. They lead to fermionic Hodges-like matrices with reduced determinants that, because of the Grassmann-odd nature of the components of the super polarization data, are limited in the sense that the same location can occur at most 4 times per term. If one assumes that all vertex operators with \(\tau\) fields stand for particles with spin 1, then each term reduces to a product of propagators between all locations of the same helicity, each end of the propagators appearing exactly twice, except for two that show up only once, one of them representing the location of a fixed vertex operator. By pulling down the exponent from (4.3) twice, once for each fixed vertex operator, one can connect the two propagator products, one for positive and one for negative helicities, and arrive at a Parke-Taylor factor for all these particles. More details will be given in the next section for the more interesting improved model.
The current model has some undesired features in the R sector. The vertex operators have these additional factors arising from non-zero twistor modes, which make the scattering amplitudes look unattractive. Also, although the model features a Parke-Taylor factor for scattering of spin 1 particles, it does not allow for full permutation symmetry of the entries (the same helicities need to be arranged together).
In appendix A another model found by the author earlier is described in the current notation which has the same spectrum as in Table 1 with similarly unappealing scattering amplitudes.
## 5 Improved Anomaly-free Supergravity Model Extension
The model of the previous section could be improved by making the \(L_{0}\) constant \(a\) zero in the R sector. This can be done by introducing 4 bosonic Lagrange multipliers with associated fermionic ghost-antighost pairs and adding 8 fermionic worldsheet spinors or, equivalently, reducing 8 bosonic worldsheet spinors, which keeps the central charge unchanged but reduces \(a\) by 1.
The minimally changed model is:
\[\begin{split} S=\int_{\Sigma}&\mathcal{Z}^{a} \cdot\bar{\partial}\mathcal{Z}_{a}+A_{ab}\mathcal{Z}^{a}\cdot\mathcal{Z}^{b}+ a\lambda^{2}+\tilde{a}\tilde{\lambda}^{2}+S_{\rho_{1}}+S_{\tilde{\rho}_{2}}+S_{ \tau_{1}}+S_{\tau_{2}}\,,\\ &\mathcal{Z}=\left(\lambda_{A},\mu^{A},\eta^{\iota}\right),\; \eta^{\iota}=\left(\eta^{\mathcal{I}},\tilde{\eta}_{\mathcal{I}}\right),\; \eta^{\mathcal{I}}=\left(\eta^{I},\tilde{\eta}_{I}\right),\;\tilde{\eta}_{ \mathcal{I}}=\left(\tilde{\eta}^{\prime}_{I},\eta^{\prime\dot{I}}\right),\\ & I,\dot{I}=1\ldots\frac{\mathcal{N}}{2},\,\mathcal{I}=1,\ldots, \mathcal{N},\,\iota=1,\ldots,2\mathcal{N}\,,\,\mathcal{N}=8\,,\end{split} \tag{5.1}\]
\[\begin{split} S_{\rho}&=\int_{\Sigma}\tilde{\rho}^{A }\bar{\partial}\rho_{A}+b_{a}\lambda^{aA}\rho_{A}+\tilde{b}_{a}\lambda^{a}_{A} \tilde{\rho}^{A}\,,\\ S_{\tau_{1}}&=\int_{\Sigma}\tilde{\tau}_{1}^{I}\, \bar{\partial}\tau_{1I}+d_{1a}\eta^{aI}\tau_{1I}+\tilde{d}_{1a}\eta^{a}_{I} \tilde{\tau}_{1}^{I}+g_{1}\tilde{\tau}_{1}^{I}\tau_{1I}\,,\\ S_{\tau_{2}}&=\int_{\Sigma}\tilde{\tau}_{2\dot{I}} \,\bar{\partial}\tau_{2}^{I}+d_{2a}\tilde{\eta}^{a}_{\dot{I}}\tau_{2}^{I}+ \tilde{d}_{2a}\tilde{\eta}^{a\dot{I}}\tilde{\tau}_{2\dot{I}}+g_{2}\tilde{\tau }_{2\dot{I}}\tau_{2}^{I}\,,\end{split} \tag{5.2}\]
where \(\mathcal{Z}\) is the extended supertwistor from last section, \((\rho_{A},\tilde{\rho}^{A})\) are the same auxiliary fermionic worldsheet spinors, \(\tau_{1I}\) and \(\tau_{2\dot{I}}\) are auxiliary bosonic worldsheet spinors gauging the extended fermionic components of the supertwistor, \((d_{ra},\tilde{d}_{ra})\) and \((g_{r})\) are fermionic and bosonic Lagrange multipliers, respectively, for gauge constraints satisfied by the \(\tau\) fields, and \(a\) and \(\tilde{a}\) are the Lagrange multipliers from the original action (2.2) in section 2. \(j^{H}\) is set to 0 here to ensure the massless limit, mainly because integrated vertex operators for the massive case are unknown. Also, the second half \(\dot{I}\) of indices \(\mathcal{I}\) and the indices of \(\tau_{2\dot{I}}\) have been dotted, for later convenience.
One observation is that the \(\rho_{r}\) and \(\tau_{r}\) fields could be bundled into a couple of supertwistors with reversed statistics, as has been done in Skinner's model [14], although there they transform under the little group. They do not contribute to the spectrum and are considered purely as auxiliary, not as matter fields.
During BRST quantization the additional fermionic \(\{d_{ra},\tilde{d}_{ra}\}\) and bosonic \(\{g_{r}\}\) fields lead to other new bosonic \(\{(\beta^{\prime}_{ra},\gamma^{\prime}_{ra}),(\tilde{\beta}^{\prime}_{ra},\tilde{ \gamma}^{\prime}_{ra})\}\) and fermionic ghosts \(\{(s_{r},t_{r}))\}\), respectively. The SL(2,\(\mathbb{C}\)) anomaly coefficient then becomes like in section 4:
\[a_{sl2}=\frac{3}{2}(\frac{1}{2}(8-2\mathcal{N}))_{\mathcal{Z}}-6_{M\!N}+\frac{3 }{2}(2_{\beta\gamma}+2_{\tilde{\beta}\tilde{\gamma}}+2_{\beta^{\prime}\gamma^{ \prime}}+2_{\tilde{\beta}^{\prime}\tilde{\gamma}^{\prime}})=\frac{3}{2}(8- \mathcal{N})=0\,.\]
The central charge is:
\[c\!=\!(\!-8\!+\!2\mathcal{N})_{\mathcal{Z}}\!\!-\!26_{bc}\!-\!6_{M\!N}\!\!-\!2_ {mn}\!\!-\!2_{\tilde{m}\tilde{n}}\!\!+\!2(4_{\rho}\!+\!4_{\beta\gamma}\!+\!4_{ \tilde{\beta}\tilde{\gamma}}\!\!-\!\frac{1}{2}\mathcal{N}_{\!\tau}\!\!+\!4_{ \beta^{\prime}\gamma^{\prime}}\!\!+\!4_{\tilde{\beta}^{\prime}\tilde{\gamma}^{ \prime}}\!\!-\!2_{st})\!=\!-8\!+\!\mathcal{N}\!=\!0\,.\]
The \(L_{0}\) constant \(a\) stays 1 in the NS sector and in the R sector it changes to:
\[24a\!=\!\!(\!16\!-\!4\mathcal{N})_{\mathcal{Z}}\!\!-\!2_{bc}\!-\!6_{M\!N}\!\!- \!2_{mn}\!-\!2_{\tilde{m}\tilde{n}}\!\!+\!2(\!-8_{p}\!\!+\!4_{\beta\gamma}\!\! +\!4_{\tilde{\beta}\tilde{\gamma}}\!\!+\!\mathcal{N}_{\!\tau}\!\!+4_{\beta^{ \prime}\gamma^{\prime}}\!\!+\!4_{\tilde{\beta}^{\prime}\tilde{\gamma}^{\prime }}\!\!-\!2_{st})\!=\!16\!-\!2\mathcal{N}\!=\!0\,.\]
i.e. \(a=0\) in the R sector, as desired. And the model is anomaly-free as well.
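The three cancellations above are simple bookkeeping sums over the listed field content; a minimal numerical check, with the terms labelled exactly as in the displayed equations, reads:

```python
# Bookkeeping check of the SL(2,C) anomaly, central charge, and R-sector L0 constant for N = 8.
N = 8

a_sl2 = 1.5 * 0.5 * (8 - 2 * N) - 6 + 1.5 * (2 + 2 + 2 + 2)
c     = (-8 + 2 * N) - 26 - 6 - 2 - 2 + 2 * (4 + 4 + 4 - N / 2 + 4 + 4 - 2)
a24_R = (16 - 4 * N) - 2 - 6 - 2 - 2 + 2 * (-8 + 4 + 4 + N + 4 + 4 - 2)

assert a_sl2 == 0 and c == 0 and a24_R == 0
```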
The spectrum can be examined from the value of the \(L_{0}\) constant \(a\). As in the previous section it might look differently depending on the selected polarization picture, \(\tilde{\lambda}_{\dot{\alpha}}\sim\partial/\partial\mu^{\dot{\alpha}},\tilde {\mu}^{\alpha}\sim\partial/\partial\lambda_{\alpha},\tilde{\eta}_{\dot{I}} \sim\partial/\partial\eta^{I}\), \(\tilde{\eta}^{\prime}_{I}\sim\partial/\partial\eta^{I}\) versus \(\lambda_{\alpha}\sim\partial/\partial\tilde{\mu}^{\alpha},\mu^{\dot{\alpha}} \sim\partial/\partial\tilde{\lambda}_{\dot{\alpha}},\eta^{I}\sim\partial/ \partial\tilde{\eta}^{\prime}_{I},\eta^{\prime\dot{I}}\sim\partial/\partial \tilde{\eta}_{\dot{I}}\). Because the fermionic modes are not critical for the consistency of helicity assignment in the polarization, one can try to use in the R sector both of the zero modes \(\eta^{I}_{0}\) and \(\tilde{\eta}_{0\dot{I}}\) as creation operators with \(\tilde{\eta}^{\prime}_{0I}\) and \(\eta^{\prime\dot{I}}_{0}\) as annihilation operators in both pictures, i.e. treating them in symmetric fashion, and similarly the \(-\frac{1}{2}\) modes in the NS sector. This lifts the R-symmetry from SU(4) to SU(4)\(\,\times\,\)SU(4), subgroup of SU(8), as reflected in Table 2. By counting all particle modes of equal spin together one obtains the \(\mathcal{N}=8\) supergravity multiplet:
1 spin \(\pm 2\) boson, 8 spin \(\pm\frac{3}{2}\) fermions, 28 vector bosons, 56 spin \(\pm\frac{1}{2}\) fermions, and 70 scalars, all in all 128 bosonic and 128 fermionic modes.
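This counting is consistent with the standard binomial multiplicities of the \({\cal N}=8\) multiplet, helicity \(2-k/2\) occurring \(\binom{8}{k}\) times; a short check:

```python
from math import comb

# Helicity h = 2 - k/2 appears with multiplicity C(8, k), k = 0, ..., 8.
multiplet = {2 - k / 2: comb(8, k) for k in range(9)}

bosons   = sum(m for h, m in multiplet.items() if h == int(h))   # 1 + 28 + 70 + 28 + 1 = 128
fermions = sum(m for h, m in multiplet.items() if h != int(h))   # 8 + 56 + 56 + 8 = 128
assert bosons == fermions == 128
```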
This applies to the original massive model as well (although from a strict oscillator point of view there are only enough fermionic creation modes for \(\frac{\mathcal{N}}{2}=4\) supersymmetry). Exactly the same spectrum is covered fully in both polarization pictures, and they can be related through a Fourier transformation like in conformal supergravity[6]. Therefore, in principle one can stay just in one picture when using vertex operators and computing scattering amplitudes. This is also valid in the massive case, with some modes potentially grouped together making up massive particles, for instance
\((\frac{3}{2},-\frac{1}{2}|4\!\otimes\!1),(-\frac{3}{2},\frac{1}{2}|1\!\otimes\!4 ),(\frac{3}{2},-\frac{1}{2}|1\!\otimes\!\bar{4}),(-\frac{3}{2},\frac{1}{2}|\bar {4}\!\otimes\!1)\) to represent up to 8 massive spin \(\frac{3}{2}\) particles or
\((\pm 1|1\!\otimes\!6),(\pm 1|6\!\otimes\!1)\) and the 6 from \((0|4\!\otimes\!4=10\!\oplus\!6),(0|\bar{4}\!\otimes\!\bar{4}=\bar{10}\!\oplus\!6)\) for up to 12 massive vector bosons.
First it seems that the spectrum in the NS sector is the same as in the R sector, but because of the requirement of not having more than 2 on-shell oscillator modes it is severely truncated: the supersymmetric multiplet is reduced to one graviton with spin \(\pm 2\), 8 spin \(\pm\frac{3}{2}\) particles, and 28 vector bosons, leaving out 56 spin \(\pm\frac{1}{2}\) fermions and 70 scalars, and thus breaking target space supersymmetry. Further, without these lower spin modes, the particles cannot acquire mass through some symmetry breakdown and must remain massless. On the other hand, in the discussion of section 6 it will be argued that the spectrum
in the NS sector can be viewed as arising from the R sector through a little group gauge transformation that in the quantized theory can be interpreted as a spectral flow operation.
Choosing the same little group gauge and polarization data as in section 4 similar vertex operators (4.1) and (4.2) are obtained in the two polarization pictures but without the additional factor in the R-sector and with the \(\tau^{\prime}\) fields replaced by the ordinary \(\tau\):
\[{\cal V}= \int\!\frac{du}{u^{3}}\ \ {\cal W}(u)\ \bar{\delta}^{2}(u\lambda_{ \alpha}-\epsilon_{\alpha})\ \ e^{u\left(\mu^{\dot{\alpha}}\tilde{\epsilon}_{\dot{\alpha}}+{\eta^{ \prime}}^{I}q_{I}+\eta^{I}q^{\prime}_{I}\right)}\,\] \[\tilde{\cal V}= \int\!\frac{du}{u^{3}}\ \ \tilde{\cal W}(u)\ \bar{\delta}^{2}(u \tilde{\lambda}_{\dot{\alpha}}-\tilde{\epsilon}_{\dot{\alpha}})\ \ e^{u\left(\bar{\mu}^{\alpha} \epsilon_{\alpha}+\bar{\eta}^{\prime}_{I}\bar{q}^{I}+\bar{\eta}_{I}\bar{q}^{ \prime}\right)}\,\] \[V= \int_{\Sigma}\!d\sigma\!\!\int\!\frac{du}{u^{3}}\ \ w(u)\ \ \bar{\delta}^{2}(u \lambda_{\alpha}-\epsilon_{\alpha})\ \ e^{u\left(\mu^{\dot{\alpha}}\tilde{ \epsilon}_{\dot{\alpha}}+{\eta^{\prime}}^{I}q_{I}+\eta^{I}q^{\prime}_{I}\right)}\, \tag{5.2}\] \[\tilde{V}= \int_{\Sigma}\!d\sigma\!\!\int\!\frac{du}{u^{3}}\ \ \tilde{w}(u)\ \ \bar{\delta}^{2}(u \tilde{\lambda}_{\dot{\alpha}}-\tilde{\epsilon}_{\dot{\alpha}})\ \ e^{u\left(\bar{\mu}^{\alpha} \epsilon_{\alpha}+\bar{\eta}^{\prime}_{I}\bar{q}^{I}+\bar{\eta}_{I}\bar{q}^{ \prime}\right)}\,\]
where for fixed vertex operators \({\cal W}(u)\) and \(\tilde{\cal W}(u)\) are again products of fermionic ghost fields and delta functions of bosonic ghost fields, and for integrated vertex operators
\[w(u)= \left(u[\tilde{\lambda}\tilde{\epsilon}]-u^{2}\left[\tilde{ \epsilon}\rho_{2}\right]\left[\tilde{\epsilon}\tilde{\rho}_{2}\right]\right) \left(u\,\tilde{\eta}_{I}\Omega^{I\dot{J}}q_{J}-u^{2}\ q_{\dot{I}}\tau_{2}^{ \dot{I}}\ q_{\dot{J}}\Omega^{j\dot{K}}\tilde{\tau}_{2\dot{K}}\right)_{q_{I}\neq 0}\] \[\tilde{w}(u)= \left(u\langle\lambda\epsilon\rangle-u^{2}\langle\epsilon\rho_ {1}\rangle\langle\epsilon\tilde{\rho}_{1}\rangle\right)\left(u\,\eta^{I} \Omega_{IJ}\tilde{q}^{J}\ -u^{2}\ \bar{q}^{I}\tau_{1I}\ \tilde{q}^{J}\Omega_{JK}\tilde{\tau}_{1}^{J}\right)_{\bar{q}^{I}\neq 0}. \tag{5.3}\]
In contrast to the model in the previous section, \(q^{\prime}\) and \(q\) are distinct parameters and also \(\tilde{q}^{\prime}\) and \(\tilde{q}\) because of the \(\mathop{\rm SU}(4)\otimes\mathop{\rm SU}(4)\) R-symmetry. Further, by setting \(q_{I}\)=0 and \(\tilde{q}^{I}\)=0 and omitting the new factors in (5.3) involving the \(\eta\) and \(\tau\) fields one obtains simpler vertex operators including the ones representing gravitons.
To get the tree scattering matrix, let \(g\) denote the set of positive-helicity vertex operators \(V^{g}\) that do not include the factor with \(\tau_{2}\) fields in \(w(u)\) in (5.3), \(\tilde{g}\) the set of negative-helicity vertex operators \(\tilde{V}^{g}\) that do not include the factor with \(\tau_{1}\) fields in \(\tilde{w}(u)\) in (5.3), \(h\) the set of the other positive-helicity vertex operators \(V\), and \(\tilde{h}\) the set of other negative-helicity vertex operators \(\tilde{V}\). Note that \(g\) and \(\tilde{g}\) include the vertex operators for the gravitons. Then a tree scattering amplitude looks like:
\[{\cal A}=\left\langle\frac{1}{\mathop{\rm vol}\mathop{\rm GL}(2,{\mathbb{C}}) }\ \prod_{j\in\tilde{h}}\!\tilde{V}_{j}\prod_{p\in h}\!V_{p}\ \prod_{i\in\tilde{g}}\!\tilde{V}_{i}^{g}\prod_{k\in g}\!V_{k}^{g}\right\rangle\,,\]
where the factor \(\mathop{\rm vol}\mathop{\rm GL}(2,{\mathbb{C}})\) comes from three zero-modes of the \(c\) ghost and one degree of freedom remaining from gauging the little group [2], and where one member of \(h\) (or of \(g\) when \(h\) is empty) and one member of \(\tilde{h}\) (or of \(\tilde{g}\) when \(\tilde{h}\) is empty) are fixed vertex operators6, fixed for every worldsheet supersymmetry represented in the correlation function.
Footnote 6: \(g\cup h\) and \(\tilde{g}\cup\tilde{h}\) are either both empty or non-empty [15, 16, 17].
The computation of the amplitude is standard [13, 1, 2, 3, 4, 5] except for the product of factors containing the auxiliary \(\tau\) fields leading to reduced determinants of fermionic Hodges-like matrices, similarly to the bosonic fields. Explicitly:
\[{\cal A}=\int\prod_{i\,\in\,\tilde{g}\cup\tilde{h}\cup g\cup h}\frac{du_{i}}{u_{i}^{3}}\,\cdots\;\;{\det}^{\prime}\!H\;{\det}^{\prime}\!\tilde{H}\;\,G^{\prime}\;e^{F_{\cal N}}\,,\]
G\({}^{\prime}\) is 1 for empty \(h\) and \(\tilde{h}\), otherwise stands for the product of reduced determinants \(\det^{\prime}G_{\tilde{h}}\det^{\prime}G_{h}\) of the fermionic Hodges-like matrices \(G_{\tilde{h}},G_{h}\) defined by
\[\begin{split} G_{h}^{lr}&=\frac{u_{l}u_{r}}{\sigma_{l}-\sigma_{r}}\,q_{l\dot{I}}\Omega^{\dot{I}\dot{J}}q_{r\dot{J}}\,,\;l\!\neq\!r\in h,\qquad G_{h}^{ll}=-\!\!\sum_{r\neq l\in h}\!\frac{u_{l}u_{r}}{\sigma_{l}-\sigma_{r}}\,q_{l\dot{I}}\Omega^{\dot{I}\dot{J}}q_{r\dot{J}}\,,\\ G_{\tilde{h}}^{lr}&=\frac{u_{l}u_{r}}{\sigma_{l}-\sigma_{r}}\,\tilde{q}_{l}^{I}\Omega_{IJ}\tilde{q}_{r}^{J}\,,\;\;l\!\neq\!r\in\tilde{h},\qquad G_{\tilde{h}}^{ll}=-\!\!\sum_{r\neq l\in\tilde{h}}\!\frac{u_{l}u_{r}}{\sigma_{l}-\sigma_{r}}\,\tilde{q}_{l}^{I}\Omega_{IJ}\tilde{q}_{r}^{J}\,.\end{split} \tag{5.5}\]
\(G_{h}\) and \(G_{\tilde{h}}\) have co-rank 1 and the reduced determinant \(\det^{\prime}G_{m}\) is basically the product of all but one diagonal element of \(G_{m}\) with all terms cancelled that include some propagator loop of elements in the \(m\) set. This means, for instance, that terms in the scattering amplitude of spin 1 particles with propagator loops containing only helicities of the same sign are forbidden. This is known to be valid for gluon scattering in QCD [18]. When \(g\cup\tilde{g}\) only contains gravitons and all particles in \(h\cup\tilde{h}\) have spin \(\leq 1\) then a single trace term will be a propagator loop with two missing links typically between each of the two fixed vertex operators and some other location which are filled by pulling down the appropriate propagators from the exponential \(e^{F_{\mathcal{N}}}\). Such a single trace loop constitutes a kinematic Parke-Taylor (PT) factor:
\[\text{PT}=N\Big{/}\Big{(}(\sigma_{|h|+|\tilde{h}|}-\sigma_{1})\prod_{r<|h|}(\sigma_{r}-\sigma_{r+1})\;(\sigma_{|h|}-\sigma_{|h|+1})\prod_{|h|<r<|h|+|\tilde{h}|}(\sigma_{r}-\sigma_{r+1})\Big{)}\,.\]
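For reference, the cyclic denominator of such a Parke-Taylor factor for a chosen ordering of the punctures is an elementary product; a small sketch (the numerator \(N\) is not included):

```python
import numpy as np

def parke_taylor_denominator(sigma, order):
    """Cyclic product prod_i (sigma_{o_i} - sigma_{o_{i+1}}) over the ordering 'order'."""
    o = list(order)
    return np.prod([sigma[o[i]] - sigma[o[(i + 1) % len(o)]] for i in range(len(o))])

# toy usage: positive-helicity punctures first, then the negative-helicity ones
sigma = np.array([0.3, 1.7, -2.1, 0.9, 4.2])
pt = 1.0 / parke_taylor_denominator(sigma, order=[0, 1, 2, 3, 4])
```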
In conclusion, for pure graviton scattering the model in this section gives the expected graviton tree scattering amplitude [13, 5] and the main difference with a SYM tree scattering amplitude for spin \(\leq\!1\) particles consists of the additional reduced determinants \(\det^{\prime}\!H\)\(\det^{\prime}\!\tilde{H}\) indicating that all particles exchange gravitons with each other, even without any external gravitons present. For spin \(\leq 1\) particles there might be even more differences due to the general form of the fermionic Hodges-like matrices (5.5).
The actual calculation of an n-point amplitude can proceed by solving the scattering equations and inserting a solution into the various parts of the integrand including the Jacobian. This gives a power series in the Grassmann odd parameters \(q_{\cal I}\) reflecting the \({\cal N}=8\) supersymmetry, and the coefficients describe scattering of particles of different spin.
## 6 Summary and Outlook
This work examined the spectrum of the massive ambitwistor supergravity model found in [3, 4, 5] based on oscillator expansions. It turned out that the spectrum is covered by Table 2 reflecting \({\cal N}=8\) supersymmetry. As the twistors are worldsheet spinors there is a R sector and a NS sector to consider and both have the same particle spectrum except in the NS sector it is truncated because of the constraint that physical states have at most two on-shell oscillator modes.
The massive models considered in [3, 4, 5] have a vanishing little group anomaly coefficient and also a zero Virasoro central charge after a contribution from compactifying extra dimensions. In this article, the supergravity model got first extended by doubling and gauging the fermionic twistor components with help of auxiliary fields in such a way that the resulting model was anomaly-free and the scattering amplitudes for spin \(\leq 1\) particles exhibited a kinematic (no color trace) PT factor like a SYM model but without an external current algebra. The spectrum of the first modification which was limited to massless particles showed similarity to the spectrum of the Berkovits-Witten model though without the disturbing 'dipole' modes. Also, it is the same spectrum as of another model found by the author earlier, described in appendix A. Unfortunately, for both models the scattering amplitudes can look complicated and difficult to handle. Then, a much improved, still massless model variation was presented with the same spectrum as the original massive model and scattering amplitudes that differ from the massless limit of the original model only by a factor consisting of two reduced determinants of fermionic Hodges-like matrices with co-rank 1 that under special circumstances can lead to a Parke-Taylor factor.
The issue about the truncated spectrum in the NS sector can be argued away. This is based on the observation that a little group \(\mathrm{SL}(2,\mathbb{C})\) gauge transformation can change the
pair of supertwistors from the R sector to the NS sector and vice versa, for instance by multiplying one supertwistor with \(\sigma^{\frac{1}{2}}\) and the other one with \(\sigma^{-\frac{1}{2}}\). Thus, from a classical point of view, the NS and R sector are equivalent in the two-twistor models. Therefore, it is sufficient to quantize only the more convenient R sector. On the quantized level, the NS sector can then be regarded as resulting from a spectral flow operation on the R sector, with the \(\pm\frac{1}{2}\) modes in the NS sector becoming generalized zero modes8, overcoming the truncation constraint of the NS spectrum. To have equivalent spectrum before and after spectral flow and to be able to identify the spectrum in the NS sector with the one in the R sector, this interpretation works exactly when for the \(L_{0}\) constant \(a=0\) in the R sector and \(a=1\) in the NS sector, i.e. for the original massive model and the model in section 5 but not for the intermediate model in section 4 or the model in appendix A.
Footnote 8: For spectral flow of twistor modes in the context of ADS/CFT duality, see for instance [19].
Concentrating on the spectrum of \({\cal N}=8\) supersymmetry as shown in Table 2 and comparing modes in the triangles above and below diagonal lines drawn from upper left to lower right, the fermionic (half-integer spin) modes in the lower triangle represent antiparticles of the ones in the upper triangle in both halves of the table. Further, the 6 representation of SU(4) can be conveniently decomposed into an \({\rm SU}(2)\otimes{\rm SU}(2)\otimes{\rm U}(1)\) basis [20], where out of a sextuple the first two elements and the next two can each be used as a doublet for a fundamental representation of SU(2), leaving the last two elements as a third doublet, although not one representing an SU(2). Nevertheless, this makes it possible to consider \((\pm\frac{1}{2}|4\!\otimes\!6)\) and \((\pm\frac{1}{2}|\bar{4}\!\otimes\!6)\) as 3 generations of the spin \(\frac{1}{2}\) content of the Pati-Salam model [10]. Also, it is intriguing to see that the spectrum contains \((\pm 1|4\!\otimes\!\bar{4})=(\pm 1|15\!\oplus\!1)\) exactly once, which is available for the adjoint vector bosons of SU(4), and also two \((\pm 1|1\otimes 6)\), each with a decomposition that includes an adjoint representation of SU(2) [21]9.
Footnote 9: The fact that the N=8 supergravity spectrum contains exactly the spin \(\frac{1}{2}\) content for all the fermions of the standard model and also for 8 massive gravitinos is well known [22, 23].
For future work, when disregarding graviton exchange, i.e. setting the reduced determinants \({\rm det}^{\prime}H\,{\rm det}^{\prime}\tilde{H}\) equal to 1, genus 0 scattering amplitudes of the model in section 5 should be checked against gluon and QCD tree amplitudes from the literature to see how far they match. For instance, what is immediately apparent is that single-trace N\({}^{k}\)MHV amplitudes look the same as in SYM, apart from color traces.
Other open questions for the improved model in section 5 are a massive generalization, loop scattering amplitudes for genus \(\geq\!1\) worldsheets, modular invariance, and unitarity. The ultimate goal, of course, would be to show that it leads to a consistent UV-complete string field theory in twistor space with a classical limit that can be compared with general relativity.
The above solution for the truncated spectrum in the NS sector deserves more examination, working out more details about the spectral flow operation.
Further, the physical meaning of the auxiliary fields remains a mystery. They were introduced in an ad hoc fashion, as a kind of supertwistor with reversed statistics to support worldsheet supersymmetries. Since they do not show up in physical states they are on the level of ghosts; but if they are just a matter of mathematical convenience, what are the deeper-lying principles behind their origin? Maybe they are superfield components in a superspace, although the superfield would look rather complicated because of the many supersymmetries and other worldsheet symmetries breaking the supertwistors apart. On the other hand, if they are remnants of compactifying extra dimensions, the exact reduction mechanism would need to be investigated.
## Appendix A Non-Supersymmetric Model
Here the intermediate model of section 4 is changed to one that gauges worldsheet supersymmetries without the use of (mysterious) auxiliary fields. The model considered here is, in the notation of section 4,
\[S=\int_{\Sigma}{\cal Z}^{a}\cdot\bar{\partial}{\cal Z}_{a}+A_{ab}\,Z^{a}\cdot Z ^{b}+S_{g}\,,\]
with
\[S_{g}=\int_{\Sigma}F^{\alpha\dot{\alpha}}_{ab}\lambda^{a}_{\alpha}\tilde{ \lambda}^{b}_{\dot{\alpha}}+G^{\alpha I}_{ab}\lambda^{a}_{\alpha}\tilde{\eta} ^{b}_{I}+\tilde{G}^{\dot{\alpha}I}_{ab}\tilde{\lambda}^{a}_{\dot{\alpha}}\eta ^{b}_{I}\,,\]
where the fermionic twistor components \(\eta^{a}_{I}\) do not participate in the little group symmetry of the two twistors which breaks the supertwistors apart. \(F^{\alpha\dot{\alpha}}_{ab}\) gauges the conformal translations and \(G^{\alpha I}_{ab},\tilde{G}^{\dot{\alpha}I}_{ab}\) gauge the superconformal translations.
BRST quantization leads to \((b,c)\) ghosts for worldsheet gravity and:
from the bosonic fields \(\{A_{ab},F^{\alpha\dot{\alpha}}_{ab}\}\) to fermionic ghosts \(\{(M_{ab},N_{ab}),(f^{\alpha\dot{\alpha}}_{ab},e^{\alpha\dot{\alpha}}_{ab})\}\) and
from the fermionic fields \(\{G^{\alpha I}_{ab},\tilde{G}^{\dot{\alpha}I}_{ab}\}\) to bosonic ghosts \(\{(\beta^{\alpha I}_{ab},\gamma^{\alpha I}_{ab}),(\tilde{\beta}^{\dot{\alpha}I}_{ab},\tilde{\gamma}^{\dot{\alpha}I}_{ab})\}\).
The SL(2,\(\mathbb{C}\)) anomaly coefficient is:
\[a_{sl2}=\frac{3}{2}(4_{\cal Z})-6(4_{fe})+\frac{3}{2}(2{\cal N}_{\beta\gamma} +2{\cal N}_{\tilde{\beta}\tilde{\gamma}})-6_{M\!N}=-6(4-{\cal N})\,.\]
The central charge is
\[c=(-8+2{\cal N})_{\cal Z}-26_{bc}-6_{M\!N}-32_{fe}+8{\cal N}_{\beta\gamma}+8{ \cal N}_{\tilde{\beta}\tilde{\gamma}}=-18(4-{\cal N})\,.\]
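Expanding the two sums term by term (a consistency check worked out here for convenience, not spelled out in the original) gives
\[a_{sl2}=6-24+6{\cal N}-6=6{\cal N}-24\,,\qquad c=(2{\cal N}+16{\cal N})-(8+26+6+32)=18{\cal N}-72\,,\]
both of which are proportional to \(4-{\cal N}\).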
Therefore, the theory is anomaly-free for \(\mathcal{N}=4\). Then the \(L_{0}\) constant \(a\) is in both the NS and R sector given by
\[24a=(0)_{\mathcal{Z}}-2_{bc}-6_{M\!N}-32_{fe}+8\mathcal{N}_{\beta\gamma}+8 \mathcal{N}_{\tilde{\beta}\tilde{\gamma}}=24\,,\]
i.e. \(a=1\), exactly as for the intermediate model in section 4. Although \(\mathcal{N}\) here has half the value, the model has the same R-symmetry because the index \(a\) of \(\eta_{I}^{a}\) is not part of the little group but of the R-symmetry. This means the two models have the same spectrum, although the internal little group representation does not need to be the same across the R-symmetry multiplet when disregarding target space supersymmetry10. The current model was originally found by the author in other notation [8] and explored further in [9], which provides vertex operators and scattering amplitudes. Because it suffers from similar issues as the model in section 4, it can be abandoned in favor of the improved model in section 5, which has more potential to be realistic.
Footnote 10: This allows an interpretation of the spectrum to include 3 generations of the Pati-Salam model [9], similar to the model in section 5.
|
2308.01910 | Deep Policy Gradient Methods in Commodity Markets | The energy transition has increased the reliance on intermittent energy
sources, destabilizing energy markets and causing unprecedented volatility,
culminating in the global energy crisis of 2021. In addition to harming
producers and consumers, volatile energy markets may jeopardize vital
decarbonization efforts. Traders play an important role in stabilizing markets
by providing liquidity and reducing volatility. Several mathematical and
statistical models have been proposed for forecasting future returns. However,
developing such models is non-trivial due to financial markets' low
signal-to-noise ratios and nonstationary dynamics.
This thesis investigates the effectiveness of deep reinforcement learning
methods in commodities trading. It formalizes the commodities trading problem
as a continuing discrete-time stochastic dynamical system. This system employs
a novel time-discretization scheme that is reactive and adaptive to market
volatility, providing better statistical properties for the sub-sampled
financial time series. Two policy gradient algorithms, an actor-based and an
actor-critic-based, are proposed for optimizing a transaction-cost- and
risk-sensitive trading agent. The agent maps historical price observations to
market positions through parametric function approximators utilizing deep
neural network architectures, specifically CNNs and LSTMs.
On average, the deep reinforcement learning models produce an 83 percent
higher Sharpe ratio than the buy-and-hold baseline when backtested on
front-month natural gas futures from 2017 to 2022. The backtests demonstrate
that the risk tolerance of the deep reinforcement learning agents can be
adjusted using a risk-sensitivity term. The actor-based policy gradient
algorithm performs significantly better than the actor-critic-based algorithm,
and the CNN-based models perform slightly better than those based on the LSTM. | Jonas Hanetho | 2023-06-14T11:50:23Z | http://arxiv.org/abs/2308.01910v1 | # Deep Policy Gradient Methods
## Deep Policy Gradient Methods in Commodity Markets
* (C) 2023 Jonas Rotschi Hanetho
Deep Policy Gradient Methods in Commodity Markets
[http://www.duo.uio.no/](http://www.duo.uio.no/)
Printed: Reprosentralen, University of Oslo
## Acknowledgements
This thesis would not have been possible without my supervisors, Dirk Hesse and Martin Giese. My sincere thanks are extended to Dirk for his excellent guidance and mentoring throughout this project and to Martin for his helpful suggestions and advice. Finally, I would like to thank the Equinor data science team for insightful discussions and for providing me with the tools needed to complete this project.
## Abstract
The energy transition has increased the reliance on intermittent energy sources, destabilizing energy markets and causing unprecedented volatility, culminating in the global energy crisis of 2021. In addition to harming producers and consumers, volatile energy markets may jeopardize vital decarbonization efforts. Traders play an important role in stabilizing markets by providing liquidity and reducing volatility. Forecasting future returns is an integral part of any financial trading operation, and several mathematical and statistical models have been proposed for this purpose. However, developing such models is non-trivial due to financial markets' low signal-to-noise ratios and nonstationary dynamics.
This thesis investigates the effectiveness of deep reinforcement learning methods in commodities trading. It presents related work and relevant research in algorithmic trading, deep learning, and reinforcement learning. The thesis formalizes the commodities trading problem as a continuing discrete-time stochastic dynamical system. This system employs a novel time-discretization scheme that is reactive and adaptive to market volatility, providing better statistical properties for the sub-sampled financial time series. Two policy gradient algorithms, an actor-based and an actor-critic-based, are proposed for optimizing a transaction-cost- and risk-sensitive trading agent. The agent maps historical price observations to market positions through parametric function approximators utilizing deep neural network architectures, specifically CNNs and LSTMs.
On average, the deep reinforcement learning models produce an 83 percent higher Sharpe ratio than the buy-and-hold baseline when backtested on front-month natural gas futures from 2017 to 2022. The backtests demonstrate that the risk tolerance of the deep reinforcement learning agents can be adjusted using a risk-sensitivity term. The actor-based policy gradient algorithm performs significantly better than the actor-critic-based algorithm, and the CNN-based models perform slightly better than those based on the LSTM. The backtest results indicate the viability of deep reinforcement learning-based algorithmic trading in volatile commodity markets.
List of Figures
* 2.1 Time series cross-validation (backtesting) compared to standard cross-validation from [1].
* 3.1 An illustration of the relationship between the capacity of a function approximator and the generalization error from [1].
* 3.2 Feedforward neural network from [12].
* 3.3 ReLU activation function from [12].
* 3.4 Leaky ReLU activation function from [13].
* 3.5 An example of the effect of weight decay with parameter \(\lambda\) on a high-dimensional polynomial regression model from [1].
* 3.6 3D convolutional layer from [12].
* 3.7 Recurrent neural network from [12].
* 3.8 LSTM cell from [12].
* 4.1 Agent-environment interaction from [13].
* 7.1 Policy network architecture
* 7.2 Q-network architecture
* 7.3 Convolutional sequential information layer architecture
* 7.4 LSTM sequential information layer architecture
* 8.1 The training-validation-test split
* 8.2 Cumulative logarithmic trade returns for \(\lambda_{\sigma}=0\)
* 8.3 Boxplot of monthly logarithmic trade returns for \(\lambda_{\sigma}=0.01\)
* 8.4 Cumulative logarithmic trade returns for \(\lambda_{\sigma}=0.01\)
* 8.5 Boxplot of monthly logarithmic trade returns for \(\lambda_{\sigma}=0.01\)
* 8.6 Cumulative logarithmic trade returns for \(\lambda_{\sigma}=0.1\)
* 8.7 Boxplot of monthly logarithmic trade returns for \(\lambda_{\sigma}=0.1\)
* 8.8 Cumulative logarithmic trade returns for \(\lambda_{\sigma}=0.2\)
* 8.9 Boxplot of monthly logarithmic trade returns for \(\lambda_{\sigma}=0.2\)
List of Tables
* 1 Hyperparameters
* 2 Backtest results
###### Contents
* 1 Introduction
* 1.1 Motivation
* 1.2 Problem description
* 1.3 Thesis Organization
* I Background
* 2 Algorithmic trading
* 2.1 Commodity markets
* 2.2 Financial trading
* 2.3 Modern portfolio theory
* 2.4 Efficient market hypothesis
* 2.5 Forecasting
* 2.6 Mapping forecasts to market positions
* 2.7 Feature engineering
* 2.8 Sub-sampling schemes
* 2.9 Backtesting
* 3 Deep learning
* 3.1 Machine learning
* 3.1.1 No-free-lunch theorem
* 3.1.2 The curse of dimensionality
* 3.2 Supervised learning
* 3.2.1 Function approximation
* 3.3 Artificial neural networks
* 3.3.1 Feedforward neural networks
* 3.3.2 Parameter initialization
* 3.3.3 Gradient-based learning
* 3.3.4 Backpropagation
* 3.3.5 Activation function
* 3.3.6 Regularization
* 3.3.7 Batch normalization
* 3.3.8 Universal approximation theorem
* 3.3.9 Deep neural networks
* 3.3.10 Convolutional neural networks
* 3.3.11 Recurrent neural networks
* 4 Reinforcement learning
* 4.1 Introduction
* 4.2 Markov decision process
* 4.2.1 Infinite Markov decision process
* 4.2.2 Partially observable Markov decision process
* 4.3 Rewards
* 4.4 Value function and policy
* 4.5 Function approximation
* 4.6 Policy gradient methods
* 4.6.1 REINFORCE
* 4.6.2 Actor-critic
* II Methodology
* 5 Problem Setting
* 5.1 Assumptions
* 5.2 Time Discretization
* 5.3 State Space
* 5.4 Action Space
* 5.5 Reward Function
* 6 Reinforcement learning algorithm
* 6.1 Immediate reward environment
* 6.2 Direct policy gradient
* 6.3 Deterministic actor-critic
* 7 Network topology
* 7.1 Network input
* 7.2 Policy network
* 7.3 Q-network
* 7.4 Sequential information layer
* 7.4.1 Convolutional neural network
* 7.4.2 Long Short-Term Memory
* 7.5 Decision-making layer
* 7.6 Network optimization
* 7.6.1 Regularization
* III Experiments
* 8 Experiment and Results
* 8.1 Materials and Methods
* 8.1.1 Baselines
* 8.1.2 Hyperparameters
* 8.1.3 Training scheme
* 8.1.4 Performance metrics
* 8.1.5 Dataset
* 8.2 Results
* 8.3 Discussion of results
* 8.3.1 Risk/reward
* 8.3.2 RL models
* 8.3.3 Networks
* 8.4 Discussion of model
* 8.4.1 Environment
* 8.4.2 Optimization
* 8.4.3 Interpretability and trust
* 9 Future work
* 10 Conclusion
List of Abbreviations
Introduction
### Motivation
The transition to sustainable energy sources is one of the most critical challenges facing the world today. By 2050, the European Union aims to become carbon neutral [eur]. However, rising volatility in energy markets, culminating in the 2021 global energy crisis, complicates this objective. Supply and demand forces determine price dynamics, where an ever-increasing share of supply stems from intermittent renewable energy sources such as wind and solar power. Increasing reliance on intermittent energy sources leads to unpredictable energy supply, contributing to volatile energy markets [24]. Already volatile markets are further destabilized by evolutionary traits such as fear and greed, causing human commodity traders to overreact [18]. Volatile markets are problematic for producers and consumers, and failure to mitigate these concerns may jeopardize decarbonization targets.
Algorithmic trading agents can stabilize commodity markets by systematically providing liquidity and aiding price discovery [19, 20]. Developing these methods is non-trivial as financial markets are non-stationary with complicated dynamics [21]. _Machine learning_ (ML) has emerged as the preferred method in algorithmic trading due to its ability to learn to solve complicated tasks by leveraging data [19]. The majority of research on ML-based algorithmic trading has focused on forecast-based _supervised learning_ (SL) methods, which tend to ignore non-trivial factors such as transaction costs, risk, and the additional logic associated with mapping forecasts to market positions [19]. _Reinforcement learning_ (RL) presents a suitable alternative to account for these factors. In reinforcement learning, autonomous agents learn to perform tasks in a time-series environment through trial and error without human supervision. Around the turn of the millennium, Moody and his collaborators [22, 23, 24] made several significant contributions to this field, empirically demonstrating the advantages of reinforcement learning over supervised learning for algorithmic trading.
In the last decade, the _deep learning_ (DL) revolution has made exceptional progress in areas such as image classification [10] and natural language processing [25], characterized by complex structures and high signal-to-noise ratios. The strong representation ability of deep learning methods has even translated to forecasting low signal-to-noise financial data [26, 15, 14]. In complex, high-dimensional environments, deep reinforcement learning (deep RL), i.e., integrating deep learning techniques into reinforcement learning, has yielded impressive results. Noteworthy contributions include achieving superhuman play in Atari games [23] and chess [24], and training a robot arm to solve the Rubik's cube [1]. A significant breakthrough was achieved in 2016 when the deep reinforcement learning-based computer program AlphaGo[24] beat top Go player Lee Sedol. In addition to learning by reinforcement learning through self-play, AlphaGo uses supervised learning techniques to learn from a database of historical games. In 2017, an improved
version called AlphaGo Zero[SSS\({}^{+}\)17], which begins with random play and relies solely on reinforcement learning, comprehensively defeated AlphaGo. Deep RL has thus far been primarily studied in the context of game-playing and robotics, and its potential application to financial trading remains largely unexplored. Combining the two seems promising, given the respective successes of reinforcement learning and deep learning in algorithmic trading and forecasting.
### Problem description
This thesis investigates the effectiveness of deep reinforcement learning methods in commodities trading. It examines previous research in algorithmic trading, state-of-the-art reinforcement learning, and deep learning algorithms. The most promising methods are implemented, along with novel improvements, to create a transaction-cost- and risk-sensitive parameterized agent directly outputting market positions. The agent is optimized using reinforcement learning algorithms, while deep learning methods extract predictive patterns from raw market observations. These methods are evaluated out-of-sample by backtesting on energy futures.
Machine learning relies on generalizability. A common criticism against algorithmic trading approaches is their alleged inability to generalize to "extreme" market conditions [13]. This thesis investigates the performance of algorithmic trading agents out-of-sample under unprecedented market conditions caused by the energy crisis during 2021-2022. It will address the following research questions:
1. Can the risk of algorithmic trading agents operating in volatile markets be controlled?
2. What reinforcement learning algorithms are suitable for optimizing an algorithmic training agent in an online, continuous time setting?
3. What deep learning architectures are suitable for modeling noisy, non-stationary financial data?
### Thesis Organization
The thesis consists of three parts: the background (part I), the methodology (part II), and the experiments (part III). The list below provides a brief outline of the chapters in this thesis:
* **Chapter 2:** Overview of relevant concepts in algorithmic trading.
* **Chapter 3:** Overview of relevant machine learning and deep learning concepts.
* **Chapter 4:** Overview of relevant concepts in reinforcement learning.
* **Chapter 5:** Formalization of the problem setting.
* **Chapter 6:** Description of reinforcement learning algorithms.
* **Chapter 7:** Description of the neural network function approximators.
* **Chapter 8:** Detailed results from experiments.
* **Chapter 9:** Suggested future work.
* **Chapter 10:** Summary of contributions, results, and main conclusions.
## Part I Background
Algorithmic trading
A phenomenon commonly described as an arms race has resulted from fierce competition in financial markets. In this phenomenon, market participants compete to remain on the right side of information asymmetry, which further reduces the signal-to-noise ratio and the frequency at which information arrives and is absorbed by the market [11]. An increase in volatility and the emergence of a highly sophisticated breed of traders called high-frequency traders have further complicated already complex market dynamics. In these developed, modern financial markets, the dynamics are so complex and change at such a high frequency that humans will have difficulty competing. Indeed, there is reason to believe that machines already outperform humans in the world of financial trading. The algorithmic hedge fund Renaissance Technologies, founded by famed mathematician Jim Simons, is considered the most successful hedge fund ever. From 1988 to 2018, Renaissance Technologies' Medallion fund generated 66 percent annualized returns before fees relying exclusively on algorithmic strategies [20]. In 2020, it was estimated that algorithmic trading accounts for around 60-73 percent of U.S. and European equity trading, up from just 15 percent in 2003 [12]. Thus, it is clear that algorithms already play a significant role in financial markets. Due to the rapid progress of computing power1 relative to human evolution, this importance will likely only grow.
Footnote 1: Moore’s law states that the number of transistors in an integrated circuit doubles roughly every two years.
This chapter provides an overview of this thesis's subject matter, algorithmic trading on commodity markets, examines related work, and justifies the algorithmic trading methods described in part II. Section 2.1 presents a brief overview of commodity markets and energy futures contracts. Sections 2.2, 2.3, and 2.4 introduce some basic concepts related to trading financial markets that are necessary to define and justify a trading agent's goal of maximizing risk-adjusted returns. This goal has two sub-goals: forecasting returns and mapping forecasts to market positions, which are discussed separately in sections 2.5 and 2.6. Additionally, these sections provide an overview of how the concepts introduced in the following chapters 3 and 4 can be applied to algorithmic trading and provide an overview of related research. The sections 2.7 and 2.8 describe how to represent a continuous financial market as discrete inputs to an algorithmic trading system. To conclude, section 2.9 introduces backtesting, a form of cross-validation used to evaluate algorithmic trading agents.
### Commodity markets
Energy products trade alongside other raw materials and primary products on commodity markets. The commodity market is an exchange that matches buyers and sellers of the products offered at the market. Traditionally trading was done in an open-outcry manner, though now an electronic limit order book is used to maintain a continuous market. Limit orders specify the direction, quantity, and acceptable price of a security. Limit orders are compared to existing
orders in the limit order book when they arrive on the market. If the prices overlap, a trade occurs at the price set by the earlier (resting) order. The exchange makes money by charging a fee for every trade, usually a small percentage of the total amount traded.
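To make the matching rule concrete, the following minimal Python sketch matches an incoming limit order against resting orders of the opposite side; the data structures, price-time ordering, and example prices are illustrative assumptions rather than any exchange's actual specification.

```python
from dataclasses import dataclass

@dataclass
class Order:
    side: str      # "buy" or "sell"
    price: float   # limit price
    qty: float     # remaining quantity

def match_incoming(incoming: Order, book: list[Order]) -> list[tuple[float, float]]:
    """Match an incoming limit order against resting orders of the opposite side.

    Resting orders are assumed sorted by price-time priority (best price first).
    Returns a list of (price, quantity) fills; trades execute at the resting order's price.
    """
    fills = []
    for resting in book:
        if incoming.qty <= 0:
            break
        crosses = (incoming.side == "buy" and incoming.price >= resting.price) or \
                  (incoming.side == "sell" and incoming.price <= resting.price)
        if not crosses:
            break  # the best remaining resting order no longer overlaps in price
        traded = min(incoming.qty, resting.qty)
        fills.append((resting.price, traded))
        incoming.qty -= traded
        resting.qty -= traded
    # drop fully filled resting orders from the book
    book[:] = [o for o in book if o.qty > 0]
    return fills

# Example: a buy order at 3.55 crossing the best resting sell order
asks = [Order("sell", 3.50, 5.0), Order("sell", 3.60, 5.0)]
print(match_incoming(Order("buy", 3.55, 7.0), asks))  # fills: [(3.5, 5.0)]; 2.0 remains unfilled
```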
The basis of energy trade is energy futures, derivative contracts with energy products as the underlying asset [1]. Futures contracts are standardized forward contracts listed on stock exchanges. They are interchangeable, which improves liquidity. Futures contracts obligate a buyer and seller to transact a given quantity of the underlying asset at a future date and price. The quantity, quality, delivery location, and delivery date are all specified in the contract. Futures contracts are usually identified by their expiration month. The "front-month" is the nearest expiration date and usually represents the most liquid market. Natural gas futures expire three business days before the first calendar day of the delivery month. To avoid physical delivery of the underlying commodity, the contract holder must sell their holding to the market before expiry. Therefore, the futures and underlying commodity prices converge as the delivery date approaches. A futures contract achieves the same outcome as buying a commodity on the spot market on margin and storing it for a future date. The prices of these two alternatives are therefore linked, since any discrepancy presents an arbitrage opportunity. The difference in price between a futures contract and the spot price of the underlying commodity will therefore depend on the financing cost, storage cost, and convenience yield of holding the physical commodity over the futures contract. Physical traders use futures as a hedge while transporting commodities from producer to consumer. If a trader wishes to extend the expiry of his futures contract, he can "roll" the contract by closing the contract about to expire and entering into a contract with the same terms but a later expiry date [1]. The "roll yield" is the difference in price for these two contracts and might be positive or negative. The exchange clearinghouse uses a margin system with daily settlements between parties to mitigate counterparty risk [1].
### Financial trading
Financial trading is the act of buying and selling financial assets. Owning a financial asset is called being _long_ that asset, which will realize a profit if the asset price increases and suffer a loss if the asset price decreases. _Short_-selling refers to borrowing, selling, and then, at a later time, repurchasing a financial asset and returning it to the lender with the hopes of profiting from a price drop during the loan term. Short-selling allows traders to profit from falling prices.
### Modern portfolio theory
Harry Markowitz laid the groundwork for what is known as Modern Portfolio Theory (MPT) [14]. MPT assumes that investors are risk-averse and advocates maximizing risk-adjusted returns. The _Sharpe ratio_[2] is the most widely-used measurement of risk-adjusted return developed by economist
William F. Sharpe. The Sharpe ratio compares excess return with the standard deviation of investment returns and is defined as
\[Sharpe\;ratio=\frac{\mathbb{E}[r_{t}-\bar{r}]}{\sqrt{var[r_{t}-\bar{r}]}}\simeq \frac{\mathbb{E}[r_{t}]}{\sigma_{r_{t}}} \tag{2.1}\]
where \(\mathbb{E}[r_{t}]\) is the expected return over \(T\) samples, \(\bar{r}\) is the risk-free rate, and \(\sigma_{r_{t}}>0\) is the standard deviation of the portfolio's excess return. Due to negligibly low interest rates, the risk-free rate is commonly set to \(\bar{r}=0\). The philosophy of MPT is that the investor should be compensated through higher returns for taking on higher risk. The St. Petersburg paradox2 illustrates why maximizing expected reward in a risk-neutral manner might not be what an individual wants. Although market participants have wildly different objectives, this thesis will adopt the MPT philosophy of assuming investors want to maximize risk-adjusted returns. Hence, the goal of the trading agent described in this thesis will be to maximize the risk-adjusted returns represented by the Sharpe ratio. Maximizing future risk-adjusted returns can be broken down into two sub-goals: forecasting future returns and mapping the forecast to market positions. However, doing so in highly efficient and competitive financial markets is non-trivial.
Footnote 2: For an explanation of the paradox, see the article [2].
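For concreteness, equation (2.1) with \(\bar{r}=0\) can be computed from a series of periodic returns as in the following sketch; the annualization factor of 252 trading days is an assumption that depends on the sampling frequency.

```python
import numpy as np

def sharpe_ratio(returns: np.ndarray, periods_per_year: int = 252) -> float:
    """Annualized Sharpe ratio with the risk-free rate set to zero (eq. 2.1)."""
    returns = np.asarray(returns, dtype=float)
    std = returns.std(ddof=1)
    if std == 0:
        return 0.0
    return np.sqrt(periods_per_year) * returns.mean() / std

# Example: daily returns of a hypothetical strategy
rng = np.random.default_rng(0)
daily_returns = rng.normal(loc=5e-4, scale=1e-2, size=252)
print(round(sharpe_ratio(daily_returns), 2))
```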
### Efficient market hypothesis
Actively trading a market suggests that the trader is dissatisfied with market returns and believes there is potential for extracting excess returns, or alpha. Most academic approaches to finance are based on the Efficient Market Hypothesis (EMH) [12], which states that all available information is fully reflected in the prices of financial assets at any time. According to the EMH, a financial market is a stochastic martingale process. As a consequence, searching for alpha is a futile effort as the expected future return of a non-dividend paying asset is the present value, regardless of past information, i.e.,
\[\mathbb{E}[R_{t+1}|I_{t}]=R_{t} \tag{2.2}\]
Practitioners and certain parts of academia heavily dispute the EMH. Behavioral economists reject the idea of rational markets and believe that human evolutionary traits such as fear and greed distort market participants' decisions, creating irrational markets. The Adaptive Market Hypothesis (AMH) [14] reconciles the efficient market hypothesis with behavioral economics by applying evolutionary principles (competition, adaptation, and natural selection) to financial interactions. According to the AMH, what behavioral economists label irrational behavior is consistent with an evolutionary model of individuals adapting to a changing environment. Individuals within the market are continually learning the market dynamics, and as they do, they adapt their trading strategies, which in turn changes the dynamics of the market. This loop creates
complicated price dynamics. Traders who adapt quickly to changing dynamics can exploit potential inefficiencies. Based on the AMH philosophy, this thesis hypothesizes that there are inefficiencies in financial markets that can be exploited, with the recognition that such opportunities are limited and challenging to discover.
### Forecasting
Unless a person is gambling, betting on the price movements of volatile financial assets only makes sense if the trader has a reasonable idea of where the price is moving. Since traders face non-trivial transaction costs, the expected value of a randomly selected trade is negative. Hence, as described by the gambler's ruin, a person gambling on financial markets will eventually go bankrupt due to the law of large numbers. Forecasting price movements, i.e., making predictions based on past and present data, is a central component of any financial trading operation and an active field in academia and industry. Traditional approaches include fundamental analysis, technical analysis, or a combination of the two [1]. These can be further simplified into qualitative and quantitative approaches (or a combination). A qualitative approach, i.e., fundamental analysis, entails evaluating the subjective aspects of a security [1], which falls outside the scope of this thesis. Quantitative (i.e., technical) traders use past data to make predictions [1]. The focus of this thesis is limited to fully quantitative approaches.
Developing quantitative forecasts for the price series of financial assets is non-trivial as financial markets are non-stationary with a low signal-to-noise ratio [13]. Furthermore, modern financial markets are highly competitive and effective. As a result, easily detectable signals are almost certainly arbitraged out. Researchers and practitioners use several mathematical and statistical models to identify predictive signals leading to excess returns. Classical models include the _autoregressive integrated moving average_ (ARIMA) and the _generalized autoregressive conditional heteroskedasticity_ (GARCH). The ARIMA is a linear model and a generalization of the _autoregressive moving average_ (ARMA) that can be applied to time series with nonstationary mean (but not variance) [20]. The assumption of constant variance (i.e., volatility) is not valid for financial markets where volatility is stochastic [13]. The GARCH is a non-linear model developed to handle stochastic variance by modeling the error variance as an ARMA model [21]. Although the ARIMA and GARCH have practical applications, their performance in modeling financial time series is generally unsatisfactory [22, 23].
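Both classical models are available in standard Python libraries; the following minimal sketch (assuming the `statsmodels` and `arch` packages are installed and using simulated returns in place of market data) fits an ARIMA(1,0,1) and a GARCH(1,1):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA   # classical linear model for the conditional mean
from arch import arch_model                      # GARCH-family models for the conditional variance

# r: a 1-D array of (log) returns; simulated data stands in for market data here
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, size=1000)

# ARIMA(1,0,1): models the conditional mean, assumes constant variance
arima_res = ARIMA(r, order=(1, 0, 1)).fit()

# GARCH(1,1): models the conditional variance; returns scaled to percent for numerical stability
garch_res = arch_model(r * 100, vol="GARCH", p=1, q=1).fit(disp="off")

print(arima_res.params)
print(garch_res.params)
```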
Over the past 20 years, the availability and affordability of computing power, storage, and data have lowered the barrier of entry to more advanced algorithmic methods. As a result, researchers and practitioners have turned their attention to more complex machine learning methods because of their ability to identify signals and capture relationships in large datasets. Initially, there was a flawed belief that the low signal-to-noise ratio leaves only simple forecasts viable, such as those based on low-dimensional ordinary least squares [14].
With the recent deep learning revolution, deep neural networks have demonstrated strong representation abilities when modeling time series data [14]. The Makridakis competition evaluates time series forecasting methods. In its fifth installment held in 2020, all 50 top-performing models were based on deep learning architectures [17]. A considerable amount of recent empirical research suggests that deep learning models significantly outperform traditional models like the ARIMA and GARCH when forecasting financial time series [14, 15, 16, 17]. These results are somewhat puzzling. The risk of overfitting is generally higher for noisy data like financial data. Moreover, the loss function for DNNs is non-convex, which means gradient-based training is not guaranteed to find a global minimum. Despite the elevated overfitting risk and the massive over-parameterization of DNNs, they still demonstrate stellar generalization. Thus, based on recent research, the thesis will apply deep learning techniques to model financial time series.
A review of deep learning methods in financial time series forecasting [11] found that LSTMs were the preferred choice in sequence modeling, possibly due to their ability to remember both long- and short-term dependencies. Convolutional neural networks are another common choice. CNNs are best known for their ability to process 2D grids such as images; however, they have shown a solid ability to model 1D grid time series data. Using historical prices, Hiransha et al. [11] tested FFNs, vanilla RNNs, LSTMs, and CNNs on forecasting next-day stock market returns on the National Stock Exchange (NSE) of India and the New York Stock Exchange (NYSE). In the experiment, CNNs outperformed other models, including the LSTM. These deep learning models can extract generalizable patterns from the price series alone [11].
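To illustrate the two architectures most commonly used in this literature, the sketch below defines a minimal 1D-CNN and a minimal LSTM next-step forecaster in PyTorch; the layer sizes and lookback window are arbitrary assumptions, not the networks used later in this thesis.

```python
import torch
import torch.nn as nn

LOOKBACK = 50  # number of past returns fed to the model (assumption)

class CNNForecaster(nn.Module):
    """1D convolution over a window of past returns -> next-period return."""
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels=1, out_channels=16, kernel_size=5),
            nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=5),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * (LOOKBACK - 8), 1),  # two k=5 convolutions shorten the window by 8
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, 1, LOOKBACK)
        return self.net(x)

class LSTMForecaster(nn.Module):
    """LSTM over the same window; the last hidden state predicts the next return."""
    def __init__(self, hidden: int = 32) -> None:
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, LOOKBACK, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])

returns = torch.randn(8, LOOKBACK)                    # a dummy batch of return windows
print(CNNForecaster()(returns.unsqueeze(1)).shape)    # torch.Size([8, 1])
print(LSTMForecaster()(returns.unsqueeze(-1)).shape)  # torch.Size([8, 1])
```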
### Mapping forecasts to market positions
Most research on ML in financial markets focuses on forecast-based supervised learning approaches [18]. These methods tend to ignore how to convert forecasts into market positions or use some heuristics like the Kelly criterion to determine optimal position sizing [19]. The forecasts are usually optimized by minimizing a loss function like the Mean Squared Error (MSE). An accurate forecast (in the form of a lower MSE) may lead to a more profitable trader, but this is not always true. Not only does the discovered signal need adequate predictive power, but it must consistently produce reliable directional calls. Moreover, the mapping from forecast to market position needs to consider transaction costs and risk, which is challenging in a supervised learning framework [20]. Neglecting transaction costs can lead to aggressive trading and overestimation of returns. Neglecting risk can lead to trading strategies that are not viable in the real world. Maximizing risk-adjusted returns is only feasible when accounting for transaction costs and risk. These shortcomings are addressed using reinforcement learning [20, 21]. Using RL, deep neural networks can be trained to output market positions directly. Moreover, the DNN can be jointly optimized for risk- and transaction-cost-sensitive returns, thus directly optimizing for the true goal: maximizing risk-adjusted returns.
Moody and Wu [14] and Moody et al. [15] empirically demonstrated the advantages of reinforcement learning relative to supervised learning. In particular, they demonstrated the difficulty of accounting for transaction costs using a supervised learning framework. A significant contribution is their model-free, policy-based RL algorithm for trading financial instruments, called _recurrent reinforcement learning_ (RRL). The name refers to the recursive mechanism that stores the past action as an internal state of the environment, allowing the agent to consider transaction costs. The agent outputs market positions and is limited to a discrete action space \(a_{t}\in\{-1,0,1\}\), corresponding to maximally short, no position, and maximally long. At time \(t\), the previous action \(a_{t-1}\) is fed into the policy network \(f_{\theta}\) along with the external state of the environment \(s_{t}\) in order to make the trade decision, i.e.,
\[a_{t}=f_{\theta}(s_{t},a_{t-1})\]
where \(f_{\theta}\) is a linear function, and the external state is constructed using the past 8 returns. The return \(r_{t}\) is realized at the end of the period \((t-1,t]\) and includes the returns resulting from the position \(a_{t-1}\) held through this period minus transaction costs incurred at time \(t\) due to a difference in the new position \(a_{t}\) from the old \(a_{t-1}\). Thus, the agent learns how its actions relate both to the external state of the environment and to its own internal state (its previous position).
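As an illustration, a minimal sketch of this per-period reward in Python, assuming additive price changes and a proportional transaction-cost rate `delta`; the function and parameter names are illustrative rather than taken from the original papers:

```python
def rrl_reward(p_prev, p_now, a_prev, a_now, delta=0.001):
    """Profit from holding position a_{t-1} over (t-1, t], minus a proportional
    cost for changing the position from a_{t-1} to a_t at time t."""
    return a_prev * (p_now - p_prev) - delta * abs(a_now - a_prev)
```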
Moody and Saffell [15] compared their actor-based RRL algorithm to the value-based Q-learning algorithm when applied to financial trading. The algorithms are tested on two real financial time series: the U.S. dollar/British pound foreign exchange pair and the S&P 500 stock index. While both perform better than a buy-and-hold baseline, the RRL algorithm outperforms Q-learning on all tests. The authors argue that actor-based algorithms are better suited to immediate reward environments and may be better able to deal with noisy data and quickly adapt to non-stationary environments. They point out that critic-based RL suffers from the curse of dimensionality and that when extended to function approximation, it sometimes fails to converge even in simple MDPs.
Deng et al. [13] combine Moody's direct reinforcement learning framework with a recurrent neural network to introduce feature learning through deep learning. Another addition is the use of a continuous action space. To constrain actions to the interval \([-1,1]\), the RNN output is passed through a \(\tanh\) function. Jiang et al. [16] present a deterministic policy gradient algorithm that trades a portfolio of multiple financial instruments. The policy network is modeled using CNNs and LSTMs, taking each period's closing, lowest, and highest prices as input. The DNNs are trained on randomly sampled mini-batches of experience. These methods account for transaction costs but not risk. Zhang et al. [17] present a deep RL framework for a risk-averse agent trading a portfolio of instruments using both CNNs and LSTMs. Jin and El-Saawy [18] suggest that adding a risk-term to the reward function that penalizes the agent for volatility produces a higher Sharpe ratio than optimizing for the Sharpe ratio directly. Zhang et al. [17] apply a similar risk-term penalty to the reward function.
### Feature engineering
Any forecast needs some predictor data, or features, to make predictions. While ML forecasting is a science, feature engineering is an art and arguably the most crucial part of the ML process. Feature engineering and selection for financial forecasting are only limited by imagination. Features range from traditional technical indicators (e.g., Moving Average Convergence Divergence, Relative Strength Index) [26] to more modern deep learning-based techniques like analyzing social-media sentiment of companies using Natural Language Processing [27] or using CNNs on satellite images along with weather data to predict cotton yields [28]. Research in feature engineering and selection is exciting and potentially fruitful but beyond this thesis's scope. The most reliable predictor of future prices of a financial instrument tends to be its past price, at least in the short term [10]. Therefore, in this thesis, higher-order features are not manually extracted. Instead, only the price series are analyzed.
### Sub-sampling schemes
Separating high- and low-frequency trading can be helpful, as they present unique challenges. High-frequency trading (HFT) focuses on reducing software and hardware latency, which may include building a $300 million fiber-optic cable to reduce transmission time by four milliseconds between exchanges to gain a competitive advantage [12]3. This type of trading has little resemblance to the low-frequency trading examined in this thesis, which operates on timescales of minutes or hours rather than milliseconds.
Footnote 3: Turns out they forgot that light travels about 30% slower in glass than in air, and they lost their competitive advantage to simple line-of-sight microwave networks [12].
Technical traders believe that the prices of financial instruments reflect all relevant information [13]. From this perspective, the market's complete order history represents the financial market's state. This state representation would scale poorly, with computational and memory requirements growing linearly with time. Consequently, sub-sampling schemes for periodic feature extraction are almost universally employed. While sampling information at fixed intervals is straightforward, there may be more effective methods. As exchange activity varies throughout the day, sampling at fixed intervals may lead to oversampling during low-activity periods and undersampling during high-activity periods. In addition, time-sampled series often exhibit poor statistical properties, such as non-normal returns, autocorrelation, and heteroskedasticity [14].
The normality of returns assumption underpins several mathematical finance models, e.g., Modern Portfolio Theory [15], and the Sharpe-ratio [16]. The empirically observed distribution of returns, however, is too peaked and has fatter tails than samples drawn from Gaussian populations [15]4. Mandelbrot showed in 1963 [15] that a Levy alpha-stable distribution with
infinite variance can approximate returns over fixed periods. In 1967, Mandelbrot and Taylor [14] argued that returns over a fixed number of transactions may be close to Independent and Identically Distributed (IID) Gaussian. Several empirical studies have since confirmed this [11, 1]. Clark [11] discovered that sampling by volume instead of transactions exhibits better statistical properties, i.e., closer to IID Gaussian distribution. Sampling by volume instead of ticks has intuitive appeal. While tick bars count one transaction of \(n\) contracts as one bar, \(n\) transactions of one contract count as \(n\) bars. Sampling according to transaction volume might lead to significant sampling frequency variations for volatile securities. When the price is high, the volume will be lower, and therefore the number of observations will be lower, and vice versa, even though the same value might be transacted. Therefore, sampling by the monetary value transacted, also called dollar bars, may exhibit even better statistical properties [10]. Furthermore, for equities, sampling by monetary value exchanged makes an algorithm more robust against corporate actions like stock splits, reverse splits, stock offerings, and buybacks. To maintain a suitable sampling frequency, the sampling threshold may need to be adjusted if the total market size changes significantly.
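To make the idea concrete, a minimal sketch of dollar-bar sampling from a stream of (price, volume) ticks; the threshold value and the open/high/low/close summary are illustrative choices, not prescriptions from the literature:

```python
def dollar_bars(ticks, threshold=1_000_000.0):
    """Group ticks into bars, each containing roughly `threshold` in traded value.
    Each tick is a (price, volume) pair; each bar is summarized as OHLC."""
    bars, value, prices = [], 0.0, []
    for price, volume in ticks:
        prices.append(price)
        value += price * volume
        if value >= threshold:
            bars.append({"open": prices[0], "high": max(prices),
                         "low": min(prices), "close": prices[-1]})
            value, prices = 0.0, []
    return bars
```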
Although periodic feature extraction reduces the number of observations that must be processed, it scales linearly in computation and memory requirements per observation. A history cut-off is often employed to represent the state by only the \(n\) most recent observations to tackle this problem. Representing the state of a partially observable MDP by the \(n\) most recent observations is a common technique used in many reinforcement learning applications. Mnih et al. [12] used 4 stacked observations as input to the DQN agent that achieved superhuman performance on Atari games to capture the trajectory of moving objects on the screen. The state of financial markets is also usually approximated by stacking past observations [1, 2, 13].
### Backtesting
Assessing a machine learning model involves estimating its generalization error on new data. The most widely used method for estimating generalization error is Cross-Validation (CV), which assumes that observations are IID and drawn from a shared underlying data-generating distribution. However, the price of a financial instrument is a nonstationary time series with an apparent temporal correlation. Conventional cross-validation ignores this temporal component and is thus unsuitable for assessing a time series forecasting model. Instead, backtesting, a form of cross-validation for time series, is used. Backtesting is a historical simulation of how the model would have performed had it been run over a past period. The purpose of backtesting is the same as for cross-validation: to estimate the generalization error of an ML algorithm.
To better understand backtesting, it is helpful to consider an algorithmic trading agent's objective and how it operates to achieve it. The algorithmic trading process involves the agent receiving information and, based on that information, executing trades at discrete time steps. These trades are intended
to achieve a specific objective set by the stakeholder, which, in the philosophy of modern portfolio theory, is maximizing risk-adjusted returns. Thus, assessing an algorithmic trading agent under the philosophy of modern portfolio theory entails estimating the risk-adjusted returns resulting from the agent's actions. However, when testing an algorithmic trading agent, it cannot access data ahead of the forecasting period, as that would constitute information leakage to the agent. For this reason, conventional cross-validation fails in algorithmic trading.
The most precise way to assess the performance of an algorithmic trading agent is to deploy it to the market, let it trade with the intended amount of capital, and observe its performance. However, this task would require considerable time since low-frequency trading algorithms are typically assessed over long periods, often several years5. Additionally, any small error would likely result in devastating losses, making algorithmic trading projects economically unfeasible.
Footnote 5: There are a couple of reasons for this; first, there are, on average, 252 trading days per year. Low-frequency trading algorithms typically make fewer than ten trades per day. In order to obtain sufficient test samples, the agent must trade for several years. Second, testing the algorithmic trading agent under various market conditions is crucial. A successful model in particular market conditions may be biased towards those conditions and fail to generalize to other market conditions.
Figure 2.1: Time series cross-validation (backtesting) compared to standard cross-validation from [1].

Backtesting is an alternative to this expensive and time-consuming process where performance on historical simulations functions as a proxy for generalization error. Backtesting involves a series of tests that progress sequentially through time, where every test set consists of a single observation. At test \(n\), the model trains on the training set consisting of observations prior to the observation in the test set (\(i<n\)). The forecast is, therefore, not based on future observations, and there is no leakage from the test set to the training set. Then the backtest progresses to the subsequent observation \(n+1\), where the training set increases6 to include observations \(i<n+1\). The backtest progresses until there are no test sets left. The backtesting process is formalized in algorithm 1.
Footnote 6: The training set can also be a fixed-size FIFO queue (a sliding window).
```
Train the model on the first \(k\) observations (of \(T\) total observations).
for \(i=0,1,...,T-k\) do
    Select observation \(k+i\) as the test set.
    Register trade \(a_{k+i}\).
    Train the model using observations at times \(t\leq k+i\).
end for
Measure performance using registered trades \(a_{k},a_{k+1},...,a_{T}\) and the corresponding prices at times \(k,k+1,...,T\).
```
**Algorithm 1** Backtesting
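A minimal Python sketch of this walk-forward procedure; the `model` interface (`fit`, `act`) and the simple additive profit-and-loss measure are placeholders for whatever agent and performance metric are actually being assessed:

```python
def backtest(model, observations, prices, k):
    """Walk-forward backtest: the model only ever sees observations up to the
    point at which it trades, and is refit after every registered trade."""
    trades = []
    model.fit(observations[:k])                        # initial training window
    for i in range(len(observations) - k):
        trades.append(model.act(observations[k + i]))  # register trade a_{k+i}
        model.fit(observations[:k + i + 1])            # refit on data up to k+i
    # naive performance measure: additive P&L of holding each trade one period
    pnl = sum(a * (prices[t + 1] - prices[t])
              for t, a in zip(range(k, len(prices) - 1), trades))
    return trades, pnl
```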
When conducting historical simulations, knowing what information was available during the studied period is critical. Agents should not have access to data beyond the point at which they are located in the backtest, in order to avoid lookahead bias. Lookahead bias is the research error of inadvertently using future data, a "feature" that real-time production trading by definition does not offer. Data used in forecasting must be stored point-in-time (PIT), which indicates when the information was made available. An incorrectly labeled dataset can lead to lookahead bias.
Backtesting is a flawed procedure as it suffers from lookahead bias by design. Having experienced the hold-out test sample, the researcher has insight into what made markets rise and fall, a luxury not available before the fact. Thus, only live trading can be considered genuinely out-of-sample. Another form of lookahead bias is repeated and excessive backtest optimization leading to information leakage from the test to the training set. A machine learning model will enthusiastically grab any lookahead but fail to generalize to live trading. Furthermore, the backtests performed in this thesis rely on assumptions of zero market impact, zero slippage, fractional trading, and sufficient liquidity to execute any trade instantaneously at the quoted price. These assumptions do not reflect actual market conditions and will lead to unrealistically high performance in the backtest.
Backtesting should emulate the scientific method, where a hypothesis is developed, and empirical research is conducted to find evidence of inconsistencies with the hypothesis. It should be distinct from a research tool for discovering predictive signals. It should only be conducted after research has been done. A backtest is not an experiment but a simulation to see if the model behaves
as expected [1]. Random historical patterns might exhibit excellent performance in a backtest. However, it should be viewed cautiously if no ex-ante logical foundation exists to explain the performance [1].
## Deep learning
An intelligent agent requires some means of modeling the dynamics of the system in which it operates. Modeling financial markets is complicated due to low signal-to-noise ratios and non-stationary dynamics. The dynamics are highly nonlinear; thus, several traditional statistical modeling approaches cannot capture the system's complexity. Moreover, reinforcement learning requires parameterized function approximators, rendering nonparametric learners, e.g., support vector machines and random forests, unsuitable. Additionally, parametric learners are generally preferred when the predictor data is well-defined[1], such as when forecasting security prices using historical data. In terms of nonlinear parametric learners, artificial neural networks comprise the most widely used class. Research suggests that deep neural networks, such as LSTMs and CNNs, effectively model financial data [13, 14, 15]. Therefore, the algorithmic trading agent proposed in this thesis will utilize deep neural networks to model commodity markets.
This chapter introduces the fundamental concepts of deep learning relevant to this thesis, starting with some foundational concepts related to machine learning 3.1 and supervised learning 3.2. Next, section 3.3 covers artificial neural networks, central network topologies, and how they are initialized and optimized to achieve satisfactory results. The concepts presented in this chapter are presented in the context of supervised learning but will be extended to the reinforcement learning context in the next chapter (4).
### Machine learning
Machine Learning (ML) studies how computers can automatically _learn_ from experience without being explicitly programmed. A general and comprehensive introduction to machine learning can be found in "Elements of Statistical Learning" by Trevor Hastie, Robert Tibshirani, and Jerome Friedman [12]. Tom Mitchell defined the general learning problem as "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance on T, as measured by P, improves with experience E." [16]. In essence, ML algorithms extract generalizable predictive patterns from data from some, usually unknown, probability distribution to build a model about the space. It is an optimization problem where performance improves through leveraging data. Generalizability relates to a model's predictive performance on independent test data and is a crucial aspect of ML. Models should be capable of transferring learned patterns to new, previously unobserved samples while maintaining comparable performance. ML is closely related to statistics in transforming raw data into knowledge. However, while statistical models are designed for inference, ML models are designed for prediction. There are three primary ML paradigms; _supervised_ learning, _unsupervised_ learning, and _reinforcement_ learning.
#### 3.1.1 No-free-lunch theorem
The _no-free-lunch_ theorem states that there exists no single universally superior ML learning algorithm that applies to all possible datasets. Fortunately, this does not mean ML research is futile. Instead, domain-specific knowledge is required to design successful ML models. The no-free-lunch theorem results only hold when averaged over all possible data-generating distributions. If the types of data-generating distributions are restricted to classes with certain similarities, some ML algorithms perform better on average than others. Instead of trying to develop a universally superior machine learning algorithm, the focus of ML research should be on what ML algorithms perform well on specific data-generating distributions.
#### 3.1.2 The curse of dimensionality
The _Hughes phenomenon_ states that, for a fixed number of training examples, as the number of features increases, the average predictive power for a classifier will increase before it starts deteriorating. Bellman termed this phenomenon _the curse of dimensionality_, which frequently manifests itself in machine learning [11]. One manifestation of the curse is that the sampling density is proportional to \(N^{1/p}\), where \(p\) is the dimension of the input space and \(N\) is the sample size. If \(N_{1}=100\) represents a dense sample for \(p=1\), then \(N_{10}=100^{10}\) is the required sample size for the same sampling density with \(p=10\) inputs. Therefore, in high dimensions, the training samples sparsely populate the input space [10]. In Euclidean spaces, a non-negative term is added to the distance between points with each new dimension. Thus, generalizing from training samples becomes more complex, as it is challenging to say something about a space without relevant training examples.
### Supervised learning
Supervised Learning (SL) is the machine learning paradigm where a labeled training set of \(N\in\mathbb{N}_{+}\) observations \(\tau=\{x_{i},y_{i}\}_{i=1}^{N}\) is used to learn a functional dependence \(\mathbf{y}=\hat{f}(\mathbf{x})\) that can predict \(\mathbf{y}\) from a previously unobserved \(\mathbf{x}\). Supervised learning includes regression tasks with numerical targets and classification tasks with categorical targets. A supervised learning algorithm adjusts the input/output relationship of \(\hat{f}\) in response to the prediction error \(y_{i}-\hat{f}(x_{i})\) relative to the target. The hypothesis is that if the training set is representative of the population, the model will generalize to previously unseen examples.
#### 3.2.1 Function approximation
Function approximation, or function estimation, is an instance of supervised learning that concerns selecting a function among a well-defined class that underlies the predictive relationship between the input vector \(\mathbf{x}\) and output variable \(\mathbf{y}\). In most cases, \(\mathbf{x}\in\mathbb{R}^{d}\), where \(d\in\mathbb{N}_{+}\), and \(\mathbf{y}\in\mathbb{R}\). Function approximation relies on the assumption that there exists a function \(f(\mathbf{x})\) that describes
the approximate relationship between \((\mathbf{x},\mathbf{y})\). This relationship can be defined as
\[\mathbf{y}=f(\mathbf{x})+\epsilon \tag{3.1}\]
where \(\epsilon\) is some irreducible error that is independent of \(\mathbf{x}\) where \(\mathbb{E}[\epsilon]=0\) and \(Var(\epsilon)=\sigma_{\epsilon}^{2}\). All departures from a deterministic relationship between \((\mathbf{x},\mathbf{y})\) are captured via the error \(\epsilon\). The objective of function approximation is to approximate the function \(f\) with a model \(\hat{f}\). In reality, this means finding the optimal model parameters \(\theta\). Ordinary least squares can estimate linear models' parameters \(\theta\).
**Bias-variance tradeoff.** Bias and variance are two sources of error in model estimates. Bias measures the in-sample expected deviation between the model estimate and the target and is defined as
\[Bias^{2}(\hat{f}(\mathbf{x}))=[\mathbb{E}[\hat{f}(\mathbf{x})]-f(\mathbf{x})] ^{2} \tag{3.2}\]
and is a decreasing function of complexity. Variance measures the variability in model estimates and is defined as
\[Var(\hat{f}(\mathbf{x}))=\mathbb{E}[\hat{f}(\mathbf{x})-\mathbb{E}[\hat{f}( \mathbf{x})]]^{2} \tag{3.3}\]
and is an increasing function of complexity. The out-of-sample mean square error for model estimates is defined as
\[MSE=Bias^{2}(\hat{f})+Var(\hat{f})+Var(\epsilon) \tag{3.4}\]
where the last term \(Var(\epsilon)\) is the irreducible noise error due to a target component \(\epsilon\) not predictable by \(\mathbf{x}\). The bias can be made arbitrarily small using a more complex model; however, this will increase the model variance, or generalization error, when switching from in-sample to out-of-sample. This is known as the _bias-variance tradeoff_.
**Overfitting.** A good ML model minimizes the model error, i.e., the training error (bias) and the generalization error (variance). This is achieved at some optimal complexity level, dependent on the data and the model. Increasing model complexity, or capacity, can minimize training error. However, such a model is unlikely to generalize well. Therefore, the difference between training error and generalization error will be high. This is known as _overfitting_ and happens when the complexity of the ML model exceeds that of the underlying problem. Conversely, _underfitting_ is when the training error is high because the model's complexity is lower than that of the underlying problem.
To minimize model error, the complexity of the ML model, or its inductive bias, must align with that of the underlying problem. The principle of parsimony, known as _Occam's razor_, states that among competing hypotheses explaining observations equally well, one should pick the simplest one. This heuristic is typically applied to ML model selection by selecting the simplest model from models with comparable performance.
Recent empirical evidence has raised questions about the mathematical foundations of machine learning. Complex models such as deep neural networks have been shown to decrease both training error and generalization error with growing complexity [2]. Furthermore, the generalization error keeps decreasing past the interpolation limit. These surprising results contradict the bias-variance tradeoff that implies that a machine learning model should balance over- and underfitting. Belkin et al. [1] reconciled these conflicting ideas by introducing a "double descent" curve to the bias-variance tradeoff curve. This curve shows generalization performance improving again once model capacity grows beyond the interpolation point.
### Artificial neural networks
An Artificial Neural Network (ANN) is a parametric learner fitting nonlinear models. The network defines a mapping \(h_{\theta}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) where \(n\) is the input dimension, \(m\) is the output dimension, and \(\theta\) are the network weights. A neural network has a graph-like topology. It is a collection of nodes organized in layers like a directed and weighted graph. The nodes of an ANN are typically separated into layers; the input layer, one or more hidden layers, and the output layer. Their dimensions depend on the function being approximated. A multi-layer neural network is called a Deep Neural Network (DNN).
#### 3.3.1 Feedforward neural networks
Figure 3.1: An illustration of the relationship between the capacity of a function approximator and the generalization error from [1].

A Feedforward Network (FFN), or fully-connected network, defines the foundational class of neural networks where the connections are a directed acyclic graph that only allows signals to travel forward in the network. A feedforward network is a mapping \(h_{\theta}\) that is a composition of multivariate functions \(f_{1},f_{2},...,f_{k},g\), where \(k\) is the number of layers in the neural network. It is defined as
\[h_{\theta}=g\circ f_{k}\circ...\circ f_{2}\circ f_{1}(x) \tag{3.5}\]
The functions \(f_{j}\), \(j=1,2,...,k\) represent the network's hidden layers and are composed of multivariate functions. The function at layer \(j\) is defined as
\[f_{j}(x)=\phi_{j}(\theta_{j}x+b_{j}) \tag{3.6}\]
where \(\phi_{j}\) is the activation function and \(b_{j}\) is the bias at layer \(j\). The activation function is used to add nonlinearity to the network. The network's final output layer function \(g\) can be tailored to suit the problem the network is solving, e.g., linear for Gaussian output distribution or Softmax distribution for categorical output distribution.
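A minimal NumPy sketch of the composition in equation 3.5, with ReLU activations in the hidden layers and a linear output layer; the layer sizes and random weights are placeholders:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def ffn_forward(x, layers):
    """layers is a list of (theta, b) pairs; hidden layers apply ReLU,
    the final layer g is linear, as in h = g o f_k o ... o f_1(x)."""
    for theta, b in layers[:-1]:
        x = relu(theta @ x + b)
    theta, b = layers[-1]
    return theta @ x + b

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(16, 8)), np.zeros(16)),   # f_1: 8 -> 16
          (rng.normal(size=(16, 16)), np.zeros(16)),  # f_2: 16 -> 16
          (rng.normal(size=(1, 16)), np.zeros(1))]    # g: 16 -> 1 (linear)
y = ffn_forward(rng.normal(size=8), layers)
```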
#### 3.3.2 Parameter initialization
Neural network learning algorithms are iterative and require some starting point from which to begin. The initial parameters of the networks can affect the speed and level of convergence or even whether the model converges at all. Little is known about weight initialization, a research area that is still in its infancy. Further complicating the issue: initial parameters favorable for optimization might be unfavorable for generalization [1]. Developing heuristics for parameter initialization is, therefore, non-trivial. Randomness and asymmetry between the network units are desirable properties for the initial weights [1]. Weights are usually drawn randomly from a Gaussian or uniform distribution in a neighborhood around zero, while the bias is usually set to some heuristically chosen constant. Larger initial weights will prevent signal loss during forward- and backpropagation. However, too large values can result in exploding values, a problem particularly prevalent in recurrent neural networks. The initial scale of the weights is usually set to something like \(1/\sqrt{m}\) where \(m\) is the number of inputs to the network layer.

Figure 3.2: Feedforward neural network from [FFLb].
Kaiming initialization [10] is a parameter initialization method that takes the type of activation function (e.g., Leaky-ReLU) used to add nonlinearity to the neural network into account. The key idea is that the initialization method should not exponentially reduce or magnify the magnitude of input signals. Therefore, each layer is initialized at separate scales depending on their size. Let \(m_{l}\in\mathbb{N}_{+}\) be the size of the inputs into the layer \(l\in\mathbb{N}_{+}\). Kaiming He et al. recommend initializing weights such that
\[\frac{1}{2}m_{l}\text{Var}[\theta_{l}]=1\]
Which corresponds to an initialization scheme of
\[w_{l}\sim\mathcal{N}(0,2/m_{l})\]
Biases are initialized at 0.
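In code, the scheme amounts to drawing each layer's weights with variance \(2/m_{l}\) and setting the biases to zero, for example:

```python
import numpy as np

def kaiming_init(m_in, m_out, rng=np.random.default_rng()):
    """He initialization: weights ~ N(0, 2/m_in), biases set to 0."""
    weights = rng.normal(loc=0.0, scale=np.sqrt(2.0 / m_in), size=(m_out, m_in))
    biases = np.zeros(m_out)
    return weights, biases
```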
#### 3.3.3 Gradient-based learning
Neural nets are optimized by adjusting their weights \(\theta\) with the help of objective functions. Let \(J(\theta)\) define the differentiable objective function for a neural network, where \(\theta\) are the network weights. The choice of the objective function and whether it should be minimized or maximized depends on the problem being solved. For regression tasks, the objective is usually to minimize some loss function like mean-squared error (MSE)
\[J(\theta)=\frac{1}{n}\sum_{i=1}^{n}\left(h_{\theta}(x_{i})-y_{i}\right)^{2} \tag{3.7}\]
Due to neural nets' nonlinearity, most loss functions are non-convex, meaning it is impossible to find an analytical solution to \(\nabla J(\theta)=0\). Instead, iterative, gradient-based optimization algorithms are used. There are no convergence guarantees, but it often finds a satisfactorily low value of the loss function relatively quickly. Gradient descent-based algorithms adjust the weights \(\theta\) in the direction that minimizes the MSE loss function. The update rule for parameter weights in gradient descent is defined as
\[\theta_{t+1}=\theta_{t}-\alpha\nabla_{\theta}J(\theta_{t}) \tag{3.8}\]
where \(\alpha>0\) is the learning rate and the gradient \(\nabla J(\theta_{t})\) is the partial derivatives of the objective function with respect to each weight. The learning rate defines the rate at which the weights move in the direction suggested by the gradient of the objective function. Gradient-based optimization algorithms, also called first-order optimization algorithms, are the most common optimization algorithms for neural networks [1].
**Stochastic gradient descent.** Optimization algorithms that process the entire training set simultaneously are known as batch learning algorithms. Using the average of the entire training set allows for calculating a more accurate gradient estimate. Batch learning therefore converges to a local minimum faster than online learning. However, batch learning is not suitable for all problems, e.g., problems with massive datasets due to the high computational costs of calculating the full gradient or problems with dynamic probability distributions.
Instead, Stochastic Gradient Descent (SGD) is often used when optimizing neural networks. SGD replaces the gradient in conventional gradient descent with a stochastic approximation. Furthermore, the stochastic approximation is only calculated on a subset of the data. This reduces the computational costs of high-dimensional optimization problems. However, the loss is not guaranteed to decrease when using a stochastic gradient estimate. SGD is often used for problems with continuous streams of new observations rather than a fixed-size training set. The update rule for SGD is similar to the one for GD but replaces the true gradient with a stochastic estimate
\[\theta_{t+1}=\theta_{t}-\alpha_{t}\nabla_{\theta}J^{(j)}(\theta_{t}) \tag{3.9}\]
where \(\nabla_{\theta}J^{(j)}(\theta)\) is the stochastic estimate of the gradient computed from observation \(j\). The total loss is defined as \(J(\theta)=\sum_{j=1}^{N}J^{(j)}(\theta)\) where \(N\in\mathbb{N}\) is the total number of observations. The learning rate at time \(t\) is \(\alpha_{t}>0\). Due to the noise introduced by the SGD gradient estimate, gradually decreasing the learning rate over time is crucial to ensure convergence. Stochastic approximation theory guarantees convergence to a local optimum if the learning rates satisfy the conditions \(\sum_{t}\alpha_{t}=\infty\) and \(\sum_{t}\alpha_{t}^{2}<\infty\). It is common to adjust the learning rate using the following update rule \(\alpha_{t}=(1-\beta)\alpha_{0}+\beta\alpha_{\tau}\), where \(\beta=\dfrac{t}{\tau}\), and the learning rate is kept constant after \(\tau\) iterations, i.e., \(\forall t\geq\tau\), \(\alpha_{t}=\alpha_{\tau}\) [1].
Due to hardware parallelization, simultaneously computing the gradient of \(N\) observations will usually be faster than computing each gradient separately [1]. Neural networks are, therefore, often trained on mini-batches, i.e., sets of more than one but less than all observations. Mini-batch learning is an intermediate approach to fully online learning and batch learning where weights are updated simultaneously after accumulating gradient information over a subset of the total observations. In addition to providing better estimates of the gradient, mini-batches are more computationally efficient than online learning while still allowing training weights to be adjusted periodically during training. Therefore, minibatch learning can be used to learn systems with dynamic probability distributions. Samples of the mini-batches should be independent and drawn randomly. Drawing ordered batches will result in biased estimates, especially for data with high temporal correlation.
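A schematic mini-batch SGD loop following these conventions, with randomly drawn batches and the linear learning-rate decay described above; the gradient function and the data array are placeholders for a concrete loss and dataset:

```python
import numpy as np

def minibatch_sgd(theta, grad_fn, data, alpha0=0.1, alpha_tau=0.001, tau=1000,
                  batch_size=32, steps=5000, rng=np.random.default_rng(0)):
    """Mini-batch SGD with alpha_t = (1 - beta) * alpha0 + beta * alpha_tau, beta = t/tau."""
    for t in range(steps):
        beta = min(t / tau, 1.0)                       # alpha is constant after tau steps
        alpha = (1.0 - beta) * alpha0 + beta * alpha_tau
        batch = data[rng.choice(len(data), size=batch_size, replace=False)]
        theta = theta - alpha * grad_fn(theta, batch)  # stochastic gradient step
    return theta
```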
Due to noisy gradient estimates, stochastic gradient descent and mini-batches of small size will exhibit higher variance than conventional gradient descent during training. The higher variance can be helpful to escape local minima and
find new, better local minima. However, high variance can also lead to problems such as overshooting and oscillation that can cause the model to fail to converge. Several extensions have been made to stochastic gradient descent to circumvent these problems.
**Adaptive gradient algorithm.** The Adaptive Gradient (AdaGrad) algorithm is an extension to stochastic gradient descent introduced in 2011 [1]. It outlines a strategy for adjusting the learning rate to converge more quickly and improve the capability of the optimization algorithm. A per-parameter learning rate allows AdaGrad to improve performance on problems with sparse gradients. Lower learning rates are assigned to parameters with frequently occurring features and higher learning rates to parameters with less frequently occurring features. The AdaGrad update rule is given as
\[\theta_{t+1}=\theta_{t}-\frac{\alpha}{\sqrt{G_{t}+\epsilon}}g_{t} \tag{3.10}\]
where \(g_{t}=\nabla_{\theta}J^{(j)}(\theta_{t})\), and \(G_{t}=\sum_{\tau=1}^{t}g_{\tau}g_{\tau}^{\top}\) is the sum of the outer products of all previous subgradients (in practice, only its diagonal is used). \(\epsilon>0\) is a smoothing term to avoid division by zero. As training proceeds, the accumulated squared gradients in the denominator of the learning rate continue to grow, resulting in a monotonically decreasing learning rate. As a result, the learning rate will eventually become so small that the model cannot acquire new information.
**Root mean square propagation.** Root Mean Square Propagation (RMSProp) is an unpublished extension to SGD developed by Geoffrey Hinton. RMSProp was developed to resolve the problem of AdaGrad's diminishing learning rate. Like AdaGrad, it maintains a per-parameter learning rate. To normalize the gradient, it keeps a moving average of squared gradients. This normalization decreases the learning rate for more significant gradients to avoid the exploding gradient problem and increases it for smaller gradients to avoid the vanishing gradient problem. The RMSProp update rule is given as
\[\theta_{t+1}=\theta_{t}-\frac{\alpha}{\sqrt{E[g^{2}]_{t}+\epsilon}}g_{t} \tag{3.11}\]
where \(E[g^{2}]_{t}=\beta E[g^{2}]_{t-1}+(1-\beta)g_{t}^{2}\) is the exponentially decaying average of squared gradients and \(\beta>0\) is a second learning rate, conventionally set to \(\beta=0.9\).
**Adam.** The Adam optimization algorithm is an extension of stochastic gradient descent that has recently seen wide adoption in deep learning. It was introduced in 2015 [13] and derives its name from adaptive moment estimation. It utilizes the Adaptive Gradient (AdaGrad) Algorithm and Root Mean Square Propagation (RMSProp). Adam only requires first-order gradients and little memory but is computationally efficient and works well with high-dimensional parameter spaces. As with AdaGrad and RMSProp, Adam utilizes independent per-parameter learning rates separately adapted during training. Adam stores
a moving average of gradients \(E[g]_{t}=\beta_{1}E[g]_{t-1}+(1-\beta_{1})g_{t}\) with learning rate \(\beta_{1}>0\). Like RMSProp, Adam also stores a moving average of squared gradients \(E[g^{2}]_{t}=\beta_{2}E[g^{2}]_{t-1}+(1-\beta_{2})g_{t}^{2}\) with learning rate \(\beta_{2}>0\). The Adam update rule is given as
\[\theta_{t+1}=\theta_{t}-\frac{\alpha}{\sqrt{\widehat{E[g^{2}]}_{t}+\epsilon}}\,\widehat{E[g]}_{t} \tag{3.12}\]
where \(\widehat{E[g^{2}]}_{t}=\frac{E[g^{2}]_{t}}{1-\beta_{2}^{t}}\) and \(\widehat{E[g]}_{t}=\frac{E[g]_{t}}{1-\beta_{1}^{t}}\) are the bias-corrected moment estimates. The authors recommend learning rates \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), as well as \(\epsilon=10^{-8}\). Adam has been shown to outperform other optimizers in a wide range of non-convex optimization problems. Researchers at Google [1] recommend the Adam optimization algorithm for SGD optimization in reinforcement learning.
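A minimal sketch of a single Adam parameter update following equation 3.12 with the recommended defaults; `state` holds the two moving averages and the step counter and would be initialized with zeros:

```python
import numpy as np

def adam_step(theta, grad, state, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a parameter vector theta given its gradient."""
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad         # first moment
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2    # second moment
    m_hat = state["m"] / (1 - beta1 ** state["t"])               # bias correction
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return theta - alpha * m_hat / np.sqrt(v_hat + eps)

# usage: state = {"m": np.zeros_like(theta), "v": np.zeros_like(theta), "t": 0}
```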
#### 3.3.4 Backpropagation
Gradient-based optimization requires a method for computing a function's gradient. For neural nets, the gradient of the loss function with respect to the weights of the network \(\nabla_{\theta}J(\theta)\) is usually computed using the backpropagation algorithm (backprop) introduced in 1986 [14]. Backprop calculates the gradient of the loss function with respect to each weight in the network. This is done by iterating backward through the network layers and repeatedly applying the chain rule. The chain rule of calculus is used when calculating derivatives of functions that are compositions of other functions with known derivatives. Let \(f,g:\mathbb{R}\rightarrow\mathbb{R}\) be functions and define \(y=g(x)\) and \(z=f(g(x))=f(y)\). By the chain rule
\[\frac{dz}{dx}=\frac{dz}{dy}\frac{dy}{dx} \tag{3.13}\]
Generalizing further, let \(x\in\mathbb{R}^{m},y\in\mathbb{R}^{n}\), and define mappings \(g:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) and \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\). If \(y=g(x)\) and \(z=f(y)\), then the chain rule is
\[\frac{\partial z}{\partial x_{i}}=\sum_{j}\frac{\partial z}{\partial y_{j}} \frac{\partial y_{j}}{\partial x_{i}} \tag{3.14}\]
which can be written in vector notation as
\[\nabla_{x}z=\left(\frac{\partial y}{\partial x}\right)^{\top}\nabla_{y}z \tag{3.15}\]
where \(\frac{\partial y}{\partial x}\) is the \(n\times m\) Jacobian matrix of \(g\). Backpropagation is often performed on tensors and not vectors. However, backpropagation with tensors is performed similarly by multiplying Jacobians by gradients. Backpropagation with tensors can be performed by flattening a tensor into a vector, performing backprop on the vector, and then reshaping the vector back into a tensor. Let \(X\) and \(Y\) be tensors and \(Y=g(X)\) and \(z=f(Y)\). The chain rule for tensors is
\[\nabla_{X}z=\sum_{j}\left(\nabla_{X}Y_{j}\right)\frac{\partial z}{\partial Y_{j}} \tag{3.16}\]
By recursively applying the chain rule, a scalar's gradient can be expressed for any node in the network that produced it. This is done recursively, starting from the output layer and going back through the layers of the network to avoid storing subexpressions of the gradient or recomputing them several times.
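As a small concrete instance of this recursion, a NumPy sketch of one forward and backward pass through a single hidden ReLU layer with a squared-error loss; the architecture and loss are illustrative only:

```python
import numpy as np

def backprop_one_hidden(x, y, W1, b1, W2, b2):
    """Gradients of J = ||y_hat - y||^2 for one hidden ReLU layer and a linear output."""
    z1 = W1 @ x + b1
    h1 = np.maximum(0.0, z1)            # hidden activations
    y_hat = W2 @ h1 + b2                # network output
    d_out = 2.0 * (y_hat - y)           # dJ/dy_hat
    dW2 = np.outer(d_out, h1)           # chain rule through the output layer
    db2 = d_out
    dh1 = W2.T @ d_out                  # propagate the gradient back to h1
    dz1 = dh1 * (z1 > 0)                # through the ReLU nonlinearity
    dW1 = np.outer(dz1, x)
    db1 = dz1
    return dW1, db1, dW2, db2
```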
#### 3.3.5 Activation function
The activation function \(\phi(\xi)\) adds nonlinearity to a neural net. If the activation function in the hidden layer is linear, then the network is equivalent to a network without hidden layers since linear functions of linear functions are themselves linear. The activation function must be differentiable to compute the gradient. Choosing an appropriate activation function depends on the specific problem. Sigmoid functions \(\sigma\), like the logistic function, are commonly used, as well as other functions such as the hyperbolic tangent function \(\tanh\). The derivative of the logistic function is close to 0 except in a small neighborhood around 0. At each backward step, the backpropagated error term is multiplied by the derivative of the activation function. The gradient will therefore approach 0 and thus produce extremely slow learning. This is known as the _vanishing gradient_ problem. For this reason, the Rectified Linear Unit (ReLU) is the default recommendation for activation function in modern deep neural nets [1]. ReLU is a ramp function defined as \(ReLU(x)=\max\{0,x\}\). The derivative of the ReLU function is defined as
\[ReLU^{\prime}(x)=\begin{cases}0&if\ x<0\\ 1&if\ x>0\end{cases} \tag{3.17}\]
The derivative is undefined for \(x=0\), but it has subdifferential \([0,1]\), and it conventionally takes the value \(ReLU^{\prime}(0)=0\) in practical implementations. Since ReLU is a piecewise linear function, it optimizes well with gradient-based methods.
Figure 3.3: ReLU activation function from [FFLb].

ReLU suffers from what is known as the _dying ReLU_ problem, where a large gradient could cause a node's weights to update such that the node will never output anything but 0. Such nodes will not discriminate against any input and are effectively "dead". This problem can be caused by unfortunate weight initialization or a too-high learning rate. Generalizations of the ReLU function, like the Leaky ReLU (LReLU) activation function, have been proposed to combat the dying ReLU problem [6]. Leaky ReLU allows a small "leak" for negative values proportional to some slope coefficient \(\alpha\), e.g., \(\alpha=0.01\), determined before training. This allows small gradients to travel through inactive nodes. Leaky ReLU will slowly converge even on randomly initialized weights but can also reduce performance in some applications [6].
#### 3.3.6 Regularization
Minimization of generalization error is a central objective in machine learning. The representation capacity of large neural networks, expressed by the universal approximation theorem (3.3.8), comes at the cost of increased overfitting risk. Consequently, a critical question in ML is how to design and train neural networks to achieve the lowest generalization error. Regularization addresses this question. Regularization is a set of techniques designed to reduce generalization error, possibly at the expense of training error.
Regularization of estimators trades increased bias for reduced variance. If effective, it reduces model variance more than it increases bias. _Weight decay_ is used to regularize ML loss functions by adding the squared \(L^{2}\) norm of the parameter weights \(\Omega(\theta)=\frac{1}{2}||\theta||_{2}^{2}\) as a regularization term to the loss function
\[\tilde{J}(\theta)=J(\theta)+\lambda\Omega(\theta) \tag{3.18}\]
where \(\lambda\geq 0\) is a constant weight decay parameter. Increasing \(\lambda\) punishes larger weights harsher. Weight decay creates a tradeoff for the optimization algorithm between minimizing the loss function \(J(\theta)\) and the regularization term \(\Omega(\theta)\).
Figure 3.4: Leaky ReLU activation function from [FFLb].

_Dropout_ [12] is another regularization strategy that reduces the risk of overfitting by randomly eliminating non-output nodes and their connections during training, preventing units from co-adapting too much. Dropout can be considered an ensemble method, where an ensemble of "thinned" sub-networks trains the same underlying base network. It is computationally inexpensive and only requires setting one parameter \(\alpha\in[0,1)\), which is the rate at which nodes are eliminated.
_Early stopping_ is a common and effective implicit regularization technique that addresses how many epochs a model should be trained to achieve the lowest generalization error. The training data is split into training and validation subsets. The model is iteratively trained on the training set, and at predefined intervals in the training cycle, the model is tested on the validation set. The error on the validation set is used as a proxy for the generalization error. If the performance on the validation set improves, a copy of the model parameters is stored. If performance worsens, the learning terminates, and the model parameters are reset to the previous point with the lowest validation set error. Testing too frequently on the validation set risks premature termination. Temporary dips in performance are prevalent for nonlinear models, especially when trained with reinforcement learning algorithms when the agent explores the state and action space. Additionally, frequent testing is computationally expensive. On the other hand, infrequent testing increases the risk of not registering the model parameters near their performance peak. Early stopping is relatively simple but comes at the cost of sacrificing parts of the training set to the validation set.
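A schematic sketch of the procedure; the `train_step` and `validate` callables, the `params` attribute, and the check interval are illustrative interface assumptions:

```python
import copy

def train_with_early_stopping(model, train_step, validate, max_epochs=100, check_every=5):
    """Train, periodically evaluate on the validation set, keep the best parameters,
    and stop as soon as the validation score worsens."""
    best_score = validate(model)
    best_params = copy.deepcopy(model.params)
    for epoch in range(1, max_epochs + 1):
        train_step(model)
        if epoch % check_every == 0:
            score = validate(model)
            if score > best_score:
                best_score, best_params = score, copy.deepcopy(model.params)
            else:
                break                        # performance worsened: terminate
    model.params = best_params               # reset to the best recorded parameters
    return model
```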
#### 3.3.7 Batch normalization
Figure 3.5: An example of the effect of weight decay with parameter \(\lambda\) on a high-dimensional polynomial regression model from [1].

Deep neural networks are sensitive to initial random weights and hyperparameters. When updating the network, all weights are updated using a loss estimate under the false assumption that weights in the prior layers are fixed. In practice, all layers are updated simultaneously. Therefore, the optimization step is constantly chasing a moving target. The distribution of inputs during training is forever changing. This is known as _internal covariate shift_, making the network sensitive to initial weights and slowing down training by requiring lower learning rates.
_Batch normalization_ (batch norm) is a method of adaptive reparametrization used to train deep networks. It was introduced in 2015 [15] to help stabilize and speed up training deep neural networks by reducing internal covariate shift. Batch norm normalizes the output distribution to be more uniform across dimensions by standardizing the activations of each input variable for each mini-batch. Standardization rescales the data to standard Gaussian, i.e., zero-mean unit variance. The following transformation is applied to a mini-batch of activations to standardize it
\[\hat{x}^{(k)}_{norm}=\frac{x^{(k)}-\mathbb{E}[x^{(k)}]}{\sqrt{Var[x^{(k)}]+ \epsilon}} \tag{3.19}\]
where \(\epsilon>0\) is a small number such as \(10^{-8}\) added to the denominator for numerical stability. Normalizing the mean and standard deviation can, however, reduce the expressiveness of the network [1]. Applying a second transformation step to the mini-batch of normalized activations restores the expressive power of the network
\[\tilde{x}^{(k)}=\gamma\hat{x}^{(k)}_{norm}+\beta \tag{3.20}\]
where \(\beta\) and \(\gamma\) are learned parameters that adjust the mean and standard deviation, respectively. This new parameterization is easier to learn with gradient-based methods. Batch normalization is usually inserted after fully connected or convolutional layers. It is conventionally inserted into the layer before activation functions but may also be inserted after. Batchnorm speeds up learning and reduces the strong dependence on initial parameters. Additionally, it can have a regularizing effect and sometimes eliminate the need for dropout.
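A NumPy sketch of the two steps in equations 3.19 and 3.20 applied to a mini-batch of activations of shape (batch, features):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-8):
    """Standardize each feature over the mini-batch, then rescale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_norm = (x - mean) / np.sqrt(var + eps)   # standardization (eq. 3.19)
    return gamma * x_norm + beta               # learned rescaling (eq. 3.20)
```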
#### 3.3.8 Universal approximation theorem
In 1989, Cybenko [11] proved that a feedforward network of arbitrary width with a sigmoidal activation function and a single hidden layer can approximate any continuous function. The theorem asserts that given any \(f\in C([0,1]^{n})\)7, \(\epsilon>0\) and sigmoidal activation function \(\phi\), there is a finite sum of the form
Footnote 7: Continuous function on the \(n\)-dimensional unit cube.
\[\hat{f}(x)=\sum_{i=1}^{N}\alpha_{i}\phi(\theta_{i}^{\top}x+b_{i}) \tag{3.21}\]
where \(\alpha_{i},b_{i}\in\mathbb{R}\) and \(\theta_{i}\in\mathbb{R}^{n}\), for which
\[|\hat{f}(x)-f(x)|<\epsilon \tag{3.22}\]
for all \(x\in[0,1]^{n}\). Hornik [12] later generalized to include all squashing activation functions in what is known as the _universal approximation_ theorem. The
theorem establishes that there are no theoretical constraints on the expressivity of neural networks. However, it does not guarantee that the training algorithm will be able to learn that function, only that it can be learned for an extensive enough network.
#### 3.3.9 Deep neural networks
Although a single-layer network, in theory, can represent any continuous function, it might require the network to be infeasibly large. It may be easier or even required to approximate more complex functions using networks of deep topology [14]. The class of ML algorithms that use neural nets with multiple hidden layers is known as Deep Learning (DL). Interestingly, the universal approximation theorem also applies to networks of bounded width and arbitrary depth. Lu et al. [15] showed that for any Lebesgue-integrable function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) and any \(\epsilon>0\), there exists a fully-connected ReLU network \(A\) with width \((n+4)\), such that the function \(F_{A}\) represented by this network satisfies
\[\int_{\mathbb{R}^{n}}|f(x)-F_{A}(x)|dx<\epsilon \tag{3.23}\]
i.e., any Lebesgue-integrable function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) can be approximated arbitrarily well in the \(L^{1}\) sense by a deep ReLU network with width \(d_{m}\leq n+4\).
Poggio et al.[16] showed that a deep network could have exponentially better approximation properties than a wide shallow network of the same total size. Conversely, a network of deep topology can attain the same expressivity as a larger shallow network. They also show that a deep composition of low-dimensional functions has a theoretical guarantee, which shallow networks do not have, that they can resist the curse of dimensionality for a large class of functions.
Several unique network architectures have been developed for tasks like computer vision, sequential data, and machine translation. As a result, they can significantly outperform larger and more deeply layered feedforward networks. The architecture of neural networks carries an inductive bias, i.e., an a priori algorithmic preference. A neural network's inductive bias must match that of the problem it is solving to generalize well out-of-sample.
#### 3.3.10 Convolutional neural networks
A Convolutional Neural Network (CNN) is a type of neural network specialized in processing data with a known, grid-like topology such as time-series data (1-dimensional) or images (2-dimensional) [17]. Convolutional neural networks have profoundly impacted fields like computer vision [17] and are used in several successful deep RL applications [18, 19, 20]. A CNN is a neural net that applies convolution instead of general matrix multiplication in at least one layer. A convolution is a form of integral transform defined as the integral of the product of two functions after one is reflected about the
y-axis and shifted
\[s(t)=(x*w)(t)=\int x(a)w(t-a)da \tag{3.24}\]
where \(x(t)\in\mathbb{R}\) and \(w(t)\) is a weighting function.
The convolutional layer takes the input \(x\) with its preserved spatial structure. The weights \(w\) are given as filters that always extend the full depth of the input volume but are smaller than the full input size. Convolutional neural nets utilize weight sharing by applying the same filter across the whole input. The filter slides across the input and convolves the filter with the image. It computes the dot product at every spatial location, which makes up the activation map, i.e., the output. This can be done using different filters to produce multiple activation maps. The way the filter slides across the input can be modified. The stride specifies how many pixels the filter moves every step. It is common to zero pad the border if the stride is not compatible with the size of the filter and the input.
After the convolutional layer, a nonlinear activation function is applied to the activation map. Convolutional networks may also include pooling layers after the activation function that reduce the dimension of the data. Pooling can summarize the feature maps to the subsequent layers by discarding redundant or irrelevant information. _Max pooling_ is a pooling operation that reports the maximum output within a patch of the feature map. Increasing the stride of the convolutional kernel also gives a downsampling effect similar to pooling.
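For the one-dimensional (time-series) case, a minimal NumPy sketch of the core operations: sliding a filter over the input, taking dot products, applying ReLU, and max pooling; the filter values are placeholders:

```python
import numpy as np

def conv1d(x, w, stride=1):
    """Valid 1D cross-correlation, the operation used in convolutional layers."""
    out_len = (len(x) - len(w)) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + len(w)], w)
                     for i in range(out_len)])

def max_pool1d(x, size=2):
    """Non-overlapping max pooling over the feature map."""
    return np.array([x[i:i + size].max() for i in range(0, len(x) - size + 1, size)])

x = np.arange(10, dtype=float)                              # toy input series
feature_map = np.maximum(0.0, conv1d(x, np.array([1.0, -1.0, 0.5])))
pooled = max_pool1d(feature_map)
```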
#### 3.3.11 Recurrent neural networks
Figure 3.6: 3D convolutional layer from [FFLc].

A Recurrent Neural Network (RNN) is a type of neural network that allows connections between nodes to create cycles so that outputs from one node affect inputs to another. The recurrent structure enables networks to exhibit temporal dynamic behavior. RNNs scale far better than feedforward networks for longer sequences and are well-suited to processing sequential data. However, they can be cumbersome to train as their recurrent structure precludes parallelization. Furthermore, conventional batch norm is incompatible with RNNs, as the recurrent part of the network is not considered when computing the normalization statistic.
RNNs generate a sequence of hidden states \(h_{t}\). The hidden states enable weight sharing that allows the model to generalize over examples of various lengths. Recurrent neural networks are functions of the previous hidden state \(h_{t-1}\) and the input \(x_{t}\) at time \(t\). The hidden units in a recurrent neural network are often defined as a dynamic system \(h^{(t)}\) driven by an external signal \(x^{(t)}\)
\[h^{(t)}=f(h^{(t-1)},x^{(t)};\theta) \tag{3.25}\]
Hidden states \(h_{t}\) are utilized by RNNs to summarize problem-relevant aspects of the past sequence of inputs up to \(t\) when forecasting future states based on previous states. Since the hidden state is a fixed-length vector, it will be a lossy summary. The forward pass is sequential and cannot be parallelized. Backprop uses the states computed in the forward pass to calculate the gradient. The backprop algorithm used on unrolled RNNs is called _backpropagation through time_ (BPTT). All nodes that contribute to an error should be adjusted. In addition, for an unrolled RNN, nodes far back in the calculations should also be adjusted. _Truncated_ backpropagation through time that only backpropagates for a few backward steps can be used to save computational resources at the cost of introducing bias.
Figure 3.7: Recurrent neural network from [FFLa].

Every time the gradient backpropagates through a vanilla RNN cell, it is multiplied by the transpose of the weights. A sequence of vanilla RNN cells will therefore multiply the gradient with the same factor multiple times. If \(x>1\) then \(\lim_{n\rightarrow\infty}x^{n}=\infty\), and if \(0<x<1\) then \(\lim_{n\rightarrow\infty}x^{n}=0\). If the largest singular value of the weight matrix is \(>1\), the gradient will exponentially increase as it backpropagates through the RNN cells. Conversely, if the largest singular value is \(<1\), the opposite happens, where the gradient will shrink exponentially. For the gradient of RNNs, this will result in either exploding or vanishing gradients. This is why vanilla RNNs trained with gradient-based methods do not perform well, especially when dealing with long-term dependencies. Bengio et al. [3] present theoretical and experimental evidence supporting this conclusion. Exploding gradients lead to large updates that can have a detrimental effect on model performance. The standard solution is to clip the parameter gradients above a certain threshold. Gradient clipping can be done element-wise or by the norm over all parameter gradients. Clipping the gradient norm has an intuitive appeal over elementwise clipping. Since all gradients are normalized jointly with the same scaling factor, the gradient still points in the same direction, which is not necessarily the case for element-wise gradient clipping [1]. Let \(\|\mathbf{g}\|\) be the norm of the gradient \(\mathbf{g}\) and \(v>0\) be the norm threshold. If the norm crosses over the threshold \(\|\mathbf{g}\|>v\), the gradient is clipped to
\[\mathbf{g}\leftarrow\frac{v\,\mathbf{g}}{\|\mathbf{g}\|} \tag{3.26}\]
Gradient clipping solves the exploding gradient problem and can improve performance for reinforcement learning with nonlinear function approximation [1]. For vanishing gradients, however, the whole architecture of the recurrent network needs to be changed. This is currently a hot topic of research [1].
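A one-function sketch of clipping the gradient norm as in equation 3.26:

```python
import numpy as np

def clip_grad_norm(g, v):
    """Rescale g so its norm never exceeds the threshold v; the direction is preserved."""
    norm = np.linalg.norm(g)
    return g * (v / norm) if norm > v else g
```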
**Long short-term memory.** Long Short-Term Memory (LSTM) is a form of gated RNN designed to have better gradient flow properties to solve the problem of vanishing and exploding gradients. LSTMs were introduced in 1997 [10] and are traditionally used in natural language processing [1]. Recently, LSTM networks have been successfully applied to financial time series forecasting [14]. Although new architectures like transformers have impressive natural language processing and computer vision performance, LSTMs are still considered state-of-the-art for time series forecasting.
The LSTM is parameterized by a weight matrix \(W\), which is optimized using gradient-based methods. While vanilla RNNs only have one hidden state, LSTMs maintain two hidden states at every time step. One is \(h_{t}\), similar to the hidden state of vanilla RNNs, and the second is \(c_{t}\), the cell state that gets kept inside the network. The cell state runs through the LSTM cell with only minor linear interactions. LSTMs are composed of a cell and four gates which regulate the flow of information to and from the cell state and hidden state:
* Input gate \(i\); decides which values in the cell state to update
* Forget gate \(f\); decides what to erase from the cell state
* Output gate \(o\); decides how much to output to the hidden state
* Gate gate \(g\); decides how much to write to the cell state
The output from the gates is defined as
\[\begin{bmatrix}i\\ f\\ o\\ g\end{bmatrix}=\begin{bmatrix}\sigma\\ \sigma\\ \sigma\\ \tanh\end{bmatrix}W\begin{bmatrix}h_{t-1}\\ x_{t}\end{bmatrix} \tag{3.27}\]
where \(\sigma\) is the sigmoid activation function. The cell state \(c_{t}\) and hidden state \(h_{t}\) are updated according to the following rules
\[c_{t}=f\odot c_{t-1}+i\odot g \tag{3.28}\]
\[h_{t}=o\odot\tanh\left(c_{t}\right) \tag{3.29}\]
When the gradient flows backward in the LSTM, it backpropagates from \(c_{t}\) to \(c_{t-1}\) with only an element-wise multiplication by the \(f\) gate and no multiplication with the weights. Since the LSTM backpropagates from the last hidden state backward through the cell states, the gradient is only exposed to a single \(\tanh\) nonlinearity and is otherwise relatively unimpeded. Therefore, LSTMs handle long-term dependencies without the problem of exploding or vanishing gradients.
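The cell update can be made concrete with a small NumPy sketch of equations 3.27–3.29, assuming a single weight matrix \(W\) acting on the concatenated \([h_{t-1},x_{t}]\) and no bias term; this is only an illustration of the update rules, not the network architecture used later in this thesis.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x_t, h_prev, c_prev, W):
    """One LSTM step following eqs. 3.27-3.29; W has shape (4*H, H + D)."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x_t])  # pre-activations of the four gates, eq. 3.27
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    o = sigmoid(z[2 * H:3 * H])  # output gate
    g = np.tanh(z[3 * H:4 * H])  # gate gate
    c_t = f * c_prev + i * g     # eq. 3.28
    h_t = o * np.tanh(c_t)       # eq. 3.29
    return h_t, c_t

# toy usage with hidden size H = 3 and input size D = 2
rng = np.random.default_rng(0)
H, D = 3, 2
W = rng.normal(size=(4 * H, H + D))
h, c = np.zeros(H), np.zeros(H)
h, c = lstm_cell(rng.normal(size=D), h, c, W)
```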
Figure 3.8: LSTM cell from [FFLa].
Reinforcement learning
An algorithmic trading agent maps observations of some predictor data to market positions. This mapping is non-trivial, and as noted by Moody et al. [14], accounting for factors such as risk and transaction costs is difficult in a supervised learning setting. Fortunately, reinforcement learning provides a convenient framework for optimizing risk- and transaction-cost-sensitive algorithmic trading agents.
The purpose of this chapter is to introduce the fundamental concepts of reinforcement learning relevant to this thesis. A more general and comprehensive introduction to reinforcement learning can be found in "Reinforcement Learning: An Introduction" by Richard Sutton and Andrew Barto [1]. An overview of deep reinforcement learning may be found in "Deep Reinforcement Learning" by Aske Plaat [11]. This chapter begins by introducing reinforcement learning 4.1 and the Markov decision process framework 4.2, and some foundational reinforcement learning concepts 4.3, 4.4. Section 4.5 discusses how the concepts introduced in the previous chapter (3) can be combined with reinforcement learning to generalize over high-dimensional state spaces. Finally, section 4.6 introduces policy gradient methods, which allow an agent to optimize a parameterized policy directly.
### Introduction
Reinforcement Learning (RL) is the machine learning paradigm that studies how an intelligent agent can learn to make optimal sequential decisions in a time series environment under stochastic or delayed feedback. It is based on the concept of learning optimal behavior to solve complex problems by training in an environment that incorporates the structure of the problem. The agent optimizes a policy that maps states to actions through reinforcement signals from the environment in the form of numerical rewards. The goal of using RL to adjust the parameters of an agent is to maximize the expected reward generated due to the agent's actions. This goal is accomplished through trial and error exploration of the environment. A key challenge of RL is balancing exploring uncharted territory and exploiting current knowledge, known as the _exploration-exploitation_ tradeoff. Although it has been studied for many years, the exploration-exploitation tradeoff remains unsolved. Each action must be tried multiple times in stochastic environments to get a reliable estimate of its expected reward. For environments with non-stationary dynamics, the agent must continuously explore to learn how the distribution changes over time. The agent-environment interaction in RL is often modeled as a Markov decision process.
### Markov decision process
A Markov Decision Process (MDP) is a stochastic control process and a classical formalization of sequential decision-making. An MDP is a tuple \((\mathcal{S},\mathcal{A},p,\mathcal{R},\gamma)\)
where
* \(\mathcal{S}\) is a countable non-empty set of states (state space).
* \(\mathcal{A}\) is a countable non-empty set of actions (action space)
* \(p(s^{\prime}|s,a)=Pr(s_{t+1}=s^{\prime}|s_{t}=s,a_{t}=a)\) is the transition probability matrix.
* \(\mathcal{R}\subset\mathbb{R}\) is the set of all possible rewards.
* \(\gamma\in[0,1]\) is the discount rate.
The agent interacts with the environment at discrete time steps \(t=0,1,2,3,...\), which are not necessarily fixed intervals of real-time. At each step \(t\), the agent receives a representation of the state of the environment \(s_{t}\in\mathcal{S}\), where \(s_{0}\in\mathcal{S}\) is the initial state drawn from some initial state distribution \(p_{0}\in\Delta(\mathcal{S})\). Based on the state \(s_{t}=s\), the agent chooses one of the available actions in the current state \(a_{t}\in A(s)\). After performing the action \(a_{t}\), the agent receives an immediate numerical reward \(r_{t+1}\in\mathcal{R}\) and the subsequent state representation \(s_{t+1}\in\mathcal{S}\). This interaction with a Markov decision process produces a sequence known as a _trajectory_: \(s_{0},a_{0},r_{1},s_{1},a_{1},r_{2},s_{2},a_{2},r_{3},...\). This sequence is finite for episodic tasks (with the termination time usually labeled \(T\)); for continuing tasks, it is infinite.
The dynamics of the system can be completely described by the one-step transition function \(p:\mathcal{S}\times\mathcal{R}\times\mathcal{S}\times\mathcal{A}\to[0,1]\) that is defined as
\[p(s^{\prime},r|s,a)=Pr\{s_{t}=s^{\prime},r_{t}=r|s_{t-1}=s,a_{t-1}=a\} \tag{4.1}\]
for all \(s,s^{\prime}\in\mathcal{S}\), \(r\in\mathcal{R}\), and \(a\in\mathcal{A}(s)\). It defines a probability distribution such that
\[\sum_{s^{\prime}\in\mathcal{S}}\sum_{r\in\mathcal{R}}p(s^{\prime},r|s,a)=1 \tag{4.2}\]
Figure 4.1: Agent-environment interaction from [2].
for all \(s\in\mathcal{S}\), and \(a\in\mathcal{A}(s)\). Note that the one-step transition function depends only on the current state \(s\) and not previous states, i.e., the state has the Markov property. Essentially, MDPs are Markov chains with actions and rewards. The transition probabilities \(p:\mathcal{S}\times\mathcal{S}\times\mathcal{A}\to[0,1]\) are defined through the dynamics function \(p(s^{\prime},r|s,a)\), as
\[p(s^{\prime}|s,a)=Pr(s_{t}=s^{\prime}|s_{t-1}=s,a_{t-1}=a)=\sum_{r\in\mathcal{ R}}p(s^{\prime},r|s,a) \tag{4.3}\]
The reward generating function \(r:\mathcal{S}\times\mathcal{A}\to\mathbb{R}\), is defined through the dynamics function \(p(s^{\prime},r|s,a)\), as
\[r(s,a)=\mathbb{E}[r_{t+1}|s_{t}=s,a_{t}=a]=\sum_{r\in\mathcal{R}}r\sum_{s^{ \prime}\in\mathcal{S}}p(s^{\prime},r|s,a) \tag{4.4}\]
The reward-generating function determines the expected reward from performing an action \(a\) in a state \(s\). In practice, the dynamics of the system \(p(s^{\prime},r|s,a)\) are seldom known a priori but learned through interacting with the environment.
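To illustrate these marginalizations, the following sketch computes \(p(s^{\prime}|s,a)\) and \(r(s,a)\) from a made-up two-state dynamics function stored as a table; the example MDP is purely illustrative.

```python
# Hypothetical dynamics p(s', r | s, a), stored as (s_next, reward, s, a) -> probability.
p = {
    ("s1", 1.0, "s0", "a"): 0.7,
    ("s0", 0.0, "s0", "a"): 0.3,
}

def transition_prob(s_next, s, a):
    """Eq. 4.3: sum the dynamics function over all rewards."""
    return sum(pr for (sn, r, ss, aa), pr in p.items()
               if sn == s_next and ss == s and aa == a)

def expected_reward(s, a):
    """Eq. 4.4: expected immediate reward of taking action a in state s."""
    return sum(r * pr for (sn, r, ss, aa), pr in p.items()
               if ss == s and aa == a)

print(transition_prob("s1", "s0", "a"))  # 0.7
print(expected_reward("s0", "a"))        # 0.7
```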
#### 4.2.1 Infinite Markov decision process
A _finite_ MDP is a Markov decision process with countably finite state space \(|\mathcal{S}|<\infty\), action space \(|\mathcal{A}|<\infty\). Finite MDPs can be described as tabular and solved by dynamic programming algorithms with convergence guarantees if the state and action space dimensions \(|\mathcal{S}\times\mathcal{A}|\) are not too large. Unfortunately, the applicability of these methods is severely limited by the assumption that state-action spaces are countable. These assumptions must be relaxed for MDPs to have significant real-world applications. Fortunately, the same theory for finite MDPs also applies to continuous and countably infinite state-action spaces under function approximation. The system dynamics are then described by a transition probability function \(\mathcal{P}\) instead of a matrix.
#### 4.2.2 Partially observable Markov decision process
Let \(s_{t}\) be the environment state and \(s_{t}^{a}\) be the agent state. A Markov decision process assumes full observability of the environment, i.e., that \(s_{t}=s_{t}^{a}\). A Partially Observable Markov Decision Process (POMDP) relaxes this assumption and allows for optimal decision-making in environments that are only partially observable to the agent. Since they are generalizations of MDPs, all algorithms used for MDPs are compatible with POMDPs. A POMDP is a tuple \((\mathcal{S},\mathcal{A},\mathcal{O},\mathcal{P},\mathcal{R},\mathcal{Z},\gamma)\) that extends a MDP with two additional elements
* \(\mathcal{O}\) is the observation space.
* \(\mathcal{Z}\) is the observation function, \(\mathcal{Z}=Pr(o_{t+1}=o|s_{t+1}=s^{\prime},a_{t}=a)\).
An agent in a POMDP cannot directly observe the state; instead, it receives an observation \(o\in\mathcal{O}\) determined by the observation function \(\mathcal{Z}\). The agent
state approximates the environment state \(s_{t}^{a}\approx s_{t}\). However, a single observation \(o\) is not a Markovian state signal. Direct mapping between observation and action is insufficient for optimal behavior, and a memory of past observations is required. The history of a POMDP is a sequence of actions and observations \(h_{t}=\{o_{1},a_{1},...,o_{t},a_{t}\}\). The agent state can be defined as the history \(s_{t}^{a}=h_{t}\). However, storing and processing the complete history of every action scales linearly with time, both in memory and computation. A more scalable alternative is a stateful sequential model like a recurrent neural network (RNN). In this model, the agent state is represented by the network \(s_{t}^{a}=f_{\theta}(s_{t-1}^{a},o_{t})\).
A state can be split into an agent's internal state and the environment's external state. Anything that cannot be changed arbitrarily by the agent is considered external and, thus, part of the external environment. On the other hand, the internal data structures of the agent that the agent can change are part of the internal environment.
### Rewards
The goal of a reinforcement learning agent is to maximize the expected return \(\mathbb{E}[G_{t}]\), where the return \(G_{t}\) is defined as the sum of rewards
\[G_{t}=r_{t+1}+r_{t+2}+...+r_{T} \tag{4.5}\]
In an episodic setting, where \(t=0,...,T\), this goal is trivial to define as the sequence of rewards is finite. However, some problems, like financial trading, do not naturally break into subsequences and are known as continuing problems. For continuing problems, where \(T=\infty\) and there is no terminal state, it is clear that the sum of rewards \(G_{t}\) could diverge. Discounting was introduced to solve the problem of returns growing to infinity. Discounted returns are defined as
\[G_{t}=\sum_{k=0}^{\infty}\gamma^{k}r_{t+k+1}=r_{t+1}+\gamma G_{t+1} \tag{4.6}\]
where \(\gamma\in[0,1]\) is the discount rate used to scale future rewards. Setting \(\gamma=0\) suggests that the agent is myopic, i.e., only cares about immediate rewards. As long as \(\gamma<1\) and the reward sequence is bounded, the discounted return \(G_{t}\) is finite. Discounting allows reinforcement learning to be used in continuing problems.
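For a finite reward sequence, the recursion \(G_{t}=r_{t+1}+\gamma G_{t+1}\) can be computed backwards in a few lines; the reward values below are arbitrary.

```python
def discounted_returns(rewards, gamma):
    """Compute G_t = r_{t+1} + gamma * G_{t+1} (eq. 4.6) for every step, working backwards."""
    G, returns = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        returns.append(G)
    return list(reversed(returns))

print(discounted_returns([1.0, 1.0, 1.0], gamma=0.9))  # [2.71, 1.9, 1.0]
```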
Reinforcement signals \(r_{t+1}\) from the environment can be immediate or delayed. Games and robot control are typical examples of delayed reward environments, where an action affects not only the immediate reward but also the next state and, through that, all subsequent rewards. An example of delayed reward is when chess players occasionally sacrifice a piece to gain a positional advantage later in the game. Although sacrificing a piece in isolation is poor, it can still be optimal long-term. Consequently, temporal credit assignment is a fundamental challenge in delayed reward environments. AlphaZero [16] surpassed human-level play in chess in just 24 hours, starting from random play, using
reinforcement learning. Interestingly, AlphaZero seems unusually (by human standards) open to material sacrifices for long-term positional compensation, suggesting that the RL algorithm estimates delayed reward better than human players. Throughout this thesis, financial trading is modeled as a stochastic immediate reward environment. This choice is justified in chapter 5. Therefore, the problem reduces to an associative reinforcement learning problem, a specific instance of the full reinforcement learning problem. It requires generalization and trial-and-error exploration but not temporal credit assignment. The methods presented in this chapter will only be those relevant in an immediate reward environment. Unless otherwise stated, the discount rate \(\gamma\), a tradeoff between immediate and delayed rewards, is assumed to be zero, making the agent myopic. As a result, the return \(G_{t}\) in an immediate reward environment is defined as the immediate reward
\[G_{t}=\sum_{k=0}^{\infty}\gamma^{k}r_{t+k+1}=r_{t+1} \tag{4.7}\]
### Value function and policy
A stochastic policy is a mapping from states to a probability distribution over the action space and is defined as
\[\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A}) \tag{4.8}\]
The stochastic policy is a probability measure \(\pi(a|s)=Pr\{a_{t}=a|s_{t}=s\}\), which is the probability that the agent performs an action \(a\), given that the current state is \(s\). Stochastic policies can be advantageous in problems with perceptual aliasing. Furthermore, it handles the exploration-exploitation trade-off without hard coding it. A deterministic policy maps states \(\mathcal{S}\) to actions \(\mathcal{A}\), and is defined as
\[\mu:\mathcal{S}\rightarrow\mathcal{A} \tag{4.9}\]
RL algorithms determine how policies are adjusted through experience, where the goal is for the agent to learn an optimal or near-optimal policy that maximizes returns.
Value-based RL algorithms, like Q-learning and SARSA, estimate a state-value function or an action-value function and extract the policy from it. The state-value function \(V_{\pi}(s)\) is the expected return when starting in state \(s\) and following policy \(\pi\). It is defined \(\forall s\in\mathcal{S}\) as
\[V_{\pi}(s)=\mathbb{E}_{\pi}[G_{t}|s_{t}=s] \tag{4.10}\]
The action-value function \(Q_{\pi}(s,a)\) is the expected return when performing action \(a\) in state \(s\) and then following the policy \(\pi\). It is defined \(\forall s\in\mathcal{S},a\in\mathcal{A}(s)\) as
\[Q_{\pi}(s,a)=\mathbb{E}_{\pi}[G_{t}|s_{t}=s,a_{t}=a] \tag{4.11}\]
An example of a value-based policy is the \(\epsilon\)-greedy policy, defined as
\[\pi(a|s,\epsilon)=\begin{cases}\operatorname*{arg\,max}_{a}Q_{\pi}(s,a)&\text{ with probability }1-\epsilon\\ \text{sample random action }a\sim\mathcal{A}(s)&\text{with probability }\epsilon\end{cases} \tag{4.12}\]
where \(\epsilon\in[0,1]\) is the exploration rate.
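A minimal sketch of the \(\epsilon\)-greedy rule in equation 4.12, over a small discrete action set with made-up action values:

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng):
    """Eq. 4.12: exploit the greedy action with prob. 1 - epsilon, otherwise explore uniformly."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))  # random exploratory action
    return int(np.argmax(q_values))              # greedy action

rng = np.random.default_rng(0)
q = np.array([0.1, 0.5, -0.2])
action = epsilon_greedy(q, epsilon=0.1, rng=rng)
```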
Reinforcement learning algorithms are divided into _on_-policy and _off_-policy algorithms. The same policy that generates the trajectories is being optimized for on-policy algorithms. In contrast, for off-policy algorithms, the policy generating trajectories differs from the one being optimized. For off-policy learning, the exploration can be delegated to an explorative behavioral policy while the agent optimizes a greedy target policy.
### Function approximation
Tabular reinforcement learning methods include model-based methods like dynamic programming as well as model-free methods like Monte-Carlo and temporal-difference learning methods (e.g., Q-learning and SARSA). Unfortunately, tabular methods require discrete state and action spaces, and due to the curse of dimensionality, these spaces must be relatively small. Thus, their applicability to real-world problems is limited. Complex environments like financial trading cannot be represented in discrete states. Instead, feature vectors represent states in environments where the state space is too large to enumerate. As most states will probably never be visited and visiting the same states twice is unlikely, it is necessary to generalize from previous encounters to states with similar characteristics. This is where the concepts of function approximation discussed in the previous chapter (3.2.1) come in. Using function approximation, samples from the desired function, e.g., the value function, are generalized to approximate the entire function.
Value-based reinforcement learning algorithms such as Deep Q-Network (DQN) [13] and Deep Recurrent Q-Network (DRQN) [15] use deep neural networks as function approximators to generalize over a continuous space that is optimized using the Q-learning algorithm. DQN and DRQN achieved superhuman performance in playing Atari games using raw pixels as input into a convolutional neural network that outputs action-value estimates of future returns. These value-based algorithms are still limited by discrete action space and the curse of dimensionality as it has to calculate the Q-value of every single action. In the games DQN and DRQN are tested on, the agent is limited to a small discrete set of actions (between 4 and 18). However, for many applications, a discrete action space is severely limiting. Furthermore, these algorithms use the naive exploration heuristics \(\epsilon\)-greedy, which is not feasible in critical domains. Fortunately, policy gradient methods bypass these problems entirely.
### Policy gradient methods
While value-based reinforcement learning algorithms extract a policy from action-value estimates, Policy Gradient (PG) methods learn a parameterized policy and
optimize it directly. The policy's parameter vector is \(\theta\in\mathbb{R}^{d^{\prime}}\), with the policy defined as
\[\pi_{\theta}(a|s)=Pr\{a_{t}=a|s_{t}=s,\theta_{t}=\theta\} \tag{4.13}\]
A continuous action space is handled by learning the statistics of a probability distribution over the action space. A natural policy parameterization in continuous action spaces is the Gaussian distribution \(a\sim\mathcal{N}(\mu_{\theta}(s),\sigma_{\theta}(s)^{2})\), defined as
\[\pi_{\theta}(a|s)=\frac{1}{\sigma_{\theta}(s)\sqrt{2\pi}}e^{-\frac{(a-\mu_{ \theta}(s))^{2}}{2\sigma_{\theta}(s)^{2}}} \tag{4.14}\]
where \(\mu_{\theta}(s)\in\mathbb{R}\) and \(\sigma_{\theta}(s)\in\mathbb{R}_{+}\) are parametric function approximations of the mean and standard deviation, respectively. The mean decides the space where the agent will favor actions, while the standard deviation decides the degree of exploration. It is important to note that this gives a probability density, not a probability distribution like the softmax distribution.
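A possible PyTorch sketch of this Gaussian parameterization is given below; the hidden layer size, the \(\tanh\)-squashed mean, and the softplus used to keep \(\sigma_{\theta}(s)\) positive are illustrative modeling choices, not prescribed by equation 4.14.

```python
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """pi_theta(a|s) = N(mu_theta(s), sigma_theta(s)^2), eq. 4.14."""
    def __init__(self, state_dim, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.mu_head = nn.Linear(hidden, 1)
        self.sigma_head = nn.Linear(hidden, 1)

    def forward(self, state):
        h = self.body(state)
        mu = torch.tanh(self.mu_head(h))                            # mean in [-1, 1]
        sigma = nn.functional.softplus(self.sigma_head(h)) + 1e-5   # sigma > 0
        return torch.distributions.Normal(mu, sigma)

policy = GaussianPolicy(state_dim=8)
dist = policy(torch.randn(1, 8))
a = dist.sample()            # sampled action
logp = dist.log_prob(a)      # log pi_theta(a|s), used by policy gradient updates
```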
For policy gradient methods in the continuous time setting, the goal of optimizing the policy \(\pi_{\theta}\) is to find the parameters \(\theta\) that maximize the average rate of return per time step [1]. The performance measure \(J\) for the policy \(\pi_{\theta}\) in the continuing setting is defined in terms of the average rate of reward per time step as
\[J(\pi_{\theta}) =\int_{\mathcal{S}}d^{\pi}(s)\int_{\mathcal{A}}r(s,a)\pi_{ \theta}(a|s)dads\] \[=\mathbb{E}_{s\sim d^{\pi},a\sim\pi_{\theta}}[r(s,a)] \tag{4.15}\]
where \(d^{\pi}(s)=\lim_{t\rightarrow\infty}Pr\{s_{t}=s|a_{0:t}\sim\pi_{\theta}\}\) is the steady-state distribution under the policy \(\pi_{\theta}\).
Policy optimization aims to find the parameters \(\theta\) that maximize the performance measure \(J\). Gradient ascent is used as the optimization algorithm for the policy. The policy parameter \(\theta\) is moved in the direction suggested by the gradient of \(J\) to maximize the return, yielding the following gradient ascent update
\[\theta_{t+1}=\theta_{t}+\alpha\widehat{\nabla_{\theta}J(\pi_{\theta_{t}})} \tag{4.16}\]
where \(\alpha\) is the step-size and \(\widehat{\nabla_{\theta}J(\pi_{\theta_{t}})}\) is a stochastic estimate whose expectation approximates the gradient of \(J\) with respect to \(\theta\)[1].
The policy gradient theorem8 for the continuing case provides the following expression for the gradient
Footnote 8: For the full proof see chapter 13.6 in [1]
\[\nabla_{\theta}J(\pi_{\theta}) =\int_{\mathcal{S}}d^{\pi}(s)\int_{\mathcal{A}}Q_{\pi}(s,a)\nabla _{\theta}\pi_{\theta}(a|s)dads\] \[=\mathbb{E}_{s\sim d^{\pi},a\sim\pi_{\theta}}[Q_{\pi}(s,a)\nabla _{\theta}\log\pi_{\theta}(a|s)] \tag{4.17}\]
Even though the steady-state distribution \(d^{\pi}\) depends on the policy parameters \(\theta\), the gradient of the performance measure does not involve the gradient of \(d^{\pi}\)
allowing the agent to simulate paths and update the policy parameter at every step [14].
#### 4.6.1 Reinforce
REINFORCE is an on-policy direct policy optimization algorithm derived using the policy gradient theorem [14]. Since the algorithm is on-policy, the agent encounters states in the proportions specified by the steady-state distribution. Using the policy gradient theorem, the calculation of the policy gradient reduces to a simple expectation. The only problem is estimating the action-value function \(Q_{\pi}(s,a)\). REINFORCE solves this problem by using the sampled return \(G_{t}\) as an unbiased estimate of the action-value function \(Q_{\pi}(s_{t},a_{t})\). Observing that the action-value is equal to the expectation of the sampled return, i.e., \(\mathbb{E}_{\pi}[G_{t}|s_{t},a_{t}]=Q_{\pi}(s_{t},a_{t})\), the following expression for the policy gradient can be defined
\[\nabla_{\theta}J(\pi_{\theta}) =\mathbb{E}_{s\sim d^{\pi},a\sim\pi_{\theta}}[Q_{\pi}(s,a)\nabla_ {\theta}\log\pi_{\theta}(a|s)]\] \[=\mathbb{E}_{s\sim d^{\pi},a\sim\pi_{\theta}}[G_{t}\nabla_{ \theta}\log\pi_{\theta}(a|s)] \tag{4.18}\]
This expression can be sampled on each time step t, and its expectation equals the gradient. The gradient ascent policy parameter update for REINFORCE is defined as
\[\theta_{t+1}=\theta_{t}+\alpha G_{t}\nabla_{\theta}\log\pi_{\theta_{t}}(a_{t }|s_{t}) \tag{4.19}\]
where \(\alpha\) is the step size. The direction of the gradient is in the parameter space that increases the probability of repeating action \(a_{t}\) on visits to \(s_{t}\) in the future the most [14]. The higher the return, the more the agent wants to repeat that action. The update is inversely proportional to the action probability to adjust for different frequencies of visits to states, i.e., some states might be visited often and have an advantage over less visited states.
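A sketch of one such update, reusing the hypothetical GaussianPolicy module sketched above; in the immediate-reward setting adopted later in this thesis, \(G_{t}\) is simply \(r_{t+1}\).

```python
import torch

# assumes `policy` is a module returning a torch.distributions.Normal, as sketched above
optimizer = torch.optim.SGD(policy.parameters(), lr=1e-3)  # learning rate alpha

def reinforce_step(state, action, G):
    """One gradient ascent step: theta <- theta + alpha * G * grad log pi(a|s) (eq. 4.19)."""
    dist = policy(state)
    loss = -(G * dist.log_prob(action)).mean()  # negated because optimizers minimize
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```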
While REINFORCE is unbiased and only requires estimating the policy, it might exhibit high variance due to the high variability of sampled returns (if the trajectory space is large). High variance leads to unstable learning updates and slower convergence. Furthermore, the stochastic policy used to estimate the gradient can be disadvantageous in critical domains such as health care or finance. Thankfully, both these problems can be solved by a class of policy gradient methods called actor-critic methods.
#### 4.6.2 Actor-critic
Policy-based reinforcement learning is effective in high-dimensional and continuous action space, while value-based RL is more sample efficient and more convenient for online learning. Actor-Critic (AC) methods seek to combine the best of both worlds where a policy-based actor chooses actions, and the value-based critic critique those actions. The actor optimizes the policy parameters using stochastic gradient ascent in the direction suggested by the critic. The critic's value function is optimized using stochastic gradient descent to minimize
the loss to the target. This use of a critic introduces bias since the critique is an approximation of the return and not actual observed returns like in actor-based algorithms like REINFORCE. There are numerous actor-critic algorithms like advantage actor-critic (A2C) [2], asynchronous advantage actor-critic (A3C) [1], and proximal policy optimization (PPO) [16], that have exhibited impressive performance in a variety of applications. These methods rely on stochastic policies and computing the advantage function. For critical domains such as finance, a deterministic policy directly optimized by a learned action-value function might be more appropriate. Fortunately, the policy gradient framework can be extended to deterministic policies [13, 14].
The idea behind deterministic actor-critic algorithms is based on Q-learning, where a network \(Q(s,a)\) approximates the return. Q-learning can be extended to high-dimensional state spaces by defining the Q-network as a function approximator \(Q_{\phi}(s,a):\mathcal{S}\times\mathcal{A}\to\mathbb{R}\), parameterized by \(\phi\in\mathbb{R}^{b^{\prime}}\). If the Q-network is optimal (\(Q_{\phi}^{*}\)), finding the optimal action (\(a^{*}\)) in a small discrete action space is trivial; \(a^{*}(s)=\arg\max_{a}Q_{\phi}^{*}(s,a)\). However, the exhaustive computations required for this process are not feasible in high-dimensional or continuous action spaces due to the curse of dimensionality. This problem can be bypassed by learning a deterministic policy \(\mu_{\theta}(s):\mathcal{S}\to\mathcal{A}\), parameterized by \(\theta\in\mathbb{R}^{d^{\prime}}\), as an approximator to \(a(s)\), such that \(\max_{a}Q_{\phi}(s,a)\approx Q_{\phi}(s,\mu(s))\).
**Deterministic policy gradient.** Let \(\mu_{\theta}:\mathcal{S}\to\mathcal{A}\) be the deterministic policy parameterized by \(\theta\in\mathbb{R}^{d^{\prime}}\). The performance measure \(J\) for the deterministic policy \(\mu_{\theta}\) in the continuous time average reward setting is defined as
\[J(\mu_{\theta}) =\int_{\mathcal{S}}d^{\mu}(s)r(s,\mu_{\theta}(s))ds\] \[=\mathbb{E}_{s\sim d^{\mu}}[r(s,\mu_{\theta}(s))] \tag{4.20}\]
Initially, there was a belief that the deterministic policy gradient did not exist; however, it was proven by Silver et al. [13], which provides the following expression for the gradient
\[\nabla_{\theta}J(\mu_{\theta}) =\int_{\mathcal{S}}d^{\mu}(s)\nabla_{\theta}\mu_{\theta}(s)\nabla _{a}Q^{\mu}(s,a)|_{a=\mu_{\theta}(s)}ds\] \[=\mathbb{E}_{s\sim d^{\mu}}[\nabla_{\theta}\mu_{\theta}(s)\nabla _{a}Q^{\mu}(s,a)|_{a=\mu_{\theta}(s)}] \tag{4.21}\]
The deterministic policy gradient theorem holds for both on-policy and off-policy methods. Deterministic policies only require integrating over the state space and not both the state and action space like stochastic policies. The true action-value can be approximated by a parameterized critic, i.e., \(Q_{\phi}(s,a)\approx Q^{\mu}(s,a)\).
**Off-policy learning.** Learning a deterministic policy in continuous action spaces on-policy will generally not ensure sufficient exploration and can lead
to sub-optimal solutions. To solve this problem, the deterministic actor-critic algorithm learns off-policy by introducing an exploration policy \(\mu^{\prime}_{\theta}\) defined as
\[\mu^{\prime}_{\theta}(s)=\mu_{\theta}(s)+\mathcal{W} \tag{4.22}\]
where \(\mathcal{W}\) is sampled noise from a noise-generating function. The exploration policy \(\mu^{\prime}_{\theta}\) explores the environment and generates trajectories that optimize the target policy \(\mu_{\theta}\) and Q-network \(Q_{\phi}\).
**Q-network optimization.** Let \(Q_{\phi}(s,a):\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) be the Q-network parameterized by \(\phi\in\mathbb{R}^{b^{\prime}}\). The Q-network is iteratively updated to fit a target defined by the recursive relationship \(y=r+\gamma\max_{a^{\prime}}Q(s^{\prime},a^{\prime})\) known as the Bellman equation [1]. The Bellman equation reduces to the immediate reward in an immediate reward environment, where \(\gamma=0\). The goal is to find the weights \(\phi\) that minimize the loss (usually MSE) to the target
\[L(Q_{\phi})=\mathbb{E}_{s\sim d^{\mu^{\prime}},a\sim\mu^{\prime},r\sim E}[(Q_{ \phi}(s,a)-y)^{2}] \tag{4.23}\]
where \(d^{\mu^{\prime}}\) is the steady-state distribution under the exploration policy \(\mu^{\prime}_{\theta}\), and \(E\) is the environment. The gradient of the loss function with respect to the Q-network parameter weights \(\phi\) is defined as
\[\nabla_{\phi}L(Q_{\phi})=\mathbb{E}_{s\sim d^{\mu^{\prime}},a\sim\mu^{\prime },r\sim E}[(Q_{\phi}(s,a)-y)\nabla_{\phi}Q_{\phi}(s,a)] \tag{4.24}\]
and is used to calculate the backward pass in the Q-network's stochastic gradient descent optimization algorithm.
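In practice this gradient is obtained automatically by minimizing the squared error; a sketch with a made-up two-layer Q-network follows.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Q_phi(s, a): concatenates state and action and outputs a scalar value estimate."""
    def __init__(self, state_dim, action_dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

q_net = QNetwork(state_dim=8, action_dim=1)
q_opt = torch.optim.SGD(q_net.parameters(), lr=1e-3)

def critic_step(s, a, y):
    """Minimize (Q_phi(s, a) - y)^2 (eq. 4.23); autograd supplies the gradient of eq. 4.24."""
    loss = ((q_net(s, a) - y) ** 2).mean()
    q_opt.zero_grad()
    loss.backward()
    q_opt.step()
```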
**Replay memory.** Learning policies and Q-networks with large nonlinear function approximators is generally considered difficult and unstable and does not come with convergence guarantees. Another challenge of combining deep neural networks with reinforcement learning is that most ML optimization algorithms assume that samples are independent and identically distributed (IID). The IID assumption is rarely valid for RL agents sequentially exploring the state space. Furthermore, minibatch learning is advantageous as it efficiently utilizes hardware optimization. The introduction of replay memory [13] addresses these problems and trains large nonlinear function approximators stably and robustly. A replay memory \(\mathcal{D}=\{\tau_{t-k+1},\tau_{t-k+2},...,\tau_{t}\}\) is a finite cache storing the past \(k\) transitions \(\tau_{t}=(s_{t},a_{t},r_{t})\). A minibatch \(\mathcal{B}\subseteq\mathcal{D}\) of \(|\mathcal{B}|>0\) transitions is randomly sampled from the replay memory and used to update both the policy and Q-network.
Randomly sampled batches are ineffective for training recurrent neural networks, which carry forward hidden states through the mini-batch. Deep Recurrent Q-Network (DRQN) [15] is an extension of DQN for recurrent neural networks. DRQN uses experience replay like DQN; however, the sampled batches are in sequential order. The randomly sampled batch \(\mathcal{B}\subseteq\mathcal{D}\) consists of the transitions \(\mathcal{B}=\{\tau_{i},\tau_{i+1},...,\tau_{i+|\mathcal{B}|-2},\tau_{i+| \mathcal{B}|-1}\}\), where \(i\) is some random starting point for the batch. The RNNs initial hidden state is zeroed at the start of the mini-batch update but then carries forward through the mini-batch.
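A minimal replay memory supporting both sampling schemes might look as follows; the capacity handling and transition format are illustrative.

```python
import random
from collections import deque

class ReplayMemory:
    """Finite cache D of the last k transitions tau_t = (s_t, a_t, r_t)."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def store(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        """Uniformly random mini-batch, breaking temporal correlation (DQN-style)."""
        return random.sample(self.buffer, batch_size)

    def sample_sequence(self, batch_size):
        """Sequential mini-batch from a random starting point (DRQN-style, for stateful RNNs)."""
        start = random.randrange(len(self.buffer) - batch_size + 1)
        return [self.buffer[i] for i in range(start, start + batch_size)]
```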
## Part II Methodology
Problem Setting
In reinforcement learning, the agent learns through interaction with the environment. Thus, developing a model of the environment, in this case, the commodity market, is necessary to optimize an algorithmic trading agent through reinforcement signals. Commodities trading involves sequential decision-making in a stochastic and nonstationary environment to achieve some objective outlined by the stakeholder. This chapter describes a discrete-time Markov decision process that models this environment 9. Neither the strong assumption of countable state-actions nor the assumption of full environment observability can be satisfied. Thus, based on the previously proposed financial markets dynamical system [22, 17, 14], this chapter presents an infinite partially observable MDP for commodities trading.
Footnote 9: Although this thesis focuses on commodities, the model’s general concepts apply to other financial markets.
### Assumptions
Since the model will be tested ex-post by backtesting, described in section 2.9, it is necessary to make a couple of simplifying assumptions about the markets the agent operates in:
1. No slippage, i.e., there is sufficient liquidity in the market to fill any orders placed by the agent, regardless of size, at the quoted price. In other words, someone is always willing to take the opposite side of the agent's trade. This assumption relates to external factors that may affect the price between the time the agent is quoted the price and the time the order is filled10. Footnote 10: In reality, prices may significantly change between receiving a quote and placing an order.
2. No market impact, i.e., the money invested by the agent is not significant enough to move the market. This assumption relates to the agent's own trades' impact on the market. The reasonability of this assumption depends on the depth of the market.
### Time Discretization
Financial trading involves continuously reallocating capital in one or more financial assets. The problem does not naturally break into sub-sequences with terminal states. Therefore, this MDP is in the continuous-time setting. A discretization operation is applied to the continuous timeline to study the reinforcement learning-based algorithmic trading described in this thesis, discretizing the timeline into steps \(t=0,1,2,...\). As described in section 2.8, sampling at fixed time intervals is unsatisfactory in markets where activity varies throughout the day and exhibits undesirable statistical properties like non-normality of returns and heteroskedasticity. Instead of the traditional time-based constant duration \(\Delta t\), the observations are sampled as a function of dollar volume based on the
ideas from Mandelbrot and Taylor [14, 15], and Clark [13] presented in section 2.8. Dollar volume-based sampling provides better statistical properties for the agent and can, without human supervision, adapt to changes in market activity.
In practice, observations are sampled by sequentially summing the product of the volume \(v_{i}\) and price \(p_{i}\) of every trade in the market and then sampling a new observation once this sum breaches a predefined threshold \(\delta>0\) before starting the process over again. Define the sum of the total transacted dollar volume from the past sampled point \(k\) to point \(i\) as
\[\chi_{i}=\sum_{j=k+1}^{i}v_{j}\cdot p_{j} \tag{5.1}\]
where \(i\geq k+1\). Once \(\chi_{i}\) breaches the threshold, i.e., \(\chi_{i}>\delta\), the sub-sampling scheme samples the trade at time \(i\) as a new observation, sets \(k=i\), and resets the sum of dollar volume \(\chi_{i+1}=0\).
Due to the increasing volume in the energy futures markets in recent years, defining an appropriate threshold \(\delta\) is complicated. On the one hand, the purpose of using this sampling scheme is that the sampling frequency will deviate throughout the day and weeks depending on the transacted dollar volume. However, if structural changes in the market significantly alter the transacted dollar volume over long periods, e.g., three months, it would be advantageous for the threshold to adjust to that change. A constant threshold will therefore be unsatisfactory as it would not be reactive to these structural changes over long periods. A more robust alternative is a threshold that adjusts itself without human supervision. Therefore, the threshold \(\delta\) is defined using a simple moving average over the daily dollar volume of the past 90 days, avoiding lookahead bugs. The threshold is tuned using one parameter, the target number of samples per day, defined as \(tgt\in\mathbb{R}_{+}\). The threshold \(\delta\) is defined as
\[\delta=\frac{SMA_{90d}(v\cdot p)}{tgt} \tag{5.2}\]
which is the threshold needed to achieve the target number of samples per day in the past 90 days. The threshold continuously updates as trades occur in the market. There is no guarantee that the threshold will lead to the desired amount of samples per day, as it is computed from historical data. Nonetheless, it does achieve satisfactory results, even in unstable markets.
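A sketch of the resulting sampling loop is given below; a fixed threshold stands in for the moving-average threshold of equation 5.2, and the trade data are made up.

```python
def dollar_volume_sample(trades, delta):
    """Return the indices of trades that close a dollar-volume bar (eq. 5.1).

    `trades` is an iterable of (price, volume) pairs and `delta` the sampling threshold.
    """
    samples, chi = [], 0.0
    for i, (price, volume) in enumerate(trades):
        chi += price * volume  # accumulate transacted dollar volume
        if chi > delta:        # threshold breached: sample this trade
            samples.append(i)
            chi = 0.0          # reset the accumulator for the next bar
    return samples

# toy usage: with delta = 100, a new observation is sampled roughly every $100 traded
trades = [(10.0, 4), (10.5, 3), (11.0, 5), (10.8, 2), (11.2, 6)]
print(dollar_volume_sample(trades, delta=100.0))  # -> [2]
```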
The time discretization scheme presented in this section represents progress in the research area from fixed time-interval-based sampling, providing better statistical properties while being more robust and adaptive to changing market environments.
### State Space
The universe of possible investments is limited to one instrument. The state space of a financial market includes all market participants and their attitudes
toward the financial instrument. Thus, the state space \(\mathcal{S}\) is continuous and partially observable. Representing the environment state \(\mathbf{s}_{t}\) to an algorithmic trading agent is impossible, so it needs to be approximated by an agent state, i.e., \(\mathbf{s}_{t}^{a}\approx\mathbf{s}_{t}\). This thesis adopts the philosophy of technical traders described in section 2.5. It uses past trades, specifically their price and volume, as observations \(\mathbf{o}_{t}\) of the environment. Let \(k\in\mathbb{N}_{+}\) be the number of trades for the instrument during the period \((t-1,t]\). An observation \(\mathbf{o}_{t}\) at time \(t\) is defined as
\[\mathbf{o}_{t}=[\mathbf{p}_{t},\mathbf{v}_{t}] \tag{5.3}\]
where
* \(\mathbf{p}_{t}\in\mathbb{R}^{k}\) are the prices of all \(k\) trades during the period \((t-1,t]\). The opening price is denoted \(p_{t}\).
* \(\mathbf{v}_{t}\in\mathbb{R}^{k}\) are the volumes of all \(k\) trades during the period \((t-1,t]\).
A single observation \(\mathbf{o}_{t}\) is not a Markovian state signal, and the agent state can be defined by the entire history \(\mathbf{h}_{t}\). However, this alternative is not scalable. Section 5.2 introduced the time discretization scheme for this environment, which is a form of sub-sampling. However, the computational and memory requirements still grow linearly with the number of samples, so a history cut-off is also employed. In other words, the agent will only have access to the past \(n\in\mathbb{N}_{+}\) observations \(\mathbf{o}_{t-n+1:t}\). In addition, the recursive mechanism of considering the past action as a part of the internal state of the environment introduced by Moody et al. [14] is adopted to consider transaction costs. The agent state is formed by concatenating the external state consisting of stacking the \(n\) most recent observations with the internal state consisting of the past action \(a_{t-1}\), i.e.,
\[\mathbf{s}_{t}^{a}=\{\mathbf{o}_{t-n+1:t},a_{t-1}\ \} \tag{5.4}\]
The dimension of the agent state vector is \(\dim\left(\mathbf{s}_{t}^{a}\right)=2kn+1\).
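A sketch of how such a state vector can be assembled is shown below; the array layout (prices first, then volumes, then the previous action) is an assumption made only for illustration.

```python
import numpy as np

def build_agent_state(prices, volumes, prev_action, n):
    """Concatenate the n most recent observations with the previous action (eq. 5.4).

    `prices` and `volumes` have shape (n, k): n sampled periods with k trades each.
    """
    external = np.concatenate([prices[-n:].ravel(), volumes[-n:].ravel()])  # length 2kn
    return np.append(external, prev_action)                                 # internal state a_{t-1}

# toy usage: n = 3 periods, k = 2 trades per period -> state dimension 2*2*3 + 1 = 13
p = np.random.rand(3, 2)
v = np.random.rand(3, 2)
state = build_agent_state(p, v, prev_action=0.25, n=3)
print(state.shape)  # (13,)
```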
### Action Space
At every time step \(t\), the agent can buy or sell the instrument on the market. The opening price of a period \(p_{t}\), the price the agent can buy or sell the instrument for at time \(t\), is the last observed price, i.e., the closing price of the previous period \((t-1,t]\). The no slippage assumption from section 5.1 implies that the instrument can be bought or sold in any quantity for the time \(t\) at the opening price of that period \(p_{t}\).
Some trading environments allow the agent to output the trade directly; e.g., \(a_{t}=-5\) corresponds to selling five contracts, or \(a_{t}=+10\) corresponds to purchasing ten contracts. However, despite its intuitive nature, this approach can be problematic because the agent must maintain a continuous record of the number of contracts it holds and the amount of available balance at all times in order to avoid making irrational decisions such as selling contracts it does not own or purchasing more contracts than it can afford. Adding this layer of complexity
complicates the learning process. Instead, a more straightforward approach is to have the agent output its desired position weight. In this case, a trade is not directly outputted but inferred from the difference between the agent's current position and its chosen next position.
At every step \(t\), the agent performs an action \(a_{t}\in[-1,1]\), representing the agent's position weight during the period \((t,t+1]\). The weight represents the type and size of the position the agent has selected, where
* \(a_{t}>0\) indicates a long position, where the agent bets the price will rise from time \(t\) to time \(t+1\). The position is proportional to the size of the weight, where \(a_{t}=1\) indicates that the agent is maximally long.
* \(a_{t}=0\) indicates no position.
* \(a_{t}<0\) indicates a short position, where the agent bets the price will fall. \(a_{t}=-1\) indicates that the agent is maximally short. This thesis assumes that there is no additional cost or restriction on short-selling.
The trading episode starts and ends (if it ends) with no position, i.e., \(a_{0}=a_{T}=0\).
The weight \(a_{t}\) represents a fraction of the total capital available to the agent at any time. For this problem formulation, it is irrelevant if \(a_{t}=1\) represents $1 or $100 million. However, this requires that any fraction of the financial instrument can be bought and sold. E.g., if the agent has $100 to trade and wants to take the position \(a_{t}=0.5\), i.e., a long position worth $50, the price might not be a factor of 50, meaning that the agent would not get the exact position it selected. The fractional trading assumption is less reasonable the smaller the amount of capital available to the agent. On the other hand, the assumptions made in section 5.1 are less reasonable the higher the amount of capital.
### Reward Function
As noted in section 2.6, the goal of an algorithmic trading agent should not be to minimize forecast loss but to maximize returns, as it is more in line with the ultimate goal of the trader. Transaction costs represent a non-trivial expense that must be accounted for to generalize to real-world markets. Moreover, section 2.3 introduced the philosophy of modern portfolio theory, which advocates maximizing risk-adjusted returns. An advantage of reinforcement learning is that the trading agent can be directly optimized to maximize returns while considering transaction costs and risk. This section introduces a reward function sensitive to transaction costs and risk.
The reward \(r_{t}\) is realized at the end of the period \((t-1,t]\) and includes the return of the position \(a_{t-1}\) held during that interval. The objective of financial trading is generally to maximize future returns, or in more vernacular terms; to buy when the price is low and sell when the price is high. The multiplicative
return of a financial instrument at time \(t\) is defined as the relative change in price from time \(t-1\) to \(t\)
\[y_{t}=\frac{p_{t}}{p_{t-1}}-1 \tag{5.5}\]
Multiplicative returns, unlike additive returns, have the advantage that they are insensitive to the size of the capital traded. Logarithmic returns \(\log\left(y_{t}+1\right)\) are typically used in algorithmic trading for their symmetric properties [11, 12, 13]. The gross log return realized at time \(t\) is
\[r_{t}^{gross}=\log\left(y_{t}+1\right)a_{t-1} \tag{5.6}\]
At the end of the period \((t-1,t]\), due to price movements \(y_{t}\) in the market, the weight \(a_{t-1}\) evolve into
\[a_{t}^{\prime}=\frac{a_{t-1}\frac{p_{t}}{p_{t-1}}}{a_{t-1}y_{t}+1} \tag{5.7}\]
where \(a_{t}^{\prime}\in\mathbb{R}\). At the start of the next period \(t\), the agent must rebalance the portfolio from its current weight \(a_{t}^{\prime}\) to its chosen weight \(a_{t}\). As noted in section 2.1, the subsequent trades resulting from this rebalancing are subject to transaction costs. The size of the required rebalancing at time \(t\) is represented by \(||a_{t}-a_{t}^{\prime}||\). The log-return net of transaction costs at time \(t\) is defined as
\[r_{t}^{net}=r_{t}^{gross}-\lambda_{c}||a_{t-1}-a_{t-1}^{\prime}|| \tag{5.8}\]
where \(\lambda_{c}\in[0,1]\) is the transaction cost fraction, which is assumed to be identical for buying and selling.
The log-return net of transaction costs assumes that the trader is risk-neutral, which is rarely true. The Sharpe ratio is the most common measure of risk-adjusted return; however, as noted in section 2.6, directly optimizing the Sharpe ratio might not be optimal. Instead, this thesis adopts the variance over returns [12] as a risk term
\[\sigma^{2}(r_{i}^{net}|i=t-L+1,...,t)=\sigma_{L}^{2}(r_{t}^{net}) \tag{5.9}\]
where \(L\in\mathbb{N}_{+}\) is the lookback window to calculate the variance of returns. In this thesis, the lookback window is \(L=60\). In conclusion, subtracting the risk term defined in equation 5.9 from the net returns defined in equation 5.8 gives the risk-adjusted log-return net of transaction costs \(r_{t}\), defined as
\[r_{t}=r_{t}^{net}-\lambda_{\sigma}\sigma_{L}^{2}(r_{t}^{net}) \tag{5.10}\]
where \(\lambda_{\sigma}\geq 0\) is a risk-sensitivity term that can be considered a trade-off hyperparameter for the stochastic gradient descent optimizer. If \(\lambda_{\sigma}=0\), the agent is risk-neutral. The reinforcement learning agents are optimized using the reward function defined in equation 5.10.
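One step of this reward computation, chaining equations 5.5 to 5.10, can be sketched as follows; the cost and risk parameters are placeholders, and a running list of past net returns stands in for the lookback window.

```python
import numpy as np

def step_reward(p_prev, p_t, a_prev, a_prev_evolved, past_net_returns,
                cost=0.0002, risk=0.1, lookback=60):
    """Risk-adjusted log return net of transaction costs (eqs. 5.5-5.10)."""
    y = p_t / p_prev - 1.0                                  # multiplicative return, eq. 5.5
    r_gross = np.log(y + 1.0) * a_prev                      # gross log return, eq. 5.6
    r_net = r_gross - cost * abs(a_prev - a_prev_evolved)   # net of transaction costs, eq. 5.8
    past_net_returns.append(r_net)
    sigma2 = np.var(past_net_returns[-lookback:])           # variance of net returns, eq. 5.9
    return r_net - risk * sigma2                            # risk-adjusted reward, eq. 5.10

history = []
r = step_reward(p_prev=100.0, p_t=101.0, a_prev=0.5, a_prev_evolved=0.48,
                past_net_returns=history)
```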
Reinforcement learning algorithm
This chapter presents two model-free reinforcement learning algorithms that solve the trading MDP defined in chapter 5. There are three types of reinforcement learning algorithms: critic-based, actor-based, and actor-critic-based. Despite the popularity of critic-based algorithms, such as Q-learning, they are unsuitable in this context due to their inability to handle high-dimensional or continuous action spaces. Actor-based and actor-critic-based methods, known as policy gradient methods (4.6), are appropriate since they can handle continuous action and state spaces. Furthermore, policy gradient methods are suitable for continuing tasks like trading. As both actor-based and actor-critic-based methods have advantages and disadvantages, it remains to be determined which methodology is most appropriate for this problem. Actor-based RL methods like REINFORCE are generally successful in stochastic continuous action spaces and have been applied to both single instrument trading and portfolio optimization [13, 14]. However, actor-based RL suffers high variance in learning and tends to be unstable and inconvenient in online learning. Actor-critic methods like Deep Deterministic Policy Gradient (DDPG) [12] have become popular lately and have been applied to several RL trading and portfolio optimization problems [13, 14]. Deterministic policies can be appropriate for financial trading, and off-policy learning combined with replay memory can be practical for online learning. However, training two neural networks is generally deemed to be unstable. Thus, the selection of a reinforcement learning algorithm is non-trivial. This chapter presents an actor-based algorithm (6.2) and an actor-critic-based algorithm (6.3) for solving the trading MDP.
### Immediate reward environment
The zero market impact assumption in chapter 5.1 implies that the agent's participation in the market will not affect future prices \(\mathbf{p}\). In other words, the zero market impact assumption implies that the agent's actions will not affect the future external state of the environment. However, actions performed at the start of period \(t\) affect the transaction costs paid by the agent at the start of the subsequent period \(t+1\). The reward \(r_{t+1}\) depends on transaction costs incurred at time \(t\), and thus the agent's previous action \(a_{t-1}\) will affect the following action. In this framework, this influence is encapsulated by adopting the recursive mechanism introduced by Moody et al. [15] of considering the past action as a part of the internal state of the environment. Consequently, large position changes are discouraged.
The goal of the policy gradient agent is to find the policy parameters \(\theta\) that maximize the average rate of reward per time step. All rewards are equally important to the final return through commutativity. Since the agent does not affect the subsequent state of the environment, the goal is to maximize the expected immediate reward \(\mathbb{E}[r_{t+1}]\), given by equation 5.10 as the risk-adjusted logarithmic return net of transaction costs. Therefore, the action-value of the action \(a_{t}\) is its immediate
reward \(r_{t+1}\), i.e.,
\[Q(s_{t},a_{t})=r_{t+1} \tag{6.1}\]
\(\forall s_{t}\in\mathcal{S},a_{t}\in\mathcal{A}(s_{t})\). As an immediate reward process, the reward function can be directly optimized by the policy gradient from rewards.
The actor-based direct policy gradient method introduced in section 6.2 optimizes the policy by using the immediate reward directly. In contrast, the actor-critic method introduced in section 6.3 optimizes the policy using critique from a Q-network optimized to minimize the loss to the immediate reward.
### Direct policy gradient
The first actor-based reinforcement learning algorithm is a direct policy gradient method inspired by the REINFORCE algorithm. Instead of computing learned probabilities for each action, the direct policy gradient method stochastically samples actions from a Gaussian distribution. Let \(\pi_{\theta,\epsilon}:\mathcal{S}\rightarrow\Delta(\mathcal{A})\) be the stochastic policy parameterized by the weights \(\theta\in\mathbb{R}^{d^{\prime}}\). The policy is defined as a normal probability density over a real-valued scalar action
\[\pi_{\theta,\epsilon}(a|\mathbf{s})=\frac{1}{\epsilon\sqrt{2\pi}}e^{\left(- \frac{\left(a-\mu_{\theta}(\mathbf{s})\right)^{2}}{2\epsilon^{2}}\right)} \tag{6.2}\]
where the mean is given by a parametric function approximator \(\mu_{\theta}(\mathbf{s}):\mathbb{R}^{|\mathbf{s}|}\rightarrow[-1,1]\) that depends on the state and outputs an independent mean for the Gaussian distribution. The standard deviation is given as an exploration rate \(\epsilon\). The exploration rate \(\epsilon\in\mathbb{R}\) is positive and decays at \(\lambda_{\epsilon}\in[0,1]\) to encourage exploration of the action space in early learning epochs. The rate has a minimum \(\epsilon_{min}\geq 0\) such that \(\epsilon\geq\epsilon_{min}\), \(\forall t\). After each episode, the exploration rate updates according to the following update rule
\[\epsilon\leftarrow\max\left(\lambda_{\epsilon}\epsilon,\epsilon_{min}\right) \tag{6.3}\]
At every step \(t\), the agent samples an action \(a_{t}\sim\pi_{\theta}\) from the policy and clips the action to the interval \([-1,1]\).
The novel idea of using the exploration rate \(\epsilon\) as a controlled, decaying standard deviation of the stochastic policy represents progress in the research area. As \(\epsilon\) approaches \(0\), the policy becomes effectively deterministic around the mean given by the parametric function approximation \(\mu_{\theta}\), which is advantageous in critical domains such as financial trading. However, because the policy remains stochastic, the sampling required for the REINFORCE update is still available, blending the best of both worlds for an algorithmic trading agent in an immediate reward environment.
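The sampling and decay mechanism can be sketched in a few lines, with the mean \(\mu_{\theta}(s)\) left abstract; the numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_action(mu, epsilon):
    """Draw a ~ N(mu_theta(s), epsilon^2) (eq. 6.2) and clip to the valid position range [-1, 1]."""
    return float(np.clip(rng.normal(mu, epsilon), -1.0, 1.0))

def decay_exploration(epsilon, decay=0.95, eps_min=0.01):
    """Eq. 6.3: epsilon <- max(lambda_eps * epsilon, eps_min), applied after each episode."""
    return max(decay * epsilon, eps_min)

eps = 0.5
a = sample_action(mu=0.2, epsilon=eps)  # action for the current step
eps = decay_exploration(eps)            # decay after the episode ends
```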
**Optimization.** As the model should be compatible with pre-trade training and online learning, optimization is defined in an online stochastic batch learning
scheme. Trajectories are divided into mini-batches \([t_{s},t_{e}]\), where \(t_{s}<t_{e}\). The policy's performance measure on a mini-batch is defined as
\[J(\pi_{\theta,\epsilon})_{[t_{s},t_{e}]}=\mathbb{E}_{\pi_{\theta,\epsilon}}\left[ \sum_{t=t_{s}+1}^{t_{e}}r_{t}\right] \tag{6.4}\]
i.e., the expected sum of immediate rewards during the mini-batch \([t_{s},t_{e}]\) when following the policy \(\pi_{\theta,\epsilon}\). Using the policy gradient theorem, the gradient of the performance measure \(J\) with respect to the parameter weights \(\theta\) is defined as
\[\nabla_{\theta}J(\pi_{\theta,\epsilon})_{[t_{s},t_{e}]}=\mathbb{E}_{\pi_{ \theta,\epsilon}}\left[\sum_{t=t_{s}+1}^{t_{e}}r_{t}\nabla_{\theta}\log\pi_{ \theta,\epsilon}(a_{t}|s_{t})\right] \tag{6.5}\]
This expectation is empirically estimated from rollouts under \(\pi_{\theta,\epsilon}\). The parameter weights are updated using a stochastic gradient ascent pass
\[\theta\leftarrow\theta+\alpha\nabla_{\theta}J(\pi_{\theta,\epsilon})_{[t_{s}, t_{e}]} \tag{6.6}\]
**Pseudocode.** The pseudocode for the actor-based algorithm is given in Algorithm 2.
**Algorithm 2** Actor-Based Algorithm for Trading
Input: a differentiable stochastic policy parameterization \(\pi_{\theta,\epsilon}(a|s)\)
Algorithm parameters: learning rate \(\alpha^{\theta}>0\), mini-batch size \(b>0\), initial exploration rate \(\epsilon\geq 0\), exploration decay rate \(\lambda_{\epsilon}\in[0,1]\), exploration minimum \(\epsilon_{min}\geq 0\)
Initialize: empty list \(\mathcal{B}\) of size \(b\)
**repeat**
Receive initial state of the environment \(s_{0}\in\mathcal{S}\)
**repeat**
**for** t = 0,1,...,T-1 **do**
Sample action \(a_{t}\sim\pi_{\theta,\epsilon}(\cdot|s_{t})\)
Execute action \(a_{t}\) in the environment and observe \(r_{t}\) and \(s_{t+1}\)
Store pair of reward \(r_{t}\) and log-probabilities \(\nabla_{\theta}\ln\pi_{\theta,\epsilon}(a_{t}|s_{t})\) in \(\mathcal{B}\)
**if**\(|\mathcal{B}|==b\) or \(s_{t}\) is terminal **then**
Update the policy \(\pi_{\theta,\epsilon}\) by one step of gradient ascent using:
\[\nabla_{\theta}J(\pi_{\theta,\epsilon})\approx\sum_{\mathcal{B}}r_{t}\nabla_ {\theta}\ln\pi_{\theta,\epsilon}(a_{t}|\mathbf{s}_{t})\]
Reset \(\mathcal{B}\) to empty list
**end if**
**end for**
**until** terminal state
Update the exploration rate \(\epsilon=\max\left(\epsilon\lambda_{\epsilon},\epsilon_{min}\right)\)
**until** convergence
The exploration policy perturbs the deterministic action with additive noise, \(\mu^{\prime}_{\theta}(s_{t})=\mu_{\theta}(s_{t})+\epsilon\mathcal{W}\), where \(\mathcal{W}\sim\mathcal{U}_{[-1,1)}\) is noise sampled from a uniform distribution. The exploration parameters \(\epsilon,\epsilon_{min},\lambda_{\epsilon}\) are defined as for the direct policy gradient algorithm in section 6.2. Clipping agents' actions to the interval \([-1,1]\) prevents them from taking larger positions than their available capital.
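As a small illustration, the exploration and clipping logic can be sketched in a few lines of Python; the helper names and the per-episode decay call are illustrative rather than part of the actual implementation.

```python
import numpy as np

def explore_action(mu_s, eps):
    """Perturb the deterministic action mu(s) with noise W ~ U[-1, 1) and clip
    the result to the action space [-1, 1]."""
    w = np.random.uniform(-1.0, 1.0)                   # sampled exploration noise
    return float(np.clip(mu_s + eps * w, -1.0, 1.0))

def decay_eps(eps, lam_eps=0.9, eps_min=0.01):
    """Multiplicative decay of the exploration rate with a lower bound."""
    return max(eps * lam_eps, eps_min)

# usage: start fully exploratory and decay once per episode
eps = 1.0
a = explore_action(mu_s=0.3, eps=eps)   # 0.3 is a hypothetical policy output
eps = decay_eps(eps)
```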
**Optimization.** Both the actor and critic networks are updated using randomly sampled mini-batches \(\mathcal{B}\) from a replay memory \(\mathcal{D}\). The replay memory provides random batches in sequential order for stateful RNNs, and random batches not in sequential order that minimize correlation between samples for non-stateful DNNs. The exploration policy \(\mu^{\prime}_{\theta}\) explores the environment and generates transitions \(\tau\) stored in the replay memory \(\mathcal{D}\).
The objective function \(J\) for the deterministic policy \(\mu_{\theta}\) is defined as
\[J(\mu_{\theta})=\mathbb{E}_{s\sim\mathcal{B}}[Q_{\phi}(s,\mu_{\theta}(s))] \tag{6.8}\]
and its gradient is given as
\[\nabla_{\theta}J(\mu_{\theta})=\mathbb{E}_{s\sim\mathcal{B}}[\nabla_{\theta} \mu_{\theta}(s)\nabla_{a}Q_{\phi}(s,a)|_{a=\mu_{\theta}(s)}] \tag{6.9}\]
Since the environment is an immediate reward environment, the target for the Q-network updates is the immediate reward, i.e., \(y=r\). The MSE is used as the loss function because outliers are of critical importance to the success of the trading agent. The loss function \(L(\phi)\) for the Q-network \(Q_{\phi}\) is defined as
\[L(Q_{\phi})=\mathbb{E}_{s,a,r\sim\mathcal{B}}[(Q_{\phi}(s,a)-r)^{2}] \tag{6.10}\]
and its gradient is given as
\[\nabla_{\phi}L(Q_{\phi})=\mathbb{E}_{s,a,r\sim\mathcal{B}}[(Q_{\phi}(s,a)-r) \nabla_{\phi}Q_{\phi}(s,a)] \tag{6.11}\]
**Pseudocode.** The pseudocode for the deterministic actor-critic algorithm is given in Algorithm 3.
**Algorithm 3** Actor-Critic Algorithm for Trading
Input: a differentiable deterministic policy parameterization \(\mu_{\theta}(s)\)
Input: a differentiable state-action value function parameterization \(Q_{\phi}(s,a)\)
Algorithm parameters: learning rates \(\alpha^{\theta}>0\), \(\alpha^{\phi}>0\), mini-batch size \(b>0\), replay memory size \(d\geq b\), initial exploration rate \(\epsilon\geq 0\), exploration decay rate \(\lambda_{\epsilon}\in[0,1]\), exploration minimum \(\epsilon_{min}\geq 0\)
Initialize empty replay memory cache \(\mathcal{D}\)
**repeat**
Receive initial state of the environment \(s_{0}\in\mathcal{S}\)
**for** t = 1,...,T **do**
Select action \(a_{t}=\mu_{\theta}(s_{t})+\epsilon\mathcal{W}\) from the exploration policy
Execute \(a_{t}\) in the environment and observe \(r_{t}\) and \(s_{t+1}\)
Store transition \(\tau_{t}=(s_{t},a_{t},r_{t})\) in the replay memory \(\mathcal{D}\)
Sample a random mini-batch \(\mathcal{B}\) of \(|\mathcal{B}|\) transitions \(\tau\) from \(\mathcal{D}\)
Update the Q-network by one step of gradient descent using
\[\nabla_{\phi}\frac{1}{|\mathcal{B}|}\sum_{(s,a,r)\in\mathcal{B}}(Q_{\phi}(s,a) -r)^{2}\]
Update the policy by one step of gradient ascent using
\[\nabla_{\theta}\frac{1}{|\mathcal{B}|}\sum_{s\in\mathcal{B}}Q_{\phi}(s,\mu_{ \theta}(s))\]
**end for**
Update the exploration rate \(\epsilon=\max\left(\epsilon\cdot\lambda_{\epsilon},\epsilon_{min}\right)\)
**until** convergence
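To make the two gradient steps in Algorithm 3 concrete, the sketch below performs one critic and one actor update on a sampled mini-batch in PyTorch. The function and argument names are illustrative; it assumes `policy` maps states to actions in \([-1,1]\), `qnet` maps state-action pairs to scalar values, and each optimizer holds only the parameters of its own network.

```python
import torch

def actor_critic_step(policy, qnet, policy_opt, q_opt, batch):
    """One update of Algorithm 3 on a mini-batch of (s, a, r) transitions."""
    s, a, r = batch                            # tensors of shape (B, ...), (B, 1), (B, 1)

    # Critic: regress Q(s, a) onto the observed immediate reward (eq. 6.10).
    q_loss = ((qnet(s, a) - r) ** 2).mean()
    q_opt.zero_grad()
    q_loss.backward()
    q_opt.step()

    # Actor: ascend the critic's value of the policy's own actions (eq. 6.9);
    # minimizing the negative objective is one step of gradient ascent.
    policy_loss = -qnet(s, policy(s)).mean()
    policy_opt.zero_grad()
    policy_loss.backward()                     # gradients reaching qnet are discarded,
    policy_opt.step()                          # since q_opt.zero_grad() clears them next step
    return q_loss.item(), policy_loss.item()
```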
Network topology
The reinforcement learning algorithms introduced in chapter 6 utilize function approximation to generalize over a continuous state and action space. Section 2.5 introduced function approximators for extracting predictive patterns from financial data, where empirical research suggested the superiority of deep learning methods. Thus, the function approximators introduced in this chapter rely on deep learning techniques introduced in chapter 3. In the research presented in section 2.5, the function approximators based on convolutional neural networks (section 3.3.10) and those based on the LSTM (section 3.3.11) consistently performed well. Thus, this section introduces two function approximators based on CNNs and LSTMs, respectively. The sequential information layer, presented in section 7.4, leverages these techniques to extract predictive patterns from the data. Furthermore, the decision-making layer that maps forecasts to market positions, presented in section 7.5, employs the recursive mechanism introduced by Moody et al. [17], enabling the agent to consider transaction costs.
The direct policy gradient algorithm presented in section 6.2 is an actor-based RL algorithm that only uses a parameterized policy network. The deterministic actor-critic algorithm presented in section 6.3 uses a parameterized policy network and a parameterized critic network. This chapter outlines these function approximators, which largely consist of the same components. Section 7.2 describes the policy network, while section 7.3 describes the Q-network. The last section 7.6 describes the optimization and regularization of the networks.
### Network input
The first step is to specify the input into the networks. Section 5.3 defined the agent state \(\mathbf{s}_{t}^{a}\) of the partially observable environment. This section describes the modified version of the agent state \(\mathbf{s}_{t}^{a^{\prime}}\), which both the policy and critic agents receive as input. The modified agent state applies two forms of processing to the network input; the first is extracting the relevant observations from the agent state, which ensures that the network input is of fixed size, and the second is normalizing the network input, which is advantageous for the non-linear function approximators introduced in this chapter. This thesis adopts the philosophy of technical traders of the price reflecting all necessary information, and therefore the past price is used to represent the agent state. The primary reason for selecting the price series alone as the state representation is to examine the ability of a general deep reinforcement learning model to extract patterns from raw, relatively unprocessed data. Although additional data sources could aid the agent in discovering predictive patterns, that is beyond the scope of this thesis.
Adopting the ideas of Jiang et al. [16], the agent state is down-sampled by extracting the three most relevant prices from a period; the closing price, the highest price, and the lowest price. Thus, the price tensor used to represent
the external agent state at time \(t\) is defined as
\[\hat{\mathbf{p}}_{t}=\left[p_{t},p_{t}^{high},p_{t}^{low}\right] \tag{7.1}\]
Normalizing input data for neural networks speeds up learning [1] and is beneficial for reinforcement learning as well [1]. However, normalizing the whole time series ex-ante is a form of lookahead. The normalization scheme can only use data up to time \(\leq t\) for the observation \(\mathbf{p}_{t}\)\(\forall t\). The choice of instrument weights depends on relative log returns rather than absolute price changes. The price tensor \(\hat{\mathbf{p}}_{t}\) is normalized using the log-returns from the previous closing price \(p_{t-1}\). Additionally, adopting the ideas from Zhang et al. [14], the input is further normalized by dividing by a volatility term defined as
\[\sigma^{2}\left(\log\left(\frac{p_{i}}{p_{i-1}}\right)\lvert i=t-L+1,...,t \right)\sqrt{L}=\sigma_{L,t}^{2}\sqrt{L} \tag{7.2}\]
where \(L\in\mathbb{N}_{+}\) is the lookback window used to calculate the volatility of the closing price, which is set to \(L=60\) as in [14]. The normalized price tensor at time \(t\) is thus defined as
\[\bar{\mathbf{p}}_{t}=\log\left(\hat{\mathbf{p}}_{t}\oslash p_{t-1}\right) \oslash\sigma_{L,t}^{2}\sqrt{L} \tag{7.3}\]
As a precaution against outliers in volatile markets, which can be detrimental to the performance of DNNs, the normalized price tensor \(\bar{\mathbf{p}}_{t}\) is clipped to the interval \([-1,1]\).
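The normalization scheme of equations 7.1 to 7.3 can be sketched as follows; the function signature is illustrative, and only data up to time \(t\) is used, so there is no lookahead.

```python
import numpy as np

def normalized_price_tensor(close, high_t, low_t, t, L=60):
    """Normalize one observation: log-returns against the previous close,
    divided by the L-period volatility term, clipped to [-1, 1]. Requires t >= L."""
    p_hat = np.array([close[t], high_t, low_t])               # eq. 7.1
    log_ret = np.log(close[t - L + 1 : t + 1] / close[t - L : t])
    vol = np.var(log_ret) * np.sqrt(L)                        # eq. 7.2
    p_bar = np.log(p_hat / close[t - 1]) / vol                # eq. 7.3
    return np.clip(p_bar, -1.0, 1.0)
```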
Stacking the past \(n\) observations produces the approximated environment state. Thus, the final price tensor is adjusted to contain the \(n\) most recent observations \(\bar{\mathbf{p}}_{t-n+1:t}\in\mathbb{R}^{3\times n}\). The stacked price tensor is considered the external agent state and defined as \(\mathbf{x}_{t}^{S}=\bar{\mathbf{p}}_{t-n+1:t}\). The networks also adopt the recursive mechanism introduced by Moody et al. [15] of considering the past action as a part of the internal environment, allowing the agent to take the effects of transaction costs into account. The instrument weight from the previous period \(a_{t-1}\) is inserted into the final decision-making layer after extracting the sequential features in the sequential information layer. The modified agent state thus approximates the state of the environment
\[\mathbf{s}_{t}^{a^{\prime}}=(\mathbf{x}_{t}^{S},a_{t-1}) \tag{7.4}\]
The policy networks and Q-network receive this modified agent state \(\mathbf{s}_{t}^{a^{\prime}}\) as input. As an action-value function, the Q-network also takes the current action \(a_{t}\) as input.
### Policy network
The deterministic policy (section 6.3) and the mean-generating parametric function approximator in the stochastic policy (section 6.2) are the same function approximator \(\mu_{\theta}:\mathbb{R}^{|\mathcal{S}|}\rightarrow[-1,1]\) parameterized by \(\theta\in\mathbb{R}^{d^{\prime}}\), and will in this chapter be referred to as the policy network. The policy network consists of a sequential
information layer, a decision-making layer, and a \(\tanh\) function. The input to the policy network is the modified agent state \(\mathbf{s}_{t}^{a^{\prime}}\). The external part of the agent state \(\mathbf{x}_{t}^{S}\), i.e., the price tensor of stacked observations, is input into the sequential information layer. The sequential information layer output is concatenated with the previous action \(a_{t-1}\) to produce input into the decision-making layer. The output from the decision-making layer maps to a \(\tanh\) function that produces the action constrained to the action space \([-1,1]\).
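A minimal PyTorch sketch of this structure is given below, assuming an arbitrary sequential information module `seq_layer` that outputs a feature vector of size `feature_dim`; the class is illustrative rather than the exact implementation.

```python
import torch
import torch.nn as nn

class PolicyNetwork(nn.Module):
    """Sequential information layer -> decision-making layer -> tanh."""
    def __init__(self, seq_layer, feature_dim):
        super().__init__()
        self.seq_layer = seq_layer
        # decision-making layer: dot product over features and previous action (eq. 7.7)
        self.decision = nn.Linear(feature_dim + 1, 1, bias=False)

    def forward(self, x_s, a_prev):
        g = self.seq_layer(x_s)                    # sequential features g_t
        x_d = torch.cat([g, a_prev], dim=-1)       # concatenate previous action a_{t-1}
        return torch.tanh(self.decision(x_d))      # action constrained to [-1, 1]
```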
### Q-network
The Q-network \(Q_{\phi}:\mathbb{R}^{|\mathcal{S}|}\times\mathbb{R}^{|\mathcal{A}|}\to\mathbb{R}\) is a function approximator parameterized by \(\phi\in\mathbb{R}^{b^{\prime}}\). It is an action-value function that assigns the value of performing a specific action in a specific state and thus takes two arguments, the modified agent state \(\mathbf{s}_{t}^{a^{\prime}}\) and the action \(a_{t}\). Other than that, there are two differences between the critic and policy networks. Firstly, the Q-network has an additional layer before the sequential information net that concatenates the agent state \(\mathbf{s}_{t}^{a}\) and the current action \(a_{t}\) and maps it through a fully-connected layer into a leaky-ReLU activation function with negative slope \(0.01\) and dropout with probability \(0.2\). The second difference is that the output after the decision-making layer does not map to a \(\tanh\) function since the Q-network outputs action-values, which are not constrained to any specific interval.
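The extra input layer of the Q-network can be sketched as follows; the module fuses a flattened agent state with the current action before the sequential information layer, and the names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class StateActionFusion(nn.Module):
    """FC -> leaky ReLU (slope 0.01) -> dropout (0.2) over the concatenated
    (flattened) agent state and current action a_t."""
    def __init__(self, state_dim, out_dim):
        super().__init__()
        self.fc = nn.Linear(state_dim + 1, out_dim)
        self.act = nn.LeakyReLU(negative_slope=0.01)
        self.drop = nn.Dropout(p=0.2)

    def forward(self, state_flat, action):
        x = torch.cat([state_flat, action], dim=-1)
        return self.drop(self.act(self.fc(x)))
```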
Figure 7.1: Policy network architecture
Figure 7.2: Q-network architecture
### Sequential information layer
In essence, an algorithmic trading agent places bets on the relative price change, or returns, of financial instruments. The agent's success ultimately depends on its ability to predict the future. However, doing so in highly competitive and efficient markets is non-trivial. To remain competitive in continuously evolving markets, the agent must _learn_ to recognize patterns and generate rules based on past experiences. The sequential information layer extracts the sequential features from the input data and is arguably the most integral part of the model. Let \(\mathbf{x}_{t}^{I}\) be the input into the sequential information net (for the policy network \(\mathbf{x}_{t}^{I}=\mathbf{x}_{t}^{S}\)). The sequential information layer is a parametric function approximator that takes the input \(\mathbf{x}_{t}^{I}\) and outputs a feature vector \(\mathbf{g}_{t}\), defined as
\[f^{S}(\mathbf{x}_{t}^{I})=\mathbf{g}_{t} \tag{7.5}\]
The choice of the appropriate function approximator for this task is non-trivial. The inductive bias of the model must align with that of the problem for the model to generalize effectively. Therefore, selecting a model that captures the problem's underlying structure while also being efficient and scalable is imperative. Research on financial time series forecasting found that deep learning models, specifically those based on the CNN and LSTM architecture, consistently outperformed traditional time series forecasting methods such as the ARIMA and GARCH [22, 23, 24, 25]. The universal approximation theorem (3.3.8) establishes that there are no theoretical constraints on feedforward networks'11 expressivity. However, feedforward networks are not as naturally well-suited to processing sequential data as CNNs and LSTMs. Therefore, they may not achieve the same level of performance, even though it is theoretically possible. Additionally, feedforward networks may require significantly more computing power and memory to achieve the same performance as CNNs or LSTMs on sequential data. Transformers were also considered due to their effectiveness in forecasting time series [26]. Transformers employ an encoder-decoder architecture and rely on attention mechanisms to capture long-term dependencies. Thus, they do not require a hidden state, like RNNs, and are relatively easy to parallelize, enabling efficient training on large datasets. A variant called decision transformers [12] has been applied to offline reinforcement learning. However, it is unclear how to apply the transformer in its conventional encoder-decoder topology to online reinforcement learning. Therefore, the transformer is, for the moment, unsuitable for this problem. The _gated recurrent unit_ (GRU) is a newer version of the recurrent neural network that is less computationally complex than the LSTM. However, LSTMs are generally considered superior for forecasting financial data [27].
Footnote 11: Of arbitrary width or height.
This section defines two distinct DNN topologies for the sequential information layer; the first is based on convolutional neural networks, while the second is based on recurrent neural networks, specifically the LSTM. The two sequential information topologies both consist of two hidden layers, which is enough for
the vast majority of problems. Performance is usually not improved by adding additional layers.
#### Convolutional neural network
The CNN-based sequential information layer topology includes two 1-dimensional convolutional layers. In the absence of established heuristics, determining the parameters for a CNN can be challenging. Thus, the parameters chosen for these layers are partly informed by research on CNNs in financial time series forecasting [20] and partly determined through experimentation. The first convolutional layer has kernel size 3 and stride 1 and processes each of the 3 columns in the input \(\mathbf{x}_{t}^{I}\) as separate channels of size \(1\times n\), where \(n\) is the number of stacked observations. It outputs 32 feature maps of size \(1\times(n-2)\). The second convolutional layer has kernel size 3 and stride 1 and outputs 32 feature maps of size \(1\times(n-4)\). Batch norm is used after both convolutional layers on the feature maps to stabilize and speed up learning. The CNN uses the Leaky-ReLU activation function with a negative slope of 0.01 after the batch norm layers to generate the activation maps. Dropout with probability \(p=0.2\) is used between the layers. Max pooling with kernel size 2 and stride 2 is applied after the final convolutional layer to down-sample the output before all activation maps are concatenated into one big activation map.

Figure 7.3: Convolutional sequential information layer architecture
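A PyTorch sketch of this topology (Figure 7.3) with the parameters just described is shown below; it is illustrative rather than the exact implementation, and the final line only demonstrates the output size for \(n=20\) stacked observations.

```python
import torch
import torch.nn as nn

class CNNSequentialLayer(nn.Module):
    """Two 1-D convolutions (kernel 3, stride 1, 32 feature maps), batch norm,
    leaky ReLU (0.01), dropout (0.2), max pooling (kernel 2, stride 2), flatten."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels=3, out_channels=32, kernel_size=3, stride=1),
            nn.BatchNorm1d(32),
            nn.LeakyReLU(0.01),
            nn.Dropout(0.2),
            nn.Conv1d(32, 32, kernel_size=3, stride=1),
            nn.BatchNorm1d(32),
            nn.LeakyReLU(0.01),
            nn.MaxPool1d(kernel_size=2, stride=2),
            nn.Flatten(),                  # concatenate all activation maps
        )

    def forward(self, x):                  # x: (batch, 3, n) stacked price tensor
        return self.net(x)

# for n = 20 the feature vector has 32 * ((20 - 4) // 2) = 256 entries
features = CNNSequentialLayer()(torch.randn(1, 3, 20))
```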
#### Long Short-Term Memory
The second sequential information network topology introduces memory through a recurrent neural network to solve the partially observable MDP. The Long Short-Term Memory (LSTM) network is the go-to solution for environments where memory is required and is well suited to modeling noisy, nonstationary data. The LSTM sequential information net architecture consists of two stacked LSTM layers. Both LSTM layers have 128 units in the hidden state, which was chosen experimentally. Following both LSTM layers, the network employs dropout with dropout probability \(p=0.2\). The LSTM cell contains three sigmoid functions and one hyperbolic tangent function, so inserting an activation function after the LSTM layer is superfluous. Batch norm is incompatible with RNNs, as the recurrent part of the network is not considered when computing the normalization statistic, and is therefore not used.

Figure 7.4: LSTM sequential information layer architecture
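The corresponding PyTorch sketch (Figure 7.4), again illustrative rather than exact, uses the built-in inter-layer dropout of `nn.LSTM` plus an explicit dropout after the last layer, and returns the hidden features of the final time step.

```python
import torch
import torch.nn as nn

class LSTMSequentialLayer(nn.Module):
    """Two stacked LSTM layers with 128 hidden units and dropout of 0.2."""
    def __init__(self, input_size=3, hidden_size=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers=2,
                            batch_first=True, dropout=0.2)
        self.drop = nn.Dropout(0.2)

    def forward(self, x):                 # x: (batch, n, 3) stacked price tensor
        out, _ = self.lstm(x)             # no extra activation: the LSTM cell is gated
        return self.drop(out[:, -1, :])   # features g_t from the final time step

features = LSTMSequentialLayer()(torch.randn(1, 20, 3))   # shape (1, 128)
```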
### Decision-making layer
In the decision-making layer, the previous action \(a_{t-1}\) is concatenated with the features \(\mathbf{g}_{t}\), i.e., the output from the sequential information layer, and mapped to a single output value. The previous action weight \(a_{t-1}\) allows the agent to consider transaction costs when making trading decisions (policy network) or when assigning value to actions in states (Q-network). The mapping is a fully-connected layer from the concatenated features and previous action to a single output value. This output value is the action (or Gaussian mean) for the policy (after mapping it to a tanh function) or the action-value for the Q-network. The input to the decision-making layer is defined as
\[\mathbf{x}_{t}^{D}=(\mathbf{g}_{t},a_{t-1}) \tag{7.6}\]
The decision-making layer is a dot product between a weight vector \(\mathbf{w}^{D}\in\mathbb{R}^{|\mathbf{x}_{t}^{D}|}\) and the input \(\mathbf{x}_{t}^{D}\), defined as
\[f_{D}(\mathbf{x}_{t}^{D})=(\mathbf{w}^{D})^{\top}\mathbf{x}_{t}^{D} \tag{7.7}\]
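In code, the decision-making layer reduces to a single dot product; the sketch below operates on unbatched vectors, and the names are illustrative.

```python
import torch

def decision_layer(g_t, a_prev, w_d):
    """Eq. 7.6-7.7: concatenate the features with the previous action and map them
    to one output value via a dot product with the weight vector w_d."""
    x_d = torch.cat([g_t, a_prev])     # x_t^D = (g_t, a_{t-1}), both 1-D tensors
    return torch.dot(w_d, x_d)         # f_D(x_t^D) = (w^D)^T x_t^D
```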
### Network optimization
The weights \(\theta\) and \(\phi\) of the policy network \(\mu_{\theta}\) and Q-network \(Q_{\phi}\) are initialized using Kaiming initialization, which ensures that the initial weights of the network are not too large and have a small variance. This helps to prevent the network from getting stuck in local minima and allows it to generalize better. Additionally, Kaiming initialization considers the type of activation function used in the network, contrary to conventional initialization schemes. The weight initialization scheme centers the initial output distribution of the networks around 0 with a small standard deviation regardless of the input. The weights are updated using the Adam stochastic gradient descent algorithm on mini-batches, allowing the network to update the weights more efficiently and accurately than other SGD algorithms. The gradient norm for each mini-batch is clipped to 1 to prevent exploding gradients. There are many potential activation functions for neural networks, including the default recommendation, the ReLU. To combat the "dying ReLU problem", the leaky-ReLU activation function is used in the networks. The negative slope, or the "leak", is set to the standard value of 0.01.
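The optimization setup can be sketched as below; the helper names are illustrative, and the weight-decay value anticipates the regularization settings discussed in section 7.6.1.

```python
import torch
import torch.nn as nn

def init_weights(module):
    """Kaiming initialization for linear and convolutional layers, taking the
    leaky-ReLU activation (slope 0.01) into account."""
    if isinstance(module, (nn.Linear, nn.Conv1d)):
        nn.init.kaiming_normal_(module.weight, a=0.01, nonlinearity='leaky_relu')
        if module.bias is not None:
            nn.init.zeros_(module.bias)

def make_optimizer(network, lr):
    """Adam with the weight-decay penalty used for regularization."""
    return torch.optim.Adam(network.parameters(), lr=lr, weight_decay=0.001)

def update(network, optimizer, loss):
    """One mini-batch gradient step with the gradient norm clipped to 1."""
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(network.parameters(), max_norm=1.0)
    optimizer.step()

# usage: network.apply(init_weights); opt = make_optimizer(network, lr=1e-4)
```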
#### 7.6.1 Regularization
Machine learning research generally focuses on problems with complex structures and high signal-to-noise ratios, such as image classification. For these problems, complicated non-linear models like neural nets have demonstrated their effectiveness. However, in a high-noise environment such as financial forecasting, where the R-squared is often of order \(10^{-4}\)[11], anything beyond linear regression poses a significant overfitting risk. An overfitted network will likely perform well on the training set but generalize poorly out-of-sample. An algorithmic trading agent that performs well on the training set is of little use, and it is imperative to reduce the generalization error. Therefore, regularization is needed to mitigate the risk of overfitting and reduce the generalization error. A description of the regularization techniques used for these networks is provided in section 3.3.6.
For ML models to be generalizable, they must learn data patterns rather than individual data points to identify a bigger picture agnostic of noisy details. Regularization techniques such as weight decay limit the capacity of the networks by adding a parameter norm penalty to the objective function. Weight decay uses the \(L^{2}\) norm; other norms, such as the \(L^{1}\) norm, can also be used. The \(L^{2}\) norm is appropriate since it punishes outliers harsher and is easier to optimize with gradient-based methods. The parameter \(\lambda_{wd}\) controls the degree of penalization, balancing the tradeoff between increased bias and decreased variance. The network optimizer introduced in this section uses weight decay with the constant parameter \(\lambda_{wd}=0.001\) to mitigate the network's overfitting risk. Experimentally, this value delivered the optimal balance for the bias-variance tradeoff. Although increasing the weight decay penalty could further reduce overfitting risk, this was too restrictive for the networks.
It is important to note that weight decay reduces, but does not eliminate, the risk of overfitting. Dropout is another explicit regularization technique almost universally used in deep neural networks. Dropout forces the network to learn multiple independent data representations, resulting in a more robust model. When training networks on noisy financial data, dropout effectively ensures the network ignores the noise. Similarly to weight decay, the dropout rate is a tradeoff. There is no established heuristic for choosing the dropout rate; instead, it is usually chosen through experimentation. In this case, a dropout rate of 0.2 provided a suitable regularizing effect where the model generalized well and produced accurate predictions. Dropout is used between all hidden layers in the networks.
Although explicit regularizers such as weight decay and dropout reduce overfitting risk, it remains tricky to determine the optimal training duration. This
challenge is addressed with early stopping, which functions as an implicit regularizer. The networks are trained in an early stopping scheme, with testing on the validation set every 10th epoch. As reinforcement learning involves random exploration, the models are tested slightly less frequently than conventional to prevent premature stopping.
## Part III Experiments
Experiment and Results
Experiments play a vital role in science and provide the basis for scientific knowledge. This chapter presents the experiments and results where the methods presented in part II are tested on historical market data using the backtesting framework described in section 2.9. The backtest requires simplifying market assumptions, specified in chapter 5. Section 8.1 details the experiment setting. The results of the experiment are presented and discussed in sections 8.2 and 8.3. Finally, the overall approach is discussed in section 8.4. The experiment aims to answer the research questions posed at the start of this thesis.
1. Can the risk of algorithmic trading agents operating in volatile markets be controlled?
2. What reinforcement learning algorithms are suitable for optimizing an algorithmic training agent in an online, continuous time setting?
3. What deep learning architectures are suitable for modeling noisy, non-stationary financial data?
### Materials and Methods
Chapter 6 described two reinforcement learning algorithms to solve the commodity trading problem; the direct policy gradient (PG) and the deterministic actor-critic (AC). Chapter 7 described two sequential information layers, one based on CNN architecture and the other LSTM-based. Both the actor and critic in the actor-critic algorithm are modeled using the same architecture. In total, that leaves four combinations which are specified below with their respective abbreviations
* **PG-CNN:** Direct policy gradient algorithm where the policy network is modeled using the CNN-based sequential information layer.
* **PG-LSTM:** Direct policy gradient algorithm where the policy network is modeled using the LSTM-based sequential information layer.
* **AC-CNN:** Deterministic actor-critic algorithm where the policy and Q-network are modeled using the CNN-based sequential information layer.
* **AC-LSTM:** Deterministic actor-critic algorithm where the policy and Q-network are modeled using the LSTM-based sequential information layer.
#### 8.1.1 Baselines
Defining a baseline can be helpful when evaluating the performance of the methods presented in part II. A challenge with testing algorithmic trading agents is the lack of established baselines. However, by far the most common alternative is the _buy-and-hold_ baseline [12][ZZR20][ZZW\({}^{+}\)20]. The buy-and-hold
baseline consists of buying and holding an instrument throughout the experiment, i.e., \(a_{t}=1,\forall t\). Compared to a naive buy-and-hold baseline, an intelligent agent actively trading a market should be able to extract excess value and reduce risk.
#### 8.1.2 Hyperparameters
Table 1 shows the hyperparameters used in this experiment. The learning rates for the policy network and Q-network are denoted \(\alpha_{actor}\) and \(\alpha_{critic}\), respectively, and were tuned experimentally. \(|\mathcal{B}|\) is the batch size, and \(|\mathcal{D}|\) is the replay memory size. Large batch sizes are necessary to obtain reliable gradient estimates. However, large batch sizes also result in less frequent updates to the agent and updates that may contain outdated information. As a result of this tradeoff, the batch and replay memory sizes used in this experiment were selected as appropriate values. The transaction cost fraction \(\lambda_{c}\) is set to a reasonable value that reflects actual market conditions. The initial exploration rate is denoted \(\epsilon\), with decay rate \(\lambda_{\epsilon}\) and minimum \(\epsilon_{min}\). The number of stacked past observations is given by \(n\), considered a reasonable value for the agent to use for short-term market prediction.
#### 8.1.3 Training scheme
The dataset is split into three parts; a training set, a validation set, and a test set, in fractions of \(1/4\), \(1/4\), and \(1/2\), respectively. The RL agents train on the training set, the first \(1/4\) of the dataset, and then validate on the validation set, the next \(1/4\). Early stopping is used, with testing every 10th epoch. The early stopping frequency is low because the RL agents exhibit randomness and stochasticity, especially in the early epochs. Setting a high early stopping frequency can cause premature convergence. The weight initialization scheme, described in section 7.6, causes the initial action distribution of the policy to be centered around 0 with a small standard deviation. However, the agents learn faster when exploring the edge values of the state space in the early stages. Exploration of the action space is controlled, for both RL algorithms, by the three exploration parameters \(\epsilon,\lambda_{\epsilon},\epsilon_{min}\). During training, the exploration rate starts at \(\epsilon=1\) and decays at \(\lambda_{\epsilon}=0.9\) per episode to a minimum exploration rate of \(\epsilon_{min}=0.01\).
When the agent has finished training, it tests once out-of-sample on the test set, the last \(1/2\) of the dataset. Leaving half of the dataset for final testing ensures the test set is sufficiently large to evaluate the trading agents. Exploring the action space is no longer necessary after initial training. Therefore, \(\epsilon=0\) for the out-of-sample test. According to the optimization strategies specified in their respective pseudocodes 2 and 3, the RL agents continuously refit themselves as they observe transitions. The results section (8.2) presents results from these backtests.

| Model | \(\alpha_{\mathbf{actor}}\) | \(\alpha_{\mathbf{critic}}\) | \(|\mathcal{B}|\) | \(|\mathcal{D}|\) | \(\lambda_{\mathbf{c}}\) | \(\epsilon\) | \(\lambda_{\epsilon}\) | \(\epsilon_{\mathbf{min}}\) | \(n\) |
|---|---|---|---|---|---|---|---|---|---|
| **PG** | 0.0001 | - | 128 | - | 0.0002 | 1 | 0.9 | 0.01 | 20 |
| **AC** | 0.0001 | 0.001 | 128 | 1000 | 0.0002 | 1 | 0.9 | 0.01 | 20 |

Table 1: Hyperparameters
#### 8.1.4 Performance metrics
The objective of the algorithmic trading agent is described by modern portfolio theory (2.3) of maximizing risk-adjusted returns, usually represented by the Sharpe ratio. Thus, the Sharpe ratio defined in equation 2.1 will be the primary performance metric for the backtest. The reward function defined in equation 5.10 is not a comparable performance measure to related work. Instead, the standard method for assessing performance by linear returns net of transaction costs is adopted. In a backtest, the agent interacts with the environment and generates a sequence of actions \(\{a_{0},a_{1},...,a_{T-1},a_{T}\}\). The linear net return after \(T\in\mathbb{N}_{+}\) trades is defined as
\[R_{T}=\prod_{t=1}^{T}\left(y_{t}\cdot a_{t-1}+1-\lambda_{c}||a_{t-1}^{\prime}-a_{t-1}||\right) \tag{8.1}\]
where \(y_{t},a_{t},a_{t}^{\prime},\lambda_{c}\) are defined in chapter 5. The return \(R_{T}\) is used to calculate the Sharpe ratio. As there is randomness in the models, either through stochastic action selection or random mini-batch sampling, along with random weight initialization, performance is averaged over 10 runs. In addition to the Sharpe ratio, additional performance metrics can help paint a clearer picture of the performance of an algorithmic trading agent. This thesis adopts some of the performance metrics most frequently found in related work [13, 20, 21]. The performance metrics used in this thesis are defined as
1. \(\mathbb{E}[R]\): the annualized expected rate of linear trade returns.
2. \(Std(R)\): the annualized standard deviation of linear trade returns.
3. Sharpe: a measure of risk-adjusted returns defined in equation 2.1. The risk-free rate is assumed to be zero, and the annualized Sharpe ratio is thus \(\mathbb{E}[R]/Std(R)\).
4. MDD: Maximum Drawdown (MDD), the maximum observed loss from any peak.
5. Hit: the rate of positive trade returns.
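Building on the return definition in equation 8.1 and the metrics above, the sketch below computes these quantities from a series of per-period net trade returns; the annualization factor is an assumption (roughly five samples per trading day), and the function name is illustrative.

```python
import numpy as np

def backtest_metrics(net_returns, periods_per_year=252 * 5):
    """Performance metrics from per-period net trade returns r_t, where 1 + r_t is
    the per-period gross return entering the product in equation 8.1."""
    r = np.asarray(net_returns)
    expected = r.mean() * periods_per_year                       # E[R], annualized
    std = r.std() * np.sqrt(periods_per_year)                    # Std(R), annualized
    sharpe = expected / std                                      # risk-free rate assumed zero
    wealth = np.cumprod(1.0 + r)                                 # cumulative return (eq. 8.1)
    mdd = np.max(1.0 - wealth / np.maximum.accumulate(wealth))   # maximum drawdown
    hit = np.mean(r > 0)                                         # rate of positive trade returns
    return {"E[R]": expected, "Std(R)": std, "Sharpe": sharpe, "MDD": mdd, "Hit": hit}
```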
#### 8.1.5 Dataset
The dataset consists of the front-month _TTF Natural Gas Futures_ contracts from 2011 to 2022. The observations are sampled according to the transacted euro volume on the exchange, defined in section 5.2. Larger sample sizes are desirable to ensure statistical significance, especially for highly overparameterized approximators such as neural networks. In addition, predictability is generally
higher over shorter horizons [11]. However, as sampling frequency (and therefore trading frequency) increases, simplifying assumptions, such as no impact and perfect liquidity, become increasingly consequential. Thus, an appropriate target number of samples per day is \(tgt=5\), which provides a little over 20 000 total samples. The data processing is limited to what is described in section 7.1.
The first quarter of the dataset consisting of trades from 01/01/2011 to 01/01/2014 makes up the training set. The validation set is the second quarter of the dataset from 01/01/2014 to 01/01/2017. Finally, the test set is the second half of the dataset from 01/01/2017 to 01/01/2023. Figure 8.1 illustrates the training-validation-test split.
### Results
This section presents the results of the experiments described in the previous section. The models are tested using four different values of the risk-sensitivity term \(\lambda_{\sigma}\) (0, 0.01, 0.1, and 0.2), and the results of all four values are presented. The results are visualized using three tools; a table and two types of plots, and they are briefly described below
* The table consists of the performance metrics (described in section 8.1.4) of each model (described in section 8.1) from the backtests.
* A standard line plot illustrates the performance of the models against the baseline indexed in time of the cumulative product of logarithmic trade returns, where the trade returns are defined in equation 8.1.
* A boxplot illustrates the distribution of the monthly logarithmic returns12 of each model and the baseline. Boxplots summarize a distribution by its sampled median, the first quantile (\(Q_{1}\)), and the third quantile (\(Q_{3}\)), represented by the box. The upper whisker extends to the largest observed value within \(Q_{3}+\frac{3}{2}IQR\), and the lower whisker extends to the smallest observed value within \(Q_{1}-\frac{3}{2}IQR\), where the interquartile range (IQR) is \(Q_{3}-Q_{1}\). Dots represent all values outside of the whiskers (outliers). Footnote 12: Again, trade returns are defined in equation 8.1 and resampled to produce monthly values. The logarithmic monthly returns are then calculated based on these values.
The plots display the performance of all models and the baseline and are grouped by risk-sensitivity terms.
Figure 8.1: The training-validation-test split
Table 2 below shows the results of the backtests averaged over 10 runs. The variation between runs was small enough to warrant the level of precision of the results given in the table.
A pair of plots (line plot and boxplot) are grouped by risk-term values (0, 0.01, 0.1, and 0.2, respectively).
| | \(\mathbb{E}[R]\) | \(Std(R)\) | Sharpe | MDD | Hit |
|---|---|---|---|---|---|
| Buy & Hold | 0.271 | 0.721 | 0.376 | 0.877 | 0.524 |
| **\(\lambda_{\sigma}=0\)** | | | | | |
| PG-CNN | **0.403** | 0.558 | **0.722** | 0.753 | 0.529 |
| PG-LSTM | 0.297 | **0.502** | 0.591 | 0.726 | 0.527 |
| AC-CNN | 0.302 | 0.610 | 0.495 | 0.724 | 0.538 |
| AC-LSTM | 0.226 | 0.694 | 0.325 | **0.637** | **0.541** |
| Average | 0.307 | 0.591 | 0.533 | 0.710 | 0.534 |
| **\(\lambda_{\sigma}=0.01\)** | | | | | |
| PG-CNN | **0.401** | 0.437 | **0.918** | 0.665 | 0.537 |
| PG-LSTM | 0.258 | 0.326 | 0.791 | 0.540 | 0.526 |
| AC-CNN | 0.346 | 0.471 | 0.735 | 0.601 | **0.545** |
| AC-LSTM | 0.251 | **0.300** | 0.837 | **0.443** | 0.535 |
| Average | 0.314 | 0.383 | 0.820 | 0.562 | 0.536 |
| **\(\lambda_{\sigma}=0.1\)** | | | | | |
| PG-CNN | **0.371** | 0.356 | **1.042** | 0.591 | 0.537 |
| PG-LSTM | 0.235 | 0.264 | 0.890 | 0.373 | 0.524 |
| AC-CNN | 0.091 | 0.239 | 0.380 | 0.392 | **0.539** |
| AC-LSTM | 0.110 | **0.190** | 0.579 | **0.261** | 0.525 |
| Average | 0.202 | 0.262 | 0.723 | 0.404 | 0.531 |
| **\(\lambda_{\sigma}=0.2\)** | | | | | |
| PG-CNN | **0.243** | 0.298 | **0.815** | 0.410 | 0.533 |
| PG-LSTM | 0.179 | 0.247 | 0.725 | 0.373 | **0.537** |
| AC-CNN | 0.136 | **0.198** | 0.687 | 0.454 | 0.531 |
| AC-LSTM | 0.114 | 0.229 | 0.498 | **0.341** | 0.522 |
| Average | 0.168 | 0.243 | 0.681 | 0.394 | 0.531 |

Table 2: Backtest results
Figure 8.2: Cumulative logarithmic trade returns for \(\lambda_{\sigma}=0\)
Figure 8.3: Boxplot of monthly logarithmic trade returns for \(\lambda_{\sigma}=0\)
Figure 8.4: Cumulative logarithmic trade returns for \(\lambda_{\sigma}=0.01\)
Figure 8.5: Boxplot of monthly logarithmic trade returns for \(\lambda_{\sigma}=0.01\)
Figure 8.6: Cumulative logarithmic trade returns for \(\lambda_{\sigma}=0.1\)
Figure 8.7: Boxplot of monthly logarithmic trade returns for \(\lambda_{\sigma}=0.1\)
Figure 8.8: Cumulative logarithmic trade returns for \(\lambda_{\sigma}=0.2\)
Figure 8.9: Boxplot of monthly logarithmic trade returns for \(\lambda_{\sigma}=0.2\)
### Discussion of results
This section will discuss the experiment results and how they relate to the three research questions posed at the start of this thesis.
#### 8.3.1 Risk/reward
The first research question posed at the start of this thesis was:
Can the risk of algorithmic trading agents operating in volatile markets be controlled?
Note some general observations about the buy-and-hold baseline against which the models are compared. The baseline mirrors the direction of the natural gas market during the out-of-sample test. Due to the volatility and upwards price pressure stemming from the energy crisis in 2021-2022, the buy-and-hold baseline has a high annualized expected return but also a high annualized standard deviation of returns. As a result, the Sharpe ratio, the primary performance metric in this experiment, is relatively low. The primary goal of the deep reinforcement learning models is to increase the Sharpe ratio. Increasing the Sharpe ratio is achieved by increasing the expected returns or decreasing the standard deviation. The reward function 5.10 defines this trade-off, where the risk-sensitivity term \(\lambda_{\sigma}\) functions as a trade-off hyperparameter. Low values of \(\lambda_{\sigma}\) make the agent more risk-neutral, i.e., more concerned with increasing expected returns and less concerned with decreasing the standard deviation of returns. Conversely, high values of \(\lambda_{\sigma}\) make the agent more risk-averse, i.e., more concerned with decreasing the standard deviation of returns and less concerned with increasing the expected returns. The experiments in this thesis use four risk-sensitivity terms; 0, 0.01, 0.1, and 0.2. For \(\lambda_{\sigma}=0\), the agent is risk-neutral and only concerned with maximizing the expected return. For values exceeding 0.1, the agent becomes so risk-averse that it hardly participates in the market. This trade-off is evident in the results where the annualized expected return and the standard deviation are on average 83% and 143% higher, respectively, for \(\lambda_{\sigma}=0\) compared to \(\lambda_{\sigma}=0.2\). The boxplots (figures 8.3, 8.5, 8.7, 8.9) illustrate how the monthly logarithmic returns are more concentrated as \(\lambda_{\sigma}\) increase. This phenomenon is also observed in the line plots (figures 8.2, 8.4, 8.6, 8.8), where it is apparent that the variability of returns decreases as \(\lambda_{\sigma}\) increases. The agent's action, i.e., the position size (and direction), is its only means to affect the reward. By definition, a risk-averse agent will choose outcomes with low uncertainty, so a preference for smaller positions is a natural consequence of increasing risk sensitivity. Consequently, the risk-averse deep RL agents have a significantly lower maximum drawdown than the baseline, but they do not fully capitalize on the increasing prices from mid-2020 to 2023.
The risk-neutral agents (i.e., those where \(\lambda_{\sigma}=0\)), on average, increase the returns by 13% compared to the baseline. Although they have no risk punishment, they also decrease the standard deviation of returns by 18%. This last point is surprising but could be a byproduct of an intelligent agent trying to maximize returns. For \(\lambda_{\sigma}=0.01\), the deep RL agents, on average, produce 16% increased returns and 47% reduced standard deviation of returns compared to the baseline. This combination results in a 118% higher Sharpe ratio. For \(\lambda_{\sigma}=0.1\), the agents on average produce 25% lower returns; however, the standard deviation of returns is reduced even more, by 64%. Thus, the Sharpe is increased by 92% compared to the baseline. The most risk-averse agents (i.e., those where \(\lambda_{\sigma}=0.2\)) on average produce 38% lower returns with 66% lower standard deviation of returns, yielding an 83% increase in Sharpe compared to the baseline. The risk-sensitivity term \(\lambda_{\sigma}=0.01\) produces the highest Sharpe ratio on average. Thus, the backtests suggest that of the four risk-sensitivity options tested in this thesis, \(\lambda_{\sigma}=0.01\) strikes the best risk/reward balance.
#### 8.3.2 RL models
The second research question posed at the start of this thesis was:
What reinforcement learning algorithms are suitable for optimizing an algorithmic training agent in an online, continuous time setting?
A curious result from the experiment is that, for three out of four risk-sensitivity terms13, the model with the highest hit-rate has the lowest Sharpe. In other words, the model making the highest rate of profitable trades also produces the lowest risk-adjusted returns. This result illustrates the complexity of trading financial markets and justifies the methods chosen in this thesis. Firstly, there is no guarantee that a higher percentage of correct directional calls will result in higher returns. Therefore, a forecast-based supervised learning approach optimized for making correct directional calls may not align with the stakeholder's ultimate goal of maximizing risk-adjusted returns. For an algorithmic trading agent to achieve the desired results, e.g., making trades that maximize the Sharpe ratio, it should be optimized directly for that objective. However, doing so in a supervised learning setting is not straightforward. Reinforcement learning, on the other hand, provides a convenient framework for learning optimal sequential behavior under uncertainty. Furthermore, discrete position sizing, a drawback of value-based reinforcement learning, can expose the agent to high risks. However, the agent can size positions based on confidence through the continuous action space offered by policy gradient methods, allowing for more effective risk management. Section 2.6 presented research arguing that, for algorithmic trading, reinforcement learning is superior to supervised learning and policy gradient methods are superior to value-based methods, and this result supports those arguments.
Footnote 13: \(\lambda_{\sigma}=0\), 0.01, and 0.1
Although previous research supported policy gradient methods, there was no consensus on which one was superior in this context. Chapter 6 presented two
policy gradient methods: one based on an actor-only framework and the other based on an actor-critic framework, and discussed their respective advantages and disadvantages. The previous section (8.2) presented results from the back-tests where both algorithms were tested out-of-sample. For all risk-sensitivity terms, the direct policy gradient algorithm, on average, produces a 49% higher Sharpe ratio than the deterministic actor-critic algorithm. Comparing the two algorithms using the same network architecture and risk sensitivity term reveals that the actor-based algorithm outperforms the actor-critic-based algorithm in 7 out of 8 combinations. The only case where the actor-critic-based algorithm performs better14 is the case with the smallest performance gap. Furthermore, the actor-only direct policy gradient method strictly increases the Sharpe ratio for both network architectures as the risk-sensitivity parameter \(\lambda_{\sigma}\) increases to a maximum at \(\lambda_{\sigma}=0.1\). The actor-critic method does not follow this pattern, suggesting it fails to achieve its optimization objective.
Footnote 14: PG-LSTM vs. AC-LSTM for \(\lambda_{\sigma}=0.01\)
The performance gap between the actor-based and actor-critic-based algorithms is significant enough to warrant a discussion. An explanation for the performance gap could be that the actor-critic-based algorithm optimizes the policy using a biased Q-network reward estimate instead of the observed unbiased reward. As a data-generating process, the commodity market is complex and non-stationary. If the Q-network closely models the data-generating distribution, using reward estimates from sampled experience from a replay memory is an efficient method for optimizing the policy. On the other hand, it is also clear that a policy that is optimized using Q-network reward estimates that are inaccurate will adversely affect performance. The direct policy gradient algorithm optimizes the policy using the observed unbiased reward and avoids this problem altogether. Given that the reward function is exactly expressed, optimizing it directly, as the direct policy gradient method does, is the most efficient approach. Many typical RL tasks work well with the actor-critic framework, but the backtests indicate that financial trading is not one of them.
#### 8.3.3 Networks
The third and final research question posed at the start of this thesis was:
What deep learning architectures are suitable for modeling noisy, non-stationary financial data?
In the research presented in section 2.5, two types of deep learning architectures stood out; the long short-term memory and the convolutional neural network. Chapter 7 presented two types of parametric function approximators based on the CNN- and LSTM-architecture, respectively. The previous section (8.2) presented results from the backtests where both these function approximators are tested out-of-sample. On average, the CNN-based models produce over 5% higher Sharpe than those based on the LSTM, which is surprising, as LSTMs are generally viewed as superior in sequence modeling and, due to their memory,
are the preferred option when modeling POMDPs. In contrast to the CNN, the LSTM can handle long-term dependencies, but it seems the lookback window provides enough historical information for the CNN to make trade decisions. However, the performance gap is not big enough to say anything conclusive, and the LSTM outperforms the CNN in some tests, so it is unclear which is most suitable.
One interesting observation is that the CNN-based models produce higher returns and standard deviation of returns compared to the LSTM. On average, the CNN-based models produce 37% higher returns and 15% higher standard deviation of returns. From the line plots in figures 8.2, 8.4, 8.6, and 8.8, it looks like a possible explanation for this is that the LSTM-based models prefer smaller position sizes compared to the CNN-based models. One potential reason for this phenomenon involves the difference in how the CNN and LSTM are optimized. Generally speaking, the CNN-based model is far easier and quicker to optimize than the LSTM-based model, partly due to batch norm, which in its conventional form is incompatible with RNNs. Another reason is that when the LSTM is trained for long sequences, the problem of vanishing gradients makes back-propagating the error difficult and slow. Increasing the learning rate leads to exploding gradients. The CNN-based model with batch norm quickly and effectively adjusts its parameters to take full advantage of newly observed information during out-of-sample tests. The LSTM-based model, on the other hand, adjusts its parameters much slower. As a result, the actions it selects often end up someplace in the middle of the action space causing smaller position sizes, lower returns, and lower standard deviation of returns. For that reason, the author of this thesis theorizes that the performance gap between the CNN-based and LSTM-based models would increase with time.
### Discussion of model
Following the discussion of the results, it is interesting to take a step back and have a more general discussion of the model. This includes discussing the strengths and weaknesses of the model, as well as potential applications and limitations.
#### 8.4.1 Environment
Solving complex real-world problems with reinforcement learning generally requires creating a simplified version of the problem that lends itself to analytical tractability. Usually, this involves removing some of the frictions and constraints of the real-world problem. In the context of financial trading, the environment described in chapter 5 makes several simplifying assumptions about the environment, including no market impact, no slippage, the ability to purchase or sell any number of contracts at the exact quoted price, no additional costs or restrictions on short-selling, and fractional trading. It is imperative to note that these assumptions do not necessarily reflect real-world conditions. As such, it is crucial to know the problem formulation's limitations and how it will negatively
affect the model's generalizability to the real-world problem. Poorly designed environments, where agents learn to exploit design flaws rather than the actual problem, are a frequent problem in reinforcement learning [14]. At the same time, these simplifying assumptions allow for a clean theoretical analysis of the problem. Furthermore, the environment introduces some friction through transaction costs, an improvement over many existing models.
Lookahead bias in the input data is avoided by using the price series alone as input, as described in section 7.1. The price series of a financial instrument is generally the most reliable predictor of future prices. However, price series only provide a limited view of the market and do not consider the broader economic context and the potential impact of external factors. As a result, an agent relying solely on price series may miss out on meaningful predictive signals. Furthermore, since the model learns online, an effective data governance strategy is required to ensure the quality and integrity of the real-time input data stream, as data quality issues can harm the model's performance. The dollar bars sampling scheme described in section 5.2 has solid theoretical foundations for improving the statistical properties of the sub-sampled price series compared to traditional time-based sampling. When using this sampling scheme, however, the agent cannot be certain of the prediction horizon, which complicates forecasting.
Commodity trading firms often conduct asset-backed trading in addition to paper trading, which incurs additional costs for booking pipeline capacity, storage capacity, or LNG tankers. The model does not currently include these costs, but the environment could be adjusted to include them.
#### 8.4.2 Optimization
Statistical learning relies on an underlying joint feature-target distribution \(F(x,y)\), with non-vanishing mutual information. The algorithmic trading agent approximates this function by learning the distribution through historical data. As financial markets are nonstationary, statistical distributions constantly change over time, partly because market participants learn the market dynamics and adjust their trading accordingly. In order to remain relevant for the near future, the model must be continuously refitted using only data from the immediate past, at the expense of statistical significance. On the other hand, training a complex model using only a relatively small set of recent data is challenging in a high-noise setting such as financial forecasting, often resulting in poor predictive power. This tradeoff between timeliness and statistical significance is known as the _timeliness-significance_ tradeoff [15]. The timeliness-significance tradeoff highlights a central challenge in optimizing algorithmic trading models.
This thesis investigates the use of reinforcement learning in algorithmic trading, a field traditionally dominated by supervised learning-based approaches. Supervised learning is a straightforward method for easily labeled tasks, such as forecasting financial prices. Reinforcement learning, on the other hand, is better suited to complex problems, such as sizing positions, managing risk and transaction costs. In fact, with only minor modifications, the model outlined in
this thesis can optimize an agent trading a portfolio of arbitrary size. For this reason, reinforcement learning was chosen as the algorithmic trading agents' optimization framework. Temporal credit assignment is one of the main strengths of reinforcement learning, making it ideal for game playing and robotics, involving long-term planning and delayed reward. In this problem, however, temporal credit assignment does not apply since trading is an immediate reward process. Furthermore, the complexity of reinforcement learning compared to supervised learning comes at a price. As well as requiring a model of the environment in which the agent interacts and learning to associate context with reward-maximizing actions, reinforcement learning introduces added complexity by introducing the exploration-exploitation tradeoff. With more moving parts, reinforcement learning can be significantly more challenging to implement and tune and is generally less sample efficient than supervised learning. The learning process is more convoluted as the agent learns through reinforcement signals generated by interaction with the environment and involves stochasticity. Consequently, the model can display unstable behavior where the policy diverges, or the agent overfits to noise. E.g., if the market experiences a sustained downward trend, the agent can be deceived into believing that the market will continue to decline indefinitely. As a result, the agent may adjust its policy to always short the market, which will have disastrous effects once the market reverses. The phenomenon is caused by the temporal correlation of sequential interactions between RL agents and the market, and that reinforcement learning is sample inefficient, making it difficult to obtain good gradient estimates. Replay memory can ensure that gradient estimates are derived from a wide variety of market conditions. However, replay memory introduces biased gradient estimates, which, according to backtests, is a poor tradeoff. The timeliness-significance tradeoff further complicates this problem of obtaining suitable gradient estimates. A supervised learning framework is more straightforward and avoids much of the complexity associated with reinforcement learning. Thus, it is unclear whether reinforcement learning or supervised learning is the most appropriate optimization framework for algorithmic trading.
#### 8.4.3 Interpretability and trust
Interpretability is the ability to determine the cause and effect of a model. Due to their over-parameterized nature, deep neural networks possess a remarkable representation capacity, enabling them to solve a wide range of complex machine learning problems, but at the cost of being difficult to interpret. Neural networks are black boxes, acceptable for low-risk commercial applications such as movie recommendations and advertisements. Interpreting these predictions is of little concern as long as the models have sufficient predictive power. However, deep learning's opaque nature prevents its adoption in critical applications, as the failure of a commodity trading model could result in substantial financial losses and threaten global energy security. Setting aside people generally being risk-averse and unwilling to bet large sums of money on a black box, is the aversion to applying deep learning in critical domains such as commodity trading
reasonable? Does understanding the model even matter as long as it delivers satisfactory backtest performance?
This question can be answered by reviewing statistical learning theory. Generally, machine learning models are tested under the assumption that observations are drawn from the same underlying distribution, the data-generating distribution, and that observations are IID. In this setting, the test error serves as a proxy for the generalization error, i.e., the expected error on new observations. However, the dynamics of the financial markets are constantly changing. Fierce competition in financial markets creates a cycle in which market participants attempt to understand the underlying price dynamics. As market participants better understand market dynamics, they adjust their trading strategies to exploit that knowledge, further changing market dynamics. Due to the constantly changing dynamics of the market, models that worked in the past may no longer work in the future as inefficiencies are arbitraged away15. Therefore, it is important to be cautious when interpreting backtest errors as generalization errors, as it is unlikely that observations sampled at different points in time are drawn from the same probability distribution. Even if one disregards all the flaws of a backtest16, the backtest, at best, only reflects performance on historical data. In no way is this intended to discourage backtesting. However, naively interpreting backtest performance as an assurance of future performance is dangerous. Referring back to section 2.9; a backtest is a historical simulation of how the model would have performed should it have been run over a past period. Even exceptional results from the most flawlessly executed backtest can never guarantee that the model generalizes to the current market. Furthermore, the results should be interpreted cautiously if no ex-ante logical foundation exists to explain them. Deep neural networks are highly susceptible to overfitting to random noise when trained on noisy financial time series. It is, however, difficult to determine if the agent has detected a legitimate signal if the model is not interpretable. Even if the model detects a legitimate signal in the backtests, other market participants may discover the same signal and render the model obsolete. Again, determining this is difficult without knowing what inefficiencies the model exploits, and deploying it until it displays a sustained period of poor performance will be costly.
Footnote 15: Volatility, for example, used to be a reliable indicator of future returns [11].
Footnote 16: E.g., not accounting for market impact and lookahead bias.
In response to the question of whether or not understanding a model matters if it performs well on a backtest, the answer is an emphatic _yes_. Blindly taking backtest performance of a black box as an assurance of future performance in a noisy and constantly changing environment can prove costly. Thus, the aversion to adopting deep learning in algorithmic trading is reasonable. Ensuring that the trading decisions are explainable and the models are interpretable is essential for commercial and regulatory acceptance. To address this challenge, models should be created with a certain degree of interpretability. This way, stakeholders can get insight into which inefficiencies the model exploits, evaluate its generalizability, and identify its obsolescence _before_ incurring significant
losses. The use of deep learning in algorithmic trading can still be viable with techniques such as explainable AI and model monitoring.
## Future work
The methods presented in this thesis leave room for improvement in further work. More investigation should be done to evaluate the effectiveness of existing methods in different contexts. Further investigation will enable a deeper understanding of the model and its generalizability and provide an opportunity to identify potential areas for improvement. Considering the lack of real-world market data, one option is to use generative adversarial networks (GANs) to generate synthetic markets [21]. GANs can generate unlimited data, which can be used to train and test the model and its generalizability. Additionally, the lack of established baselines could be improved upon. While the buy-and-hold baseline is well understood and trusted, it is unrealistic in this context, as futures contracts expire. Although it presents its own challenges, developing a baseline more appropriate for futures trading would improve the current model. Furthermore, a greater level of interpretability is required to achieve real-world adoption. Therefore, combining algorithmic trading research with explainable AI is imperative to improve existing methods' interpretability.
Incorporating non-traditional data sources, such as social media sentiment or satellite images, may prove beneficial when forecasting market returns. Alternative data can provide a more comprehensive and holistic view of market trends and dynamics, allowing for more accurate predictions. By leveraging alternative data, algorithmic trading agents can gain an edge over their competitors and make better-informed decisions. Using deep learning techniques such as natural language processing and computer vision to analyze text or image data in an algorithmic trading context is promising. Neural networks are generally effective in problems with complex structures and high signal-to-noise ratios. Thus, it may be more appropriate to use deep learning to extract features from images or text rather than analyzing price series.
Lastly, the methods presented in this thesis are limited to trading a single instrument. They are, however, compatible with portfolio optimization with minimal modifications. Further research in this area would be interesting, as it better utilizes the potential of the reinforcement learning framework and the scalability of data-driven decision-making.
## Conclusion
This thesis investigates the effectiveness of deep reinforcement learning methods in commodities trading. Previous research in algorithmic trading, state-of-the-art reinforcement learning, and deep learning algorithms was examined, and the most promising methods were implemented and tested. This chapter summarizes the thesis' most important contributions, results, and conclusions.
This thesis formalizes the commodities trading problem as a continuing discrete-time stochastic dynamical system. The system employs a novel time-discretization scheme that is reactive and adaptive to market volatility, providing better statistical properties of the sub-sampled financial time series. Two policy gradient algorithms, an actor-based and an actor-critic-based, are proposed to optimize a transaction-cost- and risk-sensitive agent. Reinforcement learning agents parameterized using deep neural networks, specifically CNNs and LSTMs, are used to map observations of historical prices to market positions.
The models are backtested on the front month TTF Natural Gas futures contracts from 01-01-2017 to 01-01-2023. The backtest results indicate the viability of deep reinforcement learning agents in commodities trading. On average, the deep reinforcement learning agents produce an 83% higher Sharpe ratio out-of-sample than the buy-and-hold baseline. The backtests suggest that deep RL models can adapt to the unprecedented volatility caused by the energy crisis during 2021-2022. Introducing a risk-sensitivity term functioning as a trade-off hyperparameter between risk and reward produces satisfactory results, where the agents reduce risk as the risk-sensitivity term increases. The risk-sensitivity term allows the stakeholder to control the risk of an algorithmic trading agent in volatile markets. The direct policy gradient algorithm produces significantly higher Sharpe (49% on average) than the deterministic actor-critic algorithm, suggesting that an actor-based policy gradient method is more suited to algorithmic trading in an online, continuous time setting. The parametric function approximator based on the CNN architecture performs slightly better (5% higher Sharpe on average) than the LSTM, possibly due to the problem of vanishing gradients for the LSTM.
The algorithmic trading problem is made analytically tractable by simplifying assumptions that remove market frictions. Performance may be inflated due to these assumptions and should be viewed with a high degree of caution.
## Acronyms
**AC**: Actor-Critic

**AMH**: Adaptive Market Hypothesis

**ANN**: Artificial Neural Network

**CNN**: Convolutional Neural Network

**CV**: Cross-Validation

**DL**: Deep Learning

**DNN**: Deep Neural Network

**DQN**: Deep Q-Network

**DRQN**: Deep Recurrent Q-Network

**EMH**: Efficient Market Hypothesis

**FFN**: Feedforward Network

**IID**: Independent and Identically Distributed

**LSTM**: Long Short-Term Memory

**MDD**: Maximum Drawdown

**MDP**: Markov Decision Process

**ML**: Machine Learning

**MPT**: Modern Portfolio Theory

**MSE**: Mean Squared Error

**PG**: Policy Gradient

**POMDP**: Partially Observable Markov Decision Process

**ReLU**: Rectified Linear Unit

**RL**: Reinforcement Learning

**RNN**: Recurrent Neural Network

**SGD**: Stochastic Gradient Descent

**SL**: Supervised Learning
|
2309.00927 | A practical guide to loss measurements using the Fourier transform of
the transmission spectrum | Analyzing the internal loss characteristics and multimodedness of
(integrated) optical devices can prove difficult. One technique to recover this
information is to Fourier transform the transmission spectrum of optical
components. This article gives instruction on how to perform the transmission
measurement, prepare the data, and interpret the Fourier spectrum. Our guide
offers insights into the influence of sampling, windowing, zero padding as well
as Fourier spectrum peak heights and shapes which are previously neglected in
the literature but have considerable impact on the results of the method. For
illustration, we apply the method to a Bragg-reflection waveguide. We find that
the waveguide is multimodal with two modes having very similar group refractive
indices but different optical losses. | Hannah Thiel, Bianca Nardi, Alexander Schlager, Stefan Frick, Gregor Weihs | 2023-09-02T12:36:00Z | http://arxiv.org/abs/2309.00927v1 | # A practical guide to loss measurements using the Fourier transform of the transmission spectrum
###### Abstract
Analyzing the internal loss characteristics and multimodedness of (integrated) optical devices can prove difficult. One technique to recover this information is to Fourier transform the transmission spectrum of optical components. This article gives instruction on how to perform the transmission measurement, prepare the data, and interpret the Fourier spectrum. Our guide offers insights into the influence of sampling, windowing, zero padding as well as Fourier spectrum peak heights and shapes which are previously neglected in the literature but have considerable impact on the results of the method. For illustration, we apply the method to a Bragg-reflection waveguide. We find that the waveguide is multimodal with two modes having very similar group refractive indices but different optical losses.
## 1 Introduction
In any optics or photonics system detailed knowledge of the optical loss of individual components is a prerequisite to optimizing performance. This could be to stay competitive on the market, to preserve light coming from weak sources, as is often the case in biosensing [1, 2] and whenever light cannot simply be amplified like light signals transmitted via free-space links [3] or quantum states of light [4]. Quantifying these losses, however, can prove difficult; especially in integrated semiconductor devices. The internal losses of any component tend to superimpose with the reflectivity of input and output facets, additional cavities forming between optical components, probe laser stability, and coupling and detector efficiencies. A way of determining the true internal optical propagation loss of a component is a Fabry-Perot measurement.
It can be performed when the component undergoing testing acts as a cavity, such as a waveguide with cleaved facets. When scanning the input wavelength, a fringe pattern arises at the output that originates from interference of the light inside the cavity. The visibility of the Fabry-Perot fringes reveals the cavity loss, and therefore, propagation loss in a simple cavity [5]. However, if the component in question is multimodal, structurally more complex or has fabrication imperfections, the fringe pattern becomes challenging to interpret.
An elegant way of analyzing the Fabry-Perot fringes in those cases is to perform a Fourier transform from the angular wavenumber \(k\) to the optical path length \(d\). The Fourier spectrum contains information about the loss or gain in the medium, designed or unintentional defects and the modal landscape including group refractive indices. The method was made popular by Hofstetter et al. [6] as a way to measure losses in semiconductor lasers and has since been expanded to, among others, study defects in laser diodes [7], the internal cavities introduced by tapers in photonic crystals [8], the single- or multi-modedness of waveguides [9] and to modally resolve loss in waveguides [10].
While the theory is well developed and analysis seems straightforward, the actual measurement and its interpretation need to be done with care. The resulting resonances and their magnitude depend sensitively on the implementation of the method. We find that the necessary details are seldom described in publications. This is why we would like to provide a practical guide for those wishing to apply the method in the lab.
To this end, we first introduce the basics of Fabry-Perot fringes, their Fourier transform, and the elements visible in the Fourier spectrum. The practical guide explains how exactly to take
the measurement and how to prepare the data for Fourier analysis paying attention to details like zero padding and windowing. We detail the interpretation of the Fourier spectrum looking at peak shapes and heights and possible contributions from multiple modes. Following this, we show how to obtain the propagation loss coefficients and group refractive indices from the peaks in the Fourier spectrum. Finally, as a demonstration, we apply the method to an AlGaAs Bragg-reflection waveguide, a type of ridge waveguide used mainly as a source for photon pairs produced via parametric down-conversion [11].
## 2 The Fourier transform of a transmission spectrum
A mathematical description of the transmission through a cavity and its Fourier transform has been given by Hofstetter and others and will not be repeated here [6, 7, 12]. To perform the measurement it is sufficient to understand the following picture:
Consider light being reflected back and forth in a cavity. In integrated photonics, this cavity could be made up of a waveguide with cleaved facets with moderate reflectivity. The light will interfere destructively or constructively depending on its wavelength and the optical length of the cavity. This interference manifests in the Fabry-Perot fringes measurable as light intensity coupled out of the cavity when the wavelength of the input light is scanned.
A propagating mode acquires phase shifts as it travels through the cavity. When the Fabry-Perot fringes are Fourier transformed, these phase shifts manifest as signals at different values of the optical pathlength \(d\). For instance, every time light is reflected at the facets, it experiences a phase shift that sets apart light having travelled fewer or more passes through the cavity. This is characterized by peaks in the Fourier spectrum spaced at integer multiples of the optical cavity length \(L_{\text{opt}}\), as showcased in Figure 1. The optical cavity length is proportional to the physical cavity length \(L\) via a factor of \(n/\pi\), where \(n\) is the refractive index of the material, or, more specifically, the group index of the travelling mode 1. The peaks decrease in amplitude as less of the light manages further passes. This amplitude decrease depends on the reflectivity of the waveguide facets and the losses within the waveguide material. Therefore, if the reflectivity \(R\) of the facets is known and the ratio \(\tilde{R}\) of subsequent peak heights is gathered from the Fourier spectrum, the intrinsic waveguide propagation loss coefficient
Footnote 1: For the definition of the optical cavity length we follow the notation introduced by Hofstetter et al. in References [12] and [6].
\[\alpha=-\frac{1}{L}\ln\left(\frac{\tilde{R}}{R}\right) \tag{1}\]
can be calculated via the formula derived in Ref. [12]. If the facet reflectivity is not known, it is also possible to determine it together with the loss coefficient. In this case, the measurement must be done for multiple identical samples only differing in length [10].
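As a minimal illustration of Eq. (1) (not part of the original analysis), the loss coefficient can be evaluated directly once \(\tilde{R}\), \(R\) and \(L\) are known; the numerical values below are placeholders rather than measured data.

```python
import numpy as np

def propagation_loss(R_tilde, R, L):
    """Intrinsic propagation loss coefficient from Eq. (1):
    R_tilde: ratio of subsequent Fourier-peak heights,
    R: facet (power) reflectivity, L: physical cavity length in m."""
    return -np.log(R_tilde / R) / L

# placeholder numbers: a 1.8 mm long cavity, facet reflectivity 0.3,
# peak-height ratio 0.2
alpha = propagation_loss(R_tilde=0.2, R=0.3, L=1.8e-3)
print(f"alpha = {alpha:.0f} m^-1 = {alpha * 1e-3:.2f} mm^-1")
```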
In addition to the phase shifts upon reflection from the facets, each mode acquires a phase shift dependent on its group index during propagation. Hence, instead of individual, equally spaced peaks, the Fourier spectrum features bunches of peaks each of which can be assigned to a mode with an individual optical cavity length. The Fourier spectrum therefore contains information about the number of excited modes in the cavity, their relative strengths and their respective optical cavity lengths. From this, one can calculate the travel times within the cavity and the group indices.
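Following the convention stated above (\(L_{\text{opt}}=n_{g}L/\pi\)), the group index of a mode follows directly from the position of its first Fourier peak; the sketch below uses made-up numbers that mirror the simulated example of Figure 1.

```python
import numpy as np

L = 1.5e-3        # physical cavity length in m (illustrative value)
d_peak = 1.67e-3  # position of the first Fourier peak, i.e. L_opt (illustrative)

n_g = np.pi * d_peak / L   # group index, since L_opt = n_g * L / pi
print(f"group index n_g = {n_g:.2f}")
```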
## 3 Practical Guide: measurement, Fourier transform and analysis
Knowing how powerful a tool the Fourier transform of a Fabry-Perot spectrum is, we can now move on to the practical guide. Here, we take a look at the influence of sampling and data preparation. We then Fourier transform the transmission spectrum and analyze the Fourier
spectrum paying attention to peak positions, heights, and shapes. Our protocol uses the fast Fourier transform contained in the NumPy library in Python specifically, but the results should hold for any established fast Fourier transform (FFT) algorithm.
### Influence of sampling
The peak positions and amplitudes in the Fourier spectrum depend critically on how exactly the measurement is taken. The first step is to record the transmission spectrum.
**Measurement:** The FFT algorithm requires equally spaced input values, hence we record the transmitted power in equally spaced steps. It is usually recorded as a function of wavelength as this is the turning knob available for most lasers. However, after Fourier transforming, this leads to broadened peaks because a signal that is periodic in angular wavenumber \(k\) is sampled in equally-spaced wavelength steps. This means that the frequency is estimated to be lower (higher) where the wavelength steps correspond to larger (smaller) steps in \(k\)-space. The resulting peaks in the Fourier spectrum are smeared across a range of x-values, as can be seen in Figure 2. One solution would be to employ the non-uniform discrete Fourier transform, for which a Python package called PyNUFFT exists [13]. This form of FFT, however, is not as well established. Therefore, for reliable results, we set the laser to equally spaced \(k\)-values (or their corresponding wavelengths) during the measurement, where the Fourier domain, again, is the optical path length \(d\).
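A minimal sketch of such a sampling grid is given below; the number of points is an arbitrary choice that roughly reproduces the \(k\)-step of a few \(\mathrm{m}^{-1}\) quoted in Section 4.

```python
import numpy as np

lam_min, lam_max = 1520e-9, 1550e-9            # wavelength range in m
k_max, k_min = 2 * np.pi / lam_min, 2 * np.pi / lam_max
n_points = 32001                               # placeholder, adjust to the laser

k = np.linspace(k_min, k_max, n_points)        # equally spaced in k
wavelengths = 2 * np.pi / k                    # set the laser to these values
dk = k[1] - k[0]                               # k-step, here roughly 2.5 m^-1
```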
**Resolution:** In addition to being equally spaced, the recorded data points should also afford a certain range and thus resolution in the Fourier spectrum. On the one hand, if the data points are not dense enough, high spatial frequencies cannot be measured. On the other hand, in order to keep the measurement time reasonably small, one needs to find a balance between density and range. The density of data points should be increased as far as the measurement equipment allows and then the range increased until individual peaks are resolved. However, one should be careful not to extend the measurement range to angular wavenumbers near the bandgap or other strong nonlinearities. The existence of nonlinearities could also be a reason to limit the input laser power for the transmission measurement. There is a tradeoff between precision and
Figure 1: Fourier spectrum of a simulated transmission spectrum (inset) for two modes with group indices \(n_{\text{red}}=3.5\) and \(n_{\text{orange}}=2.5\) and loss coefficients \(\alpha_{\text{red}}=1.30\,\text{mm}^{-1}\) and \(\alpha_{\text{orange}}=0.08\,\text{mm}^{-1}\). The peaks belonging to one mode are separated by the optical cavity length which corresponds to a physical cavity length of \(1.5\,\text{mm}\).
accuracy in the \(d\)-domain. While a larger measurement range improves the precision of the Fourier transform, it also means that losses across a larger range of wavenumbers contribute to the loss coefficient. This needs to be evaluated carefully for each case.
### Influence of data preparation
**Windowing:** Once the data is recorded, it is advisable to multiply it with a window function. The transmission data from complicated, multi-modal devices, especially, is unlikely to match periodic boundary conditions. The FFT algorithm interprets the start and endpoint of the dataset as if they were neighboring data points, which results in discontinuities for non-periodic datasets. Not using any windowing is equivalent to applying a rectangular window, the Fourier transform of which approximates a sinc function. This sinc superimposes with the signal in the Fourier spectrum distorting peak heights and therefore the magnitudes of modes. To conserve the "true" Fourier spectral amplitudes, we suggest using a window function with a wide main lobe. Options are the Tukey window with a small shape parameter or the Flattop window [14, 15]. Figure 3 shows the comparison of a simulated dataset not treated with any window and that multiplied with a Tukey window (with a shape parameter of \(\sim 0.25\)) before the Fourier transform. One can see how the underlying sinc modifies the peak heights depending on where the side lobes sit relative to the peaks. If, however, spectral resolution is important, for instance in order to discern two different modes, it is best to choose a window with a narrow main lobe, such as the popular Hann window [15].
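A minimal sketch of the windowing step, using a synthetic single-mode fringe pattern in place of measured data (the group index, cavity length and modulation depth are made up):

```python
import numpy as np
from scipy.signal import windows

# synthetic fringe pattern on an equally spaced k grid, for illustration only
k = np.linspace(4.05e6, 4.14e6, 32001)          # 1/m
n_g, L = 3.5, 1.5e-3
transmission = 1.0 + 0.3 * np.cos(2 * n_g * L * k)

# Tukey window with a small shape parameter: wide main lobe, preserves the
# peak amplitudes; swap in windows.hann(len(transmission)) if spectral
# resolution matters more than amplitude accuracy
win = windows.tukey(len(transmission), alpha=0.25)
windowed = transmission * win
dk = k[1] - k[0]
```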
**Zero Padding:** One of the parameters of the Python discrete FFT routine numpy.fft.rfft is the number of input points the function will use. Here, one can set a number larger than the input data set to add zero padding. It is important to consider this when using the Fourier transformed Fabry-Perot spectrum as it influences peak heights and positions. Zero padding is usually added to improve the numerical efficiency of the FFT algorithm or \(d\)-domain resolution. A high number of added zeros may be useful for some applications where the \(d\)-axis position of the peaks needs to be known as precisely as possible, for instance in order to deduce group indices. This aspect is not as important when trying to determine loss coefficients for which only amplitude accuracy is relevant, as shown in Equation 1. Instead of their amplitude, one could also use the peak areas for
Figure 2: Fourier spectrum of simulated transmission spectrum (inset). Measuring the transmission at equally spaced wavelength steps leads to broadened peaks, while equally spaced \(k\)-values result in well defined peaks, as seen in Figure 3 (bottom).
the loss calculation. However, this becomes impossible if the multimodedness of the structure causes overlapping peaks.
When the FFT is performed, the algorithm divides the \(d\)-axis into bins over which the signal is distributed. The size and position of a bin depend on the number of data points used by the algorithm. A maximum peak height is a result of the bin's spatial frequency coinciding with the exact frequency solution of the Fourier transform. When a peak is separated into multiple bins, its spectral amplitude decreases accordingly. This effect is visible even for a very low number of zeros added to a large data set with tens of thousands of data points, as illustrated in Figure 4. We add zeros to the data set and, for each zero padding length, perform the FFT to find the peak heights. One can see that the peak heights oscillate as a function of the padding length. That means the Fourier spectra for these different zero paddings would yield vastly different loss coefficients. Hence, choosing the appropriate zero padding length for each individual peak is crucial for the calculation of the loss coefficient.

Figure 3: Fourier spectra of a raw dataset (top) and of the same dataset multiplied with a Tukey window (bottom). Using a window removes the sinc that comes from Fourier transforming rectangular data. In both cases the data was augmented using a 200 entries long zero padding.
The most rigorous way to determine the ideal zero padding would be to add infinitely many zeros to the measured data to resolve all frequencies and obtain the true height of each peak. As this is numerically impossible, the next best thing is to add enough zeros for the peak heights to converge. The values they converge to are their respective maxima. However, the point at which they converge depends on the number and density of the data points measured and would have to be determined anew for each data set. Therefore, to simplify the procedure, we scan through up to 150 added zeros and choose the smallest zero padding that maximizes the peak height for each of the peaks involved. The heights of the different peaks do not necessarily reach a maximum at the same zero padding, as shown in Figure 6. We then extract each peak height from its individual spectrum to perform further analyses and calculations.
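A sketch of this zero-padding scan is shown below; the prominence threshold used to pick out the peaks is a guess and would need to be adapted to the data. With the equally spaced \(k\) grid from above, `np.fft.rfftfreq(n, d=dk)` should give the corresponding optical-path-length axis.

```python
import numpy as np
from scipy.signal import find_peaks

def peak_heights_vs_padding(windowed, n_peaks=5, max_pad=150):
    """For each zero-padding length 0..max_pad, Fourier transform the
    windowed data and record the heights of the n_peaks tallest maxima,
    ordered along the d-axis; row z of the result belongs to padding z."""
    heights = np.full((max_pad + 1, n_peaks), np.nan)
    for z in range(max_pad + 1):
        n = len(windowed) + z
        spec = np.abs(np.fft.rfft(windowed, n=n))
        idx, _ = find_peaks(spec, prominence=spec.max() * 1e-3)  # guessed threshold
        tallest = idx[np.argsort(spec[idx])[::-1][:n_peaks]]
        tallest.sort()
        heights[z, :len(tallest)] = spec[tallest]
    return heights

# smallest zero padding that maximizes each individual peak:
# heights = peak_heights_vs_padding(windowed)
# best_pad = np.nanargmax(heights, axis=0)
```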
### Analysis
**Peak Shapes and Modes:** When analyzing a Fourier spectrum, there are a couple of things to look out for. The peak shape can vary from a clear single peak coming from a single mode to a single peak made up of contributions from multiple modes with similar group index or to a bundle of peaks corresponding to modes with different group indices. If the peak is made up of a single mode, the ratios between subsequent peak heights will remain constant assuming a constant loss coefficient across the measured range. It is then straightforward to use this ratio for the calculation of the loss coefficient. In this case, one can also consider using the peak area instead of amplitude, as done in Ref. [7]. When two modes with very similar group index propagate in the cavity, they might not be resolved and show up as signal in the same peak. However, they might have a different loss coefficient which would then manifest as a change in peak height ratio between subsequent peaks. This multimodedness can also be seen when plotting peak heights as a function of zero padding length; the height maxima of the first peak do not coincide with those of the later peaks. To see this, compare Figures 4 and 6. When there are multiple modes with distinct group indices, they will show up as separate peaks in the Fourier spectrum. Each peak will have higher harmonics at multiples of their individual optical resonator length. While the
Figure 4: The spectral amplitudes of the first five peaks (different colors) are plotted as a function of zero padding length used in the Fourier transform. The peak height maxima coincide with those of the first peak because the simulation of the transmission spectrum was done for a single mode (black dashed line as a guide to the eye). These maxima are what the peak heights converge to for large zero paddings.
peak bundle might change appearance, the constituent modes are easily identifiable, as is the case in Figure 1.
**Peak Height Uncertainty:** Once the contributions of the modes in the spectrum have been identified, the peak heights (or areas if applicable) can be determined from the Fourier spectrum. Depending on what the final goal of the analysis is, it may be important to consider the uncertainties of the peak heights. In Ref. [16], Eichstaedt et al. present an open-source Python software tool which treats the propagation of uncertainties in the (inverse) discrete Fourier transform, also taking into account windowing and zero padding. Another way of arriving at an estimate of the peak height uncertainty is to calculate the peak heights in the Fourier spectra created with one more or one fewer zero pad than the ideal one. We then calculate the difference to the peak heights obtained with the ideal zero padding. Whichever difference is smaller can then be used as the uncertainty.
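The second of these two estimates can be read off the zero-padding scan above; a minimal sketch:

```python
import numpy as np

def pad_uncertainty(height_vs_pad):
    """Peak-height uncertainty from one zero pad more or fewer than the
    ideal one; height_vs_pad is one peak's height as a function of the
    zero-padding length (e.g. a column of the scan above)."""
    best = int(np.nanargmax(height_vs_pad))
    h0 = height_vs_pad[best]
    candidates = []
    if best > 0:
        candidates.append(abs(h0 - height_vs_pad[best - 1]))
    if best + 1 < len(height_vs_pad):
        candidates.append(abs(h0 - height_vs_pad[best + 1]))
    return min(candidates)
```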
**Peak Height Ratios:** Finally, in order to determine the loss coefficient \(\alpha\), we need to calculate the ratio of subsequent peak heights. For a sample that acts as a single mode cavity with constant loss across the measurement range, the peak height ratios are constant. Depending on the quality factor of the cavity, one can identify just a handful or many peaks in the Fourier spectrum. As we move along the \(d\)-axis towards larger optical lengths, the peaks decrease in height and correspond to lower signal intensity. They should therefore be weighted accordingly when determining the overall \(\tilde{R}\). If the sample is more complex, it might make sense to choose an individual weighting or even to only use two peaks and their peak height ratio for the calculations. One such case is explained in detail in the following section.
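One possible implementation of such a weighted ratio is sketched below; weighting each ratio by the height of the later peak is only one choice among several and is not prescribed here.

```python
import numpy as np

def weighted_peak_ratio(peak_heights):
    """Weighted average of the ratios of subsequent Fourier-peak heights.
    Later peaks carry less signal, so each ratio h[i+1]/h[i] is weighted
    by h[i+1]; other weightings are equally legitimate."""
    h = np.asarray(peak_heights, dtype=float)
    ratios = h[1:] / h[:-1]
    return np.average(ratios, weights=h[1:])
```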
## 4 Example: Analysis of a Bragg-reflection waveguide
To demonstrate the technique of Fourier transforming a transmission spectrum, we apply it to a Bragg-reflection waveguide (BRW). Our BRWs are semiconductor devices made of AlGaAs that are used for the production of entangled photon pairs in the telecom C-band via parametric down-conversion from the NIR. In order to be able to guide and phase-match the modes involved in the down-conversion process, the waveguide is made up of layers with different aluminum concentration. A more detailed description of BRWs can be found in Refs. [11, 17, 18]. The complicated layer stack and large cross-section mean that the waveguide is multimodal. Additionally, fabrication is challenging and imperfections reflect strongly in the loss coefficient [19].
We perform the Fourier analysis of the transmission spectrum for TE-polarized light in the telecom wavelength range. To this end, we scan the wavelength of our tunable telecom laser (Santec TSL-710) in steps that correspond to wavenumber steps of around \(2.49\,\mathrm{m}^{-1}\) in the range from \(1520-1550\,\mathrm{nm}\). We apply a Tukey window with shape parameter \(0.25\) to the data and Fourier transform for different zero padding lengths. Figure 5 shows the Fourier spectrum for one specific zero padding. It features single peaks separated by the optical length of the cavity pointing to a single-mode operation or multi-mode operation where the modes have very similar group indices. A plot of the peak heights as a function of zero padding length is shown in Figure 6.
We determine that a zero padding of \(z_{1}=29\), \(z_{2}=45\), \(z_{3}=40\), \(z_{4}=49\), and \(z_{5}=2\) maximizes the peak height of the first, second, third, fourth, and fifth peak, respectively, and deduce from this the peak heights \(h_{1}=4.7664(1)\cdot 10^{7}\), \(h_{2}=9.2927(3)\cdot 10^{6}\), \(h_{3}=2.0140(4)\cdot 10^{6}\), \(h_{4}=4.61784(6)\cdot 10^{5}\), and \(h_{5}=1.4013(6)\cdot 10^{5}\). Using the peak height ratios \(\tilde{R}_{12}=h_{2}/h_{1}\), \(\tilde{R}_{23}=h_{3}/h_{2}\), \(\tilde{R}_{34}=h_{4}/h_{3}\), \(\tilde{R}_{45}=h_{5}/h_{4}\), the physical cavity length of \(L=1.80(5)\,\mathrm{mm}\), and the facet reflectivity of \(R=0.35(4)\)[10], we calculate the loss coefficients \(\alpha\) using Equation 1. The main contributions to the final uncertainty of the loss coefficient come from the systematic uncertainty of the cavity physical length and the facet reflectivity. The length of the BRW was measured using an optical microscope and calibration slide. The facet reflectivity was
determined in an experiment involving multiple BRW samples of different lengths [10]. These uncertainties dominate over the statistical errors of the peak height ratios, which are smaller than \(1.3\cdot 10^{-4}\). We calculate \(\alpha\) separately for every neighboring peak pair to be able to analyze the multimodedness of the waveguide sample. Looking at Figure 7, one can see that it decreases from \(\alpha_{12}=0.33\,(6)\,\) mm\({}^{-1}\) between peaks one and two to \(\alpha_{45}=0.08\,(6)\,\) mm\({}^{-1}\) between peaks four and five. A non-constant loss coefficient means that at least two modes with very similar group indices propagate in the cavity. The high loss coefficient of one mode dominates at the beginning, but its contribution diminishes for later peaks as it loses power. We suspect that the field intensity of this mode is higher near the sidewall of the waveguide ridge. It therefore experiences increased scattering from the sidewall imperfections. Another mode, presumably closer to the core of the waveguide, benefits from lower scattering rates, survives more passes through the waveguide, and therefore features a lower loss coefficient. The loss coefficient \(\alpha_{45}\) calculated from the latest peak ratio \(\tilde{R}_{45}\) therefore approximates the loss coefficient of that less lossy mode. We are thus able to analyze the loss characteristics of a multimodal waveguide structure, even if the group indices of the propagating modes are similar.

Figure 5: The Fourier transform of the telecom transmission spectrum through a BRW features individual peaks spaced at the optical cavity length. Adapted from [19].

Figure 6: The spectral amplitudes of the first five peaks are plotted as a function of zero padding length used in the Fourier transform. The peak height maxima do not coincide for the BRW. This points to multiple modes propagating in the cavity.
## 5 Conclusion and Outlook
We showed how to analyze the Fabry-Perot fringes in the transmission spectrum of multimodal or imperfect waveguide cavities using the Fourier transform. The Fourier spectrum contains information about the loss characteristics and group refractive indices of propagating modes that can be extracted by following our practical guide. We achieve satisfactory resolution by performing the transmission measurement at steps corresponding to densely and equally distributed wavenumbers \(k\). To prepare the data for the Fourier transform, we suggest a window with a wide main lobe like the Tukey window with small shape parameter. For the peaks in the Fourier spectrum to reach their true heights, we find the ideal zero padding for each of the peaks individually. Finally, after applying Python's NumPy FFT algorithm, we interpret the Fourier spectrum. We find that the BRW analyzed is multimodal with two modes having very similar group refractive indices. From the peak height ratios we can calculate the optical loss, which decreases from \(\alpha_{12}=0.33\,(6)\,\) mm\({}^{-1}\) to \(\alpha_{45}=0.08\,(6)\,\) mm\({}^{-1}\). This means that the higher-loss mode dominates at few passes through the waveguide cavity while the lower-loss mode survives longer. Detailed knowledge about the loss characteristics and modal landscape of the BRW helps us understand design flaws and fabrication imperfections. The Fourier transform of the transmission spectrum can therefore be a useful tool for designers of any optics or photonics component. Especially for integrated devices, this practical guide helps investigate internal losses separately from surrounding optical components.
FundingThe authors acknowledge funding by the Uniqorn project (Horizon 2020 grant agreement no. 820474) and the BeyondC project (FWF project no. F7114).
AcknowledgmentsThe authors thank Lukas Einkemmer for fruitful discussions and constructive criticism.
Author contributionsConceptualization, H.T., S.F., G.W.; Formal analysis, H.T., S.F.; Methodology, H.T., S.F.; Investigation, H.T., A.S., B.N.; Software, H.T., A.S., S.F.; Supervision, S.F., G.W.; Writing - original draft, H.T.; Writing - review & editing, All Authors; Funding acquisition, G.W.
DisclosuresThe authors declare no conflicts of interest.
Figure 7: Loss coefficients \(\alpha\) calculated from the ratios of neighboring peaks in the Fourier spectrum of a BRW. Adapted from [19].
Data Availability Statement.The data that support the findings of this study are openly available at the following DOI: 10.5281/zenodo.7966624
|
2308.09394 | An Eigenvalue-Free Implementation of the Log-Conformation Formulation | The log-conformation formulation, although highly successful, was from the
beginning formulated as a partial differential equation that contains an, for
PDEs unusual, eigenvalue decomposition of the unknown field. To this day, most
numerical implementations have been based on this or a similar eigenvalue
decomposition, with Knechtges et al. (2014) being the only notable exception
for two-dimensional flows.
In this paper, we present an eigenvalue-free algorithm to compute the
constitutive equation of the log-conformation formulation that works for two-
and three-dimensional flows. Therefore, we first prove that the challenging
terms in the constitutive equations are representable as a matrix function of a
slightly modified matrix of the log-conformation field. We give a proof of
equivalence of this term to the more common log-conformation formulations.
Based on this formulation, we develop an eigenvalue-free algorithm to evaluate
this matrix function. The resulting full formulation is first discretized using
a finite volume method, and then tested on the confined cylinder and
sedimenting sphere benchmarks. | Florian Becker, Katharina Rauthmann, Lutz Pauli, Philipp Knechtges | 2023-08-18T08:51:31Z | http://arxiv.org/abs/2308.09394v2 | # An Eigenvalue-Free Implementation of the Log-Conformation Formulation
###### Abstract
The log-conformation formulation, although highly successful, was from the beginning formulated as a partial differential equation that contains an, for PDEs unusual, eigenvalue decomposition of the unknown field. To this day, most numerical implementations have been based on this or a similar eigenvalue decomposition, with Knechtges et al. (2014) being the only notable exception for two-dimensional flows.
In this paper, we present an eigenvalue-free algorithm to compute the constitutive equation of the log-conformation formulation that works for two- and three-dimensional flows. Therefore, we first prove that the challenging terms in the constitutive equations are representable as a matrix function of a slightly modified matrix of the log-conformation field. We give a proof of equivalence of this term to the more common log-conformation formulations. Based on this formulation, we develop an eigenvalue-free algorithm to evaluate this matrix function. The resulting full formulation is first discretized using a finite volume method, and then tested on the confined cylinder and sedimenting sphere benchmarks.
keywords: Log-conformation, Oldroyd-B model, Giesekus model, Finite Volume Method
## 1 Introduction
Since its inception [1], the log-conformation formulation undoubtedly has been a huge success. It had a considerable impact on attacking the High Weissenberg Number Problem (HWNP) that had riddled simulation results the decades before.
The general idea of the log-conformation formulation is simple: The conformation tensor \(\mathbf{C}(x,t)\in\mathbb{R}^{d\times d}\), which, for a given instant of space \(x\in\mathbb{R}^{d}\) and time \(t\), essentially encodes a macroscopically averaged covariance of the microscopic configuration, is replaced by its matrix logarithm \(\mathbf{\Psi}\) such that the conformation tensor can be recovered by the matrix exponential \(\mathbf{C}=\exp\mathbf{\Psi}\). The initial motivation was to better resolve exponential stress profiles. However, another important fact is that the matrix exponential function ensures that \(\mathbf{C}\) stays a symmetric positive definite matrix; a property all non-degenerate covariance matrices share. In fact, it was already known before [2] that a substantial class of macroscopic models respect this microscopic property also in the macroscopic equations, and the divergence of numerical simulations quite often coincided with the loss of this property.
This introduction, so far, suggests that the log-conformation formulation is a rather technical trick to enforce positivity, but in order to shed more light on the failure mechanism of numerical simulations, we want to also highlight the fact that \(\mathbf{\Psi}\) naturally appears in the free energy density. E.g., in the Oldroyd-B model or Giesekus model with polymeric viscosity \(\mu_{p}\) and relaxation time \(\lambda\), it has been known for quite some time [3; 4; 5; 6], that the free energy density of the polymeric part \(\mathcal{F}_{P}\) is given by \(\mathcal{F}_{P}=\mu_{P}/(2\lambda)\left(\operatorname{tr}(\mathbf{C})-\log \det\mathbf{C}-d\right)\). Acknowledging that \(\log\det\mathbf{C}=\operatorname{tr}\log\mathbf{C}\) this can be rewritten in \(\mathbf{\Psi}\)
\[\mathcal{F}_{P}=\frac{\mu_{P}}{2\lambda}\operatorname{tr}\left(e^{\mathbf{\Psi}}- \mathbf{\Psi}-\mathbf{1}\right). \tag{1}\]
The implications of this statement are quite remarkable, since the second law of thermodynamics states that the free energy in total and in absence of external forces has to be non-increasing, which thus
puts severe bounds on \(\mathbf{\Psi}\). At best, any reasonable numerical simulation should respect this dissipative nature of the free energy, and in the light of this insight it does not seem too unexpected that it is of course easier to construct such a dissipative scheme in \(\mathbf{\Psi}\) than in \(\mathbf{\mathrm{C}}\).
However, even potentially violating this physical principle does not directly explain the failure of numerical simulations. That the free energy relates to the stability of the numerical schemes is mostly an indication from the known mathematical existence results in the discretized setting [7; 8; 9; 10]. They all use the free energy to prove existence, and it is thus not unreasonable to conclude that the free-energy-dissipative nature of a numerical scheme and the existence of a numerical solution essentially appear as two sides of the same medal. It is this insight that brings us to the conclusion that the log-conformation formulation has, as far as the fully nonlinear numerical schemes are concerned, solved the HWNP.
Nonetheless, all these advantages have a drawback: the resulting constitutive equation as formulated in \(\mathbf{\Psi}\) becomes much more complex. Beginning from the first log-conformation formulation, almost all new constitutive equations in \(\mathbf{\Psi}\) made use of an eigenvalue decomposition of \(\mathbf{\Psi}\). The latter is highly unusual for a partial differential equation in the sense that the new equation contains an eigenvalue decomposition of the unknown degrees of freedom. Two notable exceptions to this were [11], which introduced an eigenvalue-free formulation in two-dimensions, and [12], which substituted the eigenvalue-based terms by a Cauchy-type integral in the three-dimensional setting. However, [12] still relied on eigenvalues for the actual numerical computation, since Cauchy integrals are known to be prone to numerical cancellation issues. With this paper we will bridge the gap, and provide an eigenvalue-free implementation also for the three-dimensional setting.
In order to derive this new algorithm, we will use a formulation of the constitutive equation that was introduced in [10]. Since we do not want to derive a constitutive equation from first principles, as it was done in [10], and for the sake of brevity, we rather make the connection to the more popular log-conformation formulations in Section 2. There it will be shown that all these log-conformation formulations are equal in perfect arithmetic.
Given this new formulation, we will, in Section 3, derive an algorithm that allows for the eigenvalue-free numerical evaluation of this term. This algorithm is in principle not bound to a particular discretization scheme, and thus suitable for either finite element or finite volume discretizations.
In Section 4, we then briefly introduce the finite volume aspects of the numerical scheme we chose to conduct our experiments in. Our implementation is based on the RheoTool software [13], and since our reformulation is independent of the actual discretization of differential operators, we keep the changes minimal. Therefore, we will also not discuss matters of stable discretization of the incompressible Navier-Stokes equations using the finite volume method, and rather refer to [14]. Furthermore, we also want to point the interested reader to the review paper [15] and the references therein for a broader picture on the simulation of viscoelastic fluid flows.
In Section 5, we present two benchmarks: the confined cylinder and the sedimenting sphere. Both benchmarks consider fluid flow around an obstacle, a cylinder and a sphere, respectively. Furthermore, drag coefficient values are computed and compared to results from selected publications.
Lastly, we also want, for the sake of completeness, to mention that other schemes than the log-conformation formulation have been proposed and successfully employed to enforce the positive-definiteness of \(\mathbf{\mathrm{C}}\). Most notable are the square-root-based approach in [16] or the Cholesky-type decomposition in [17], as well as the more recently introduced contravariant deformation tensor approach [18].
## 2 Theory of Log-Conformation Formulations
Over the course of the years there have been many different log-conformation formulations, which in perfect arithmetic all yield the same result. Starting point is a constitutive equation of the symmetric conformation tensor \(\mathbf{\mathrm{C}}\)
\[\partial_{t}\mathbf{\mathrm{C}}+(\mathbf{\mathrm{u}}\cdot\nabla)\mathbf{\mathrm{C}}- \nabla\mathbf{\mathrm{u}}\,\mathbf{\mathrm{C}}-\mathbf{\mathrm{C}}\,\nabla\mathbf{\mathrm{u}}^ {\mathrm{T}}=-P(\mathbf{\mathrm{C}})\,. \tag{2}\]
Here we have chosen the convention that \([\nabla\mathbf{\mathrm{u}}]_{ij}=\partial_{i}u_{j}\) is the Jacobian of the velocity field \(\mathbf{\mathrm{u}}\), such that the left-hand side of the equation corresponds to the upper-convected derivative of the conformation tensor. \(P\) is in full generality a function of \(\mathbf{\mathrm{C}}\) that maps \(\mathbf{\mathrm{C}}\) to another symmetric matrix that commutes with \(\mathbf{\mathrm{C}}\), i.e., \(P(\mathbf{\mathrm{C}})\,\mathbf{\mathrm{C}}=\mathbf{\mathrm{C}}\,P(\mathbf{\mathrm{C}})\). Common choices, that are relevant for later sections of this paper,
are the Oldroyd-B model \(P(\mathbf{C})=\frac{1}{\lambda}\left(\mathbf{C}-\mathbf{1}\right)\) with a relaxation time \(\lambda\), as well as the Giesekus model \(P(\mathbf{C})=\frac{1}{\lambda}\left(\mathbf{1}+\alpha\left(\mathbf{C}- \mathbf{1}\right)\right)\left(\mathbf{C}-\mathbf{1}\right)\) with an additional mobility parameter \(\alpha\).
The log-conformation formulation now replaces \(\mathbf{C}\) by an auxiliary symmetric tensor \(\boldsymbol{\Psi}\) such that the two relate via the matrix exponential function \(\mathbf{C}=\exp(\boldsymbol{\Psi})\). As stated in the introduction, the replacement has the advantage that \(\mathbf{C}\) stays positive definite. However, this necessitates a new constitutive equation for \(\boldsymbol{\Psi}\) that replaces Eq. (2). In the formulation that will be used throughout this paper, this equation is stated as
\[\begin{split} 0=\partial_{t}\boldsymbol{\Psi}+(\mathbf{u}\cdot\nabla)\boldsymbol{\Psi}+\boldsymbol{\Psi}\omega(\mathbf{u})-\omega(\mathbf{u})\boldsymbol{\Psi}\\ -2\,f(\mathrm{ad}\,\boldsymbol{\Psi})\,\epsilon(\mathbf{u})+P(e^{\boldsymbol{\Psi}})\,e^{-\boldsymbol{\Psi}}\,,\end{split} \tag{3}\]
where \(\omega(\mathbf{u})\coloneqq(\nabla\mathbf{u}-\nabla\mathbf{u}^{T})/2\) is the vorticity tensor and \(\epsilon(\mathbf{u})\coloneqq(\nabla\mathbf{u}+\nabla\mathbf{u}^{T})/2\) is the strain tensor. The most important part, however, is \(f(\mathrm{ad}\,\boldsymbol{\Psi})\,\epsilon(\mathbf{u})\), for which different formulations and numerical algorithms exist. Note that this term distinguishes the different log-conformation formulations, which in perfect arithmetic all yield the same numerical results.
The formulation chosen here is in full generality proven to be equal to the original conformation equation (2) in [10, Theorem A.42]. We will refrain here from an exposition that shows this equivalence from first principles and in full generality. Instead, we explain our formulation first by defining \(f(\mathrm{ad}\,\boldsymbol{\Psi})\,\epsilon(\mathbf{u})\) properly, and then show the equivalence of the different log-conformation formulation to this formulation in a second step. Those already familiar with one of the other log-conformation formulations should thus more easily grasp the formulation in Eq. (3).
For the definition, we first introduce some terminology: Given two square matrices \(\mathbf{A},\mathbf{B}\in\mathbb{R}^{d\times d}\), we define the commutator \([\mathbf{A},\mathbf{B}]=\mathbf{A}\mathbf{B}-\mathbf{B}\mathbf{A}\). Then, the adjoint operator \(\mathrm{ad}\,\mathbf{A}:\mathbb{R}^{d\times d}\rightarrow\mathbb{R}^{d\times d}\) is defined as the linear operator that maps any matrix \(\mathbf{B}\) to \([\mathbf{A},\mathbf{B}]\), i.e.,
\[\mathrm{ad}\,\mathbf{A}\left(\mathbf{B}\right)\coloneqq\left[\mathbf{A}, \mathbf{B}\right].\]
The important point to note here, which will become crucial for our algorithm, is that \(\mathrm{ad}\,\mathbf{A}\) is a linear operator, i.e., a homomorphism from a vector space \(\mathbb{R}^{d\times d}\) to the same vector space \(\mathbb{R}^{d\times d}\). As such, it is in linear algebra terms representable as a matrix: There exists a matrix \(\mathbf{M}\) in \(\mathbb{R}^{d^{2}\times d^{2}}\) such that
\[\mathrm{ad}\,\mathbf{A}\left(\mathbf{B}\right)=\mathbf{M}\,\mathbf{\tilde{b}}\,,\]
where \(\mathbf{\tilde{b}}\) is just a reshaping of the matrix \(\mathbf{B}\) to a vector in \(\mathbb{R}^{d^{2}}\), and the product between \(\mathbf{M}\) and \(\mathbf{\tilde{b}}\) is the usual matrix vector product. To make this more explicit and less abstract, e.g., in the \(d=2\) case we could write \(\mathbf{D}=\mathrm{ad}\,\mathbf{A}\left(\mathbf{B}\right)=\mathbf{A}\mathbf{B }-\mathbf{B}\mathbf{A}\) as
\[\begin{pmatrix}D_{11}\\ D_{12}\\ D_{21}\\ D_{22}\end{pmatrix}=\begin{pmatrix}0&-A_{21}&A_{12}&0\\ -A_{12}&A_{11}-A_{22}&0&A_{12}\\ A_{21}&0&A_{22}-A_{11}&-A_{21}\\ 0&A_{21}&-A_{12}&0\end{pmatrix}\begin{pmatrix}B_{11}\\ B_{12}\\ B_{21}\\ B_{22}\end{pmatrix}.\]
For the sake of brevity, and since it will not be used for the actual algorithm, we skip the related formula for \(d=3\). For the rest of the paper \(d\) will be fixed to \(d=3\).
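For illustration, the matrix \(\mathbf{M}\) can be assembled with Kronecker products, assuming the row-major (row-by-row) reshaping that matches the \((B_{11},B_{12},B_{21},B_{22})\) ordering of the \(d=2\) example above; this is only one convenient way to build the representation and not necessarily how an actual solver would do it.

```python
import numpy as np

def ad_matrix(A):
    """Matrix representation M of ad A acting on row-major-flattened
    d x d matrices, i.e. M @ B.flatten() == (A @ B - B @ A).flatten()."""
    d = A.shape[0]
    I = np.eye(d)
    return np.kron(A, I) - np.kron(I, A.T)

# consistency check for d = 3 with a random symmetric Psi and strain tensor
rng = np.random.default_rng(0)
Psi = rng.standard_normal((3, 3))
Psi = 0.5 * (Psi + Psi.T)
eps = rng.standard_normal((3, 3))
eps = 0.5 * (eps + eps.T)

M = ad_matrix(Psi)                               # 9 x 9 matrix
lhs = (M @ eps.flatten()).reshape(3, 3)
rhs = Psi @ eps - eps @ Psi
assert np.allclose(lhs, rhs)
```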
Hence, for a given instant of space \(x\) and time \(t\), the operation \(\mathrm{ad}\,\boldsymbol{\Psi}\left(\epsilon(\mathbf{u})\right)\) can be thought of as a matrix-vector multiplication of a matrix in \(\mathbb{R}^{9\times 9}\) with a vector in \(\mathbb{R}^{9}\) for the three-dimensional case. In the following, as is customary for linear operators and especially matrix-vector multiplications, we will omit the parentheses around the argument and just write \(\mathrm{ad}\,\boldsymbol{\Psi}\,\epsilon(\mathbf{u})\).
Lastly, we define \(f(\mathrm{ad}\,\boldsymbol{\Psi})\) as the application of the function
\[f(x)=\frac{x/2}{\tanh(x/2)} \tag{4}\]
to the \(9\times 9\)-dimensional matrix that represents \(\mathrm{ad}\,\boldsymbol{\Psi}\).
To summarize: For each instant of space and time, we think of \(f(\mathrm{ad}\,\boldsymbol{\Psi})\,\epsilon(\mathbf{u})\) as the function \(f\) applied to a \(9\times 9\)-matrix representation of \(\mathrm{ad}\,\boldsymbol{\Psi}\), and the result being multiplied with a \(9\)-vector representation of \(\epsilon(\mathbf{u})\).
This is already, modulo several optimizations for symmetric matrices, the gist of the Algorithm 1 in the following section: We will evaluate this function \(f\) of a matrix that represents \(\mathrm{ad}\,\boldsymbol{\Psi}\) without the need to do an eigenvalue decomposition of \(\boldsymbol{\Psi}\).
This brings us to the second part of this section: The question how previous log-conformation formulations have evaluated this term.
A straightforward way is using the Taylor expansion of \(f\), which is given by
\[f(x)=\sum_{n=0}^{\infty}\frac{B_{2n}}{(2n)!}x^{2n}\,, \tag{5}\]
where \(B_{2n}\) are the even Bernoulli numbers. Substituting \(x\) by \(\operatorname{ad}\boldsymbol{\Psi}\) yields
\[\begin{split} f(\operatorname{ad}\boldsymbol{\Psi})\,\epsilon(\mathbf{u})=&\sum_{n=0}^{\infty}\frac{B_{2n}}{(2n)!}\operatorname{ad}^{2n}\boldsymbol{\Psi}\,\epsilon(\mathbf{u})\\ =&\sum_{n=0}^{\infty}\frac{B_{2n}}{(2n)!}\underbrace{[\boldsymbol{\Psi},[\boldsymbol{\Psi},[\ldots,[\boldsymbol{\Psi},\epsilon(\mathbf{u})]\ldots]]]}_{2n\text{ commutators}}.\end{split} \tag{6}\]
This formulation was first proven in [11, Theorem 1]. However, as it was noted in [11], this formulation alone is for practical numerical simulations not directly usable, since \(f(x)\) has singularities at \(\pm 2\pi i\), which limits the convergence radius of the Taylor expansion.
To make a connection with the eigenvalue-based formulations, we introduce the eigenvalue decomposition of \(\boldsymbol{\Psi}\)
\[\boldsymbol{\Psi}=\mathbf{O}\begin{pmatrix}\lambda_{1}&&\\ &\lambda_{2}&\\ &&\lambda_{3}\end{pmatrix}\mathbf{O}^{T}\,, \tag{7}\]
with \(\mathbf{O}=(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3})\) being an orthogonal matrix. \(\lambda_{i}\) are the eigenvalues and \(\mathbf{e}_{i}\) the corresponding eigenvectors. For the following, it is also customary to introduce the projection operators \(\mathbf{P}_{i}=\mathbf{e}_{i}\mathbf{e}_{i}^{\top}\), which allows us to state the decomposition in the form \(\boldsymbol{\Psi}=\sum_{i}\lambda_{i}\mathbf{P}_{i}\). Furthermore, the fact \(\mathbf{O}\mathbf{O}^{T}=\mathbf{1}\) yields \(\mathbf{1}=\sum_{i}\mathbf{P}_{i}\).
In combination, we can thus state
\[\begin{split}\operatorname{ad}\boldsymbol{\Psi}\,\epsilon( \mathbf{u})&=\boldsymbol{\Psi}\epsilon(\mathbf{u})-\epsilon( \mathbf{u})\boldsymbol{\Psi}\\ &=\sum_{i,j}(\lambda_{i}-\lambda_{j})\mathbf{P}_{i}\epsilon( \mathbf{u})\mathbf{P}_{j}\,.\end{split}\]
Furthermore, it is not difficult to see by algebraic manipulations that this can be generalized to any polynomial \(p\)
\[p(\operatorname{ad}\boldsymbol{\Psi})\,\epsilon(\mathbf{u})=\sum_{i,j}p( \lambda_{i}-\lambda_{j})\mathbf{P}_{i}\epsilon(\mathbf{u})\mathbf{P}_{j}\,.\]
It is now mostly an application of the Stone-Weierstrass theorem that this not only holds for polynomials, but also for the continuous function \(f\)
\[f(\operatorname{ad}\boldsymbol{\Psi})\,\epsilon(\mathbf{u})=\sum_{i,j}f( \lambda_{i}-\lambda_{j})\mathbf{P}_{i}\epsilon(\mathbf{u})\mathbf{P}_{j}\,. \tag{8}\]
Eq. (8) is the formulation as it was used for numerical evaluation in [12, 10, 19], and is in some sense closest to what was used in [20].
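As a reference point, Eq. (8) can be transcribed almost verbatim into a few lines of NumPy; this is the classical eigenvalue-based route and deliberately not the eigenvalue-free evaluation developed in this paper.

```python
import numpy as np

def f_scalar(x):
    """f(x) = (x/2)/tanh(x/2) with the removable singularity at x = 0."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    nz = np.abs(x) > 1e-12
    out[nz] = (x[nz] / 2.0) / np.tanh(x[nz] / 2.0)
    return out

def f_ad_eps_eigen(Psi, eps):
    """Evaluate f(ad Psi) eps via Eq. (8), using an eigendecomposition of
    the symmetric 3x3 matrix Psi."""
    lam, O = np.linalg.eigh(Psi)            # Psi = O diag(lam) O^T
    E = O.T @ eps @ O                       # eps expressed in the eigenbasis
    F = f_scalar(lam[:, None] - lam[None, :]) * E
    return O @ F @ O.T
```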
To see that the more popular eigenvalue-based formulations are just variations of this formulation, we also need to incorporate the rotational term
\[\begin{split}\boldsymbol{\Psi}\omega(\mathbf{u})&- \omega(\mathbf{u})\boldsymbol{\Psi}\\ &=\sum_{i,j}(\lambda_{i}-\lambda_{j})\frac{e^{\lambda_{i}}-e^{ \lambda_{j}}}{e^{\lambda_{i}}-e^{\lambda_{j}}}\mathbf{P}_{i}\frac{\nabla \mathbf{u}-\nabla\mathbf{u}^{T}}{2}\mathbf{P}_{j}\,.\end{split}\]
Using \(\tanh((\lambda_{i}-\lambda_{j})/2)=(e^{\lambda_{i}}-e^{\lambda_{j}})/(e^{\lambda_{i}}+e^{\lambda_{j}})\) we can combine this with Eq. (8) to get
\[\begin{split}\boldsymbol{\Psi}\omega(\mathbf{u})&-\omega(\mathbf{u})\boldsymbol{\Psi}-2\,f(\operatorname{ad}\boldsymbol{\Psi})\,\epsilon(\mathbf{u})\\ &=-\sum_{i,j}\frac{\lambda_{i}-\lambda_{j}}{e^{\lambda_{i}}-e^{\lambda_{j}}}\mathbf{P}_{i}\left(e^{\lambda_{j}}\nabla\mathbf{u}+e^{\lambda_{i}}\nabla\mathbf{u}^{T}\right)\mathbf{P}_{j}\,.\end{split} \tag{9}\]
Furthermore, note that \(\lim_{\lambda_{j}\to\lambda_{i}}\frac{\lambda_{i}-\lambda_{j}}{e^{\lambda_{i}}- e^{\lambda_{j}}}=e^{-\lambda_{i}}\), which allows us to split off the \(i=j\) part
\[\begin{split}\boldsymbol{\Psi}\omega(\mathbf{u})&-\omega(\mathbf{u})\boldsymbol{\Psi}-2\,f(\operatorname{ad}\boldsymbol{\Psi})\,\epsilon(\mathbf{u})\\ &=-2\mathbf{B}-\sum_{i\neq j}\frac{\lambda_{i}-\lambda_{j}}{e^{\lambda_{i}}-e^{\lambda_{j}}}\mathbf{P}_{i}\left(e^{\lambda_{j}}\nabla\mathbf{u}+e^{\lambda_{i}}\nabla\mathbf{u}^{T}\right)\mathbf{P}_{j}\,,\end{split} \tag{10}\]
with
\[\mathbf{B}=\sum_{i}\mathbf{P}_{i}\,\epsilon(\mathbf{u})\,\mathbf{P}_{i}=\sum_ {i}\mathbf{P}_{i}\,\nabla\mathbf{u}\,\mathbf{P}_{i}\,. \tag{11}\]
Except notation, Eq. (10) is the same formulation as given in [21, Eq. (44)].
To prove the equivalence to the most widespread log-conformation formulation, we introduce
\[\mathbf{\tilde{M}}=\begin{pmatrix}\hat{m}_{11}&\hat{m}_{12}&\hat{m}_{13} \\ \hat{m}_{21}&\hat{m}_{22}&\hat{m}_{23}\\ \hat{m}_{31}&\hat{m}_{32}&\hat{m}_{33}\end{pmatrix}\coloneqq\mathbf{O}^{T} \nabla\mathbf{u}\mathbf{O}\,.\]
We can thus express \(\mathbf{B}\) as
\[\mathbf{B}=\mathbf{O}\begin{pmatrix}\hat{m}_{11}&0&0\\ 0&\hat{m}_{22}&0\\ 0&0&\hat{m}_{33}\end{pmatrix}\mathbf{O}^{T}\,. \tag{12}\]
Moreover, considering the case of distinct eigenvalues, we introduce
\[\mathbf{\Omega}=-\sum_{i\neq j}\frac{1}{e^{\lambda_{i}}-e^{\lambda_{j}}}\mathbf{P}_{i}\left(e^{\lambda_{j}}\nabla\mathbf{u}+e^{\lambda_{i}}\nabla\mathbf{u}^{T}\right)\mathbf{P}_{j}\,. \tag{13}\]
With the projection operators \(\mathbf{P}_{i}\) being orthogonal and idempotent, i.e., \(\mathbf{P}_{i}\mathbf{P}_{j}=\delta_{ij}\mathbf{P}_{i}\) and \(\delta_{ij}\) being the Kronecker Delta, this yields the equivalent formulation
\[\begin{split}\mathbf{\Psi}\omega(\mathbf{u})-\omega(\mathbf{u})\mathbf{ \Psi}-2\,f(\mathrm{ad}\,\mathbf{\Psi})\,\epsilon(\mathbf{u})\\ =-2\mathbf{B}+\mathbf{\Psi}\mathbf{\Omega}-\mathbf{\Omega}\mathbf{\Psi}\,.\end{split} \tag{14}\]
Similarly to the formulation of \(\mathbf{B}\) we can also reformulate \(\mathbf{\Omega}\) using \(\tilde{\mathbf{M}}\) as
\[\mathbf{\Omega}= \mathbf{O}\begin{pmatrix}0&\omega_{12}&\omega_{13}\\ \omega_{21}&0&\omega_{23}\\ \omega_{31}&\omega_{32}&0\end{pmatrix}\mathbf{O}^{T}\,, \tag{15}\]
where the \(\omega_{ij}\) are given by
\[\omega_{ij}:=-\frac{e^{\lambda_{j}}\hat{m}_{ij}+e^{\lambda_{i}}\hat{m}_{ji}}{ e^{\lambda_{i}}-e^{\lambda_{j}}}\,. \tag{16}\]
This is mostly the original formulation, as it was first used by Fattal and Kupferman [1] and has been used in many numerical implementations.
For the sake of completeness, and without proof, we also mention the formulation using a Dunford-type/Cauchy-type integral
\[\begin{split}& f(\mathrm{ad}\,\mathbf{\Psi})\,\epsilon(\mathbf{u})= \frac{1}{(2\pi i)^{2}}\times\\ &\int_{\Gamma}\int_{\Gamma}f(z-z^{\prime})\,(z\mathbf{1}-\mathbf{ \Psi})^{-1}\,\epsilon(\mathbf{u})\,(z^{\prime}\mathbf{1}-\mathbf{\Psi})^{-1}\,\, dz\,dz^{\prime}\,,\end{split} \tag{17}\]
where \(\Gamma\) is a suitably chosen integration contour in the complex plane that encompasses the eigenvalues \(\lambda_{i}\), but avoids the singularities of \(f\). This formulation, which was proven in [12; 10], facilitates analytical insights into the log-conformation formulation, but is less suited for the direct numerical implementation, due to the expected cancellation effects in the Cauchy-type integral.
## 3 Eigenvalue-Free Algorithm Design
In the last section, we discussed several of the different existing formulations for the \(f(\mathrm{ad}\,\mathbf{\Psi})\,\epsilon(\mathbf{u})\) term in the logarithmic constitutive equation. We also mentioned the connection to the eigenvalue-based algorithms.
In this section, we will come to an eigenvalue-free algorithm that represents \(\mathrm{ad}\,\mathbf{\Psi}\) as a matrix on a suitably chosen vector space, which allows us to evaluate \(f(\mathrm{ad}\,\mathbf{\Psi})\) as a matrix function.
Since it is instructive for what comes, and since it is also necessary for the numerical implementation, we will first concern ourselves with the eigenvalue-free evaluation of the matrix function \(\exp(\mathbf{\Psi})\). We will use the Scaling&Squaring algorithm, which has been extensively studied. For an in-depth review article, we refer to [22].
The basic idea of the Scaling&Squaring algorithm consists of two ingredients: The first is a simple approximation of the function, e.g., a truncated Taylor series or a rational approximation. For the exponential function, the Padé approximant \(R_{m,m}\) is a common choice. Usually, such an approximation is only accurate in a small region close to some pivot point, which, for our approximation of the exponential function, is the origin of the coordinate system.
At this point, the second ingredient comes into action: a functional relation that helps to map the argument to the region where the aforementioned simple approximation is valid, and thus allows us to construct a more universal approximation. For the exponential function this relation is
\[\exp(\mathbf{\Psi})=\exp(\mathbf{\Psi}/2)^{2}\,. \tag{18}\]
Given a general \(\mathbf{\Psi}\) and iterating this functional equation, one can choose a \(j\in\mathbb{N}\) such that \(2^{-j}\|\mathbf{\Psi}\|\) is small enough for the Padé approximant to be sufficiently accurate. Then evaluating
\[\exp(\mathbf{\Psi})\approx\left(R_{m,m}(\mathbf{\Psi}/2^{j})\right)^{2^{j}} \tag{19}\]
should give a reasonable approximation even for large \(\mathbf{\Psi}\). As is apparent, we first scale the argument with \(2^{-j}\) and after the evaluation of the Pade approximant, we employ \(j\) successive squarings. Hence, the name of the algorithm: Scaling&Squaring.
For the actual implementation, we use the software library Eigen [23], which uses variations of the Scaling&Squaring algorithm, as described in [24, Algorithm 2.3] and [25, Algorithm 3.1].
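To make the two ingredients concrete, the following is a minimal, self-contained sketch of the Scaling&Squaring idea for a \(3\times 3\) matrix; it uses a truncated Taylor series in place of the Padé approximant \(R_{m,m}\), and the function name, truncation order and scaling threshold are illustrative choices rather than the tuned values of Eigen's implementation.

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <iostream>

using Eigen::Matrix3d;

// Scaling & Squaring sketch: scale Psi by 2^-j, approximate exp locally by a
// truncated Taylor series (in place of the Pade approximant R_{m,m}), then square j times.
Matrix3d expScalingSquaring(const Matrix3d& Psi) {
    int j = 0;
    double norm = Psi.norm();
    while (norm > 0.5) { norm *= 0.5; ++j; }          // illustrative threshold
    Matrix3d A = Psi / std::pow(2.0, j);

    Matrix3d result = Matrix3d::Identity();
    Matrix3d term = Matrix3d::Identity();
    for (int k = 1; k <= 12; ++k) {                   // illustrative truncation order
        term = term * A / double(k);
        result += term;
    }

    for (int s = 0; s < j; ++s) result = result * result;   // undo the scaling, Eq. (18)
    return result;
}

int main() {
    Matrix3d Psi;
    Psi << 1.0, 0.3, 0.1,
           0.3, -0.5, 0.2,
           0.1, 0.2, 2.0;
    std::cout << expScalingSquaring(Psi) << std::endl;
}
```

The same structure, a scaling step, a local approximation and an inverse-scaling step, reappears for \(h\) in Algorithm 2 below.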
In principle, as already noted in the previous section, we want to employ a similar algorithm for \(f(\mathrm{ad}\,\mathbf{\Psi})\). However, we want to reduce the computational complexity first, i.e., we do not want to represent \(\mathrm{ad}\,\mathbf{\Psi}\) as a \(9\times 9\) matrix.
Notice that we will apply \(f(\mathrm{ad}\,\mathbf{\Psi})\) to the symmetric matrix \(\epsilon(\mathbf{u})\) and will obtain a symmetric matrix as a result. In fact, applying \(f(\mathrm{ad}\,\mathbf{\Psi})\) to any symmetric matrix always gives a symmetric matrix. This can, e.g., be seen from Eq. (8) by simply transposing the equation, but also from the Taylor series expansion in Eq. (6) and the fact that \(\mathrm{ad}^{2}\,\mathbf{\Psi}\) also has this feature: \(\mathrm{ad}^{2}\,\mathbf{\Psi}\) maps symmetric matrices to symmetric matrices, and antisymmetric matrices to antisymmetric matrices.
The fact that \(\mathrm{ad}^{2}\,\mathbf{\Psi}\) decomposes into two parts, of course nurtures the idea of just using the part operating on symmetric matrices. Since the vector space of symmetric \(3\times 3\) matrices is only \(6\)-dimensional, this would already reduce the computational complexity. We could represent \(\mathrm{ad}^{2}\,\mathbf{\Psi}\) as a \(6\times 6\)-dimensional matrix, and then apply the function
\[g(z)=\begin{cases}\frac{\sqrt{z}}{\tanh\sqrt{z}}&\text{for}\,\,\Re z\geq 0\\ \frac{\sqrt{-z}}{\tan\sqrt{-z}}&\text{for}\,\,\Re z<0\end{cases}\,, \tag{20}\]
such that
\[f(\mathrm{ad}\,\mathbf{\Psi})= g\left(\frac{1}{4}\,\mathrm{ad}^{2}\,\mathbf{\Psi}\right)\,. \tag{21}\]
This shifts the problem from evaluating a matrix function \(f\) to a matrix function \(g\). Note that we have added the negative real part in Eq. (20) to illustrate that \(g\) can be continued analytically in the negative half-plane to a meromorphic function. It thus becomes evident that \(g\) has a pole at \(z=-\pi^{2}\). In fact, the Taylor expansion follows from Eq. (5)
\[g(z)=\sum_{n=0}^{\infty}\frac{B_{2n}}{(2n)!}4^{n}z^{n}\,, \tag{22}\]
which, due to the pole, only converges absolutely for \(|z|<\pi^{2}\).
However, we can go one step further: First, we notice that \(\mathrm{ad}\,\mathbf{\Psi}\) maps symmetric matrices, like \(\epsilon(\mathbf{u})\), to an antisymmetric \(3\times 3\)-matrix. More importantly, the vector space of antisymmetric \(3\times 3\)-matrices is \(3\)-dimensional, hence any \(\mathrm{ad}^{2n}\,\mathbf{\Psi}\) is at most of rank-\(3\) as a linear operator or matrix for \(n>0\). In other words, in the Taylor series of \(f\) or \(g\) applied to \(\mathrm{ad}\,\mathbf{\Psi}\), only the \(n=0\) term, which is the identity operator/matrix \(\mathbf{1}\), is of full rank, while all other terms are at most of rank-\(3\).
This clearly motivates splitting off the identity matrix \(\mathbf{1}\) and computing only the remaining part on a \(3\times 3\) matrix instead of a \(6\times 6\) or \(9\times 9\) matrix. Thus, we define
\[h(x)=\frac{1}{x}\left(\frac{\sqrt{x}}{\tanh\sqrt{x}}-1\right)\,, \tag{23}\]
with its Taylor series for small \(x\) given as
\[h(x)=\sum_{n=1}^{\infty}\frac{B_{2n}}{(2n)!}4^{n}x^{n-1}\,. \tag{24}\]
Using the equations above, namely that \(g(z)=1+z\,h(z)\) by Eq. (23) and that any power series in \(\operatorname{ad}^{2}\boldsymbol{\Psi}\) commutes with \(\operatorname{ad}\boldsymbol{\Psi}\), we can write
\[\begin{split}& f(\operatorname{ad}\boldsymbol{\Psi})\,\epsilon(\mathbf{u})\\ &\qquad=\epsilon(\mathbf{u})+\frac{1}{4}\,\operatorname{ad}\boldsymbol{\Psi}\;h\left(\frac{1}{4}\,\operatorname{ad}^{2}\boldsymbol{\Psi}\right)\,\operatorname{ad}\boldsymbol{\Psi}\,\epsilon(\mathbf{u})\,,\end{split} \tag{25}\]
which already contains all components of the final algorithm that will compute \(f(\operatorname{ad}\boldsymbol{\Psi})\epsilon(\mathbf{u})\).
In the actual computation, we will need different representations of \(\mathrm{ad}\,\mathbf{\Psi}\). Going through the different instances of \(\mathrm{ad}\,\mathbf{\Psi}\) in Eq. (25) from right to left:
* \(\mathrm{ad}\,\mathbf{\Psi}\epsilon(\mathbf{u})\) as noted earlier is an antisymmetric \(3\times 3\) matrix, and thus can be represented in some basis as a \(3\)-dimensional vector. We will denote this vector as \(\mathbf{v}\in\mathbb{R}^{3}\).
* On the \(3\)-dimensional space of antisymmetric \(3\times 3\)-matrices, the operator \(\mathrm{ad}^{2}\,\mathbf{\Psi}\) will be represented as a \(3\times 3\)-matrix, which we will denote by \(\mathbf{X}\in\mathbb{R}^{3\times 3}\). Dividing by four and applying \(h\) gives another \(3\times 3\)-matrix \(h\left(\frac{1}{4}\,\mathrm{ad}^{2}\,\mathbf{\Psi}\right)\), which is multiplied with the \(3\)-vector \(\mathbf{v}\) that represents \(\mathrm{ad}\,\mathbf{\Psi}\epsilon(\mathbf{u})\). The final result of \(h\left(\frac{1}{4}\,\mathrm{ad}^{2}\,\mathbf{\Psi}\right)\mathrm{ad}\,\mathbf{\Psi}\epsilon(\mathbf{u})\) is then once again represented by a \(3\)-vector \(h(\mathbf{X}/4)\,\mathbf{v}\).
* The last invocation of \(\mathrm{ad}\,\mathbf{\Psi}\) linearly maps an antisymmetric \(3\times 3\)-matrix to a symmetric \(3\times 3\)-matrix. Therefore, it can be represented as a \(6\times 3\)-matrix, which we will denote by \(\mathbf{Y}\in\mathbb{R}^{6\times 3}\). It is multiplied by the \(3\)-vector from the previous step, resulting in a \(6\)-vector \(\mathbf{Y}\,h(\mathbf{X}/4)\,\mathbf{v}\).
In order to concretize the computational steps, we will need to choose specific bases. We start with the basis for the symmetric \(3\times 3\)-matrices: the matrix \(\mathbf{\Psi}\) is already stored in most codes as a \(6\)-vector \((\Psi_{11},\Psi_{12},\Psi_{13},\Psi_{22},\Psi_{23},\Psi_{33})^{T}\). The same holds for \(\epsilon(\mathbf{u})\) with \((\epsilon_{11},\epsilon_{12},\epsilon_{13},\epsilon_{22},\epsilon_{23},\epsilon_ {33})^{T}\).
For the antisymmetric \(3\times 3\)-matrices to be represented as a \(3\)-vector, we want to have further properties for the representation of \(\operatorname{ad}^{2}\boldsymbol{\Psi}\) as a matrix on that vector space. Most notably, we want \(\operatorname{ad}^{2}\boldsymbol{\Psi}\) to be represented as a symmetric matrix \(\mathbf{X}\in\mathbb{R}^{3\times 3}_{sym}\).
For that, we first define a scaled Frobenius product of two matrices \(\mathbf{A},\mathbf{B}\in\mathbb{R}^{3\times 3}\), i.e.,
\[(\mathbf{A},\mathbf{B})_{sf}\coloneqq\frac{1}{2}\operatorname{tr}\mathbf{A}^ {T}\mathbf{B}\,. \tag{26}\]
The factor \(1/2\) is introduced to avoid several \(\sqrt{2}\) factors in the following formulas. The more important aspect here is that \(\operatorname{ad}^{2}\boldsymbol{\Psi}\) is selfadjoint with respect to this scalar product
\[(\mathbf{A},\operatorname{ad}^{2}\boldsymbol{\Psi}\,\mathbf{B})_ {sf}= \frac{1}{2}\operatorname{tr}\left(\mathbf{A}^{T}\left(\boldsymbol {\Psi}^{2}\mathbf{B}-2\boldsymbol{\Psi}\mathbf{B}\boldsymbol{\Psi}+\mathbf{B} \boldsymbol{\Psi}^{2}\right)\right)\] \[= \frac{1}{2}\operatorname{tr}\left(\left(\boldsymbol{\Psi}^{2} \mathbf{A}-2\boldsymbol{\Psi}\mathbf{A}\boldsymbol{\Psi}+\mathbf{A} \boldsymbol{\Psi}^{2}\right)^{T}\mathbf{B}\right)\] \[= (\operatorname{ad}^{2}\boldsymbol{\Psi}\,\mathbf{A},\mathbf{B})_ {sf}\,.\]
One ramification of the selfadjointness is that the matrix representation of \(\operatorname{ad}^{2}\boldsymbol{\Psi}\) in that basis will yield a symmetric matrix \(\mathbf{X}\), if we choose the basis to be orthonormal with respect to the same scalar product.
This motivates our choice of an orthonormal basis \(\{\mathbf{E}_{i}\}\) of the antisymmetric \(3\times 3\)-matrices
\[\mathbf{E}_{1}= \begin{pmatrix}0&1&0\\ -1&0&0\\ 0&0&0\end{pmatrix} \tag{27}\] \[\mathbf{E}_{2}= \begin{pmatrix}0&0&1\\ 0&0&0\\ -1&0&0\end{pmatrix}\] (28) \[\mathbf{E}_{3}= \begin{pmatrix}0&0&0\\ 0&0&1\\ 0&-1&0\end{pmatrix}\,. \tag{29}\]
Going through the different needed representations of \(\operatorname{ad}\boldsymbol{\Psi}\), we will start with \(\operatorname{ad}\boldsymbol{\Psi}\epsilon(\mathbf{u})\), which we represent as a vector \(\mathbf{v}\in\mathbb{R}^{3}\), whose components are given by \(v_{i}=(\mathbf{E}_{i},\operatorname{ad}\boldsymbol{\Psi}\epsilon(\mathbf{u}) )_{sf}\). The latter yields
\[v_{1}= -\epsilon_{11}\Psi_{12}+\epsilon_{12}\Psi_{11}-\epsilon_{12}\Psi_ {22} \tag{30}\] \[\qquad-\epsilon_{13}\Psi_{23}+\epsilon_{22}\Psi_{12}+\epsilon_{23 }\Psi_{13}\] \[v_{2}= -\epsilon_{11}\Psi_{13}-\epsilon_{12}\Psi_{23}+\epsilon_{13}\Psi _{11}\] (31) \[\qquad-\epsilon_{13}\Psi_{33}+\epsilon_{23}\Psi_{12}+\epsilon_{33 }\Psi_{13}\] \[v_{3}= -\epsilon_{12}\Psi_{13}+\epsilon_{13}\Psi_{12}-\epsilon_{22}\Psi _{23}\] \[\qquad+\epsilon_{23}\Psi_{22}-\epsilon_{23}\Psi_{33}+\epsilon_{3 3}\Psi_{23}\,. \tag{32}\]
To represent \(\operatorname{ad}^{2}\boldsymbol{\Psi}\) on the space of antisymmetric \(3\times 3\)-matrices, we introduce \(\mathbf{X}\in\mathbb{R}^{3\times 3}\), whose entries are given by \(X_{ij}=(\mathbf{E}_{i},\operatorname{ad}^{2}\boldsymbol{\Psi}\,\mathbf{E}_{j} )_{sf}\). As noted, the resulting matrix \(\mathbf{X}\) is symmetric. With our chosen basis, the coefficients are given by
\[X_{11}= \Psi_{11}^{2}-2\Psi_{11}\Psi_{22}+4\Psi_{12}^{2}+\Psi_{13}^{2}+ \Psi_{22}^{2}+\Psi_{23}^{2} \tag{33}\] \[X_{12}= -2\Psi_{11}\Psi_{23}+3\Psi_{12}\Psi_{13}+\Psi_{22}\Psi_{23}+\Psi _{23}\Psi_{33}\] (34) \[X_{13}= -\Psi_{11}\Psi_{13}-3\Psi_{12}\Psi_{23}+2\Psi_{13}\Psi_{22}-\Psi _{13}\Psi_{33}\] (35) \[X_{22}= \Psi_{11}^{2}-2\Psi_{11}\Psi_{33}+\Psi_{12}^{2}+4\Psi_{13}^{2}+ \Psi_{23}^{2}+\Psi_{33}^{2}\] (36) \[X_{23}= \Psi_{11}\Psi_{12}+\Psi_{12}\Psi_{22}-2\Psi_{12}\Psi_{33}+3\Psi _{13}\Psi_{23}\] (37) \[X_{33}= \Psi_{12}^{2}+\Psi_{13}^{2}+\Psi_{22}^{2}-2\Psi_{22}\Psi_{33}+4 \Psi_{23}^{2}+\Psi_{33}^{2}\,. \tag{38}\]
For the last representation of \(\operatorname{ad}\boldsymbol{\Psi}\), from the space of antisymmetric matrices to the space of symmetric \(3\times 3\)-matrices, we compute \(\operatorname{ad}\boldsymbol{\Psi}\,\mathbf{E}_{i}\) and extract the coefficients. We denote the representation by \(\mathbf{Y}\in\mathbb{R}^{6\times 3}\) and its coefficients are given by
\[\mathbf{Y}= \begin{pmatrix}-2\Psi_{12}&-2\Psi_{13}&0\\ \Psi_{11}-\Psi_{22}&-\Psi_{23}&-\Psi_{13}\\ -\Psi_{23}&\Psi_{11}-\Psi_{33}&\Psi_{12}\\ 2\Psi_{12}&0&-2\Psi_{23}\\ \Psi_{13}&\Psi_{12}&\Psi_{22}-\Psi_{33}\\ 0&2\Psi_{13}&2\Psi_{23}\end{pmatrix}\,. \tag{39}\]
Taking for the moment the algorithm to compute \(h(\mathbf{X}/4)\) as given, we can then use Eq. (25) to compute \(f(\operatorname{ad}\boldsymbol{\Psi})\epsilon(\mathbf{u})\) as a series of matrix operations. The actual algorithm to compute \(f(\operatorname{ad}\boldsymbol{\Psi})\epsilon(\mathbf{u})\) is illustrated in Algorithm 1.
```
Require: \(\boldsymbol{\Psi}\) given as \((\Psi_{11},\Psi_{12},\Psi_{13},\Psi_{22},\Psi_{23},\Psi_{33})^{T}\)
Require: \(\epsilon(\mathbf{u})\) given as \((\epsilon_{11},\epsilon_{12},\epsilon_{13},\epsilon_{22},\epsilon_{23},\epsilon_{33})^{T}\)
1: compute \(\mathbf{v}\in\mathbb{R}^{3}\) according to Eqs. (30)-(32)
2: compute \(\mathbf{X}\in\mathbb{R}^{3\times 3}\) according to Eqs. (33)-(38)
3: compute \(\mathbf{Y}\in\mathbb{R}^{6\times 3}\) according to Eq. (39)
4: use Algorithm 2 to compute \(\mathbf{Z}\leftarrow h(\mathbf{X}/4)\)
5: return \(\epsilon(\mathbf{u})+\frac{1}{4}\mathbf{Y}\mathbf{Z}\,\mathbf{v}\)
```
**Algorithm 1** Computing \(f(\operatorname{ad}\boldsymbol{\Psi})\epsilon(\mathbf{u})\)
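As an illustration of Algorithm 1, the following sketch (assuming Eigen) assembles \(\mathbf{v}\), \(\mathbf{X}\) and \(\mathbf{Y}\) directly from the definitions of \(\operatorname{ad}\boldsymbol{\Psi}\), the basis \(\mathbf{E}_{1},\mathbf{E}_{2},\mathbf{E}_{3}\) and the scaled Frobenius product, which is mathematically equivalent to the expanded coefficient formulas (30)-(39) but less error-prone to transcribe. The function names are ours, and \(h(\mathbf{X}/4)\) is evaluated here only by the truncated Taylor series (24), so the sketch is valid for moderate \(\|\mathbf{X}\|\); for the general case the argument reduction of Algorithm 2, sketched further below, would be used instead.

```cpp
#include <Eigen/Dense>
#include <iostream>

using Eigen::Matrix3d;
using Eigen::Vector3d;
using Vector6d  = Eigen::Matrix<double, 6, 1>;
using Matrix63d = Eigen::Matrix<double, 6, 3>;

// ad Psi (A) = Psi A - A Psi
Matrix3d ad(const Matrix3d& Psi, const Matrix3d& A) { return Psi * A - A * Psi; }

// coordinates of an antisymmetric matrix in the basis E1, E2, E3 of Eqs. (27)-(29),
// i.e. the scaled Frobenius products (E_i, W)_sf
Vector3d antiCoords(const Matrix3d& W) { return Vector3d(W(0, 1), W(0, 2), W(1, 2)); }

// pack a symmetric matrix as the 6-vector (11, 12, 13, 22, 23, 33)
Vector6d symCoords(const Matrix3d& S) {
    Vector6d s;
    s << S(0, 0), S(0, 1), S(0, 2), S(1, 1), S(1, 2), S(2, 2);
    return s;
}

// truncated Taylor series of h, Eq. (24); sufficient for moderate ||X|| only
Matrix3d hTaylor(const Matrix3d& X) {
    const double c[] = {1.0/3.0, -1.0/45.0, 2.0/945.0, -1.0/4725.0, 2.0/93555.0};
    Matrix3d result = Matrix3d::Zero(), power = Matrix3d::Identity();
    for (double ck : c) { result += ck * power; power = power * X; }
    return result;
}

// Algorithm 1: evaluate f(ad Psi) eps(u) via Eq. (25) without eigenvalues of Psi
Matrix3d fAdPsiEps(const Matrix3d& Psi, const Matrix3d& eps) {
    const Matrix3d E[3] = {
        (Matrix3d() << 0, 1, 0, -1, 0, 0, 0, 0, 0).finished(),
        (Matrix3d() << 0, 0, 1, 0, 0, 0, -1, 0, 0).finished(),
        (Matrix3d() << 0, 0, 0, 0, 0, 1, 0, -1, 0).finished()};

    Vector3d v = antiCoords(ad(Psi, eps));   // v, cf. Eqs. (30)-(32)
    Matrix3d X;                              // ad^2 Psi on antisymmetric matrices, cf. Eqs. (33)-(38)
    Matrix63d Y;                             // ad Psi: antisymmetric -> symmetric, cf. Eq. (39)
    for (int j = 0; j < 3; ++j) {
        X.col(j) = antiCoords(ad(Psi, ad(Psi, E[j])));
        Y.col(j) = symCoords(ad(Psi, E[j]));
    }

    Vector6d r = symCoords(eps) + 0.25 * Y * (hTaylor(0.25 * X) * v);
    Matrix3d R;                              // unpack (11, 12, 13, 22, 23, 33)
    R << r(0), r(1), r(2),
         r(1), r(3), r(4),
         r(2), r(4), r(5);
    return R;
}

int main() {
    Matrix3d Psi, gradU;
    Psi << 0.8, 0.2, 0.0,
           0.2, -0.3, 0.1,
           0.0, 0.1, 0.5;
    gradU << 0.0, 1.0, 0.0,
             0.2, 0.0, 0.0,
             0.0, 0.3, 0.0;
    Matrix3d eps = 0.5 * (gradU + gradU.transpose());
    std::cout << fAdPsiEps(Psi, eps) << std::endl;
}
```

For a given \(\boldsymbol{\Psi}\) and \(\epsilon(\mathbf{u})\), the intermediate quantities \(\mathbf{v}\), \(\mathbf{X}\) and \(\mathbf{Y}\) can be cross-checked against Eqs. (30)-(39).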
Before we come to the general case of computing \(h(\mathbf{X}/4)\), we want to first mention a case in which evaluating \(h\) becomes as easy as a simple function evaluation: the two-dimensional case.
To see this, note that in the two-dimensional case \(X_{12}\) and \(X_{13}\) are both zero. As such \(\mathbf{X}\) is the direct sum of two submatrices, of which the first one consists of just a single entry \(X_{11}\). Furthermore, since \(v_{1}\) is the only non-zero entry of \(\mathbf{v}\) in this case, it is also just \(h(X_{11})\) that needs to be calculated. In fact, acknowledging that
\[h(x)= \frac{1}{x}\left(\sqrt{x}+\frac{2\sqrt{x}}{e^{2\sqrt{x}}-1}-1 \right)\,, \tag{40}\]
this yields exactly the representation of \(f(\operatorname{ad}\boldsymbol{\Psi})\epsilon(\mathbf{u})\) that was given in [11, Theorem 2].
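In code, the two-dimensional shortcut therefore reduces to a single scalar evaluation of Eq. (40); a minimal sketch, with an illustrative series fallback near zero to avoid the removable singularity (the function name and switching threshold are ours):

```cpp
#include <cmath>
#include <iostream>

// scalar h of Eq. (40) for the 2D case, where only h(X11) is needed (X11 >= 0);
// the series fallback near zero avoids the removable singularity at x = 0
double hScalar(double x) {
    if (x < 1e-8) return 1.0 / 3.0 - x / 45.0;   // leading terms of Eq. (24)
    const double s = std::sqrt(x);
    return (s + 2.0 * s / std::expm1(2.0 * s) - 1.0) / x;
}

int main() {
    std::cout << hScalar(0.0) << " " << hScalar(2.5) << std::endl;  // hScalar(0) = 1/3
}
```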
Coming to the general case, we so far only have a Taylor series of \(h\), Eq. (24), which only works for small \(\mathbf{X}\). Taking the Scaling&Squaring algorithm for the matrix exponential function as an instructive example, we seek a functional equation that allows us to reduce the computation to arguments that are amenable to the Taylor series.
In fact, using the formula for doubling the argument of \(\tanh x\)
\[\tanh x= \frac{2\tanh\frac{x}{2}}{1+\tanh^{2}\frac{x}{2}}\,, \tag{41}\]
we obtain
\[h(x)= \frac{1}{4}\left(h(x/4)+\left(g(x/4)\right)^{-1}\right) \tag{42}\]
with
\[g(x/4)=1+x/4\,h(x/4)\,. \tag{43}\]
Considering a general \(\mathbf{X}\in\mathbb{R}_{sym}^{3\times 3}\), we can readily use this to seek an appropriate \(j\in\mathbb{N}\) such that \(\mathbf{X}/4^{j}\) is small enough to be approximated by a truncated Taylor series. Then iterating Eqs. (42) and (43) \(j\)-times we get the final result. The full algorithm is displayed in Algorithm 2.
For the actual algorithm, we needed to decide on when \(\mathbf{X}/4^{j}\) is small enough. Evaluating Algorithm 2 for scalar instead of matrix arguments and comparing it with a high-precision calculation of \(h\) gives an indication on the accuracy of the algorithm. This analysis yields that \(j_{0}=4\) is sufficient for an absolute accuracy of \(10^{-16}\) in the scalar argument case. Therefore, this is also the value that was used for all our numerical evaluations. We also compared the matrix argument case with an eigenvalue-based evaluation for random \(\mathbf{X}\) and could not observe any severe issues.
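The following sketch (assuming Eigen) illustrates the structure of Algorithm 2: \(\mathbf{X}\) is scaled by \(4^{-j}\) until its norm is small, \(h\) is evaluated there by the truncated Taylor series (24), and Eqs. (42) and (43) are then iterated \(j\) times. The norm threshold and truncation order are illustrative choices, not the tuned values behind \(j_{0}=4\).

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <iostream>

using Eigen::Matrix3d;

// h(X) for symmetric X: reduce the argument by factors of 4, evaluate the truncated
// Taylor series, Eq. (24), at the reduced argument, then iterate Eqs. (42)-(43).
Matrix3d hMatrix(const Matrix3d& X) {
    int j = 0;
    double norm = X.norm();
    while (norm > 0.5) { norm *= 0.25; ++j; }        // illustrative threshold
    Matrix3d Xs = X / std::pow(4.0, j);

    const double c[] = {1.0/3.0, -1.0/45.0, 2.0/945.0, -1.0/4725.0, 2.0/93555.0};
    Matrix3d H = Matrix3d::Zero(), power = Matrix3d::Identity();
    for (double ck : c) { H += ck * power; power = power * Xs; }   // H = h(Xs)

    for (int s = 0; s < j; ++s) {
        Matrix3d G = Matrix3d::Identity() + Xs * H;  // g(Xs), Eq. (43)
        H = 0.25 * (H + G.inverse());                // h(4 Xs), Eq. (42)
        Xs *= 4.0;
    }
    return H;                                        // = h(X)
}

int main() {
    Matrix3d X;
    X << 9.0, 1.0, 0.5,
         1.0, 7.0, 0.2,
         0.5, 0.2, 6.0;
    std::cout << hMatrix(X) << std::endl;
}
```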
However, this should not be taken without a word of caution. The functions \(g(x)\) and \(h(x)\) asymptotically behave like \(\sqrt{x}\) and \(1/\sqrt{x}\), respectively. In fact, \(\sqrt{x}\) is a well-known example of a function for which an algorithm works for scalar arguments, but may fail for matrix arguments of even moderate condition number, cf. [26; 27]. Hence, although our numerical experiments do already give a strong indication for a stable algorithm, a thorough mathematical error analysis of the algorithm is still outstanding and remains a subject of future research.
## 4 Finite Volume Implementation
In the following, we are going to embed the new eigenvalue-free constitutive formulation in a numerical implementation. We will, therefore, augment the constitutive equation (3) with a system of partial differential equations consisting of the continuity equation and the momentum balance, as well as Kramers' expression to relate the polymeric stress and the log-conformation field. These equations will then be solved using a finite volume method (FVM), where the polymeric stress is computed with the log-conformation approach according to Eq. (3), and where the \(f(\operatorname{ad}\boldsymbol{\Psi})\epsilon(\mathbf{u})\)-term on the right-hand side is computed without an eigenvalue decomposition of \(\boldsymbol{\Psi}\) according to Eq. (25) and Algorithm 1.
As noted earlier, the eigenvalue-free log-conformation formulation is quite universal and not necessarily tied to a specific discretization scheme. Like the eigenvalue-based formulation, it needs a point-based evaluation of \(\mathbf{\Psi}\), and the discretization scheme needs to provide a good approximation of \(\nabla\mathbf{u}\) at the same point, such that Algorithm 1 can compute the \(f(\operatorname{ad}\mathbf{\Psi})\epsilon(\mathbf{u})\) term also at this point in space and time.
To illustrate how easily this different evaluation of the \(f(\text{ad}\,\mathbf{\Psi})\epsilon(\mathbf{u})\) term can be dropped into an existing code, we chose to base our numerical implementation on one of the existing and established open source computational rheology packages: RheoTool [13]. It is based on OpenFOAM® [28] and has many constitutive models for viscoelastic fluid simulations implemented already. The eigenvalue-free formulations for the log-conf variants of the Oldroyd-B and Giesekus models, which are the subject of this work, are implemented among those models and can be used and configured analogously in the overall OpenFOAM® framework. More specifically, we use RheoTool in version 6 and OpenFOAM® in version 9.
A detailed description of the system of partial differential equations and algebraic equations that we will use, and of the corresponding finite volume discretization and linearization follows next. Afterwards, in Section 5, our implementation is applied to the study of two well-known tests for viscoelastic fluid flow: the confined cylinder and the sedimenting sphere benchmarks.
### Statement of the full set of partial differential equations
To state the full system of partial differential equations, which we are going to discretize and solve, we start with the incompressible isothermal Navier-Stokes equations
\[\nabla\cdot\mathbf{u}=0 \tag{44}\] \[\rho(\partial_{t}\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u})=- \nabla p+\nabla\cdot(\eta_{s}\nabla\mathbf{u})+\nabla\cdot\mathbf{\tau}\,, \tag{45}\]
where \(\mathbf{u}\) is the velocity vector, \(p\) the pressure, \(\mathbf{\tau}\) the polymeric extra stress tensor, \(\eta_{s}\) the solvent viscosity, and \(\rho\) the density of the fluid. These equations are coupled with an additional partial differential equation for the log-conf tensor \(\mathbf{\Psi}\), which was already stated in its \(f(\operatorname{ad}\mathbf{\Psi})\) form in Eq. (3). Rearranging some terms, the constitutive equation can be written as
\[\begin{split}\partial_{t}\mathbf{\Psi}+(\mathbf{u}\cdot\nabla)\mathbf{\Psi}&=-\mathbf{\Psi}\omega(\mathbf{u})+\omega(\mathbf{u})\mathbf{\Psi}\\ &\quad+2\,f(\operatorname{ad}\mathbf{\Psi})\,\epsilon(\mathbf{u})-P(e^{\mathbf{\Psi}})e^{-\mathbf{\Psi}}\,.\end{split} \tag{46}\]
In the following benchmarks, we only consider the Oldroyd-B and Giesekus constitutive models, thus setting \(P(\exp(\mathbf{\Psi}))=\frac{1}{\lambda}\left(\exp(\mathbf{\Psi})-\mathbf{1}\right)\) and \(P(\exp(\mathbf{\Psi}))=\frac{1}{\lambda}\left(\mathbf{1}+\alpha\left(\exp(\mathbf{ \Psi})-\mathbf{1}\right)\right)(\exp(\mathbf{\Psi})-\mathbf{1})\), respectively. The conformation tensor and the log-conformation field are related to the polymeric stress \(\mathbf{\tau}\) by means of Kramers' expression
\[\mathbf{\tau}=\frac{\eta_{p}}{\lambda}(\mathbf{e}^{\mathbf{\Psi}}-\mathbf{1})\,, \tag{47}\]
where \(\eta_{p}\) is the polymeric viscosity and \(\lambda\) the relaxation time of the fluid.
In total, the partial differential equations and the algebraic equation (44)-(47), when augmented with appropriate initial and boundary conditions, compose the mathematical problem we aim to solve. In the following, we will lay out our chosen discretization scheme.
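As a small illustration of how the algebraic couplings are evaluated pointwise, the following sketch (assuming Eigen's unsupported MatrixFunctions module for the matrix exponential) computes the Oldroyd-B relaxation term \(P(e^{\mathbf{\Psi}})e^{-\mathbf{\Psi}}=\frac{1}{\lambda}(\mathbf{1}-e^{-\mathbf{\Psi}})\) and the polymeric stress of Kramers' expression (47) from a given \(\mathbf{\Psi}\); the simplified form of the relaxation term holds only for the Oldroyd-B choice of \(P\) stated above, and the function names and sample values are ours.

```cpp
#include <Eigen/Dense>
#include <unsupported/Eigen/MatrixFunctions>  // provides the matrix exponential .exp()
#include <iostream>

using Eigen::Matrix3d;

// Oldroyd-B relaxation term P(e^Psi) e^{-Psi} = (1/lambda) (I - e^{-Psi}), cf. Eq. (46)
Matrix3d relaxationOldroydB(const Matrix3d& Psi, double lambda) {
    Matrix3d minusPsi = -Psi;
    Matrix3d expMinus = minusPsi.exp();
    return (Matrix3d::Identity() - expMinus) / lambda;
}

// Kramers' expression, Eq. (47): tau = (eta_p / lambda) (e^Psi - 1)
Matrix3d polymericStress(const Matrix3d& Psi, double etaP, double lambda) {
    Matrix3d expPsi = Psi.exp();
    return (etaP / lambda) * (expPsi - Matrix3d::Identity());
}

int main() {
    Matrix3d Psi;
    Psi << 0.4, 0.1, 0.0,
           0.1, 0.0, 0.0,
           0.0, 0.0, -0.2;
    std::cout << polymericStress(Psi, 0.41, 1.0) << "\n\n"
              << relaxationOldroydB(Psi, 1.0) << std::endl;
}
```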
### Temporal discretization, linearization and SIMPLEC
Starting off with the set of equations in Eqs. (44)-(47), we at first discretize in time using the backwards Euler scheme. This leads to
\[\nabla\cdot\mathbf{u}_{t}=0 \tag{48}\] \[\begin{split}\frac{\rho}{\Delta t}\mathbf{u}_{t}+\rho\left( \mathbf{u}_{t}\cdot\nabla\right)\mathbf{u}_{t}&=-\nabla p_{t}+ \nabla\cdot(\eta_{s}\nabla\mathbf{u}_{t})\\ &\qquad\qquad\qquad\qquad+\nabla\cdot\mathbf{\tau}_{t}+\frac{\rho}{ \Delta t}\mathbf{u}_{t-\Delta t}\end{split}\] (49) \[\begin{split}\frac{1}{\Delta t}\mathbf{\Psi}_{t}+(\mathbf{u}_{t} \cdot\nabla)\mathbf{\Psi}_{t}&=-\mathbf{\Psi}_{t}\omega(\mathbf{u}_{t} )+\omega(\mathbf{u}_{t})\mathbf{\Psi}_{t}\\ &\qquad\qquad\qquad\qquad+2\,f(\operatorname{ad}\mathbf{\Psi}_{t}) \,\epsilon(\mathbf{u}_{t})\\ &\qquad\qquad\qquad\qquad-P(\mathbf{e}^{\mathbf{\Psi}_{t}})e^{-\mathbf{\Psi}_ {t}}+\frac{1}{\Delta t}\mathbf{\Psi}_{t-\Delta t}\end{split}\] (50) \[\mathbf{\tau}_{t}=\frac{\eta_{p}}{\lambda}(\mathbf{e}^{\mathbf{\Psi}_{t}}- \mathbf{1})\,. \tag{51}\]
In order not to overload the notation, we drop the \(t\) indices for the current time-step and only keep \(\mathbf{u}_{t-\Delta t}\) and \(\mathbf{\Psi}_{t-\Delta t}\).
As a next step, we approach the non-linearity. Therefore, we choose a Picard-type fixed-point iteration. We indicate the current iteration with a suffix \(i\) and start our iteration with \(\mathbf{u}_{0}=\mathbf{u}_{t-\Delta t}\) and \(\mathbf{\Psi}_{0}=\mathbf{\Psi}_{t-\Delta t}\). We linearize our equations in such a way that \(\mathbf{\Psi}_{i}\) is solved for after \(\mathbf{u}_{i}\) and \(p_{i}\) have been computed. In the constitutive equation, all non-linear occurrences of \(\mathbf{\Psi}\) are replaced by \(\mathbf{\Psi}_{i-1}\). In the momentum equation, we choose to linearize the convective derivative as usual, by computing the flux based on the previous iteration. We thus obtain
\[\nabla\cdot\mathbf{u}_{i}=0 \tag{52}\]
\[\frac{\rho}{\Delta t}\mathbf{u}_{i}+\rho\left(\mathbf{u}_{i-1}\cdot\nabla\right)\mathbf{u}_{i}=-\nabla p_{i}+\nabla\cdot(\eta_{s}\nabla\mathbf{u}_{i})+\nabla\cdot\mathbf{\tau}_{i-1}+\frac{\rho}{\Delta t}\mathbf{u}_{t-\Delta t} \tag{53}\]
\[\begin{aligned}\frac{1}{\Delta t}\mathbf{\Psi}_{i}+(\mathbf{u}_{i}\cdot\nabla)\mathbf{\Psi}_{i}=-&\mathbf{\Psi}_{i-1}\omega(\mathbf{u}_{i})+\omega(\mathbf{u}_{i})\mathbf{\Psi}_{i-1}\\ &+2\,f(\mathrm{ad}\,\mathbf{\Psi}_{i-1})\,\epsilon(\mathbf{u}_{i})\\ &-P(e^{\mathbf{\Psi}_{i-1}})e^{-\mathbf{\Psi}_{i-1}}+\frac{1}{\Delta t}\mathbf{\Psi}_{t-\Delta t}\end{aligned} \tag{54}\]
\[\mathbf{\tau}_{i}=\frac{\eta_{p}}{\lambda}(e^{\mathbf{\Psi}_{i}}-\mathbf{1})\,. \tag{55}\]
Note that, with this linearization, \(\mathbf{u}_{i}\) and \(p_{i}\) should first be solved in a coupled manner; then \(\mathbf{\Psi}_{i}\) can be computed based on \(\mathbf{u}_{i}\), which finally yields \(\tau_{i}\). It is also noteworthy that our chosen scheme does not use any type of both-sides diffusion (BSD), which was introduced in [29] and applied in a finite volume context in [14].
To further reduce the coupling between \(\mathbf{u}_{i}\) and \(p_{i}\) we employ the SIMPLEC method [30]. For that, consider the following form of the momentum equation (53)
\[\overbrace{\left(\frac{\rho}{\Delta t}+\rho(\mathbf{u}_{i-1}\cdot\nabla)-\nabla\cdot(\eta_{s}\nabla)\right)}^{=A-H}\mathbf{u}^{*}=-\nabla p^{*}+\underbrace{\nabla\cdot\mathbf{\tau}_{i-1}+\frac{\rho}{\Delta t}\mathbf{u}_{t-\Delta t}}_{=\mathbf{b}}\,, \tag{56}\]
where \(A-H\) encodes the linear operator that operates on \(\mathbf{u}\) in the momentum equation.1 After the spatial discretization, which follows in Section 4.3, \(A\) will be the diagonal part of the matrix and \(-H\) the off-diagonal part. In particular \(A\) will be easy to invert.
Footnote 1: Our notation deviates a bit from the actual implementation in OpenFOAM®, where \(H\) is used to denote what is here given as \(H\mathbf{u}^{*}+\mathbf{b}\).
Now, assuming \(\mathbf{u}^{*}\) solves Eq. (56) given the pressure \(p^{*}\coloneqq p_{i-1}\) from the previous iteration, we seek an update \(\mathbf{u}^{\prime}\) such that \(\mathbf{u}_{i}=\mathbf{u}^{*}+\mathbf{u}^{\prime}\) solves the continuity equation (52). Introducing the pressure update \(p^{\prime}=p_{i}-p^{*}\), the velocity update \(\mathbf{u}^{\prime}\) needs to solve
\[(A-H)\,\mathbf{u}^{\prime}=-\nabla p^{\prime}\,. \tag{57}\]
SIMPLEC now approximates \(H\) by another operator \(H_{1}\), which like \(A\) is easy to invert. In the actual implementation, i.e., after the spatial discretization, \(H_{1}\) will be realized as a matrix lumping of the off-diagonal entries onto the diagonal. For the details consult [14]. Thus, we can solve
\[\mathbf{u}^{\prime}=-\left(A-H_{1}\right)^{-1}\nabla p^{\prime}\,. \tag{58}\]
Therefore, the continuity equation \(\nabla\cdot(\mathbf{u}^{\prime}+\mathbf{u}^{*})=0\) amounts to
\[\begin{aligned} 0=\nabla\cdot&\left(-\left(A-H_{1} \right)^{-1}\nabla(p_{i}-p^{*})\right.\\ &\left.+A^{-1}\left(H\mathbf{u}^{*}-\nabla p^{*}+\mathbf{b} \right)\right)\,,\end{aligned} \tag{59}\]
which can be rearranged to the pressure correction equation
\[\begin{aligned} \nabla&\cdot\left(\left(A-H_{1} \right)^{-1}\nabla p_{i}\right)\\ &=\nabla\cdot\left(A^{-1}(H\mathbf{u}^{*}+\mathbf{b})+\left((A-H _{1})^{-1}-A^{-1}\right)\nabla p^{*}\right)\,.\end{aligned} \tag{60}\]
The corrected velocity \(\mathbf{u}_{i}\) is then given by
\[\begin{aligned} \mathbf{u}_{i}&=A^{-1}(H\mathbf{u}^{*}+ \mathbf{b})+\left((A-H_{1})^{-1}-A^{-1}\right)\nabla p^{*}\\ &\qquad-\left(A-H_{1}\right)^{-1}\nabla p_{i}\,.\end{aligned} \tag{61}\]
In principle, we now have arrived at a set of decoupled partial differential equations (56),(60) and (54) and two algebraic evaluations (55) and (61) that can be composed into an algorithm as illustrated in Fig. 1.
However, Fig. 1 contains another interior fixed-point loop around the pressure correction equation (60). The rationale here is that the spatial discretization of the surface gradient \(\nabla p_{i}\), which will be described in the following section, is defective for non-orthogonal meshes. To correct for this, some computations in the scheme are deferred in a non-linear fashion, which then necessitate another fixed-point loop around the discretized version of Eq. (60). The latter happens even though Eq. (60) looks linear on the current level of abstraction. For the details, we refer the reader to [31, Sec. 9.8].
In all simulations that are presented in Section 5, a total of two inner iteration loops and two non-orthogonal correction steps per time-step are used.
Figure 1: Solver flowchart.
### Spatial discretization
After temporal discretization, linearization and decoupling of velocity and pressure with the SIMPLEC method, we arrive at three decoupled, linear partial differential equations (56),(60),(54). In order to solve those, we need to choose a method for spatial discretization. As noted earlier, we have chosen the Finite Volume Method (FVM), and in particular base our implementation on RheoTool [13] and OpenFOAM(r) [28].
In the FVM, the computational domain is subdivided into a set of appropriate interconnected control volumes (the mesh) and the integral form of these PDEs is then evaluated on every single control volume [31]. The variables of interest (\(\mathbf{u}^{\ast}\), \(p_{i}\) and \(\boldsymbol{\Psi}_{i}\)) are, in our choice of a cell-centered FVM, considered as discrete fields (vector-, scalar- and tensorfields, respectively) which attain their respective value at the cell center. The appearing spatial differential operators are then approximated using different schemes that solely depend on those cell-centered quantities. With the initial PDEs being linear, this approach results in sparse linear equation systems.
Next, we list the configuration of the spatial discretization schemes, which will be used throughout all simulations that follow in Section 5.
* The divergence terms are discretized according to the divergence theorem via the Gauss scheme. For that, the argument of the divergence operator needs to be evaluated on the faces of the cell. For \(\nabla\cdot\boldsymbol{\tau}_{i-1}\) or \(\nabla\cdot\left(A^{-1}(H\mathbf{u}^{\ast}+\mathbf{b})\right)\) this means that the cell-centered value is interpolated linearly from cell to face. In Eq. (60) the term \((A-H_{1})^{-1}-A^{-1}\) is also linearly interpolated from cell to face.
* The Laplacian terms, such as \(\nabla\cdot\left(\eta_{s}\nabla\mathbf{u}_{i}\right)\) and \(\nabla\cdot\left(\left(A-H_{1}\right)^{-1}\nabla p_{i}\right)\), are also discretized using Gaussian integration, with the difference that only the inner factors are linearly interpolated. The gradients \(\nabla\mathbf{u}_{i}\) and \(\nabla p_{i}\), but also \(\nabla p^{\ast}\) in Eq. (60), are directly evaluated on the face using a surface normal scheme. In all our computations we have employed a surface normal gradient scheme with an explicit deferred non-orthogonal correction.
* Cell-centered gradients, as \(\nabla p^{\ast}\) and \(\nabla\mathbf{u}_{i}\) in Eqs. (54),(56),(61), are computed using the Gauss scheme with linear interpolation. Interpolation in general is linear per default, whenever needed.
* For the convective term in the constitutive equation, \((\mathbf{u}_{i}\cdot\nabla)\boldsymbol{\Psi}_{i}\), the corrected, component-wise CUBISTA scheme is used, which is described in [14]. The convective term \((\mathbf{u}_{i-1}\cdot\nabla)\mathbf{u}_{i}\) in the momentum balance is removed from Eq. (56) in the later benchmarks (to enforce \(\mathrm{Re}=0\)) and, therefore, no discretization scheme is needed.
Overall, all spatial discretization schemes used are, under ideal conditions, i.e., on orthogonal meshes, second-order accurate. However, as for example shown in [32], the gradient computation may lose its second-order accuracy on meshes of poor quality, e.g., with high non-orthogonality or skewness. As a consequence, particular attention was paid to the selection and design of the hexahedral meshes in Section 5.
A crucial aspect when simulating the incompressible Navier-Stokes equations, regardless of the employed spatial discretization scheme, is the issue of checkerboard patterns and, more generally, the saddle-point structure of the linearized problem. Here, this issue has been approached with the Rhie-Chow method [33], where \(\nabla p_{i},\nabla p^{\ast}\) are discretized differently in Eq. (60) than they are in Eqs. (56) and (61). We do not want to go into the details here, since they have already been laid out in [14], but solely mention two points: Firstly, there is a connection to the inf-sup condition, which is important in the finite element world, and we refer the interested reader to [34] for a recent account in that direction. Secondly, on top of what has just been described, OpenFOAM® employs a correction of the flux in Eq. (60) that is intended to remedy unphysical dependencies of steady-state solutions on the chosen time-step size. The reader is once again referred to [14] for the details.
Of course, boundary conditions do also constitute an important aspect of numerical methods for partial differential equations. The specific choice of boundary conditions for the later benchmarks will follow in the corresponding sections 5.1 and 5.2. Nonetheless, it should be noted that boundary conditions are handled according to the technique that is implemented in OpenFOAM®, where specific boundary structures, called patches, are used to store boundary information. Hence, whenever needed by a certain discretization scheme for elements at the edge of the computational domain, the required values that cannot be provided by interior neighbors are fetched from these boundary patches.
Finally, we will mention that the choice of the viscoelastic model (e.g., Oldroyd-B or Giesekus) and in particular the implementation of the eigenvalue-free \(f(\text{ad}\,\mathbf{\Psi})\) term does not affect the overall procedure depicted in Fig. 1, but only the assembly of the right-hand side of Eq. (54), and is therefore straightforward to implement.
### Choice of linear solvers
Through the spatial discretization in the last section, we have now effectively derived three sparse linear equation systems that correspond to Eqs. (56), (60) and (54) and which are solved for the cell-centered values of \(\mathbf{u}^{*}\), \(p_{i}\) and \(\mathbf{\Psi}_{i}\). For the rest of this section, we will refer to these systems as the \(\mathbf{u}^{*}\), \(p_{i}\) and \(\mathbf{\Psi}_{i}\) equations, respectively. One immediate computational optimization, which is employed in OpenFOAM®, is that the left-hand sides of Eqs. (56) and (54) can be decoupled and solved individually for the components of \(\mathbf{u}^{*}\) and \(\mathbf{\Psi}_{i}\).
After this optimization, the individual linear systems are solved using the following solvers: For the \(\mathbf{u}^{*}\) and \(p_{i}\) equations, the Preconditioned Conjugate Gradient method (PCG) is applied with a Diagonal-Based Incomplete Cholesky preconditioner (DIC). An absolute tolerance of \(10^{-10}\), a relative tolerance of \(10^{-4}\) and a maximum number of 1000 iterations are chosen as the possible termination criteria for these solvers. The \(\mathbf{\Psi}_{i}\) equation uses a Preconditioned Bi-Conjugate Gradient method (PBiCG) with a Diagonal-Based Incomplete LU preconditioner (DILU). The same termination configuration is chosen as for the \(\mathbf{u}^{*}\) and \(p_{i}\) equations.
In our numerical algorithm, the currently available field data is used as the initial guess for the corresponding iterative solver. In our benchmarks, a dimensionless timescale \(T=t/\lambda\) is used and each simulation is run until \(T=30\) with a Courant number of 0.5. We, therefore, ensure that the viscoelastic stresses in the fluid have converged at the end of a simulation, i.e., that the fluid has reached a steady-state. Within this steady-state, the initial guesses for the iterative solvers will already be close to the actual solutions, such that the number of iterations is expected to decrease as the simulation progresses in time. However, in a non-steady-state, i.e., at the beginning of a simulation, the initial guesses may be quite far from the actual solution of the system, such that more iterations are needed in general.
Typically, the \(p_{i}\) equation is the most expensive to solve. At the beginning of a simulation, the \(p_{i}\) equation requires several hundred iterations for convergence or even reaches the maximum number of iterations on our finest meshes. Overall the number of iterations needed for convergence decreases as the fluid approaches a steady-state. In a steady-state, there is often no need for a single iteration of the \(\mathbf{u}^{*}\) and \(\mathbf{\Psi}_{i}\) equations, since the initial guess already solves the system well enough.
## 5 Benchmarks
In this section, our implementation of the newly derived eigenvalue-free constitutive formulation is applied to a study of two benchmarks: the confined cylinder and the sedimenting sphere. These benchmarks represent similar flow problems, i.e., flow around an obstacle, in a two-dimensional and a three-dimensional case, respectively. Both benchmarks have been examined in the literature before, in order to validate new numerical schemes or models, see for example [11; 35; 36; 37; 38; 39] for the confined cylinder and [12; 40; 41; 42; 43] for the sedimenting sphere. For comparability, we specifically follow the setups, i.e., the geometries and fluid parameters, that were used in [11] for the confined cylinder and [12] for the sedimenting sphere. A detailed description will follow in the corresponding sections 5.1 and 5.2, where results for the eigenvalue-free logarithmic Oldroyd-B and Giesekus models are shown and discussed.
The main quantity of interest in both benchmarks is the drag coefficient \(C_{d}\), which describes the non-dimensionalized force the fluid exerts on the obstacle in \(x\)-direction. \(C_{d}\) is given by
\[C_{d}=\frac{1}{(\eta_{s}+\eta_{p})\bar{u}}\int_{\Gamma}\mathbf{e}_{x}\cdot( \mathbf{\sigma}\mathbf{n})\,, \tag{62}\]
where \(\Gamma\) is the surface of the obstacle, \(\mathbf{n}\) the corresponding unit normal, \(\mathbf{e}_{x}\) the unit vector in \(x\)-direction and \(\mathbf{\sigma}\) the Cauchy stress tensor
\[\mathbf{\sigma}=-p\mathbf{1}+\eta_{s}(\nabla\mathbf{u}+\nabla\mathbf{u}^{T})+\bm {\tau}\,. \tag{63}\]
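For a surface discretized into faces, the integral in Eq. (62) becomes a sum of face contributions. The following sketch evaluates \(\mathbf{e}_{x}\cdot(\boldsymbol{\sigma}\mathbf{n})\) per face using Eq. (63) and accumulates \(C_{d}\); the FaceData container is an illustrative stand-in for the per-face fields of the actual code, and the face normal is assumed to point from the obstacle into the fluid.

```cpp
#include <Eigen/Dense>
#include <iostream>
#include <vector>

using Eigen::Matrix3d;
using Eigen::Vector3d;

// per-face data on the obstacle surface (illustrative container, not RheoTool's API)
struct FaceData {
    double p;         // pressure
    Matrix3d gradU;   // velocity gradient
    Matrix3d tau;     // polymeric stress
    Vector3d normal;  // unit normal, assumed to point from the obstacle into the fluid
    double area;      // face area
};

// C_d of Eq. (62) as a sum of face contributions, with sigma from Eq. (63)
double dragCoefficient(const std::vector<FaceData>& faces,
                       double etaS, double etaP, double uBar) {
    double force = 0.0;
    for (const FaceData& f : faces) {
        Matrix3d sigma = -f.p * Matrix3d::Identity()
                         + etaS * (f.gradU + f.gradU.transpose())
                         + f.tau;
        Vector3d traction = sigma * f.normal;
        force += traction.x() * f.area;   // e_x . (sigma n) A_f
    }
    return force / ((etaS + etaP) * uBar);
}

int main() {
    FaceData f{1.0, Matrix3d::Zero(), Matrix3d::Zero(), Vector3d(-1.0, 0.0, 0.0), 0.01};
    std::cout << dragCoefficient({f}, 0.59, 0.41, 1.0) << std::endl;
}
```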
It is known that the drag coefficient varies with the Reynolds number Re of the simulation. This has for example been investigated by [35]. However, for comparability, we follow the literature and consider the limit of creeping flow conditions in both benchmarks by removing the convective term from the momentum equation (45).
Overall, a variety of flow simulations for different Weissenberg numbers will be presented and the corresponding drag coefficient values will be compared to the literature. The dimensionless Weissenberg number is given by
\[\text{Wi}=\frac{\lambda\bar{u}}{R}\, \tag{64}\]
where \(\lambda\) is the relaxation time of the fluid, \(R\) is the radius of the cylinder or the sphere and \(\bar{u}\) the mean inflow velocity.
### Confined cylinder
In the confined cylinder case, a two-dimensional channel with a cylindrical obstacle of radius \(R\) in its center is considered as the computational domain. The channel has a height of \(4R\), such that the ratio of the channel height to the cylinder diameter is 2. Our setup mimics the setup of Knechtges et al. [11] and Hulsen et al., where the channel has a total length of \(30R\) in order to reduce effects of the inflow and outflow and where the cylinder center is at \((15R,2R)\). An illustration of the geometry can be seen in Fig. 2.
#### 5.1.1 Setup
Boundary and initial conditions are chosen according to the literature. At the inlet, a fully developed Poiseuille solution for an Oldroyd-B fluid is imposed for the velocity \(\mathbf{u}\) (with mean inflow \(\bar{u}\)) and the polymeric extra stresses \(\tau\) and \(\Psi\), similar to [11]. A zero-gradient condition is considered for the pressure \(p\). The exact values for the Poiseuille flow are given in Appendix A. Furthermore, at the channel and cylinder walls, zero-gradient conditions are considered for the pressure and zero velocities (\(\mathbf{u}=\mathbf{0}\)). The polymeric extra stress components are linearly extrapolated. At the outlet, zero-gradient conditions are imposed for all variables, except for the pressure, which is set to zero. Initially (\(t=0\)) the fluid is at rest (\(\mathbf{u}=\mathbf{0}\)) and the extra stresses are zero (\(\tau=\Psi=\mathbf{0}\)). The pressure is set to zero as well.
In all of the following tests, \(R=1\,\mathrm{m}\) and \(\bar{u}=1\,\mathrm{m/s}\) were fixed, such that the Weissenberg number equals the numerical value of the relaxation time in seconds and could therefore easily be controlled by a change of \(\lambda\). Finally, as in the corresponding literature, a viscosity ratio of \(\beta=\eta_{s}/(\eta_{s}+\eta_{p})=0.59\) and a density of \(\rho=1\,\mathrm{kg/m^{3}}\) have been used.
Three quadrilateral meshes M1, M2 and M3 of different refinement levels have been considered. Their main properties are shown in Tab. 1. In each refinement step the total number of elements is quadrupled from mesh to mesh and the number of elements at the cylinder surface is doubled. An important property of these meshes and their refinement is that characteristics, such as the element non-orthogonality and skewness, are sufficiently small. Element non-orthogonality refers to the angle between the vector connecting two neighboring cell centers and the corresponding face normal. Element skewness refers to the deviation of the intersection point of this cell-center-connecting vector from the actual face center. For example, in a pure square mesh, element non-orthogonality and skewness would both be zero. In the FVM, the gradient computation can be negatively affected by such mesh irregularities, as is described and investigated by Syrakos et al. [32]. Furthermore, the importance of good-quality meshes and strategic mesh refinement is particularly emphasized in [32].
Figure 3: Convergence of the \(C_{d}\) values for the confined cylinder case on mesh M3 at different Weissenberg numbers over time from \(T=1\) to \(T=30\) using the eigenvalue-free logarithmic Oldroyd-B formulation.
Figure 2: Illustration (not to scale) of the confined cylinder. Fluid flows from the inlet at the left side to the outlet at the right side. The upper and lower boundaries of the channel and the cylinder surface are considered as solid walls.
Therefore, only mesh configurations were considered where these characteristic values were sufficiently small on all refinement levels. RheoTool already provides a confined cylinder case with an appropriate mesh [13]. The latter has been used as the basis for our benchmarks and adjusted, e.g., by adding several different refinement levels for the mesh. It should also be mentioned that, in order to achieve reasonable \(C_{d}\) values, boundary layers around the obstacle surface were used. The use of thin boundary layers increased the resolution of the solution close to the obstacle surface and also reduced the extrapolation error, resulting in \(C_{d}\) values that are in good agreement with the literature.
As already mentioned in Section 4.4, adaptive time-stepping kept a Courant number of 0.5 in all simulations. Typical time-step sizes were then ranging from \(1.8\times 10^{-3}\) s on M1, to \(9.0\times 10^{-4}\) s on M2, and \(4.5\times 10^{-4}\) s on M3. For all simulations, a dimensionless timescale \(T=t/\lambda\) was used with end time \(T=30\) in order to ensure convergence of the fluid to a steady-state. Therefore, the \(C_{d}\) values also converge eventually, as can be seen in Fig. 3.
#### 5.1.2 Results
Tab. 2 shows the final \(C_{d}\) values for the eigenvalue-free logarithmic Oldroyd-B formulation. Overall, the results on the finest mesh M3 show good agreement with the literature at all considered Weissenberg numbers. At smaller Weissenberg numbers (\(\mathrm{Wi}\leq 0.7\)) the values in the compared publications [11; 21; 35; 36] deviate at a magnitude of \(10^{-3}\) and our results on M3 (which we consider as our most accurate ones) do also fit into this range. At higher Weissenberg numbers the values tend to deviate more from each other among all publications, roughly at a magnitude of \(10^{-2}\); a property that has already been observed and described for example in [11]. Furthermore, all publications agree that the minimum drag coefficient is obtained at \(\mathrm{Wi}=0.7\). The highest \(C_{d}\) values of around 130.36 are reached at the lowest Weissenberg number of 0.1.
Fig. 4 shows solutions of the confined cylinder case at \(T=30\) and a Weissenberg number \(\mathrm{Wi}=0.7\).
\begin{table}
\begin{tabular}{l c c c} \hline \hline & M1 & M2 & M3 \\ \hline Number of elements in the mesh & 99576 & 398304 & 1593216 \\ Number of elements on the cylinder surface & 756 & 1512 & 3024 \\ Average element non-orthogonality & 12.6 & 12.6 & 12.7 \\ Maximum element non-orthogonality & 44.7 & 44.9 & 45.0 \\ Maximum element skewness & 1.5 & 1.5 & 1.5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mesh statistics for the confined cylinder geometry.
Figure 4: Comparison of \(\Psi_{xx}\) at the final time-step \(T=30\). Top: computed with the eigenvalue-free logarithmic Oldroyd-B formulation; bottom: computed with the standard logarithmic Oldroyd-B formulation that relies on an eigenvalue decomposition. Looking at the entrance of both simulations, it can additionally be seen that developed Poiseuille inflow conditions have been used, since the \(\Psi_{xx}\) components are already developed at the inlet.
Presented are the \(\Psi_{xx}\) components for the eigenvalue-free logarithmic Oldroyd-B formulation in comparison with an eigenvalue-based formulation, as described by Pimenta [14, Eq. (7)] and previously implemented in RheoTool [13]. The contours of the tensor components, and in particular those close to the cylinder, look almost identical. To emphasize and quantify the similarity of these solutions, it can additionally be stated that their final \(C_{d}\) difference is only of magnitude \(10^{-6}\).
Tab. 3 shows \(C_{d}\) results for computations with the eigenvalue-free logarithmic Giesekus model. The Giesekus model has an additional parameter, the mobility factor \(\alpha\in[0,1]\). Again, good agreement with the literature can be observed. Additionally, our results show the significant influence of \(\alpha\) on the drag coefficient. We do not want to go into detail here, as the effect of \(\alpha\) on \(C_{d}\) has already been investigated by others, see for example [35]. As \(\alpha\) increases (for fixed Wi), the drag decreases, which is explained by the shear-thinning property of the Giesekus model. When \(\alpha\) tends to zero, the Giesekus model transitions to the Oldroyd-B model and thus the \(C_{d}\) values converge to the corresponding values in Tab. 2.
All computations were run in parallel on the Caro HPC cluster of the German Aerospace Center. M1 simulations were run on 32 cores, M2 simulations on 64 cores and M3 simulations on 128 cores. In its current state, we observe that our implementation of the eigenvalue-free variant is slightly slower than the standard eigenvalue-based implementation. In particular, we measure a runtime increase of around 7% per time-step in the log-conf equation. However, solving the constitutive equation for a single relaxation mode has only a minor impact on the overall runtime of the algorithm, since the momentum equation and the SIMPLEC algorithm are more computationally heavy. This is corroborated by the comparison of the total runtimes for our test case on M3 using the eigenvalue-free formulation with those of the standard formulation, which differ by less than 1%. Nevertheless, further optimizing the runtime of the constitutive equations, e.g., by bringing these computations onto the GPU, remains an important topic for future research.
### Sedimenting sphere
To demonstrate the eigenvalue-free approach on a three-dimensional problem, we consider a benchmark similar to the confined cylinder: the sedimenting sphere. In this benchmark, fluid flow around a spherical obstacle inside a three-dimensional channel is considered.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \multirow{2}{*}{Wi} & \multicolumn{7}{c}{\(C_{d}\)} \\ \cline{2-8} & M1 & M2 & M3 & [11] & [21] & [35] & [36] \\ \hline
0.1 & 130.31898 & 130.36049 & 130.36653 & 130.3626 & 130.363 & 130.364 & 130.36 \\
0.2 & 126.58894 & 126.62264 & 126.62875 & 126.6252 & 126.626 & 126.626 & 126.62 \\
0.3 & 123.16959 & 123.18940 & 123.19475 & 123.1912 & 123.193 & 123.192 & 123.19 \\
0.4 & 120.59084 & 120.59124 & 120.59500 & 120.5912 & 120.596 & 120.593 & 120.59 \\
0.5 & 118.85227 & 118.82872 & 118.83021 & 118.8260 & 118.836 & 118.826 & 118.83 \\
0.6 & 117.83174 & 117.78125 & 117.77988 & 117.7752 & 117.775 & 117.776 & 117.78 \\
0.7 & 117.40242 & 117.32483 & 117.32079 & 117.3157 & 117.315 & 117.316 & 117.32 \\
0.8 & 117.45188 & 117.35293 & 117.35114 & 117.3454 & 117.373 & 117.368 & 117.36 \\
0.9 & 117.87883 & 117.76574 & 117.77477 & 117.7678 & 117.787 & 117.812 & 117.80 \\
1.0 & 118.60224 & 118.47727 & 118.49927 & & 118.471 & & 118.49 \\ \hline \end{tabular}
\end{table}
Table 2: Final values for the drag coefficient \(C_{d}\) at \(T=30\) for the confined cylinder case, using the eigenvalue-free Oldroyd-B formulation at different Weissenberg numbers.
Figure 5: Illustration (not to scale) of the sedimenting sphere. Fluid flows from the inlet at the left side to the outlet at the right side. A fixed non-zero velocity is considered at the channel wall. A sphere with solid surface (zero velocity) is placed inside the channel.
The sphere has a radius of \(R\) and the channel a height (or diameter) of \(4R\). Based on the setup in [12], we impose a channel length of \(20R\) and keep the sphere centered at \((7R,0,2R)\). An excerpt of the computational domain is shown in Fig. 5.
#### 5.2.1 Setup
Boundary and initial conditions are chosen according to the literature in order to increase comparability. A uniform inlet condition is considered, with a fixed non-zero velocity \(\bar{u}\) in \(x\)-direction, zero polymeric extra stress components, and a zero-gradient condition for the pressure. At the channel wall, the boundary conditions are chosen equal to the inlet conditions. Thus, in particular, the velocity is uniformly fixed with non-zero component in \(x\)-direction as well. At the sphere, a no-slip condition for the velocity is considered (\(\mathbf{u}=\mathbf{0}\)). The polymeric extra stress components are linearly extrapolated onto the surface and the pressure uses a zero-gradient condition. At the outlet, zero-gradient conditions are imposed for all variables except for the pressure, which uses a fixed value condition \(p=0\).
In the following tests, \(R=1\,\mathrm{m}\) and \(\bar{u}=1\,\mathrm{m/s}\) were used, such that the Weissenberg number equals the numerical value of the relaxation time in seconds and can again be controlled by a change of \(\lambda\). As in the corresponding literature, a viscosity ratio of \(\beta=\eta_{s}/(\eta_{s}+\eta_{p})=0.5\) and a density of \(\rho=1\,\mathrm{kg/m^{3}}\) have been used.
Three purely hexahedral meshes M1, M2 and M3 of different refinement levels have been considered. Their main properties are shown in Tab. 4. During refinement, the total number of elements is multiplied by eight from mesh to mesh, while the number of elements at the sphere surface is quadrupled. It was observed that the \(C_{d}\) computation in this case was very sensitive to the overall mesh quality. Configuring the mesh for the sedimenting sphere simulations, with the goal to minimize non-orthogonalities and skewnesses on all refinement levels, did therefore play an important role during our research. Additionally, boundary layers around the sphere surface were introduced for smaller numerical errors close to the surface and, therefore, a better \(C_{d}\) accuracy. An excerpt of mesh M1 is presented in Fig. 6. Again, an end time \(T=30\) was used and a Courant number of \(0.5\) was fixed, leading to typical time-step sizes ranging from \(1.6\times 10^{-2}\,\mathrm{s}\) on M1, to \(8.0\times 10^{-3}\,\mathrm{s}\) on M2 and \(4.0\times 10^{-3}\,\mathrm{s}\) on M3. At this point, it should be mentioned that the quantity of interest in the following tests is the drag correction factor \(K\), which is typically used in sedimenting sphere benchmarks [12; 40; 41; 42; 43]. \(K\) is given by
\[K=\frac{C_{d}}{6\pi}\,, \tag{65}\]
where \(C_{d}\) is the drag coefficient value from Eq. (62) with \(\Gamma\) being the sphere surface. The definition of \(K\) is motivated by Stokes' law [44; 45].
#### 5.2.2 Results
Tab. 5 shows \(K\) values for the eigenvalue-free logarithmic Oldroyd-B formulation and varying Weissenberg numbers between \(0.1\) and \(1.5\). In most cases, the compared publications show similar values up to a magnitude of \(10^{-3}\). In comparison, our results differ slightly more, at a magnitude of \(10^{-2}\).
Figure 6: Rendering of the hexahedral mesh M1 for the sedimenting sphere benchmark.
However, the mesh convergence of our results suggests that better values could possibly be reached when considering even finer meshes M4, M5, etc. To emphasize this point, we apply a Richardson extrapolation with the discretization length \(h\) as a parameter. In our setting, we expect the error in \(C_{d}\) to scale linearly with \(h\), since we are using a piecewise linear approximation of the sphere surface when computing the integral in Eq. (62). Furthermore, the discretization length \(h\) is divided by two in each refinement step. In this case, the Richardson extrapolation value \(K_{\text{RE}}\) of the drag correction factor using the obtained values for M2 and M3 yields \(K_{\text{RE}}=2K_{\text{M3}}-K_{\text{M2}}\); for example, at \(\mathrm{Wi}=0.1\) this gives \(K_{\text{RE}}=2\cdot 5.86723-5.82977=5.90469\). The resulting values are shown in the RE column of Tab. 5 and they show a very good agreement with the compared publications, now deviating at a magnitude of \(10^{-3}\) as well. However, as already observed by Knechtges [12], the results start to deviate more from each other with increasing Weissenberg numbers, especially for \(\mathrm{Wi}\geq 1.4\). Finally, it can be noted that all data in Tab. 5 agrees on the overall trend of decreasing \(K\) values for increasing Weissenberg numbers.
Tab. 6 shows \(K\) values for the eigenvalue-free logarithmic Giesekus model, evaluated for the same variety of Weissenberg numbers as before and mobility factors \(\alpha\in\{0.1,0.01,0.001\}\). Our data agrees with the literature. We observe a noticeable mesh convergence towards the compared values. Furthermore, increasing Weissenberg numbers result in decreasing \(K\) values, which also agrees with the literature. For decreasing \(\alpha\) values, an expected convergence of \(K\) towards the corresponding values in Tab. 5 is observed.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & M1 & M2 & M3 \\ \hline Number of elements in the mesh & 139392 & 1115136 & 8921088 \\ Number of elements on the sphere surface & 1152 & 4608 & 18432 \\ Average element non-orthogonality & 11.1 & 11.7 & 12.0 \\ Maximum element non-orthogonality & 41.1 & 52.4 & 64.4 \\ Maximum element skewness & 1.7 & 1.8 & 1.8 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Mesh statistics for the sedimenting sphere geometry.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Wi} & \multicolumn{9}{c}{\(K\)} \\ \cline{2-10} & M1 & M2 & M3 & RE & [12] & [40] & [41] & [42] & [43] \\ \hline
0.1 & 5.73784 & 5.82977 & 5.86723 & 5.90469 & 5.90576 & & & \\
0.2 & 5.64635 & 5.73532 & 5.77120 & 5.80708 & 5.80763 & & & \\
0.3 & 5.53994 & 5.62529 & 5.65930 & 5.69331 & 5.69356 & 5.69368 & 5.6963 & \\
0.4 & 5.43888 & 5.52076 & 5.55300 & 5.58524 & 5.58527 & & & \\
0.5 & 5.35026 & 5.42977 & 5.46043 & 5.49109 & 5.49093 & & 5.4852 \\
0.6 & 5.27577 & 5.35396 & 5.38330 & 5.41264 & 5.41227 & 5.41225 & 5.4117 & 5.4009 \\
0.7 & 5.21468 & 5.29244 & 5.32071 & 5.34898 & 5.34838 & & 5.3411 \\
0.8 & 5.16544 & 5.24335 & 5.27092 & 5.29849 & 5.29747 & & 5.2945 \\
0.9 & 5.12649 & 5.20481 & 5.23202 & 5.25923 & 5.25761 & 5.25717 & 5.2518 \\
1.0 & 5.09638 & 5.17511 & 5.20219 & 5.22927 & 5.22700 & & 5.2240 \\
1.1 & 5.07430 & 5.15274 & 5.17989 & 5.20704 & 5.20402 & & 5.2029 \\
1.2 & 5.05872 & 5.13653 & 5.16379 & 5.19105 & 5.18733 & 5.18648 & 5.1842 & 5.1877 \\
1.3 & 5.04914 & 5.12552 & 5.15281 & 5.18010 & 5.17581 & & & 5.1763 \\
1.4 & 5.04439 & 5.11890 & 5.14608 & 5.17326 & 5.16851 & & & \\
1.5 & 5.04361 & 5.11609 & 5.14291 & 5.16973 & & 5.15293 & & \\ \hline \hline \end{tabular}
\end{table}
Table 5: Final values for the drag correction factor \(K\) at \(T=30\) for the sedimenting sphere case, using the eigenvalue-free Oldroyd-B formulation at different Weissenberg numbers. RE corresponds to the Richardson extrapolation value.
## 6 Conclusion and Outlook
In this paper, we have shown how the \(f(\text{ad}\,\mathbf{\Psi})\)-based formulation that was first introduced in [10] can be used to engineer an eigenvalue-free numerical algorithm for the log-conformation formulation.
In the course of our analysis, we have first proven the equivalence of this formulation to many other log-conformation formulations, including the original formulation by Fattal and Kupferman [1].
The new algorithm is in principle not tied to a specific discretization scheme of the resulting constitutive equation. However, in order to verify our algorithm, we have shown a working implementation in the RheoTool [13; 14] framework, which is based on OpenFOAM®[28]. The resulting implementation was successfully validated on the confined cylinder and sedimenting sphere benchmarks.
For the future, we mostly see the application of this eigenvalue-free algorithm in areas that have so far been hindered by the eigenvalue decomposition. One is certainly bringing more of these heavy computations per finite volume cell onto the GPU. Another is that the algorithm facilitates the development of semi-implicit or fully implicit discretization schemes, in the same vein as [11] facilitated the adoption of automatic differentiation methods in [46].
## 7 Acknowledgments
This work was partially funded as part of the Zentrales Innovationsprogramm Mittelstand (ZIM) project REINVEST.
In addition, the third author thanks MAGMA Giessereitechnologie GmbH for the freedom to work on cutting-edge research topics.
## Appendix A Poiseuille Inflow Conditions in the Confined Cylinder Benchmark
As written in Section 5.1, we want to prescribe a fully developed Poiseuille flow at the inflow of the confined cylinder. This poses the question of whether an easy expression to specify \(\mathbf{\Psi}\) exists. For \(\tau\) it is known that
\[\tau=\begin{pmatrix}\tau_{xx}&\tau_{xy}\\ \tau_{xy}&0\end{pmatrix}\,, \tag{101}\]
with \(\tau_{xx}=2\lambda\mu_{P}\left(\partial_{y}u_{x}\right)^{2}\) and \(\tau_{xy}=\mu_{P}\,\partial_{y}u_{x}\). The velocity \(\mathbf{u}\) is given by
\[\mathbf{u}=\begin{pmatrix}\frac{3}{8}\bar{u}\Big{(}4-\frac{(y-2R)^{2}}{R^{2}}\Big{)}\\ 0\end{pmatrix}\,, \tag{102}\]
with mean inflow velocity \(\bar{u}=1\,\text{m}/\text{s}\) in all benchmarks. The coordinate system is centered at the lower left corner of the confined cylinder domain (hence the \(-2R\) term), as depicted in Fig. 2.
The corresponding conformation tensor is thus given by
\[\mathbf{C}=\mathbf{1}+\frac{\lambda}{\mu_{P}}\tau=\mathbf{1}+ \begin{pmatrix}2l^{2}&l\\ l&0\end{pmatrix}\,, \tag{103}\]
with \(l=\lambda\,\partial_{y}u_{x}\).
We claim that \(\mathbf{\Psi}\) is given by
\[\mathbf{\Psi}=\log\mathbf{C}=\frac{1}{2}\begin{pmatrix}p-ql^{2}/o&-ql/o \\ -ql/o&p+ql^{2}/o\end{pmatrix}\,, \tag{104}\]
with
\[o =\sqrt{l^{2}(1+l^{2})}=|l|\sqrt{1+l^{2}} \tag{105}\] \[p =\log(1+l^{2})\] (106) \[q =\log\Big{(}1+2(l^{2}-o)\Big{)}=2\,\text{arsinh}\,(-|l|)\,\,. \tag{107}\]
This is the same formulation that was used for the actual computations in [11]. However, a small error crept into the formulas as printed there, which unfortunately omitted factors of \(l\) in \(\Psi_{xy}\) and \(\Psi_{yy}\). With this appendix we want to correct this error.
Coming to the proof, we split \(\mathbf{\Psi}\) into two parts: one that carries the trace and one that is traceless
\[\mathbf{\Psi}=\frac{p}{2}\mathbf{1}+\mathbf{B}\,, \tag{108}\]
with
\[\mathbf{B}=\frac{ql}{2o}\begin{pmatrix}-l&-1\\ -1&l\end{pmatrix}\,. \tag{109}\]
Note that the identity matrix \(\mathbf{1}\) and \(\mathbf{B}\) obviously commute and thus allow us to compute the matrix
exponential as two factors
\[\exp\mathbf{\Psi} =\exp\left(\frac{p}{2}\right)\exp\mathbf{B} \tag{110}\] \[=\sqrt{1+l^{2}}\,\exp\mathbf{B}\,. \tag{111}\]
In order to compute \(\exp\mathbf{B}\) it is helpful to see that the following identity holds
\[\mathbf{B}^{2}=\frac{q^{2}}{4}\mathbf{1}\,. \tag{112}\]
From this, it follows immediately
\[\mathbf{B}^{2n} =\left(\frac{q}{2}\right)^{2n}\mathbf{1} \tag{113}\] \[\mathbf{B}^{2n+1} =\left(\frac{q}{2}\right)^{2n+1}\frac{2}{q}\mathbf{B}\,. \tag{114}\]
Therefore, we can split the computation of \(\exp\mathbf{B}\) into two summands
\[\exp\mathbf{B} =\sum_{n=0}^{\infty}\frac{1}{n!}\mathbf{B}^{n} \tag{115}\] \[=\sum_{n=0}^{\infty}\frac{1}{(2n)!}\mathbf{B}^{2n}+\sum_{n=0}^{\infty}\frac{1}{(2n+1)!}\mathbf{B}^{2n+1} \tag{116}\] \[=\cosh\left(\frac{q}{2}\right)\mathbf{1}+\sinh\left(\frac{q}{2}\right)\frac{2}{q}\mathbf{B}\,. \tag{117}\]
Together with the identity \(\cosh(\mathrm{arsinh}(-|l|))=\sqrt{1+l^{2}}\) it follows
\[\exp\mathbf{B}=\frac{1}{\sqrt{1+l^{2}}}\begin{pmatrix}1+2l^{2}&l\\ l&1\end{pmatrix}\,. \tag{118}\]
In total we obtain
\[\exp\mathbf{\Psi}=\begin{pmatrix}1+2l^{2}&l\\ l&1\end{pmatrix}\,, \tag{119}\]
which is what had to be proven.
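As a quick numerical cross-check of Eq. (104), the short sketch below (added here for convenience and not part of [11]; it assumes NumPy and SciPy are available) compares the closed-form \(\mathbf{\Psi}\) against the matrix logarithm of the conformation tensor \(\mathbf{C}\) from Eq. (103).

```python
import numpy as np
from scipy.linalg import logm

def psi_closed_form(l):
    # Eqs. (104)-(107); valid for l != 0
    o = abs(l) * np.sqrt(1.0 + l * l)
    p = np.log(1.0 + l * l)
    q = 2.0 * np.arcsinh(-abs(l))
    return 0.5 * np.array([[p - q * l * l / o, -q * l / o],
                           [-q * l / o,        p + q * l * l / o]])

for l in (-1.7, 0.3, 2.5):
    C = np.array([[1.0 + 2.0 * l * l, l], [l, 1.0]])   # Eq. (103)
    assert np.allclose(psi_closed_form(l), logm(C), atol=1e-10)
print("closed-form Psi matches logm(C)")
```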
|
2307.06622 | Quantum Autoencoders for Learning Quantum Channel Codes | This work investigates the application of quantum machine learning techniques
for classical and quantum communication across different qubit channel models.
By employing parameterized quantum circuits and a flexible channel noise model,
we develop a machine learning framework to generate quantum channel codes and
evaluate their effectiveness. We explore classical, entanglement-assisted, and
quantum communication scenarios within our framework. Applying it to various
quantum channel models as proof of concept, we demonstrate strong performance
in each case. Our results highlight the potential of quantum machine learning
in advancing research on quantum communication systems, enabling a better
understanding of capacity bounds under modulation constraints, various
communication settings, and diverse channel models. | Lakshika Rathi, Stephen DiAdamo, Alireza Shabani | 2023-07-13T08:37:21Z | http://arxiv.org/abs/2307.06622v1 | # Quantum Autoencoders for Learning Quantum Channel Codes
###### Abstract
This work investigates the application of quantum machine learning techniques for classical and quantum communication across different qubit channel models. By employing parameterized quantum circuits and a flexible channel noise model, we develop a machine learning framework to generate quantum channel codes and evaluate their effectiveness. We explore classical, entanglement-assisted, and quantum communication scenarios within our framework. Applying it to various quantum channel models as proof of concept, we demonstrate strong performance in each case. Our results highlight the potential of quantum machine learning in advancing research on quantum communication systems, enabling a better understanding of capacity bounds under modulation constraints, various communication settings, and diverse channel models.
Channel coding, quantum communication, quantum machine learning, quantum Shannon theory, quantum channel capacity, classical-quantum communication.
## I Introduction
In classical coding theory, machine learning (ML) has emerged as a powerful tool for generating communication codes that nearly achieve channel capacity for various channel models. Promising results using ML have been demonstrated for learning channel codes [1, 2, 3, 4]. Specifically, in these works, autoencoders have been proposed as an alternative model for communication systems. An autoencoder is a type of neural network used for unsupervised learning. It consists of an encoder neural network that can compress the input data into a lower-dimensional representation, and a decoder neural network that reconstructs the original input from the compressed representation. The network learns to minimize the reconstruction error.
In these past works, the authors map the original models introduced by Claude Shannon [5] to layers of a neural network representing the encoder, the noisy channel, and the decoder. By framing the training problem as a classification problem, the neural network can learn how to overcome the channel noise while simultaneously compressing the data that is transmitted. Moreover, particularly in [2], the cost function is set such that the autoencoder is encouraged to increase the mutual information of the code to better train for a capacity achieving code.
In this work, our objective is to apply the insights gained from previous studies to the realm of quantum communication. Quantum autoencoder models have been swiftly adopted across various research domains [6, 7, 8, 9, 10, 11]. In these contexts, quantum autoencoders draw inspiration from variational quantum algorithms [12], which iteratively update parameters to learn how to compress and decompress the Hilbert spaces encompassing quantum systems. By operating within a compressed, latent space, these models achieve more efficient computations within smaller Hilbert spaces. This becomes particularly valuable during the NISQ era of quantum computing, where resource conservation is of high importance.
In information theory, channel capacity theorems are often proven non-constructively, leaving the challenging task of determining a capacity-achieving code open-ended. This holds true for quantum communication as well, where, for example, the Holevo capacity outperforms the classical capacity [13] for certain channels, but producing practical joint-detection receivers (JDRs) remains a challenging problem [14, 15]. By using quantum autoencoders for training communication systems, we believe it is possible to arrive at easier-to-build JDRs that
Fig. 2: An example of a parameterized information-pooling circuit, where \(U_{\pi_{i}}\) are arbitrary rotation gates parameterized by \(\pi_{i}\in\mathbb{R}^{3}\).
Fig. 1: The circuit model for classical communication. A message \(s\in\mathcal{M}\) is encoded into a series of qubits \(|s_{i}\rangle\) and input to a parameterized encoder. Next, the channel effects \(\mathcal{N}\) are applied. Once through the channel, a parameterized decoder is applied, and a series of outputs \(\hat{s}_{i},i\in[n]\) are collected.
can allow for higher communication rates approaching the theoretical limits. This is highly aligned with the objective of future communication systems, especially in 6G networks. 6G networks aim to incorporate quantum features into the communication networks to enhance performance. Using our framework, we can, for example, learn the optimal channel codes under the constraints of what 6G networks can do.
In previous literature, autoencoders have been extensively used in a classical setting to generate channel codes for noisy classical channels [1, 2, 4]. While these concepts have been explored in the classical domain, their application to quantum communication remains unexplored. Hence, in this work, we investigate this unexplored direction. Our approach involves developing a framework to analyze quantum channel codes and using it to evaluate the performance of learned codes across various channels. We focus on three communication settings: 1) Classical-quantum communication; 2) Incorporating shared entanglement resources into the model for entanglement-assisted (EA) communication; and 3) Quantum communication. Our investigations yield promising results, with the models efficiently learning encoding and decoding parameters and approaching theoretical capacities in all cases.
## II Channel Coding with Quantum Autoencoders
We explore the performance of training parameterized circuits to approach the classical, EA-classical, and quantum capacities of various channels. Each communication setting has a slightly different configuration, but there is a general structure that allows us to easily modify the framework to accommodate each of them.
In the classical communication setting, the communication process is the following: a sender chooses a classical message from a codebook \(s\in\mathcal{M}\), which is then encoded into a quantum state \(\rho_{s}\) using a unitary operation \(E_{\theta}\). The parameterized operation \(E_{\theta}\) has a particular circuit structure and a \(k\)-length vector \(\theta\in\mathbb{R}^{k}\) defines the operations. The encoded state \(\rho_{s}\) is transmitted through a channel \(\mathcal{N}\), a completely positive, trace preserving map. The resulting state \(\mathcal{N}(E_{\theta}(\rho_{s}))\) is processed by a decoder \(D_{\phi}\), which is parameterized by \(\phi\in\mathbb{R}^{k^{\prime}}\), where \(k^{\prime}\) can be different than \(k\). The measurement of the state yields a classical message \(\hat{s}\) approximating \(s\) and defines a conditional distribution \(p(\hat{s}|s)\). This setting is illustrated in Fig. 1. In some cases, we introduce redundant qubits to test repetition codes, for example. In these cases, a parameterized "pooling" circuit (Fig. 2) is incorporated in the decoder, placed between \(D_{\phi}\) and the measurements. To analyze the quality of the learned channel code, we calculate the respective mutual information using input-output values over the channel and compare it to the known channel capacities.
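A minimal PennyLane sketch of this pipeline is given below. It is an illustration only, not the circuit ansatz used for the reported results: the qubit count, the gate layout of the encoder and decoder, and the choice of a bit-flip channel as \(\mathcal{N}\) are our assumptions.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 2
dev = qml.device("default.mixed", wires=n_qubits)  # mixed-state simulator, supports channels

@qml.qnode(dev)
def classical_code(theta, phi, bits, p=0.1):
    qml.BasisState(np.array(bits), wires=range(n_qubits))   # embed the message s
    for w in range(n_qubits):                                # encoder E_theta
        qml.Rot(*theta[w], wires=w)
    qml.CNOT(wires=[0, 1])
    for w in range(n_qubits):                                # channel noise N
        qml.BitFlip(p, wires=w)
    qml.CNOT(wires=[0, 1])                                   # decoder D_phi
    for w in range(n_qubits):
        qml.Rot(*phi[w], wires=w)
    return qml.probs(wires=range(n_qubits))                  # estimates p(s_hat | s)

theta = np.random.uniform(0, 2 * np.pi, (n_qubits, 3))
phi = np.random.uniform(0, 2 * np.pi, (n_qubits, 3))
print(classical_code(theta, phi, bits=[0, 1]))
```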
For EA communication, additional qubits are introduced to the system and are entangled with the communication system before communication occurs. In this case, we assume the entanglement is distributed in a noiseless way, but we do not presume which state the entanglement was in to start with. Pairs of qubits are therefore firstly entangled using parameterized entangling operations with a gate set forming \(K_{\lambda}\), where \(\lambda\) is the parameter vector. For the encoding step, the classical message is fed into the encoder such that controlled gates can manipulate the sender's half of the entangled state. The framework applies channel noise to the sender's half and the other half of the system does not experience the channel noise. The receiver then uses the total system for message decoding. This setting is illustrated in Fig. 3. Again to analyze the quality of the learned EA channel code, we calculate the respective mutual information.
The quantum capacity of a quantum channel is defined differently from the classical capacities. In this case, it is not a direct measure of the rate at which quantum states are transmitted; rather, it can be thought of as a measure of how well a quantum channel preserves entanglement. To compute the quantum capacity of a quantum channel, the communication task is therefore to first generate a maximally entangled bipartite system and then apply the encoding, channel model, and decoding to one half of the system. The quantity to maximize is no longer the mutual information, but rather the quantum coherent information defined as \(I(A\rangle B)_{\rho}\coloneqq S(B)_{\rho}-S(AB)_{\rho}\), where
Fig. 4: The circuit model for quantum communication. A maximally entangled quantum state is taken as input where part of the system is fed through a parameterized encoder, the noise model, and a parameterized decoder. The density matrix representing the final state is taken as output. The red dashed line represents the system separation where some of the state remains with the sender.
Fig. 3: The circuit model for EA-classical communication. A parameterized circuit generates entanglement using learned entanglement resources. A message \(s\in\mathcal{M}\) is fed into the encoder and half of the qubits are encoded by a parameterized encoder. Next, the channel effects are applied to half of the qubits. Once through the channel, a parameterized decoder is applied to the whole system, and a series of outputs \(\hat{s}_{i},i\in[n]\) are collected. The red dashed line represents the system separation.
\(S(X)_{\rho}\) is the von Neumann entropy of state \(\rho\) restricted to subsystem \(X\). To simulate this in the framework, we apply parameterized encoding and decoding gates to the subsystem that undergoes channel noise as depicted in Fig. 4.
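For reference, the coherent information of a channel output can be evaluated directly from the joint density matrix. The short NumPy sketch below is our own illustration (it is not taken from the framework); it uses a bit-flip channel acting on one half of a Bell pair as the example and computes \(S(B)-S(AB)\) from eigenvalues.

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy S(rho) in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def coherent_information(rho_ab, dim_a, dim_b):
    """I(A>B) = S(B) - S(AB), tracing out subsystem A (ordered first)."""
    rho_b = np.trace(rho_ab.reshape(dim_a, dim_b, dim_a, dim_b), axis1=0, axis2=2)
    return entropy(rho_b) - entropy(rho_ab)

# Example: send one half of |Phi+> through a bit-flip channel with flip probability p.
p = 0.1
phi_plus = np.zeros((4, 1)); phi_plus[0] = phi_plus[3] = 1 / np.sqrt(2)
rho = phi_plus @ phi_plus.T
X = np.array([[0.0, 1.0], [1.0, 0.0]])
kraus = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * X]
rho_out = sum(np.kron(np.eye(2), K) @ rho @ np.kron(np.eye(2), K).T for K in kraus)
print(coherent_information(rho_out, 2, 2))   # 1 - h2(p), about 0.531 for p = 0.1
```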
In this initial work, we have set up the framework for learning quantum channel codes and analyzed learned codes for different channels. In particular, we study the quantum bit-flip, phase-flip, depolarizing, and amplitude damping channels for \(p=1/2\) and \(p=1\). The Kraus operators modeling the behavior of these channels are the following (a short trace-preservation check is sketched after the list):
1. **Bit-flip**: \[K_{0}=\sqrt{1-p}\begin{bmatrix}1&0\\ 0&1\end{bmatrix},K_{1}=\sqrt{p}\begin{bmatrix}0&1\\ 1&0\end{bmatrix}\] (1)
2. **Phase-flip**: \[K_{0}=\sqrt{1-p}\begin{bmatrix}1&0\\ 0&1\end{bmatrix},K_{1}=\sqrt{p}\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}\] (2)
3. **Depolarizing**: \[K_{0}=\sqrt{1-p}\begin{bmatrix}1&0\\ 0&1\end{bmatrix},K_{1}=\sqrt{p/3}\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}\] (3) \[K_{2}=\sqrt{p/3}\begin{bmatrix}0&1\\ 1&0\end{bmatrix},K_{3}=\sqrt{p/3}\begin{bmatrix}0&-i\\ i&0\end{bmatrix}\]
4. **Amplitude Damping**: \[K_{0}=\sqrt{1-p}\begin{bmatrix}1&0\\ 0&\sqrt{1-\gamma}\end{bmatrix},K_{1}=\begin{bmatrix}0&\sqrt{\gamma(1-p)}\\ 0&0\end{bmatrix}\] \[K_{2}=\sqrt{p}\begin{bmatrix}\sqrt{1-\gamma}&0\\ 0&1\end{bmatrix},K_{3}=\begin{bmatrix}0&0\\ \sqrt{\gamma p}&0\end{bmatrix}\] (4)
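As a quick sanity check of the Kraus sets above (our own snippet; the parameter values are arbitrary), one can verify trace preservation, \(\sum_{k}K_{k}^{\dagger}K_{k}=\mathbb{1}\):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def trace_preserving(kraus):
    return np.allclose(sum(K.conj().T @ K for K in kraus), I2)

p, gamma = 0.3, 0.4   # arbitrary test values
channels = {
    "bit-flip":     [np.sqrt(1 - p) * I2, np.sqrt(p) * X],
    "phase-flip":   [np.sqrt(1 - p) * I2, np.sqrt(p) * Z],
    "depolarizing": [np.sqrt(1 - p) * I2] + [np.sqrt(p / 3) * P for P in (Z, X, Y)],
    "amp. damping": [np.sqrt(1 - p) * np.diag([1, np.sqrt(1 - gamma)]),
                     np.array([[0, np.sqrt(gamma * (1 - p))], [0, 0]]),
                     np.sqrt(p) * np.diag([np.sqrt(1 - gamma), 1]),
                     np.array([[0, 0], [np.sqrt(gamma * p), 0]])],
}
for name, ks in channels.items():
    print(name, trace_preserving(ks))   # all True
```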
We configure the model by combining a parameterized encoder, channel, and parameterized decoder, along with optional components such as the pooling layer. This configuration can be trained using the following software tools. Our framework leverages the JAX interface [16] of the Pennylane software library [17] for quantum simulation. For parameter updating, we employ a variant of the Adam optimizer [18]. In the classical communication case, we employ the average cross-entropy loss [19] as the cost function. In the quantum communication case, we use the trace distance between the output state and the reference state. The training flow is depicted in Fig. 5. Once the model is trained for a specific channel, a set of test messages is used to evaluate the code. In the classical case, we compute the mutual information, while in the quantum case, we calculate the quantum coherent information.
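The two cost functions mentioned above can be written compactly; the JAX sketch below is a hedged illustration rather than the exact implementation used in the framework. Gradients of either loss with respect to the circuit parameters can then be passed to an Adam-type optimizer.

```python
import jax.numpy as jnp

def average_cross_entropy(probs, labels, eps=1e-12):
    """probs: (batch, |M|) decoder output distributions; labels: (batch,) sent messages."""
    picked = probs[jnp.arange(labels.shape[0]), labels]
    return -jnp.mean(jnp.log(picked + eps))

def trace_distance(rho, sigma):
    """T(rho, sigma) = 0.5 * ||rho - sigma||_1 via eigenvalues of the Hermitian difference."""
    ev = jnp.linalg.eigvalsh(rho - sigma)
    return 0.5 * jnp.sum(jnp.abs(ev))
```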
## III Training Results
In this section, we present the results of training the circuit models and the evaluation of the final trained codes in their ability to achieve capacity. We explore the classical and quantum communication scenarios.
### _Classical Communication_
Classical communication over quantum channels can be done in a variety of ways and therefore there are four main capacities to consider when no additional entanglement resources are used: 1) Separable-state encoding and decoding \(C_{ss}\); 2) Separable-state encoding and joint measurement decoding \(C_{sj}\), also known as the Holevo capacity [20]; 3) Entangled-state encoding and separable measurement decoding \(C_{es}\); and 4) Entangled-state encoding and joint decoding \(C_{ej}\)[21]. Depending on the desired model, our framework allows testing any of these cases.
For the demonstration of our framework in the classical communication setting without entanglement, we have tested various code models for the (a) Bit-flip channel, (b) Depolarizing channel, and (c) \(p=1\) amplitude damping channel. The results of training the code are shown in Fig. 6(a)-(c). For the bit-flip and depolarizing channels, the framework was able to train the circuit parameters to produce a code that meets the channel capacity. We can see that by adding an encoding operation to the bit-flip channel, the Holevo capacity is exceeded since it becomes possible to encode in the \(X\) basis. For the depolarizing channel, encoding at the sender's side does not enhance the capacity of the channel in this case and the Holevo capacity is equal to \(C_{ej}\).
The \(p=1\) amplitude damping channel is a non-unital channel, which means that the accessible information between channel uses is non-additive, making the determination of channel capacity more challenging. For the \(p=1\) amplitude damping channel, the Holevo quantity serves as a lower bound for the capacity. Initially, without encoding on the sender side and using a standard basis embedding, the output of our framework produced a code with a sub-optimal capacity, lower than the lower bound in the single qubit case. To further explore this, we introduced repetition and pooling techniques (see Fig. 2) in the encoding and decoding scheme. Although the repetition code uses the channel more times, the circuit with pooling yields a single output bit. We compute the mutual information (without regularization) using this single bit and
Fig. 5: A flow diagram of how the framework executes. For classical message transmission a training collection of messages \(\{m_{i}\}_{i=1}^{k}\) are selected and the initial parameters \((\theta_{0},\phi_{0},\lambda_{0},\pi_{0})\) are set. The data is fed into the parameterized circuit which outputs either classical measurement results or a density matrix. The cost function is computed and the parameters are updated. This repeats \(n\) times. After \(n\) iterations, the final parameters are used to evaluate the code against a test set of messages, when applicable for classical communication.
observe that for \(\gamma\) values greater than 0.5, repetition and pooling improve the results, where the solid lines in the plot represent the no-encoding scheme. When we introduce a parameterized encoding circuit (represented by the dashed lines) with the same pooling scheme, we observe a significant improvement in the mutual information of the code. We present known upper and lower bounds for the total capacity of the channel using the dash-dotted lines.
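The (unregularized) mutual information quoted here can be estimated directly from the empirical counts of sent and decoded messages on the test set. The snippet below is our own illustration with made-up counts, not output of the framework.

```python
import numpy as np

def mutual_information(counts):
    """I(S; S_hat) in bits from a matrix counts[s, s_hat] of test-set outcomes."""
    joint = counts / counts.sum()
    p_s = joint.sum(axis=1, keepdims=True)
    p_hat = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (p_s @ p_hat)[mask])))

# e.g. a binary code whose decoded bit is flipped 10% of the time:
print(mutual_information(np.array([[450, 50], [50, 450]])))  # ~0.531 = 1 - h2(0.1)
```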
### _Entanglement-Assisted Classical Communication_
When considering entanglement-assisted (EA) communication, important considerations include determining the type of entanglement to be used (e.g., Bell states, GHZ states, etc.) and how these states should be encoded with classical information. In our framework, we delegate these complex decisions to the training process. The model in this case, as depicted in Fig. 3, parameterizes the initial entanglement generation and then the classical encoding step. Then, part of the system gets the noise effects from the channel and some remains noiseless. Upon arrival, the receiver decodes the total state also using a parameterized circuit. We analyzed the EA capacity for three channels: (a) Phase-flip, (b) Depolarizing, and (c) \(p=1/2\) amplitude damping. The results in Fig. 6(d)-(f) show that the framework is able to train the parameterized circuit model to find the optimal parameters that nearly meet or do meet the channel capacity in each case. In these cases, the channel models are unital channels and so their capacities are known. They are found by using standard super-dense coding which our trained model successfully reproduced.
In these results, we assumed that the stored entanglement remains noiseless while the sender makes transmissions, but we also test the case where a small amount of depolarizing noise affects the stored qubit. In Fig. 6(d)-(f), represented by the orange and green crosses, we also show the result of depolarizing noise applied to the idle qubit to simulate the effects of memory. We assume the sender's qubit undergoes a depolarizing channel with \(p_{i}=0.05\) and \(p_{i}=0.10\) during transmission. We see that the noisy memory diminishes the rate and that the framework could still train the model to achieve the maximum rate. We show the results in Fig. 6(d)-(f) with red crosses.
### _Quantum Communication_
The final communication scenario we considered for our framework is the quantum communication setting. In this scenario, we focused on transmitting parts of maximally entangled states and evaluating the preservation of entanglement. Based on the configuration in Fig. 4, we set the model for training. In quantum communication, channels that are "degradable" have a single-letter quantum capacity formula [22], making them easier to analyze. The phase-flip and \(p=1/2\) amplitude damping channels are such channels, whereas the depolarizing channel is not. We show the training results for determining the quantum capacity for those channels in Fig. 6(g)-(i). Notably, for degradable channels, a single maximally entangled state meets the capacity, and our framework successfully reproduces this result. However, the capacity for the depolarizing channel is generally unknown. We explore various methods of transmitting entanglement over this channel. The blue crosses in Fig. 6(h) show the single unit of entanglement. The red and black crosses show the regularized performance of transmitting \(n-1\) parts of an \(n\)-qubit GHZ state. As observed, the performance is worse, but intriguingly, for larger values of \(p\), this form of entanglement outperforms the single-qubit case, which is already a known fact. To highlight this advantage, we provide a zoomed-in plot within (h) on a logarithmic scale, revealing that the 4- and 5-qubit cases remain positive for larger values of \(p\). As in the entanglement-assisted case, we show the effects of memory noise at the sender in (g) and (i) on the capacity.
In summary, there is much more to learn about quantum capacities, and our framework serves as a valuable tool for deeper investigations. The distribution of entanglement is a fundamental aspect of future quantum networks, and understanding the ultimate rates of entanglement distribution holds significant importance.
## IV Conclusion and Outlook
In conclusion, we have developed a quantum machine learning framework for training quantum channel codes over different channel models and constraints. Our framework has demonstrated its effectiveness in learning capacity-achieving codes for qubit channels. We have modeled classical, entanglement-assisted classical, and quantum capacities, and in each case, our framework has shown strong performance. The trained models have achieved capacities that are close to the known limits for unital channels, enabling straightforward analysis of various error-correcting codes. Furthermore, our framework has allowed us to observe quantum effects, such as super-additivity, with just a few lines of software code.
Although we have covered a vast subset of theory in this work, we strongly believe that quantum machine learning holds tremendous potential for future quantum communication systems, particularly in the era of 6G networks where strong constraints will have to be imposed to include quantum features. Our framework can be extended in numerous ways to explore various communication scenarios and channel capacities, including private and zero-error capacities, as well as adaptability to multiple input-output channels. Additionally, we foresee promising outcomes by exploring different pooling strategies [23] to facilitate efficient joint-detection receiver designs. Furthermore, while this work focuses on qubits, conducting an in-depth study of continuous variable communication can yield novel insights for optical communication, and is a project we are currently undertaking. In conclusion, our research introduces a new domain of investigation, employing ML techniques for quantum communication, and establishes the foundation for future explorations.
## Acknowledgements
The authors thank Bing Qi, Hassan Shapourian, and Ionel Miu for helpful discussions.
Fig. 6: The training results for various channel codes for classical, EA-classical, and quantum communication produced by the framework for various channel models. In plots (a) and (b), we show with green circles how adding an encoding layer affects the code capacity. In plot (c), we show the results of using a repetition code with a pooling layer. In the plots with orange and green curves from (d)-(i), we show how memory decoherence on the sender’s side affects the code capacity using depolarizing noise with idler probability \(p_{i}\). In (h), we show how GHZ states of varying sizes displays super-additivity in the zoomed-in plot. |
2305.10639 | Indium-Tin-Oxide for High-performance Electro-optic Modulation | Advances in opto-electronics are often led by discovery and development of
materials featuring unique properties. Recently the material class of
transparent conductive oxides (TCO) has attracted attention for active photonic
devices on-chip. In particular Indium Tin Oxide (ITO) is found to have
refractive index changes on the order of unity. This property makes it possible
to achieve electro-optic modulation of sub-wavelength device scales, when thin
ITO films are interfaced with optical light confinement techniques such as
found in plasmonics; optical modes are compressed to nanometer scale to create
strong light-matter-interactions. Here we review efforts towards utilizing this
novel material for high-performance and ultra-compact modulation. While high
performance metrics are achieved experimentally, there are open questions
pertaining the permittivity modulation mechanism of ITO. Furthermore, we show
that a footprint-saving waveguide inline cavity can enhance obtainable
extinction-ratio to insertion-loss ratios by about one order of magnitude over
non-cavity based version. Moreover, we offer a speed analysis that shows that
the device is resistance limited, but not capacitance or drift-carrier limited.
Interestingly, two bias options exist for ITO and we find that a
side-connection enables devices that should in principle enable several hundred
of GHz fast devices, using our routinely achievable ITO film resistivities.
Finally, we offer a brief discuss about footprint savings of compact ITO
modulators showing a 3-orders of magnitude smaller footprint over Silicon
photonic MZI-based modulators. | Zhizhen Ma, Zhuoran Li, Behrouz Movahhed Nouri, Ke Liu, Chenran Ye, Hamed Dalir, Volker J. Sorger | 2023-05-18T01:32:48Z | http://arxiv.org/abs/2305.10639v1 | # Indium-Tin-Oxide for High-performance Electro-optic Modulation
###### Abstract
Advances in opto-electronics are often led by discovery and development of materials featuring unique properties. Recently the material class of transparent conductive oxides (TCO) has attracted attention for active photonic devices on-chip. In particular Indium Tin Oxide (ITO) is found to have refractive index changes on the order of unity. This property makes it possible to achieve electro-optic modulation of sub-wavelength device scales, when thin ITO films are interfaced with optical light confinement techniques such as found in plasmonics; optical modes are compressed to nanometer scale to create strong light-matter-interactions. Here we review efforts towards utilizing this novel material for high-performance and ultra-compact modulation. While high performance metrics are achieved experimentally, there are open questions pertaining the permittivity modulation mechanism of ITO. Furthermore, we show that a footprint-saving waveguide inline cavity can enhance obtainable extinction-ratio to insertion-loss ratios by about one order of magnitude over non-cavity based version. Moreover, we offer a speed analysis that shows that the device is resistance limited, but not capacitance or drift-carrier limited. Interestingly, two bias options exist for ITO and we find that a side-connection enables devices that should in principle enable several hundred of GHz fast devices, using our routinely achievable ITO film resistivities. Finally, we offer a brief discuss about footprint savings of compact ITO modulators showing a 3-orders of magnitude smaller footprint over Silicon photonic MZI-based modulators. Such compact performance can play a key role in emerging ASICs such as photonic tensor core accelerators for machine learning. Lastly, we review a variety of optical and electrical properties of ITO for different processing conditions, and show that ITO-based plasmonic electro-optic modulators have the potential to significantly outperform diffraction-limited devices.
## 1 Introduction
A potentially viable way of fulfilling both the size and power requirements of future photonic integrated circuit (PIC) technology lies in down-scaling opto-electronic devices beyond the diffraction limit of light [1-4]. The advantage of such sub-diffraction-limited photonics is two-fold: reduced optical power requirements and reduced physical device size. To elaborate on this, while being physically compact, the optical mode confinement of such components can strongly enhance light-matter interactions (LMI) [5,6], which in turn can reduce the drive power required to obtain the desired effect, e.g. signal modulation or optical non-linearities [7,8]. In order to address these demands, photonic components and even circuits based on surface plasmon polaritons (SPPs), collective oscillations of electrons at metal-dielectric interfaces, are thought of as a solution for nanoscale PICs [9]. However, while SPP-based schemes have been explored before, many do not offer both sub-wavelength confinement beyond the diffraction limit and long enough interaction lengths, making these designs often unsuitable for nanoscale photonic integration [10, 11]. As a result, the use of plasmonics for photonic on-chip solutions, in particular for optical interconnects, remained uncertain until recently. However, emerging materials such as the recently explored transparent conductive oxides (TCOs)
2306.11077 | Hidden Cooling Flows in Clusters of Galaxies III: Accretion onto the
Central Black Hole | Recently, we have uncovered Hidden Cooling Flows (HCF) in the X-ray spectra
of the central Brightest Galaxies of 11 clusters, 1 group and 2 elliptical
galaxies. Here we report such flows in a further 15 objects, consisting of 8
clusters, 3 groups, 3 ellipticals and 1 Red Nugget. The mass cooling rates are
about 1 Msun/yr in the ellipticals, 2 to 20 Msun/yr in the groups and 20 to 100
Msun/yr in regular clusters. The Red Nugget, MRK1216, has an HCF of 10 Msun/yr.
We review the fate of the cooled gas and investigate how some of it might
accrete onto the central black hole. The gas is likely to be very cold and to
have fragmented into low mass stars and smaller objects before being swallowed
whole, with little luminous output. If such a scenario is correct and operates
at a few Msun/yr then such objects may host the fastest growing black holes in
the low redshift Universe. We briefly discuss the relevance of HCF to the
growth of early galaxies and black holes. | A. C. Fabian, J. S. Sanders, G. J. Ferland, B. R. McNamara, C. Pinto, S. A. Walker | 2023-06-19T17:26:27Z | http://arxiv.org/abs/2306.11077v1 | # Hidden Cooling Flows in Clusters of Galaxies III: Accretion onto the Central Black Hole
###### Abstract
Recently, we have uncovered Hidden Cooling Flows (HCF) in the X-ray spectra of the central Brightest Galaxies of 11 clusters, 1 group and 2 elliptical galaxies. Here we report such flows in a further 15 objects, consisting of 8 clusters, 3 groups, 3 ellipticals and 1 Red Nugget. The mass cooling rates are about \(1\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\) in the ellipticals, 2 to \(20\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\) in the groups and 20 to \(100\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\) in regular clusters. The Red Nugget, MRK 1216, has an HCF of \(10\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\). We review the fate of the cooled gas and investigate how some of it might accrete onto the central black hole. The gas is likely to be very cold and to have fragmented into low mass stars and smaller objects before being swallowed whole, with little luminous output. If such a scenario is correct and operates at a few \(\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\) then such objects may host the fastest growing black holes in the low redshift Universe. We briefly discuss the relevance of HCF to the growth of early galaxies and black holes.
keywords: galaxies: clusters: intracluster medium
## 1 Introduction
We have recently found Hidden Cooling Flows in clusters and groups of galaxies, as well as a couple of nearby elliptical galaxies (HCFI and HCFII) (Fabian et al., 2022, 2023), using spectra from the XMM Reflection Grating Spectrometer (RGS). These soft X-ray-emitting flows are hidden within photoelectrically-absorbing cold clouds and dust near the centres of the central brightest galaxies. They represent the cooler inner parts of larger, wider-scale cooling flows. AGN feedback acts to reduce the main cooling flow in the larger body of these objects but the inner parts drop from direct view behind cold absorbing clouds. The total mass cooling rates can be 20 to 50 per cent or more of the unabsorbed rates inferred earlier from X-ray imaging studies.
The findings again raise the "cooling flow problem": what happens to the cooled gas? HCF mass cooling rates of tens of Solar masses per year in regular clusters and \(1\,\rm M_{\odot}\,yr^{-1}\) in early-type galaxies, lasting \(\sim 8\,\rm Gyr\) (since redshift \(z=1\)), mean almost \(10^{11}\,\rm M_{\odot}\) and \(10^{10}\,\rm M_{\odot}\), respectively, of accumulated cooled gas. Where does it go? The issue is not new1 but has largely been ignored for the past 2 decades, even at the low rates allowed without absorption (see Liu et al. (2019) and Section 2).
Footnote 1: We do not repeat here the history of absorption studies in cooling flows, which is discussed in HCFI and II.
We have proposed and discussed several possibilities, namely that a) the gas cools to invisibility (i.e. so cold that it radiates little), b) the cooled gas fragments into low mass stars and substellar objects, c) cooled gas is dragged out from the centre by the bubbling action of AGN feedback. a) and b) mean that there is increasing unseen mass of gas and/or low mass stars at the centres of these objects. c) may be consistent with observed metal abundance profiles. These possibilities are not of course mutually exclusive.
Here we investigate how much cooled gas can end up in the central black holes. Many of the most massive black holes at low redshift lie in Brightest Cluster Galaxies (McConnell & Ma, 2013; Bogdan et al., 2018), and we include a couple here, including Holm 15A, the central galaxy of A85, which has a black hole of mass \(4\times 10^{10}\,\rm M_{\odot}\) (Mehrgan et al., 2019). There is some evidence that the black hole to galaxy stellar mass ratio of early-type galaxies has increased significantly from \(z=1\) to the present day (Farrah et al., 2023). Since massive black holes can swallow stars whole, such accretion need not be luminous.
We now search for hidden cooling flows in 8 cool core clusters, 3 X-ray luminous groups and 4 relatively isolated elliptical galaxies, including a Red Nugget. They are found in all objects and have the typical mass cooling rates found in HCFI and II. One is the very X-ray luminous cluster, ZW3146, at medium redshift \(z\sim 0.3\), the results for which compare well with other high luminosity clusters found at similar and higher redshift. A significant part of its high cooling rate of \(\sim 1000\,\rm M_{\odot}\,yr^{-1}\) goes into observed normal star formation, but this is unlikely to be a long-lived situation. As discussed in HCFII (Fabian et al., 2023), rapid accretion onto the central black hole has
the potential to turn it into a luminous quasar, as seen in the Phoenix cluster, perhaps ending in a massive outburst such as has occurred in MS 0735+7421 (McNamara et al., 2005).
We then speculate whether hidden accretion is taking place onto the central black holes of HCFs, hidden in the sense of being unobserved because the infalling matter consists of low mass stars etc. which are swallowed whole without emitting radiation. Finally, we speculate on high pressure star formation, which occurs in HCFs, and discuss its relevance to early galaxy formation and in particular to the origin of "red nuggets".
## 2 Spectral Analysis
The objects and data used are listed in Table 1. SAS 20.0 was used for the data reduction. The spectra were extracted using rgsproc with a 95% extraction in PSF width (corresponding to 1.7 arcmin) and 95% in pulse-height distribution. To create Good Time Intervals (GTIs), light curves were created for each RGS instrument from CCD 9, with events with flag values of 8 or 16, extracted with a cross-dispersion angle of greater than \(1.5\times 10^{-4}\), in time bins of 200 s. The GTIs were created when the rate was below 0.3 cts/s. Background spectra were created with rgsbkgmodel. The spectra and background spectra from RGS1 and RGS2 were combined using rgscombine before spectral fitting. The spectra were then analysed using xspec (Arnaud, 1996) over the wavelength range of \(8-22\)A, where the background is minimised.
The spectral model used is tbabs(gsmooth*apec+gsmooth(partcov*mlayerz)mkcflow). The intrinsic absorption model mlayerz (see HCFII for details) represents a sequence of interleaved emission and absorption layers with a total column density \(N_{\rm H}\) listed in Table 2. TBABS is the Galactic absorption in the direction of the target. APEC is a constant temperature thermal emission model which represents the outer cluster gas. Its temperature is also used as the hotter temperature in the cooling flow mkcflow model. partcov enables the measurement of the total mass cooling rate of both unabsorbed and absorbed components. A covering fraction of one means that all the cooling flow component is absorbed and if zero then none is absorbed. The model assumes no particular geometry for the absorbed and unabsorbed components. It does assume that all absorbed components are identical. The minimum temperature of the cooling flow model is set at 0.1 keV.
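For readers wishing to reproduce a setup of this kind, a heavily hedged PyXspec sketch is given below. It is not the authors' script: mlayerz is their custom local model and must be built and loaded separately, the package name and file paths are placeholders, and only the overall model expression follows the description above.

```python
import xspec

# Load the custom local-model package containing mlayerz (placeholder name/path).
xspec.AllModels.lmod("mlayerz", "/path/to/local/models")

xspec.AllData("rgs12_combined.pha")        # combined RGS1+RGS2 spectrum
xspec.AllData.ignore("**-0.56 1.55-**")    # keep roughly the 8-22 A band (~0.56-1.55 keV)

m = xspec.Model("tbabs*(gsmooth*apec + gsmooth*(partcov*mlayerz)*mkcflow)")
m.mkcflow.lowT = 0.1                       # minimum cooling-flow temperature (keV)
xspec.Fit.statMethod = "chi"
xspec.Fit.perform()
```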
The spectra are shown in the Appendix as Figs B1 to B15, together with contour plots of absorbed mass cooling rate (\(\dot{M}_{\rm a}\)) versus intrinsic column density \(N_{\rm H}\) and covering fraction.
Since the RGS is a slitless spectrometer (den Herder et al., 2001) there is some blurring of the energy scale associated with extended sources. This is included in the spectral model by smoothing the
Figure 1: RGS spectrum of ZW3146 with the HCF component shown in red and the mkcflow component dotted; Mass cooling rate in \(\rm M_{\odot}\,yr^{-1}\) versus total column density in units of \(10^{22}\,\rm cm^{-2}\); Mass cooling rate versus Covering Fraction of the HCF component. Contours at 68% (red), 90% (green) and 99% (blue).
Figure 3: Mass cooling rates, classical imaging rate from (Hudson et al., 2010) (black), if available, and spectroscopic HCF rate (red). Objects: 1) 2A0335; 2) A85; 3) A496; 4) A2597; 5) S159, 6) A262, 7) A2052; 8) Cen; 9) Per, 10) A2199 11) NGC1550 and 12) NGC5044. The average ratio of red (HCF) to black (classical) is 0.45.
Figure 2: Flux from cooling flow emerging in lines (blue) and continuum (green) (top panel); as a fraction (bottom panel).
spectral components with separate gaussian kernels for the outer APEC component and inner HCF. When making the contour plots for the less bright objects we often needed to freeze the smoothing parameters to their best fit values in order to have convergence. Detailed spectral results are given in Table 2 and are compared with data from other wavebands in Table 3.
## 3 The spectral results
As noted in HCFII, \(\chi^{2}\)-space for the HCF model is often corrugated which can lead to complex contour plots. We are using a very simplistic model and a real hidden cooling flow is expected to be far more complicated in both space and column density. RGS spectra provide no more than a rough average over the inner arcmin of the target source.
A source like ZW3146, where there is a large continuum fraction, can have a very uncertain abundance \(Z\), with it anticorrelating with the mass cooling rate (see Fig 1). In this case we fix it at \(Z=0.4\).
Of the 15 sources studied here, all but 3 require a best-fit covering fraction of 0.95 or more. This emphasises that they are indeed "hidden". The intrinsic column densities range from \(2\times 10^{21}\) to \(3\times 10^{22}\,\rm cm^{-2}\).
We also refit the spectra with the Covering Fraction set to zero, in order to determine the mass cooling rate if there is no absorption, \(\dot{M}_{\rm u}\). This is listed in the last column of Table 2. As expected, it is generally very low, but quite large for 2A0335. The lowest \(\chi^{2}\) value for this no absorption case is however 14 above that for the best fit HCF mlayerz model, which is therefore the statistically preferred one. The value in the case of A2597 is about what is expected from the HCF model where the Covering Fraction is about 70 per cent.
When the temperature of the gas is above about 0.4 keV the fraction of the energy emerging in continuum is about 50 per cent and drops below 10 per cent below 0.2 keV (Fig 2). Most of the flux below 0.4
\begin{table}
\begin{tabular}{l r r r r r r} \hline Target & RA & Dec & OBSIDs & Exposure (ks) \\ \hline
2A0335 & 54.6691 & 9.9697 & 0109870101 0109870201 0147800201 & 145 \\ A85 & 10.4601 & \(-\)9.3031 & 0065140101 0723802101 0723802201 & 215 \\ A496 & 68.4074 & \(-\)13.2619 & 0135120201 0506260301 0506260401 & 162 \\ A2597 & 351.3321 & \(-\)12.1243 & 0108462010147330101 0723801601 & 257 \\ & & & 0723801701 & & \\ A2199 & 247.1594 & 39.5512 & 0008030201 0080301 0008030601 & 137 \\ & & & 07238011 0723801201 & & \\ M87 & 187.7059 & 12.3911 & 0114120101 002020101 0803670501 & 430 \\ & & & 0803670601 0803671001 0803671101 & \\ NGC1399 & 54.6210 & \(-\)35.4505 & 00128301101 0400620101 & 139 \\ NGC720 & 28.2519 & \(-\)13.7387 & 0112300101 0602010101 & 121 \\ NGC1550 & 64.9080 & 2.4101 & 01521501101 0723800401 072380051 & 200 \\ NGC1600 & 67.9156 & \(-\)0.0861 & 0040490101400490201 & 81 \\ NGC3091 & 150.0591 & \(-\)19.6364 & 0041180301 0041180701 & 30 \\ NGC5813 & 225.2969 & 1.7019 & 0302460101 0554680201 0554680301 & 170 \\ & & & 0554680401 & & \\ NGC5846 & 226.6223 & 1.6048 & 0215401012545001 0045340101 & 228 \\ & & & 0723800101 0723800201 & & \\ MRK1216 & 127.1964 & \(-\)6.9402 & 0822960101 0822960201 & 235 \\ ZW3146 & 155.9147 & 4.1866 & 0108670101 0108670401 0605540201 & 240 \\ & & & & 0605540301 & & \\ \hline \end{tabular}
\end{table}
Table 1: Observed targets, giving the used source position (deg; J2000), observation identifiers, and average cleaned exposure of the RGS cameras.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline Cluster & \(N_{\rm H}\) & \(kT\) & \(Z\) & \(z\) & \(Norm\) & \(CFrac\) & \(N_{\rm H}\)\({}^{\prime}\) & \(\dot{M}\) & \(\chi^{2}/{\rm dof}\) & \(\dot{M}_{\rm u}\) \\ \hline & \(10^{22}\) cm\({}^{2}\) & keV & \(Z_{\odot}\) & & & & \(10^{22}\) cm\({}^{2}\) & M\({}_{\odot}\) yr\({}^{-1}\) & M\({}_{\odot}\) yr\({}^{-1}\) & \\ \hline
2A0335 & 0.26 & \(1.85^{+0.02}_{-0.01}\) & \(0.32\pm 0.01\) & 0.035 & 4.9e-2 & 1 & 0.32 & \(86^{+9}_{-7}\) & 1359/1224 & \(48\pm 4\) \\ A85 & 0.029 & \(3.6\pm 0.4\) & \(0.29^{+0.07}_{-0.003}\) & 0.056 & 2e-2 & 0.95 & 1 & \(25^{+40}_{-2}\) & 1406/1327 & \(14.4^{+3}_{-5}\) \\ A496 & 0.05 & \(2.49\) & 0.32 & 0.033 & 2.1e-2 & 0.95 & 2.2 & \(22.7^{+2.0}_{-7.0}\) & 1406/1299 & \(<2.1\) \\ A2597 & 0.023 & \(3.3\pm 0.13\) & \(0.35\pm 0.03\) & 0.082 & 1.5e-2 & 0.71 & 5 & \(6^{+19}_{-19}\) & 1457/1364 & \(19.5^{+6}_{-5.4}\) \\ A2199 & 0.008 & \(2.96^{+0.16}_{-0.08}\) & \(0.29^{+0.14}_{-0.02}\) & 0.0296 & 2.4e-2 & 0.95 & 0.46 & \(5.6^{+4.5}_{-4.5}\) & \(1409/13122\) & \(2.3\pm 1.5\) \\ M87 & 0.018 & \(1.47\) & 0.23 & 4.28e-3 & 7.3e-2 & 1. & 0.2 & \(0.8^{+0.2}_{-0.15}\) & 18759/992 & \(0.5\pm 0.04\) \\ NGC1399 & 0.014 & \(1.08\pm 0.01\) & \(0.27\pm 0.02\) & 5.5e-3 & 3.2e-3 & 1. & 2.67 & \(3.3^{+3.2}_{-0.2}\) & 1086/7666 & \(0.15\pm 0.03\) \\ NGC720 & 0.14 & \(0.61\pm 0.02\) & \(0.11\pm 0.02\) & 0.0065 & 7.9e-4 & 0.95 & 1.13 & \(1.3^{+3.0}_{-0.05}\) & 232/124 & \(0.25\pm 0.1\) \\ NGC1550 & 0.114 & \(1.26\pm 0.01\) & \(0.26\pm 0.02\) & 0.0132 & 7.5e-3 & 1. & 1.33 & \(3.5^{+2.0}_{-0.2}\) & 1115/1051 & \(0.55\pm 0.2\) \\ NGC1600 & 0.04 & \(1.33^{+0.02}_{-0.07}\) & \(0.12^{+0.07}_{-0.04}\) & 0.0163 & 1.1e-3 & 0.65 & 1.3 & \(0.81^{+0.7}_{-0.7}\) & 161/146 & \(<0.6\) \\ NGC3091 & 0.013 & \(0.01\pm 0.03\) & \(0.09\pm 0.02\) & 0.013 & 1.37e-3 & 0.95 & 3.16 & \(8.5\pm 6.6\) & 92/78 & \(<1\) \\ NGC5813 & 0.043 & \(0.7
keV is absorbed away in our HCF fits, meaning the continuum shape plays a significant role in our spectral fit results.
We reduced the energy band of the spectra of several lower temperature objects to 12-20A due to broad excess residuals around 10A. These are likely due to the APEC component having a (small) spread in temperature.
The absorbed luminosities (\(L_{\mathrm{a}}\), Table 3) are all less than the Far Infrared luminosities, where available. This indicates that the energy lost in the cooling flow to absorption is energetically capable of emerging as radiation from dust in the absorbing gas.
Fig 3 shows the HCF mass cooling rates (in red) compared (where available) with the "classical" rates from X-ray imaging listed by (Hudson et al., 2010) (in black). The mean ratio of Hidden to classical rates is 45 per cent, with a range from 4 to 180 per cent. (Most lie between 17 and 58 percent.) The HCF rates are about 1 M\({}_{\odot}\) yr\({}^{-1}\) for elliptical galaxies, 2 to 20 M\({}_{\odot}\) yr\({}^{-1}\) for Brightest Group Galaxies (BGG) and about 10 to 100 M\({}_{\odot}\) yr\({}^{-1}\) for regular Brightest Cluster Galaxies (BCG). There are then a group of more distant, exceptionally X-ray luminous, BCGs with 400 to \(>1000\) M\({}_{\odot}\) yr\({}^{-1}\).
We suspect that the last group of rare objects may be highly time variable, with peak luminosity followed by a quasar eruption. The regular clusters and elliptical galaxies generally have low luminosity nuclei, with radio emission from jets that blow bubbles in the intracluster medium. The bubbles and related activity generally lie outside the inner kpc studied here.
## 4 The Accumulation of Cooled Gas
Over a billion years \(10^{9}\) M\({}_{\odot}\) of gas will have cooled in a typical elliptical, up to \(10^{10}\) M\({}_{\odot}\) in a BCG. These are large values, the higher end of which exceeds the cold molecular masses observed via CO emission in BCGs (Russell et al., 2019; Olivares et al., 2019). It is possible that the mass of molecular gas has been underestimated due to low abundance and an unseen diffuse component, but this is unlikely to make a very large difference.
In HCFI we considered the following possibilities: a) continued cooling to invisibility at 3K, b) fragmentation and collapse into substellar objects since the Jeans mass is less than 0.1 M\({}_{\odot}\), c) outward dragging of cooled clouds by the bubbling process or d) cold front formation. The gas and dark matter peaks may be offset by a kpc or more.
We also flagged the similarity in conditions (e.g. gas pressure) of cooled dusty molecular clouds of a BCG core to those in the Crab Nebula. More detailed observational comparisons are warranted.
It is likely that the dominant process is a combination of c) and d) in which the cooled material is spread over the innermost few kpc of the core. Clear evidence of the dragging out of dust-enriched material from the centre is provided by the peaks in metal abundance seen \(\sim 10\) kpc from the centre of low redshift clusters (Panagoulia et al., 2015; Lakhchaura et al., 2019; Liu et al., 2019).
Detailed measurements of the mass profile of each separate component in a cool core (black hole, dark matter, stars, gas etc) will be invaluable in sorting the possibilities out. We now consider whether some small fraction of the very cold clouds and substellar objects can be swallowed by the central black hole in the next section.
\begin{table}
\begin{tabular}{l c c c c c c} \hline Cluster & \(L\) (FIR) & \(L_{\mathrm{a}}\) & \(\dot{M}\) & \(L\) (H\(\alpha\)) & \(M_{\mathrm{CO}}\) & \(M_{\mathrm{BH}}\) \\ \hline
2A0335 & 4e43 & 2.1e43 & 86 & 8e41 & 1.1e9 & - \\ A85 & 2.8e43 & 9.9e42 & 23 & - & - & 4e10 \\ A496 & - & 9.6e42 & 23 & 5e40 & - & - \\ A2597 & 6.5e43 & 2.1e43 & 67 & 3e42 & 2.3e9 & - \\ A2199 & - & 1.5e42 & 5.6 & 3.5e40 & - & 4e9 \\ M87 & 5.0e41 & 1.6e41 & 0.8 & 1.9e40 & - & 6.5e9 \\ NGC1399 & - & 7.4e41 & 3.3 & 1e39 & 1e9 & 1e9 \\ NGC720 & - & 1.5e41 & 1.0 & - & 1.1e7 & - \\ NGC1550 & - & 8.7e41 & 1.5 & - & - & 4.5e9 \\ NGC1600 & - & 1.3e41 & 0.8 & 4e39 & - & 1.7e10 \\ NGC3091 & - & 1.6e42 & 8.5 & - & - & 3.6e9 \\ NGC5813 & 1.1e42 & 5.9e41 & 2.0 & 1.6e40 & - & - \\ NGC5846 & 6.2e41 & 2.0e41 & 1.3 & 2.5e40 & 2e6 & - \\ MRK1216 & - & 1.3e41 & 9.7 & - & - & 4.9e9 \\ ZW3146 & 1.0e45 & 6.3e44 & 1570 & 6e42 & 5e10 & - \\ \hline NGC5044 & 3.0e42 & 3.6e42 & 20 & 7.0e40 & 1.5e8 & \\ Sersic 159 & 7.3e42 & 2.5e42 & 10 & 2.0e41 & 1.1e9 & \\ A262 & 8.0e42 & 2.1e42 & 7 & 9.4e40 & 4.0e8 & \\ A2052 & 8.3e42 & 4.4e42 & 15 & 6e40 & 2.8e8 & \\ RXJ0821 & 4.5e44 & 7.8e42 & 40 & 3.0e41 & 3.9e10 & \\ RXJ1532 & 2.3e45 & 2.0e44 & 1000 & 3e42 & 8.7e10 & \\ MACS1931 & 5.6e45 & 4.6e44 & 1000 & 2e42 & 9.0e10 & \\ Phoenix Cluster & 3.7e46 & 3.3e44 & 2000 & 8.5e43 & 2e10 & \\ M84 & 1.0e42 & 3.3e41 & 2.0 & 4.0e39 & +1.8e7 & \\ M49 & 1.2e42 & 2.0e41 & 1.0 & 5.8e39 & +1.4e7 & \\ \hline Centaurus & 3.2e42 & 3.6e42 & 15 & 1.7e40 & 1.0e8 & \\ Perseus & 5.6e44 & 5.8e42 & 50 & 3.2e42 & 2.0e10 & \\ A1835 & 3.2e45 & 5.2e43 & 400 & 4.4e42 & 5.0e10 & \\ \hline RXJ1504 & - & 1.9e44 & 520 & 3.2e43 & 1.9e10 & \\ \hline \end{tabular}
\end{table}
Table 3: Relevant Cluster Properties. See subsections of Appendix A for individual object references. A dash indicates lack of data.
### Accretion of fragmented cold matter by the central black hole
We showed in HCFI (Fabian et al., 2022) that, under the high pressure conditions of an HCF (\(nT\sim 10^{6.5}-10^{7.5}\,\rm cm^{-3}\,K\)) and with no heating, the gas cools rapidly (on a timescale of tens of years) to \(\sim 3\) K. The Jeans mass is below about \(0.1\,\rm M_{\odot}\) (Jura, 1977; Ferland et al., 1994) and the gas is expected to clump and fragment into low mass stars, brown dwarfs etc, some of which will fall into the black hole emitting little radiation. Exactly how large a fraction will be swallowed depends on how angular momentum is transported outward. The turbulent viscosity of a luminous accretion disc is absent here and a possible path is that the innermost cooled gas forms a thick disc of low mass stars and cold gas clouds around the black hole. Dynamical gravitational instabilities such as spiral waves2 and bars within bars transport angular momentum outward in non-spherical systems so that some of the matter falls inward (Shlosman et al., 1989; Hopkins & Quataert, 2011; Gualandris et al., 2017) to be swallowed directly by the central black hole without a standard accretion disc forming.3
Footnote 2: A spiral feature is seen at the centre of the Centaurus cluster, see Fig 6 in HCFI.
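An order-of-magnitude check of the quoted Jeans mass is given below. It is our own estimate, assuming molecular gas with mean molecular weight \(\mu\simeq 2.3\), \(T=3\) K, \(nT\sim 10^{7}\,\rm cm^{-3}\,K\), and one common convention for the Jeans-mass prefactor (the exact prefactor differs between definitions):

```python
import numpy as np

G, k_B, m_H, M_sun = 6.674e-8, 1.381e-16, 1.673e-24, 1.989e33   # cgs units
T, nT, mu = 3.0, 1e7, 2.3
rho = mu * m_H * nT / T                                          # gas mass density, g cm^-3
M_J = (5 * k_B * T / (G * mu * m_H))**1.5 * (3 / (4 * np.pi * rho))**0.5
print(M_J / M_sun)   # ~0.05 M_sun, i.e. below ~0.1 M_sun as stated
```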
A very crude estimate of the mass inflow rate may be obtained from an isothermal Bondi flow. This of course assumes the matter is a fluid and ignores rotation but does give some idea of the rate at which matter comes under the gravitational influence of the black hole. This simple rate is
\[\dot{M}=4.5\pi\frac{G^{2}M^{2}}{c_{s}^{3}}\rho, \tag{1}\]
where \(M\), \(c_{s}\) and \(\rho\) are the black hole mass (written below as \(M_{9}\) in units of \(10^{9}\,\rm M_{\odot}\)), the speed of sound (or of random motions) and the density of the surrounding gas. Taking \(c_{s}=300\,\rm km\,s^{-1}\) and \(\rho\) equal to the mass density of a medium containing \(10^{9}\,\rm M_{\odot}\) within a sphere of radius \(1\,\rm kpc\), we obtain \(\dot{M}\approx 4M_{9}^{2}\,\rm M_{\odot}\,yr^{-1}\) and an accretion radius of \(\sim 50\,\rm pc\).
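Evaluating Eq. (1) with these fiducial numbers (a rough cgs check of our own; the precise value depends on the rounding of the constants) reproduces the quoted scales of a few \(\,\rm M_{\odot}\,yr^{-1}\) and an accretion radius \(GM/c_{s}^{2}\) of roughly 50 pc:

```python
import numpy as np

G, M_sun, pc, yr = 6.674e-8, 1.989e33, 3.086e18, 3.156e7   # cgs units
M_bh = 1e9 * M_sun                                          # i.e. M_9 = 1
c_s  = 300e5                                                # 300 km/s in cm/s
rho  = 1e9 * M_sun / (4.0 / 3.0 * np.pi * (1e3 * pc)**3)    # 1e9 M_sun within 1 kpc
Mdot  = 4.5 * np.pi * G**2 * M_bh**2 * rho / c_s**3         # Eq. (1)
r_acc = G * M_bh / c_s**2
print(Mdot * yr / M_sun, r_acc / pc)   # a few M_sun/yr and ~50 pc
```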
Hopkins & Quataert (2011) give an analytical estimate of the accretion rate from gravitational torques which agrees with their numerical simulations and find \(1\,\rm M_{\odot}\,yr^{-1}\) in the middle of the range. The predicted rate has a weak black hole mass dependence of \(M^{1/6}\). Using observations and analytical work, Genzel et al. (2023) show that such torques operating in disc galaxies at \(z\sim 2\) lead to largescale inflow on about 10 dynamical times.
In the case of the elliptical and brightest group galaxies studied here, the accretion rate could exceed the HCF mass cooling rate, which would then become the determining rate. We conclude that rates of a few \(\,\rm M_{\odot}\,yr^{-1}\) may be possible. We are of course assuming a high efficiency with which the cold matter is swallowed by the black hole.
The possibility thus emerges that the mass of black holes in low redshift Elliptical Galaxies is increasing due to inflow from HCF at a rate of several \(\,\rm M_{\odot}\,yr^{-1}\). Angular momentum transfer is due to gravitational torques. The black hole mass can thus increase by up to \(\sim 10^{10}\,\rm M_{\odot}\) since \(z=1\) and possibly even more for the most massive objects in the BCGs of the most massive clusters. Such objects need not have a luminous AGN, although an ADAF due to a weak gaseous inflow may persist and power jets thus a radio source in these objects. They would be the highest accretion rate black holes in the low redshift Universe. If the accretion rate continues for several Gyr then this would lead to the most massive black holes appearing now. Examples in our sample include NGC1600 and the central galaxy of A85, Holm 15A, with \(1.7\times 10^{10}\,\rm M_{\odot}\) and \(4\times 10^{10}\,\rm M_{\odot}\) black holes, respectively. Other BCGs have very high mass black hole including that of the BCG of A1201 for which a gravitationally lensed arc reveals a central mass of \(3\times 10^{10}\,\rm M_{\odot}\)(Nightingale et al., 2023).
### Relevance to galaxy formation
The standard model of galaxy formation involves gas falling into dark matter haloes and being heated by shocks and compression; gas that can cool quickly (on a dynamical time or less) then leads to star formation, enriching the gas with metals and dust and forming the core of a new galaxy (White & Rees, 1978). Further accretion, mergers and feedback later build the outer galaxy. Gas which has a longer cooling time than the local dynamical time, but shorter than the age of the Universe at the time, can form a cooling flow (Nulsen & Fabian, 1995). If the conditions such as metal and dust enrichment and especially the pressure are high (\(nT>10^{6}\,\rm cm^{-3}\,K\)) then they may resemble the nearby Hidden Cooling Flows discussed here. At high redshift, the higher temperature of the Cosmic Microwave Background will in turn require a higher Jeans mass. If cloud collapse does lead to large populations of low mass stars and brown dwarfs then early supermassive black holes can grow by swallowing such fragments whole, independent of the Eddington limit.
### Red Nuggets and MRK 1216
A population of compact Early-Type Galaxies (ETG) has been identified at redshifts of 2 and above, which may be examples of galaxies that did not progress beyond the early core formation galaxy stage (Daddi et al., 2005). These are known as "Red Nuggets" (Damjanov et al., 2009) and have stellar masses of \(1-2\times 10^{11}\,\rm M_{\odot}\) and effective radii of only 1-2 kpc. Later, some examples were identified at low redshifts, e.g. NGC1277 (Trujillo et al., 2014), a galaxy unable to grow larger by mergers, or by accretion of cold gas, since it lies in the core of the rich Perseus Cluster. More recently, further examples have been found (Ferre-Mateu et al., 2015), including the isolated rotating ETG MRK 1216 (Ferre-Mateu et al., 2017), which lies at a distance of 94 Mpc and hosts a black hole of mass \(4.5\times 10^{9}\,\rm M_{\odot}\) (Walsh et al., 2017).
Werner et al. (2018) noted that MRK 1216 might lie in a halo of mass up to \(10^{13}\,\rm M_{\odot}\) and so have an X-ray halo. They indeed found extended thermal emission with an X-ray luminosity \(L_{\rm X}=7\times 10^{41}\,\rm erg\,s^{-1}\). Buote & Barth (2019) found that its dark matter halo has a high concentration, implying early formation. We have included MRK1216 in our sample and find a significant HCF of \(9.7\pm 2.7\,\rm M_{\odot}\,yr^{-1}\), larger than the rate of typical ETGs. Ferre-Mateu et al. (2017) show it to have a very bottom-heavy IMF which is consistent with a significant accumulation of low mass stars and brown dwarfs.
MRK1216 could provide the nearest link between low and high redshift HCF and clearly merits deeper study.
### Observational Possibilities
Possibilities for further observations at the time of writing are limited. Hopefully, XRISM will be launched soon and provide new high resolution, non-dispersive, X-ray spectra of the inner regions of clusters, groups and ETG. Its Field of View is larger than that of the RGS, so it can show how any HCF region matches onto the rest of the cluster. High spatial resolution X-ray studies await next generation telescopes such as AXIS. As well as resolving the expected irregular appearance of HCF due to the absorption, it will be particularly helpful for examining the immediate surroundings of the central black hole. The X-IFU of Athena will spectroscopically map HCFs in great detail, as will the Light Element Emission Mapper Probe. JWST may map the inner regions in the near IR. Since most of the flow of cooled gas takes place at very low temperatures below 10 K, the bulk of the flow will be inaccessible, except to absorption measurements.
ALMA has opened up molecular _absorption_ studies of cool BCGs using the central radio source as a backlight (David et al., 2014; Tremblay et al., 2016; Rose et al., 2019, 2023). Four objects in the last study show molecular gas moving towards the central source at 200 - 300 km s\({}^{-1}\), each plausibly forming part of an inward cold accretion flow.
## 5 Conclusion
We find that significant cooling flows, closely linked with cold absorbing gas, are common in the brightest galaxies of cool core clusters and groups as well as large elliptical galaxies. The mass cooling rates range from 1 to over 1000 M\({}_{\odot}\) yr\({}^{-1}\). In most cases they are reduced by AGN Feedback to a factor 2 to 3 times lower than the simple cooling rates derived from X-ray imaging. The gas in the central hidden/absorbed part can cool to below 10 K, collapsing and fragmenting into low mass stars, brown dwarfs etc., most of which are dragged outward by the bubbling and cold front processes. We speculate that some matter within the inner tens of pc may fall into the black hole, with a rate of a few M\({}_{\odot}\) yr\({}^{-1}\) being plausible. Such accretion emits little radiation, although it is likely that some thin plasma is present, possibly in the form of a low luminosity ADAF, to power the jets usually seen in radio images. If cooled collapsed matter does fall in, then the mass accretion rate can be among the highest in the low redshift Universe.
## 6 Acknowledgements
BRM acknowledges the Natural Sciences and Engineering Research Council for their support. We thank the referee for a prompt report.
## 7 Data Availability
All data used here are available from ESA's XMM-Newton Science Archive.
|
2305.08076 | Improving Defensive Distillation using Teacher Assistant | Adversarial attacks pose a significant threat to the security and safety of
deep neural networks being applied to modern applications. More specifically,
in computer vision-based tasks, experts can use the knowledge of model
architecture to create adversarial samples imperceptible to the human eye.
These attacks can lead to security problems in popular applications such as
self-driving cars, face recognition, etc. Hence, building networks which are
robust to such attacks is highly desirable and essential. Among the various
methods present in literature, defensive distillation has shown promise in
recent years. Using knowledge distillation, researchers have been able to
create models robust against some of those attacks. However, more attacks have
been developed exposing weakness in defensive distillation. In this project, we
derive inspiration from teacher assistant knowledge distillation and propose
that introducing an assistant network can improve the robustness of the
distilled model. Through a series of experiments, we evaluate the distilled
models for different distillation temperatures in terms of accuracy,
sensitivity, and robustness. Our experiments demonstrate that the proposed
hypothesis can improve robustness in most cases. Additionally, we show that
multi-step distillation can further improve robustness with very little impact
on model accuracy. | Maniratnam Mandal, Suna Gao | 2023-05-14T05:27:17Z | http://arxiv.org/abs/2305.08076v1 | # Improving Defensive Distillation using Teacher Assistant
###### Abstract
Adversarial attacks pose a significant threat to the security and safety of deep neural networks being applied to modern applications. More specifically, in computer vision-based tasks, experts can use the knowledge of model architecture to create adversarial samples imperceptible to the human eye. These attacks can lead to security problems in popular applications such as self-driving cars, face recognition, etc. Hence, building networks which are robust to such attacks is highly desirable and essential. Among the various methods present in literature, defensive distillation has shown promise in recent years. Using knowledge distillation, researchers have been able to create models robust against some of those attacks. However, more attacks have been developed exposing weakness in defensive distillation. In this project, we derive inspiration from teacher assistant knowledge distillation and propose that introducing an assistant network can improve the robustness of the distilled model. Through a series of experiments, we evaluate the distilled models for different distillation temperatures in terms of accuracy, sensitivity, and robustness. Our experiments demonstrate that the proposed hypothesis can improve robustness in most cases. Additionally, we show that multi-step distillation can further improve robustness with very little impact on model accuracy.
knowledge distillation, adversarial attacks, defensive distillation
## I Introduction and Background
Among all the modern applications of Deep Learning (DL), its impact on Computer Vision (CV) tasks seems ubiquitous. DL networks, specifically the ones based on CNNs, have achieved impressive accuracy in tasks like recognition, segmentation, quality assessment, captioning, enhancement, etc. [1, 2]. Although DNNs achieve high performance, they come with inherent security risks. Experts in the machine learning and security communities have developed sophisticated methods of successfully constructing adversarial inputs using knowledge of the model architecture, which can force the model to produce adversary-selected outputs [3, 4, 5]. With simple but imperceptible modifications of a few pixels, a DNN might misclassify images of handwritten digits [6], as shown in Figure 1. Left unattended, such vulnerability could lead to security problems in applications including self-driving cars and internet privacy, potentially causing damage and even threatening human lives.
Unfortunately, no known method has been proposed to entirely eliminate such problems, and few of the proposed methods provide satisfactory improvement. Some of the attempts require substantial changes to the existing network architectures that might limit the capacity of the networks [5, 7], and others only provide marginal improvements in robustness against the attacks [8, 9, 10]. One method that was proposed recently to be effective against adversarial attacks is defensive distillation [11], in which knowledge distillation (KD) is applied on models of the same architecture. [11] found that KD on models of the same architecture significantly reduces the rate of adversarial sample crafting.
In this work, we build upon the results from [11] and explore the usage of multi-step distillation via teacher assistants [12] for defense against adversarial attacks (Fig. 2). We also investigate the effect of varying temperature, a hyperparameter that controls the process of the distillation, on such defense. The models have been evaluated in terms of accuracy, sensitivity, and robustness against adversarial attacks.
The rest of this section introduces the two major research works our project is built upon: defensive distillation (section I-A) and teacher assistant knowledge distillation (TAKD, section I-B). Next, we state the method used for crafting adversarial samples (section II). In sections III, IV and V, we lay out the details of our experiments, the metrics used to evaluate the models, and the obtained results on both MNIST and CIFAR10. Finally, we briefly describe the results of applying distillation with multiple steps in section VI.
### _Defensive distillation_
KD was originally introduced as a training procedure that aims to transfer the knowledge learned by a larger network (Teacher) to another relatively smaller one (Student), while
Fig. 1: Adversarial samples crafted using the \(L_{0}\) attack which are successfully misclassified. The ‘0’, ‘1’, and ‘2’ input samples are taken from the MNIST dataset, and the corresponding adversarial samples is shown under the targeted ‘wrong’ classes. (Samples taken from [6])
maintaining the same level of performance [13]. The motivation behind this technique is to reduce the computational complexity of some operations, or compression of large networks, such that devices with limited capacity (e.g. smartphones) could benefit from the achievements of deep learning. In KD, the student network is trained not with the original hard binary labels of the dataset, but instead with the soft labels taken from the output probability of the teacher network.
The technique of defensive distillation introduced by [11] heavily relies on the intuition behind knowledge distillation (KD) and the common methods of attacks. The intuition behind this procedure is that the knowledge acquired by the teacher network is not only embedded in the weight parameters, but also in the probability vector output by the network. Consider the MNIST digits dataset, where the images of digits 7 and 1 share similar spatial structures. The output of the teacher network thus contains relatively similar probabilities for these two digits. This extra entropy in the probability vector contains structural information about the data compared with the hard binary labels, and thus may be helpful when the student network is faced with adversarial samples that could be "unnatural", or outside of the data distribution.
One important factor in the distillation procedure is the choice of distillation temperature (\(T\)). The effect of the value of \(T\) on the resulting defensive distillation is perhaps clearer when we consider the method used for adversarial attacks. Most attacks assume that the adversary has access to the gradient of the network, and can thus estimate the model's sensitivity to the input data (see section II). When the gradients are large, the model is more sensitive because simple perturbations of the input can lead to large changes of the model output, and thus is more prone to attacks. Therefore, a major goal of the defense against adversarial attacks could be to smooth the gradients and to stabilize the model around the input data in the sample space. In this sense, acquiring "softer" labels from the teacher network (corresponding to larger \(T\)) should produce more robust student networks, as confirmed by the results from [11].
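For illustration, a minimal sketch of this temperature-\(T\) distillation step (in PyTorch; the helper names are ours, not from [11] or [13]) might read as follows. At evaluation time the temperature is set back to \(T=1\), as discussed in section III.

```python
import torch
import torch.nn.functional as nnF

def soft_labels(teacher, x, T):
    """Temperature-T softmax of the teacher's logits, used as training targets."""
    with torch.no_grad():
        return nnF.softmax(teacher(x) / T, dim=1)

def distillation_loss(student_logits, teacher_probs, T):
    """Cross-entropy between the teacher's soft labels and the student's
    temperature-T output distribution (both softmaxes use the same T)."""
    log_p_student = nnF.log_softmax(student_logits / T, dim=1)
    return -(teacher_probs * log_p_student).sum(dim=1).mean()
```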
The authors in [11] found that using defensive distillation reduces the success rate of adversarial sample crafting from 95.89% to 0.45% on the MNIST dataset [14], and from 87.89% to 5.11% on the CIFAR10 dataset [15].
### _Teacher Assistant Knowledge Distillation (TAKD)_
Knowing that defensive distillation could produce student networks that are more robust to adversarial attacks, a natural question one might ask is, what if we apply distillation multiple times? Another motivation is the observation of the teacher-student gap, where the performance of the student is often far from ideal when there is a large difference between the capacity of the student and the teacher.
[12] explored this multi-step distillation strategy in normal KD settings with the task of image recognition. Instead of training one student directly from the teacher, this method employs a teacher assistant, a network that has an architecture of intermediate size between the teacher and the student, and serves as an intermediate step in the distillation process. In other words, the TA is first trained with the soft labels from the teacher, and then produces another set of soft labels to train the student. The authors observed that student models trained with TAs outperform those trained directly with the teacher. They also discuss the choice of TA size and the distillation path for multi-step distillation beyond one TA. They observed that the more TAs employed in between distillations, the better the student.
In this work, we combine TAKD with defensive distillation, and explore whether adding intermediate steps in the distillation process can lead to even larger improvement in the robustness of the student model.
## II Adversarial Attacks
The main goal of an adversarial attack algorithm is to produce a perturbation small enough to be imperceptible to human eyes, but large enough to produce misclassification. For a sample \(X\) and a trained classifier \(F\), an adversarial attack produces an input sample \(X^{*}=X+\delta X\) (where \(\delta X\) is the perturbation), such that \(F(X^{*})=Y^{*}\) and \(Y^{*}\neq Y\). Several attack algorithms based on the \(L_{0}\), \(L_{2}\), and \(L_{\infty}\) distance metrics have been proposed in the literature, such as the box-constrained L-BFGS method [3] and Deepfool [16] based on \(L_{2}\), the Fast Gradient Sign [5] and Iterative Gradient Sign [17] methods based on the \(L_{\infty}\) distance, and the Jacobian-based Saliency Map Attack (JSMA) [4] based on the \(L_{0}\) distance. The methods we used in our project were proposed in [6], which improve upon the mentioned works. The adversarial attack optimization problem is formally defined as:
\[\begin{array}{ll}\mathrm{minimize}&\mathcal{D}(X,X+\delta X)\\ \text{such that}&F(X+\delta X)=t\\ &X+\delta X\in[0,1]^{n}\end{array} \tag{1}\]
Here \(F\) is the classification model, \(t\) is the targeted 'wrong' class, and \(\mathcal{D}\) is the distance metric. The constraint \(F(X+\delta X)=t\) is highly non-linear, so the problem is expressed differently for applying optimization algorithms. An objective function \(f\) is defined such that \(F(X+\delta X)=t\) iff \(f(X+\delta X)\leq 0\). The authors have proposed and analyzed several such \(f\) in their paper. Replacing the first constraint, an alternative formulation of the optimization problem is given as:
Fig. 2: **Proposed framework:** In this project, we propose introducing an assistant network in distillation to increase the robustness of student against adversarial perturbations.
\[\begin{array}{ll}\mathrm{minimize}&\mathcal{D}(X,X+\delta X)+c\cdot f(X+\delta X)\\ \text{such that}&X+\delta X\in[0,1]^{n}\end{array} \tag{2}\]
Here \(\mathcal{D}(X,X+\delta X)=\|\delta X\|_{p}\), if we use the \(L_{p}\) norm.
### \(L_{2}\) _Attack_
Instead of optimizing for \(\delta X\), the objective is optimized over a transformed variable \(w\), s.t.
\[\delta X_{i}=\frac{1}{2}\left(\tanh\left(w_{i}\right)+1\right)-X_{i} \tag{3}\]
As \(-1\leq\tanh(w_{i})\leq 1\), this implies \(0\leq X_{i}+\delta X_{i}\leq 1\). If \(X\) is the input sample, and \(t\) is the chosen 'wrong' target class (\(t\neq F(X)\)), then the attack finds \(w\) solving
\[\text{minimize }\left\|\frac{1}{2}(\tanh(w)+1)-X\right\|_{2}^{2}+c\cdot f \left(\frac{1}{2}(\tanh(w)+1)\right) \tag{4}\] \[f\left(X^{*}\right)=\max\left(\max\left\{Z\left(X^{*}\right)_{i} :i\neq t\right\}-Z\left(X^{*}\right)_{t},-\kappa\right) \tag{5}\]
Here \(f\) is the best suited objective function proposed in the paper, and \(\kappa\) controls the confidence of misclassification. It encourages the solver to find an adversarial sample \(X^{*}\) which will be classified as class \(t\) with high confidence. For the purpose of experiments in this project, \(\kappa\) has been set to zero. To avoid getting stuck at a local minimum, gradient descent is run with multiple random starting points close to \(X\) for a fixed number of iterations. The random points are sampled at random from a norm ball of radius \(r\) centered at \(X\), where \(r\) is the distance to the closest adversarial sample found so far.
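A compact sketch of this optimisation (in PyTorch; the binary search over \(c\) and the random restarts described above are omitted, and the helper names are ours) could read:

```python
import torch

def cw_l2_attack(model, x, target, c=1e-3, kappa=0.0, steps=10000, lr=1e-2):
    """L2 attack sketch: minimise (4) over w using the change of variables (3).
    `x` is a batch of inputs in [0,1], `target` is the 'wrong' class t."""
    w = torch.atanh((2 * x - 1).clamp(-1 + 1e-6, 1 - 1e-6)).detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)          # eq. (3): x_adv always lies in [0,1]
        logits = model(x_adv)                      # Z(x_adv), pre-softmax
        others = logits.clone()
        others[:, target] = float('-inf')          # largest non-target logit below
        f = torch.clamp(others.max(dim=1).values - logits[:, target], min=-kappa)  # eq. (5)
        loss = ((x_adv - x) ** 2).sum() + c * f.sum()                              # eq. (4)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (0.5 * (torch.tanh(w) + 1)).detach()
```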
### \(L_{0}\) _Attack_
The \(L_{0}\) metric is non-differentiable and therefore standard gradient descent cannot be applied. The \(L_{0}\) attack algorithm uses the \(L_{2}\) adversary to eliminate pixels that are unimportant. Starting with the original image \(X\), let \(\delta X\) be the solution found by the \(L_{2}\) adversary, such that \(X^{*}=X+\delta X\) is an adversarial sample. The gradient of the objective function is computed (\(g=\nabla f(X^{*})\)). The pixel with the minimum gradient contribution (\(i=\arg\min_{i}g_{i}\cdot\delta X_{i}\)) is removed from the allowed set, as it will have the least impact on the output. On each iteration, pixels are eliminated and the adversary is restricted to modify only the allowed set. This process is repeated until the \(L_{2}\) adversary fails. By process of elimination, a final subset of important pixels can be identified. The constant \(c\) is initially set to a very low value, and the \(L_{2}\) attack is run using this value. Upon failure, the value is doubled and the attack is run again until \(c\) exceeds a threshold, at which point the attack is declared a failure. At each iteration, the solution found in the previous one is used as the starting point for gradient descent, and as such, the attack algorithm is much more efficient than previous \(L_{0}\) attacks in the literature.
### \(L_{\infty}\) _Attack_
The \(L_{\infty}\) distance is not fully differentiable and the authors observed that the performance was poor when using the basic objective function. They also noticed that as \(\left\|\delta X\right\|_{\infty}\) penalizes only the largest element, gradient descent gets stuck oscillating between two suboptimal options. To circumvent this, the \(L_{\infty}\) term in the objective function is replaced with a penalty for any element of \(\delta X\) that exceeds a threshold \(\tau\). This prevents the oscillation in gradient descent, as all the large values are penalized. The resulting objective function is
\[\text{minimize}\quad c\cdot f(X+\delta X)+\sum_{i}\left[(\delta X_{i}-\tau)^{+}\right] \tag{6}\]
The threshold \(\tau\) is initially set to \(1\) and decreased over the iterations: if, after an iteration, all of the components of \(\delta X\) are less than \(\tau\), it is reduced by a factor of \(0.9\) and the process is repeated. The constant \(c\) is chosen similarly to the \(L_{0}\) case. It is initially set to a low value, and if the \(L_{\infty}\) adversary fails, the value of \(c\) is doubled until it exceeds a threshold, at which point the attack is declared a failure. As for the \(L_{0}\) adversary, gradient descent at each iteration is performed with a "warm start" for efficiency.
## III Analysis of Defensive Distillation
In defensive distillation, two networks of similar architecture, the teacher (or primary) and the student (or distilled), are trained sequentially. The input to the distillation process is a set of sample images \(X\in\mathcal{X}\) and the corresponding 'hard' labels \(Y(X)\), which is a set of one-hot vectors corresponding to the correct class. Given the training set \(\{(X,Y(X))\ X\in\mathcal{X}\}\), the primary teacher \(F\) is trained with a softmax layer of temperature \(T\). \(F(X)\) is the resulting probability vector computed for input \(X\) using the trained teacher network, and these are used as the 'soft' labels for training the distilled student network. If \(F\) has parameters \(\theta_{F}\), then \(F(X)=p(.|X,\theta_{F})\) is the probability distribution of the output. The new training set \(\{(X,F(X))\ X\in\mathcal{X}\}\) is used to train the student DNN (\(F^{d}\)) which has the same neural architecture and the same softmax temperature \(T\) as its teacher. Using soft targets for training \(F^{d}\) imparts the additional knowledge found in probability vectors compared to hard labels. The additional entropy encodes the relative differences between classes. This relative probabilistic information prevents the network from fitting too tightly to the data and increases its generalizability. Inspired by TAKD, we propose and show that adding an assistant model \(F^{a}\) of the same architecture and softmax temperature in between the teacher and distilled student model increases robustness to adversarial attacks.
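The overall teacher \(\rightarrow\) assistant \(\rightarrow\) student chain studied in this project can be summarised by the following sketch (PyTorch; the helper names and default hyper-parameters are ours, see section IV for the settings actually used; the losses correspond to equations (7) and (9) below).

```python
import torch
import torch.nn.functional as nnF

def train_with_targets(model, loader, targets_fn, T, epochs=50, lr=0.01):
    """Minimise the cross-entropy between targets_fn(x, y) (hard or soft labels)
    and the model's temperature-T softmax output."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            log_p = nnF.log_softmax(model(x) / T, dim=1)
            loss = -(targets_fn(x, y) * log_p).sum(dim=1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    model.eval()
    return model

def defensive_distillation_with_ta(make_model, loader, T, num_classes=10):
    """Teacher F, assistant F^a and student F^d share the same architecture."""
    hard = lambda x, y: nnF.one_hot(y, num_classes).float()
    teacher = train_with_targets(make_model(), loader, hard, T)
    soft_teacher = lambda x, y: nnF.softmax(teacher(x).detach() / T, dim=1)
    assistant = train_with_targets(make_model(), loader, soft_teacher, T)
    soft_assistant = lambda x, y: nnF.softmax(assistant(x).detach() / T, dim=1)
    student = train_with_targets(make_model(), loader, soft_assistant, T)
    return teacher, assistant, student
```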
### _Impact on Training_
For training the DNN classifier \(F\), the training algorithm minimizes the empirical risk:
\[\arg\min_{\theta_{F}}-\frac{1}{|\mathcal{X}|}\sum_{X\in\mathcal{X}}\sum_{i\in 0..N}Y_{i}(X)\log F_{i}(X) \tag{7}\]
So, to decrease the log-likelihood \(\ell(F,X,Y(X))=-Y(X)\cdot\log F(X)\) of \(F\) on \((X,Y(X))\), the optimizer adjusts \(\theta_{F}\) so
that \(F(X)\) gets close to \(Y(X)\). As \(Y(X)\) is a one-hot vector, the optimization problem can be written as
\[\arg\min_{\theta_{F}}-\frac{1}{|\mathcal{X}|}\sum_{X\in\mathcal{X}}\log F_{t(X)}(X) \tag{8}\]
So, the optimizer effectively pushes the model to make overconfident predictions on \(t(X)\), while pushing other probabilities to zero. In contrast, when the student network is trained using soft labels \(F(X)\), the optimization problem is given by:
\[\arg\min_{\theta_{F^{d}}}-\frac{1}{|\mathcal{X}|}\sum_{X\in\mathcal{X}}\sum_{i\in 0..N}F_{i}(X)\log F_{i}^{d}(X) \tag{9}\]
Using probabilities \(F_{j}(X)\) ensures that the training algorithm constrains the output neurons \(F_{j}^{d}(X)\) proportionally to their likelihood when updating \(\theta_{F^{d}}\). This ensures that the model learns the relative likelihood among classes and is not forced into hard predictions, thus increasing its generalizability. Introducing \(F^{a}\) in between them further increases this trend. Ideally, with enough samples and training, \(F^{a}\) would eventually converge to \(F\), and consequently \(F^{d}\) would converge to \(F^{a}\), but empirically the model robustness is seen to increase with distillation.
### _Model Sensitivity_
The training procedure gives an intuition about the higher generalizability of the distilled models, but a sensitivity metric shows how crafting adversarial samples gets harder with increasing the distillation temperature. As discussed before, adversarial attacks identify inputs that have a higher chance of misclassification with lower perturbations. In other words, models that are more sensitive to perturbations will cause a higher change in output with small input variations. The authors in [11] have shown that the sensitivity of a model with respect to its input variations is dictated by the magnitude of its Jacobian or gradients. The expression for the \((i,j)\) component of the Jacobian for model \(F\) at distillation temperature \(T\) is:
\[\frac{\partial F_{i}(X)}{\partial X_{j}}\bigg{|}_{T} =\frac{\partial}{\partial X_{j}}\left(\frac{e^{z_{i}(X)/T}}{\sum_ {l=0}^{N-1}e^{z_{l}(X)/T}}\right) \tag{10}\] \[=\frac{1}{T}\frac{e^{z_{i}/T}}{g^{2}(X)}\left(\sum_{l=0}^{N-1} \left(\frac{\partial z_{i}}{\partial X_{j}}-\frac{\partial z_{l}}{\partial X _{j}}\right)e^{z_{l}/T}\right) \tag{11}\]
Here \(z_{0}(X),\ldots,z_{N-1}(X)\) are the logits and \(g(X)=\sum_{l=0}^{N-1}e^{z_{l}(X)/T}\) is the softmax normalisation. For fixed values of the logits, increasing the distillation temperature \(T\) decreases the magnitude of the model gradients, as can be inferred from the above expression. This reduces the model sensitivity to adversarial perturbations, and larger input variations are needed to craft the samples. In other words, increasing distillation temperature reduces the magnitude of the adversarial gradients. It is to be noted that the distillation temperature is introduced only during training, and it is not used in the evaluation, i.e. for testing the models, \(T\) is set to 1.
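As an illustration, the gradient-magnitude statistic used in the experiments of section V can be estimated as follows (our own helper; the text does not fix which output component's gradient is monitored, so the predicted-class probability is used here, with the model evaluated at \(T=1\)).

```python
import torch

def mean_gradient_amplitude(model, x):
    """Mean absolute input-gradient of the predicted-class probability,
    averaged over all input dimensions (one value per sample in the batch x)."""
    x = x.clone().requires_grad_(True)
    probs = torch.softmax(model(x), dim=1)      # evaluation at T = 1
    top = probs.max(dim=1).values.sum()
    grad, = torch.autograd.grad(top, x)
    return grad.abs().flatten(1).mean(dim=1)

# Proportion of "near-zero" gradients (threshold 1e-10, cf. section V):
# frac_small = (mean_gradient_amplitude(model, x_test) < 1e-10).float().mean()
```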
### _Model Robustness_
To measure the efficacy of defensive distillation, the robustness metric was introduced in [18]. The robustness of a model refers to its resistance to adversarial perturbations. A robust DNN should perform well (in terms of prediction accuracy) both on the training and the outside data, and should be consistent in its class predictions for inputs in the neighborhood of a given sample. Robustness is achieved when this consistency can be ensured in a closed neighborhood. The larger this neighborhood, the higher the robustness of the DNN. The neighborhood cannot be extended indefinitely, as that would correspond to a constant function. The robustness of a trained DNN classifier \(F\) is given as:
\[\rho_{adv}(F)=E_{\mu}\left[\Delta_{adv}(X,F)\right] \tag{12}\]
\(X\) is the input data drawn from the true distribution \(\mu\), and \(\Delta_{adv}(X,F)\) is the minimum perturbation that causes a misclassification. Therefore,
\[\Delta_{adv}(X,F)=\arg\min_{\delta X}\{\|\delta X\|:F(X+\delta X)\neq F(X)\} \tag{13}\]
The distance metric can be chosen accordingly. For our project, we compute the average value of \(\Delta_{adv}(X,F)\) over all test samples. The higher this value, the more robust the model.
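Given an attack routine that returns the smallest adversarial perturbation it can find, this empirical robustness can be estimated as sketched below (the `attack` callable is a placeholder for one of the adversaries of section II; the names are ours).

```python
import torch

def empirical_robustness(model, attack, x_test, p=2):
    """Empirical version of eq. (12): average over test samples of the norm of
    the minimal perturbation found by `attack` (an estimate of Delta_adv)."""
    norms = []
    for x in x_test:
        x = x.unsqueeze(0)
        x_adv = attack(model, x)
        norms.append(torch.linalg.vector_norm((x_adv - x).flatten(), ord=p).item())
    return sum(norms) / len(norms)
```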
## IV Experiments
### _Dataset_
For our project, we used two legacy image classification datasets - MNIST [14] and CIFAR10 [19]. The MNIST dataset is used for classifying handwritten digits (0-9). It consists of 60,000 training samples and 10,000 testing samples, and the pixels are encoded to \([0,1]\). The CIFAR10 dataset consists of 60,000 color images (three color components) divided into 50,000 training and 10,000 testing samples. Similar to MNIST, CIFAR10 images are classified into 10 mutually exclusive classes. We chose these simple datasets because we could construct relatively shallow models to achieve satisfactory performance. Also, we needed to train and evaluate a large number of models for different temperatures and multiple steps, so we wanted to work with small datasets that can be trained efficiently. Moreover, the previous papers on defensive distillation also dealt with these two simple multi-class classification problems, and we wanted to demonstrate improvements over them.
### _Model Architecture and Training_
To remain consistent with previous works on defensive distillation, we used the same models as [11] and [6]. Both DNN architectures have 9 layers, consisting of 4 convolution layers, 2 max-pooling layers, and 3 fully connected dense layers in the head. Momentum and parameter decay were used to ensure convergence, and dropout was used to prevent overfitting. For the softmax layer, we experimented with distillation temperatures \(T=\{1,2,5,10,20,30,40\}\). For training, we used a batch size of 128, a learning rate of \(\eta=0.01\), a decay rate of \(10^{-6}\), momentum 0.9, and the SGD optimizer for 50 epochs.
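For concreteness, a sketch of such a 9-layer network in PyTorch is given below; the filter and unit counts are our own assumptions (the text fixes only the layer types and counts), and the temperature-\(T\) softmax is applied to the returned logits when forming the training loss, as in section III.

```python
import torch.nn as nn

class DistillationCNN(nn.Module):
    """9 layers: 4 conv + 2 max-pool + 3 fully connected. Widths are assumed,
    not taken from the text. Returns raw logits; the temperature-T softmax is
    applied externally in the loss."""
    def __init__(self, in_channels=1, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(200), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(200, 200), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(200, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```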
### _Attack Parameters_
We experimented with the three attacks proposed in [6] as described in section II. For each of the attacks, we evaluated the robustness (average perturbation) of the teacher, assistant, and student models for the entire temperature range. For the \(L_{2}\) attack, we used a learning rate of \(10^{-2}\), a maximum number of iterations of 10,000, an initial value of the constant \(c\) of \(10^{-3}\), and zero confidence value. For the \(L_{0}\) attack, we reduced the maximum number of iterations to 1,000, and the upper limit of the constant to \(2\times 10^{-6}\). For the \(L_{\infty}\) attack, the learning rate was fixed at \(5\times 10^{-3}\), the initial value of the constant was set to \(10^{-5}\), and the upper threshold was set to \(20\). The maximum number of iterations was kept the same as for the \(L_{0}\) attack.
## V Results
### _Accuracy_
In the first set of experiments, we wanted to observe the variation of the accuracy of the distilled models as compared to the original teacher model when trained at different distillation temperatures. For each of the datasets, MNIST and CIFAR10, we evaluate the performance of the models on the test dataset (10,000 images). As mentioned before, for testing we keep the distillation temperature at \(1\). The baseline accuracy for both datasets is measured by evaluating the teacher model trained with temperature \(T=1\). The accuracy of the baseline MNIST model \(F_{MNIST}\) was \(99.38\%\), and of \(F_{CIFAR10}\) was \(77.72\%\). From figure 3 we can notice that the accuracy for either of the datasets varies very little with distillation. For MNIST, the maximum variation that can be noticed w.r.t. the teacher is about \(0.15\%\), whereas for CIFAR10, the maximum variation noticed is \(1.2\%\). It should also be noted that for each of the models, there is a trend of accuracy decreasing with temperature. With the increase in temperature, most of the class outputs have higher probability values and that might make the class distinctions a bit difficult (as \(T\rightarrow\infty\), the output of the softmax converges to \(1/N\)). It is also interesting to note that in most cases, for a fixed temperature, the distilled models lead to better performance. This may be due to the increase in generalization ability of the network that leads to better prediction on unobserved samples.
### _Sensitivity_
The second set of experiments pertains to the effect of distillation on the sensitivity of the models. As described in Section III, the sensitivity of a model is measured using the magnitude of its gradients. For each of the two datasets, we computed the gradient magnitudes for the 10,000 test samples as inputs, and took their average. According to the hypothesis presented in the analysis, increasing the distillation temperature should result in decreasing the magnitude of the gradients. Small gradients mean the function is smoother around the sample points in the distribution. A smoother function with lower gradient magnitudes would need higher perturbation for misclassification, which we will show later. The effect of temperature is illustrated in figure 4. For the teacher models, each trained at different temperatures, we compute the magnitude of the gradients averaged over all input dimensions. Repeating this for 10,000 test samples, we have 10,000 instances of mean gradient amplitudes. We then calculate the proportion of samples for which the gradient magnitude is close to zero. For simplifying the illustration, we consider any gradient with amplitude \(<10^{-10}\) to be close to zero. The proportion of such gradients is plotted for both models. As expected, higher temperatures cause the trained models to produce smaller gradients and thus smoother outputs.
### _Robustness_
Robustness is measured by computing the minimum perturbation needed for misclassification (Sec III). In the third set of experiments, we evaluate the efficacy of robustness against the attacks stated in section II. Our goal is to observe how temperature affects the robustness of a model, and if the introduction of the assistant in distillation improves the robustness of the student model. To evaluate this, we craft adversarial samples using each attack on all 10,000 inputs from the test set. For each sample input, the minimum perturbation for misclassifying it is measured (distance between the input and the crafted sample). Finally, we calculate the average of the perturbations over the entire test set which is the robustness value of the model. We repeat this experiment for both the MNIST and CIFAR10 datasets.
Figure 5 is a collection of plots for different models. In each subplot, the orange line indicates the variation of adversarial perturbation for the assistant model, the blue line indicates that of the student model, and the red dotted line is that of
Fig. 4: **Influence of distillation temperature on model sensitivity:** The mean value of the gradient amplitudes calculated for 10,000 test samples for different distillation temperatures are computed, and the proportion of small gradients are calculated. As we expected, higher temperature causes more gradients to go to very small values, effectively smoothing the network output.
Fig. 3: **Influence of distillation on accuracy:** The accuracy of the teacher, assistant, and student models for different distillation temperatures evaluated on the test datasets.
the baseline teacher model. For each dataset, the plots on the top row are of the mean deviations (or perturbations), which is the robustness metric, and the bottom row is the maximum deviation required to craft an adversarial sample. Irrespective of the attack, it can be observed that, in general, increasing the distillation temperature improves the robustness of the model. Also, on average, the student model has higher robustness than the teacher and assistant models. This shows that introducing an assistant network did improve the robustness against adversarial attacks in most cases. Among the different attacks, it can be noticed that \(L_{2}\) is the most efficient as it generates adversarial samples successfully with the least perturbations among the three, whereas \(L_{0}\) is the least efficient. From these sets of experiments, it can be inferred that an assistant model in distillation does help in defending better against adversarial attacks with effectively very minimal hit to performance.
### _Confidence_
The final experiments that we conducted were computing the confidence values of the previously stated models. The confidence of a model calculated over a dataset \(\mathcal{X}\) is the average of the following quantity over all \(X\in\mathcal{X}\):
\[C(X)=\left\{\begin{array}{ll}0&\text{if }\operatorname*{arg\,max}_{i}F_{i}(X)\neq t(X)\\ \max_{i}F_{i}(X)&\text{otherwise}\end{array}\right. \tag{14}\]
Here \(t(X)\) is the correct class for input \(X\). In [11], the authors have stated the confidence values increase with distillation temperature for the models trained on CIFAR10. We repeated the same experiment for our models but did not observe any concrete trend over temperature. We also did not observe any substantial difference among the teacher, assistant, and student models as shown in Fig. 6. The variation in confidence values is very little in either case.
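In code, this average confidence is simply (assuming a model that returns logits; our notation):

```python
import torch

def average_confidence(model, x, labels):
    """Mean of C(X) over a batch: the top predicted probability when the
    prediction is correct, zero otherwise (equation (14))."""
    probs = torch.softmax(model(x), dim=1)
    top_prob, pred = probs.max(dim=1)
    return (top_prob * (pred == labels).float()).mean()
```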
## VI Multi-step Distillation
As we noticed an improvement in robustness after introducing an assistant in defensive distillation, we wanted to observe the effect of multi-step distillation. Specifically, for the experiments in this section, six models were trained sequentially after the teacher, and the output of each was used as the input labels for the next one. First, we study the effect of multi-step distillation on the classification performance of the models. Fig. 7 plots the model accuracies for different distillation steps and varying the temperature. We can observe that the variation in performance is very little, although we can see the temperature trend in the case of CIFAR10 models as seen before. Also, increasing the levels of distillation does not have a significant effect on accuracy for either of the model classes.
Testing the robustness of multi-step distilled models offers some interesting insights. Fig. 8 is plotted using the robustness metric calculated for different distilled models at varying temperatures. For the sake of clarity, we only show the results of three steps of distillation. The figures on the left show the increasing trend of robustness with increasing temperature as shown before. The figures on the right show the robustness value averaged over all temperatures. These figures clearly show that robustness against the attacks increases as we carry out distillation through multiple steps. Although the improvement is not large, it is still noticeable. We only show the plots for \(L_{0}\) and \(L_{2}\) attacks on CIFAR10 models, but the same can be shown for other models and attacks as well. In conclusion, multi-step distillation does help in making
Fig. 5: **Influence of temperature and distillation on robustness:** The plots show the magnitude of average perturbation or robustness (top) and max perturbation (bottom) for each of the two datasets. It can be observed that generally robustness increases with distillation temperature and the introduction of the assistant (orange) increases the robustness of the student (blue).
Fig. 6: **Influence of distillation on confidence:** The confidence values of the CIFAR10 models computed over the test set.
Fig. 7: **Model accuracies for multi-step distillation:** The performance of the MNIST and CIFAR10 models for different levels of distillation and increasing temperature.
the model more robust against attacks with an insignificant hit to performance.
## VII Conclusion
In this project we build on the work in defensive distillation [11], and inspired by the success of TAKD [12], we proposed that introducing the assistant model in distillation would improve the robustness of the student against adversarial attacks. We use the state-of-the-art adversarial attacks proposed in [6] and test them on our models trained on the CIFAR10 and MNIST datasets. For both datasets, we verify the claims on robustness and sensitivity of the distilled models, and also provide empirical evidence supporting our proposed hypothesis. We further experiment on multi-step distillations, and successfully show that with the increase of distillation levels, models tend to get more robust with very little hit to performance. However, we acknowledge that the field of defensive distillation is in its infancy and it is very difficult to analytically prove these claims. Also, modern adversarial attacks are very effective and for most cases distillation cannot provide substantial defense. In the future, we would like to delve deeper into the analysis and try to mathematically support our statements. We would also like to experiment with more architectures and attacks to generalize our claims. Defensive distillation does show promise but still needs to be studied extensively before being implemented in real-world applications.
|
2310.03395 | Returns to the origin of the Pólya walk with stochastic resetting | We consider the simple random walk (or P\'olya walk) on the one-dimensional
lattice subject to stochastic resetting to the origin with probability $r$ at
each time step. The focus is on the joint statistics of the numbers
${\mathcal{N}}_t^{\times}$ of spontaneous returns of the walker to the origin
and ${\mathcal{N}}_t^{\bullet}$ of resetting events up to some observation time
$t$. These numbers are extensive in time in a strong sense: all their joint
cumulants grow linearly in $t$, with explicitly computable amplitudes, and
their fluctuations are described by a smooth bivariate large deviation
function. A non-trivial crossover phenomenon takes place in the regime of weak
resetting and late times. Remarkably, the time intervals between spontaneous
returns to the origin of the reset random walk form a renewal process described
in terms of a single `dressed' probability distribution. These time intervals
are probabilistic copies of the first one, the `dressed' first-passage time.
The present work follows a broader study, covered in a companion paper, on
general nested renewal processes. | Claude Godrèche, Jean-Marc Luck | 2023-10-05T09:03:01Z | http://arxiv.org/abs/2310.03395v2 | # Returns to the origin of the Polya walk with stochastic resetting
###### Abstract
We consider the simple random walk (or Polya walk) on the one-dimensional lattice subject to stochastic resetting to the origin with probability \(\boldsymbol{r}\) at each time step. The focus is on the joint statistics of the numbers \(\boldsymbol{\mathcal{N}}_{\boldsymbol{t}}^{\boldsymbol{\times}}\) of spontaneous returns of the walker to the origin and \(\boldsymbol{\mathcal{N}}_{\boldsymbol{t}}^{\boldsymbol{\bullet}}\) of resetting events up to some observation time \(\boldsymbol{t}\). These numbers are extensive in time in a strong sense: all their joint cumulants grow linearly in \(\boldsymbol{t}\), with explicitly computable amplitudes, and their fluctuations are described by a smooth bivariate large deviation function. A non-trivial crossover phenomenon takes place in the regime of weak resetting and late times. Remarkably, the time intervals between spontaneous returns to the origin of the reset random walk form a renewal process described in terms of a single 'dressed' probability distribution. These time intervals are probabilistic copies of the first one, the 'dressed' first-passage time. The present work follows a broader study, covered in a companion paper, on general nested renewal processes.
## 1 Introduction
This work builds upon a previous study on the replication of a renewal process at random times, which is equivalent to nesting two generic renewal processes, or, alternatively, to considering a renewal process subject to random resetting [1]. In that study, we investigated the interplay between the two probability laws governing the distribution of time intervals between renewals, on the one hand, and resettings, on the other hand, resulting in a phase diagram that highlights a rich range of behaviours.
In the present work, we investigate the specific case where the internal renewal process consists of the epochs of returns to the origin of the simple random walk (or Polya walk [2]) on the one-dimensional lattice, while the external one involves discrete-time reset events at which the process is restarted from the origin with probability \(r\) at each time step. The position \(x_{t}\) of the walker at discrete time \(t\) thus obeys the recursion
\[x_{t+1}=\begin{cases}0&\text{with probability }r,\\ x_{t}+\eta_{t+1}&\text{with probability }1-r,\end{cases} \tag{1.1}\]
where \(\eta_{t}=\pm 1\) with equal probabilities. The walk starts at the origin, \(x_{0}=0\). Figure 1 illustrates a sample path of the walk, showing spontaneous returns to the origin marked by crosses and reset events marked by dots. Figure 2 provides a depiction of these temporal events and of the intervals of time between them.
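For later reference, the recursion (1.1) and the two counting observables studied below are simple to simulate directly (a minimal Python sketch; the variable and function names are ours).

```python
import random

def reset_polya_walk(t_max, r, seed=None):
    """Simulate x_t of (1.1); return the numbers of spontaneous returns to the
    origin (crosses) and of resetting events (dots) up to time t_max."""
    rng = random.Random(seed)
    x = 0
    n_cross = n_dot = 0
    for _ in range(t_max):
        if rng.random() < r:           # resetting event
            x = 0
            n_dot += 1
        else:                          # ordinary +/-1 step
            x += rng.choice((-1, 1))
            if x == 0:                 # spontaneous return to the origin
                n_cross += 1
    return n_cross, n_dot
```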
Research in the theory of resetting processes has predominantly concentrated on continuous-time stochastic processes (see [3] for a review). In contrast, less emphasis has been devoted to discrete-time processes. An illustrative example of these processes involves discrete-time random walks with continuous distributions of steps subject to resetting [4]. Recent studies have delved into the statistics of extremes and records of symmetric random walks with stochastic resetting [5, 6]. Furthermore, investigations into discrete-time lattice random walks with resetting have also been carried out. Examples include unidirectional random walks with random restarts [7], random walks where the walker is relocated to the previous maximum [8], and random walks with preferential relocations to previously visited locations [9].
Figure 1: Example of a path of the Polya walk on the one-dimensional lattice under stochastic resetting, generated by a simulation with \(r=0.08\). The walk starts at the origin. It restarts afresh at the origin at each resetting event, figured by a dot. Spontaneous returns to the origin are figured by crosses.
However, it is noteworthy that the Polya walk subject to resetting, defined by (1.1), has received relatively limited attention in the literature [10, 11, 6, 12, 13]. References [10, 11] deal with general first-passage properties of lattice random walks in discrete time, with application to the Polya walk, while [12] contains a study of some aspects of the statistics of records for the same walk. In [13], an analysis of the survival probability of symmetric random walks with stochastic resetting was performed, specifically focussing on the probability for the walker not to cross the origin up to time \(t\), including the example of the Polya walk (1.1). Finally, the statistics of extremes and records for the Polya walk with stochastic resetting are discussed in [6].
The focus of the present work is on the joint statistics of the numbers \(\mathcal{N}_{t}^{\times}\) of spontaneous returns to the origin of the reset Polya walk (1.1), and \(\mathcal{N}_{t}^{\bullet}\), denoting the count of reset events, up to a given time \(t\). These are the simplest observables one can think of for this process. Their sum \(\mathcal{N}_{t}^{\times\bullet}=\mathcal{N}_{t}^{\times}+\mathcal{N}_{t}^{\bullet}\) is the total time spent by the walker at the origin.
The motivation for this research stems from the analysis presented in the companion paper [1]. The latter predicts that the more regular of two nested renewal processes always governs the overall regularity of the entire process. Here, the two renewal processes in question are made of the sequence of spontaneous returns to the origin of the Polya walk, on the one hand, and of the sequence of resetting events, on the other hand. The latter--the more regular process of the two--is a Bernoulli process, as can be seen from its definition (1.1). In such a circumstance, as demonstrated in [1], \(\langle\mathcal{N}_{t}^{\times}\rangle\) grows linearly in time and typical fluctuations of \(\mathcal{N}_{t}^{\times}\) around its mean value are relatively negligible. The purpose of the present work is to corroborate these general results and complete them by a thorough quantitative analysis of the simple specific case at hand--the Polya walk under stochastic resetting.
The setup and the main outcomes of this research are as follows. Section 2 gives an exposition of background concepts and results. For the Polya walk without resetting (section 2.1), we recall results concerning the distribution of the intervals between consecutive returns to the origin, and the statistics of the number \(N_{t}\) of such returns up to some time \(t\). Section 2.2 contains a reminder on the statistics of resetting events in discrete time. Section 3 presents the detailed derivation of the joint probability
Figure 2: Sketch of the temporal events for the path of figure 1. Spontaneous returns to the origin of the walk are figured by crosses, resetting events by dots. The intervals of time between two crosses, \(\boldsymbol{\tau}_{1},\boldsymbol{\tau}_{2},\dots\), have common distribution \(\rho(\tau)\) (see (2.1), (2.2)). The intervals of time between two resettings, \(\boldsymbol{T}_{1},\dots,\boldsymbol{T}_{4}\), have the geometric distribution (2.32). The last interval, \(B_{t}\), represents the backward recurrence time, or age of the resetting process at time \(t\), i.e., the time elapsed since the previous resetting event.
generating function of the random variables \(\mathcal{N}_{t}^{\times}\) and \(\mathcal{N}_{t}^{\bullet}\) at any finite time \(t\). As a first application of the key equation (3.20), derived in section 3.1, the mean values of \(\mathcal{N}_{t}^{\times}\) and \(\mathcal{N}_{t}^{\bullet}\) are shown to grow linearly in time, as
\[\langle\mathcal{N}_{t}^{\times}\rangle\approx A^{\times}t,\qquad\langle \mathcal{N}_{t}^{\bullet}\rangle\approx rt,\qquad\langle\mathcal{N}_{t}^{ \times\bullet}\rangle\approx At, \tag{1.2}\]
where the amplitude
\[A=A^{\times}+r=\sqrt{\frac{r}{2-r}} \tag{1.3}\]
is identified with the steady-state probability for the walker to be at the origin. In addition, we give in section 3.2 an interpretation of the distribution of \(\mathcal{N}_{t}^{\times}\) in terms of a single 'dressed' renewal process, and discuss its consequences. An in-depth investigation of the statistics of \(\mathcal{N}_{t}^{\times}\) and \(\mathcal{N}_{t}^{\bullet}\) in the late-time regime is done in section 4, highlighting the fact that these quantities are extensive in a strong sense: their joint cumulants grow linearly in time, as
\[\langle(\mathcal{N}_{t}^{\times})^{k}(\mathcal{N}_{t}^{\bullet})^{\ell} \rangle_{c}\approx c_{k,\ell}\,t. \tag{1.4}\]
We provide a method to evaluate all the cumulant amplitudes \(c_{k,\ell}\), and we give the explicit expressions of the first amplitudes corresponding to \(k+\ell\leq 3\) (see (4.13)-(4.15)). The above scaling law of cumulants is virtually equivalent to the statement that large fluctuations of \(\mathcal{N}_{t}^{\times}\) and \(\mathcal{N}_{t}^{\bullet}\) far from their mean values obey a large deviation formula of the form
\[\mathbb{P}(\mathcal{N}_{t}^{\times}\approx\xi t,\ \mathcal{N}_{t}^{\bullet} \approx\eta t)\sim\mathrm{e}^{-I(\xi,\eta)t}, \tag{1.5}\]
where the bivariate large deviation function \(I(\xi,\eta)\) is the Legendre transform of the bivariate entropy function \(S(\lambda,\mu)\) generating the cumulant amplitudes \(c_{k,\ell}\). The ensuing univariate large deviation functions \(I^{\bullet}(\eta)\), \(I^{\times}(\xi)\), and \(I(\varphi)\), corresponding respectively to \(\mathcal{N}_{t}^{\bullet}\), \(\mathcal{N}_{t}^{\times}\) and their sum \(\mathcal{N}_{t}^{\times\bullet}\), are plotted in figure 7. In the crossover regime at weak resetting and late times, studied in section 5, it is found that
\[\mathcal{N}_{t}^{\times}\approx\sqrt{t}\,\boldsymbol{\zeta}, \tag{1.6}\]
where the rescaled random variable \(\boldsymbol{\zeta}\) has a limiting distribution with density \(f(\zeta,u)\), depending solely on the parameter \(u=rt=\langle\mathcal{N}_{t}^{\bullet}\rangle\). Figure 8 shows the density \(f(\zeta,u)\) for several values of this parameter, illustrating the crossover between a half-Gaussian form at \(u=0\) and a drifting Gaussian at large \(u\). Section 6 contains a brief discussion, where the main outcomes of the present work are put in perspective with those of the companion paper [1]. Some calculation details pertaining to section 3 are relegated to an appendix.
## 2 Background concepts
### Polya walk without resetting
As is well documented (see, e.g., [14]), the sequence of returns to the origin of the Polya walk forms a discrete renewal process. Let us denote by \(\boldsymbol{T}_{0\to 0}\) the time of first
return to the origin (from either side), and its distribution by
\[\rho(\tau)=\mathbb{P}(\boldsymbol{T}_{0\to 0}=\tau). \tag{2.1}\]
This quantity is non-zero whenever \(\tau=2,4,\dots\) is an even integer. This is also the common distribution of the intervals between two consecutive returns to the origin, denoted by \(\boldsymbol{\tau}_{1},\boldsymbol{\tau}_{2},\dots\), which are independent copies of \(\boldsymbol{T}_{0\to 0}\). The distribution \(\rho(\tau)\) is known in terms of its generating function [14]
\[\tilde{\rho}(z)=\sum_{\tau\geq 0}z^{\tau}\rho(\tau)=1-\sqrt{1-z^{2}}. \tag{2.2}\]
Introducing the binomial probabilities
\[b_{n}=\frac{(2n)!}{(2^{n}n!)^{2}}=\frac{\binom{2n}{n}}{2^{2n}}, \tag{2.3}\]
with generating function
\[\tilde{b}(z)=\sum_{n\geq 0}b_{n}z^{n}=\frac{1}{\sqrt{1-z}}, \tag{2.4}\]
we have
\[\rho(2n)=\frac{b_{n}}{2n-1} \tag{2.5}\]
for \(n\geq 1\), i.e.,
\[\rho(2)=\frac{1}{2},\quad\rho(4)=\frac{1}{8},\quad\rho(6)=\frac{1}{16},\quad \rho(8)=\frac{5}{128}, \tag{2.6}\]
and so on. When the even time \(\tau\) becomes large, we have
\[\rho(\tau)\approx\sqrt{\frac{2}{\pi\tau^{3}}}. \tag{2.7}\]
The corresponding survival probability, defined as the complementary distribution function of \(\boldsymbol{T}_{0\to 0}\),
\[R(\tau)=\mathbb{P}(\boldsymbol{T}_{0\to 0}>\tau)=\sum_{j>\tau}\rho(j), \tag{2.8}\]
obeys \(R(\tau-1)-R(\tau)=\rho(\tau)\). Its generating function reads
\[\tilde{R}(z)=\sum_{\tau\geq 0}z^{\tau}R(\tau)=\frac{1-\tilde{\rho}(z)}{1-z}= \frac{1+z}{\sqrt{1-z^{2}}}. \tag{2.9}\]
We have therefore
\[R(2n)=R(2n+1)=b_{n}, \tag{2.10}\]
i.e.,
\[R(0)=R(1)=1,\quad R(2)=R(3)=\frac{1}{2},\quad R(4)=R(5)=\frac{3}{8}, \tag{2.11}\]
and so on. When \(\tau\) becomes large, irrespective of its parity, we have
\[R(\tau)\approx\sqrt{\frac{2}{\pi\tau}}. \tag{2.12}\]
The asymptotic estimate (2.7) is minus twice the derivative of (2.12), as it should be, because (2.7) only holds for even times \(\tau\).
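These closed forms are easy to check numerically; a short script verifying (2.5) and (2.10) against the relation \(R(\tau-1)-R(\tau)=\rho(\tau)\) and the values (2.6), in exact rational arithmetic, might read:

```python
from fractions import Fraction
from math import comb

def rho(tau):
    """First-return probability (2.5); non-zero only for even tau >= 2."""
    if tau < 2 or tau % 2:
        return Fraction(0)
    n = tau // 2
    return Fraction(comb(2 * n, n), 2 ** (2 * n)) / (2 * n - 1)

def R(tau):
    """Survival probability (2.10): R(2n) = R(2n+1) = b_n."""
    n = tau // 2
    return Fraction(comb(2 * n, n), 2 ** (2 * n))

# Consistency checks: R(tau-1) - R(tau) = rho(tau), plus the values (2.6)
assert all(R(tau - 1) - R(tau) == rho(tau) for tau in range(1, 40))
assert [rho(t) for t in (2, 4, 6, 8)] == [Fraction(1, 2), Fraction(1, 8),
                                          Fraction(1, 16), Fraction(5, 128)]
```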
We now focus on the distribution of the number \(N_{t}\) of returns of the walker to the origin up to time \(t\). This random variable is defined by the condition
\[\boldsymbol{\tau}_{1}+\cdots+\boldsymbol{\tau}_{N_{t}}\leq t<\boldsymbol{ \tau}_{1}+\cdots+\boldsymbol{\tau}_{N_{t}+1}, \tag{2.13}\]
hence the total time \(t\) is decomposed into
\[t=\boldsymbol{\tau}_{1}+\cdots+\boldsymbol{\tau}_{N_{t}}+b_{t}, \tag{2.14}\]
where the last interval, \(b_{t}\), is the backward recurrence time, or the age of the renewal process at time \(t\), i.e., the elapsed time since the last return to the origin. In the present discrete setting, \(b_{t}=0,1,\ldots,\boldsymbol{\tau}_{N_{t}+1}-1\).
A realisation of the set of random variables \(\boldsymbol{\tau}_{1},\ldots,\boldsymbol{\tau}_{N_{t}},b_{t}\), with \(N_{t}=n\), denoted by
\[\tilde{\mathcal{C}}=\{\tau_{1},\ldots,\tau_{n},b\}, \tag{2.15}\]
has weight
\[P(\tilde{\mathcal{C}})=\rho(\tau_{1})\ldots\rho(\tau_{n})\,R(b)\,\delta\Big{(} \sum_{i=1}^{n}\tau_{i}+b,t\Big{)}, \tag{2.16}\]
where \(\delta(i,j)\) is the Kronecker delta symbol.
The distribution of \(N_{t}\) ensues by summing the above weight over all variables \(\{\tau_{i}\}\) and \(b\):
\[p_{n}(t)=\mathbb{P}(N_{t}=n)=\sum_{\{\tau_{i}\},b}\rho(\tau_{1})\ldots\rho( \tau_{n})R(b)\,\delta\Big{(}\sum_{i=1}^{n}\tau_{i}+b,t\Big{)}. \tag{2.17}\]
The expression thus obtained is a discrete convolution, which is easier to handle by taking its generating function with respect to \(t\), which reads
\[\sum_{t\geq 0}w^{t}\,p_{n}(t)=\tilde{\rho}(w)^{n}\tilde{R}(w), \tag{2.18}\]
where \(\tilde{\rho}(w)\) and \(\tilde{R}(w)\) are respectively given by (2.2) and (2.9). The distribution of \(N_{t}\) can be expressed compactly through the probability generating function
\[Z(z,t)=\langle z^{N_{t}}\rangle=\sum_{n\geq 0}z^{n}\,p_{n}(t). \tag{2.19}\]
The generating function of the latter quantity with respect to \(t\) is
\[\tilde{Z}(z,w)=\sum_{t\geq 0}w^{t}Z(z,t)=\tilde{R}(w)\sum_{n\geq 0}(z\tilde{\rho}(w ))^{n}, \tag{2.20}\]
i.e.,
\[\tilde{Z}(z,w)=\frac{1-\tilde{\rho}(w)}{(1-w)(1-z\tilde{\rho}(w))}. \tag{2.21}\]
In particular, the generating function with respect to \(t\) of the mean number \(\langle N_{t}\rangle\) of returns reads
\[\sum_{t\geq 0}w^{t}\langle N_{t}\rangle=\frac{\partial}{\partial z}\tilde{Z}(z,w)\Big{|}_{z=1}=\frac{\tilde{\rho}(w)}{(1-w)(1-\tilde{\rho}(w))}=\frac{1+w}{( 1-w^{2})^{3/2}}-\frac{1}{1-w}. \tag{2.22}\]
We have therefore
\[\langle N_{2n}\rangle=\langle N_{2n+1}\rangle=(2n+1)b_{n}-1, \tag{2.23}\]
i.e.,
\[\langle N_{0}\rangle=\langle N_{1}\rangle=0,\quad\langle N_{2}\rangle=\langle N _{3}\rangle=\frac{1}{2},\quad\langle N_{4}\rangle=\langle N_{5}\rangle=\frac{ 7}{8}, \tag{2.24}\]
and so on. When time \(t\) becomes large, regardless of its parity, we have
\[\langle N_{t}\rangle\approx\sqrt{\frac{2t}{\pi}}. \tag{2.25}\]
The probability of having \(N_{t}=0\) is given by the generating function
\[\sum_{t\geq 0}w^{t}\,p_{0}(t)=\tilde{Z}(0,w)=\frac{1-\tilde{\rho}(w)}{1-w}= \tilde{R}(w) \tag{2.26}\]
(see (2.9)). We thus recover the expected result
\[p_{0}(t)=\mathbb{P}(\boldsymbol{\tau}>t)=R(t). \tag{2.27}\]
The asymptotic distribution of \(N_{t}\) in the regime of late times can be extracted through a scaling analysis of (2.21). Setting \(w=\mathrm{e}^{-s}\) and \(z=\mathrm{e}^{-p}\), and working to leading order in the continuum regime where \(s\) and \(p\) are small, we obtain
\[\int_{0}^{\infty}\mathrm{d}t\,\mathrm{e}^{-st}\langle\mathrm{e}^{-pN_{t}} \rangle\approx\frac{1}{s+p\sqrt{s/2}}. \tag{2.28}\]
Inverting the Laplace transforms in \(p\) and in \(s\) yields
\[\int_{0}^{\infty}\mathrm{d}t\,\mathrm{e}^{-st}\,p_{n}(t)\approx\sqrt{\frac{2} {s}}\,\mathrm{e}^{-\sqrt{2s}\,n}, \tag{2.29}\]
and finally
\[p_{n}(t)\approx\sqrt{\frac{2}{\pi t}}\,\mathrm{e}^{-n^{2}/(2t)}. \tag{2.30}\]
We have thus recovered the known property that the asymptotic distribution of the number \(N_{t}\) of returns to the origin of the simple random walk is a half-Gaussian [15]. The limit of this distribution as \(n\to 0\) is consistent with the asymptotic behaviour of \(R(t)\) given by (2.12). The moments of the distribution (2.30) read
\[\langle N_{t}^{2k}\rangle\approx\frac{(2k)!}{2^{k}k!}\,t^{k},\qquad\langle N_{ t}^{2k+1}\rangle\approx\sqrt{\frac{2}{\pi}}\,2^{k}k!\,t^{k+1/2}. \tag{2.31}\]
In particular, the first moment agrees with (2.25).
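The half-Gaussian form (2.30) and the exact mean (2.23) can also be confronted with an exact finite-\(t\) computation. The sketch below propagates the joint law of the walker's position and of \(N_{t}\); it assumes \(b_{n}=\binom{2n}{n}/4^{n}\), which is consistent with the values quoted in (2.11).

```python
# Exact law of the number of returns N_t of the simple random walk, obtained by
# propagating the joint distribution of (position, N_t); a sketch cross-checking
# (2.23), (2.25) and the n = 0 value of the half-Gaussian (2.30).
import math
import numpy as np

t_max = 200
origin = t_max
P = np.zeros((2 * t_max + 1, t_max // 2 + 2))    # P[position, N]
P[origin, 0] = 1.0

for _ in range(t_max):
    new = np.zeros_like(P)
    new[1:, :] += 0.5 * P[:-1, :]                # step +1
    new[:-1, :] += 0.5 * P[1:, :]                # step -1
    row = new[origin].copy()                     # arrivals at the origin are returns:
    new[origin, 1:], new[origin, 0] = row[:-1], 0.0
    P = new

p = P.sum(axis=0)                                # law of N_t at t = t_max
n = np.arange(p.size)
b = math.comb(t_max, t_max // 2) / 4 ** (t_max // 2)   # b_n (assumed central binomial)
print((n * p).sum(), (t_max + 1) * b - 1)        # exact mean vs (2.23)
print((n * p).sum(), math.sqrt(2 * t_max / math.pi))   # vs the estimate (2.25)
print(p[0], math.sqrt(2 / (math.pi * t_max)))    # p_0(t) vs (2.30) at n = 0
```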
### Statistics of resetting events
The resetting events also constitute a discrete renewal process, referred to in [1] as the external renewal process. The integer intervals of time \(\boldsymbol{T}_{1},\boldsymbol{T}_{2}\dots\) between two consecutive resettings have the geometric distribution
\[f(T)=r(1-r)^{T-1}\qquad(T\geq 1), \tag{2.32}\]
whose complementary distribution function is given by
\[\Phi(T)=\sum_{j>T}f(j)=(1-r)^{T}\qquad(T\geq 0). \tag{2.33}\]
The corresponding generating functions read
\[\tilde{f}(z) = \sum_{T\geq 1}z^{T}f(T)=\frac{rz}{1-(1-r)z}, \tag{2.34}\] \[\tilde{\Phi}(z) = \sum_{T\geq 0}z^{T}\Phi(T)=\frac{1-\tilde{f}(z)}{1-z}=\frac{1}{1-(1 -r)z}. \tag{2.35}\]
The number of resetting events \(M_{t}\) is defined by the condition
\[\boldsymbol{T}_{1}+\dots+\boldsymbol{T}_{M_{t}}\leq t<\boldsymbol{T}_{1}+ \dots+\boldsymbol{T}_{M_{t}+1}, \tag{2.36}\]
hence
\[t=\boldsymbol{T}_{1}+\dots+\boldsymbol{T}_{M_{t}}+B_{t},\qquad B_{t}=0,1, \dots,\boldsymbol{T}_{M_{t}+1}-1. \tag{2.37}\]
The last interval \(B_{t}\) is the backward recurrence time, or the age of the resetting process at time \(t\), i.e., the elapsed time since the last resetting event.
A realisation of the set of random variables \(\boldsymbol{T}_{1},\dots,\boldsymbol{T}_{M_{t}},B_{t}\), with \(M_{t}=m\), denoted by
\[\mathcal{C}=\{T_{1},\dots,T_{m},B\}, \tag{2.38}\]
has weight
\[P(\mathcal{C})=f(T_{1})\ldots f(T_{m})\,\Phi(B)\;\delta\Big{(}\sum_{i=1}^{m}T_{i}+ B,t\Big{)}. \tag{2.39}\]
Following the same approach as in (2.21), we have
\[Y(y,w)=\sum_{t\geq 0}w^{t}\langle y^{M_{t}}\rangle=\frac{\tilde{\Phi}(w)}{1-y \tilde{f}(w)}=\frac{1}{1-(1-r+ry)w}. \tag{2.40}\]
Hence
\[\langle y^{M_{t}}\rangle=(1-r+ry)^{t}, \tag{2.41}\]
implying that \(M_{t}=0,\ldots,t\) has the binomial distribution
\[\mathbb{P}(M_{t}=m)=\binom{t}{m}r^{m}(1-r)^{t-m} \tag{2.42}\]
at all times, in agreement with the property that resetting events are independent from each other, and therefore form a Bernoulli process. In particular, the mean number of resettings reads
\[\langle M_{t}\rangle=rt. \tag{2.43}\]
## 3 Spontaneous returns to the origin and resetting events
### The key equation
As stated in the introduction, the main purpose of this work is to analyse the joint distribution of the numbers \(\mathcal{N}_{t}^{\bullet}\) of dots representing resetting events and \(\mathcal{N}_{t}^{\times}\) of crosses representing spontaneous returns to the origin, for the reset Polya walk up to time \(t\). These numbers are respectively given by
\[\mathcal{N}_{t}^{\bullet}=M_{t}, \tag{3.1}\]
introduced above, and
\[\mathcal{N}_{t}^{\times}=N_{\boldsymbol{T}_{1}-1}+N_{\boldsymbol{T}_{2}-1}+ \cdots+N_{\boldsymbol{T}_{M_{t}}-1}+N_{B_{t}}, \tag{3.2}\]

as illustrated on figure 1. The sum of these two numbers is denoted by
\[\mathcal{N}_{t}^{\times\bullet}=\mathcal{N}_{t}^{\times}+\mathcal{N}_{t}^{ \bullet}, \tag{3.3}\]
and reads
\[\mathcal{N}_{t}^{\times\bullet}=\sum_{\tau=1}^{t}\delta(x_{\tau},0), \tag{3.4}\]
where \(x_{\tau}\) is the position of the walker at time \(\tau\) (see (1.1)). This is the total time spent by the walker at the origin, either by a resetting event (a dot) or by a spontaneous return (a cross).
Notice that the distribution of the number of dots, \(\mathcal{N}_{t}^{\bullet}=M_{t}\) (see (3.1)), is known from (2.40), (2.41), (2.42) and (2.43). Thus
\[\mathcal{Y}(y,w)=\sum_{t\geq 0}w^{t}\langle y^{\mathcal{N}_{t}^{\bullet}}\rangle =Y(y,w)=\frac{1}{1-(1-r+ry)w}, \tag{3.5}\]
\[\langle y^{\mathcal{N}_{t}^{\bullet}}\rangle=(1-r+ry)^{t}, \tag{3.6}\]
\[\mathbb{P}(\mathcal{N}_{t}^{\bullet}=\mathcal{N})=\binom{t}{\mathcal{N}}r^{ \mathcal{N}}(1-r)^{t-\mathcal{N}}, \tag{3.7}\]
\[\langle\mathcal{N}_{t}^{\bullet}\rangle=rt. \tag{3.8}\]
Notice also that the situation simplifies in the following two special cases. First, in the absence of resetting (\(r=0\)), we have
\[\mathcal{N}_{t}^{\times}=\mathcal{N}_{t}^{\times\bullet}=N_{t},\qquad\mathcal{ N}_{t}^{\bullet}=0, \tag{3.9}\]
second, when a resetting event occurs at every time step (\(r=1\)), then
\[\mathcal{N}_{t}^{\bullet}=\mathcal{N}_{t}^{\times\bullet}=t,\qquad\mathcal{N} _{t}^{\times}=0. \tag{3.10}\]
The central quantity for the determination of the joint statistics of \(\mathcal{N}_{t}^{\times}\) and \(\mathcal{N}_{t}^{\bullet}\) is the generating function
\[\mathcal{Z}(z,y,t)=\langle z^{\mathcal{N}_{t}^{\times}}y^{\mathcal{N}_{t}^{ \bullet}}\rangle, \tag{3.11}\]
where the average is taken over the external configurations \(\mathcal{C}\) (see (2.38)), with weight \(P(\mathcal{C})\) given by (2.39), and over the internal configurations \(\tilde{\mathcal{C}}\) (see (2.15)), with weight \(P(\tilde{\mathcal{C}})\) given by (2.16). Thus
\[\mathcal{Z}(z,y,t)=\sum_{\mathcal{C}}P(\mathcal{C})\sum_{\tilde{\mathcal{C}}}P (\tilde{\mathcal{C}})\,z^{\mathcal{N}_{t}^{\times}}y^{\mathcal{N}_{t}^{ \bullet}}, \tag{3.12}\]
with the notations
\[\sum_{\mathcal{C}}=\sum_{m\geq 0}\sum_{\{T_{i},\,B\}},\qquad\sum_{\tilde{ \mathcal{C}}}=\sum_{n\geq 0}\sum_{\{\tau_{i},\,b\}}. \tag{3.13}\]
The average over the internal variables of each term \(z^{N_{T_{i}-1}}\) with weight \(P(\tilde{\mathcal{C}})\) gives a factor \(Z(z,T_{i}-1)\) (see (2.19)). We then average over the external variables with weight \(P(\mathcal{C})\) to arrive at
\[\langle z^{\mathcal{N}_{t}^{\times}}y^{\mathcal{N}_{t}^{\bullet}}\rangle=\sum _{\mathcal{C}}P(\mathcal{C})y^{m}Z(z,T_{1}-1)\ldots Z(z,T_{m}-1)Z(z,B). \tag{3.14}\]
The expression thus obtained is a discrete convolution, which is easier to handle by taking its generating function with respect to \(t\), leading to
\[\tilde{\mathcal{Z}}(z,y,w)=\sum_{t\geq 0}w^{t}\mathcal{Z}(z,y,t)=\sum_{m\geq 0}y^{m} \tilde{\varphi}(z,w)^{m}\tilde{\psi}(z,w)=\frac{\tilde{\psi}(z,w)}{1-y\tilde{ \varphi}(z,w)}, \tag{3.15}\]
with
\[\tilde{\varphi}(z,w) = \sum_{T\geq 1}w^{T}f(T)\,Z(z,T-1), \tag{3.16}\] \[\tilde{\psi}(z,w) = \sum_{B\geq 0}w^{B}\Phi(B)\,Z(z,B). \tag{3.17}\]
The expressions (3.15)-(3.17) are quite general [1]. They hold for arbitrary distributions \(\rho(\tau)\) and \(f(T)\), both in a continuous and in a discrete setting. For the case at hand, the generating functions of \(Z(z,T)\), \(f(T)\) and \(\Phi(T)\) are respectively given in (2.21), (2.34) and (2.35). We have therefore
\[\tilde{\varphi}(z,w) = rw\,\tilde{Z}(z,\tilde{w}),\] \[\tilde{\psi}(z,w) = \tilde{Z}(z,\tilde{w}), \tag{3.18}\]
where we introduced the shorthand notation
\[\tilde{w}=(1-r)w, \tag{3.19}\]
and finally
\[\tilde{\mathcal{Z}}(z,y,w)=\frac{\tilde{Z}(z,\tilde{w})}{1-ryw\tilde{Z}(z, \tilde{w})}. \tag{3.20}\]
The expression (3.20), where \(\tilde{Z}(z,w)\) is given in (2.21), and \(\tilde{\rho}(w)\) in (2.2), is the key result of this section.
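As an elementary consistency check of (3.20), one can expand it in powers of \(w\) and verify that each coefficient is a properly normalised probability generating function whose marginal in \(y\) reproduces (3.6). The following sympy sketch does this for the first few times, with \(r\) kept symbolic.

```python
# Series checks on the key formula (3.20): normalisation and the binomial
# marginal (3.6) of the number of dots, for small times t.
import sympy as sp

z, y, w, r = sp.symbols('z y w r')
rho = 1 - sp.sqrt(1 - w**2)                          # tilde-rho(w)
Zt = (1 - rho) / ((1 - w) * (1 - z * rho))           # tilde-Z(z,w), eq. (2.21)
wt = (1 - r) * w
calZ = Zt.subs(w, wt) / (1 - r * y * w * Zt.subs(w, wt))   # eq. (3.20)

ser = sp.series(calZ, w, 0, 5).removeO()
for t in range(5):
    pgf = sp.simplify(ser.coeff(w, t))               # <z^{N_t^x} y^{N_t^o}>
    print(t,
          sp.simplify(pgf.subs({z: 1, y: 1}) - 1),          # expected 0
          sp.simplify(pgf.subs(z, 1) - (1 - r + r*y)**t))   # expected 0, cf. (3.6)
```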
The distribution of \(\mathcal{N}_{t}^{\bullet}=M_{t}\) is obtained by setting \(z=1\) in (3.20). We thus recover the expression (3.5), since \(\tilde{\mathcal{Z}}(1,y,w)=\mathcal{Y}(y,w)\), as it should be. The distribution of \(\mathcal{N}_{t}^{\times}\) is obtained by setting \(y=1\) in (3.20). We thus have
\[\tilde{\mathcal{Z}}(z,1,w)=\sum_{t\geq 0}w^{t}\langle z^{\mathcal{N}_{t}^{ \times}}\rangle=\frac{\tilde{Z}(z,\tilde{w})}{1-rw\tilde{Z}(z,\tilde{w})}. \tag{3.21}\]
This formula relates the same quantity with and without resetting. Rational relationships of this form are specific to Poissonian resetting in continuous time or to geometric resetting in discrete time (see, e.g., [3, 4, 5, 6, 11, 13]). Another interpretation of (3.21) will be given in section 3.2.
The general expression of the mean value \(\langle\mathcal{N}_{t}^{\times}\rangle\) ensues from (3.15), and reads
\[\sum_{t\geq 0}w^{t}\langle\mathcal{N}_{t}^{\times}\rangle = \frac{\mathrm{d}}{\mathrm{d}z}\mathcal{Z}(z,y,w)\Big{|}_{z=y=1} \tag{3.22}\] \[= \frac{1}{1-\tilde{f}(w)}\Bigg{(}\sum_{T\geq 1}w^{T}\langle N_{T- 1}\rangle\frac{f(T)}{1-w}+\sum_{B\geq 0}w^{B}\langle N_{B}\rangle\Phi(B)\Bigg{)}. \tag{3.23}\]
In the case at hand, this expression yields
\[\sum_{t\geq 0}w^{t}\langle\mathcal{N}_{t}^{\times}\rangle=\frac{(1-\tilde{w} )\tilde{\rho}(\tilde{w})}{(1-w)^{2}(1-\tilde{\rho}(\tilde{w}))}, \tag{3.24}\]
which can alternatively be obtained from (3.20).
In the absence of resetting (\(r=0\)), (3.24) gives back (2.22), as it should be, since \(\mathcal{N}_{t}^{\times}=N_{t}\). In the presence of resetting (\(r\neq 0\)), we obtain the linear growth law
\[\langle\mathcal{N}_{t}^{\times}\rangle\approx A^{\times}t \tag{3.25}\]
in the late-time regime, with
\[A^{\times}=\frac{r\tilde{\rho}(1-r)}{1-\tilde{\rho}(1-r)}=\sqrt{\frac{r}{2-r}} -r. \tag{3.26}\]
In the regime of weak resetting, the mean value \(\langle\mathcal{N}_{t}^{\times}\rangle\) exhibits a smooth crossover between the square-root law (2.25) and the linear law (3.25). The complete determination of the distribution of \(\mathcal{N}_{t}^{\times}\) throughout this crossover regime will be given in section 5.
Summing expressions (3.8) and (3.25), we end up with
\[\langle\mathcal{N}_{t}^{\times\bullet}\rangle\approx At, \tag{3.27}\]
with
\[A=\sqrt{\frac{r}{2-r}}. \tag{3.28}\]
Both amplitudes \(A^{\times}\) and \(A\) vanish as
\[A^{\times}\approx A\approx\sqrt{\frac{r}{2}} \tag{3.29}\]
as \(r\to 0\). This square-root scaling will be corroborated by the analysis of the crossover regime (see section 5). The amplitude \(A^{\times}\) vanishes quadratically as
\[A^{\times}\approx\frac{(1-r)^{2}}{2} \tag{3.30}\]
as \(r\to 1\), testifying that the presence of a cross, i.e., of a spontaneous return of the walker to the origin, requires at least two successive time steps without resetting. This amplitude reaches its maximum \(A^{\times}_{\max}=0.134884\dots\) for \(r=0.160713\dots\) As for the amplitude \(A\) of \(\langle\mathcal{N}^{\times\bullet}_{t}\rangle\), it increases monotonically from \(0\) to \(1\) as \(r\) increases in the same range of values. Figure 3 shows plots of the amplitudes \(A^{\bullet}=r\), \(A^{\times}\), and of their sum \(A\), against the resetting probability \(r\).
The formula (3.28) can be compared with the expression derived in [6] for the position distribution of the walker in the nonequilibrium stationary state reached in the limit of infinitely large times, namely
\[p(x)=\sqrt{\frac{r}{2-r}}\lambda^{-|x|},\qquad\lambda=\frac{1+\sqrt{r(2-r)}}{ 1-r}. \tag{3.31}\]
This distribution falls off exponentially with the distance to the resetting point, i.e., the origin, where it reaches its maximum
\[p(0)=\sqrt{\frac{r}{2-r}}. \tag{3.32}\]
The identity \(A=p(0)\) is to be expected, as both sides represent the fraction of time spent by the walker at the origin. Formally, this identity can be derived by taking the mean values of (3.4) and using the fact that the occupation probability of the origin at time \(t\), \(\langle\delta(x_{t},0)\rangle\), tends to \(p(0)\) at late times.
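The late-time densities discussed above are readily observed in a direct simulation. The following Monte-Carlo sketch assumes the dynamics described above (at each time step the walker is reset to the origin with probability \(r\), and otherwise makes a symmetric \(\pm 1\) step) and compares the measured densities of crosses and dots with (3.26), (3.8) and (3.32).

```python
# Monte-Carlo estimate of the densities of crosses and dots at late times.
import numpy as np

rng = np.random.default_rng(1)
r, t_max, walkers = 0.25, 2000, 5000

x = np.zeros(walkers, dtype=np.int64)
crosses = np.zeros(walkers)
dots = np.zeros(walkers)
for _ in range(t_max):
    reset = rng.random(walkers) < r
    x = np.where(reset, 0, x + rng.choice([-1, 1], size=walkers))
    dots += reset
    crosses += (~reset) & (x == 0)

A_cross = np.sqrt(r / (2 - r)) - r
print(crosses.mean() / t_max, "vs A^x =", A_cross)           # eq. (3.26)
print(dots.mean() / t_max, "vs r =", r)                      # eq. (3.8)
print((crosses + dots).mean() / t_max,
      "vs p(0) =", np.sqrt(r / (2 - r)))                     # eq. (3.32)
```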
### The reset process seen as a 'dressed' renewal process
It is interesting to note that there exists another interpretation of the expression (3.21), encoding the full statistics of \(\mathcal{N}^{\times}_{t}\). This expression is of the form (2.21), up to the
replacement of \(\tilde{\rho}(w)\) by
\[\tilde{\rho}^{(\mathtt{r})}(w)=\frac{(1-\tilde{w})\tilde{\rho}(\tilde{w})}{1-w+ rw\tilde{\rho}(\tilde{w})}. \tag{3.33}\]
This implies that the interarrival times between spontaneous returns to the origin (crosses in figure 2) remain, in the presence of resetting, independent, identically distributed, random variables, whose common probability distribution,
\[\rho^{(\mathtt{r})}(\tau)=\mathbb{P}(\boldsymbol{T}^{(\mathtt{r})}_{0\to 0}= \tau), \tag{3.34}\]
has a generating function given by (3.33), and depends only on the resetting probability \(r\) (see (3.35)). In other words, these interarrival times form a renewal process, defined by the 'dressed' distribution (3.34), i.e., they are probabilistic replicas of the 'dressed' first-passage time \(\boldsymbol{T}^{(\mathtt{r})}_{0\to 0}\) (or, in the present context, time of first return to the origin), as depicted in figure 4. The superscript in these expressions is an abbreviation for _replication_ or _resetting_.
The first few values of \(\rho^{(\mathtt{r})}(\tau)\) read
\[\rho^{(\mathtt{r})}(1) = 0,\quad\rho^{(\mathtt{r})}(2)=\frac{1}{2}(1-r)^{2},\quad\rho^{( \mathtt{r})}(3)=\frac{1}{2}r(1-r)^{2},\] \[\rho^{(\mathtt{r})}(4) = \frac{1}{8}(1-r)^{4}+\frac{1}{2}r^{2}(1-r)^{2}+\frac{1}{2}r(1-r) ^{3}=\frac{1}{8}(1-r^{2})^{2}. \tag{3.35}\]
The first three expressions are easy to guess, whereas the fourth is the sum of the probabilities of the following events: {return to the origin in four steps without any resetting}, {two resettings first, then return to the origin in two steps}, and finally {one step away from the origin, a resetting, then return to the origin in two steps}. The right-hand sides in (3.35) give back (2.6), when \(r=0\).
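These expressions can also be recovered mechanically from (3.33). A short symbolic expansion, sketched below, reproduces the values (3.35) for arbitrary \(r\).

```python
# Expansion of the dressed generating function (3.33) to fourth order in w.
import sympy as sp

w, r = sp.symbols('w r')
rho = 1 - sp.sqrt(1 - w**2)
wt = (1 - r) * w
rho_r = (1 - wt) * rho.subs(w, wt) / (1 - w + r * w * rho.subs(w, wt))   # eq. (3.33)

ser = sp.series(rho_r, w, 0, 5).removeO()
for tau in range(1, 5):
    print(tau, sp.factor(ser.coeff(w, tau)))
# expected: 0, (1-r)^2/2, r(1-r)^2/2 and (1-r^2)^2/8, in agreement with (3.35)
```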
The existence of the aforementioned dressed renewal process is a remarkable phenomenon, which occurs whether the time intervals between resetting events, pertaining to the external renewal process, follow an exponential distribution in continuous time (resulting in a Poisson process), as described in [1], or a geometric distribution (see (2.32)) in discrete time (resulting in a Bernoulli process), as described above. This allows us to access several characteristic features of the reset Polya walk by considering quantities pertaining to the dressed renewal process, as we now elaborate.
Figure 4: The time intervals between spontaneous returns to the origin (crosses) of the reset random walk form a renewal process, described in terms of the ‘dressed’ probability distribution (3.34). These time intervals are probabilistic copies of the first one, \(\boldsymbol{T}^{(\mathtt{r})}_{0\to 0}\), the ‘dressed’ first-passage time. (Compare to figure 2.)
The 'dressed' survival probability is naturally defined, in line with (2.8), as
\[R^{(\mathtt{r})}(\tau)=\mathbb{P}(\boldsymbol{T}^{(\mathtt{r})}_{0\to 0}>\tau)= \sum_{j>\tau}\rho^{(\mathtt{r})}(j). \tag{3.36}\]
Starting from (3.33) and using (2.9), as well as the corresponding generating function for the dressed distribution,
\[\tilde{R}^{(\mathtt{r})}(w)=\frac{1-\tilde{\rho}^{(\mathtt{r})}(w)}{1-w}, \tag{3.37}\]
we can establish the following connection between the generating functions for the survival probabilities (2.8) and (3.36) in the absence or in the presence of resetting,
\[\tilde{R}^{(\mathtt{r})}(w)=\frac{\tilde{R}(\tilde{w})}{1-rw\tilde{R}(\tilde{ w})}. \tag{3.38}\]
The same relation holds for the probability of not crossing the origin up to integer time \(t\), for a discrete-time walker with continuous steps [4], or for the Polya walk [13].
The moments of \(\boldsymbol{T}^{(\mathtt{r})}_{0\to 0}\) can be derived from (3.33). We have in particular
\[\langle\boldsymbol{T}^{(\mathtt{r})}_{0\to 0}\rangle=\frac{1}{A^{\times}}, \tag{3.39}\]
meaning that \(\langle\boldsymbol{T}^{(\mathtt{r})}_{0\to 0}\rangle\) has a minimum at \(r=0.160713\dots\), and that (3.25) can be recast as
\[\langle\mathcal{N}^{\times}_{t}\rangle\approx\frac{t}{\langle\boldsymbol{T}^ {(\mathtt{r})}_{0\to 0}\rangle}, \tag{3.40}\]
which is consistent with intuition.
Another quantity of interest is the probability of occurrence of a cross (spontaneous return to the origin of the Polya walk) at time \(t\), in the absence or in the presence of resetting, i.e., respectively,
\[U(t)=\langle N_{t}\rangle-\langle N_{t-1}\rangle,\qquad U^{(\mathtt{r})}(t)= \langle\mathcal{N}^{\times}_{t}\rangle-\langle\mathcal{N}^{\times}_{t-1} \rangle\qquad(t\geq 1), \tag{3.41}\]
completed by \(U(0)=U^{(\mathtt{r})}(0)=1\). The corresponding generating functions are given by (see (2.22))
\[\tilde{U}(w)=\frac{1}{1-\tilde{\rho}(w)},\qquad\tilde{U}^{(\mathtt{r})}(w)= \frac{1}{1-\tilde{\rho}^{(\mathtt{r})}(w)}. \tag{3.42}\]
We have \(U(2n)=b_{n}\) (see (2.3)), whereas \(U(2n+1)=0\). For \(t\) large, \(U^{(\mathtt{r})}(t)\) converges very rapidly to \(A^{\times}\). The expression of \(U^{(\mathtt{r})}(t)\) in the crossover regime of weak resetting and late times will be given in (5.22).
The tail of the dressed distribution \(\rho^{(\mathtt{r})}(\tau)\) falls off exponentially as
\[\rho^{(\mathtt{r})}(\tau)\sim\mathrm{e}^{-\sigma\tau}. \tag{3.43}\]
The decay rate \(\sigma\) is such that \(w_{0}=\mathrm{e}^{\sigma}\) is the smallest zero of the denominator of (3.33), obeying
\[r^{2}(1-r)w_{0}^{3}+r^{2}w_{0}^{2}+(1-r)w_{0}-1=0. \tag{3.44}\]
The dependence of the decay rate \(\sigma\) on \(r\) is qualitatively similar to that of the amplitude \(A^{\times}\) (see figure 5). It vanishes as \(\sigma\approx r\) as \(r\to 0\), and as \(\sigma\approx(1-r)^{2}/2\) as \(r\to 1\) (compare to (3.29) and (3.30)), reaching its maximum \(\sigma_{\max}=0.126530\dots\) for \(r=0.260465\dots\).
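The numbers quoted above are easily reproduced. The sketch below extracts \(w_{0}\) as the relevant real root of the cubic (3.44) (assumed here to be the smallest real root larger than unity) and locates the maximum of \(\sigma\) by a simple scan over \(r\).

```python
# Decay rate sigma(r) from the cubic (3.44), and location of its maximum.
import numpy as np

def sigma(r):
    roots = np.roots([r**2 * (1 - r), r**2, 1 - r, -1.0])
    real = roots.real[np.abs(roots.imag) < 1e-9]
    return np.log(real[real > 1.0].min())    # smallest root above 1 (assumption)

rs = np.linspace(0.02, 0.98, 961)
sig = np.array([sigma(r) for r in rs])
i = sig.argmax()
print(rs[i], sig[i])    # expected close to r = 0.2605 and sigma_max = 0.12653
```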
As a summary, while the original renewal process made of the spontaneous returns to the origin of the Polya walk is not stationary, since the distribution (2.7) has a fat power-law tail, the dressed renewal process defined by the exponentially decaying distribution (3.43) becomes eventually stationary (see [1, 16]).
The formula (3.33) is presented in [11] within a different context. It appears as an application to geometric resetting of a general formula concerning the distribution of first-passage times in a generic discrete-step walk in the presence of an arbitrary distribution of reset events. This reference solely focuses on first-passage properties. In particular, it does not delve into the renewal structure of the spontaneous returns to the origin of the reset Polya walk defined by (1.1). The method used in [11] to derive the distribution of the first-passage time for an arbitrary distribution of reset events, which builds upon prior research [10, 17, 18], is revisited in [1].
## 4 Cumulants and large deviations in the late-time regime
In this section we continue the investigation of the joint statistics of the quantities \(\mathcal{N}_{t}^{\times}\) (number of crosses) and \(\mathcal{N}_{t}^{\bullet}\) (number of dots) for the Polya walk in the late-time regime. We shall demonstrate that these variables are extensive in a strong sense, first
Figure 5: Amplitude \(A^{\times}\) (see (3.25) and (3.39)) and decay rate \(\sigma\) (see (3.43)), plotted against the resetting probability \(r\).
by examining their joint cumulants and then by investigating the corresponding large deviation functions.
### Cumulants
The starting point of the analysis is again the key formula (3.20). The late-time regime is governed by the smallest zero \(w_{\star}(z,y)\) of the denominator of that formula, which entails an exponential law of the form
\[\langle z^{\mathcal{N}_{t}^{\times}}y^{\mathcal{N}_{t}^{\bullet}}\rangle\sim w _{\star}(z,y)^{-t} \tag{4.1}\]
for the joint probability generating function of \(\mathcal{N}_{t}^{\times}\) and \(\mathcal{N}_{t}^{\bullet}\) in the late-time regime. Introducing the notations
\[z=\mathrm{e}^{\lambda},\qquad y=\mathrm{e}^{\mu}, \tag{4.2}\]
and
\[w_{\star}(z,y)=\mathrm{e}^{-S(\lambda,\mu)} \tag{4.3}\]
brings the estimate (4.1) for the generating function of the joint cumulants of \(\mathcal{N}_{t}^{\times}\) and \(\mathcal{N}_{t}^{\bullet}\) to the more familiar form
\[\langle\mathrm{e}^{\lambda\mathcal{N}_{t}^{\times}+\mu\mathcal{N}_{t}^{ \bullet}}\rangle\sim\mathrm{e}^{S(\lambda,\mu)t}. \tag{4.4}\]
The exponential law (4.4) implies that all joint cumulants grow linearly with time, as
\[\langle(\mathcal{N}_{t}^{\times})^{k}(\mathcal{N}_{t}^{\bullet})^{\ell} \rangle_{c}\approx c_{k,\ell}\,t, \tag{4.5}\]
where the amplitudes \(c_{k,\ell}\) are the coefficients of the series expansion
\[S(\lambda,\mu)=\sum_{k+\ell\geq 1}c_{k,\ell}\,\frac{\lambda^{k}}{k!}\,\frac{ \mu^{\ell}}{\ell!} \tag{4.6}\]
of the entropy function \(S(\lambda,\mu)\) entering (4.4).
In particular, setting \(\lambda=\mu\) in (4.4) yields
\[\langle\mathrm{e}^{\lambda\mathcal{N}_{t}^{\times\bullet}}\rangle\sim\mathrm{ e}^{S(\lambda,\lambda)t}, \tag{4.7}\]
so that \(S(\lambda,\lambda)\) generates the amplitudes of the cumulants of \(\mathcal{N}_{t}^{\times\bullet}=\mathcal{N}_{t}^{\times}+\mathcal{N}_{t}^{\bullet}\) in the late-time regime. We have
\[\langle(\mathcal{N}_{t}^{\times\bullet})^{n}\rangle_{c}\approx C_{n}\,t, \tag{4.8}\]
with
\[S(\lambda,\lambda)=\sum_{n\geq 1}C_{n}\,\frac{\lambda^{n}}{n!} \tag{4.9}\]
and
\[C_{n}=\sum_{k=0}^{n}\binom{n}{k}c_{k,n-k}. \tag{4.10}\]
In order to derive explicit expressions for the function \(S(\lambda,\mu)\) defined in (4.3), and therefore for the cumulant amplitudes \(c_{k,\ell}\) and \(C_{n}\), we need an expression of the smallest zero of the denominator in (3.20). This is done in Appendix A, yielding \(w_{\star}(z,y)=w_{1}\), where \(w_{1}\) is known explicitly from (4.2), (A2), (A4), (A5) and (A6), whence, finally,
\[S(\lambda,\mu)=-\ln w_{1}. \tag{4.11}\]
By expanding the expression above as a power series in \(\lambda\) and \(\mu\), we obtain explicit expressions for the cumulants \(c_{k,\ell}\) and \(C_{n}\), which can be further reduced to expressions linear in
\[A=\sqrt{\frac{r}{2-r}} \tag{4.12}\]
(see (3.28)), with coefficients rational in \(r\). The first few formulas given below testify that their complexity increases very fast with the order of the cumulants. To first order in \(\lambda\) and \(\mu\), we have
\[c_{1,0} = A-r,\] \[c_{0,1} = r,\] \[C_{1} = A, \tag{4.13}\]
in agreement with (3.8), (3.26) and (3.28). To second order, we have
\[c_{2,0} = -\frac{4-3r}{2-r}\,A+\frac{2+4r-9r^{2}+5r^{3}-r^{4}}{(2-r)^{2}},\] \[c_{1,1} = \frac{1-r}{2-r}\,A-r(1-r),\] \[c_{0,2} = r(1-r),\] \[C_{2} = -A+\frac{2-r^{2}}{(2-r)^{2}}. \tag{4.14}\]
To third order, we have
\[c_{3,0} = \frac{3+38r-76r^{2}+46r^{3}-8r^{4}}{r(2-r)^{3}}\,A\] \[- \frac{12+14r-66r^{2}+73r^{3}-43r^{4}+15r^{5}-2r^{6}}{(2-r)^{3}},\] \[c_{2,1} = -\frac{(1-r)(4-5r)}{(2-r)^{2}}\,A\] \[+ \frac{r(1-r)(12-32r+30r^{2}-13r^{3}+2r^{4})}{(2-r)^{3}},\] \[c_{1,2} = \frac{(1-r)(1-2r)}{(2-r)^{2}}\,A-r(1-r)(1-2r),\] \[c_{0,3} = r(1-r)(1-2r),\]
\[C_{3}\,=\,\frac{3+20r-31r^{2}+10r^{3}+r^{4}}{r(2-r)^{3}}\,A-\frac{3(2-r^{2})}{(2-r )^{2}}. \tag{4.15}\]
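The lowest of these amplitudes can be cross-checked without any series manipulation, by evaluating \(S(\lambda,\mu)=-\ln w_{1}\) numerically from the cubic of Appendix A and taking one-sided finite differences (positive \(\lambda\) only, where the relevant zero is the smallest positive one). The sketch below verifies \(c_{1,0}\) and \(c_{2,0}\); the amplitude \(c_{0,1}=r\) follows directly from the exact formula (4.17) and needs no separate check.

```python
# Finite-difference check of c_{1,0} and c_{2,0}, cf. (4.13)-(4.14).
import numpy as np

def S(lam, mu, r):
    z, y = np.exp(lam), np.exp(mu)
    coeffs = [(1 - r) * ((1 - r) * z + r * y) ** 2,
              r**2 * y**2 - (1 - r) ** 2 * z**2,
              (1 - r) * (1 - 2 * z) - 2 * r * y * z,
              2 * z - 1]                               # coefficients (A2)
    roots = np.roots(coeffs)
    real = roots.real[np.abs(roots.imag) < 1e-9]
    return -np.log(real[real > 0].min())               # smallest positive zero

r, h = 0.3, 1e-3
A = np.sqrt(r / (2 - r))
print(S(h, 0, r) / h, "vs", A - r)                                    # c_{1,0}
print((S(2*h, 0, r) - 2*S(h, 0, r)) / h**2, "vs",
      -(4 - 3*r) / (2 - r) * A
      + (2 + 4*r - 9*r**2 + 5*r**3 - r**4) / (2 - r)**2)              # c_{2,0}
```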
More specific results can be derived in a few special and limiting situations. We have observed that the statistics of \(\mathcal{N}_{t}^{\bullet}\) is simple, as its distribution is the binomial (3.7). This leads to two consequences. First, all its cumulants have an exact linear behaviour in \(t\) at all times:
\[\langle(\mathcal{N}_{t}^{\bullet})^{\ell}\rangle_{c}=c_{0,\ell}\,t. \tag{4.16}\]
Second, the cumulant amplitudes \(c_{0,\ell}\) can be derived from (3.6), which amounts to
\[S(0,\mu)=\sum_{\ell\geq 1}c_{0,\ell}\,\frac{\mu^{\ell}}{\ell!}=\ln(1-r+r{\rm e }^{\mu}). \tag{4.17}\]
The first few amplitudes read
\[c_{0,1} = r,\qquad c_{0,2}=r(1-r),\] \[c_{0,3} = r(1-r)(1-2r),\qquad c_{0,4}=r(1-r)(1-6r+6r^{2}). \tag{4.18}\]
The first three expressions agree with (4.13)-(4.15). The amplitudes \(c_{0,\ell}\) are polynomials in \(r\) of increasing degrees, obeying the linear differential recursion [19]
\[c_{0,\ell+1}=r(1-r)\frac{{\rm d}c_{0,\ell}}{{\rm d}r}\qquad(\ell\geq 1). \tag{4.19}\]
They read explicitly
\[c_{0,\ell}=\sum_{k=1}^{\ell}(-1)^{k-1}(k-1)!\left\{\!\!\begin{array}{c}\ell \\ k\end{array}\!\!\right\}r^{k}, \tag{4.20}\]
where \(\left\{\!\!\begin{array}{c}\ell\\ k\end{array}\!\!\right\}\) are the Stirling numbers of the second kind.
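Relation (4.20) is easily verified against the generating function (4.17). The following sketch differentiates (4.17) symbolically and compares with the Stirling-number expression, using a hand-rolled recursion for the Stirling numbers of the second kind.

```python
# Check of (4.20) against derivatives of the generating function (4.17).
import sympy as sp
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

r, mu = sp.symbols('r mu')
S0 = sp.log(1 - r + r * sp.exp(mu))                    # eq. (4.17)
for ell in range(1, 6):
    lhs = sp.diff(S0, mu, ell).subs(mu, 0)
    rhs = sum((-1)**(k - 1) * sp.factorial(k - 1) * stirling2(ell, k) * r**k
              for k in range(1, ell + 1))
    print(ell, sp.simplify(lhs - rhs))                 # expected 0 for every ell
```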
In the regime of strong resetting (\(r\to 1\)), we have
\[S(\lambda,\mu)=\mu+(1-r)({\rm e}^{-\mu}-1)+(1-r)^{2}({\rm e}^{-\mu}-{\rm e}^{- 2\mu}+{\frac{1}{2}}({\rm e}^{\lambda-2\mu}-1))+\cdots \tag{4.21}\]
For \(r=1\), only \(c_{0,1}=1\) is non-zero, in agreement with (3.10). To first order in \(1-r\), only the \(c_{0,\ell}\) are non-zero, as there are no crosses at this order. Their behaviour \(c_{0,\ell}\approx(-1)^{\ell}(1-r)\) for \(\ell\geq 2\) agrees with (4.20). All cumulant amplitudes \(c_{k,\ell}\) become non-trivial to second order in \(1-r\).
The regime of weak resetting (\(r\to 0\)) is more subtle. This richness is related to the crossover phenomenon that will be examined in section 5. An inspection of the formulas (4.13), (4.14) and (4.15) yields the scaling \(c_{k,\ell}\sim A^{2-k}\), with \(A\approx\sqrt{r/2}\) (see (3.28)), at least for odd \(k\). This observation can be corroborated by the following scaling analysis. Let us assume that the cumulant amplitudes behave as
\[c_{k,\ell}\approx b_{k,\ell}\,A^{2-k} \tag{4.22}\]
as \(A\to 0\), i.e., \(r\to 0\), where the \(b_{k,\ell}\) are constants to be determined. This ansatz translates into the scaling form
\[S(\lambda,\mu)\approx A^{2}\,F(h,\mu), \tag{4.23}\]
with
\[h=\frac{\lambda}{A},\qquad F(h,\mu)=\sum_{k+\ell\geq 1}b_{k,\ell}\,\frac{h^{k}}{k!} \,\frac{\mu^{\ell}}{\ell!}. \tag{4.24}\]
Inserting the scaling form (4.23), with the notations (4.2), into (A1), and expanding to the first non-trivial order in \(A\), we obtain that the scaling function \(F(h,\mu)\) obeys the quadratic equation
\[2F^{2}-(8(\mathrm{e}^{\mu}-1)+h^{2})F+8(\mathrm{e}^{\mu}-1)^{2}-2h^{2}=0, \tag{4.25}\]
hence
\[F(h,\mu)=2(\mathrm{e}^{\mu}-1)+\frac{h^{2}}{4}+h\sqrt{\mathrm{e}^{\mu}+\frac{ h^{2}}{16}}. \tag{4.26}\]
Each term of the expression (4.26) is responsible for the scaling (4.22) of some of the cumulant amplitudes as \(A\to 0\). The first term yields
\[c_{0,\ell}\approx 2A^{2}\approx r \tag{4.27}\]
for all \(\ell\geq 1\), in agreement with (4.20). The second term yields
\[c_{2,0}\to\frac{1}{2} \tag{4.28}\]
as \(r\to 0\), in agreement with (4.14). The third term of (4.26) is an odd function of \(h\), and therefore concerns odd values of \(k\). Introducing the series expansion
\[\sqrt{1+\frac{x}{16}}=\sum_{p\geq 0}\frac{a_{p}}{(2p+1)!}\,x^{p},\qquad a_{p}= (-1)^{p-1}\frac{(2p+1)!}{16^{p}(2p-1)}\,b_{p} \tag{4.29}\]
(see (2.3)), i.e.,
\[a_{0}=1,\quad a_{1}=\frac{3}{16},\quad a_{2}=-\frac{15}{256},\quad a_{3}= \frac{315}{4096}, \tag{4.30}\]
and so on, we obtain
\[c_{2p+1,\ell}\approx a_{p}\,\left(-\tfrac{1}{2}(2p-1)\right)^{\ell}\,A^{1-2p}. \tag{4.31}\]
All cumulant amplitudes \(c_{k,\ell}\) with odd \(k\) therefore obey the scaling ansatz (4.22), while the cumulant amplitudes with even \(k\) are subleading as \(r\to 0\), with the exception of those previously mentioned in (4.27) and (4.28). Finally, the cumulant amplitudes \(C_{n}\) for odd \(n\) are governed by the term \(k=n\) in (4.10). We have therefore
\[C_{2p+1}\approx a_{p}\,A^{1-2p}, \tag{4.32}\]
whereas the \(C_{n}\) with even \(n\) are subleading as \(r\to 0\), except \(C_{2}\approx c_{2,0}\approx 1/2\).
### Large deviations
The result (4.4) has an alternative interpretation in terms of large deviations [20, 21, 22, 23]. It implies that the joint probability distribution of \(\mathcal{N}_{t}^{\times}\) and \(\mathcal{N}_{t}^{\bullet}\) falls off exponentially as
\[\mathbb{P}(\mathcal{N}_{t}^{\times}\approx\xi t,\ \mathcal{N}_{t}^{\bullet} \approx\eta t)\sim\mathrm{e}^{-I(\xi,\eta)t} \tag{4.33}\]
in the regime of late times, for fixed densities \(\xi\) of crosses and \(\eta\) of dots. The estimate
\[\langle\mathrm{e}^{\lambda\mathcal{N}_{t}^{\times}+\mu\mathcal{N}_{t}^{ \bullet}}\rangle\sim\int\mathrm{d}\xi\int\mathrm{d}\eta\ \mathrm{e}^{[\lambda\xi+\mu\eta-I(\xi,\eta)]t}\sim \mathrm{e}^{S(\lambda,\mu)t} \tag{4.34}\]
shows that the bivariate functions \(S(\lambda,\mu)\) and \(I(\xi,\eta)\) are related by a Legendre transformation of the form
\[S(\lambda,\mu)+I(\xi,\eta)=\lambda\xi+\mu\eta, \tag{4.35}\]
with
\[\xi=\frac{\partial S}{\partial\lambda},\quad\eta=\frac{\partial S}{\partial \mu},\quad\lambda=\frac{\partial I}{\partial\xi},\quad\mu=\frac{\partial I}{ \partial\eta}. \tag{4.36}\]
In the late-time regime, the joint distribution of \(\mathcal{N}_{t}^{\times}\) and \(\mathcal{N}_{t}^{\bullet}\) becomes peaked around the point
\[\xi_{0}=A^{\times}=A-r=c_{1,0},\qquad\eta_{0}=A^{\bullet}=r=c_{0,1}, \tag{4.37}\]
in agreement with (3.8), (3.25) and (4.13). The form of the bivariate large deviation function \(I(\xi,\eta)\) around \((\xi_{0},\eta_{0})\) is governed by the regime where \(\lambda\) and \(\mu\) are small. Using the series expansion (4.6), we are left with the quadratic form (see (4.14))
\[I(\xi,\eta)\approx\frac{c_{0,2}(\xi-\xi_{0})^{2}-2c_{1,1}(\xi-\xi_{0})(\eta- \eta_{0})+c_{2,0}(\eta-\eta_{0})^{2}}{2(c_{2,0}c_{0,2}-c_{1,1}^{2})}, \tag{4.38}\]
describing the Gaussian bulk of the joint distribution of \(\mathcal{N}_{t}^{\times}\) and \(\mathcal{N}_{t}^{\bullet}\).
The subsequent analysis shows that the domain of permitted values of the densities \(\xi\) and \(\eta\) is the triangle ABC shown in figure 6. The large deviation function \(I(\xi,\eta)\) is continuous all along the boundary of the triangle. Its behaviour near the vertices and the edges of the triangle is governed by the regime where \(\lambda\) and/or \(\mu\) are large, either positive or negative.
The vertex \(\mathsf{A}=(0,0)\) is reached for \(\lambda\) and \(\mu\to-\infty\). We obtain \(S(-\infty,-\infty)=\ln(1-r)\), and so
\[I_{\mathsf{A}}=I(0,0)=-\ln(1-r). \tag{4.39}\]
This can be interpreted as follows. Point \(\mathsf{A}\) corresponds to the situation where the system is empty. The absence of resettings (\(\eta=0\)) brings a weight \((1-r)^{t}\). Given this condition, the probability of the system containing no crosses (\(\xi=0\)) is \(R(t)\)
(see (2.27)), which falls off as a power of time (see (2.12)), and thus does not contribute to the large deviation function. The vertex \({\sf B}=(1/2,0)\) is reached for \(\lambda\to-\infty\) and \(\mu\to+\infty\). We obtain
\[I_{\sf B}=I(1/2,0)=-\ln(1-r)+\frac{1}{2}\,\ln 2. \tag{4.40}\]
The absence of resettings (\(\eta=0\)) indeed again brings a weight \((1-r)^{t}\). Given this condition, when time \(t\) is even, the number \({\cal N}_{t}^{\times}\) of crosses takes its maximal value \(t/2\), and \(\xi=1/2\), if the walk consists of a sequence of \(t/2\) back-and-forth excursions on either side of the origin. This constraint brings a weight \(2^{-t/2}\). The vertex \({\sf C}=(0,1)\) is reached for \(\lambda\to+\infty\) and \(\mu\to-\infty\). We obtain
\[I_{\sf C}=I(0,1)=-\ln r. \tag{4.41}\]
The condition \(\eta=1\) indeed amounts to having a resetting event at each time step. This brings a weight \(r^{t}\), and there is no space left for crosses.
Along the edge \({\sf AB}\), \(I(\xi,0)\) increases monotonically from \(I_{\sf A}\) to \(I_{\sf B}\). Along \({\sf AC}\), \(I(0,\eta)\) is not monotonic and exhibits a minimum, to be identified with \(I^{\times}(0)\) (see below). Finally, a generic point along \({\sf BC}\) is reached for \(\lambda\) and \(\mu\to+\infty\) in such a way that the difference \(\lambda-2\mu\) is kept fixed. The large deviation function thus obtained is not monotonic and exhibits a minimum.
Let us now turn to the univariate large deviation functions associated with \({\cal N}_{t}^{\bullet}\), \({\cal N}_{t}^{\times}\) and their sum \({\cal N}_{t}^{\times\bullet}\). In the case of \({\cal N}_{t}^{\bullet}\), (4.33) yields
\[{\mathbb{P}}({\cal N}_{t}^{\bullet}\approx\eta t)\sim{\rm e}^{-I^{\bullet}( \eta)t}\qquad(0<\eta<1), \tag{4.42}\]
where
\[I^{\bullet}(\eta)=\min_{\xi}I(\xi,\eta)=\mu\eta-S(0,\mu) \tag{4.43}\]
Figure 6: Triangular domain of permitted values of densities \(\xi\) of returns to the origin of the walk (crosses) and \(\eta\) of resetting events (dots).
is the Legendre transform of the function \(S(0,\mu)\) given in (4.17). We thus obtain the simple expression
\[I^{\bullet}(\eta)=\eta\ln\frac{\eta}{r}+(1-\eta)\ln\frac{1-\eta}{1-r}, \tag{4.44}\]
with limit values
\[I^{\bullet}(0)=I_{\mathsf{A}},\qquad I^{\bullet}(1)=I_{\mathsf{C}}, \tag{4.45}\]
and a quadratic behaviour
\[I^{\bullet}(\eta)\approx\frac{(\eta-r)^{2}}{r(1-r)} \tag{4.46}\]
around \(\eta_{0}=r\).
In the case of \(\mathcal{N}_{t}^{\times}\), (4.33) yields
\[\mathbb{P}(\mathcal{N}_{t}^{\times}\approx\xi t)\sim\mathrm{e}^{-I^{\times}( \xi)t}\qquad(0<\xi<1/2), \tag{4.47}\]
where
\[I^{\times}(\xi)=\min_{\eta}I(\xi,\eta)=\lambda\xi-S(\lambda,0) \tag{4.48}\]
is the Legendre transform of \(S(\lambda,0)\). This function has the limit value
\[I^{\times}(1/2)=I_{\mathsf{B}} \tag{4.49}\]
and the quadratic behaviour
\[I^{\times}(\xi)\approx\frac{(\xi-A^{\times})^{2}}{2c_{2,0}} \tag{4.50}\]
around \(\xi_{0}=A^{\times}=c_{1,0}\). The limit value \(I^{\times}(0)\) is given by the decay rate of the distribution of \(\mathbf{T}_{0\to 0}^{(\mathsf{r})}\), introduced in (3.43),
\[I^{\times}(0)=\sigma, \tag{4.51}\]
since \(\mathbb{P}(\mathcal{N}_{t}^{\times}=0)=R^{(\mathsf{r})}(t)\sim\mathrm{e}^{- \sigma t}\), according to (3.36).
Finally, in the case of \(\mathcal{N}_{t}^{\times\bullet}=\mathcal{N}_{t}^{\times}+\mathcal{N}_{t}^{\bullet}\), (4.33) yields
\[\mathbb{P}(\mathcal{N}_{t}^{\times\bullet}\approx\varphi t)\sim\mathrm{e}^{- I(\varphi)t}\qquad(0<\varphi<1), \tag{4.52}\]
where
\[I(\varphi)=\min_{\xi}I(\xi,\varphi-\xi)=\lambda\varphi-S(\lambda,\lambda) \tag{4.53}\]
is the Legendre transform of \(S(\lambda,\lambda)\). The limit values
\[I(0)=I_{\mathsf{A}},\qquad I(1)=I_{\mathsf{C}}, \tag{4.54}\]
coincide with (4.45). The function \(I(\varphi)\) has the expected quadratic behaviour
\[I(\varphi)\approx\frac{(\varphi-A)^{2}}{2C_{2}} \tag{4.55}\]
around \(\varphi_{0}=A=C_{1}\).
Figure 7 shows plots of the univariate large deviation functions \(I^{\bullet}(\eta)\), \(I^{\times}(\xi)\), and \(I(\varphi)\), respectively corresponding to \(\mathcal{N}_{t}^{\bullet}\), \(\mathcal{N}_{t}^{\times}\) and their sum \(\mathcal{N}_{t}^{\times\bullet}\), for a resetting probability \(r=0.3\). Numerical values of these functions are obtained by means of (4.11). All derivatives required by the Legendre transform (4.35), (4.36) are worked out by analytical means. For instance,
\[\xi=\frac{\partial S}{\partial\lambda}=-\frac{\partial\ln w_{1}}{\partial\ln z }=-\frac{z}{w_{1}}\frac{\partial w_{1}}{\partial z}=\left.\frac{z\,\partial P /\partial z}{w\,\partial P/\partial w}\right|_{w=w_{1}} \tag{4.56}\]
(see (A1)) is a rational expression in \(r\), \(z\), \(y\) and \(w_{1}\).
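The same ingredients give direct numerical access to the univariate function \(I^{\times}(\xi)\). In the sketch below, \(S(\lambda,0)\) is obtained from the smallest positive zero of the cubic (A1)-(A2) (assumed, as above, to be the relevant singularity), the slope \(\xi\) is taken by a central finite difference instead of (4.56), and the Legendre transform (4.35) is applied point by point.

```python
# Pointwise numerical construction of I^x(xi) for positive lambda.
import numpy as np

def S_lam(lam, r):
    z = np.exp(lam)
    coeffs = [(1 - r) * ((1 - r) * z + r) ** 2,
              r**2 - (1 - r) ** 2 * z**2,
              (1 - r) * (1 - 2 * z) - 2 * r * z,
              2 * z - 1]                          # (A2) with y = 1
    roots = np.roots(coeffs)
    real = roots.real[np.abs(roots.imag) < 1e-9]
    return -np.log(real[real > 0].min())

r, eps = 0.3, 1e-5
print("A^x =", np.sqrt(r / (2 - r)) - r)
for lam in np.linspace(0.05, 1.2, 5):
    xi = (S_lam(lam + eps, r) - S_lam(lam - eps, r)) / (2 * eps)
    print(f"xi = {xi:.4f},  I(xi) = {lam * xi - S_lam(lam, r):.5f}")
# as lam -> 0 the pair (xi, I) approaches (A^x, 0); larger lam probes xi > A^x
```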
## 5 Crossover regime at weak resetting
The statistics of the number \(\mathcal{N}_{t}^{\times}\) of crosses exhibits a non-trivial behaviour in the crossover regime of weak resetting (\(r\to 0\)) and late times (\(t\to\infty\)). In the absence of resettings, the mean value \(\langle\mathcal{N}_{t}^{\times}\rangle\) scales as \(\sqrt{t}\) (see (2.25)), while in the case of weak resetting, it scales as \(\sqrt{r}\,t\) (see (3.25) and (3.26)). These two estimates become comparable when the product \(rt\) is of order unity. Interestingly, the latter is precisely the value of the mean number of resettings \(\langle\mathcal{N}_{t}^{\bullet}\rangle\) (see (3.8)), which implies that a finite number of resetting events are sufficient to induce a macroscopic crossover in the statistics of \(\mathcal{N}_{t}^{\times}\). This phenomenon has also been described in other observables, including the maximum and number of records of random walks under weak resetting [5, 6].
The full distribution of \(\mathcal{N}_{t}^{\times}\) throughout this crossover regime can be derived from (3.20). Setting \(w=\mathrm{e}^{-s}\), \(y=1\) and \(z=\mathrm{e}^{-p}\), and working to leading order in the
continuum regime where \(r\), \(s\) and \(p\) are small, we obtain
\[\int_{0}^{\infty}\mathrm{d}t\,\mathrm{e}^{-st}\langle\mathrm{e}^{-p\mathcal{N}_{ t}^{\times}}\rangle\approx\frac{1}{s+p\sqrt{(r+s)/2}}. \tag{5.1}\]
Inverting the Laplace transform in \(p\) yields
\[\int_{0}^{\infty}\mathrm{d}t\,\mathrm{e}^{-st}\,\mathbb{P}(\mathcal{N}_{t}^{ \times}=\mathcal{N})\approx\sqrt{\frac{2}{r+s}}\exp\Big{(}-s\sqrt{\frac{2}{r+ s}}\,\mathcal{N}\Big{)}. \tag{5.2}\]
The two expressions above are very similar to (2.28) and (2.29). We thus infer from (5.2) that the number \(\mathcal{N}_{t}^{\times}\) of crosses scales as
\[\mathcal{N}_{t}^{\times}\approx\sqrt{t}\,\boldsymbol{\zeta}, \tag{5.3}\]
where the rescaled random variable \(\boldsymbol{\zeta}\) has a limiting distribution with density \(f(\zeta,u)\), depending only on the parameter
\[u=rt=\langle\mathcal{N}_{t}^{\bullet}\rangle. \tag{5.4}\]
Introducing the ratio \(\lambda=s/r\), (5.2) becomes
\[f(\zeta,u)=\sqrt{2u}\int\frac{\mathrm{d}\lambda}{2\pi\mathrm{i}}\frac{ \mathrm{e}^{\lambda u}}{\sqrt{\lambda+1}}\exp\left(-\frac{\lambda}{\sqrt{ \lambda+1}}\,\sqrt{2u}\,\zeta\right). \tag{5.5}\]
This expression cannot be made more explicit, except at \(\zeta=0\), resulting in the following value:
\[f(0,u)=\sqrt{2u}\int\frac{\mathrm{d}\lambda}{2\pi\mathrm{i}}\frac{\mathrm{e}^ {\lambda u}}{\sqrt{\lambda+1}}=\sqrt{\frac{2}{\pi}}\,\mathrm{e}^{-u}. \tag{5.6}\]
Hereafter we examine the behaviour of this distribution in the regimes where the parameter \(u\) is either small or large. We then shift our focus to the analysis of the moments and the cumulants of \(\boldsymbol{\zeta}\).
### Behaviour for \(\boldsymbol{u\ll 1}\)
The behaviour of \(f(\zeta,u)\) for small \(u\) can be derived by setting \(\lambda=p^{2}/u\) in (5.5), and expanding the integrand as a power series in \(u\) at fixed \(p\). We thus obtain
\[f(\zeta,u) = \int\frac{\mathrm{d}p}{2\pi\mathrm{i}}\,\mathrm{e}^{p^{2}-\sqrt{ 2}p\zeta}\left[2\sqrt{2}+\left(\frac{2\zeta}{p}-\frac{\sqrt{2}}{p^{2}}\right) u+\cdots\right] \tag{5.7}\] \[= \sqrt{\frac{2}{\pi}}\,\mathrm{e}^{-\zeta^{2}/2}+\left(2\zeta\, \mathrm{erfc}\,\frac{\zeta}{\sqrt{2}}-\sqrt{\frac{2}{\pi}}\,\mathrm{e}^{- \zeta^{2}/2}\right)u+\cdots,\]
where \(\mathrm{erfc}\) is the complementary error function. The first term matches the half-Gaussian asymptotic distribution (2.30) of \(\mathcal{N}_{t}^{\times}\) in the absence of resetting.
### Behaviour for \(\boldsymbol{u\gg 1}\)
Let us define the random variable \(X\) by
\[\boldsymbol{\zeta}=\sqrt{\frac{u}{2}}+X. \tag{5.8}\]
The behaviour of \(f(\zeta,u)\) for large \(u\) can then be derived by setting \(\lambda=p/\sqrt{2u}\) in (5.5) and expanding the integrand as a power series in \(1/\sqrt{u}\) at fixed \(p\). We thus obtain, with \(\zeta=\sqrt{u/2}+x\),
\[f(\zeta,u) = \int\frac{\mathrm{d}p}{2\pi\mathrm{i}}\,\mathrm{e}^{p^{2}/4-px} \left(1+\frac{8p^{2}x-3p^{3}-8p}{16\sqrt{2u}}+\cdots\right) \tag{5.9}\] \[= \frac{\mathrm{e}^{-x^{2}}}{\sqrt{\pi}}\left(1+\frac{x(2x^{2}+1)}{ 4\sqrt{2u}}+\cdots\right).\]
Whenever the parameter \(u\) is large, the random variable \(\boldsymbol{\zeta}\) therefore consists of a large deterministic part, growing as \(\sqrt{u}\), along with a fluctuating component \(X\) of order unity. To leading order, the distribution of \(X\) is a Gaussian with variance \(1/2\).
Figure 8 shows the distribution \(f(\zeta,u)\) for several values of the parameter \(u=rt\) (see legend). The plotted distributions exhibit a smooth but rather rapid crossover between a half-Gaussian form at \(u=0\) (see (5.7)) and a shifted Gaussian at large \(u\) (see (5.9)). In particular, the maxima of the curves converge very fast to their limit \(1/\sqrt{\pi}=0.564189\ldots\)
Figure 8: Distribution \(f(\zeta,u)\) of the rescaled number \(\zeta\) of returns to the origin in the weak-resetting crossover regime (see (5.3)), for several values of the parameter \(u=rt\) (see legend).
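The scaling form (5.3)-(5.4) can be visualised directly from the exact finite-\(t\) law of \(\mathcal{N}_{t}^{\times}\). The sketch below propagates the joint distribution of the walker's position and of the number of crosses for two parameter pairs sharing the same value of \(u=rt\); after rescaling by \(\sqrt{t}\), the first two moments essentially coincide, as they should.

```python
# Exact law of N_t^x for the reset walk, and collapse of its rescaled moments.
import numpy as np

def law_of_crosses(r, t_max):
    origin = t_max
    P = np.zeros((2 * t_max + 1, t_max // 2 + 2))   # P[position, number of crosses]
    P[origin, 0] = 1.0
    for _ in range(t_max):
        new = np.zeros_like(P)
        new[1:, :] += 0.5 * (1 - r) * P[:-1, :]     # spontaneous step +1
        new[:-1, :] += 0.5 * (1 - r) * P[1:, :]     # spontaneous step -1
        row = new[origin].copy()                    # spontaneous arrivals at 0
        new[origin, 1:], new[origin, 0] = row[:-1], 0.0   # count them as crosses
        new[origin, :] += r * P.sum(axis=0)         # resetting: a dot, not a cross
        P = new
    return P.sum(axis=0)

u = 4.0
for t in (200, 400):
    p = law_of_crosses(u / t, t)
    n = np.arange(p.size)
    m1, m2 = (n * p).sum(), (n**2 * p).sum()
    print(t, m1 / np.sqrt(t), (m2 - m1**2) / t)     # both rows depend on u only
```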
### Moments of \(\boldsymbol{\zeta}\)
Equation (5.5) yields the following formula for the moments of \(\zeta\):
\[\mu_{k}(u)=\langle\boldsymbol{\zeta}^{k}\rangle=\int_{0}^{\infty}\mathrm{d} \zeta\,\zeta^{k}\,f(\zeta,u)=\frac{k!}{(2u)^{k/2}}\int\frac{\mathrm{d}\lambda}{ 2\pi\mathrm{i}}\,\mathrm{e}^{\,\lambda u}\,\frac{(\lambda+1)^{k/2}}{\lambda^{ k+1}}. \tag{5.10}\]
These moments only depend on the parameter \(u=rt\). They are such that
\[\langle(\mathcal{N}_{t}^{\times})^{k}\rangle\approx\mu_{k}(u)\,t^{k/2} \tag{5.11}\]
throughout the crossover regime. The moments (2.31) in the absence of resetting yield
\[\mu_{2n}(0)=\frac{(2n)!}{2^{n}n!},\qquad\mu_{2n+1}(0)=\sqrt{\frac{2}{\pi}}\,2^ {n}n!. \tag{5.12}\]
The results above suggest that the moments \(\mu_{k}(u)\) have different analytical expressions according to the parity of the integer exponent \(k\). This is indeed the case.
For even \(k=2n\), the integrand in the rightmost side of (5.10) is a rational function of \(\lambda\). Expanding out \((\lambda+1)^{n}\), we readily obtain
\[\mu_{2n}(u)=\frac{(2n)!n!}{2^{n}}\sum_{m=0}^{n}\frac{u^{m}}{m!(n-m)!(n+m)!}, \tag{5.13}\]
namely
\[\mu_{2}(u)=\frac{u+2}{2},\quad\mu_{4}(u)=\frac{u^{2}+8u+12}{4},\quad\mu_{6}(u )=\frac{u^{3}+18u^{2}+90u+120}{8}, \tag{5.14}\]
and so on. The moment \(\mu_{2n}(u)\) is a polynomial of degree \(n\) in \(u\). Its constant term (\(m=0\)) matches the first expression in (5.12), whereas its leading term (\(m=n\)) yields
\[\mu_{2n}(u)\approx\left(\frac{u}{2}\right)^{n}, \tag{5.15}\]
in agreement with (5.8).
For odd \(k=2n+1\), the integrand in the rightmost side of (5.10) is now the ratio of a rational function by \(\sqrt{\lambda+1}\). Proceeding as before, we obtain
\[\mu_{2n+1}(u)=\frac{(2n+1)!(n+1)!}{(2u)^{n+1/2}}\sum_{m=0}^{n+1}\frac{g_{n+m} (u)}{m!(n+1-m)!(n+m)!}, \tag{5.16}\]
with
\[g_{n}(u)=n!\int\frac{\mathrm{d}\lambda}{2\pi\mathrm{i}}\,\frac{\mathrm{e}^{ \lambda u}}{\lambda^{n+1}\sqrt{\lambda+1}}=\int_{0}^{u}\mathrm{d}v\,(u-v)^{n} \,\frac{\mathrm{e}^{-v}}{\sqrt{\pi v}}. \tag{5.17}\]
It can be shown using two integrations by parts that these functions obey the three-term linear recursion
\[g_{n}(u)=(u-n+\tfrac{1}{2})g_{n-1}(u)+(n-1)ug_{n-2}(u), \tag{5.18}\]
with initial values
\[g_{0}(u)=\operatorname{erf}\sqrt{u},\qquad g_{1}(u)=(u-\tfrac{1}{2}) \operatorname{erf}\sqrt{u}+\sqrt{\frac{u}{\pi}}\operatorname{e}^{-u}. \tag{5.19}\]
We thus obtain
\[\mu_{1}(u) = (2u+1)\frac{\operatorname{erf}\sqrt{u}}{2\sqrt{2u}}+\frac{ \operatorname{e}^{-u}}{\sqrt{2\pi}},\] \[\mu_{3}(u) = (8u^{3}+36u^{2}+18u-3)\frac{\operatorname{erf}\sqrt{u}}{16u\sqrt {2u}}+(4u^{2}+16u+3)\frac{\operatorname{e}^{-u}}{8u\sqrt{2\pi}}, \tag{5.20}\]
and so on. The general structure of the odd moments emerges from the above examples. Their values at \(u=0\) (see (5.12)) cannot be easily read off, as more and more compensations are involved in taking the \(u\to 0\) limit. To leading order at large \(u\), we have
\[\mu_{2n+1}(u)\approx\left(\frac{u}{2}\right)^{n+1/2}, \tag{5.21}\]
again in agreement with (5.8).
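The recursion (5.18)-(5.19) and the resulting expressions (5.20) are straightforward to check numerically, for instance against the integral representation (5.17); the first odd moment also follows from (5.16) as \(\mu_{1}(u)=(g_{0}(u)+g_{1}(u))/\sqrt{2u}\). A minimal sketch:

```python
# Check of the recursion (5.18)-(5.19) against the integral (5.17), and of the
# first moment mu_1(u), cf. (5.16) and (5.20).
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def g_rec(n_max, u):
    g = [erf(np.sqrt(u)),
         (u - 0.5) * erf(np.sqrt(u)) + np.sqrt(u / np.pi) * np.exp(-u)]
    for n in range(2, n_max + 1):
        g.append((u - n + 0.5) * g[n - 1] + (n - 1) * u * g[n - 2])
    return g

u = 2.5
g = g_rec(4, u)
for n in range(5):
    direct = quad(lambda v: (u - v)**n * np.exp(-v) / np.sqrt(np.pi * v), 0, u)[0]
    print(n, g[n], direct)                      # the two columns should agree

mu1 = (g[0] + g[1]) / np.sqrt(2 * u)            # from (5.16) with n = 0
print(mu1, (2*u + 1) * erf(np.sqrt(u)) / (2 * np.sqrt(2*u))
      + np.exp(-u) / np.sqrt(2 * np.pi))        # both equal mu_1(u), cf. (5.20)
```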
To close, we mention that the probability \(U^{(\mathtt{r})}(t)\) introduced in (3.41) scales as
\[U^{(\mathtt{r})}(t)\approx\frac{G(u)}{\sqrt{t}} \tag{5.22}\]
throughout the crossover regime, with
\[G(u)=\tfrac{1}{2}\mu_{1}(u)+u\mu_{1}^{\prime}(u)=\frac{\sqrt{u}\,\operatorname {erf}\sqrt{u}}{\sqrt{2}}+\frac{\operatorname{e}^{-u}}{\sqrt{2\pi}}. \tag{5.23}\]
The probability \(U^{(\mathtt{r})}(t)\) exhibits even-odd oscillations, so that (5.22) actually describes the behaviour of the local average \(\tfrac{1}{2}(U^{(\mathtt{r})}(t)+U^{(\mathtt{r})}(t-1))\).
### Cumulants of \(\boldsymbol{\zeta}\)
In order to compare the above analysis of the crossover with the outcomes of section 4, let us consider the cumulants
\[\gamma_{k}(u)=\langle\boldsymbol{\zeta}^{k}\rangle_{c}. \tag{5.24}\]
At large values of \(u\), neglecting exponentially small corrections, these quantities read
\[\gamma_{1}(u)\,\approx\,\frac{2u+1}{2\sqrt{2u}},\qquad\gamma_{2}(u)\approx \frac{4u-1}{8u},\]
\[\gamma_{3}(u) \approx \frac{6u-1}{16u\sqrt{2u}},\qquad\gamma_{4}(u)\approx\frac{3}{32u^{2}},\] \[\gamma_{5}(u) \approx -\frac{3(10u-3)}{128u^{2}\sqrt{2u}},\qquad\gamma_{6}(u)\approx- \frac{15}{64u^{3}}. \tag{5.25}\]
The cumulants of \(\boldsymbol{\zeta}\) appear to have a simpler dependence on \(u\) than the corresponding moments. To leading order as \(u\gg 1\), the odd cumulants scale as
\[\gamma_{2n+1}(u)\approx a_{n}\left(\frac{u}{2}\right)^{1/2-n}, \tag{5.26}\]
in agreement with (4.31), where the amplitudes \(a_{n}\) are given in (4.29). The second cumulant (variance) admits a finite limit \(1/2\), to be identified with the limit of \(c_{2,0}\), whereas higher even cumulants scale as
\[\gamma_{2n}(u)\approx\frac{\alpha_{n}}{u^{n}}, \tag{5.27}\]
for some constants \(\alpha_{n}\). They are therefore subleading with respect to the odd ones.
## 6 Discussion
To conclude, let us put the main outcomes of the present work in perspective with those of the companion paper [1]. The point process considered in this latter work involves two generic nested renewal processes, an internal one characterised by the distribution \(\rho(\tau)\) of interarrival times, and an external one characterised by the distribution \(f(T)\) of time intervals between resetting events. In [1], the main emphasis was on the number \(\mathcal{N}_{t}^{\times}\) of (internal) renewal events occurring within a fixed observation time \(t\). The statistics of this observable revealed a wide variety of asymptotic behaviours, dependent on the values of the exponents \(\theta_{1}\) and \(\theta_{2}\) governing the tails of the distributions \(\rho(\tau)\) and \(f(T)\). These findings highlight the dominance of the more regular of the two processes, specifically the one with the larger tail exponent, \(\tilde{\theta}=\max(\theta_{1},\theta_{2})\). More specifically, \(\mathcal{N}_{t}^{\times}\) grows linearly in time and has relatively negligible fluctuations whenever \(\tilde{\theta}>1\), whereas \(\mathcal{N}_{t}^{\times}\sim t^{\tilde{\theta}}\) grows subextensively over time while continuing to fluctuate for \(\tilde{\theta}<1\).
The reset Polya walk considered in the present work is a specific instance of the general process made of two arbitrary nested renewal processes. The internal renewal process describes the spontaneous returns of the walker to its starting point, whereas the external one consists of resettings, taking place with probability \(r\) at each time step. In the phase diagram of [1], this example corresponds to \(\theta_{1}=1/2\) and \(\theta_{2}=\infty\), and hence \(\tilde{\theta}=\infty\), so that a high degree of regularity is expected for the entire process.
The present analysis corroborates this prediction and completes it by a breadth of quantitative results concerning the joint statistics of the numbers \(\mathcal{N}_{t}^{\times}\) of crosses (spontaneous returns) and \(\mathcal{N}_{t}^{\bullet}\) of dots (resetting events) in the regime of late times. The most salient of these outcomes--highlighted in the introduction--concerning the linear growth of all joint cumulants \(\langle(\mathcal{N}_{t}^{\times})^{k}(\mathcal{N}_{t}^{\bullet})^{\ell} \rangle_{c}\) and the smoothness of the bivariate
large deviation function \(I(\xi,\eta)\), testify that the numbers \(\mathcal{N}_{t}^{\times}\) and \(\mathcal{N}_{t}^{\bullet}\) are extensive in a very strong sense, and that the reset Polya walk indeed manifests a very high degree of regularity. This characteristic can be related to the exponentially decaying, hence strongly localised, steady-state distribution of the walker's position under stochastic resetting (see (3.31)). It would be worth investigating whether similar regularity properties also manifest themselves in other observables pertaining to the Polya walk under resetting.
#### Data availability statement
The authors have no data to share.
#### Conflict of interest
The authors declare no conflicts of interest.
## Appendix A Zeros of the denominator of (3.20)
This appendix is devoted to the determination of the zeros of the denominator of (3.20) in the variable \(w\). Using (2.21) and (2.2), and eliminating the square root entering the formula thus obtained, results in a polynomial equation of degree three for the zeros \(w(z,y)\), reading
\[P(r,z,y,w)=P_{3}w^{3}+P_{2}w^{2}+P_{1}w+P_{0}=0,\] (A1)
with
\[P_{3} = (1-r)((1-r)z+ry)^{2},\qquad P_{2}=r^{2}y^{2}-(1-r)^{2}z^{2},\] \[P_{1} = (1-r)(1-2z)-2ryz,\qquad P_{0}=2z-1.\] (A2)
Polynomial equations of degree three can be solved analytically, either by Cardano's method or by the trigonometric method. Here, we adopt the latter approach, which has the advantage that no complex numbers are involved when the three zeros are real. This is indeed the case here, for small enough real \(\lambda\) and \(\mu\). Setting \(w=B+x\), we arrive at a reduced equation for \(x\),
\[x^{3}+px+q=0,\] (A3)
with
\[B=-\frac{P_{2}}{3P_{3}},\quad p=\frac{P_{1}}{P_{3}}-\frac{P_{2}^{2}}{3P_{3}^{2 }},\quad q=\frac{P_{0}}{P_{3}}-\frac{P_{1}P_{2}}{3P_{3}^{2}}+\frac{2P_{2}^{3}}{ 27P_{3}^{3}}.\] (A4)
The condition for all zeros to be real reads \(4p^{3}+27q^{2}<0\), implying in particular \(p<0\). These zeros then read
\[w_{1} = B+\sigma\cos(\theta-2\pi/3),\] \[w_{2} = B+\sigma\cos\theta,\]
\[w_{3}\,=\,B+\sigma\cos(\theta+2\pi/3),\] (A5)
with
\[\sigma=-\sqrt{-\frac{4p}{3}},\qquad\cos 3\theta=-\frac{4q}{\sigma^{3}}\qquad(0 \leq\theta\leq\pi/3).\] (A6)
For small enough real \(\lambda\) and \(\mu\), the smallest positive zero among the three is \(w_{1}\), so that
\[S(\lambda,\mu)=-\ln w_{1},\] (A7)
that is (4.11).
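For completeness, the trigonometric recipe above is spelled out in the following sketch and compared with a generic polynomial root finder at a sample point with small \(\lambda\) and \(\mu\), where the three zeros are indeed real.

```python
# Implementation of the trigonometric solution (A3)-(A6) of the cubic (A1)-(A2).
import numpy as np

r, lam, mu = 0.3, 0.05, 0.02
z, y = np.exp(lam), np.exp(mu)
P3 = (1 - r) * ((1 - r) * z + r * y) ** 2
P2 = r**2 * y**2 - (1 - r) ** 2 * z**2
P1 = (1 - r) * (1 - 2 * z) - 2 * r * y * z
P0 = 2 * z - 1

B = -P2 / (3 * P3)
p = P1 / P3 - P2**2 / (3 * P3**2)
q = P0 / P3 - P1 * P2 / (3 * P3**2) + 2 * P2**3 / (27 * P3**3)   # (A4)
sig = -np.sqrt(-4 * p / 3)
theta = np.arccos(np.clip(-4 * q / sig**3, -1.0, 1.0)) / 3       # (A6)

w = sorted(B + sig * np.cos(theta + k * 2 * np.pi / 3) for k in (-1, 0, 1))  # (A5)
print(w)
print(sorted(np.roots([P3, P2, P1, P0]).real))                   # should coincide
```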
|
2309.01387 | An optimization framework for wind farm layout design using CFD-based
Kriging model | Wind farm layout optimization (WFLO) seeks to alleviate the wake loss and maximize wind farm power output efficiency, and is a crucial process in the design of wind energy projects. Since the optimization algorithms typically require thousands of numerical evaluations of the wake effects, conventional WFLO studies are usually carried out with the low-fidelity analytical wake models. In this paper, we develop an optimization framework for wind farm layout design using CFD-based Kriging model to maximize the annual energy production (AEP) of wind farms. This surrogate-based optimization (SBO) framework uses Latin hypercube sampling to generate a group of wind farm layout samples, based on which CFD simulations are carried out to obtain the corresponding AEPs. This wind farm layout dataset is used to train the Kriging model, which is then integrated with an optimizer based on genetic algorithm (GA). As the optimization progresses, the intermediate optimal layout designs are again fed into the dataset. Such adaptive update of the wind farm layout dataset continues until the algorithm converges. To evaluate the performance of the proposed SBO framework, we apply it to three representative wind farm cases. Compared to the conventional staggered layout, the optimized wind farm produces significantly higher total AEP. In particular, the SBO framework requires a significantly smaller number of CFD calls to yield optimal layouts that generate almost the same AEP as the direct CFD-GA method. Further analysis of the velocity fields shows that the optimization framework attempts to locate the downstream turbines away from the wakes of upstream ones. The proposed CFD-based surrogate model provides a more accurate and flexible alternative to the conventional analytical-wake-model-based methods in WFLO tasks, and has the potential to be used for designing efficient wind farm projects. | Zhenfan Wang, Yu Tu, Kai Zhang, Zhaolong Han, Yong Cao, Dai Zhou | 2023-09-04T06:35:39Z | http://arxiv.org/abs/2309.01387v1 | # An optimization framework for wind farm layout design using CFD-based Kriging model
###### Abstract
Wind farm layout optimization (WFLO) seeks to alleviate the wake loss and maximize wind farm power output efficiency, and is a crucial process in the design and planning of wind energy projects. Since the optimization algorithms typically require thousands of numerical evaluations of the wake effects, conventional WFLO studies are usually carried out with the low-fidelity analytical wake models, while the higher-fidelity computational-fluid-dynamics-based (CFD-based) methods are seldom used due to the excessive computational cost. In this paper, we develop a self-adaptive optimization framework for wind farm layout design using CFD-based Kriging model to maximize the annual energy production (AEP) of wind farms. This surrogate-based optimization (SBO) framework uses latin hypercube sampling to generate a group of wind farm layout samples, based on which CFD simulations with the turbines modeled as actuator disks are carried out to obtain
the corresponding AEPs. This initial wind farm layout dataset is used to train the Kriging model, which is then integrated with an optimizer based on genetic algorithm (GA). As the optimization progresses, the intermediate optimal layout designs are again fed into the dataset to re-train the Kriging model. Such adaptive updating of the wind farm layout dataset continues until the algorithm converges to the optimal layout design. To evaluate the performance of the proposed SBO framework, we apply it to three wind farm cases under different wind distributions and terrains. Compared to the conventional staggered layout along the dominant wind direction, the optimized wind farm produces significantly higher total AEP, which is more evenly distributed among the turbines. In particular, the SBO framework requires a significantly smaller number of CFD calls to yield optimal layouts that generate almost the same AEP as the direct CFD-GA method. Further analysis of the velocity fields shows that the optimization framework always attempts to locate the downstream turbines away from the wakes of upstream ones along the dominant wind directions. The proposed CFD-based surrogate model provides a more accurate and flexible alternative to the conventional analytical-wake-model-based methods in WFLO tasks, and has the potential to be used for designing efficient wind farm projects.
Wind farm layout design Surrogate-based optimization Kriging model Actuator disk model
## 1 Introduction
An increasing number of countries have committed to developing sustainable energy as substitutes for fossil fuels due to pressing environmental issues. As wind energy is one of the most economically competitive and abundant renewable energy sources, wind farms have been installed worldwide to harvest this clean energy resource. However, the interaction between turbines within wind farms, particularly the wake effects generated by upstream turbines, can have a substantial impact on the performance of downstream turbines. These wake effects lead to lower wind speeds and increased turbulence intensity, resulting in reduced power generation. It is reported that the average power loss reaches \(10\%-20\%\) in some large offshore wind farms due to the wake effects (Barthelmie et al., 2009).
As an effective means to alleviate the wake effects within wind farms at the initial design phase, wind farm layout optimization (WFLO) has been a subject of extensive research (Reddy, 2020; Dong et al., 2021; Thomas et al., 2022). Table 1 presents a selective summary of the existing studies on WFLO in terms of the optimization algorithm and the wake model. Both the gradient-based methods such as sparse nonlinear optimizer (SNOPT) and Sequential Convex Programming (SCP), and the heuristic search techniques such as the greedy algorithm, genetic algorithm (GA), particle swarm optimization algorithm (PSO), and ant colony optimization (ACO) have been applied in these studies. Two main types of wake models are used in WFLO, namely the analytical wake models and the numerical wake models.
Analytical wake models such as the Jensen wake model (Jensen, 1983), the Gaussian model (Larsen, 1988), the Frandsen wake model (Frandsen et al., 2006) are most widely applied in WFLO problems due to the high computational efficiency. These models employ simplified analytical functions based on momentum conservation or empirical relations to describe the turbine's wake characteristics. The most used Jensen model assumes that the wake expands linearly, and the velocity deficit is determined by the distance behind the turbine, wind speed and induction parameter. The Gaussian wake model assumes that the velocity deficit follows a Gaussian distribution. It provides a more realistic representation of the wake's shape compared to the linear wake expansion in Jensen's model. The Frandsen wake model incorporates both linear and Gaussian wake expansions, and offers more accurate predictions by accounting for the initial wake width. Although these analytical models provide valuable insights into the wake development behind wind turbines, they are simplifications of the complex physics involved in wake interactions, and can be less accurate in assessing the wake effects.
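To illustrate the type of evaluation involved, a minimal implementation of the Jensen (top-hat) wake model is sketched below; the induction factor, wake decay constant and rotor radius are assumed example values rather than parameters taken from the studies cited above.

```python
# Illustrative sketch of the Jensen wake model on the wake axis.
def jensen_deficit(x, a=0.25, k=0.05, r0=40.0):
    """Fractional velocity deficit a distance x downstream of the rotor
    (a: axial induction factor, k: wake decay constant, r0: rotor radius)."""
    return 2.0 * a / (1.0 + k * x / r0) ** 2

for x in (200.0, 400.0, 800.0):              # downstream distances in metres
    print(x, 1.0 - jensen_deficit(x))        # wake velocity as a fraction of U_inf
```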
Numerical simulations based on computational fluid dynamics (CFD) present a higher-fidelity method for understanding the aerodynamics of wind turbines. Depending on the way the turbines are modeled, the CFD-based methods can be classified as blade-resolved simulation, actuator line model, and actuator disk model. Blade-resolved simulations provide detailed insights into the flow characteristics around individual blades, including the effects of turbulence, separation, and dynamic stall (Mittal et al., 2016; Liu et al., 2017; de Oliveira et al., 2022; Zhang et al., 2023). However, the huge computational resources required for resolving the blade boundary layer preclude its use for large-scale wind farm studies. The actuator line model (ALM) treats the rotor blades as lines distributed with actuator points, on which the sectional drag and lift forces are projected. Since these aerodynamic forces are calculated based on tabulated airfoil data, this approach removes the computational complexity associated with resolving the rotor blades. ALM has been regarded as the state-of-the-art tool for wind farm simulations (Stevens and Meneveau, 2017; Stevens et al., 2018; Shapiro et al., 2022). Nevertheless, in the case of WFLO, where iterative evaluations of the velocity deficit are required, ALM is still prohibitively expensive.
Another popular method in the actuator-type modeling technique is the actuator disk model (ADM). This model further simplifies the rotor swept area as a permeable disk. The thrust and torque of the rotor are added to the actuator zone as source terms, which generate pressure drops and velocity plunges across the disk area in the CFD simulations (Sanderse et al., 2011; Svenning, 2010). Compared with ALM, the more intricate flow structures such as root and tip vortices cannot be captured by ADM. Nevertheless, the ADM has been shown to depict wake effects with sufficient accuracy (Martinez-Tossas et al., 2015). In recent years, CFD simulations with ADM have been applied in WFLO problems. Antonini et al. (2020) presented a gradient-based algorithm with the adjoint method for gradient calculation to solve the optimization of wind farm layout using ADM. The authors claimed that this CFD-based method is able to accurately simulate wake effects and terrain-induced flow characteristics. Cruz and Carmo (2020) applied the heuristic genetic algorithm to optimize the layout of a wind farm with terrain effects using CFD. While the authors also commended the superior accuracy of ADM over the analytical wake models in WFLO tasks, they noted that significant computational resources are required for the optimization algorithm to converge to the optimal layout design.
To alleviate the computational challenges of using high-fidelity CFD-based methods in WFLO problems, we propose to apply the Kriging model as a surrogate to accelerate the optimization process. The Kriging model, also known as Gaussian Process Regression (GPR), was initially developed as a geostatistical method for spatial interpolation and prediction of correlated data (Krige, 1951). It estimates the unknown values at unobserved locations by taking into account the observed values at nearby locations and their spatial relationships. Once this surrogate model is trained, the wake effects and the energy production of a particular layout can be obtained at a low computational cost and without the need for new CFD simulations. The applicability of the Kriging model has been demonstrated in various disciplines such as image segmentation (Karl, 2010), hull shape optimization (Casella, 2020), and aero-elastic tailoring of bridge decks (Montoya et al., 2022), just to name a few. However, the feasibility of using the Kriging model for high-fidelity wind farm design has not been reported before.
In this paper, we develop a self-adaptive optimization framework for high-fidelity wind farm layout design using the surrogate-based optimization (SBO) method. Using this framework, we optimize the siting of each turbine within a restricted area to maximize the wind farm's annual energy production (AEP) with reduced computational cost. The rest of the paper is organized as follows: Section 2 outlines the formulation of the WFLO problem and the self-adaptive WFLO framework, including the wind farm layout dataset, the numerical method, and the surrogate-based optimization (SBO) method using Kriging. In Section 3, we present three wind farm cases under different wind distributions and terrains to demonstrate the performance and advantages of our proposed framework. Finally, Section 4 presents the conclusions and potential future research directions.
\begin{table}
\begin{tabular}{c c c} \hline \hline Reference & Optimization algorithms & Wake models \\ \hline Mosetti et al. (1994) & GA & Jensen \\ Ozturk and Norman (2004) & Greedy & - \\ Grady et al. (2005) & GA & Jensen \\ Mora et al. (2007) & GA & Jensen \\ Zhang et al. (2011) & Greedy & Jensen \\ Eroglu and Seckiner (2012) & ACO & Jensen \\ Chen et al. (2013) & GA & Frandsen \\ Park and Law (2015) & SCP & Jensen \\ Shakoor et al. (2015) & GA & Jensen \\ Hou et al. (2016) & PSO & Jensen \\ Gebraad et al. (2017) & SNOPT & Floris wake model \\ Parada et al. (2017) & GA & BP \\ Pillai et al. (2017) & GA \& PSO & Larsen \\ Kirchner-Bossi and Porte-Agel (2018) & GA & Gaussian \\ Stanley and Ning (2019) & SNOPT & Gaussian \\ Quan and Kim (2019) & Greedy & Jensen \\ Cruz and Carmo (2020) & GA & CFD-ADM \\ Antonini et al. (2020) & SQP & CFD-ADM \\ Stanley et al. (2020) & SNOPT & BP \\ Gagakuma et al. (2021) & SNOPT & Floris wake model \\ Liu et al. (2021) & GA & Ishihara \\ Thomas et al. (2022b) & SNOPT \& ALPSO & Gaussian \\ \hline \hline \end{tabular}
\end{table}
Table 1: A review of the wind farm layout optimization studies
## 2 Self-adaptive WFLO framework
In this section, we present the details of the proposed wind farm layout optimization (WFLO) framework. The formulation of the WFLO problem is formally introduced in §2.1. Next, we outline the three main procedures involved in the framework, i.e., the adaptive wind farm layout dataset (§2.2), the prediction of AEP using CFD simulations (§2.3), and the construction of the Kriging model for surrogate-based optimization (§2.4). Figure 1 illustrates the self-adaptive WFLO framework. The open-source CFD toolbox OpenFOAM is used for the wind farm CFD simulations and AEP prediction. The construction of the Kriging model and the implementation of SBO are carried out using the optimization toolbox DAKOTA. Python and shell glue scripts are created to couple these toolboxes.
### Formulation of WFLO problem
The objective of the WFLO problem is to find the optimal location of each wind turbine so as to maximize the annual energy production (AEP) of the wind farm:
\[\mathrm{AEP}=\frac{365\times 3+366}{4}\times 24\sum_{i=1}^{n}\sum_{j=1}^{m}P_{j} (x_{i},y_{i})f_{j},\]
where \(n\) and \(m\) denote the numbers of turbines and wind directions, respectively, \(P_{j}(x_{i},y_{i})\) is the power output of turbine \(i\) under wind direction \(j\), and \(f_{j}\) is the frequency of wind direction \(j\). The coordinates \((x_{i},y_{i})\) of turbine \(i\) are the design variables in this study. To reduce the computational cost, we choose to solve a discrete optimization problem by splitting the siting area into square cells whose nodes serve as candidate positions for the wind turbines (Bai et al., 2022). It was shown in Thomas et al. (2022) that the limitation on potential turbine locations does not necessarily preclude such methods from finding optimal layouts. In the WFLO problem, an additional inter-distance requirement is defined as:
\[\mathrm{subject\ to\ }d_{i}\geq d_{min},\]
where \(d_{i}\) is the distance between any pair of turbines, which must be greater than or equal to \(d_{min}=2D\), with \(D\) denoting the rotor diameter.
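To make the objective and the constraint concrete, the following minimal sketch (ours, not the authors' code) evaluates the AEP formula above from an array of per-direction turbine powers and checks the spacing requirement; the array layout and helper names are assumptions made for illustration only.

```python
import numpy as np

HOURS_PER_YEAR = (365 * 3 + 366) / 4 * 24   # average number of hours per year

def aep(P, f):
    """AEP from P[j, i], the power of turbine i under wind direction j,
    and f[j], the frequency of wind direction j (the f[j] sum to one)."""
    return HOURS_PER_YEAR * np.sum(f[:, None] * P)

def spacing_ok(xy, d_min):
    """Check the inter-distance constraint for turbine coordinates xy[i] = (x_i, y_i)."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    return bool(np.all(d[np.triu_indices(len(xy), k=1)] >= d_min))
```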
Figure 1: Flow chart of the self-adaptive WFLO framework.
### Adaptive wind farm layout dataset
The wind farm layout parameters and the corresponding AEP acquired from CFD simulations form the wind farm layout dataset. It comprises two groups of data, the initial sampling data and the iteratively updated data. The distribution of the initial samples affects the performance of the framework; therefore, they need to be evenly distributed throughout the design space. We use the latin hypercube sampling (LHS) method (McKay et al., 2000) to produce the initial samples of layout parameters, \(\mathbf{X}=\{\mathbf{x}^{(1)},\mathbf{x}^{(2)},...,\mathbf{x}^{(n)}\}^{T}\). The vector \(\mathbf{x}^{(i)}=\{(x_{1}^{(i)},y_{1}^{(i)}),(x_{2}^{(i)},y_{2}^{(i)}),...,(x_{k} ^{(i)},y_{k}^{(i)})\}\) represents the set of horizontal turbine coordinates in the \(i\)th layout sample. CFD simulations are carried out based on the layout parameters, and the annual energy production (AEP) is collected in \(\mathbf{Y}=\{\mathrm{AEP}^{(1)},\mathrm{AEP}^{(2)},...,\mathrm{AEP}^{(n)}\}^{T}\). The iterative update of the dataset is explained in section 2.4.
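As an illustration of this step, the sketch below (our own assumption of how the sampling could be realised, not the DAKOTA-based implementation used in the paper) draws an LHS sample, snaps it to the grid of candidate nodes used in the case studies, and rejects layouts violating the spacing constraint. The default rotor diameter \(D=126\) m corresponds to the NREL 5 MW turbine and is used here only as an illustrative default.

```python
import numpy as np
from scipy.stats import qmc

def initial_layouts(n_samples, n_turbines=8, n_nodes=9, D=126.0, seed=0, oversample=100):
    """Feasible LHS layouts on the (n_nodes x n_nodes) grid of candidate positions
    spanning the 8D x 8D siting area; infeasible layouts are discarded.
    Increase `oversample` if too few feasible layouts are found."""
    sampler = qmc.LatinHypercube(d=2 * n_turbines, seed=seed)
    U = sampler.random(n_samples * oversample)            # stratified samples in [0, 1)
    layouts = []
    for u in U:
        idx = np.floor(u * n_nodes).astype(int)           # snap to grid indices 0..n_nodes-1
        xy = idx.reshape(n_turbines, 2) * D               # candidate nodes are D apart
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        if np.all(d[np.triu_indices(n_turbines, k=1)] >= 2 * D):
            layouts.append(xy)
        if len(layouts) == n_samples:
            break
    return np.array(layouts)
```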
### Wind farm CFD simulations
In this study, CFD simulations are carried out to model the wake interactions with the rotors treated as actuator disks. The power production of each turbine is predicted using the ADM-RANS simulations, as illustrated below.
#### 2.3.1 Governing equations
We numerically solve for the flows over the wind turbines by employing the Reynolds-averaged Navier-Stokes (RANS) formulation
\[\frac{\partial\overline{u}_{i}}{\partial x_{i}} =0, \tag{1a}\] \[\rho\frac{\partial\overline{u}_{i}}{\partial t}+\rho\frac{ \partial(\overline{u}_{i}\overline{u}_{j})}{\partial x_{j}} =-\frac{\partial\overline{p}}{\partial x_{i}}+\frac{\partial}{ \partial x_{j}}\left(\mu\left(\frac{\partial\overline{u}_{i}}{\partial x_{j}}+ \frac{\partial\overline{u}_{j}}{\partial x_{i}}\right)-\rho\overline{u_{j}^{ \prime}u_{i}^{\prime}}\right)+f_{i}, \tag{1b}\]
where \(x_{i}\) denotes the Cartesian coordinates, \(\overline{u}_{i}\) and \(\overline{p}\) are the temporally averaged velocity and pressure, respectively, \(\rho\) and \(\mu\) are the air density and dynamic viscosity, and \(f_{i}\) represents the source term from the actuator disk. The Reynolds stress \(\rho\overline{u_{j}^{\prime}u_{i}^{\prime}}\) arising from the time-averaging process is modeled using the \(k-\epsilon\) turbulence model with the introduction of two new variables, i.e., the turbulence kinetic energy \(k\) and the dissipation rate \(\epsilon\). The standard transport equations for the two new variables read as
\[\rho\frac{\partial k}{\partial t}+\rho\frac{\partial(\overline{u} _{i}k)}{\partial x_{i}} =\frac{\partial}{\partial x_{j}}\left(\frac{\mu_{t}}{\sigma_{k}} \frac{\partial k}{\partial x_{j}}\right)+P_{k}-\rho\epsilon, \tag{2a}\] \[\rho\frac{\partial\epsilon}{\partial t}+\rho\frac{\partial( \overline{u}_{i}\epsilon)}{\partial x_{i}} =\frac{\partial}{\partial x_{j}}\left(\frac{\mu_{t}}{\sigma_{ \epsilon}}\frac{\partial\epsilon}{\partial x_{j}}\right)+C_{1\epsilon}\frac{ \epsilon}{k}P_{k}-C_{2\epsilon}\frac{\epsilon^{2}}{k}\rho, \tag{2b}\]
with
\[P_{k}=-\rho\overline{u_{j}^{\prime}u_{i}^{\prime}}\frac{\partial\overline{u}_{i}}{\partial x_{j}}, \tag{3a}\] \[\mu_{t}=\rho C_{\mu}\frac{k^{2}}{\epsilon}, \tag{3b}\]
where \(C_{\mu}\), \(\sigma_{k}\), \(\sigma_{\epsilon}\), \(C_{1\epsilon}\), \(C_{2\epsilon}\) are the five constants in the \(k-\epsilon\) turbulence model and are set as default values in OpenFOAM.
The RANS equations outlined above are discretized using the finite volume method with second-order numerical schemes in both space and time. A modified version of the simpleFoam solver (OpenFOAM package) is used for solving for the steady-state flow over the wind turbines. The modification involves adding the ADM source term into the RANS equations, as we discuss in detail next.
#### 2.3.2 Actuator disk model
The actuator disk model (ADM) simulates the turbine rotor as a thin, permeable disk, and the momentum transferred from the rotor to the flow is added as the volumetric source term \(f_{i}\) in equation (1). To implement the wind farm CFD simulations, the thrust and torque of each rotor are required. Ideally, these aerodynamic quantities can be obtained by looking up the thrust and torque curves provided by wind turbine manufacturers, given the theoretical inflow velocity. However, in the presence of the wake effects within the wind farm, the theoretical inflow velocity for a downstream turbine is not known _a priori_.
To circumvent this problem, we follow the approach in Richmond et al. (2019), where the thrust and torque curves are inserted into the flow solver, and the theoretical inflow velocity is calculated in an iterative manner. This method, which is summarized in figure 2, is based on the one-dimensional actuator disk theory (Burton et al., 2011):
\[U_{D} =U_{\infty}(1-a), \tag{4a}\] \[C_{T} =\frac{T}{\frac{1}{2}\rho U_{\infty}^{2}A_{D}},\] (4b) \[C_{T} =4a(1-a),\] (4c) \[a =\frac{1}{2}(1-\sqrt{1-C_{T}}), \tag{4d}\]
where \(U_{D}\) is the velocity averaged over the disk region, \(C_{T}\) is the thrust coefficient, \(A_{D}\) is the area of the actuator disk, and \(a\) is the axial induction factor. These equations are solved iteratively within the modified simpleFoam code, for each turbine at each solver iteration, until the convergence criterion is met.
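The iteration of equations (4a)-(4d) can be sketched as follows; `thrust_curve` is a hypothetical callable interpolating the manufacturer thrust curve (figure 5), and the sketch illustrates the idea only, not the authors' OpenFOAM implementation.

```python
import numpy as np

def recover_inflow(U_D, thrust_curve, rho, A_D, tol=1e-6, max_iter=100):
    """Recover the theoretical inflow velocity from the disk-averaged velocity U_D."""
    U_inf = U_D                                          # initial guess: no induction
    for _ in range(max_iter):
        T = thrust_curve(U_inf)                          # thrust from the lookup curve
        C_T = T / (0.5 * rho * U_inf**2 * A_D)           # eq. (4b)
        a = 0.5 * (1.0 - np.sqrt(max(0.0, 1.0 - C_T)))   # eq. (4d)
        U_new = U_D / (1.0 - a)                          # invert eq. (4a)
        if abs(U_new - U_inf) < tol:
            break
        U_inf = U_new
    return U_inf
```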
After the thrust and torque of each rotor are obtained, the volume forces are distributed along the radial direction following the Goldstein optimum (Goldstein, 1929):
\[f_{ix} =A_{x}r^{*}\sqrt{1-r^{*}}, \tag{5a}\] \[f_{i\theta} =A_{\theta}\frac{r^{*}\sqrt{1-r^{*}}}{r^{*}(1-r_{h}^{{}^{\prime}} )+r_{h}^{{}^{\prime}}}, \tag{5b}\]
with
\[r^{*} =\frac{r^{{}^{\prime}}-r_{h}^{{}^{\prime}}}{1-r_{h}^{{}^{\prime}} },\,\,r^{{}^{\prime}}=\frac{r}{R_{P}}, \tag{6a}\] \[A_{x} =\frac{105}{8}\frac{T}{\pi\Delta(3R_{H}+4R_{P})(R_{P}-R_{H})},\] (6b) \[A_{\theta} =\frac{105}{8}\frac{Q}{\pi\Delta R_{P}(3R_{P}+4R_{H})(R_{P}-R_{H} )}, \tag{6c}\]
where \(f_{ix}\) is the axial force, \(f_{i\theta}\) is the tangential force, \(r\) is the distance between the point and the disk center, \(R_{P}\) and \(R_{H}\) are the external and internal radii of the disk, \(T\) and \(Q\) are the rotor's thrust and torque, and \(\Delta\) is the thickness of the disk.
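A direct transcription of equations (5)-(6) could look as follows; here we assume \(r_{h}^{\prime}=R_{H}/R_{P}\), in analogy with \(r^{\prime}=r/R_{P}\), since the text does not define it explicitly, and the sketch is again ours rather than the authors' solver code.

```python
import numpy as np

def goldstein_forces(r, T, Q, R_P, R_H, Delta):
    """Axial and tangential body forces at radius r (R_H <= r <= R_P)."""
    r_prime = r / R_P
    r_h = R_H / R_P                                   # assumed definition of r_h'
    r_star = (r_prime - r_h) / (1.0 - r_h)            # eq. (6a)
    A_x = 105.0 / 8.0 * T / (np.pi * Delta * (3 * R_H + 4 * R_P) * (R_P - R_H))       # eq. (6b)
    A_t = 105.0 / 8.0 * Q / (np.pi * Delta * R_P * (3 * R_P + 4 * R_H) * (R_P - R_H))  # eq. (6c)
    f_x = A_x * r_star * np.sqrt(1.0 - r_star)                                         # eq. (5a)
    f_t = A_t * r_star * np.sqrt(1.0 - r_star) / (r_star * (1.0 - r_h) + r_h)          # eq. (5b)
    return f_x, f_t
```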
Finally, the power of each turbine, from which the objective function \(\mathrm{AEP}\) is predicted, is computed as the sum of the power of each individual cell within its actuator disk zone:
\[P=\sum_{i=1}^{k}P_{i}=\sum_{i=1}^{k}F_{i}U_{i} \tag{7}\]
Figure 2: Workflow to calculate inflow velocity, thrust and torque in the ADM code
where \(F_{i}\) and \(U_{i}\) are the volume-force and velocity vectors in each individual cell, and \(k\) is the total number of cells in the disk zone.
#### 2.3.3 Computational setup
We adopt a circular computational domain that allows for simulations with different inflow directions, as illustrated in figure 3. The wind turbines are restricted within a rectangular subdomain in the center. The diameter of the circular computational domain is set as 3 times the length of the center rectangle. In the vertical direction, the height of the computational domain is set as \(8D\), where \(D\) is the diameter of the rotor. The inlet velocity is prescribed as \(U_{in}=U_{\infty}(z/z_{0})^{\alpha}\), where \(z_{0}\) is the height of the turbine hub, \(U_{\infty}=11.4\) m/s is the reference velocity at the turbine hub, and \(\alpha=0.14\) is the shear exponent of the velocity profile. At the outlet, a reference pressure \(p_{\infty}=0\) is set, with a zero-gradient condition for the velocity. The bottom side of the computational domain is treated as a wall, and the top boundary with a slip condition.
The mesh of the computational domain is created using OpenFOAM's native blockMesh tool. To guarantee that the ADM source terms are properly loaded into the flow, the mesh near the actuator disk zones is refined. The element size in the disk region is \(6\) m (\(0.05D\)). For a simulation with 8 turbines, the total number of control volumes is around \(2\times 10^{5}-6\times 10^{5}\).
### Surrogate-based optimization method
#### 2.4.1 Kriging model
The Kriging model (Krige, 1951) is employed in this framework to build the surrogate model. It predicts the response values at unknown points based on a set of sample input data \(\mathbf{X}=\{\mathbf{x}^{(1)},\mathbf{x}^{(2)},...,\mathbf{x}^{(n)}\}^{T}\), and the observed values \(\mathbf{Y}=\{y^{(1)},y^{(2)},...,y^{(n)}\}^{T}\).
The Kriging emulator, \(\hat{f}(x)\), is defined in the following equation (Dalbey et al., 2022; Adams et al., 2020):
\[\hat{f}(\mathbf{x})=g(\mathbf{x})^{T}\beta+\epsilon(\mathbf{x}), \tag{8}\]
where \(\hat{\cdot}\) denotes the Kriging approximation, \(\mathbf{x}\) is the vector of all the design variables, \(g(\mathbf{x})^{T}\beta\) is the trend term with \(g(\mathbf{x})\) the vector of trend functions, and \(\epsilon(\mathbf{x})\) is a Gaussian process error model which has zero uncertainty at the training points. The covariance of \(\epsilon(\mathbf{x})\) is modeled using the correlation matrix as
\[\mathrm{Cov}[\epsilon(\mathbf{x}^{(i)}),\epsilon(\mathbf{x}^{(j)})]=\sigma^{2}r(\mathbf{ x}^{(i)},\mathbf{x}^{(j)}), \tag{9}\]
where \(r(\mathbf{x}^{(i)},\mathbf{x}^{(j)})\) is the correlation functions between samples \(\mathbf{x}^{(i)}\) and \(\mathbf{x}^{(j)}\).
Figure 3: Top views of the circular computational domain for a wind farm. \((a)\) assigning inlet or outlet boundary for wind direction of \(180^{\circ}\), \((b)\) assigning inlet or outlet boundary for wind direction of \(225^{\circ}\).
In this framework, the Gaussian correlation function is used to calculate the correlation vector and matrix. The correlation vector and matrix are constructed by the following equations:
\[r(\mathbf{x}^{(i)},\mathbf{x}^{(j)})=\exp(-\sum_{k=1}^{m}\theta_{k}|x_{k}^ {(i)}-x_{k}^{(j)}|^{2}),\ \ i,j\ =\ 1,2,...,n \tag{10a}\] \[\mathbf{R}=\begin{pmatrix}r(\mathbf{x}^{(1)},\mathbf{x}^{(1)})&\cdots&r(\mathbf{x} ^{(1)},\mathbf{x}^{(n)})\\ \vdots&\ddots&\vdots\\ r(\mathbf{x}^{(n)},\mathbf{x}^{(1)})&\cdots&r(\mathbf{x}^{(n)},\mathbf{x}^{(n)})\end{pmatrix}, \tag{10b}\]
where \(m\) is the number of dimensions in the search space; \(n\) is the number of samples; \(\theta_{k}\) are the hyperparameters that control the correlations between samples.
The hyperparameters \(\theta\) are estimated via the Maximum Likelihood Estimation (MLE) method. To simplify the MLE, the natural logarithm is used and constant terms are omitted:
\[\ln(L)=-\frac{n}{2}\ln(\hat{\sigma}^{2})-\frac{1}{2}\ln|\mathbf{R}|, \tag{11}\]
where \(\hat{\sigma}^{2}=(\mathbf{Y}-\mathbf{G}\beta)^{T}\mathbf{R}^{-1}(\mathbf{Y}-\mathbf{G}\beta)/n\) is the MLE of \(\sigma^{2}\) and \(\mathbf{G}\) is an \(n\times q\) matrix whose \(i\)th row is \(g(\mathbf{x}^{(i)})^{T}\) (the trend functions evaluated at point \(i\)). Maximizing equation (11), we obtain the MLE of the hyperparameters \(\theta\), which in turn determines \(\hat{\sigma}^{2}\).
Once the MLE is completed, we obtain the Kriging emulator of the unknown truth function. The predicted value \(\hat{f}(\mathbf{x})\) at a new point \(\mathbf{x}\) is:
\[\hat{f}(\mathbf{x})=g(\mathbf{x})^{T}\beta+\mathbf{r(x)}^{T}\mathbf{R}^{-1}(\mathbf{Y}-\mathbf{G}\beta), \tag{12}\]
where \(\mathbf{r(x)}\) is the vector of correlations between \(\mathbf{x}\) and each of the training points.
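The construction of the emulator can be illustrated with the following sketch of an ordinary Kriging model with a constant trend; the hyperparameters \(\theta\) are taken as given (in practice they come from maximising equation (11)), and the generalised-least-squares estimate of \(\beta\) is a standard choice not spelled out in the text. This is an illustration only, not the DAKOTA implementation used in the framework.

```python
import numpy as np

def corr(XA, XB, theta):
    """Gaussian correlation of eq. (10a) between rows of XA and XB."""
    d2 = (XA[:, None, :] - XB[None, :, :]) ** 2
    return np.exp(-np.tensordot(d2, theta, axes=([2], [0])))

def fit_kriging(X, Y, theta, nugget=1e-10):
    """Fit a constant-trend Kriging emulator to samples X (n x m) and responses Y (n,)."""
    R = corr(X, X, theta) + nugget * np.eye(len(X))
    G = np.ones((len(X), 1))                               # constant trend g(x) = 1
    Ri = np.linalg.inv(R)
    beta = np.linalg.solve(G.T @ Ri @ G, G.T @ Ri @ Y)     # generalised least squares
    return {"X": X, "theta": theta, "beta": beta, "resid": Ri @ (Y - G @ beta)}

def predict(model, x_new):
    """Kriging prediction of eq. (12) at a new point x_new."""
    r = corr(np.atleast_2d(x_new), model["X"], model["theta"])
    return model["beta"][0] + (r @ model["resid"])[0]
```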
Based on the wind farm layout dataset, a high-fidelity Kriging model \(\widehat{\text{AEP}}(x_{1},y_{1},x_{2},y_{2},...,x_{n},y_{n})\) for wind farm AEP prediction is built with the aforementioned process and coupled to optimization algorithms. The adaptive SBO method is introduced in detail in section 2.4.2.
#### 2.4.2 Adaptive surrogate-based optimization
The surrogate-based optimization algorithm requires constructing a surrogate over the whole design space. The global surrogate is then coupled to the optimization algorithm. A surrogate model built only with the initial sampling dataset is usually not accurate enough to be applied to the optimization problem. For this reason, various methods to update the truth dataset have been explored. An effective method is to infill the dataset with the optimal design of \(\hat{f}\) found in the last iteration together with its truth response. Given its advantage of low additional computational cost (Thelen, 2016), this infilling strategy is employed in this study. The main steps of the adaptive SBO algorithm are as follows:
1. Generate a group of initial sampling parameters using the LHS method.
2. Apply the simulation models to obtain the truth response for all sampling parameters.
3. Choose a proper approximation method (the Gaussian process, also known as the Kriging model, in this framework) to construct the surrogate model based on the sampling parameters with their truth responses.
4. Couple the surrogate model to the optimization algorithm and obtain the optimum of the surrogate.
5. Check whether the maximum number of iterations is exceeded or the optimal design has converged. If not, repeat steps (2)-(4).
The workflow of this adaptive SBO method is illustrated in figure 4. As the optimization evolves, the adaptive samples with their truth responses are infilled into the dataset and the surrogate becomes increasingly accurate. Finally, the optimal solution is obtained.
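The outer loop can be sketched as follows; `cfd_aep` and `ga_maximize` are hypothetical callables standing for the ADM-RANS evaluation and the GA optimiser, and `fit_kriging`/`predict` refer to the Kriging sketch given earlier (assumed to be in scope). The sketch is an illustration of the loop structure, not the actual DAKOTA-driven implementation.

```python
import numpy as np

def adaptive_sbo(initial_layouts, theta, cfd_aep, ga_maximize, max_iter=50, rtol=1e-3):
    """Adaptive SBO: fit surrogate, optimise it, evaluate the candidate with CFD,
    infill the dataset, and repeat until the infilled AEP stalls."""
    X = [np.ravel(x).astype(float) for x in initial_layouts]
    Y = [cfd_aep(x) for x in X]                      # truth responses of the initial samples
    y_prev = None
    for _ in range(max_iter):
        model = fit_kriging(np.array(X), np.array(Y), theta)
        x_opt = np.ravel(ga_maximize(lambda x: predict(model, x))).astype(float)
        y_opt = cfd_aep(x_opt)                       # truth response of the surrogate optimum
        X.append(x_opt)
        Y.append(y_opt)                              # infill the dataset
        if y_prev is not None and abs(y_opt - y_prev) <= rtol * abs(y_prev):
            break
        y_prev = y_opt
    i_best = int(np.argmax(Y))
    return X[i_best], Y[i_best]
```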
The genetic algorithm (GA) is selected as the optimization algorithm in this study due to its wide application in WFLO problems, as shown in table 1. The process of the GA includes population initialization, selection of the fittest individuals, application of natural processes such as crossover and mutation, and population reproduction (Liu et al., 2021). The details of these steps are as follows (a minimal sketch is given after the list):
1. Population initialization. Initialize a random population and then compute the value of the fitness function for each individual.
2. Selection. Individuals in the population which do not satisfy the constraints are eliminated or have their fitness multiplied by a penalty parameter. The algorithm then selects suitable parents from the individuals based on their fitness values. These parent individuals are further used for mating and generating the new population.
3. Crossover and mutation. Randomly select two parent individuals. Crossover and mutation are then performed according to the crossover and mutation rates, respectively. This step aims at creating offspring individuals and preventing the GA from converging to a local optimum.
4. Check convergence. If the convergence criterion is satisfied or the maximum number of evaluations is exceeded, the iteration ends. Otherwise, return to step 2.
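A minimal sketch of these steps, acting on layouts encoded as vectors of grid-node indices, could read as follows; `fitness` and `feasible` are hypothetical callables (e.g. the surrogate AEP and the spacing check), and the operators shown (fitness-proportional selection, one-point crossover, random-reset mutation) are common textbook choices rather than the exact DAKOTA settings. The default parameters mirror table 2.

```python
import numpy as np

def ga_optimize(fitness, feasible, n_turbines=8, n_nodes=81, pop_size=50,
                p_cross=0.95, p_mut=0.15, n_gen=1000, penalty=1e-3, seed=0):
    """Maximise `fitness` (assumed positive, e.g. AEP) over node-index layouts."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, n_nodes, size=(pop_size, n_turbines))
    for _ in range(n_gen):
        fit = np.array([fitness(ind) if feasible(ind) else penalty * fitness(ind)
                        for ind in pop])                 # penalise infeasible layouts
        parents = pop[rng.choice(pop_size, size=pop_size, p=fit / fit.sum())]
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):              # one-point crossover
            if rng.random() < p_cross:
                cut = rng.integers(1, n_turbines)
                children[i, cut:], children[i + 1, cut:] = (
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy())
        mutate = rng.random(children.shape) < p_mut      # random-reset mutation
        children[mutate] = rng.integers(0, n_nodes, size=int(mutate.sum()))
        pop = children
    return max(pop, key=lambda ind: fitness(ind) if feasible(ind) else -np.inf)
```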
## 3 Case studies
In this section, three wind farm cases with different wind directions and terrains are studied using the self-adaptive optimization framework. As a reference, the more computationally expensive CFD-GA optimization (without surrogate model) is also carried out. The optimized results are then discussed and compared to the conventional staggered layout along the major wind direction (baseline) to assess the performance of the developed framework. The self-adaptive optimization framework settings for the three cases are summarized in table 2. Detailed case descriptions are presented in §3.1. Results and discussions are presented in §3.2.
### Case descriptions
We use the NREL 5 MW reference turbine (Jonkman et al., 2009) as the wind turbine model in this study. The key parameters in the numerical modeling are listed in table 3. The thrust and torque curves of the NREL 5 MW wind turbine, as shown in figure 5, are inserted into the ADM code following the procedure described in section 2.3.2.
We optimize the layout of an 8-turbine wind farm within an \(8D\times 8D\) rectangle, which is decomposed into \(8\times 8\) cells. The 81 nodes are prescribed as the potential positions for the eight turbines. We evaluate the performance of
\begin{table}
\begin{tabular}{c c} \hline \hline Parameters & Value \\ \hline Initial dataset size & 360 \\ Population size & 50 \\ Crossover rate & 0.95 \\ Mutation rate & 0.15 \\ Maximum iterations & 1000 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Settings for the self-adaptive framework.
Figure 4: Workflow of the adaptive SBO method
the proposed WFLO framework in three representative cases as shown in table 4. In Case I, only one wind direction with \(U_{\infty}=11.4\) m/s is considered. In Case II, a wind rose with eight wind directions (shown in figure 6) is taken into consideration. On this basis, Case III further considers the effects of the terrain on the wind farm layout by the addition of a Gaussian-shaped hill with a height of 150 m on the bottom boundary.
### Optimization results and discussion
The optimized and baseline layouts with their AEP distributions are shown in figures 7 to 9, and the numbers of CFD calls for the different optimization strategies along with their AEPs are listed in table 5. Compared to the baseline layouts, the optimized layouts obtained from this framework and from the CFD-GA method both achieve significant AEP improvements. Due to the multi-modality of the WFLO problem, the optimization process can get stuck in a local optimum, thus the optimal layouts obtained from the CFD-GA and SBO strategies are usually not identical even for the same case. Nevertheless, the AEP of the optimized layouts obtained from SBO is almost the same as that obtained by CFD-GA. In terms of computational cost, the numbers of CFD calls for SBO are \(437\), \(400\) and \(399\), compared to \(961\), \(793\) and \(885\) for CFD-GA in case I, case II and case III, respectively. These CFD simulations are executed on two desktop computers, both equipped with an AMD EPYC 7532 processor with 32 cores. For the three wind farm cases, it takes approximately 72 h, 336 h and 360 h to obtain optimized layouts using the CFD-GA method, while it only takes approximately 36 h, 172 h and 190 h using the developed framework. Above all, we see that the surrogate-based optimization framework takes
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Case I & Case II & Case III \\ \hline Turbine numbers & 8 & 8 & 8 \\ Restricted area & \(8D\times 8D\) & \(8D\times 8D\) & \(8D\times 8D\) \\ Wind directions & 1 & 8 & 8 \\ Terrains & flat terrain & flat terrain & Gaussian-shaped hill \\ \hline \hline \end{tabular}
\end{table}
Table 4: Definition for 3 WFLO cases
Figure 5: Thrust and torque curves for NREL 5 MW wind turbine
Figure 8: Case II: AEP distribution for layouts obtained from \((a)\) SBO \((b)\) CFD-GA and \((c)\) baseline.
Figure 6: Wind rose for case II and case III
Figure 7: Case I: AEP distribution for layouts obtained from \((a)\) SBO \((b)\) CFD-GA and \((c)\) baseline. The turbines are colored based on their individual AEPs.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{Case I} & \multicolumn{2}{c}{Case II} & \multicolumn{2}{c}{Case III} \\ \cline{2-7} & CFD calls & AEP (GWh) & CFD calls & AEP (GWh) & CFD calls & AEP (GWh) \\ \hline SBO & 437 & 202.740 & 400 & 120.374 & 399 & 178.037 \\ CFD-GA & 961 & 204.400 & 793 & 126.707 & 885 & 183.054 \\ Baseline & - & 141.324 & - & 97.287 & - & 135.732 \\ \hline \hline \end{tabular}
\end{table}
Table 5: AEP and CFD calls of optimized layouts using different optimization strategy for 3 cases
Figure 11: Velocity magnitude at rotor hub height of optimized layout for case I
Figure 10: Comparison of maximum and minimum AEP of the turbines in the wind farm for optimal layouts for \((a)\) Case I, \((b)\) Case II, and \((c)\) Case III.
Figure 9: Case III: AEP distribution for layouts obtained from \((a)\) SBO \((b)\) CFD-GA and \((c)\) baseline.
Figure 12: Velocity magnitude at rotor hub height of optimized layout for case II. \((a)\) West, and \((b)\) South-West.
Figure 13: Velocity magnitude at rotor hub height of optimized layout for case III. \((a)\) West, and \((b)\) South-West. The gray dashed lines indicate the iso-height lines of the terrain.
only about half the computational cost of the direct CFD-based method to find optimal designs which produce almost the same AEP.
We also compare the maximum and minimum AEP of the turbines in the wind farm for the different layouts in figure 10. It is argued that, in a well-designed wind farm layout, the AEP should be distributed among the different turbines as evenly as possible, such that each turbine contributes a fair share to the total AEP. As shown in figure 10, the difference between the maximum and minimum AEP in the optimized layouts (obtained from both SBO and CFD-GA) is much smaller than that in the baseline layouts for the first two cases, indicating that the AEP of the optimized layouts becomes much more evenly distributed. For case III, the difference between the maximum and minimum AEP is significantly larger. From figure 9, it is observed that the turbine associated with the maximum AEP is located near the hill top, where the wind velocity increases due to the terrain effects. Nevertheless, the optimized layouts still feature a more even AEP distribution than the baseline layout.
We further show the velocity fields at hub height for the optimized layouts obtained from SBO in figures 11-13. For case I, where only a single wind direction is considered, we notice that in SBO's optimized layout all turbines are placed away from the other turbines' wakes. In case II, the western inflow with a wind speed of 11.3 m/s and an occurrence frequency of 0.21 is the most dominant wind direction, and the southwestern inflow with a wind speed of 10.9 m/s and an occurrence frequency of 0.203 is the secondary dominant wind direction, as shown in the wind rose in figure 6. It is shown in figure 8 that most turbines are located away from the other turbines' wake zones along the top two dominant wind directions. In case III, the effects of the terrain are considered. With the Gaussian hill at the center of the computational domain, the wind turbines in the optimized layouts are clustered toward the center to take advantage of the higher wind velocity. The ability to take the terrain effects into consideration gives the current CFD-based WFLO method more flexibility compared to those based on analytical wake models.
## 4 Conclusions
We have developed a surrogate-based framework for wind farm layout optimization with higher-fidelity wake modeling methods, with the objective of maximizing the annual energy production. The surrogate is built by the Kriging model, which is trained using CFD simulations that treat the turbines as actuator disks. The surrogate model is then integrated with the genetic algorithm as the optimizer. During the optimization process, the wind farm layout dataset is updated adaptively, with the intermediate design of each iteration added into it, until the algorithm converges to the optimal layout. To assess its performance, we have tested the proposed SBO framework in three WFLO cases with different wind distributions and terrains. It is observed that the optimized layouts obtained from this self-adaptive framework generate almost the same AEP as the direct CFD-GA optimization (without surrogate), and significantly outperform the conventional staggered layout along the major wind direction. Aided by the surrogate modeling technique, the computational time of the SBO framework is only half of that for the direct method, thus allowing CFD-based WFLO to be carried out at a manageable cost.
While the current study has demonstrated the feasibility of using the Kriging model to accelerate high-fidelity wind farm layout optimization, there are still some limitations in the framework that need further improvement. First, the use of Gaussian processes for building surrogate models is known to be "cursed" for high-dimensional data (Binois and Wycoff, 2022). Innovative techniques for solving the scalability problem are urgently needed if this SBO framework is to be applied in WFLO tasks with a larger number of wind turbines. Second, the current work considers a single objective of maximizing AEP. Additional objectives, such as minimizing fatigue loads, minimizing construction cost, etc., can be incorporated in future studies. In addition, multi-fidelity methods have shown great potential in a variety of complex design problems (Forrester et al., 2007; Kou and Zhang, 2019; Jasa et al., 2022). For WFLO, the results from low-fidelity analytical wake models can also be expected to play an important role in further improving the optimization efficiency within a multi-fidelity optimization framework.
## Acknowledgments
DZ, ZH, KZ acknowledge financial support from the Innovation Program of Shanghai Municipal Education Commission (no. 2019-01-07-00-02-E00066), National Science Foundation of China (grant numbers: 12202271, 52122110, 42076210), Program for Intergovernmental International S&T Cooperation Projects of Shanghai Municipality, China (grant no. 22160710200), and the Oceanic Interdisciplinary Program of Shanghai Jiao Tong University (grant no. SL2020PT201). |
2307.06144 | On Anick resolution: from the original setting to the language of
non-commutative Groebner bases | Anick introduced a resolution, that now bears his name, of a field using an
augmented algebra over that field. We present here what one could call a
dictionary between Anick's original paper and the other resources on the
matter, most of which use the language of non-commutative Groebner bases. | Adya Musson-Leymarie | 2023-07-12T12:59:39Z | http://arxiv.org/abs/2307.06144v1 | # On Anick resolution: from the original setting to the language of non-commutative Grobner bases
###### Abstract
Anick introduced a resolution, that now bears his name, of a field using an augmented algebra over that field. We present here what one could call a dictionary between Anick's original paper [2] and the other resources on the matter, most of which use the language of non-commutative Grobner bases.
## Introduction
When the task at hand is to effectively compute some homology groups of an associative algebra, one is presented with a choice: which resolution to use. The bar resolution [6] is a resolution that always exists but is usually far too large for practical purposes. One prefers resolutions that are as close as possible to the minimal one. However, there is no definite algorithm known to compute the minimal resolution in general. The Anick resolution [2] offers an alternative (for a certain class of algebras) that involves a usable amount of data to be constructed and used to complete our computational goals. It is not minimal in general but it is still smaller than the bar resolution and is computed quite easily algorithmically.
Anick resolution acts upon an augmented algebra \(A\) with a given presentation whose generators generate a free monoid of monomials that needs to be equipped with a monomial order. The resolution can be taken in the category of right \(A\)-modules as well as left \(A\)-modules. It consists of free modules whose bases are formed from _\(n\)-chains_[1, 2], a construction that acts purely combinatorially on another one called _obstructions_ (or _tips_). These two are the fundamental concepts surrounding Anick resolution. An algorithmic method to compute the so-called obstructions is through the use of non-commutative Grobner bases, as done by Ufnarovski in [9]. This is why the Anick resolution is nowadays mostly known through the lens of non-commutative Grobner bases.
Our purpose in this paper is to give proofs and explanations of known facts in the folklore surrounding the topic of Anick resolution that had yet to be written explicitly. In particular, we wish to emphasise the bridge existing between Anick's original paper [2] and the subsequent resources expressed in the language of non-commutative Grobner bases. Following Anick's paper structure, we will introduce our setting while showing how it translates to his work. In the first section, the two main results are Proposition 1.1 and Proposition 1.5 that state, respectively:
* the _normal words_ are exactly the words that have no leading monomials of the relations as subwords,
* the _obstructions_ are the leading monomials of the minimal non-commutative Grobner bases of the ideal of relations.
The second section introduces the notion of \(n\)-chains from two different perspectives: one due to Anick and the other one due to Ufnarovski in terms of a graph. We show in Proposition 2.6 that those two are equivalent. The third section presents the resolution in itself and gives the
proof written by Anick, as a matter of completeness, with somewhat more detail to help the reader. Throughout this paper, we will follow an example to show how the different constructions concretely take shape.
## Notations and conventions
* We will denote by \(\left\langle X\right\rangle\) (where \(X\) is a non-empty set) the _free monoid_ on \(X\) consisting of the words on the alphabet \(X\). We will write \(1\) for the empty word. Alternative notations often found in the literature are \(X^{*}\) (_Kleene star_) for the free monoid and \(\epsilon\) for the empty word.
* Writing \(\mathbb{K}S\) where \(\mathbb{K}\) is a field and \(S\) is a set will denote the \(\mathbb{K}\)-vector space with basis \(S\), thus consisting of the finite formal linear combinations of elements of \(S\) with coefficients in \(\mathbb{K}\).
* We identify \(\mathbb{K}\left\langle X\right\rangle\), where \(\mathbb{K}\) is a field and \(X\) is a non-empty set of indeterminates, with the _free algebra_ consisting of all the polynomials with non-commutative indeterminates in \(X\) and coefficients in \(\mathbb{K}\). It is the \(\mathbb{K}\)-algebra on the free monoid \(\left\langle X\right\rangle\) (see for instance [3] for the construction of the free algebra). Alternative notations in the litterature include \(\mathbb{K}X^{*}\) or \(T(V)\), where \(T(V)\) denotes the _tensor algebra_ constructed from a vector space \(V\) whose basis is in bijection with \(X\). The tensor algebra so-defined and \(\mathbb{K}\left\langle X\right\rangle\) are isomorphic as \(\mathbb{K}\)-algebras.
* The notation \(I(R)\), where \(R\) is a subset of \(\mathbb{K}\left\langle X\right\rangle\), means the two-sided ideal generated by \(R\) in the ring \(\mathbb{K}\left\langle X\right\rangle\), _i.e._ the set of finite combinations of elements from \(R\) with left and right coefficients in \(\mathbb{K}\left\langle X\right\rangle\).
* Let \(A\) be a \(\mathbb{K}\)-algebra. A _presentation_ \(\left\langle X|R\right\rangle\) of \(A\) is given by a set \(X\) of indeterminates called _generators_ and a subset \(R\) of \(\mathbb{K}\left\langle X\right\rangle\) called _relations_ such that \(A\) is isomorphic to the quotient algebra \(\mathbb{K}\left\langle X\right\rangle/I(R)\). It is easy to see that any algebra is of this form (refer to [3] for details). Given a presentation \(\left\langle X|R\right\rangle\) of \(A\), we will write \(\overline{g}\), where \(g\) is a polynomial or a word in \(\mathbb{K}\left\langle X\right\rangle\), for the image of \(g\) in \(A\) by the natural projection \(\pi\) induced by the presentation.
* Let us fix throughout this paper \(\prec\), a monomial order on \(\left\langle X\right\rangle\), _i.e._ a well-order on \(\left\langle X\right\rangle\) compatible with left- and right-multiplication of words.
* If \(f\) is a nonzero polynomial in \(\mathbb{K}\left\langle X\right\rangle\), then the highest monomial for \(\prec\) in the support of \(f\) (_i.e._ the set of monomials in \(f\) appearing with a nonzero coefficient) will be written \(\operatorname{LM}\left(f\right)\). We will write similarly \(\operatorname{LM}\left(F\right):=\left\{\operatorname{LM}\left(f\right)\mid f \in F\right\}\) for any set \(F\) of nonzero polynomials in \(\mathbb{K}\left\langle X\right\rangle\).
* For a set of generators \(X\), we will call _monomial ideal_ any monoidal ideal in \(\left\langle X\right\rangle\), _i.e._ any subset \(I\) of \(\left\langle X\right\rangle\) such that for every word \(w\) in \(I\) and every word \(u\) and \(v\) in \(\left\langle X\right\rangle\), the word \(uwv\) is also in \(I\).
* We remind the reader that, given an ideal \(I\) in \(\mathbb{K}\left\langle X\right\rangle\), a _non-commutative Grobner basis_ of \(I\) according to the monomial order \(\prec\) is any subset \(G\) of \(I\) such that the leading monomials \(\operatorname{LM}\left(G\right)\) generates \(\operatorname{LM}\left(I\right)\) as a monomial ideal, _i.e._ every leading monomial in \(I\) has a leading monomial in \(G\) as a subword. We will call a non-commutative Grobner basis \(G\)_minimal_ if no leading monomial in \(\operatorname{LM}\left(G\right)\) is divisible by another leading monomial in \(\operatorname{LM}\left(G\right)\). If moreover no monomial in the support of any element of \(G\) is divisible by a leading monomial in \(\operatorname{LM}\left(G\right)\), it is said to be _reduced_. Note that all the minimal non-commutative Grobner bases of a same ideal \(I\) share the same set of leading monomials.
Setting
Let us fix a field \(\mathbb{K}\) throughout this paper. This corresponds to the field \(k\) in [2].
Suppose we have an associative unitary \(\mathbb{K}\)-algebra denoted \(A\) (instead of \(G\) in [2]) that we assume is augmented by the augmentation map \(\varepsilon:A\to\mathbb{K}\); it is a surjective homomorphism of \(\mathbb{K}\)-algebras. We will denote \(\eta:\mathbb{K}\to A\) the section of \(\varepsilon\) defined by \(\eta(1_{\mathbb{K}})=1_{A}\), as a \(\mathbb{K}\)-linear map.
Throughout this paper, we will assume that \(A\) is defined by a fixed presentation \(\left\langle X|R\right\rangle\) in the sense that \(A\) is actually equal to the quotient algebra \(\mathbb{K}\left\langle X\right\rangle/I(R)\). In [2], Anick uses presentations implicitly: he picks out a set \(X\) (he denotes \(S\)) of generators for \(A\), and considers the canonical surjective morphism of algebras \(f:\mathbb{K}\left\langle X\right\rangle\to A\) that ensues, that subsequently, by the First Isomorphism Theorem, gives rise to an isomorphism \(\mathbb{K}\left\langle X\right\rangle/\ker(f)\cong A\). In our setting, this surjective morphism \(f\) is exactly the natural projection \(\pi\) from the free algebra to the quotient algebra. It follows that the kernel \(\ker(f)\) in [2] is nothing other than the two-sided ideal \(I(R)\) generated by \(R\) in our notations.
It is worth noting that most subsequent sources make the assumption that \(\varepsilon\) is zero on the set \(X\) of generators. This indeed is the case when we take the very common augmentation of a presented algebra as the evaluation of polynomials at \(0\), assuming our relations do not contain any constant terms. It is also the case when we consider the common case of a connected graded algebra augmented by its natural augmentation, as done in [9]1. However, with very little effort, mostly during the initialisation of the proof of exactness, the result still holds in the general case, without any assumptions on the augmentation map \(\varepsilon\), as shown in [2].
Footnote 1: Where connectivity of graded algebras is always assumed at page 24.
In [2], Anick uses a specific kind of monomial order that is graded by a certain function he denotes \(e\). In particular, if we follow his suggestion of setting \(e(x)=1\) for all \(x\in X\), we obtain the order commonly known as _deglex_ (lexicographic order graded by degree).
In [2, Lemma 1.1], Anick introduces a set \(M\), defined as the set of words \(w\) whose images in \(A\) cannot be expressed as a finite linear combination of images in \(A\) of words that are smaller than \(w\) by \(\prec\). These words are called _normal words_ in the literature. This vocabulary comes from the algebraic rewriting area where a word is called a _normal form_ according to a set of rewriting rules when no more of the rules can be applied to it. We shall later see (Corollary 1.2) that the set \(M\) is exactly the set of normal form monomials according to \(R\) (_i.e._ the rewriting rules induced by the relations) if, and only if, \(R\) is a non-commutative Grobner basis of \(I(R)\) according to \(\prec\). Therefore, by existence and uniqueness of a reduced non-commutative Grobner basis of \(I(R)\), it makes sense to talk about normal words solely based on the ideal rather than the generating set of rewriting rules. What Anick calls an _admissible monomial_ in [2] is thus in our language a _normal word_.
In our setting, we will define and write \(M\) in the same manner, explicitly:
\[M:=\left\{w\in\left\langle X\right\rangle\,\Bigg{|}\,\,\forall(w_{1},\cdots,w _{n})\in\left\langle X\right\rangle^{n},\quad w_{i}\prec w\,\,\,\Rightarrow \,\,\forall(\lambda_{1},\cdots,\lambda_{n})\in\mathbb{K}^{n},\quad\overline{ w}\neq\sum_{i=1}^{n}\lambda_{i}\overline{w_{i}}\right\}.\]
It is well known in the literature on non-commutative Grobner bases [9] that the set \(M\) is the complement of \(\operatorname{LM}\left(I(R)\right)\) in \(\left\langle X\right\rangle\), usually denoted \(O(I(R))\). We prove this fact in the next proposition.
**Proposition 1.1**.: _With the same previous notations, we have:_
\[M=\left\langle X\right\rangle\setminus\operatorname{LM}\left(I(R)\right)=:O(I (R)).\]
_As a consequence, we have the very well-known direct sum decomposition of \(\mathbb{K}\)-vector spaces:_
\[\mathbb{K}\left\langle X\right\rangle=I(R)\oplus\mathbb{K}M.\]
Proof.: Let \(w\in M\). Suppose there exists a nonzero polynomial \(g\in I(R)\) such that \(w=\operatorname{LM}\left(g\right)\). Therefore, we can write \(g=\lambda_{w}w+\sum_{w^{\prime}\prec w}\lambda_{w^{\prime}}w^{\prime}\) with \(\lambda_{w}\neq 0\). Applying the natural projection \(\pi\), on one hand, we get \(\overline{g}=0\) because \(g\in I(R)\) and on the other hand, \(\overline{g}=\lambda_{w}\overline{w}+\sum_{w^{\prime}\prec w}\lambda_{w^{ \prime}}\overline{w^{\prime}}\) because \(\pi\) is linear. By rearranging, we have exhibited that:
\[\overline{w}=-\frac{1}{\lambda_{w}}\sum_{w^{\prime}\prec w}\lambda_{w^{\prime} }\overline{w^{\prime}}.\]
Therefore, \(w\notin M\) which is a contradiction, so \(M\subseteq O(I(R))\).
Conversely, if \(w\notin M\), _i.e._ we can write \(\overline{w}=\sum_{w^{\prime}\prec w}\lambda_{w^{\prime}}\overline{w^{\prime}}\), then consider the polynomial \(g=w-\sum_{w^{\prime}\prec w}\lambda_{w^{\prime}}w^{\prime}\); it is non-zero and is trivially sent to zero under \(\pi\), therefore we exhibited \(g\in I(R)\) such that \(\operatorname{LM}\left(g\right)=w\), which means \(w\in\operatorname{LM}\left(I(R)\right)\), _i.e._\(\langle X\rangle\setminus M\subseteq\operatorname{LM}\left(I(R)\right)\) from which we deduce \(O(I(R))\subseteq M\) and the result follows.
**Corollary 1.2**.: _The set \(R\) of relations is a non-commutative Grobner basis of \(I(R)\) if and only if \(M\) is the set of normal form monomials according to \(R\)._
Proof.: Normal form monomials according to \(R\) are exactly the monomials that are not in the monomial ideal generated by \(\operatorname{LM}\left(R\right)\). Therefore, the corollary is a consequence of Proposition 1.1, by the definition of non-commutative Grobner bases given previously.
Since \(A\) is isomorphic to \(\nicefrac{{\mathbb{K}\langle X\rangle}}{{I(R)}}\), it follows, from the decomposition in Proposition 1.1, that \(A\) is isomorphic to \(\mathbb{K}M\) as \(\mathbb{K}\)-vector spaces, _i.e._ the family \(\overline{M}:=\left(\overline{m}\right)_{m\in M}\) is a \(\mathbb{K}\)-basis of \(A\). In particular, the cardinality of the set of normal words gives the dimension of \(A\). Some authors (see for instance [9], page 28) write \(N\) for the space spanned by normal words \(\mathbb{K}M\) and call it the _normal complement_ of the ideal \(I(R)\): it is isomorphic to \(A\) as \(\mathbb{K}\)-vector spaces and allows therefore, by identification, to perform most of our computations inside the free algebra.
The set \(M\) has a special structure that Anick calls an _order ideal of monomials_ (or "o.i.m." for short). That structure is defined as a subset \(W\) of words in \(\langle X\rangle\) such that every subword of a word in \(W\) is also in \(W\). He proceeds by mentioning that giving an o.i.m. is equivalent to giving an anti-chain with respect to the subword partial order (_i.e._ a set of words that are pairwise not subwords of one another). We prove that result here in the more general context of any poset with a well-founded relation (o.i.m.'s and anti-chains are defined for any poset). If \(\hat{E}:=\left(E,\leqslant\right)\) is a partially ordered set, consider the following sets:
\[I_{\hat{E}}:=\left\{F\subseteq E\mid\forall x\in F,\forall y\in E,y\leqslant x \ \Rightarrow\ y\in F\right\},\]
\[J_{\hat{E}}:=\left\{F\subseteq E\mid\forall x\in F,\forall y\in F,x\not<y\ \wedge\ y\not<x\right\}.\]
Notice that the set \(I_{\hat{E}}\) is the set of o.i.m.'s of \(\hat{E}\) and the set \(J_{\hat{E}}\) is the set of anti-chains of \(\hat{E}\).
Then, define the following map:
\[\begin{array}{rcl}f_{\hat{E}}:J_{\hat{E}}&\to&I_{\hat{E}}\\ F&\mapsto&f_{\hat{E}}(F):=\left\{y\in E\mid\forall x\in F,\quad(x\leqslant y \ \vee\ y\leqslant x)\ \Rightarrow\ y<x\right\}.\end{array}\]
This translates as saying that an element \(y\) is in the image of an anti-chain \(F\) if and only if, granted \(y\) is comparable with an element from \(F\), then it is necessarily smaller. This means that \(f_{\hat{E}}(F)\) is exactly the union of the set of the elements incomparable with any element from \(F\) and of the set of the elements that are smaller than an element from \(F\).
The map is well-defined because if \(F\) is an anti-chain, then, for any \(y\in f_{\hat{E}}(F)\) and \(x\leqslant y\):
* if \(y\) is comparable to an \(x^{\prime}\in F\), then \(y<x^{\prime}\). By transitivity, \(x<x^{\prime}\) and thus \(x\in f_{\hat{E}}(F)\).
* if \(y\) is incomparable with every element of \(F\), then \(\forall x^{\prime}\in F,x^{\prime}\not\leqslant x\). Otherwise there would exist \(x^{\prime}\in F\) such that \(x^{\prime}\leqslant x\leqslant y\), a contradiction. Therefore, if \(x\) is comparable with a \(x^{\prime}\in F\), it is necessarily such that \(x<x^{\prime}\). This means exactly that \(x\in f_{\hat{E}}(F)\).
Hence, \(f_{\hat{E}}(F)\) is an o.i.m.
**Proposition 1.3**.: _Let \(\hat{E}=(E,\leqslant)\) be a partially ordered set. If \(\leqslant\) is a well-founded relation on \(E\), then \(f_{\hat{E}}\) is a bijection and its inverse is given by:_
\[\begin{array}{rcl}g_{\hat{E}}:I_{\hat{E}}&\to&J_{\hat{E}}\\ F^{\prime}&\mapsto&g_{\hat{E}}(F^{\prime}):=\left\{y\in E\setminus F^{\prime} \mid\forall x\in E\setminus F^{\prime},\quad x\not<y\right\}.\end{array} \tag{1}\]
Notice that the map \(g_{\hat{E}}\) sends an o.i.m. \(F\) to the set of minimal elements in \(E\setminus F\), which is an anti-chain by construction; hence \(g_{\hat{E}}\) is well-defined. Since \(\leqslant\) is a well-founded relation, note that \(g_{\hat{E}}(F)\) is non-empty if and only if \(F\neq E\).
Proof.: Let us show both assertions of the proposition at once by proving that \(g_{\hat{E}}\) is indeed a two-sided inverse for \(f_{\hat{E}}\).
Consider \(F^{\prime}\in I_{\hat{E}}\) an o.i.m. Define \(F\) as \(g_{\hat{E}}(F^{\prime})\), _i.e._ as the set of minimal elements of \(E\setminus F^{\prime}\).
If \(F^{\prime}=E\), then \(F=\varnothing\). In that case, \(f_{\hat{E}}(F)\) is equal to \(E\) since there are no elements in \(F\) to compare the elements of \(E\) with.
Consider thus \(F^{\prime}\neq E\). Hence, \(F\) is non-empty. Let us see that \(f_{\hat{E}}(F)=F^{\prime}\). Indeed:
* Suppose \(y\in f_{\hat{E}}(F)\). On one hand, if \(y\) is not comparable with any elements of \(F\), then \(y\) cannot possibly be in \(E\setminus F^{\prime}\) since there exists elements in \(F\) and they are minimal in \(E\setminus F^{\prime}\); \(y\) would therefore be comparable with one of them. On the other hand, if it is comparable to an \(x\in F\), then \(y<x\) but \(y\) cannot be in \(E\setminus F^{\prime}\) since \(x\) is minimal. Therefore, \(f_{\hat{E}}(F)\subseteq F^{\prime}\).
* Suppose now \(y\in F^{\prime}\). Assume \(y\) is comparable with some \(x\in F\)_i.e._\(x\leqslant y\lor y\leqslant x\). If \(x\leqslant y\) then it would follow that \(x\in F^{\prime}\) since \(F^{\prime}\) is an o.i.m. Therefore, since \(x\notin F^{\prime}\), we must necessarily have \(y<x\) and thus \(y\in f_{\hat{E}}(F)\). Hence, \(F^{\prime}\subseteq f_{\hat{E}}(F)\).
Hence, \(f_{\hat{E}}\circ g_{\hat{E}}=\operatorname{id}_{I_{\hat{E}}}\).
Consider now \(F\in J_{\hat{E}}\) an anti-chain. Define \(F^{\prime}\) as \(f_{\hat{E}}(F)\). Note that \(F\subseteq E\setminus F^{\prime}\).
* Suppose \(y\in g_{\hat{E}}(F^{\prime})\). In particular, \(y\notin F^{\prime}\). Then \(y\) is comparable with an \(x\in F\) such that necessarily \(x\leqslant y\). But, on one hand, \(x\in F\) implies \(x\notin F^{\prime}\), on the other hand, \(y\) is minimal in \(E\setminus F^{\prime}\), therefore \(x=y\) and thus \(y\in F\).
* Suppose \(y\in F\). Then \(y\notin F^{\prime}\). Suppose we would have \(x\in E\setminus F^{\prime}\) such that \(x<y\). Then \(x\) would be comparable with a \(z\in F\) such that \(z\leqslant x\) and thus we would have \(z<y\). However, \(F\) is anti-chain and \(z,y\in F\), so that's a contradiction. We conclude that there are no elements \(x\in E\setminus F^{\prime}\) such that \(x<y\), which exactly means \(y\in g_{\hat{E}}(F^{\prime})\).
Hence, we have \(g_{\hat{E}}\circ f_{\hat{E}}=\operatorname{id}_{J_{\hat{E}}}\).
**Definition 1.4**.: _Denote by \(\hat{X}\) the free monoid \(\langle X\rangle\) equipped with the well-founded relation of subwords. Define \(V_{M}\) as \(g_{\hat{X}}(M)\), the unique anti-chain in \(\hat{X}\) associated to the o.i.m. \(M\) of normal words where \(g_{\hat{X}}\) is the map defined in (1). The elements of \(V_{M}\) are called the obstructions (or tips) of the o.i.m. \(M\)._
It is well known in the literature [9] that \(V_{M}\) is the minimal generating set of \(\operatorname{LM}\left(I(R)\right)\) as a monomial ideal. We prove this fact in the next proposition. It shows furthermore the connection between Anick's original setting and the language of non-commutative Grobner bases.
**Proposition 1.5**.: _The set \(V_{M}\) is the unique minimal set generating \(\operatorname{LM}\left(I(R)\right)\) as a monomial ideal. In particular, for any minimal non-commutative Grobner basis \(G\) of \(I(R)\), we have:_
\[V_{M}=\operatorname{LM}\left(G\right).\]
Proof.: Recall by Definition 1.4, \(V_{M}\) is the minimal elements of the complement of \(M\). But, by Proposition 1.1, we have \(M=\left\langle X\right\rangle\setminus\operatorname{LM}\left(I(R)\right)\). Therefore, \(V_{M}\) is exactly the set of minimal elements (in terms of the subword relation) of \(\operatorname{LM}\left(I(R)\right)\), which is exactly equivalent to saying that \(V_{M}\) generates \(\operatorname{LM}\left(I(R)\right)\) as a monomial ideal. Moreover, \(V_{M}\) being an anti-chain, removing any element from \(V_{M}\) implies losing the ability to generate \(\operatorname{LM}\left(I(R)\right)\), hence \(V_{M}\) is minimal as a generating set.
In particular, if \(R\) is indeed a minimal non-commutative Grobner basis as it usually is, then we have \(V_{M}=\operatorname{LM}\left(R\right)\). In general, we can use the set of obstructions as a one-to-one index set for the reduced non-commutative Grobner basis of \(I(R)\). It can be also useful in certain contexts to consider the associated monomial algebra presented by \(\left\langle X|V_{M}\right\rangle\), for instance to compute more easily the Hilbert series (see [4, 5, 9]).
The Proposition 1.5 is equivalent to, but expressed in a different way than, the Lemma 1.2 from [2] stating that every non-normal word contains an obstruction.
## 2 \(n\)-chains and critical branchings
The idea of Anick resolution is to construct free \(A\)-modules with as close to the minimal amount of generators as possible that still allow us to define differentials in a way that gives rise to a resolution with an explicit contracting homotopy. We will consider here the case of right modules (refer to [8] for an adaptation to left modules).
In order to do so, Anick introduces the notions of \(n\)-prechains and \(n\)-chains through a top-down definition.
**Definition 2.1** (Chains (top-down)).: _Let \(w=x_{1}\cdots x_{\ell}\) be a word in \(\left\langle X\right\rangle\). Let \(n\in\mathbb{N}^{*}\)._
_We say that \(w\) is an \(n\)-prechain if there exist two \(n\)-tuples \((a_{1},\cdots,a_{n})\) and \((b_{1},\cdots,b_{n})\) of integers such that:_
\[1=a_{1}<a_{2}\leqslant b_{1}<a_{3}\leqslant b_{2}<a_{4}\leqslant b_{3}<\cdots <a_{n}\leqslant b_{n-1}<b_{n}=\ell\]
_and_
\[\forall i\in\llbracket 1\,..\,n\rrbracket,\quad x_{a_{i}}x_{a_{i}+1}\cdots x_{b _{i}-1}x_{b_{i}}\in V_{M}.\]
_An \(n\)-prechain is called an \(n\)-chain if:_
\[\forall m\in\llbracket 1\,..\,n\rrbracket,\quad\forall i\in\llbracket 1\,..\, b_{m}-1\rrbracket,\quad x_{1}x_{2}\cdots x_{i}\text{ is not a $m$-prechain}.\]
Intuitively, an \(n\)-prechain is a sequence of \(n\) obstructions where two consecutive obstructions overlap each other by at least one character, while obstructions separated by at least one obstruction in the sequence do not overlap. An \(n\)-chain is an \(n\)-prechain such that the consecutive overlaps are "maximal", in the sense that no other overlap with the same obstructions could have been longer while still satisfying the condition that the obstructions one apart do not overlap. Not all of the obstructions need appear in each prechain or chain, and the same obstruction can appear several times within a single prechain or chain.
Notice that the set of \(1\)-chains according to this definition is exactly the set of obstructions \(V_{M}\).
**Example 2.2**.: _On the alphabet \(X=\{x,y,z\}\) with the anti-chain \(V_{M}=\{xxx,xxyx,yxz\}\), we have that \(\underline{xxxx}\) is a \(2\)-chain, while \(\underline{xxxxx}\) is a \(2\)-prechain but neither a \(2\)-chain nor a \(3\)-prechain. Similarly, \(\underline{xxxyx}\)
is a \(2\)-chain but \(\underline{xxxxyx}\) is a \(2\)-prechain but not a \(2\)-chain, since \(\underline{xxxx}\) is a \(2\)-prechain contained in it and shorter but with the same number of obstructions. It is also not a \(3\)-prechain because the first obstruction \(xxx\) would overlap with the third obstruction \(xxyx\). A \(3\)-chain is for instance \(\underline{xxy}\overline{xxyx}z\)._
By convention, let the set of \((-1)\)-chains be exactly \(\{1\}\) and the set of \(0\)-chains be exactly \(X\).
Anick establishes a result in [2] in the form of Lemma 1.3 stating that for an \(n\)-chain, the \(n\)-tuples \((a_{1},\cdots,a_{n})\) and \((b_{1},\cdots,b_{n})\) are uniquely determined. In particular, this means that:
**Proposition 2.3**.: _For any \(n\in\mathbb{N}^{*}\), any \(n\)-chain \(w=x_{1}\cdots x_{\ell}\) (defined with \((a_{1},\cdots,a_{n})\) and \((b_{1},\cdots,b_{n})\)) can be uniquely expressed as \(w=vu\), where \(v=x_{1}\cdots x_{b_{n-1}}\) is an \((n-1)\)-chain and \(u=x_{b_{n-1}+1}\cdots x_{\ell}\) is a normal word._
**Example 2.4**.: _Following the examples given in Example 2.2:_
* _for the_ \(2\)_-chain_ \(w=xxxyx\)_,_ \(v=xxx\)_,_ \(u=yx\)_._
* _for the_ \(2\)_-chain_ \(w=xxxx\)_,_ \(v=xxx\)_,_ \(u=x\)_._
* _for the_ \(3\)_-chain_ \(w=xxyxxyxz\)_,_ \(v=xxyxxyx\)_,_ \(u=z\)_._
The top-down Definition 2.1 is not particularly easy to grasp conceptually, and even less so algorithmically. We will prefer the bottom-up definition given in most other sources and present it here.
First, let us warn that we will be using the numbering proposed in [8] rather than the one proposed in [2]: what Anick calls \(0\)-chains in the top-down Definition 2.1 will be \(1\)-chains for us, \(1\)-chains will be \(2\)-chains, and so on. That way, the numbering will match the homology degrees conveniently.
**Definition 2.5** (Chains (bottom-up) _due to Ufnarovski [9]).: _With previous notations and remarks, construct a simple directed graph \(Q\) whose nodes are:_
\[Q_{0}=\left\{1\right\}\cup X\cup\left\{s\in\left\langle X\right\rangle\mid s \text{ is a proper suffix of an obstruction}\right\}.\]
_The directed edges are defined as follows:_
\[Q_{1}=\left\{(1,x)\mid x\in X\right\}\cup\left\{(s,t)\in(Q_{0}\setminus\{1 \})^{2}\mid st\text{ contains only one obstruction and it is a suffix}\right\}.\]
_For any non-negative integer \(n\in\mathbb{N}\), we define the set of \(n\)-chains as:_
\[C_{n}:=\left\{\prod_{i=0}^{n}w_{i}\;\middle|\;(1=w_{0},w_{1},\cdots,w_{n}) \text{ are nodes in a path of length $n$ in $Q$ starting at $1$}\right\}.\]
In other words, an \(n\)-chain is the product of nodes travelling through a path of length \(n\) starting at the node \(1\). Note that the nodes that are not in the connected component of the node \(1\) have no use for our purpose and can therefore be omitted. Note also that we have \(C_{0}=\{1\}\), \(C_{1}=X\), and \(C_{2}=V_{M}\).
This definition can be rephrased with ease in terms of a recursive definition of \(n\)-chains with tails as done in [7].
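To make the bottom-up construction concrete, here is a small Python sketch (all names are ours, and we read "contains only one obstruction" as "contains exactly one occurrence of an obstruction"). It builds the graph \(Q\) and reads the \(n\)-chains off the paths of length \(n\) starting at \(1\); on Example 2.2 it returns the three obstructions for \(n=2\).

```python
from itertools import product

def chain_graph(alphabet, obstructions):
    """Ufnarovski's chain graph: nodes are 1 (the empty word), the letters,
    and the proper suffixes of obstructions; there is an edge s -> t when
    st contains exactly one occurrence of an obstruction, as a suffix."""
    suffixes = {v[i:] for v in obstructions for i in range(1, len(v))}
    nodes = {""} | set(alphabet) | suffixes

    def occurrences(w):
        return [(i, v) for v in obstructions
                for i in range(len(w) - len(v) + 1) if w[i:i + len(v)] == v]

    edges = {s: [] for s in nodes}
    edges[""] = list(alphabet)                       # edges out of the node 1
    for s, t in product(nodes - {""}, repeat=2):
        occ = occurrences(s + t)
        if len(occ) == 1 and occ[0][0] + len(occ[0][1]) == len(s + t):
            edges[s].append(t)
    return edges

def chains(alphabet, obstructions, n):
    """n-chains: words read along the paths of length n starting at 1."""
    edges = chain_graph(alphabet, obstructions)
    paths = [("", "")]                               # (current node, word so far)
    for _ in range(n):
        paths = [(t, w + t) for (s, w) in paths for t in edges[s]]
    return sorted(w for (_, w) in paths)

print(chains("xyz", ["xxx", "xxyx", "yxz"], 2))      # ['xxx', 'xxyx', 'yxz']
```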
**Proposition 2.6** (Top-down and bottom-up definitions match).: _Let us denote by \(\hat{C}_{n}\) the set of \(n\)-chains defined in Definition 2.1 for \(n\geqslant-1\). We have:_
\[\forall n\in\mathbb{N},\quad\hat{C}_{n-1}=C_{n}.\]
Proof.: We see easily that this is true for \(n\in\{0,1,2\}\).
By induction, suppose this is true for a certain \(n\geqslant 2\).
Let \(w\in\hat{C}_{n}\) be defined by \(w=x_{1}\cdots x_{\ell}\) and the \(n\)-tuples \((a_{1},\cdots,a_{n})\) and \((b_{1},\cdots,b_{n})\) as in Definition 2.1. By Proposition 2.3, we have: \(v:=x_{1}\cdots x_{b_{n-1}}\in\hat{C}_{n-1}\) and \(u:=x_{b_{n-1}+1}\cdots x_{\ell}\in M\). Since \(u\) is a proper suffix of \(s=x_{a_{n}}\cdots x_{b_{n}}\) (an obstruction), then \(u\in Q_{0}\). By induction hypothesis, \(v\in C_{n}\). Moreover, the last node in the path for \(v\) is \(t:=x_{b_{n-2}+1}\cdots x_{b_{n-1}}\). Since \(w\) is an \(n\)-prechain, it follows that \(b_{n-2}<a_{n}\), thus \(tu\) evidently contains \(s\) as a suffix. Furthermore, the last axiom of \(n\)-chains (top-down) ensures that \(s\) is the only obstruction that \(tu\) contains. This means exactly that there is an edge between \(t\) and \(u\) such that the path \(v\in C_{n}\) can be extended with \(u\) and gives \(w=vu\in C_{n+1}\).
Let \(w\in C_{n+1}\). It is thus defined as a path of length \(n+1\). Let \(u\) be the last node of that path and \(t\) the node before that. Denote by \(v\) the path of length \(n\) when we omit \(u\). We have \(v\in C_{n}\). By induction hypothesis, it follows that \(v\in\hat{C}_{n-1}\). Let \((a_{1},\cdots,a_{n-1})\) and \((b_{1},\cdots,b_{n-1})\) be the \((n-1)\)-tuples defining \(v\) in Definition 2.1. Denote by \(s\) the obstruction linking \(t\) and \(u\). We know that \(tu\) contains \(s\) as a suffix and that \(\ell(s)>\ell(u)\)2 because \(u\) is either a letter or a proper suffix of an obstruction. Define \(a_{n}:=b_{n-1}+\ell(u)-\ell(s)+1\leqslant b_{n-1}\) and \(b_{n}:=b_{n-1}+\ell(u)>b_{n-1}\). Then the tuples \((a_{1},\cdots,a_{n})\) and \((b_{1},\cdots,b_{n})\) make \(w=vu\) into an \(n\)-chain (top-down) since \(x_{a_{n}}\cdots x_{b_{n}}=s\in V_{M}\) and no other obstruction is contained in \(x_{b_{n-2}+1}\cdots x_{b_{n}}\). Therefore, \(w\in\hat{C}_{n}\).
Footnote 2: \(\ell(w^{\prime})\) is the length of the word \(w^{\prime}\in\langle X\rangle\), _i.e._ the number of letters from \(X\) that constitute the word.
**Example 2.7**.: _Consider again the example \(X=\{x,y,z\}\) and \(V_{M}=\{xxx,xxyx,yxz\}\). We have:_
\[Q_{0}=\left\{1,x,y,z,xx,xyx,yx,xz\right\}.\]
_The graph is then given by Figure 1. Each arrow that does not start from \(1\) is to be understood as indexed by an obstruction, the obstruction satisfying the condition for the directed edge._
Let us now introduce some useful notations.
**Definition 2.8**.: _Let \(n\in\mathbb{N}\), \(m\in\llbracket 0\,..\,n\rrbracket\) and \(c^{(n)}\in C_{n}\) be a \(n\)-chain according to Definition 2.5. Let us explicitly fix \((a_{1},\cdots,a_{n-1})\) and \((b_{1},\cdots,b_{n-1})\) the uniquely determined tuples of integers defining \(c^{(n)}=x_{1}\cdots x_{\ell}\) as in Definition 2.1. Write:_
\[\left[c^{(n)}\right]^{m}:=\begin{cases}1&\text{if }m=0\\ x_{1}&\text{if }m=1\\ x_{1}\cdots x_{b_{m-1}}&\text{if }1<m\leqslant n\end{cases}\]
Figure 1: \(n\)-chains graph for Example 2.2
_This designates the \(m\)-chain that is a prefix of \(c^{(n)}\)._
\[\left[c^{(n)}\right]_{m}:=\begin{cases}x_{1}\cdots x_{\ell}&\text{if }m=0\\ x_{2}\cdots x_{\ell}&\text{if }m=1\\ x_{b_{m-1}+1}\cdots x_{\ell}&\text{if }1<m<n\\ 1&\text{if }m=n\end{cases}\in\left\langle X\right\rangle.\]
_That corresponds to the left-over part of the \(n\)-chain \(c^{(n)}\) after removing the \(m\)-chain prefix._
In particular, this means that for all \(n\in\mathbb{N}\) and \(c^{(n)}\in C_{n}\) then:
\[\forall m\in\llbracket 0\mathinner{\ldots}n\rrbracket,\quad c^{(n)}=\left[c^{(n )}\right]^{m}\left[c^{(n)}\right]_{m}\in\left\langle X\right\rangle.\]
As a remark, notice the link with algebraic rewriting. Let us use some terminology from that field. The relations from \(R\) define _rewriting rules_:
\[\lambda w_{1}\mathrm{LM}\left(g\right)w_{2}+h\quad\overset{\lambda}{\to} \quad\frac{\lambda}{\mathrm{LC}\left(g\right)}w_{1}r(g)w_{2}+h\]
where \(w_{1},w_{2}\in\left\langle X\right\rangle\), \(\lambda\in\mathbb{K}\setminus\left\{0\right\}\), \(g\in R\), \(h\in\mathbb{K}\left\langle X\right\rangle\) such that \(w_{1}\mathrm{LM}\left(g\right)w_{2}\) does not belong to its support and \(r(g):=\mathrm{LC}\left(g\right)\mathrm{LM}\left(g\right)-g\) with \(\mathrm{LC}\left(g\right)\) the coefficient of \(\mathrm{LM}\left(g\right)\) in \(g\).
In that context, a word is said to give rise to a _critical pair_ if two (or the same one twice) of those rewriting rules can be applied on parts of the word that overlap, while overall going from beginning to end of the word, giving possibly different results. If three rules can be applied, we talk about _critical triples_, if four, _critical quadruples_ and so on. In general, these are called _critical branchings_.
In the common case where \(R\) is a minimal non-commutative Grobner basis (and thus \(V_{M}=\mathrm{LM}\left(R\right)\)), let us now note that the set \(C_{n}\) of \(n\)-chains for \(n\geqslant 3\) is a subset of the words that give rise to a critical branching, _e.g._ \(3\)-chains give rise to critical pairs. Indeed, a rewriting rule is applied on a word if it contains a leading monomial of \(R\), _i.e._ an obstruction in our case. Therefore, two rules will be simultaneously applied on a \(3\)-chain because it contains exactly two obstructions, and so on and so forth for higher degrees.
For more details on algebraic rewriting and its connections with non-commutative Grobner bases and Anick resolution, see [7].
## 3 Anick resolution
We can now introduce the Anick resolution. It will be a resolution of the field \(\mathbb{K}\) made out of free right \(A\)-modules. Indeed, once \(A\) is augmented by \(\varepsilon:A\to\mathbb{K}\), we can equip \(\mathbb{K}\) with a structure of right \(A\)-module using the external law of composition \(\mathbb{K}\times A\ni\left(\lambda,a\right)\mapsto\lambda\varepsilon(a)\in \mathbb{K}\). Similarly, we could define a structure of left \(A\)-module (or even of \(A\)-bimodule). Hence, even if in this paper we present a resolution of \(\mathbb{K}\) by right \(A\)-modules, with a few minor adaptations, one by left \(A\)-modules would work just as well (see [8]).
The free modules in the resolution are defined from the linear hull of (_i.e._ the vector space generated by) the sets of chains in each degree. The differentials are defined inductively at the same time as the contracting homotopy proving that the complex is a resolution.
It is helpful to define an order on the bases of the free modules. In order to do so, we will make use of the monomial order \(\prec\) at hand.
**Definition 3.1**.: _Let \(n\in\mathbb{N}\). Let \(C_{n}\) be the set of \(n\)-chains on an anti-chain \(V\), as defined in Definition 2.5. We define an order \(<\) on the basis of the free right \(A\)-module \(\mathbb{K}C_{n}\otimes_{\mathbb{K}}A\) as:_
\[\forall c_{1},c_{2}\in C_{n},\quad\forall s_{1},s_{2}\in O(I(R)),\quad c_{1}\otimes\overline{s_{1}}<c_{2}\otimes\overline{s_{2}}\stackrel{\mathrm{def}}{\Leftrightarrow}c_{1}s_{1}\prec c_{2}s_{2}.\]
The order \(<\) is well-defined and total because \(V\) is an anti-chain (since it entails that \(c_{1}\otimes\overline{s_{1}}\neq c_{2}\otimes\overline{s_{2}}\) implies that \(c_{1}s_{1}\neq c_{2}s_{2}\)). Moreover, it is a well-order (induced by the properties of \(\prec\)).
It follows that, for every element in \(\mathbb{K}C_{n}\otimes_{\mathbb{K}}A\), there is a greatest term according to the order \(<\), since such an element is a finite sum of terms. An alternative way to define this greatest term would be to use the monomial order on a polynomial that we associate to the element of \(\mathbb{K}C_{n}\otimes_{\mathbb{K}}A\), as we do in the next definition.
**Definition 3.2**.: _The leading monomial (or high-term, as called by Anick [2]) of an element \(P:=\sum_{i}\lambda_{i}c_{i}^{(n)}\otimes\overline{r_{i}}\in\mathbb{K}C_{n} \otimes_{\mathbb{K}}A\) is defined as_
\[\mathrm{LM}\left(P\right):=\mathrm{LM}\left(\sum_{i}\lambda_{i}\left(c_{i}^{( n)}\widehat{r_{i}}\right)\right)\in\left\langle X\right\rangle,\]
_where \(\widehat{r_{i}}\) is the unique normal form of \(r_{i}\)._
We can now formulate the Anick resolution and prove its exactness.
**Theorem 3.3** (Anick resolution).: _Let \(\mathbb{K}\) be a field. Let \(A\) be a \(\mathbb{K}\)-algebra augmented by \(\varepsilon\) with the section defined by \(\eta(1_{\mathbb{K}})=1_{A}\). Let \(\left\langle X|R\right\rangle\) be a presentation of \(A\) such that \(R\) is a minimal non-commutative Grobner basis according to the monomial order \(\prec\). Let \(O(I(R)):=\left\langle X\right\rangle\backslash\mathrm{LM}\left(I(R)\right)\) be the set of normal words. Let \(V:=\mathrm{LM}\left(R\right)\) be the set of leading monomials in \(R\), called obstructions. For any \(n\in\mathbb{N}\), let \(C_{n}\) denote the set of \(n\)-chains on \(V\) as defined in Definition 2.5._
_The following is a free resolution of \(\mathbb{K}\) in the category of right \(A\)-modules:_
\[\cdots\to\mathbb{K}C_{n+1}\otimes_{\mathbb{K}}A\stackrel{d_{n+1}}{\to}\mathbb{K}C_{n}\otimes_{\mathbb{K}}A\to\cdots\to\mathbb{K}C_{2}\otimes_{\mathbb{K}}A\stackrel{d_{2}}{\to}\mathbb{K}C_{1}\otimes_{\mathbb{K}}A\stackrel{d_{1}}{\to}\mathbb{K}C_{0}\otimes_{\mathbb{K}}A\stackrel{\varepsilon}{\to}\mathbb{K}\to 0,\]
_where for \(n\geqslant 1\), the map of right \(A\)-modules \(d_{n}\) satisfies:_
\[\forall c^{(n)}\in C_{n},\quad d_{n}\left(c^{(n)}\otimes 1_{A}\right):= \left[c^{(n)}\right]^{n-1}\otimes\overline{\left[c^{(n)}\right]_{n-1}}+\omega_ {c^{(n)}},\]
_with either \(\omega_{c^{(n)}}=0\) or its high-term verifies \(\mathrm{LM}\left(\omega_{c^{(n)}}\right)\prec c^{(n)}\)._
Proof.: The proof is done by induction by constructing the differentials and contracting homotopy at the same time.
Note that to prove exactness at \(E\) in \(\cdots\to F\stackrel{\delta_{1}}{\to}E\stackrel{\delta_{0}}{\to}\cdots\), it suffices to prove that \(\delta_{0}\delta_{1}=0\) and that there exists a \(\mathbb{K}\)-linear map \(\iota_{0}:\ker(\delta_{0})\to F\) such that \(\delta_{1}\iota_{0}=\mathrm{id}_{\ker(\delta_{0})}\).
Since \(C_{0}=\{1\}\), we can identify \(\mathbb{K}C_{0}\otimes_{\mathbb{K}}A\) with \(A\) for the initialisation, as a matter of simplifying notations.
Then, define \(d_{1}:\mathbb{K}C_{1}\otimes_{\mathbb{K}}A\to A\) as the map of right \(A\)-modules with:
\[\forall x\in C_{1}=X,\quad d_{1}(x\otimes 1_{A})=\overline{x}-\eta\varepsilon( \overline{x}).\]
Firstly, it is evident that \(\varepsilon d_{1}=0\) since \(\varepsilon\) is a \(\mathbb{K}\)-linear map and \(\eta\) is a section of it. The kernel of \(\varepsilon\) is spanned by the elements of \(A\) of the form \(\overline{s}-\eta\varepsilon(\overline{s})\) where \(s\in O(I(R))\). Indeed, every element \(a\in A\) can be written \(\eta\varepsilon(a)+(a-\eta\varepsilon(a))\) and we have the decomposition coming from the augmentation:
\(A=\mathbb{K}1_{A}\oplus\ker(\varepsilon)\). Defining \(i_{0}\) on those elements:
\[\forall s=x_{1}\cdots x_{\ell}\in O(I(R)),\quad i_{0}(\overline{s}- \eta\varepsilon(\overline{s})):=\sum_{j=1}^{\ell}\varepsilon\left(\overline{x _{1}\cdots x_{j-1}}\right)\left(x_{j}\otimes\overline{x_{j+1}\cdots x_{\ell}} \right),\] \[\text{(with the convention that $x_{1}\cdots x_{j-1}=1$ if $j\leqslant 1$ and $x_{j+1}\cdots x_{\ell}=1$ if $j\geqslant\ell$)}\]
and extending by \(\mathbb{K}\)-linearity on \(\ker(\varepsilon)\) gives us a map satisfying \(d_{1}i_{0}=\operatorname{id}_{\ker(\varepsilon)}\). Indeed for \(s=x_{1}\cdots x_{\ell}\in O(I(R))\), we have:
\[\begin{aligned}d_{1}i_{0}(\overline{s}-\eta\varepsilon(\overline{s}))&=d_{1}\left(\sum_{j=1}^{\ell}\varepsilon\left(\overline{x_{1}\cdots x_{j-1}}\right)\left(x_{j}\otimes\overline{x_{j+1}\cdots x_{\ell}}\right)\right)&&\\ &=\sum_{j=1}^{\ell}\varepsilon\left(\overline{x_{1}\cdots x_{j-1}}\right)d_{1}\left(x_{j}\otimes\overline{x_{j+1}\cdots x_{\ell}}\right)&&d_{1}\ \mathbb{K}\text{-linear},\\ &=\sum_{j=1}^{\ell}\varepsilon\left(\overline{x_{1}\cdots x_{j-1}}\right)\left(\overline{x_{j}\cdots x_{\ell}}-\eta\varepsilon(\overline{x_{j}})\,\overline{x_{j+1}\cdots x_{\ell}}\right)&&\text{definition of }d_{1},\\ &=\sum_{j=1}^{\ell}\left(\varepsilon\left(\overline{x_{1}\cdots x_{j-1}}\right)\overline{x_{j}\cdots x_{\ell}}-\eta\varepsilon(\overline{x_{1}\cdots x_{j}})\,\overline{x_{j+1}\cdots x_{\ell}}\right)&&\eta\ \mathbb{K}\text{-linear, }\varepsilon\text{ algebra morphism}.\end{aligned}\]
Since \(\eta(1_{\mathbb{K}})=1_{A}\)3, we have \(\eta\varepsilon(\overline{x_{1}\cdots x_{j}})\,\overline{x_{j+1}\cdots x_{\ell}}=\varepsilon(\overline{x_{1}\cdots x_{j}})\,\overline{x_{j+1}\cdots x_{\ell}}\), so the sum telescopes: the right-most part of each term cancels with the left-most part of the next one. Only the left-most and right-most terms of the entire sum remain, _i.e._:
Footnote 3: This is a requirement, trivially verified when \(\eta\) is considered as a morphism of unitary algebras and not simply \(\mathbb{K}\)-linear. Otherwise, in that latter case, \(\eta\), being a section of \(\varepsilon\), could satisfy \(\eta(1_{\mathbb{K}})=1_{A}+\omega\), where \(\omega\in\ker(\varepsilon)\)
\[d_{1}i_{0}(\overline{s}-\eta\varepsilon(\overline{s}))=\overline{x_{1}\cdots x _{\ell}}-\eta(\varepsilon(\overline{x_{1}\cdots x_{\ell}}))=\overline{s}-\eta \varepsilon(\overline{s}).\]
This proves exactness of the sequence at \(\mathbb{K}C_{0}\otimes_{\mathbb{K}}A\).
Now, suppose that, for \(n\in\mathbb{N}^{*}\), the maps of the sequence:

\[\cdots\to\mathbb{K}C_{n+1}\otimes_{\mathbb{K}}A\ \overset{d_{n+1}}{\underset{i_{n}}{\rightleftarrows}}\ \mathbb{K}C_{n}\otimes_{\mathbb{K}}A\ \overset{d_{n}}{\underset{i_{n-1}}{\rightleftarrows}}\ \cdots\ \overset{d_{1}}{\underset{i_{0}}{\rightleftarrows}}\ \mathbb{K}C_{0}\otimes_{\mathbb{K}}A\ \overset{\varepsilon}{\underset{\eta}{\rightleftarrows}}\ \mathbb{K}\to 0\]

have been constructed up to degree \(n\), that is, the maps \(d_{m}\) and \(i_{m-1}\) are defined for all \(m\in\llbracket 1\,..\,n\rrbracket\) and satisfy the following properties (with the convention \(d_{0}:=\varepsilon\)):

* _Property (i):_ \(d_{m-1}d_{m}=0\);
* _Property (ii):_ for every \(c^{(m)}\in C_{m}\), \(d_{m}\left(c^{(m)}\otimes 1_{A}\right)=\left[c^{(m)}\right]^{m-1}\otimes\overline{\left[c^{(m)}\right]_{m-1}}+\omega_{c^{(m)}}\), with either \(\omega_{c^{(m)}}=0\) or \(\operatorname{LM}\left(\omega_{c^{(m)}}\right)\prec c^{(m)}\);
* _Property (iii):_ \(d_{m}i_{m-1}=\operatorname{id}_{\ker(d_{m-1})}\);
* _Property (iv):_ for every \(v\in\ker(d_{m-1})\), \(\operatorname{LM}\left(i_{m-1}(v)\right)=\operatorname{LM}\left(v\right)\).

Let us construct \(d_{n+1}\) and \(i_{n}\). For an \((n+1)\)-chain \(c^{(n+1)}\in C_{n+1}\), write \(c^{(n)}:=\left[c^{(n+1)}\right]^{n}\) and \(t:=\left[c^{(n+1)}\right]_{n}\), and define the map of right \(A\)-modules \(d_{n+1}\) on the basis by:

\[d_{n+1}\left(c^{(n+1)}\otimes 1_{A}\right):=c^{(n)}\otimes\overline{t}-i_{n-1}d_{n}\left(c^{(n)}\otimes\overline{t}\right).\]
By Property (iii) for \(m=n\), it is evident that \(d_{n}d_{n+1}=0\), proving Property (i) for \(d_{n+1}\).
Letting \(\omega_{c^{(n+1)}}:=-i_{n-1}d_{n}\left(c^{(n)}\otimes\overline{t}\right)\) we obtain the desired expression for \(d_{n+1}\). Indeed, by Property (iv) for \(m=n\), \(\operatorname{LM}\left(\omega_{c^{(n+1)}}\right)=\operatorname{LM}\left(d_{n} (c^{(n)}\otimes\overline{t})\right)\). But by Property (ii), \(\operatorname{LM}\left(d_{n}(c^{(n)}\otimes\overline{t})\right)\) matches with \(c^{(n-1)}\otimes\overline{r}\) where \(c^{(n-1)}:=\left[c^{(n+1)}\right]^{n-1}\) and \(r:=\left[c^{(n+1)}\right]_{n-1}\). But since there are no overlaps between the last obstructions of \(c^{(n+1)}\) and of \(c^{(n-1)}\), \(r\) contains the last obstruction of \(c^{(n+1)}\) and is therefore not normal. Reducing it to its normal form \(\hat{r}\) implies that \(c^{(n-1)}\hat{r}\) is smaller than \(c^{(n+1)}\). Thus \(\operatorname{LM}\left(\omega_{c^{(n+1)}}\right)\prec c^{(n+1)}\), proving Property (ii) for \(d_{n+1}\).
Let us define recursively the \(\mathbb{K}\)-linear map \(i_{n}\) on \(\ker(d_{n})\). Let \(v=\sum_{i}\lambda_{i}c_{i}^{(n)}\otimes\overline{s_{i}}\in\ker(d_{n})\). Assume that \(i_{n}\) has been defined for all \(v^{\prime}\in\ker(d_{n})\) with \(\operatorname{LM}\left(v^{\prime}\right)\prec\operatorname{LM}\left(v\right)\) such that it satisfies Properties (iii) and (iv) on those elements. Without loss of generality, assume that \(\operatorname{LM}\left(v\right)\) coincides with \(c_{0}^{(n)}\otimes\overline{s_{0}}\) where \(s_{0}\) is normal. Then, since \(d_{n}(v)=0\), it follows that \(c_{0}^{(n-1)}\otimes\overline{r_{0}}=-\omega_{v}\) where \(d_{n}\left(c_{0}^{(n)}\otimes\overline{s_{0}}\right)=c_{0}^{(n-1)}\otimes \overline{r_{0}}+\omega_{c_{0}^{(n)}}\) in such a way that \(c_{0}^{(n-1)}=\left[c_{0}^{(n)}\right]^{n-1}\) and \(r_{0}=\left[c_{0}^{(n)}\right]_{n-1}s_{0}\), as well as, \(\omega_{v}=\frac{1}{\lambda_{0}}\left(\omega_{c_{0}^{(n)}}+d_{n}\left(\sum_{i \neq 0}\lambda_{i}c_{i}^{(n)}\otimes\overline{s_{i}}\right)\right)\) and thus by Property (ii), \(\operatorname{LM}\left(c_{0}^{(n-1)}\otimes\overline{r_{0}}\right)\prec c_{0}^ {(n)}s_{0}\). This implies, since \(c_{0}^{(n-1)}r_{0}=c_{0}^{(n)}s_{0}\), that \(r_{0}\) is not normal and thus contains an obstruction. Consider the obstruction in \(r_{0}\) starting the furthest to the left. It will overlap with the last obstruction in \(c_{0}^{(n)}\) since \(s_{0}\) is normal. Therefore, we obtain an \((n+1)\)-chain \(c^{(n+1)}\) and \(t\) a proper suffix of \(s_{0}\) such that \(c_{0}^{(n-1)}r_{0}=c_{0}^{(n)}s_{0}=c^{(n+1)}t\) in \(\left\langle X\right\rangle\). Define:
\[i_{n}(v):=\operatorname{LC}\left(v\right)c^{(n+1)}\otimes\overline{t}+i_{n} \left(v-\operatorname{LC}\left(v\right)d_{n+1}\left(c^{(n+1)}\otimes\overline {t}\right)\right). \tag{2}\]
This works because \(\operatorname{LM}\left(v-\operatorname{LC}\left(v\right)d_{n+1}\left(c^{(n+1) }\otimes\overline{t}\right)\right)<\operatorname{LM}\left(v\right)\) since \(d_{n+1}\) verifies the Property (ii) and thus cancellation on the leading term occurs. It follows that \(i_{n}\) verifies Property (iv) since we assumed \(i_{n}\) satisfies Property (iv) for elements with a smaller leading monomial.
Finally, by recursive hypothesis, we have that Property (iii) is verified on elements with a smaller leading monomial. Hence:
\[d_{n+1}i_{n}(v)=\operatorname{LC}\left(v\right)d_{n+1}\left(c^{(n+1)}\otimes \overline{t}\right)+v-\operatorname{LC}\left(v\right)d_{n+1}\left(c^{(n+1)} \otimes\overline{t}\right)=v\]
and thus, \(d_{n+1}\) and \(i_{n}\) will ultimately verify Property (iii) on \(\ker(d_{n})\).
This concludes the inductive proof.
**Example 3.4**.: _Let us consider, continuing the examples used throughout this paper, the algebra presented by \(\left\langle X\middle|R\right\rangle\), where \(X=\left\{x,y,z\right\}\) and \(R=\left\{xxyx,xxx-xx,yxz-yx\right\}\), with the deglex monomial order induced by \(x\succ y\succ z\), the algebra being augmented by the evaluation of polynomials at zero. We have \(V:=\operatorname{LM}\left(R\right)=\left\{xxyx,xxx,yxz\right\}\) and the graph of \(n\)-chains is given in Figure 1._
_We have, for all \(\zeta\in X\) and \(x_{1}\cdots x_{\ell}\in O(I(R))\):_
\[d_{1}(\zeta\otimes\overline{1}) =\overline{\zeta}\] \[i_{0}(\overline{x_{1}\cdots x_{\ell}}) =x_{1}\otimes\overline{x_{2}\cdots x_{\ell}}\]
_Then:_
\[\begin{aligned}d_{2}(xxx\otimes\overline{1})&=x\otimes\overline{xx}-i_{0}d_{1}(x\otimes\overline{xx})&&\text{definition of }d_{2}\\ &=x\otimes\overline{xx}-i_{0}(\overline{xxx})&&\text{definition of }d_{1}\\ &=x\otimes\overline{xx}-i_{0}(\overline{xx})&&\text{reduction}\\ &=x\otimes\overline{xx}-x\otimes\overline{x}&&\text{definition of }i_{0}\end{aligned}\]
_Similarly, we compute:_
\[d_{2}(xxyx\otimes\overline{1}) =x\otimes\overline{xyx}\] \[d_{2}(yxz\otimes\overline{1}) =y\otimes\overline{xz}-y\otimes\overline{x}\]
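These \(d_{2}\) values can be checked mechanically. In this example every relation of \(R\) has at most one monomial besides its leading one, so normal forms can be computed directly on words (with \(0\) represented by `None`), and the simplified expressions of \(d_{1}\) and \(i_{0}\) displayed above apply. A small Python sketch (ours, purely for illustration):

```python
RULES = {"xxyx": None, "xxx": "xx", "yxz": "yx"}     # leading monomial -> reduct

def normal_form(w):
    """Rewrite w with the rules of R until no leading monomial occurs."""
    while w is not None:
        for lm, rhs in RULES.items():
            i = w.find(lm)
            if i != -1:
                w = None if rhs is None else w[:i] + rhs + w[i + len(lm):]
                break
        else:
            return w
    return None

def i0(w):
    """i_0 on (nonzero) normal words: i_0(w) = x_1 (x) x_2...x_l."""
    return None if w in (None, "") else (w[0], w[1:])

def d2(chain):
    """d_2(c (x) 1) = x_1 (x) tail - i_0(normal form of c), as a formal sum."""
    terms = [(1, (chain[0], chain[1:]))]
    tail = i0(normal_form(chain))
    if tail is not None:
        terms.append((-1, tail))
    return terms

for c in ["xxx", "xxyx", "yxz"]:
    print(c, d2(c))
# xxx  [(1, ('x', 'xx')), (-1, ('x', 'x'))]
# xxyx [(1, ('x', 'xyx'))]
# yxz  [(1, ('y', 'xz')), (-1, ('y', 'x'))]
```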
_The \(3\)-chains are \(\{xxyxxyx,xxyxxx,xxyxz,xxxyx,xxxx\}\). Then:_
\[\begin{aligned}d_{3}(xxxyx\otimes\overline{1})&=xxx\otimes\overline{yx}-i_{1}d_{2}(xxx\otimes\overline{yx})&&\text{definition of }d_{3}\\ &=xxx\otimes\overline{yx}-i_{1}(x\otimes\overline{xxyx}-x\otimes\overline{xyx})&&\text{definition of }d_{2}\\ &=xxx\otimes\overline{yx}-i_{1}(x\otimes 0-x\otimes\overline{xyx})&&\text{reduction}\\ &=xxx\otimes\overline{yx}+xxyx\otimes\overline{1}&&\text{definition of }i_{1}\text{ (see (2))}\end{aligned}\]
_In an analogous manner, we compute:_
\[\begin{aligned}d_{3}(xxyxxyx\otimes\overline{1})&=xxyx\otimes\overline{xyx}\\ d_{3}(xxyxxx\otimes\overline{1})&=xxyx\otimes\overline{xx}-xxyx\otimes\overline{x}\\ d_{3}(xxyxz\otimes\overline{1})&=xxyx\otimes\overline{z}-xxyx\otimes\overline{1}\\ d_{3}(xxxx\otimes\overline{1})&=xxx\otimes\overline{x}\end{aligned}\]
_The \(4\)-chains are:_
\[\{xxyxxyxxyx,\,xxyxxyxxx,\,xxyxxyxz,\,xxyxxxyx,\,xxyxxxx,\,xxxyxxyx,\,xxxyxxx,\,xxxyxz,\,xxxxxyx,\,xxxxxx\}.\]
## Acknowledgements
I would like to thank Cyrille Chenavier for his continuous guidance and support all along the process of writing this paper, providing many relevant remarks and insight on the subject. I would also like to thank Thomas Cluzeau for proofreading this note.
|
2302.12937 | Constraint Optimization over Semirings | Interpretations of logical formulas over semirings have applications in
various areas of computer science including logic, AI, databases, and security.
Such interpretations provide richer information beyond the truth or falsity of
a statement. Examples of such semirings include Viterbi semiring, min-max or
access control semiring, tropical semiring, and fuzzy semiring.
The present work investigates the complexity of constraint optimization
problems over semirings. The generic optimization problem we study is the
following: Given a propositional formula $\varphi$ over $n$ variable and a
semiring $(K,+,\cdot,0,1)$, find the maximum value over all possible
interpretations of $\varphi$ over $K$. This can be seen as a generalization of
the well-known satisfiability problem. A related problem is to find an
interpretation that achieves the maximum value. In this work, we first focus on
these optimization problems over the Viterbi semiring, which we call optConfVal
and optConf.
We show that for general propositional formulas in negation normal form,
optConfVal and optConf are in ${\mathrm{FP}}^{\mathrm{NP}}$. We investigate
optConf when the input formula $\varphi$ is represented as a CNF. For CNF
formulae, we first derive an upper bound on optConfVal as a function of the
number of maximum satisfiable clauses. In particular, we show that if $r$ is
the maximum number of satisfiable clauses in a CNF formula with $m$ clauses,
then its optConfVal is at most $1/4^{m-r}$. Building on this we establish that
optConfVal for CNF formulae is hard for the complexity class
${\mathrm{FP}}^{\mathrm{NP}[\log]}$. We also design polynomial-time
approximation algorithms and establish an inapproximability for optConfVal. We
establish similar complexity results for these optimization problems over other
semirings including tropical, fuzzy, and access control semirings. | A. Pavan, Kuldeep S. Meel, N. V. Vinodchandran, Arnab Bhattacharyya | 2023-02-24T23:53:03Z | http://arxiv.org/abs/2302.12937v1 | # Constraint Optimization over Semirings+
###### Abstract
Interpretations of logical formulas over semirings (other than the Boolean semiring) have applications in various areas of computer science including logic, AI, databases, and security. Such interpretations provide richer information beyond the truth or falsity of a statement. Examples of such semirings include Viterbi semiring, min-max or access control semiring, tropical semiring, and fuzzy semiring.
The present work investigates the complexity of constraint optimization problems over semirings. The generic optimization problem we study is the following: Given a propositional formula \(\varphi\) over \(n\) variable and a semiring \((K,+,\cdot,0,1)\), find the maximum value over all possible interpretations of \(\varphi\) over \(K\). This can be seen as a generalization of the well-known satisfiability problem (a propositional formula is satisfiable if and only if the maximum value over all interpretations/assignments over the Boolean semiring is 1). A related problem is to find an interpretation that achieves the maximum value. In this work, we first focus on these optimization problems over the Viterbi semiring, which we call \(\mathsf{optConVal}\) and \(\mathsf{optConf}\).
We first show that for general propositional formulas in negation normal form, \(\mathsf{optConfVal}\) and \(\mathsf{optConf}\) are in \(\mathrm{FP}^{\mathrm{NP}}\). We then investigate \(\mathsf{optConf}\) when the input formula \(\varphi\) is represented in the conjunctive normal form. For CNF formulae, we first derive an upper bound on the value of \(\mathsf{optConfVal}\) as a function of the number of maximum satisfiable clauses. In particular, we show that if \(r\) is the maximum number of satisfiable clauses in a CNF formula with \(m\) clauses, then its \(\mathsf{optConfVal}\) value is at most \(1/4^{m-r}\). Building on this we establish that \(\mathsf{optConf}\) for CNF formulae is hard for the complexity class \(\mathrm{FP}^{\mathrm{NP}[\log]}\). We also design polynomial-time approximation algorithms and establish an inapproximability result for \(\mathsf{optConfVal}\). We establish similar complexity results for these optimization problems over other semirings including tropical, fuzzy, and access control semirings.
## 1 Introduction
Classically, propositional formulae are interpreted over the Boolean semiring \(\mathbb{B}=(\{\mathrm{F},\mathrm{T}\},\vee,\wedge,\mathrm{F},\mathrm{T})\) which is the standard semantics for the logical truth. In this setting, the variables take one of the two values \(\mathrm{T}\) (true) or \(\mathrm{F}\) (false). However, it is natural to extend the semantics to other semirings. Here, the idea is to interpret logical formulae when the variables take values over a semiring \(\mathbb{K}=(K,+,\cdot,0,1)\). Such interpretations provide richer information beyond the truth or falsity of a statement and have applications in several areas such as databases, AI, logic, and security (see [11, 12, 13, 14, 15, 16] and references therein). In particular, semiring _provenance analysis_ has been successfully applied in several software systems, such as Orchestra and Propolis (see, e.g., [1, 1, 13, 15, 16, 17]).
Examples of semirings that are studied in the literature include Viterbi semiring, fuzzy semiring, min-max or access control semiring, and tropical semiring. Semantics over the Viterbi semiring \(\mathbb{V}=([0,1],\max,\cdot,0,1)\), has applications in database provenance, where \(x\in[0,1]\) is interpreted as a _confidence score_[1, 1, 13, 14, 15], in probabilistic parsing, in probabilistic CSPs, and in Hidden Markov Models [16, 17, 18, 19]. The access control semiring can be used as a tool in security specifications [15]. Other semirings of interest include the tropical semiring, used in cost analysis and algebraic formulation for shortest path algorithms [13], and fuzzy semirings used in the context of fuzzy CSPs [2].
Optimization problems over Boolean interpretations have been central to many application areas as well as to foundational studies. Indeed, the classical satisfiability problem is to determine whether a formula \(\phi(x_{1},\cdots,x_{n})\) has an interpretation/assignment over the Boolean semiring that evaluates to True. Even though semiring semantics naturally appear in a variety of applications, the optimization problems over semirings, other than the Boolean semiring, have not received much attention.
In this work, we introduce and investigate the complexity of optimization problems over semiring semantics. Let \(\mathbb{K}=(K,+,\cdot,0,1)\) be a semiring with a total order over \(K\) and \(\varphi\) be a propositional formula over a set \(X\) of variables. A \(\mathbb{K}\)-interpretation \(\pi\) is a function from \(X\) to \(K\). Such an interpretation can be naturally extended to formula \(\varphi\), which we denote by \(\mathsf{Sem}(\varphi,\pi)\). We study the following computational problem: Given a propositional formula \(\varphi\) in negation normal form over a set \(X\) of variables, compute the maximum value of \(\mathsf{Sem}(\varphi,\pi)\) over all possible interpretations \(\pi\). We call this problem \(\mathsf{optSemVal}\). A related problem, denoted \(\mathsf{optSem}\), is to compute an interpretation \(\pi\) that maximizes \(\mathsf{Sem}(\varphi,\pi)\). Refer to Section 2 for a precise formulation of these problems.
There has been a rich history of work which formulated the notion of CSPs over semirings and investigated local consistency algorithms in that general framework. As noted above, however, the complexity of the optimization problems \(\mathsf{optSem}\) and \(\mathsf{optSemVal}\) has not received much attention. Our main contributions are the following:
1. We establish that both \(\mathsf{optConf}\) and \(\mathsf{optConfVal}\) are in the complexity class \(\mathrm{FP}^{\mathrm{NP}}\). The crucial underlying observation is that even though \(\pi\) maps \(X\) to real values in the range \([0,1]\), the solution to \(\mathsf{optConfVal}\) can be represented using polynomially many bits. We then draw upon connections to Farey sequences to derive an algorithm with polynomially many \(\mathrm{NP}\) calls (Theorem 3.2).
2. For CNF formulas, we establish an upper bound on \(\mathsf{optConfVal}\) as a function of the number of maximum satisfiable clauses (Theorem 3.7).
3. We also establish a lower bound on the complexity of \(\mathsf{optConfVal}\) and \(\mathsf{optConf}\). In particular, we show that both the problems are hard for the complexity class \(\mathrm{FP}^{\mathrm{NP}[\log]}\). To this end, we demonstrate a reduction from MaxSATVal to \(\mathsf{optConfVal}\); this reduction crucially relies on the above-mentioned upper bound on \(\mathsf{optConfVal}\) in terms of the number of maximum satisfiable clauses (Theorem 3.9).
4. We design a polynomial-time approximation algorithm for \(\mathsf{optConfVal}\) and establish an inapproximability result. In particular, for 3-CNF formulas with \(m\) clauses, we design a \(0.716^{m}\)-approximation algorithm and show that the approximation factor cannot be improved to \(0.845^{m}\) unless P = NP (Theorems 4.3 and 4.5).
5. Finally, we show that for the access control semiring, the complexity of these optimization problems is equivalent to the corresponding problems over Boolean semiring (Theorem 5.3).
**Remark 1**.: _Since the Viterbi semiring and the tropical semiring are isomorphic via the mapping \(x\leftrightarrow-\ln x\), results established for the Viterbi semiring also hold for the tropical semiring. Since the fuzzy semiring can be seen as an "infinite refinement" of the access control semiring with the same algebraic structure, results that we establish for the access control semiring also hold for the fuzzy semiring._
_Organization._ The rest of the paper is organized as follows. We give the necessary notation and definitions in Section 2. Section 3 details our results on the computational complexity of \(\mathsf{optConf}\) and \(\mathsf{optConfVal}\). Section 4 deals with approximation algorithms and the hardness of approximation of \(\mathsf{optConfVal}\). In Section 5, we give complexity results for optimization problems for the access control semiring. Finally, we conclude in Section 6.
## 2 Preliminaries
We assume that the reader is familiar with the definition of a semiring. We denote a generic semiring by \(\mathbb{K}=(K,+,\cdot,0,1)\) where \(K\) is the underlying set. For interpreting formulas over \(\mathbb{K}\), we will add a "negation" function \(\daleth:K\to K\). We assume \(\daleth\) is a bijection so that \(\daleth(\daleth(x))=x\), and \(\daleth(0)=1\). For ease of presentation, we use the most natural negation function (depending on the semiring). However, many of our results hold for very general interpretations of negation. Finally, as our focus is on optimization problems, we will also assume a (natural) total order on the elements of \(K\).
For a set \(X=\{x_{1},x_{2},\ldots x_{n}\}\) of variables, we associate the set \(\overline{X}=\{\neg x_{1},\ldots,\neg x_{n}\}\). We call the elements of \(X\cup\overline{X}\) literals; the formulas we consider are propositional formulas over \(X\cup\overline{X}\) in _negation normal form_. We also view a propositional formula \(\varphi\) in negation normal form as a rooted directed tree wherein each leaf node is labeled with a literal, 1, or 0 and each internal node is labeled with conjunction \((\wedge)\) or disjunction \((\vee)\). Note that viewing \(\varphi\) as a tree ensures a similar size as its string representation. We call the tree representing the formula \(\varphi\) the _formula tree_ and denote it by \(T_{\varphi}\). For a propositional formula \(\varphi(x_{1},\cdots,x_{n})\) in negation normal form, we use \(m\) to denote the size of the formula, i.e., the total number of occurrences of each variable and its negation. When \(\varphi(x_{1},\cdots x_{n})\) is in CNF form, \(m\) denotes the number of clauses. We interpret a propositional formula over a semiring \(\mathbb{K}\) by mapping the variables to \(K\) and naturally extending it. Formally, a \(\mathbb{K}\)-interpretation is a function \(\pi:X\to K\). We extend \(\pi\) to an arbitrary propositional formula \(\varphi\) in negation normal form, which is denoted by \(\mathsf{Sem}(\varphi,\pi)\) (\(\mathsf{Sem}\) stands for 'semantics'), as follows; a small evaluator sketch is given after the definition.
* \(\mathsf{Sem}(x,\pi)=\pi(x)\)
* \(\mathsf{Sem}(\neg x,\pi)=\daleth(\pi(x))\)
* \(\mathsf{Sem}(\alpha\vee\beta,\pi)=\mathsf{Sem}(\alpha,\pi)+\mathsf{Sem}(\beta,\pi)\)
* \(\mathsf{Sem}(\alpha\wedge\beta,\pi)=\mathsf{Sem}(\alpha,\pi)\cdot\mathsf{Sem}( \beta,\pi)\)
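This recursive definition translates directly into an evaluator. The following Python sketch (the formula encoding and all names are ours) computes \(\mathsf{Sem}(\varphi,\pi)\) for any semiring given by its two operations and a chosen negation function; instantiating it with \((\max,\cdot\,,x\mapsto 1-x)\) gives the Viterbi semantics \(\mathsf{Conf}\), and \((\max,\min,x\mapsto 1-x)\) gives a fuzzy-semiring semantics.

```python
from collections import namedtuple

Semiring = namedtuple("Semiring", ["add", "mul", "neg"])

def sem(formula, pi, K):
    """Evaluate Sem(phi, pi) by structural recursion on the formula tree.

    Formula nodes: ("var", name), ("neg", name), ("or", [children]),
    ("and", [children]).  `pi` maps variable names to semiring elements.
    """
    kind = formula[0]
    if kind == "var":
        return pi[formula[1]]
    if kind == "neg":
        return K.neg(pi[formula[1]])
    op = K.add if kind == "or" else K.mul
    vals = [sem(child, pi, K) for child in formula[1]]
    out = vals[0]
    for v in vals[1:]:
        out = op(out, v)
    return out

viterbi = Semiring(add=max, mul=lambda a, b: a * b, neg=lambda a: 1 - a)
fuzzy = Semiring(add=max, mul=min, neg=lambda a: 1 - a)

phi = ("and", [("var", "x1"), ("or", [("neg", "x1"), ("var", "x2")])])
pi = {"x1": 0.5, "x2": 0.8}
print(sem(phi, pi, viterbi), sem(phi, pi, fuzzy))    # 0.4 0.5
```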
### Optimization Problems and Complexity Classes
For a formula \(\varphi\), we define \(\mathsf{optSemVal}(\varphi)\) as
\[\mathsf{optSemVal}(\varphi)=\max_{\pi}\{\mathsf{Sem}(\varphi,\pi)\},\]
where \(\max\) is taken over all possible \(\mathbb{K}\)-interpretations from \(X\) to \(K\).
**Definition 2.1** (\(\mathsf{optSem}\) and \(\mathsf{optSemVal}\)).: Given a propositional formula \(\varphi\) in negation normal form, the \(\mathsf{optSemVal}\) problem is to compute \(\mathsf{optSemVal}(\varphi)\). The \(\mathsf{optSem}\) problem is to compute a \(\mathbb{K}\)-interpretation that achieves \(\mathsf{optSemVal}(\varphi)\), i.e, output \(\pi^{*}\) so that \(\mathsf{optSemVal}(\varphi)=\mathsf{Sem}(\varphi,\pi^{*})\).
Notice that when \(\mathbb{K}\) is the Boolean semiring (with \(0<1\) ordering and standard negation interpretation), \(\mathsf{optSemVal}\) is the well-known satisfiability problem: the formula \(\varphi\) is satisfiable if and only if \(\mathsf{optSemVal}(\varphi)=1\). Also, the problem \(\mathsf{optSem}\) is to output a satisfying assignment if the formula \(\varphi\) is satisfiable.
In this work, we consider the following semirings.
1. Viterbi semiring \(\mathbb{V}=([0,1],\max,\cdot,0,1)\). As mentioned, the Viterbi semiring has applications in database provenance, where \(x\in[0,1]\) is interpreted as confidence scores, in probabilistic parsing, in probabilistic CSPs, and in Hidden Markov Models.
2. The tropical semiring \(\mathbb{T}=(\mathbb{R}\cup\{\infty\},\min,+,\infty,0)\). The tropical semiring is isomorphic to the Viterbi semiring via the mapping \(x\leftrightarrow-\ln x\).
3. The fuzzy semiring \(\mathbb{F}=([0,1],\max,\min,0,1)\).
4. Access control semiring \(\mathbb{A}_{k}=([k],\max,\min,0,k)\). Intuitively, each \(i\in[k]\) is associated with an access control level with natural ordering. Here 0 corresponds to public access and \(k\) corresponds to no access at all. \([k]\) is the set \(\{0<1<\cdots<k\}\).
Most of our focus will be on complexity of \(\mathsf{optSem}\) and \(\mathsf{optSemVal}\) problems over the Viterbi semiring. We call the corresponding computational problems \(\mathsf{optConf}\) and \(\mathsf{optConfVal}\) respectively. We call the extended interpretation function \(\mathsf{Sem}\) as \(\mathsf{Conf}\) in this case.
**Definition 2.2** (MaxSat and MaxSatVal).: Given a propositional formula \(\varphi\) in CNF form, the MaxSat problem is to compute an assignment of \(\varphi\) that satisfies the maximum number of clauses. Given a propositional formula \(\varphi\) in CNF form, the MaxSatVal problem is to compute the maximum number of clauses of \(\varphi\) that can be satisfied.
We need a notion of reductions between functional problems. We use the notion of _metric reductions_ introduced by Krentel [10].
**Definition 2.3** (Metric Reduction).: For two functions \(f,g:\{0,1\}^{*}\to\{0,1\}^{*}\), we say that \(f\) metric reduces to \(g\) if there are polynomial-time computable functions \(h_{1}\) and \(h_{2}\) where \(h_{1}:\{0,1\}^{*}\to\{0,1\}^{*}\) (the reduction function) and \(h_{2}:\{0,1\}^{*}\times\{0,1\}^{*}\to\{0,1\}^{*}\) so that for any \(x\), \(f(x)=h_{2}(x,g(h_{1}(x)))\).
**Definition 2.4**.: For a function \(t:\mathbb{N}\to\mathbb{N}\), \(\mathrm{FP}^{\mathrm{NP}[t(n)]}\) denotes the class of functions that can be solved in polynomial-time with \(O(t(n))\) queries to an \(\mathrm{NP}\) oracle where \(n\) is the size of the input. When \(t(n)\) is some polynomial, we denote the class by \(\mathrm{FP}^{\mathrm{NP}}\).
Metric reductions are used to define notions of completeness and hardness for function classes \(\mathrm{FP}^{\mathrm{NP}}\) and \(\mathrm{FP}^{\mathrm{NP}[\log]}\). The following result due to Krentel [10] characterizes the complexity of the MaxSatVal problem.
**Theorem 2.5** ([10]).: MaxSatVal _is complete for \(\mathrm{FP}^{\mathrm{NP}[\log]}\) under metric reductions._
The following proposition is a basic ingredient in our results. It can be proved using basic calculus.
**Proposition 1**.: _Let \(f(x)=x^{a}(1-x)^{b}\) where \(a,b\) are non-negative integers, the maximum value of \(f(x)\) over the domain \([0,1]\) is attained when \(x=\frac{a}{a+b}\). The maximum value of the function is \(\left(\frac{a}{a+b}\right)^{a}\left(\frac{b}{a+b}\right)^{b}\)._
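Indeed, for \(0<x<1\) one has
\[f^{\prime}(x)=ax^{a-1}(1-x)^{b}-bx^{a}(1-x)^{b-1}=x^{a-1}(1-x)^{b-1}\bigl(a(1-x)-bx\bigr),\]
which vanishes exactly at \(x=\frac{a}{a+b}\); when \(a=0\) or \(b=0\) the maximum is attained at the endpoint \(x=0\) or \(x=1\) respectively, which the same expression \(\frac{a}{a+b}\) also yields.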
## 3 Computational Complexity of Confidence Maximization
For semantics over the Viterbi semiring we assume the standard closed world semantics and use the negation function \(\daleth(x)=1-x\). Thus we have \(\mathsf{Conf}(\neg x,\pi)+\mathsf{Conf}(x,\pi)=1\). However, our upper bound proofs go through for any reasonable negation function. We discuss this in Remark 2.
Since \(\mathsf{Conf}(\varphi,\pi)\) can be computed in polynomial time, \(\mathsf{optConf}\) is at least as hard as \(\mathsf{optConfVal}\). The following observation states that computing \(\mathsf{optConfVal}\) and \(\mathsf{optConf}\) are \(\mathrm{NP}\)-hard.
**Observation 3.1**.: _For a formula \(\varphi\), \(\mathsf{optConfVal}(\varphi)=1\) if and only if \(\varphi\) is satisfiable. Hence both \(\mathsf{optConf}\) and \(\mathsf{optConfVal}\) are NP-hard._
While both \(\mathsf{optConf}\) and \(\mathsf{optConfVal}\) are \(\mathrm{NP}\)-hard, we would like to understand their relation to other maximization problems. In the study of optimization problems, the complexity classes \(\mathrm{FP}^{\mathrm{NP}}\) and \(\mathrm{FP}^{\mathrm{NP}[\log]}\) play a key role. In this section, we investigate both upper and lower bounds for these problems in relation to the classes \(\mathrm{FP}^{\mathrm{NP}}\) and \(\mathrm{FP}^{\mathrm{NP}[\log]}\).
An Illustrative Example. We first provide an illustrative example that gives an idea behind the upper bound. Consider the formula \(\phi(x_{1},x_{2})=(x_{1})\wedge(x_{2})\wedge(\neg x_{1}\vee\neg x_{2})\). Clearly, the formula is not satisfiable. Over the Viterbi semiring, \(\mathsf{optConfVal}(\phi)=\max\limits_{x_{i}\in[0,1]}\max\{x_{1}x_{2}(1-x_{1}),\,x_{1}x_{2}(1-x_{2})\}\) by distributivity. This is maximized when (by Proposition 1) \(x_{1}=1\) and \(x_{2}=0.5\) or \(x_{1}=0.5\) and \(x_{2}=1\), leading to an optimum value of \(0.25\). In the following section, we show that the computation of \(\mathsf{optConfVal}\) reduces
to maximization over a set of polynomial terms wherein each polynomial term corresponds to a _proof tree_, which we define. While the number of polynomial terms could be exponential, we use an NP oracle to binary search for the term that gives the maximum value.
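For small formulas the example generalizes to a brute-force computation: enumerate the polynomial terms, one per choice made at each OR node, and maximize each term with Proposition 1. The sketch below (the formula encoding and all names are ours, purely for illustration) returns \(0.25\) on the example formula; the algorithm of the next subsection avoids this exponential enumeration by using an \(\mathrm{NP}\) oracle.

```python
from itertools import product

# phi = x1 AND x2 AND (NOT x1 OR NOT x2), encoded as nested tuples.
PHI = ("and", [("var", 1), ("var", 2), ("or", [("neg", 1), ("neg", 2)])])

def terms(node):
    """Yield one exponent map {i: (a_i, b_i)} per choice at the OR nodes."""
    kind = node[0]
    if kind == "var":
        yield {node[1]: (1, 0)}
    elif kind == "neg":
        yield {node[1]: (0, 1)}
    elif kind == "or":                       # pick exactly one child
        for child in node[1]:
            yield from terms(child)
    else:                                    # "and": combine all children
        for combo in product(*(list(terms(c)) for c in node[1])):
            merged = {}
            for part in combo:
                for i, (a, b) in part.items():
                    a0, b0 = merged.get(i, (0, 0))
                    merged[i] = (a0 + a, b0 + b)
            yield merged

def term_max(term):
    """max of prod_i x_i^{a_i} (1 - x_i)^{b_i} over [0,1]^n (Proposition 1)."""
    val = 1.0
    for a, b in term.values():
        if a:
            val *= (a / (a + b)) ** a
        if b:
            val *= (b / (a + b)) ** b
    return val

print(max(term_max(t) for t in terms(PHI)))  # 0.25
```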
### An Upper Bound for General Formulae
We show that \(\mathsf{optConfVal}\) and \(\mathsf{optConf}\) can be computed in polynomial-time with oracle queries to an \(\mathrm{NP}\) language.
**Theorem 3.2**.: \(\mathsf{optConfVal}\) _for formulas in negation normal form is in \(\mathrm{FP}^{\mathrm{NP}}\)._
_Proof Idea:_ In order to show that \(\mathsf{optConfVal}\) is in \(\mathrm{FP}^{\mathrm{NP}}\), we use a binary search strategy based on a language in \(\mathrm{NP}\). One of the challenges is that the confidence value could potentially be any real number in \([0,1]\) and thus a priori we may not be able to bound the number of binary search queries. However, we first argue that for any formula \(\varphi\) on \(n\) variables and with size \(m\), \(\mathsf{optConfVal}(\varphi)\) is a fraction of the form \(A/B\) where \(1\leq A\leq B\leq 2^{nm\log m}\). The ordered list of such fractions is known as the _Farey_ sequence of order \(2^{nm\log m}\) (denoted as \(\mathcal{F}_{2^{nm\log m}}\)). Thus our task is to do a binary search over \(\mathcal{F}_{2^{nm\log m}}\) with time complexity \(O(nm\log m)\). However, it is not clear how to perform, in general, a binary search for an unknown element of the Farey sequence \(\mathcal{F}_{N}\) with time complexity \(O(\log N)\). We overcome this difficulty by using an \(\mathrm{NP}\) oracle to aid the binary search. We will give the details now.
**Definition 3.3**.: Let \(\varphi(x_{1},\cdots,x_{n})\) be a propositional formula in negation normal form with size \(m\). Let \(T_{\varphi}\) be its formula tree. A proof tree \(T\) of \(T_{\varphi}\) is a subtree obtained by the following process: for every OR node \(v\), choose one of the sub-trees of \(v\). For every AND node \(v\), keep all the subtrees.
Note that in a proof tree every OR node has only one child.
**Definition 3.4**.: Let \(\varphi(x_{1},\cdots,x_{n})\) be a propositional formula in negation normal form and let \(T\) be a proof tree. We define the _proof tree polynomial_\(p_{T}\) by inductively defining a polynomial for the subtree at every node \(v\) (denoted by \(p_{v}\)): If the node \(v\) is a variable \(x_{i}\), the polynomial is \(x_{i}\) and if it is \(\neg x_{i}\), the polynomial is \((1-x_{i})\). If \(v\) is an AND node with children \(v_{1},\ldots,v_{s}\), then \(p_{v}=\prod_{i=1}^{s}p_{v_{i}}\). If \(v\) is an OR node with a child \(u\), then \(p_{v}=p_{u}\).
**Claim 3.4.1**.: _Let \(\varphi(x_{1},\cdots,x_{n})\) be a propositional formula in negation normal form and let \(T\) be a proof tree of \(\varphi\)._
1. _The proof tree polynomial_ \(p_{T}\) _is of the form_ \[\prod_{i=1}^{n}x_{i}^{a_{i}}(1-x_{i})^{b_{i}}\] _where_ \(0\leq a_{i}+b_{i}\leq m\)_._
2. _For a_ \(\mathbb{V}\)_-interpretation_ \(\pi\)_,_ \[\mathsf{Conf}(T,\pi)=p_{T}\left(\pi(x_{1}),\ldots,\pi(x_{n})\right).\]
3. _Both_ \(\mathsf{optConf}(T)\) _and_ \(\mathsf{optConfVal}(T)\) _can be computed in polynomial-time._
4. \(\mathsf{optConfVal}(T)=\Pi_{i=1}^{n}\left(\frac{a_{i}}{a_{i}+b_{i}}\right)^{a_ {i}}\left(\frac{b_{i}}{a_{i}+b_{i}}\right)^{b_{i}}.\)
Proof.: Item (1) follows from the definition of the proof tree polynomial and a routine induction and the fact that the size of the formula \(\varphi\) is \(m\). Item (2) follows from the definitions.
Note that the polynomial \(\prod_{i=1}^{n}x_{i}^{a_{i}}(1-x_{i})^{b_{i}}\) can be maximized by maximizing each of the individual terms \(x_{i}^{a_{i}}(1-x_{i})^{b_{i}}\). By Proposition 1, the maximum value for a polynomial of this form is achieved at \(x_{i}=\frac{a_{i}}{a_{i}+b_{i}}\). Thus the interpretation \(\pi(x_{i})=\frac{a_{i}}{a_{i}+b_{i}}\) is an optimal \(\mathbb{V}\)-interpretation that can be computed in polynomial-time. Since \(0\leq a_{i}+b_{i}\leq m\), \(\mathsf{optConfVal}\) also can be computed in polynomial-time. Item (4) follows from Item (3), by substituting the values \(\pi(x_{i})\) in the polynomial \(p_{T}\).
The next claim relates \(\mathsf{optConf}\) of the formula \(\varphi\) to \(\mathsf{optConf}\) of its proof trees. The proof of this claim follows from the definition of proof tree and standard induction.
**Claim 3.4.2**.: _For a formula \(\varphi\),_
\[\mathsf{optConfVal}(\varphi)=\max_{T}\mathsf{optConfVal}(T)\]
_where the maximum is taken over all proof trees \(T\) of \(T_{\varphi}\). If \(T^{*}\) is a proof tree for which \(\mathsf{optConfVal}(T)\) is maximized, then \(\mathsf{optConf}(T^{*})\) is also an optimal \(\mathbb{V}\)-interpretation for \(\varphi\), i.e., \(\mathsf{Conf}(\varphi,\mathsf{optConf}(T^{*}))=\mathsf{optConfVal}(\varphi)\)._
The above claim states that \(\mathsf{optConf}(\varphi)\) can be computed by cycling through all proof trees \(T\) of \(\varphi\) and computing \(\mathsf{optConf}(T)\). Since there could be exponentially many proof trees, this process would take exponential time. Our task is to show that this process can be done in \(\mathrm{FP}^{\mathrm{NP}}\). To do this we establish a claim that restricts values that \(\mathsf{optConfVal}(\varphi)\) can take. We need the notion of _Farey sequence_.
**Definition 3.5**.: For any positive integer \(N\), the _Farey sequence_ of order \(N\), denoted by \(\mathcal{F}_{N}\), is the set of all irreducible fractions \(p/q\) with \(0<p<q\leq N\) arranged in increasing order.
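For small \(N\), the Farey sequence can be generated with the classical next-term recurrence; a short Python sketch (ours) follows. Note that the usual convention also includes the endpoint fractions \(0/1\) and \(1/1\), which Definition 3.5 leaves out.

```python
def farey(N):
    """Farey sequence of order N (including the endpoints 0/1 and 1/1),
    generated with the standard next-term recurrence."""
    a, b, c, d = 0, 1, 1, N
    seq = [(a, b)]
    while c <= N:
        k = (N + b) // d
        a, b, c, d = c, d, k * c - a, k * d - b
        seq.append((a, b))
    return seq

print(farey(5))
# [(0, 1), (1, 5), (1, 4), (1, 3), (2, 5), (1, 2), (3, 5), (2, 3), (3, 4), (4, 5), (1, 1)]
```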
**Claim 3.5.1**.:
1. _For a propositional formula \(\varphi(x_{1},\cdots,x_{n})\),_ \(\mathsf{optConfVal}(\varphi)\) _belongs to the Farey sequence_ \(\mathcal{F}_{2^{nm\log m}}\)_._
2. _For any two distinct fractions_ \(u\) _and_ \(v\) _from_ \(\mathcal{F}_{2^{nm\log m}}\)_,_ \(|u-v|\geq 1/2^{2nm\log m}\)_._
Proof.: By Claim 3.4.2, \(\mathsf{optConfVal}(\varphi)\) equals \(\mathsf{optConfVal}(T)\), for some proof tree \(T\). By Item (4) of Claim 3.4.1 this value is a product of fractions, where the denominator of each fraction is of the form \((a_{i}+b_{i})^{a_{i}+b_{i}}\) where \(a_{i}\) and \(b_{i}\) are non-negative integers. Since \(a_{i}+b_{i}\leq m\), each denominator is at most \(m^{m}\), and thus the denominator of the product is bounded by \(m^{nm}=2^{nm\log m}\). Since the numerator is at most the denominator, the claim follows.
For the proof of the second part, let \(u=p_{1}/q_{1}\) and \(v=p_{2}/q_{2}\), \(u>v\). Now \(u-v=(p_{1}q_{2}-p_{2}q_{1})/q_{1}q_{2}\). Since \(q_{1},q_{2}\leq 2^{nm\log m}\), we have \(u-v\geq(p_{1}q_{2}-p_{2}q_{1})/2^{2nm\log m}\). Since \(p_{1},p_{2},q_{1}\), \(q_{2}\) are all integers, \(p_{1}q_{2}-p_{2}q_{1}\geq 1\). Thus \(|u-v|\geq 1/2^{2nm\log m}\).
Consider the following language
\[L_{opt}=\{\langle\varphi,v\rangle\mid\mathsf{optConfVal}(\varphi)\geq v\}\]
**Claim 3.5.2**.: \(L_{opt}\) _is in \(\mathrm{NP}\)._
Proof.: Consider the following non-deterministic machine \(M\). On input \(\langle\varphi,v\rangle\), \(M\) guesses a proof tree \(T\) of \(\varphi\): for every OR node, non-deterministically pick one of the subtrees. For \(T\), compute \(\mathsf{optConfVal}(T)\) and accept if \(\mathsf{optConfVal}(T)\geq v\). This can be done in polynomial-time using Item (3) of Claim 3.4.1. The correctness of this algorithm follows from Claim 3.4.2.
We need a method that given two fractions \(u\) and \(v\) and an integer \(N\), outputs a fraction \(p/q:u\leq p/q\leq v\), and \(p/q\in\mathcal{F}_{N}\). We give an \(\mathrm{FP}^{\mathrm{NP}}\) algorithm that makes \(O(\log N)\) queries to the \(\mathrm{NP}\) oracle to achieve this. We first define the \(\mathrm{NP}\) language \(L_{\textit{farey}}\). For this we fix any standard encoding of fractions using the binary alphabet. Such an encoding will have \(O(\log N)\) bit representation for any fraction in \(\mathcal{F}_{N}\).
\[L_{\textit{farey}}=\{\langle N,u,v,z\rangle\ |\ \exists z^{\prime};u\leq zz^{ \prime}\leq v\ \&\ zz^{\prime}\in\mathcal{F}_{N}\}\]
The following claim is easy to see.
**Claim 3.5.3**.: \(L_{\textit{farey}}\in\mathrm{NP}\)_._
Now we are ready to prove the Theorem 3.2.
Proof.: (_of Theorem 3.2_). The algorithm performs a binary search over the range \([0,1]\) by making adaptive queries \(\langle\varphi,v\rangle\) to the \(\mathrm{NP}\) language \(L_{\textit{opt}}\) starting with \(v=1\). At any iteration of the binary search, we have an interval \(I=[I_{l},I_{r}]\) and with the invariant \(I_{l}\leq\mathsf{optConfVal}(\varphi)<I_{r}\). The binary search stops when the size of the interval \([I_{l},I_{r}]=1/2^{2nm\log m}\). Since each iteration of the binary search reduces the size of the interval by a factor of 2, the search stops after making \(2nm\log m\) queries to \(L_{\textit{opt}}\). The invariant ensures that \(\mathsf{optConfVal}(\varphi)\) is in this interval. Moreover, \(\mathsf{optConfVal}(\varphi)\in\mathcal{F}_{2^{nm\log m}}\) (by item (1) of Claim 3.5.1) and there are no other fractions from \(\mathcal{F}_{2^{nm\log m}}\) in this interval (by item (2) of Claim 3.5.1). Now, by making \(O(nm\log m)\) queries to \(L_{\textit{farey}}\) with \(N=2^{nm\log m}\), \(u=I_{l}\), \(v=I_{r}\), we can construct the binary representation of the unique fraction in \(\mathcal{F}_{2^{nm\log m}}\) that lies between \(I_{l}\) and \(I_{r}\) which is \(\mathsf{optConfVal}(\varphi)\).
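The interval-halving part of this argument can be sketched as follows, with the \(\mathrm{NP}\) oracle for \(L_{opt}\) abstracted as a callable (a Python sketch under that assumption, for illustration only; the final step that extracts the exact fraction via queries to \(L_{\textit{farey}}\) is omitted).

```python
from fractions import Fraction

def optconfval_via_oracle(phi, n, m, oracle):
    """Interval-halving step of Theorem 3.2 (illustrative sketch only).

    `oracle(phi, v)` is assumed to decide the NP language L_opt, i.e. whether
    optConfVal(phi) >= v.  We keep the invariant lo <= optConfVal(phi) < hi
    and stop once the interval is small enough to isolate a single fraction
    of the Farey sequence of order 2^{n m ceil(log m)}."""
    if oracle(phi, Fraction(1)):
        return Fraction(1), Fraction(1)
    log_m = max(1, (m - 1).bit_length())          # stand-in for ceil(log2 m)
    gap = Fraction(1, 2 ** (2 * n * m * log_m))
    lo, hi = Fraction(0), Fraction(1)
    while hi - lo > gap:
        mid = (lo + hi) / 2
        if oracle(phi, mid):
            lo = mid
        else:
            hi = mid
    return lo, hi

# Toy check with a fake oracle whose hidden optimum is 1/4.
fake = lambda phi, v: Fraction(1, 4) >= v
print(optconfval_via_oracle(None, 2, 3, fake))    # (1/4, 1/4 + tiny)
```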
Next we show that an optimal \(\mathbb{V}\)-interpretation can also be computed in polynomial time with queries to an NP oracle.
**Theorem 3.6**.: \(\mathsf{optConf}\) _for formulas in negation normal form can be computed in \(\mathrm{FP}^{\mathrm{NP}}\)._
Proof.: Let \(\varphi\) be a propositional formula in negation normal form. We use a prefix search over the encoding of proof trees of \(\varphi\) using an \(\mathrm{NP}\) language to isolate a proof tree \(T\) such that \(\mathsf{optConfVal}(\varphi)=\mathsf{optConfVal}(T)\). For this, we fix an encoding of proof trees of \(\varphi\). Consider the following \(\mathrm{NP}\) language \(L_{pt}\):
\[\{\langle\varphi,v,z\rangle\ |\ \exists z^{\prime}:zz^{\prime}\text{ encodes a proof tree $T$ of $\varphi$}\ \&\ \mathsf{optConfVal}(T)=v\}\]
**Claim 3.6.1**.: \(L_{\textit{pt}}\) _is in \(\mathrm{NP}\)._
Proof.: Consider a non-deterministic machine that guesses a \(z^{\prime}\), verifies that \(zz^{\prime}\) encodes a proof tree \(T\) of \(\varphi\), and accepts if \(\mathsf{optConfVal}(T)=v\). By item (3) of Claim 3.4.1, \(\mathsf{optConfVal}(T)\) can be computed in polynomial time.
To complete the proof of Theorem 3.6, given a propositional formula \(\varphi\), we first use the \(\mathrm{FP}^{\mathrm{NP}}\) algorithm from Theorem 3.2 to compute \(v^{*}=\mathsf{optConfVal}(\varphi)\). We can then construct a proof tree \(T\) of \(\varphi\) with \(\mathsf{optConfVal}(T)=v^{*}\) by a prefix search using the language \(L_{pt}\). Finally, by Claim 3.4.1, we can compute a \(\mathbb{V}\)-interpretation \(\pi^{*}\) so that \(\mathsf{Conf}(T,\pi^{*})=v^{*}\). Thus \(\pi^{*}\) is an optimal \(\mathbb{V}\)-interpretation for \(\varphi\), by Claim 3.4.2.
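The prefix search in this proof can be phrased similarly to the earlier sketch; below (ours), `oracle_pt(phi, v, z)` is a hypothetical stand-in for the \(\mathrm{NP}\) oracle for \(L_{pt}\), and `encoding_length` is an assumed polynomial bound on the length of proof-tree encodings.

```python
def find_proof_tree(phi, v_star, encoding_length, oracle_pt):
    """Prefix search of Theorem 3.6: grow a bit string z that remains
    extendable to an encoding of a proof tree T with optConfVal(T) = v_star."""
    z = ""
    for _ in range(encoding_length):
        if oracle_pt(phi, v_star, z + "0"):
            z += "0"
        elif oracle_pt(phi, v_star, z + "1"):
            z += "1"
        else:
            break     # no proper extension works, so z is already a complete encoding
    return z
```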
**Remark 2**.: _We revisit the semantics of negation. As stated earlier, by assuming the closed world semantics, we have \(\overline{\mathsf{l}}(x)=1-x\). We note that this assumption is not strictly necessary for the above proof to go through. Recall that Item (1) of Claim 3.4.1 states that the proof tree polynomial is of the form \(\prod x_{i}^{a_{i}}(1-x_{i})^{b_{i}}\). For a general negation function \(\overline{\mathsf{l}}\), the proof tree polynomial is of the form \(\prod x_{i}^{a_{i}}(\overline{\mathsf{l}}(x_{i}))^{b_{i}}\). Now if the maximum value of a term \(x^{a}(\overline{\mathsf{l}}(x))^{b}\) can be found, for example when \(\overline{\mathsf{l}}\) is an explicit differentiable function, the result will hold._
### Relation to \(\mathsf{MaxSat}\) for CNF Formulae
In this section we study the \(\mathsf{optConfVal}\) problem for CNF formulae and establish its relation to the \(\mathsf{MaxSat}\) problem. We first exhibit an upper bound on \(\mathsf{optConfVal}(\varphi)\) in terms of the maximum number of simultaneously satisfiable clauses. Building on this result, in Section 3.3 we show that \(\mathsf{optConfVal}\) for CNF formulae is hard for the complexity class \(\mathrm{FP}^{\mathrm{NP}[\log]}\).
We first define some notation that will be used in this and the next subsection. Let \(\varphi(x_{1},\cdots,x_{n})=C_{1}\wedge\cdots\wedge C_{m}\) be a CNF formula and let \(\pi^{*}\) be an optimal \(\mathbb{V}\)-interpretation. For each clause \(C\) of \(\varphi\), let \(\pi^{*}(C)\) be the value achieved by this interpretation, i.e., \(\pi^{*}(C)=\mathsf{Conf}(C,\pi^{*})\). Observe that since \(C\) is a disjunction of literals, \(\pi^{*}(C)=\max_{\ell\in C}\{\pi^{*}(\ell)\}\). For a clause \(C\), let
\[\ell_{C}=\operatorname{argmax}_{\ell\in C}\{\pi^{*}(\ell)\}\]
In the above, if the maximum is achieved by multiple literals, we take the smallest one as \(\ell_{C}\) (assuming the order \(x_{1}<\neg x_{1}<x_{2}<\neg x_{2}<\cdots<x_{n}<\neg x_{n}\)). Observe that, since we are working over the Viterbi semiring, \(\mathsf{Conf}(C,\pi^{*})=\pi^{*}(\ell_{C})\). A literal \(\ell\) is a _maximizing literal for a clause \(C\)_ if \(\ell_{C}=\ell\).
Since \(\varphi\) is a CNF formula, for any \(\mathbb{V}\)-interpretation \(\pi\), \(\mathsf{Conf}(\varphi,\pi)\) is of the form \(\Pi_{i=1}^{m}\mathsf{Conf}(C_{i},\pi)\). Given a collection of clauses \(\mathcal{D}\) from \(\varphi\), the _contribution of \(\mathcal{D}\) to \(\mathsf{Conf}(\varphi,\pi)\)_ is defined as \(\Pi_{C\in\mathcal{D}}\mathsf{Conf}(C,\pi)\).
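As a concrete reference for the notation, the following small routine (ours) evaluates \(\mathsf{Conf}\) of a CNF formula over the Viterbi semiring. Clauses are encoded DIMACS-style as lists of signed integers, with \(-i\) standing for \(\neg x_{i}\), and negation of values is the closed-world \(1-x\) of Remark 2; this encoding and the function names are our own choices.

```python
from math import prod

def literal_value(lit: int, pi: dict) -> float:
    """Value of a literal under the V-interpretation pi (negation as 1 - x)."""
    x = pi[abs(lit)]
    return x if lit > 0 else 1.0 - x

def conf(clauses, pi) -> float:
    """Conf(phi, pi) for a CNF formula: max within a clause, product over clauses."""
    return prod(max(literal_value(l, pi) for l in C) for C in clauses)

# Example: phi = (x1 v ~x2) & (x2 v x3) with pi(x1)=0.9, pi(x2)=0.5, pi(x3)=0.2:
print(conf([[1, -2], [2, 3]], {1: 0.9, 2: 0.5, 3: 0.2}))   # 0.9 * 0.5 = 0.45
```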
The following theorem provides an upperbound on \(\mathsf{optConfVal}(\varphi)\) using \(\mathsf{MaxSatVal}\). This is the main result of this section.
**Theorem 3.7**.: _Let \(\varphi(x_{1},\cdots,x_{n})\) be a CNF formula with \(m\) clauses. Let \(r\) be the maximum number of clauses that can be satisfied. Then \(\mathsf{optConfVal}(\varphi)\leq 1/4^{(m-r)}\)._
Proof.: Let \(\pi^{*}\) be an optimal \(\mathbb{V}\)-interpretation for \(\varphi\). A clause \(C\) is called a _low-clause_ if \(\pi^{*}(C)<1/2\), a _high-clause_ if \(\pi^{*}(C)>1/2\), and a _neutral-clause_ if \(\pi^{*}(C)=1/2\). Let \(L\), \(H\), and \(N\) respectively denote the number of low, high, and neutral clauses.
We start with the following claim that relates the number of neutral clauses and the number of high-clauses to \(r\).
**Claim 3.7.1**.: \(\frac{N}{2}+H\leq r\)__
Proof.: We construct a truth assignment for \(\varphi\) that satisfies at least \(\frac{N}{2}+H\) clauses; since \(r\) is the maximum number of clauses that can be simultaneously satisfied, this will prove the claim.
For a variable \(x\), let
\[p_{x}=|\{C\mid C\text{ is neutral and }\ell_{C}=x\}|\]
and
\[q_{x}=|\{C\mid C\text{ is neutral and }\ell_{C}=\neg x\}|\]
That is \(p_{x}\) is the number of neutral clauses for which \(x\) is the maximizing literal and \(q_{x}\) is the number of neutral clauses for which \(\neg x\) is the maximizing literal.
Consider the truth assignment constructed by the following three rules: (1) For every high-clause \(C\), set \(\ell_{C}\) to True and \(\neg\ell_{C}\) to False. (2) For every variable \(x\), if at least one of \(p_{x}\) or \(q_{x}\) is not zero, then set \(x\) to True if \(p_{x}\geq q_{x}\) and to False otherwise. (3) All remaining variables are assigned arbitrary True/False values (consistently for each literal and its negation).
We argue that this is a consistent assignment, i.e., for every literal \(\ell\), \(\ell\) and \(\neg\ell\) are not assigned the same truth value. Consider a literal \(\ell\). If there is a high-clause \(C\) such that \(\ell=\ell_{C}\), then this literal is assigned the truth value True and \(\neg\ell\) is assigned False. In this case, since \(\pi^{*}(\ell)>1/2\), we have \(\pi^{*}(\neg\ell)<1/2\). Thus \(\neg\ell\) cannot be a maximizing literal for any high-clause, and hence Rule (1) does not assign True to \(\neg\ell\). Again, since \(\pi^{*}(\ell)>1/2\), there is no neutral-clause \(D\) such that \(\ell=\ell_{D}\) or \(\neg\ell=\ell_{D}\). Thus Rule (2) does not assign a truth value to either \(\ell\) or \(\neg\ell\). Since \(\ell\) and \(\neg\ell\) are already assigned truth values, Rule (3) does not assign a truth value to \(\ell\) or \(\neg\ell\).
Consider a variable \(x\) where at least one of \(p_{x}\) or \(q_{x}\) is not zero. In this case \(x\) or \(\neg x\) is maximizing literal for a neutral clause. Thus \(\pi^{*}(x)=\pi^{*}(\neg x)=1/2\) and neither \(x\) nor \(\neg x\) is maximizing literal for a high-clause. Thus Rule (1) does not assign a truth value to \(x\) or \(\neg x\). Now \(x\) is True if and only if \(p_{x}\geq q_{x}\), thus the truth value assigned to \(x\) (and \(\neg x\)) is consistent. Since Rule (3) consistently assigns truth values of literals that are not covered by Rules (1) and (2), the constructed assignment is a consistent assignment.
For every high clause \(C\), literal \(\ell_{C}\) is set to true. Thus the assignment satisfies all the high-clauses. Consider a variable \(x\) and let \(\mathcal{D}\) be the (non-empty) collection of neutral clauses for which either \(x\) or \(\neg x\) is a maximizing literal. As \(x\) is assigned True if and only if \(p_{x}\geq q_{x}\), at least half the clauses from \(\mathcal{D}\) are satisfied. Thus this assignment satisfies at least \(H+\frac{N}{2}\) clauses. Since \(r\) is the maximum number of satisfiable clauses, the claim follows.
For a literal \(\ell\), let \(a_{\ell}\) be the number of low-clauses for which \(\ell\) is the maximizing literal and \(b_{\ell}\) the number of high-clauses for which \(\neg\ell\) is the maximizing literal, i.e.,
\[a_{\ell}=|\{C\mid C\text{ is a low-clause and }\ell_{C}=\ell\}|,\]
and
\[b_{\ell}=|\{C\mid C\text{ is a high-clause and }\ell_{C}=\neg\ell\}|,\]
We show the following relation between \(a_{\ell}\) and \(b_{\ell}\).
**Claim 3.7.2**.: _For every literal \(\ell\), \(a_{\ell}\leq b_{\ell}\)._
Proof.: For a variable \(x_{i}\), let \(\varphi_{|x_{i}}\) denote the conjunction of the clauses of \(\varphi\) whose maximizing literal is \(x_{i}\) or \(\neg x_{i}\). Since every clause has exactly one maximizing literal, for any \(\mathbb{V}\)-interpretation \(\pi\),
\[\mathsf{Conf}(\varphi,\pi)=\Pi_{i}\mathsf{Conf}(\varphi_{|x_{i}},\pi)\] (1)
Now suppose that \(a_{\ell}>b_{\ell}\) for some literal \(\ell\). Let \(x_{j}\) be the variable corresponding to the literal \(\ell\). Note that
\[\mathsf{Conf}(\varphi_{|x_{j}},\pi^{*})=\pi^{*}(\ell)^{a_{\ell}}\times(1-\pi^{ *}(\ell))^{b_{\ell}}\]
where \(\pi^{*}(\ell)<1/2\), since \(a_{\ell}\geq 1\) means that \(\ell\) is the maximizing literal of some low-clause. Consider a new interpretation \(\pi^{\prime}\) where \(\pi^{\prime}(\ell)=1-\pi^{*}(\ell)\), and for all other literals the value of \(\pi^{\prime}\) is the same as the value of \(\pi^{*}\). Now
\[\frac{\mathsf{Conf}(\varphi_{|x_{j}},\pi^{\prime})}{\mathsf{Conf}(\varphi_{|x_{j}},\pi^{*})} = \frac{\pi^{\prime}(\ell)^{a_{\ell}}\times(1-\pi^{\prime}(\ell))^{b_{\ell}}}{\pi^{*}(\ell)^{a_{\ell}}\times(1-\pi^{*}(\ell))^{b_{\ell}}}\] \[= \frac{(1-\pi^{*}(\ell))^{a_{\ell}}\times\pi^{*}(\ell)^{b_{\ell}}}{\pi^{*}(\ell)^{a_{\ell}}\times(1-\pi^{*}(\ell))^{b_{\ell}}}\] \[= \left(\frac{1-\pi^{*}(\ell)}{\pi^{*}(\ell)}\right)^{a_{\ell}-b_{\ell}}>1\]
The last inequality follows because \(\pi^{*}(\ell)<1/2\) and the assumption that \(a_{\ell}>b_{\ell}\). Since \(\mathsf{Conf}(\varphi_{|x},\pi^{*})=\mathsf{Conf}(\varphi_{|x},\pi^{\prime})\) for every variable \(x\neq x_{j}\), combining the above inequality with Equation (1), we obtain that \(\mathsf{Conf}(\varphi,\pi^{\prime})>\mathsf{Conf}(\varphi,\pi^{*})\), and thus \(\pi^{*}\) is not an optimal \(\mathbb{V}\)-interpretation. This is a contradiction. Thus \(a_{\ell}\leq b_{\ell}\).
We next bound the contribution of neutral and low clauses to \(\mathsf{optConfVal}(\varphi)\). For every neutral clause \(C\), \(\pi^{*}(C)=1/2\), thus we have the following observation.
**Observation 3.8**.: _The contribution of neutral clauses to \(\mathsf{Conf}(\varphi,\pi^{*})\) is exactly \(1/2^{N}\)._
We establish the following claim.
**Claim 3.8.1**.: \[\mathsf{Conf}(\varphi,\pi^{*})=\prod_{\ell}\left(\pi^{*}(\ell)^{a_{\ell}}\times(1 -\pi^{*}(\ell))^{b_{\ell}}\right)\times\frac{1}{2^{N}}\]
Proof.: By Observation 3.8, the contribution of the neutral clauses to \(\mathsf{Conf}(\varphi,\pi^{*})\) is \(1/2^{N}\). Next we show that the contribution of all high and low clauses is exactly
\[\prod_{\ell}\pi^{*}(\ell)^{a_{\ell}}\times(1-\pi^{*}(\ell))^{b_{\ell}}.\]
For this we first claim that, for every literal \(\ell\), at most one of \(\ell\) and \(\neg\ell\) contributes to the above product; for this it suffices to prove that at least one of \(a_{\ell}\) and \(a_{\neg\ell}\) (resp. \(b_{\ell}\) and \(b_{\neg\ell}\)) is zero. Suppose \(a_{\ell}\neq 0\); in this case \(\neg\ell\) cannot be a maximizing literal for any low-clause, thus \(a_{\neg\ell}=0\). Suppose that \(b_{\ell}\neq 0\); then \(\neg\ell\) is a maximizing literal for a high-clause and thus \(\pi^{*}(\neg\ell)>1/2\) and \(\pi^{*}(\ell)\leq 1/2\). If \(b_{\neg\ell}\neq 0\), then \(\ell\) must be a maximizing literal for a high-clause, and this is not possible as \(\pi^{*}(\ell)\leq 1/2\). Thus \(b_{\neg\ell}=0\).
Let \(Z\) be the collection of literals \(\ell\) for which \(a_{\ell}>0\). Now the quantity \(\prod_{\ell\in Z}\pi^{*}(\ell)^{a_{\ell}}\times(1-\pi^{*}(\ell))^{b_{\ell}}\) captures the contribution of all low clauses and of \(\sum_{\ell\in Z}b_{\ell}\) many high-clauses. For each remaining high-clause, there exists a literal \(\ell\) such that \(\ell\notin Z\) and \(b_{\ell}\neq 0\). The contribution of all the remaining high-clauses is \(\prod_{\ell\notin Z}(1-\pi^{*}(\ell))^{b_{\ell}}\). This quantity equals \(\prod_{\ell\notin Z}\pi^{*}(\ell)^{a_{\ell}}\times(1-\pi^{*}(\ell))^{b_{\ell}}\) as \(a_{\ell}=0\) for \(\ell\notin Z\).
Finally, we are ready to complete the proof of Theorem 3.7. For every literal \(\ell\), by Claim 3.7.2, \(a_{\ell}\leq b_{\ell}\). Write \(b_{\ell}=a_{\ell}+c_{\ell}\) with \(c_{\ell}\geq 0\). Consider the following inequalities.
\[\mathsf{optConfVal}(\varphi) = \mathsf{Conf}(\varphi,\pi^{*})\] \[= \prod_{\ell}\left(\pi^{*}(\ell)^{a_{\ell}}\times(1-\pi^{*}(\ell)) ^{b_{\ell}}\right)\times\frac{1}{2^{N}}\] \[= \prod_{\ell}\left(\pi^{*}(\ell)^{a_{\ell}}\times(1-\pi^{*}(\ell) )^{a_{\ell}+c_{\ell}}\right)\times\frac{1}{2^{N}}\] \[\leq \prod_{\ell}\left(\pi^{*}(\ell)^{a_{\ell}}\times(1-\pi^{*}(\ell) )^{a_{\ell}}\right)\times\frac{1}{2^{N}}\] \[\leq \prod_{\ell}\left(\frac{1}{4^{a_{\ell}}}\right)\times\frac{1}{2^ {N}}=\frac{1}{4^{L+N/2}}\]
In the above, the equality in line 2 is due to Claim 3.8.1. The inequality in line 4 follows because \((1-\pi^{*}(\ell))\leq 1\). The last inequality follows because \(x(1-x)\) is maximized at \(x=1/2\). The final equality follows as \(\sum_{\ell}a_{\ell}=L\). Note that the number of clauses is \(m=N+H+L\), and by Claim 3.7.1, \(H+N/2\leq r\). It follows that \(L+N/2\geq m-r\). Thus \(\mathsf{optConfVal}(\varphi)=\mathsf{Conf}(\varphi,\pi^{*})\leq\frac{1}{4^{L+N/2}}\leq\frac{1}{4^{m-r}}\).
### \(\mathrm{FP}^{\mathrm{NP}[\log]}\)-Hardness
In this subsection, we show that \(\mathsf{optConfVal}\) is hard for the class \(\mathrm{FP}^{\mathrm{NP}[\log]}\). We show this by reducing \(\mathsf{MaxSatVal}\) to \(\mathsf{optConfVal}\). Since \(\mathsf{MaxSatVal}\) is complete for \(\mathrm{FP}^{\mathrm{NP}[\log]}\), the result follows. We also show that the same reduction can be used to compute a \(\mathsf{MaxSat}\) assignment from an optimal \(\mathbb{V}\)-interpretation.
**Theorem 3.9**.: \(\mathsf{MaxSatVal}\) _metric reduces to \(\mathsf{optConfVal}\) for CNF formulae. Hence \(\mathsf{optConfVal}\) is hard for \(\mathrm{FP}^{\mathrm{NP}[\log]}\) for CNF formulae._
Proof.: Let \(\varphi(x_{1},\ldots,x_{n})=C_{1}\wedge\ldots\wedge C_{m}\) be a formula with \(m\) clauses on variables \(x_{1},\ldots,x_{n}\). Consider the formula \(\varphi^{\prime}\) with \(m\) additional variables \(y_{1},\ldots,y_{m}\) constructed as follows: For each clause \(C_{i}\) of \(\varphi\), add the clause \(C^{\prime}_{i}=C_{i}\lor y_{i}\) to \(\varphi^{\prime}\). Also add \(m\) unit clauses \(\neg y_{i}\). That is
\[\varphi^{\prime}=(C_{1}\lor y_{1})\wedge\ldots\wedge(C_{m}\lor y_{m})\wedge \neg y_{1}\wedge\cdots\wedge\neg y_{m}\]
**Claim 3.9.1**.: \(\mathsf{optConfVal}(\varphi^{\prime})=\frac{1}{4^{m-r}}\) _where \(r\) is the maximum number of clauses that can be satisfied in \(\varphi\)._
Proof.: We show this claim by first showing that \(\mathsf{optConfVal}(\varphi^{\prime})\leq\frac{1}{4^{m-r}}\) and then exhibiting an interpretation \(\pi^{*}\) so that \(\mathsf{Conf}(\varphi^{\prime},\pi^{*})=\frac{1}{4^{m-r}}\). We claim that if \(r\) is the maximum number of clauses that can be satisfied in \(\varphi\), then \(m+r\) is the maximum number of clauses that can be satisfied in \(\varphi^{\prime}\). We will argue this by contradiction. Let \(\mathbf{a}\) be an assignment that satisfies \(>m+r\) clauses of \(\varphi^{\prime}\). Let \(s\) be the number of \(y_{i}\)s that are set to False; then exactly \(s\) of the unit clauses \(\neg y_{i}\) are satisfied, and at most \(m-s\) clauses of the form \(C_{i}\lor y_{i}\) are satisfied via \(y_{i}\) being True. However, the total number of clauses of the form \(C_{i}\lor y_{i}\) that are satisfied is \(>m+r-s\). Thus there are \(>r\) clauses of the form \(C_{i}\lor y_{i}\) that are satisfied while \(y_{i}\) is set to False. The assignment \(\mathbf{a}\), restricted to the \(x_{i}\)s, therefore satisfies more than \(r\) clauses of \(\varphi\). Hence the contradiction.
Thus from Theorem 3.7, it follows that \(\mathsf{optConfVal}(\varphi^{\prime})\leq\frac{1}{4^{m-r}}\). Now we exhibit an interpretation \(\pi^{*}\) so that \(\mathsf{Conf}(\varphi^{\prime},\pi^{*})=\frac{1}{4^{m-r}}\). Consider an assignment \(\mathbf{a}=a_{1},\ldots,a_{n}\) for \(\varphi\) that satisfies \(r\) clauses. Consider the following interpretation \(\pi^{*}\) over the variables of \(\varphi^{\prime}\): \(\pi^{*}(x_{i})=1\) if \(a_{i}=\mathrm{True}\) and \(\pi^{*}(x_{i})=0\) if \(a_{i}=\mathrm{False}\); \(\pi^{*}(y_{i})=0\) if \(C_{i}\) is satisfied by \(\mathbf{a}\), and \(\pi^{*}(y_{i})=1/2\) otherwise. For every clause \(C_{i}\) satisfied by \(\mathbf{a}\), \(\mathsf{Conf}(C_{i}\lor y_{i},\pi^{*})=1\) and \(\mathsf{Conf}(\neg y_{i},\pi^{*})=1\). For all other clauses \(C\) of \(\varphi^{\prime}\), \(\mathsf{Conf}(C,\pi^{*})=1/2\). Since \(r\) clauses of \(\varphi\) are satisfied by \(\mathbf{a}\), the number of clauses of \(\varphi^{\prime}\) for which \(\mathsf{Conf}(C,\pi^{*})=1/2\) is \(2m-2r\). Hence \(\mathsf{Conf}(\varphi^{\prime},\pi^{*})=\frac{1}{4^{(m-r)}}\). Thus \(\mathsf{optConfVal}(\varphi^{\prime})=\frac{1}{4^{m-r}}\).
Since \(\mathsf{optConfVal}(\varphi^{\prime})=1/4^{m-r}\), the value \(\mathsf{MaxSatVal}(\varphi)\) can be computed from \(\mathsf{optConfVal}(\varphi^{\prime})\).
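The reduction and the interpretation \(\pi^{*}\) above are easy to make concrete. The sketch below (ours, using the same signed-integer clause encoding as earlier, with the fresh variables \(y_{1},\ldots,y_{m}\) numbered \(n+1,\ldots,n+m\)) builds \(\varphi^{\prime}\), constructs \(\pi^{*}\) from a given assignment, and checks the value \(1/4^{m-r}\) on a tiny example.

```python
from fractions import Fraction

def reduce_formula(clauses, n):
    """phi' = (C_1 v y_1) & ... & (C_m v y_m) & ~y_1 & ... & ~y_m,
    with y_i encoded as variable n + i."""
    m = len(clauses)
    return [C + [n + i + 1] for i, C in enumerate(clauses)] + \
           [[-(n + i + 1)] for i in range(m)]

def interpretation(clauses, n, assignment):
    """pi* of Claim 3.9.1: pi*(x_i) copies the 0/1 assignment;
    pi*(y_i) = 0 if clause C_i is satisfied and 1/2 otherwise."""
    pi = {i + 1: Fraction(int(a)) for i, a in enumerate(assignment)}
    for i, C in enumerate(clauses):
        satisfied = any((l > 0) == assignment[abs(l) - 1] for l in C)
        pi[n + i + 1] = Fraction(0) if satisfied else Fraction(1, 2)
    return pi

def conf(clauses, pi):
    value = Fraction(1)
    for C in clauses:
        value *= max(pi[abs(l)] if l > 0 else 1 - pi[abs(l)] for l in C)
    return value

# phi = (x1) & (~x1): m = 2 clauses; the assignment x1 = True satisfies r = 1 of them.
phi = [[1], [-1]]
print(conf(reduce_formula(phi, 1), interpretation(phi, 1, [True])))   # 1/4 = 1/4^(m-r)
```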
While the above theorem shows that \(\mathsf{MaxSatVal}\) can be computed from \(\mathsf{optConfVal}\), the next theorem shows that a maxsat assignment can be computed from an optimal \(\mathbb{V}\)-interpretation.
**Theorem 3.10**.: \(\mathsf{MaxSat}\) _metric reduces to \(\mathsf{optConf}\)._
Proof.: Consider the same reduction as in the previous theorem. Our task is to construct a \(\mathsf{MaxSat}\) assignment for \(\varphi\), given an optimal \(\mathbb{V}\)-interpretation \(\pi\) for \(\varphi^{\prime}\). By the earlier theorem, \(\mathsf{Conf}(\varphi^{\prime},\pi)=\frac{1}{4^{m-r}}\), where \(r\) is the maximum number of satisfiable clauses of \(\varphi\).
We next establish a series of claims on the values taken by \(\pi(y_{i})\) and \(\pi(x_{i})\).
**Claim 3.10.1**.: _For all \(y_{i}\); \(\pi(y_{i})\in\{0,1/2\}\)._
Proof.: Consider a clause \(C^{\prime}_{i}=(C_{i}\lor y_{i})\) for which \(\ell_{C^{\prime}_{i}}=y_{i}\). Now the contribution of \(C^{\prime}_{i}\) and the clause \(\neg y_{i}\) to \(\mathsf{Conf}(\varphi^{\prime},\pi)\) is \(\pi(y_{i})\times(1-\pi(y_{i}))\). Since \(y_{i}\) occurs in no clause of \(\varphi^{\prime}\) other than \(C^{\prime}_{i}\) and \(\neg y_{i}\), this value is maximized when \(\pi(y_{i})=1/2\). Now consider a clause \(C^{\prime}_{j}=(C_{j}\lor y_{j})\) for which \(\ell_{C^{\prime}_{j}}\neq y_{j}\). The contribution of \(C^{\prime}_{j}\) and the clause \(\neg y_{j}\) to \(\mathsf{Conf}(\varphi^{\prime},\pi)\) is \(\pi(\ell_{C^{\prime}_{j}})\times\pi(\neg y_{j})\). Since \(\ell_{C^{\prime}_{j}}\neq y_{j}\) and there is no other clause in which \(y_{j}\) or \(\neg y_{j}\) appears, this expression is maximized when \(\pi(\neg y_{j})=1\), and thus \(\pi(y_{j})=0\).
**Claim 3.10.2**.: _For every \(i\), if \(y_{i}\) is not maximizing literal for clause \(C^{\prime}_{i}\), then \(\pi(y_{i})=0\)._
Proof.: Let \(C_{i}^{\prime}\) be a clause for which \(y_{i}\) is not the maximizing literal. Say \(\ell_{j}\) is the maximizing literal. We first consider the case \(\pi(\ell_{j})<1/2\). By the previous claim, \(\pi(y_{i})\in\{0,1/2\}\), and if \(\pi(y_{i})=1/2\), then \(\ell_{j}\) cannot be the maximizing literal for clause \(C_{i}^{\prime}\). Thus \(\pi(y_{i})=0\). Now consider the case \(\pi(\ell_{j})\geq 1/2\). Suppose that \(\pi(y_{i})=1/2\). Now the contribution of the clauses \(C_{i}^{\prime}\) and \(\neg y_{i}\) to \(\mathsf{Conf}(\varphi^{\prime},\pi)\) is \(\pi(\ell_{j})/2\). However, if we change \(\pi(y_{i})\) to \(0\), then the contribution of these clauses becomes \(\pi(\ell_{j})\), and this would contradict the optimality of \(\pi\). Thus by Claim 3.10.1, \(\pi(y_{i})=0\).
**Claim 3.10.3**.: _For all \(x_{i}\), if \(x_{i}\) or \(\neg x_{i}\) is a maximizing literal, then \(\pi(x_{i})\in\{0,1,1/2\}\)_
Proof.: We argue for the case when \(x_{i}\) is a maximizing literal; the case when \(\neg x_{i}\) is a maximizing literal follows by similar arguments. Suppose that \(x_{i}\) is a maximizing literal and \(0<\pi(x_{i})<1/2\). It must be the case that \(\neg x_{i}\) is also a maximizing literal, since otherwise setting \(\pi(x_{i})=1\) would increase the trust value. Suppose \(x_{i}\) is a maximizing literal for \(a\) many clauses and \(\neg x_{i}\) is a maximizing literal for \(b\) many clauses. If \(a>b\), then we can obtain a \(\mathbb{V}\)-interpretation with a strictly larger trust value by swapping \(\pi(x_{i})\) with \(\pi(\neg x_{i})\), contradicting the optimality of \(\pi\). If \(a\) equals \(b\), then \(\pi(x_{i})\) must be equal to \(1/2\), as \(x^{a}(1-x)^{a}\) is maximized for \(x=1/2\). Thus \(a<b\). For every clause \(C_{j}^{\prime}\) for which \(x_{i}\) or \(\neg x_{i}\) is the maximizing literal, it must be the case that \(\pi(y_{j})=0\), by Claim 3.10.2. Let \(\mathcal{C}\) be the collection of all clauses \(C_{j}^{\prime}\), together with the corresponding clauses \(\neg y_{j}\), for which either \(x_{i}\) or \(\neg x_{i}\) is the maximizing literal. The contribution of these clauses to \(\mathsf{Conf}(\varphi^{\prime},\pi)\) is \(\pi(x_{i})^{a}\times(1-\pi(x_{i}))^{b}\times 1^{a+b}\).
We now construct a new \(\mathbb{V}\)-interpretation \(\pi^{\prime}\) that will contradict the optimality of \(\pi\). Set \(\pi^{\prime}(x_{i})=0\); for every clause \(C_{j}^{\prime}\in\mathcal{C}\) in which \(x_{i}\) is the maximizing literal, set \(\pi^{\prime}(y_{j})=1/2\); and let \(\pi^{\prime}\) agree with \(\pi\) on all other variables. Now the contribution of the clauses from \(\mathcal{C}\) to \(\mathsf{Conf}(\varphi^{\prime},\pi^{\prime})\) is \((\frac{1}{2})^{a}\times 1^{b}\times(\frac{1}{2})^{a}\times 1^{b}\).
Since \(x^{a}(1-x)^{b}<1/4^{a}\) when \(a<b\) and \(0<x<1\),
\[(\frac{1}{2})^{a}\times 1^{b}\times(\frac{1}{2})^{a}\times 1^{b}>\pi(x_{i})^{a} \times(1-\pi(x_{i}))^{b}\times 1^{a+b}\]
Thus \(\mathsf{Conf}(\varphi^{\prime},\pi^{\prime})>\mathsf{Conf}(\varphi^{\prime},\pi)\), which is a contradiction. Thus if \(\pi(x_{i})<1/2\), then \(\pi(x_{i})=0\); a similar argument shows that if \(\pi(x_{i})>1/2\), then \(\pi(x_{i})=1\).
**Claim 3.10.4**.: _For every \(x_{i}\) with \(\pi(x_{i})=1/2\), \(x_{i}\) and \(\neg x_{i}\) are maximizing literals for exactly the same number of clauses._
Proof.: Let \(\mathcal{C}\) be the collection of clauses for which either \(x_{i}\) or \(\neg x_{i}\) is the maximizing literal. Suppose that \(x_{i}\) is the maximizing literal for \(a\) clauses and \(\neg x_{i}\) is the maximizing literal for \(b\) clauses. If \(a\neq b\), then replacing \(\pi(x_{i})=1/2\) by \(\frac{a}{a+b}\), the unique maximizer of \(x^{a}(1-x)^{b}\), would strictly increase the trust value, contradicting the optimality of \(\pi\).
We will show how to construct a \(\mathsf{MaxSat}\) assignment from \(\pi\): If \(\pi(x_{i})=0\), set the truth value of \(x_{i}\) to False, else set the truth value of \(x_{i}\) to True.
By Claim 3.10.3, \(\pi(x_{i})\in\{0,1/2,1\}\) whenever \(x_{i}\) or \(\neg x_{i}\) is a maximizing literal. Let \(H\) be the number of clauses whose maximizing literal \(\ell\) is an \(x\)-literal with \(\pi(\ell)=1\). Note that the above truth assignment satisfies all these \(H\) clauses. Let \(N\) be the number of clauses whose maximizing literal \(\ell\) is an \(x\)-literal with \(\pi(\ell)=1/2\). By Claim 3.10.4, in exactly \(N/2\) of these clauses a positive literal is maximizing, and thus all these \(N/2\) clauses are satisfied by our truth assignment. Thus the total number of clauses satisfied by the truth assignment is at least \(N/2+H\). Let \(Y\) be the number of clauses in which \(y_{i}\) is the maximizing literal. By Claim 3.10.1, \(\pi(y_{i})=1/2\) when \(y_{i}\) is a maximizing literal. Thus
\[\mathsf{Conf}(\varphi^{\prime},\pi)=1^{H}\times(\frac{1}{2})^{N}\times(\frac{ 1}{2})^{2Y}=\frac{1}{4^{N/2+Y}}=\frac{1}{4^{m-r}}\]
The last equality follows from Claim 3.9.1. Thus \(m-r=N/2+Y\), combining this with \(m=H+N+Y\), we obtain that \(N/2+H=r\). Thus the truth assignment constructed will satisfy \(r\) clauses and is thus a MaxSat assignment.
## 4 Approximating \(\mathsf{optConfVal}\)
We study the problem of approximating \(\mathsf{optConfVal}\) efficiently. Below, a \(k\)-SAT formula is a CNF formula with _exactly_ \(k\) distinct variables in any clause. We start with the following lemma.
**Lemma 4.1**.: _Let \(a_{1},\cdots,a_{n}\) be an assignment that satisfies \(r\) of the \(m\) clauses of a CNF formula \(\varphi(x_{1},\cdots,x_{n})\). There is an interpretation \(\pi\) so that \(\mathsf{Conf}(\varphi,\pi)\) is at least \(\left(\frac{m-r}{m}\right)^{m-r}\left(\frac{r}{m}\right)^{r}\)._
Proof.: If \(a_{i}=1\), set \(\pi(x_{i})=(1-\epsilon)\), and if \(a_{i}=0\), set \(\pi(x_{i})=\epsilon\). For every clause \(C_{i}\) that is satisfied, the max value is at least \((1-\epsilon)\), and for every clause that is not satisfied, the max value is at least \(\epsilon\). Thus the trust value obtained by this interpretation is at least \((1-\epsilon)^{r}\epsilon^{m-r}\), and this is maximized when \(\epsilon=\frac{m-r}{m}\) by Proposition 1.
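A direct rendering of this construction (ours; the helper names are not from the paper) is the following.

```python
def lemma_4_1_interpretation(assignment, m, r):
    """pi from the proof of Lemma 4.1 with eps = (m - r)/m: variables set to
    True get value 1 - eps, variables set to False get eps."""
    eps = (m - r) / m
    return {i + 1: (1 - eps) if a else eps for i, a in enumerate(assignment)}

def lemma_4_1_bound(m, r):
    """Guaranteed trust value ((m - r)/m)^(m - r) * (r/m)^r."""
    return ((m - r) / m) ** (m - r) * (r / m) ** r

# An assignment satisfying r = 7 of m = 8 clauses guarantees roughly 0.686^8:
print(lemma_4_1_bound(8, 7))   # ~0.0491
```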
Hence, for example, if \(\varphi\) is a 3-SAT formula, since a random assignment satisfies a \(7/8\) fraction of the clauses in expectation, there is an assignment with \(r\geq 7m/8\), and by Lemma 4.1, \(\mathsf{optConfVal}(\varphi)>0.686^{m}\). The following lemma shows that one can get a better lower bound on \(\mathsf{optConfVal}\) in terms of the clause sizes for CNF formulae.
**Lemma 4.2**.: _For every CNF formula \(\varphi\), \(\mathsf{optConfVal}(\varphi)\geq e^{-\sum_{i}\frac{1}{k_{i}}}\) where \(k_{i}\) is the arity of the \(i\)'th clause in \(\varphi\)._
Proof.: Consider the interpretation \(\pi\) that assigns every variable \(x_{i}\) a uniformly chosen value in the interval \([0,1]\). Let the clauses in \(\varphi\) be \(C_{1},\ldots,C_{m}\). Then:
\[\log\mathbb{E}[\mathsf{Conf}(\varphi,\pi)] \geq\mathbb{E}\log\mathsf{Conf}(\varphi,\pi)\;(\text{Jensen's Inequality})\] \[=\sum_{i}\mathbb{E}\left[\log\max_{\ell\in C_{i}}\pi(\ell)\right]\] \[=-\sum_{i}\int_{-\infty}^{0}\Pr\left[\log\max_{\ell\in C_{i}}\pi (\ell)\leq t\right]dt\] \[=-\sum_{i}\int_{-\infty}^{0}\Pr\left[\max_{\ell\in C_{i}}\pi( \ell)\leq e^{t}\right]dt\] \[=-\sum_{i}\int_{-\infty}^{0}e^{k_{i}t}dt=-\sum_{i}\frac{1}{k_{i}}\]
Hence, there exists a choice of \(\pi\) achieving this trust value.
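The expectation computed in this proof is easy to check numerically. The following Monte-Carlo experiment (ours, not from the paper) samples uniformly random interpretations for an ad-hoc 3-CNF and compares the empirical mean of \(\log\mathsf{Conf}\) with the exact value \(-\sum_{i}\frac{1}{k_{i}}\).

```python
import math, random

def mean_log_conf(clauses, n_vars, trials=20000):
    """Monte-Carlo estimate of E[log Conf(phi, pi)] over uniformly random pi."""
    total = 0.0
    for _ in range(trials):
        pi = [random.random() for _ in range(n_vars)]
        for C in clauses:
            best = max(pi[abs(l) - 1] if l > 0 else 1 - pi[abs(l) - 1] for l in C)
            total += math.log(best)
    return total / trials

phi = [[1, 2, 3], [-1, 4, 5], [2, -4, 5], [-2, -3, -5]]   # four clauses of arity 3
print(mean_log_conf(phi, 5))              # ~ -4/3
print(-sum(1 / len(C) for C in phi))      # exact value of the bound: -1.333...
```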
This yields a probabilistic algorithm. For example, if \(\varphi\) is a \(3\)-SAT formula, \(\mathsf{optConfVal}(\varphi)>0.716^{m}\), thus improving on the result of Lemma 4.1. In fact, we can design a deterministic polynomial time algorithm that finds an interpretation achieving the trust value guaranteed by Lemma 4.2, using the well-known 'method of conditional expectation' to derandomize the construction in the proof (For example, see [1, 1]).
**Theorem 4.3**.: _There is a polynomial-time, \(e^{-m/k}\)-approximation algorithm for \(\mathsf{optConf}\), when the input formulas are \(k\)-CNF formulas with \(m\)-clauses._
Proof.: Arbitrarily ordering the variables \(x_{1},x_{2},\ldots,x_{n}\), the idea is to sequentially set \(\pi^{*}(x_{1}),\pi^{*}(x_{2}),\ldots,\pi^{*}(x_{n})\) ensuring that for every \(i\):
\[\operatorname*{\mathbb{E}}_{\pi\leftarrow U[0,1]^{n}}\left[\log\mathsf{Conf}( \varphi,\pi)\mid\pi(x_{j})=\pi^{*}(x_{j})\;\forall j\leq i\right]\geq-\sum_{i }\frac{1}{k_{i}}.\] (*)
Assuming \(\pi^{*}(x_{1}),\ldots,\pi^{*}(x_{i-1})\) have already been fixed, we show how to choose \(\pi^{*}(x_{i})\) satisfying the above. We use \(\pi_{<i}\) to denote \(\pi(x_{1})\cdots\pi(x_{i-1})\). For a clause \(C\), let \(\alpha=\max_{\ell\in C\cap\{x_{j},\bar{x}_{j}:j<i\}}\pi^{*}(\ell)\), and suppose \(x_{i}\in C\). Then:
\[\operatorname*{\mathbb{E}}_{\pi}\left[\log\max_{\ell\in C}\pi(\ell)\mid\pi_{<i}=\pi_{<i}^{*},\pi(x_{i})=p\right] =-\int_{-\infty}^{0}\Pr_{\pi}\left[\log\max_{\ell\in C}\pi(\ell)\leq t\mid\pi_{<i}=\pi_{<i}^{*},\pi(x_{i})=p\right]dt\] \[=-\int_{\log\max(\alpha,p)}^{0}\Pr_{\pi}\left[\log\max_{\ell\in C\cap\{x_{j},\bar{x}_{j}:j>i\}}\pi(\ell)\leq t\right]dt\] \[=-\frac{1}{k^{\prime}}\left(1-\max(\alpha,p)^{k^{\prime}}\right)\]
where \(k^{\prime}\) is the number of literals in the clause \(C\) involving variables \(x_{i+1},\ldots,x_{n}\). One can similarly evaluate the conditional expectation in the cases \(\bar{x}_{i}\in C\) and \(C\cap\{x_{i},\bar{x}_{i}\}=\emptyset\).
Summing up over all the clauses \(C\), we get that
\[\operatorname*{\mathbb{E}}_{\pi}\left[\log\mathsf{Conf}(\varphi,\pi)\mid\pi_{ <i}=\pi_{<i}^{*},\pi^{*}(x_{i})=p\right]\]
is a continuous function of \(p\) that is a piecewise polynomial in at most \(m\) intervals. In polynomial time1, we can find a value of \(p\) that maximizes this function. By induction on \(i\), the maximum value of this function is at least \(-\sum_{i}\frac{1}{k_{i}}\), and hence (*) is satisfied. This completes the description of the algorithm.
Footnote 1: For simplicity, we ignore issues of precision here, but the error can be made inversely polynomial in \(n\).
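For concreteness, here is a sketch (ours) of the derandomized procedure for exactly-\(k\) CNF formulas. The clause terms are precisely the conditional expectations computed in the proof; the one simplification is that the maximization over \(p\) is done by grid search rather than by the exact piecewise-polynomial optimization, so the guarantee holds only up to the precision of the grid.

```python
import math

def clause_term(v, k_future):
    """E[log max(v, U_1, ..., U_k')] for k' i.i.d. uniform future literals,
    where v is the value contributed by the already-fixed literals."""
    if k_future == 0:
        return math.log(v) if v > 0 else float("-inf")
    return -(1.0 - v ** k_future) / k_future

def derandomize(clauses, n_vars, grid=1001):
    """Fix pi*(x_1), ..., pi*(x_n) one at a time, each time maximizing the
    conditional expectation of log Conf (the algorithm of Theorem 4.3)."""
    pi = {}
    for i in range(1, n_vars + 1):
        def g(p):
            total = 0.0
            for C in clauses:
                fixed = [pi[abs(l)] if l > 0 else 1 - pi[abs(l)]
                         for l in C if abs(l) < i]
                v = max(fixed, default=0.0)
                if i in C:
                    v = max(v, p)
                if -i in C:
                    v = max(v, 1 - p)
                total += clause_term(v, sum(1 for l in C if abs(l) > i))
            return total
        pi[i] = max((j / (grid - 1) for j in range(grid)), key=g)
    return pi

phi = [[1, 2, 3], [-1, 4, 5], [2, -4, 5], [-2, -3, -5]]
pi_star = derandomize(phi, 5)
value = math.prod(max(pi_star[abs(l)] if l > 0 else 1 - pi_star[abs(l)] for l in C)
                  for C in phi)
print(value >= math.exp(-4 / 3))   # True (up to grid precision): value >= exp(-sum 1/k_i)
```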
Next, we show that the approximation factor \(e^{-m/k}\) can not be significantly improved.
We use the following result on hardness of approximating \(\mathsf{MaxSat}\) established by Hastad [10].
**Theorem 4.4** ([10]).: _For any \(\varepsilon>0\) and any \(k\geq 3\) it is NP-hard to distinguish satisfiable \(k\)-SAT formulas from \(k\)-SAT formulae with fewer than \(m(1-2^{-k}+\varepsilon)\) satisfiable clauses._
We are now ready to show the following.
**Theorem 4.5**.: _There is no polynomial-time \(\frac{1}{4^{m(2^{-k}-\varepsilon)}}\)-approximation algorithm for \(\mathsf{optConf}\) for \(k\)-SAT formulae, unless \(\mathrm{P}=\mathrm{NP}\)._
Proof.: Assuming such an approximation algorithm \(A\) exists, we contradict Hastad's Theorem (Theorem 4.4). Consider the following algorithm \(A^{\prime}\) that, on input a \(k\)-SAT formula \(\varphi\), runs \(A(\varphi)\). If \(A\) outputs a value that is \(\geq\frac{1}{4^{m(2^{-k}-\varepsilon)}}\), then \(A^{\prime}\) outputs YES, otherwise it outputs NO. Suppose \(\varphi\) is satisfiable; then \(\mathsf{optConfVal}(\varphi)=1\). Hence \(A\) will output a value \(\geq\frac{1}{4^{m(2^{-k}-\varepsilon)}}\), and thus \(A^{\prime}\) outputs YES. Suppose the maximum number of satisfiable clauses for \(\varphi\) is \(\leq m(1-2^{-k}+\varepsilon)\). By Theorem 3.7,
\[\mathsf{optConf}(\varphi)<\frac{1}{4^{m-m(1-2^{-k}+\varepsilon)}}=\frac{1}{4^{m(2^ {-k}-\varepsilon)}}\]
Hence output of \(A\) is \(<\frac{1}{4^{m(2^{-k}-\varepsilon)}}\) and hence \(A^{\prime}\) will output NO.
Thus \(A^{\prime}\) contradicts Theorem 4.4, unless \(\mathrm{P}=\mathrm{NP}\).
Thus, for example for \(3\)-SAT formulas, while we have a polynomial-time, \(0.716^{m}\)-approximation algorithm (by Theorem 4.3), we cannot expect an efficient \(0.845^{m}\)-approximation algorithm by the above result unless \(\mathrm{P}\) equals \(\mathrm{NP}\). It remains an interesting open problem to determine the optimal approximation ratio for this problem achievable by a polynomial time algorithm.
## 5 Complexity of Access Maximization
In this section, we study the optimization problems for the access control semiring \(\mathbb{A}_{k}=([k],\max,\min,0,k)\). We refer to the corresponding computational problems as \(\mathsf{optAccessVal}\) and \(\mathsf{optAccess}\). For this section we first assume the negation function is the additive inverse modulo \(k\). That is \(\dal(a)=b\) such that \(a+b\equiv 0\pmod{k}\).
**Theorem 5.1**.: _Let \(\varphi(x_{1},\cdots x_{n})\) be a propositional formula in negation normal form and \(\mathbb{A}_{k}=([k],\max,\min,0,k)\). The following statement holds._
* _If_ \(\varphi\) _is satisfiable, then_ \(\mathsf{optAccessVal}(\varphi)=k\)_._
* _If_ \(\varphi\) _is not satisfiable, then_ \(\mathsf{optAccessVal}(\varphi)=\lfloor\frac{k}{2}\rfloor\)_._
Proof.: We will first prove it for the case when \(\varphi\) is in CNF form, i.e., \(\varphi=C_{1}\wedge\cdots\wedge C_{m}\). Suppose that the formula is satisfiable and \(a_{1}\cdots a_{n}\) is a satisfying assignment to the variables \(x_{1},x_{2},\cdots,x_{n}\). Consider the interpretation \(\pi\) defined as follows: If \(a_{i}\) is true, then \(\pi(x_{i})=k\), else \(\pi(x_{i})=\dal(k)\). Consider a clause \(C\); since the assignment is satisfying, there exists a literal \(\ell_{i}\) (either \(x_{i}\) or \(\neg x_{i}\) for some \(i\)) in \(C\) such that \(\ell_{i}\) is set to true. If \(\ell_{i}=x_{i}\), then \(\pi(x_{i})=k\) and \(\mathsf{Sem}(x_{i},\pi)=k\). If \(\ell_{i}=\neg x_{i}\), then \(\pi(x_{i})=\dal(k)=0\) and \(\mathsf{Sem}(\neg x_{i},\pi)=\dal(0)=k\). Since \(C\) is a disjunction, \(\mathsf{Sem}(C,\pi)=k\). Thus for every clause \(C_{i}\), \(\mathsf{Sem}(C_{i},\pi)=k\). Since \(\varphi\) is a conjunction of \(C_{1},\cdots,C_{m}\), it follows that \(\mathsf{Sem}(\varphi,\pi)=k\).
For the proof of the second item, first assume that \(k\) is even; the proof when \(k\) is odd is very similar. Note that in this case, \(\dal(k/2)=k/2\). Let \(\varphi=C_{1}\wedge\cdots\wedge C_{m}\) be an unsatisfiable formula. Consider an interpretation \(\pi\) where \(\pi(x_{i})=k/2\) for every \(1\leq i\leq n\). Clearly, for this interpretation, \(\mathsf{Sem}(\varphi,\pi)=k/2\). Suppose, towards a contradiction, that \(\pi^{\prime}\) is an interpretation with \(\mathsf{Sem}(\varphi,\pi^{\prime})>k/2\). Consider the following truth assignment: \(a_{i}\) is true if \(\pi^{\prime}(x_{i})>k/2\), else \(a_{i}\) is false. We will establish that this assignment satisfies \(\varphi\), contradicting the unsatisfiability of \(\varphi\). This establishes that \(\mathsf{optAccessVal}(\varphi)=k/2\).
Note that for every clause \(C_{j}\), \(1\leq j\leq m\), \(\mathsf{Sem}(C_{j},\pi^{\prime})>k/2\). Fix a clause \(C\); since \(\mathsf{Sem}(C,\pi^{\prime})>k/2\), there exists a literal \(\ell_{i}\) in \(C\) such that \(\mathsf{Sem}(\ell_{i},\pi^{\prime})>k/2\). If \(\ell_{i}=x_{i}\), then \(\mathsf{Sem}(x_{i},\pi^{\prime})>k/2\), which implies that \(\pi^{\prime}(x_{i})>k/2\). Thus \(a_{i}\) is true and the clause \(C\) is satisfied by the assignment. If \(\ell_{i}=\neg x_{i}\), then \(\mathsf{Sem}(\neg x_{i},\pi^{\prime})>k/2\). Thus \(\dal(\pi^{\prime}(x_{i}))>k/2\). By the definition of \(\dal\), we have \(\pi^{\prime}(x_{i})<k/2\). Thus \(a_{i}\) is set to false and the clause \(C\) is again satisfied. This proves that the assignment \(a_{1},\cdots,a_{n}\) satisfies the formula \(\varphi(x_{1},\cdots,x_{n})\).
The case where the general formula is in the negation normal form follows by similar ideas using the notion of proof trees as in the case of Viterbi semiring.
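Both cases of Theorem 5.1 are easy to check on small examples. The sketch below (ours) evaluates CNF formulas over \(\mathbb{A}_{k}\), implementing the negation as \(a\mapsto k-a\), which is how the negation function is used in the proof above.

```python
def access_sem(clauses, pi, k):
    """Sem(phi, pi) over the access-control semiring ([k], max, min, 0, k)
    for a CNF formula, with negation a -> k - a."""
    return min(max(pi[abs(l)] if l > 0 else k - pi[abs(l)] for l in C)
               for C in clauses)

k = 6
# Satisfiable: (x1 v x2) & (~x1); the 0/k interpretation built from the
# satisfying assignment (x1 False, x2 True) attains the top value k.
print(access_sem([[1, 2], [-1]], {1: 0, 2: k}, k))    # 6
# Unsatisfiable: (x1) & (~x1); the constant k/2 interpretation attains k/2,
# which by Theorem 5.1 is optimal.
print(access_sem([[1], [-1]], {1: k // 2}, k))        # 3
```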
For a general negation function, we can establish an analogous theorem. For this, we define the notion of the _index of negation_. Given a negation function \(\dal\), its index denoted by \(Index(\dal)\) is the largest \(\ell\) for which there exists \(a\in[k]\), such that both \(a\) and \(\dal(a)\) are at least \(\ell\).
**Theorem 5.2**.: _Let \(\varphi(x_{1},\cdots x_{n})\) be a propositional formula in negation normal form and \(\mathbb{A}_{k}=([k],\max,\min,0,k)\). The following statement holds._
* _If_ \(\varphi\) _is satisfiable, then_ \(\mathsf{optAccessVal}(\varphi)=k\)_._
* _If_ \(\varphi\) _is not satisfiable, then_ \(\mathsf{optAccessVal}(\varphi)=Index(\dal)\)_._
The following is a corollary to the above result and its proof; it states that the complexity of the optimization problems over the access control semiring is equivalent to their complexity over the Boolean semiring.
**Theorem 5.3**.: _The problem \(\mathsf{optAccessVal}\) and \(\mathsf{SAT}\) are equivalent under metric reductions. Similarly, the problem \(\mathsf{optAccess}\) and the problem of computing a satisfying assignment of a given Boolean formula are equivalent under metric reductions._
## 6 Conclusion
In this work, we provided a comprehensive study of the computational complexity of \(\mathsf{optSem}\) and the related problem \(\mathsf{optSemVal}\) over various semirings such as Viterbi semiring, tropical semiring, access control semiring and fuzzy semiring, from both an algorithmic and a complexity-theoretic viewpoint. An exciting recent development in the field of CSP/SAT solving has been the development of solvers for \(\mathsf{LexSAT}\), which seeks to find the smallest lexicographic satisfying assignment of a formula [16]. In this regard, Theorem 3.2 opens up exciting directions of future work to develop efficient techniques for \(\mathsf{optConf}\).
## 7 Acknowledgements
We thank Val Tannen for introducing us to the world of semiring semantics and for helpful conversations during the nascent stages of the project. We thank the anonymous reviewers of AAAI-23 for valuable comments. This research is supported by the National Research Foundation under the NRF Fellowship Programme [NRF-NRFFAI1-2019-0004] and Campus for Research Excellence and Technological Enterprise (CREATE) program. Bhattacharyya was supported in part by the NRF Fellowship Programme [NRF-NRFFAI1-2019-0002] and an Amazon Research Award. Vinod was supported in part by NSF CCF-2130608 and NSF HDR:TRIPODS-1934884 awards. Pavan was supported in part by NSF CCF-2130536, and NSF HDR:TRIPODS-1934884 awards.
|
2307.09277 | Recurrence Coefficients for Orthogonal Polynomials with a Logarithmic
Weight Function | We prove an asymptotic formula for the recurrence coefficients of orthogonal
polynomials with orthogonality measure $\log \bigl(\frac{2}{1-x}\bigr) {\rm
d}x$ on $(-1,1)$. The asymptotic formula confirms a special case of a
conjecture by Magnus and extends earlier results by Conway and one of the
authors. The proof relies on the Riemann-Hilbert method. The main difficulty in
applying the method to the problem at hand is the lack of an appropriate local
parametrix near the logarithmic singularity at $x = +1$. | Percy Deift, Mateusz Piorkowski | 2023-07-18T14:15:24Z | http://arxiv.org/abs/2307.09277v2 | # Recurrence coefficients for orthogonal polynomials with a logarithmic weight function
###### Abstract.
We prove an asymptotic formula for the recurrence coefficients of orthogonal polynomials with orthogonality measure \(\log\big{(}\frac{2}{1-x}\big{)}dx\) on \((-1,1)\). The asymptotic formula confirms a special case of a conjecture by A. Magnus and extends earlier results by T. O. Conway and one of the authors. The proof relies on the Riemann-Hilbert method. The main difficulty in applying the method to the problem at hand is the lack of an appropriate local parametrix near the logarithmic singularity at \(x=+1\).
Key words and phrases:orthogonal polynomials; Riemann-Hilbert problems; recurrence coefficients; steepest descent method 2020 Mathematics Subject Classification: 42C05, 34M50, 45E05, 45M05
## 1. Introduction
### Background
In this paper we study orthogonal polynomials with orthogonality measure \(w(x)dx\) given by
\[w(x)dx=\log\Big{(}\frac{2}{1-x}\Big{)}dx,\qquad x\in[-1,1). \tag{1.1}\]
Note that \(w(x)\) has a logarithmic singularity for \(x\to+1\) and a simple zero at \(x=-1\). Denote by \(\{p_{n}\}_{n=0}^{\infty}\) the corresponding orthonormal polynomials,
\[\int_{-1}^{1}p_{m}(x)p_{n}(x)w(x)dx=\delta_{mn},\qquad m,n\in\mathbb{N}.\]
The polynomials \(\{p_{n}\}_{n=0}^{\infty}\) satisfy the three terms recurrence relation given by
\[xp_{n}(x)=b_{n}p_{n+1}(x)+a_{n}p_{n}(x)+b_{n-1}p_{n-1}(x),\qquad n\geq 1,\]
where \(a_{n}\in\mathbb{R}\) and \(b_{n}>0\). Note that our notation for \(a_{n}\), \(b_{n}\) is the same as in [3], [4] but opposite to the one in [11], [14]. Listed below are the first few recurrence coefficients for the weight function \(w\).
The large \(n\) asymptotics of the recurrence coefficients \(a_{n}\), \(b_{n}\) are the main focus of the present work. To be precise we will prove the following result.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \(n\) & 0 & 1 & 2 & 3 & 4 \\ \hline \(a_{n}\) & \(\frac{1}{2}\) & \(\frac{1}{14}\) & \(\frac{263}{9058}\approx 0.029\) & \(\frac{1995511}{126347454}\approx 0.016\) & \(\frac{436364251361}{4388656767352}\approx 0.010\) \\ \hline \(b_{n}^{2}\) & \(\frac{7}{36}\) & \(\frac{2588}{11025}\) & \(\frac{71180289}{293026300}\approx 0.243\) & \(\frac{1329399823424}{5405644687527}\approx 0.246\) & \(\frac{39672481023099631594375}{160381475127054568640484}\approx 0.247\) \\ \end{tabular}
\end{table}
Table 1. First recurrence coefficients for the weight function \(w\).
**Theorem 1.1**.: _The recurrence coefficients \(\{a_{n}\}_{n=0}^{\infty}\), \(\{b_{n}\}_{n=0}^{\infty}\) of the orthogonal polynomials with orthogonality measure \(w(x)dx\), with \(w\) given in (1.1), satisfy:_
\[a_{n}=\frac{1}{4n^{2}}-\frac{3}{16n^{2}\log^{2}n}+O\Big{(}\frac{1}{n^{2}\log^{ 3}n}\Big{)} \tag{1.2}\]
_and_
\[b_{n}=\frac{1}{2}-\frac{1}{16n^{2}}-\frac{3}{32n^{2}\log^{2}n}+O\Big{(}\frac{1 }{n^{2}\log^{3}n}\Big{)}. \tag{1.3}\]
Comparing (1.2) and (1.3) with Table 1 we see that, already for \(n=4\), \(a_{n}\) and \(b_{n}^{2}\) are close to their limiting values, \(0\) and \(\frac{1}{4}\) respectively, up to the second digit.
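The entries of Table 1, and the approach of \(a_{n}\) and \(b_{n}^{2}\) to their limits \(0\) and \(\frac{1}{4}\), can be reproduced numerically. The following Python sketch is ours and not part of the paper: it runs the classical Stieltjes procedure for the monic orthogonal polynomials of the weight \(w(x)=\log\Big{(}\frac{2}{1-x}\Big{)}\), with inner products evaluated by adaptive quadrature (NumPy and SciPy are assumed available); the integrable logarithmic singularity at \(x=+1\) causes no difficulty at these low degrees.

```python
import numpy as np
from scipy.integrate import quad

def weight(x):
    return np.log(2.0 / (1.0 - x))          # w(x) = log(2/(1-x)) on (-1, 1)

def inner(p, q):
    """<p, q> = int_{-1}^{1} p(x) q(x) w(x) dx; p, q are coefficient arrays
    (highest degree first, as used by numpy.polyval)."""
    f = lambda x: np.polyval(p, x) * np.polyval(q, x) * weight(x)
    return quad(f, -1.0, 1.0, limit=200)[0]

def recurrence_coefficients(n_max):
    """Stieltjes procedure: monic pi_{k+1} = (x - alpha_k) pi_k - beta_k pi_{k-1};
    for the orthonormal recurrence, a_n = alpha_n and b_n^2 = beta_{n+1}."""
    pi_prev, pi_cur = np.array([0.0]), np.array([1.0])     # pi_{-1}, pi_0
    norm_cur = inner(pi_cur, pi_cur)
    beta = 0.0
    a, b2 = [], []
    for _ in range(n_max + 1):
        x_pi = np.concatenate([pi_cur, [0.0]])             # multiply pi_k by x
        alpha = inner(x_pi, pi_cur) / norm_cur
        pi_next = np.polysub(np.polysub(x_pi, alpha * pi_cur), beta * pi_prev)
        norm_next = inner(pi_next, pi_next)
        a.append(alpha)
        b2.append(norm_next / norm_cur)                    # = b_k^2
        beta = norm_next / norm_cur
        pi_prev, pi_cur, norm_cur = pi_cur, pi_next, norm_next
    return a, b2

a, b2 = recurrence_coefficients(4)
print(a)    # ~ [0.5, 0.0714, 0.029, 0.016, 0.010]   (cf. Table 1)
print(b2)   # ~ [0.194, 0.235, 0.243, 0.246, 0.247]
```

Such a quadrature-based computation is of course only adequate for small \(n\); the asymptotic regime of Theorem 1.1 is beyond this kind of naive evaluation.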
Theorem 1.1 is a special case of a conjecture by A. Magnus, who analyzed in [14] a few examples of continuous weight functions with logarithmic singularities at the edge and in the bulk of the support, among others \(-\log(t)\) with \(t\in(0,1]\) which is equivalent to (1.1) after the affine change of variables \(x=1-2t\) (see the remark below). More generally for the case of a logarithmic singularity at the edge of the support, Magnus considered \(w_{M}(x)\), now supported, without loss of generality, on \(x\in(-1,1)\), satisfying the following two conditions:
* \(w_{M}(x)/(1+x)^{\beta}\) has a positive finite limit for \(x\to-1\),
* \(w_{M}(x)/[-(1-x)^{\alpha}\log(1-x)]\) has a positive finite limit for \(x\to+1\),
where \(\alpha,\beta>-1\). He conjectured based on numerical evidence that the recurrence coefficients \(a_{M,n}\) and \(b_{M,n}\) of the corresponding orthogonal polynomials satisfy for \(n\to\infty\),
\[a_{M,n}=\frac{\beta^{2}-\alpha^{2}}{4n^{2}}+\frac{2B}{n^{2}\log n}+\frac{2C}{ n^{2}\log^{2}n}+o((n\log n)^{-2}) \tag{1.4}\]
and
\[b_{M,n}=\frac{1}{2}-\frac{\alpha^{2}+\beta^{2}-\frac{1}{2}}{8n^{2}}+\frac{B}{ n^{2}\log n}+\frac{C}{n^{2}\log^{2}n}+o((n\log n)^{-2}). \tag{1.5}\]
Additionally it was conjectured that for the special case \(w_{M}=w\) from (1.1), \(B=0\) and \(C=-\frac{3}{32}\) holds, which is confirmed by Theorem 1.1. Note that for \(w\) we have \(\alpha=0\) and \(\beta=1\).
**Remark 1.2**.: _For a general orthogonality measure \(d\mu(x)\) supported on \(x\in(-1,1)\) with recurrence coefficients \(A_{n}\), \(B_{n}\), the orthogonality measure defined by \(d\widetilde{\mu}(t):=d\mu(1-2t)\) with \(t\in(0,1)\) leads to recurrence coefficients given by_
\[\widetilde{A}_{n}=\frac{1}{2}-\frac{A_{n}}{2},\qquad\widetilde{B}_{n}=\frac{B _{n}}{2}.\]
### State of the art
Earlier work on Magnus' conjecture was done by T. O. Conway and one of the authors in [3] using Riemann-Hilbert (R-H) techniques. The weight function considered therein had the form
\[w_{k}(x)=\log\Big{(}\frac{2k}{1-x}\Big{)},\qquad x\in[-1,1),\quad k>1. \tag{1.6}\]
The authors prove the conjecture for this special case corresponding to \(\alpha=\beta=0\), and also obtain \(B=0\) and \(C=-\frac{3}{32}\), further suggesting that these constants do not depend on the behaviour of the weight function away from the logarithmic singularity:
**Theorem 1.3**.: (Conway, D. [3]) _The recurrence coefficients \(\{a_{n}^{(k)}\}_{n=0}^{\infty}\), \(\{b_{n}^{(k)}\}_{n=0}^{\infty}\) of the orthogonal polynomials with orthogonality measure \(w_{k}(x)dx\), with \(w_{k}\) given in (1.6), satisfy:_
\[a_{n}^{(k)}=-\frac{3}{16n^{2}\log^{2}n}+O\Big{(}\frac{1}{n^{2}\log^{3}n}\Big{)}\]
_and_
\[b_{n}^{(k)}=\frac{1}{2}+\frac{1}{16n^{2}}-\frac{3}{32n^{2}\log^{2}n}+O\Big{(} \frac{1}{n^{2}\log^{3}n}\Big{)}.\]
Note that for \(k>1\), the weight function \(w_{k}(x)\) has a positive finite value at \(x=-1\), while \(\lim_{k\to 1+}w_{k}(x)=w(x)\) for \(x\in[-1,1)\).
The main difficulty encountered in [3] was the lack of a known parametrix in the vicinity of the logarithmic singularity, a key ingredient in the usual nonlinear steepest descent analysis. Hence, the authors relied instead on a technically involved comparison to the Legendre problem with weight function \(w_{Leg}(x)\equiv 1\), \(x\in(-1,1)\). Surprisingly, their argument could not be generalized in an obvious way to the weight function \(w=\lim_{k\to 1}w_{k}\), due to the appearance of a simple zero of \(w\) at \(x=-1\). While, an analogous comparison to the Jacobi problem with weight function \(w_{Jac}(x)=1+x\) (or something similar) seems suggestive, significant challenges remain due to the presence of the simple zero. In particular, the crucial Theorem 4.7 in [3] requires a different proof in this case: now one needs to control the behaviour of the Cauchy operator acting on spaces with Muckenhoupt weights.
Orthogonal polynomials with logarithmic weight functions, have applications to both pure mathematics and physics. In particular, apart from logarithmic singularities, these weight functions also tend to have zeros (see [14, Sect. 8], [19]). Such applications motivate us in the present paper to extend the results in [3] to the weight function \(w\) in (1.1), having both a logarithmic singularity and a simple zero.
**Remark 1.4**.: _One might think, a priori, that the vanishing of a weight \(w(x)\) at a point should not give rise to serious technical difficulties. Naively, it would appear that only singularities in the weight, and not zeros, should present obstacles. In this regard we recall the hope and the prophecy of Lenard's 1972 paper [12]:_
_"It is the author's hope that a rigorous analysis will someday carry the results to the point where the true role of the zeros of the generating function will be understood. When that day comes a capstone will have been put on a beautiful edifice to whose construction many contributed and whose foundations lie in the studies of Gabor Szego half a century ago"_
### Relation of the present work to [3]
Significant parts of the analysis performed in the present paper are based on the analysis introduced in [3]. Hence, we will repeatedly refer to that paper for proofs of certain statements. This is justified by the fact that the majority of estimates found in [3] do not depend on the distinction \(k>1\) and \(k=1\) in (1.6). Thus, the proofs of many of the results will also hold for the weight function (1.1) that we are interested in. There are however certain propositions which have their analogs in [3], but still deserve a separate proof due to some minor differences. These are Prop. 2.7 which is the analog of Prop. 2.5 in [3], and Prop. 7.5 which is the analog of Prop. 5.3 and Prop. 5.4 in [3]. In the case of Prop. 2.7 it is necessary to prove a slightly more general result than Prop. 2.5 in
[3]. Meanwhile Prop. 6.5 contains an application of Prop. 2.7 in its more general form, and uses different R-H solutions than Prop. 5.3 and 5.4 in [3]. Both results are proven in the Appendix.
Finally, let us reiterate that the analog of Theorem 4.7 in [3] concerning the uniform boundedness of the inverse of a certain singular integral operator requires a completely different approach to the case \(k=1\), due to the appearance of a simple zero in the weight function \(w\) at \(x=-1\). This necessitates the construction of an appropriate local parametrix in the vicinity of the zero, which is then used to invert the R-H problem locally. While the local parametrix is well-known and can be expressed in terms of Bessel functions, see [11, Sect. 6], its appearance significantly complicates the analysis that follows. Crucially, the method of proving Theorem 4.7 in [3] is no longer sufficient in this new setting. In fact, the material found in Sections 4-6 is entirely devoted to formulating and proving Theorems 6.1 and 6.2, which are the analogs of Theorem 4.7 in [3]. Here, a key role is played by Prop. 4.6, which in a sense localizes the effect that the logarithmic singularity has on the uniform invertibility of the associated singular integral operator. Interestingly, Theorem 4.7 in [3] itself plays a crucial role in the proofs of these results. The Sections 4-6 contain the main novelties of the present work.
### Outline of the paper
In the following we will briefly summarize the content of each section.
* In Sect. 2 we introduce two auxiliary weight functions, the model and the Legendre weight function, together with related quantities, which will be relevant for the R-H analysis. We also list certain estimates and asymptotic results which will be used in later sections.
* In Sect. 3 we introduce the Fokas-Its-Kitaev R-H problem for orthogonal polynomials. We proceed to perform the necessary conjugation and deformation steps to arrive at three distinct R-H problems amenable to asymptotics analysis: the logaritmic, model and Legendre R-H problems. We then state the relation between solutions of these R-H problems and the corresponding recurrence coefficients.
* In Sect. 4 we derive an explicit formula for the Legendre resolvent, that is, the inverse of a singular integral operator associated to the Legendre R-H problem. This formula is not used in [3].
* In Sect. 5 we perform a detailed analysis of the known local parametrices for the R-H problem near the point \(-1\), which can be constructed explicitly using Bessel functions, as in [11, Sect. 6]. This leads in particular, to uniform asymptotics on their growth as \(n\to\infty\).
* In Sect. 6 we introduce modified versions of the three previously mentioned R-H problems using the appropriate local parametrices from Sect. 3 around \(z=-1\). These modified R-H problems are better suited when comparing their associated resolvents. Thus, by showing the uniform invertibility of the modified Legendre resolvent, we obtain the uniform invertibility of the modified logarithmic and model resolvents, thereby proving an analog of
Theorem 4.7 in [3].
* In Sect. 7 we derive an asymptotic formula expressing the difference between the recurrence coefficients for the logarithmic weight and the recurrence coefficients for the model weight. The aforementioned uniform invertibility of the associated resolvents plays a crucial part in this argument. As the asymptotics of the recurrence coefficients for the model weight is known, Theorem 1.1 follows.
* In the Appendix we provide some proofs of more technical nature that are omitted from the main text.
It is also worth mentioning that a common step in the R-H analysis - the construction of a local parametrix - is not performed in the vicinity of the logarithmic singularity at \(+1\). The reason is, quite simply, as in [3], that we were unable to find the local parametrix in the presence of such a singularity. Similar instances in which the local parametrix was not constructed explicitly can be found in [5, Sect. 5] and [10], where non-constructive Fredholm methods were used instead. For a discussion of R-H problems without explicitly solvable local parametrices see [18].
### Notation
Throughout this paper all contours that arise are finite unions of smooth and oriented arcs, with a finite number of points of (self)intersection. More details can be found in the book [2] which treats a more general class of so-called Carleson contours.
Let \(\Gamma\subset\mathbb{C}\) be such a contour and \(m\) an analytic function on \(\mathbb{C}\setminus\Gamma\). For \(s\in\Gamma\), we will denote by \(m_{\pm}(s)\) the limit of \(m(z)\) as \(z\to s\pm\), provided this limit exists. The notation \(z\to s\pm\) denotes a nontangential limit in \(\mathbb{C}\setminus\Gamma\) to \(s\in\Gamma\), from the \(+\), resp. \(-\), side of the contour, see Figure 1. Recall that as \(\Gamma\) is taken to be oriented, this notion is well-defined away from the points of intersection. Everything generalizes to matrix-valued functions \(m\) in a straightforward manner.
We will denote by \(\mathbb{C}_{\pm}\) the upper, resp. lower open half plane of \(\mathbb{C}\) and use the notation \(\overline{\mathbb{C}}\) for the Riemann sphere \(\mathbb{C}\cup\{\infty\}\). Unless specified otherwise, \(z^{1/2}\), \(z\in\mathbb{C}\setminus(-\infty,0)\) will denote the principal branch of the square root.
For two sequences \(A_{n}\) and \(B_{n}\) in a normed space we will use the notation \(A_{n}\lesssim B_{n}\) if there exists a \(c>0\) and \(N\in\mathbb{N}\) such that \(\|A_{n}\|\leq c\|B_{n}\|\) for all \(n\geq N\). A similar definition holds if \(n\) is substituted for a continuous variable, e.g. \(f(x)\lesssim g(x)\) for \(x\to x_{0}\) means \(\|f(x)\|\leq c\|g(x)\|\) for some \(c>0\) and all \(x\) satisfying \(|x-x_{0}|\leq\varepsilon\).
Figure 1. A contour with a point of intersection.
Finally, for a \(d\times d\) dimensional measurable matrix-valued function \(f(s)\), \(s\in\Gamma\), we write \(f\in L^{p}(\Sigma)\), with \(p\in[1,\infty)\), if and only if
\[\|f\|_{L^{p}(\Gamma)}:=\Big{(}\int_{\Gamma}\sum_{i,j=1}^{d}|f_{ij}(s)|^{p}|ds| \Big{)}^{\frac{1}{p}}<\infty,\]
where \(|ds|\) denotes the arc length measure of \(\Gamma\), and \(f\in L^{\infty}(\Gamma)\) if and only if
\[\|f\|_{L^{\infty}(\Gamma)}:=\max_{i,j=1\ldots d}\{\|f_{ij}\|_{L^{\infty}( \Gamma)}\}<\infty.\]
In particular, \(\|f\|_{L^{2}(\Gamma)}\) denotes the \(L^{2}\)-norm on \(\Gamma\) of the Hilbert-Schmidt norm of \(f(s)\). Generalizations to weighted \(L^{p}\)-spaces are introduced in Sect. 4.
## 2. Auxiliary functions used in the R-H analysis
### The model and Legendre weight functions
To obtain the recurrence coefficients related to the weight function \(w\), we will have to compare with a different _model_ weight function given by
\[\widehat{w}(x)=(1+x)\mathrm{e}^{d_{0}x},\qquad x\in[-1,1], \tag{2.1}\]
where \(d_{0}\in\mathbb{R}\) is determined via (2.8). As we will see, the choice of \(d_{0}\) gives an error estimate in (2.14) of order \(O(|1+z|^{3/2})\), rather that \(O(|1+z|^{1/2})\), as in part 3 of Prop. 2.3 in [3]. This extra decay as \(z\to-1\) considerably simplifies the proof of the key Lemma 7.6.
Note that \(\widehat{w}\) can be analytically extended to an entire function. Moreover, \(\widehat{w}(x)\) has a simple zero at \(x=-1\) and a finite positive value at \(x=+1\). As it lies in the class of weight functions considered in [11], the corresponding R-H analysis is well understood.
As we will heavily rely on the arguments found in [3], we will also introduce the Legendre weight function
\[\widetilde{w}(x)=1,\qquad x\in[-1,1].\]
As shown in [3, Theorem 4.7] (see also Theorem 4.2), the weight \(\widetilde{w}\) gives rise to a singular integral operator defined in Sect. 4, with an inverse that is uniformly bounded as \(n\to\infty\). However \(\widetilde{w}\) does not approximate the logarithmic weight function \(w\) for \(x\to-1\), due to the presence of a simple zero at that point. In contrast, the weight \(\widehat{w}\) approximates the logarithmic weight function \(w\) for \(x\to-1\), however for \(\widehat{w}\) the analogous singular integral operator is not invertible in \(L^{2}\) (see property (iv) in R-H problem 3.4 showing that the solution will not be square integrable). This is the essential technical difficulty that we face in this paper.
### The Szego functions
To perform the nonlinear steepest descent analysis, we need to define the Szego function \(F\) associated to the logarithmic weight function \(w\):
\[F(z)=\exp\Bigg{(}\frac{(z^{2}-1)^{1/2}}{2\pi}\int_{-1}^{1}\frac{\log w(s)}{ \sqrt{1-s^{2}}}\frac{ds}{z-s}\Bigg{)},\ \ z\in\mathbb{C}\setminus[-1,1]. \tag{2.2}\]
Here, \((z^{2}-1)^{1/2}\) is uniquely specified as an analytic function having a branch cut along \((-1,1)\), and \((z^{2}-1)^{1/2}\approx z\) for \(z\to\infty\). Analogously we define \(\widehat{F}\) to be the
Szego function associated to the model weight function \(\widehat{w}\):
\[\widehat{F}(z)=\exp\Bigg{(}\frac{(z^{2}-1)^{1/2}}{2\pi}\int_{-1}^{1}\frac{\log \widehat{w}(s)}{\sqrt{1-s^{2}}}\frac{ds}{z-s}\Bigg{)},\ \ z\in\mathbb{C}\setminus[-1,1].\]
Note that as \(\log\widetilde{w}\equiv 0\), the Szego function for the Legendre weight \(\widetilde{w}\) is trivial: \(\widetilde{F}\equiv 1\).
The Szego functions \(F\), \(\widehat{F}\) satisfy the following properties which are crucial for the R-H analysis:
**Proposition 2.1**.: _The functions \(F,\widehat{F}\colon\mathbb{C}\setminus[-1,1]\to\mathbb{C}\) satisfy the following properties:_
1. \(F(z)\)_,_ \(\widehat{F}(z)\) _are analytic for_ \(z\in\mathbb{C}\setminus[-1,1]\)_, with_ \(F(\overline{z})=\overline{F(z)}\) _and_ \(\widehat{F}(\overline{z})=\overline{\widehat{F}(z)}\)_,_
2. \(\lim_{z\to\infty}F(z)=F_{\infty}\in\mathbb{R}_{+}\)_,_ \(\lim_{z\to\infty}\widehat{F}(z)=\widehat{F}_{\infty}\in\mathbb{R}_{+}\)_,_
3. \(F_{+}(x)F_{-}(x)=w(x)\)_,_ \(\widehat{F}_{+}(x)\widehat{F}_{-}(x)=\widehat{w}(x)\) _for_ \(x\in[-1,1)\)_,_
4. \(|F_{\pm}(x)|^{2}=w(x)\)_,_ \(|\widehat{F}_{\pm}(x)|^{2}=\widehat{w}(x)\) _for_ \(x\in[-1,1)\)_._
We also introduce the function \(\phi\) given by (see [3, Prop. 2.1])
\[\phi(z)=z+(z^{2}-1)^{1/2},\quad z\in\mathbb{C}\setminus[-1,1].\]
Here, \((z^{2}-1)^{1/2}:=(z-1)^{1/2}(z+1)^{1/2}\), where \((z\mp 1)^{1/2}\) denote the principal branch. With this choice, we see that \(\phi\) defines a biholomorphism between \(\mathbb{C}\cup\{\infty\}\setminus[-1,1]\) and \(\mathbb{C}\cup\{\infty\}\setminus\{z:|z|\leq 1\}\), mapping \(\infty\) to itself. In particular, the function \(\phi\) satisfies
\[|\phi(z)|>1,\qquad z\in\mathbb{C}\setminus[-1,1], \tag{2.3}\]
and
\[\phi(z)=2z+O\Big{(}\frac{1}{z}\Big{)},\quad\text{as $z\to\infty$}.\]
Moreover, the inequality (2.3) holds uniformly away from the interval \([-1,1]\), while on the interval we have
\[\lim_{z\to x}|\phi(z)|=1,\qquad x\in[-1,1]. \tag{2.4}\]
Additionally, as \(\phi(\overline{z})=\overline{\phi(z)}\) for \(z\in\mathbb{C}\setminus[-1,1]\), it follows from (2.4) that
\[\phi_{+}(x)\phi_{-}(x)=1,\qquad x\in(-1,1).\]
Near the points \(z=\pm 1\) we have
\[\begin{split}&\phi(z)=1+\sqrt{2}(z-1)^{1/2}+O(|z-1|),\qquad z \to+1,\\ &\phi(z)=-1\pm\sqrt{2}\mathrm{i}(z+1)^{1/2}+O(|z+1|),\quad z\to- 1,\quad z\in\mathbb{C}_{\pm}.\end{split} \tag{2.5}\]
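The basic properties (2.3)–(2.5) of \(\phi\) are easily confirmed numerically. The following minimal Python sketch (illustrative only, with the principal branches realized by `numpy`) checks \(|\phi|>1\) off the interval, the relation \(\phi_{+}\phi_{-}=1\), and the local expansions (2.5).

```python
import numpy as np

phi = lambda z: z + np.sqrt(z - 1) * np.sqrt(z + 1)    # principal branches

rng = np.random.default_rng(0)
z = rng.normal(size=200) + 1j * rng.normal(size=200)
z = z[np.abs(z.imag) > 1e-3]                  # keep points off the interval [-1,1]
print(np.min(np.abs(phi(z))))                 # (2.3): strictly greater than 1

x = np.linspace(-0.99, 0.99, 50)
print(np.max(np.abs(phi(x + 1e-12j) * phi(x - 1e-12j) - 1)))   # phi_+ phi_- = 1

h = 1e-6                                       # local expansions (2.5)
print(phi(1 + h), 1 + np.sqrt(2 * h))                            # near z = +1
print(phi(-1 + 1j * h), -1 + np.sqrt(2) * 1j * np.sqrt(1j * h))  # near z = -1, upper half-plane
```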
For the subsequent analysis it is necessary to understand the behaviour of the functions \(\frac{F^{2}}{w}(z)\), \(\frac{\widehat{F}^{2}}{\widehat{w}}(z)\) near the points \(z=\pm 1\), as these functions show up in the jump matrices of the corresponding R-H problems, see Sect. 3.
**Proposition 2.2**.: _The function \(\frac{F^{2}}{w}\colon\mathbb{C}\setminus[-1,\infty)\to\mathbb{C}\) satisfies_
\[\frac{F^{2}}{w}(z)=1\mp\frac{\mathrm{i}\pi}{w(z)}-\frac{\pi^{2}}{2w^{2}(z)}+O \Big{(}\frac{1}{\log^{3}(|z-1|)}\Big{)} \tag{2.6}\]
_uniformly for \(z\to+1\), where the \(-\) is taken for \(\mathrm{Im}(z)>0\) and \(+\) for \(\mathrm{Im}(z)<0\), and also_
\[\frac{F^{2}}{w}(z)=\phi(z)^{-1}\exp\Big{\{}-(z^{2}-1)^{1/2}\big{(}d_{0}+O(|z+1 |)\big{)}\Big{\}}, \tag{2.7}\]
_uniformly for \(z\to-1\), where_
\[d_{0}=\frac{1}{2\pi\mathrm{i}}\int_{\gamma}\frac{\log\big{(}w(\zeta)/(1+\zeta )\big{)}}{(\zeta^{2}-1)^{1/2}}\frac{d\zeta}{\zeta+1} \tag{2.8}\]
_and \(\gamma\) is an oriented contour originating from the point \(\zeta=1\), going anticlockwise around the interval \([-1,1]\) and ending again at the point \(\zeta=1\) as depicted in Fig. 2._
Proof.: For the proof of statement (2.6) see [3, Prop. A.1]. Statement (2.7) is a special case of [11, Lem. 6.6], but with an additional restriction on the choice of the contour \(\gamma\) stemming from the logarithmic singularity of \(w\). Due to this technicality, we repeat the proof found therein.
First let us note that if \(F_{1}\) is the Szego function of a weight \(w_{1}\) and \(F_{2}\) the Szego function of a weight \(w_{2}\), then the Szego function \(F_{12}\) of the product \(w_{1}w_{2}\) is given by the product of the individual Szego functions: \(F_{12}=F_{1}F_{2}\). As the Szego function of a Jacobi weight \(w_{\alpha,\beta}(x)=(1-x)^{\alpha}(1+x)^{\beta}\) with \(\alpha,\beta>-1\), is given by (see [11, Remark 5.1])
\[F_{\alpha,\beta}(z)=\frac{(z-1)^{\alpha/2}(z+1)^{\beta/2}}{\phi(z)^{\frac{ \alpha+\beta}{2}}},\]
we can conclude that
\[F(z)=\frac{(z+1)^{1/2}}{\phi(z)^{1/2}}\exp\Bigg{(}\frac{(z^{2}-1 )^{1/2}}{2\pi}\int_{-1}^{1}\frac{\log(w(s)/(1+s))}{\sqrt{1-s^{2}}}\frac{ds}{z -s}\Bigg{)},\] \[z\in\mathbb{C}\setminus[-1,1]. \tag{2.9}\]
Again, \((z+1)^{1/2}\) and \(\phi(z)^{1/2}\) denote the principal branches (a simple calculation shows that \(\phi(z)\not\in(-\infty,-1]\) for \(z\in\mathbb{C}\setminus(-\infty,1]\)). Moreover, for \(x<-1\) we have \((\phi(x)^{1/2})_{+}=-(\phi(x)^{1/2})_{-}\), as \(\phi\) is real and less than \(-1\) on \((-\infty,-1)\) (note \(\phi(z)\sim 2z\) for \(z\to\infty\)), and likewise \(((x+1)^{1/2})_{+}=-((x+1)^{1/2})_{-}\). Thus the ratio \((z+1)^{1/2}/\phi(z)^{1/2}\) is indeed analytic in \(\mathbb{C}\setminus[-1,1]\).
Figure 2. The contour \(\gamma\) encircling the point \(z\).
To analyze the argument of the exponential in (2.9) we choose a contour \(\gamma\) as shown in Fig. 2. Then a residue calculation shows that
\[\frac{1}{\pi}\int_{-1}^{1}\frac{\log(w(s)/(1+s))}{\sqrt{1-s^{2}}} \frac{ds}{z-s} \tag{2.10}\] \[=\frac{\log(w(z)/(1+z))}{(z^{2}-1)^{1/2}}-\frac{1}{2\pi\mathrm{i} }\int_{\gamma}\frac{\log(w(\zeta)/(1+\zeta))}{(\zeta^{2}-1)^{1/2}}\frac{d\zeta }{\zeta-z}\]
Here we note that \(w(z)/(1+z)\) is analytic and non-zero in the simply connected region \(\mathbb{C}\setminus[1,\infty)\), and hence \(\log(w(z)/(1+z))\) exists and is analytic in this region. Moreover the integrand in (2.10) is integrable along \(\gamma\) as long as \(z\not\in\gamma\).
Note also that as \(\log(w(\zeta)/(1+\zeta))\) has an iterated logarithmic singularity at \(\zeta=1\), we cannot deform \(\gamma\) away from this point. Plugging (2.9) and (2.10) into the definition of \(\frac{F^{2}}{w}\) we obtain
\[\frac{F^{2}}{w}(z)=\phi(z)^{-1}\exp\Bigg{(}-\frac{(z^{2}-1)^{1/2}}{2\pi \mathrm{i}}\int_{\gamma}\frac{\log(w(\zeta)/(1+\zeta))}{(\zeta^{2}-1)^{1/2}} \frac{d\zeta}{\zeta-z}\Bigg{)}.\]
We compute the Taylor expansion around \(z=-1\) to obtain
\[\frac{1}{2\pi\mathrm{i}}\int_{\gamma}\frac{\log(w(\zeta)/(1+\zeta))}{(\zeta^{ 2}-1)^{1/2}}\frac{d\zeta}{\zeta-z}=\sum_{k=0}^{\infty}d_{k}(z+1)^{k}\]
where \(d_{k}\) is given by
\[d_{k}=\frac{1}{2\pi\mathrm{i}}\int_{\gamma}\frac{\log\big{(}w(\zeta)/(1+\zeta )\big{)}}{(\zeta^{2}-1)^{1/2}}\frac{d\zeta}{(\zeta+1)^{k+1}}.\]
This finishes the proof.
The analog of Prop. 2.2 for \(\frac{\widehat{F}^{2}}{\widehat{w}}\) is more elementary:
**Proposition 2.3**.: _The function \(\frac{\widehat{F}^{2}}{\widehat{w}}\colon\mathbb{C}\setminus[-1,1]\to \mathbb{C}\) satisfies_
\[\frac{\widehat{F}^{2}}{\widehat{w}}(z)=1+O(|z-1|^{1/2}) \tag{2.11}\]
_uniformly for \(z\to+1\) and_
\[\frac{\widehat{F}^{2}}{\widehat{w}}(z)=\phi(z)^{-1}\exp\Big{\{}-(z^{2}-1)^{1/ 2}d_{0}\Big{\}} \tag{2.12}\]
_uniformly for \(z\to-1\)._
Proof.: Statement (2.11) follows directly from [11, Lem. 6.4], while statement (2.12) can be obtained with the same line of reasoning as in Prop. 2.2, but now the integral simplifies:
\[\frac{1}{2\pi\mathrm{i}}\int_{\gamma}\frac{\log(\widehat{w}(\zeta)/(1+\zeta)) }{(\zeta^{2}-1)^{1/2}}\frac{d\zeta}{\zeta-z}=\frac{d_{0}}{2\pi\mathrm{i}} \int_{\gamma}\frac{\zeta}{(\zeta^{2}-1)^{1/2}}\frac{d\zeta}{\zeta-z}.\]
In particular we can deform the contour \(\gamma\) to infinity leading to \(\int_{\gamma}\frac{\zeta}{(\zeta^{2}-1)^{1/2}}\frac{d\zeta}{\zeta-z}=2\pi \mathrm{i}\) and finishing the proof.
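This last contour-deformation step is easily confirmed numerically; the following minimal sketch (illustrative only) evaluates the integral over a circle of radius \(5\) by the trapezoid rule.

```python
import numpy as np

# With (zeta^2-1)^{1/2} = (zeta-1)^{1/2}(zeta+1)^{1/2} (principal branches),
# the integral over any circle enclosing [-1,1] and the point z equals 2*pi*i.
z = 0.3 + 0.2j
theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
zeta = 5.0 * np.exp(1j * theta)
dzeta = 5.0j * np.exp(1j * theta)
integrand = zeta / (np.sqrt(zeta - 1) * np.sqrt(zeta + 1) * (zeta - z))
val = np.sum(integrand * dzeta) * (2.0 * np.pi / theta.size)
print(abs(val - 2j * np.pi))                  # ~ machine precision
```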
The following two corollaries contain important estimates related to the behaviour of the functions \(\frac{F^{2}}{w}\) and \(\frac{\widehat{F}^{2}}{\widehat{w}}\) near the critical points \(\pm 1\).
**Corollary 2.4**.: ([3, Prop. 2.4]) _For \(x\to 1\) such that \(x>1\) we have_
\[\frac{F^{2}}{w_{+}}(x)+\frac{F^{2}}{w_{-}}(x)-2=-\frac{3\pi^{2}}{\log^{2}\frac{2 }{x-1}}+O\Big{(}\frac{1}{\log^{3}(x-1)}\Big{)}. \tag{2.13}\]
**Corollary 2.5**.: _For \(z\to-1\) we have_
\[\frac{F^{2}}{w}(z)-\frac{\widehat{F}^{2}}{\widehat{w}}(z)=O(|z+1|^{3/2}). \tag{2.14}\]
Proof.: The statement follows from (2.7) and (2.12).
**Remark 2.6**.: _As noted earlier, the estimate (2.14) is the motivation for considering the particular weight function \(\widehat{w}(x)\) instead of the simpler Jacobi weight function \(1+x\). \(\Diamond\)_
For later analysis we will also need the following technical result:
**Proposition 2.7**.: _Fix \(R>0\). Let \(r_{n},\tilde{r}_{n}>0\), \(n\in\mathbb{N}\) be two sequences satisfying \(r_{n},\tilde{r}_{n}\to 0\), such that \(n\big{|}\frac{r_{n}}{\tilde{r}_{n}}-1\big{|}<R\). Then_
\[\frac{F^{2}}{w_{+}}(1+r_{n})-\frac{F^{2}}{w_{+}}(1+\tilde{r}_{n}) +\frac{F^{2}}{w_{-}}(1+r_{n})-\frac{F^{2}}{w_{-}}(1+\tilde{r}_{n})\] \[=O(r_{n}\log|\log r_{n}|)+O\Big{(}\frac{1}{n\log^{3}r_{n}}\Big{)} +O(n^{-2}),\]
_where the implied constants in the \(O\)-terms depend only on \(R\)._
Proof.: For the proof see Prop. A.1.
**Remark 2.8**.: _The original formulation found in [3, Prop. A.4] assumes that \(r_{n},\tilde{r}_{n}=O\big{(}\frac{1}{n^{2}}\big{)}\), but does not take into account the fact that the error term will additionally depend on the convergence rate of the sequences \(r_{n}\), \(\tilde{r}_{n}\), i.e. on the bound \(C_{r,\tilde{r}}>0\) such that \(r_{n},\tilde{r}_{n}<\frac{C_{r,\tilde{r}}}{n^{2}}\). This fact however becomes crucial in the proof of Prop. 7.3 (Prop. C.4 in [3]). Luckily, this gap in the original formulation can be easily filled in as shown in Prop. A.1._
## 3. The Riemann-Hilbert problem for orthogonal polynomials
In this section we recall the celebrated Fokas-Its-Kitaev characterization of orthogonal polynomials via R-H problems [9]. We will state the problem explicitly in the case where the weight function is \(w(x)\), \(x\in[-1,1]\), but similar characterizations hold for the other weight functions \(\widehat{w}\) and \(\widetilde{w}\).
### Fokas-Its-Kitaev R-H problem for the logarithmic weight
Find a \(2\times 2\) matrix-valued function \(Y=Y^{(n)}\colon\mathbb{C}\setminus[-1,1]\to\mathbb{C}^{2\times 2}\) satisfying the following properties
1. \(Y(z)\) is analytic for \(z\in\mathbb{C}\setminus[-1,1]\),
2. \(Y\) satisfies the jump condition \[Y_{+}(s)=Y_{-}(s)\begin{pmatrix}1&w(s)\\ 0&1\end{pmatrix},\qquad s\in[-1,1],\]
3. \(Y(z)\begin{pmatrix}z^{-n}&0\\ 0&z^{n}\end{pmatrix}=I+O(z^{-1})\), \(\text{ as }z\to\infty\),
4. \(Y\) is bounded away from the points \(\pm 1\), and has the following behaviours near the points \(\pm 1\): \[Y(z)=\begin{pmatrix}O(1)&O(\log^{2}|z-1|)\\ O(1)&O(\log^{2}|z-1|)\end{pmatrix},\qquad z\to+1,\] and \[Y(z)=\begin{pmatrix}O(1)&O(1)\\ O(1)&O(1)\end{pmatrix},\qquad z\to-1.\]
\(\diamond\)
The condition (iv) for \(z\to+1\) has been shown in [3, Sect. 3.1] for the case of the weight function having a logarithmic singularity, while the behaviour for \(z\to-1\) is a special case of the algebraic-type singularity of the weight function treated in [11, Sect. 2].
To understand the connection between the R-H problem for \(Y\) and orthogonal polynomials with respect to the orthogonality measure \(w(x)dx\) on \([-1,1]\), we need to introduce the Cauchy operator \(\mathcal{C}_{[-1,1]}\) on the interval \([-1,1]\):
\[\mathcal{C}_{[-1,1]}\colon L^{2}([-1,1]) \to\mathcal{O}(\mathbb{C}\setminus[-1,1]),\] \[f(s) \mapsto\mathcal{C}_{[-1,1]}(f)(z)=\frac{1}{2\pi\mathrm{i}}\int_{ -1}^{1}\frac{f(s)}{s-z}\,ds.\]
Here \(\mathcal{O}(\mathbb{C}\setminus[-1,1])\) denotes the space of functions holomorphic in \(\mathbb{C}\setminus[-1,1]\). When \(\mathcal{C}_{[-1,1]}\) is applied to matrix-valued functions, it is understood to act componentwise. Cauchy operators can also be defined on different contours, as in Sect. 4.
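For orientation, the Cauchy transform and the Sokhotski–Plemelj jump \(\mathcal{C}^{+}-\mathcal{C}^{-}=1\), which will be used repeatedly below, can be illustrated numerically. The following minimal Python sketch (illustrative only) uses the density \(f\equiv 1\) on \([-1,1]\), for which the transform is elementary and explicit.

```python
import numpy as np

# Density f = 1 on [-1,1]: its Cauchy transform is explicit,
# C(1)(z) = log((z-1)/(z+1)) / (2*pi*i) with the principal logarithm.
nodes, weights = np.polynomial.legendre.leggauss(200)
cauchy_quad = lambda z: np.sum(weights / (nodes - z)) / (2j * np.pi)
cauchy_exact = lambda z: np.log((z - 1) / (z + 1)) / (2j * np.pi)

z0 = 0.4 + 0.8j
print(abs(cauchy_quad(z0) - cauchy_exact(z0)))          # ~ machine precision

x, eps = 0.25, 1e-12
jump = cauchy_exact(x + 1j * eps) - cauchy_exact(x - 1j * eps)
print(abs(jump - 1.0))                                   # Plemelj: C^+ - C^- = f = 1
```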
We can now state the seminal result by Fokas, Its and Kitaev which characterizes orthogonal polynomials via an R-H problem:
**Theorem 3.1**.: (Fokas, Its, Kitaev [9]) _The R-H problem for \(Y\) is solved uniquely by_
\[Y(z)=\begin{pmatrix}\pi_{n}(z)&\mathcal{C}_{[-1,1]}(\pi_{n}w)(z)\\ -2\pi\mathrm{i}\gamma_{n-1}^{2}\pi_{n-1}(z)&-2\pi\mathrm{i}\gamma_{n-1}^{2} \mathcal{C}_{[-1,1]}(\pi_{n-1}w)(z)\end{pmatrix}\]
_where \(\pi_{n}(z)\) is the \(n\)-th monic orthogonal polynomial with respect to the weight function \(w(x)\) and \(\gamma_{n}>0\) is the leading coefficient of the orthonormal polynomial \(p_{n}\), meaning \(p_{n}=\gamma_{n}\pi_{n}\)._
The fact that the Fokas-Its-Kitaev R-H problem characterizes the corresponding orthogonal polynomials has led to numerous new results, in particular in the case where the associated weight function satisfies local analyticity properties (see e.g. [4], [11]), but also in the case of nonanalytic weights (see [10], [15], [16]). Instrumental in the derivation of those results has been the nonlinear steepest descent method, first presented in [7] to study the long-time asymptotics of the mKdV equation and later generalized to the Fokas-Its-Kitaev R-H problem in [1], [5], [6]. It turns out that the recurrence coefficients can also be simply expressed in terms of \(Y\) (see [4]):
**Proposition 3.2**.: _Let \(Y_{1}^{(n)}\in\mathbb{C}^{2\times 2}\) be given through the expansion_
\[Y^{(n)}(z)z^{-n\sigma_{3}}=I+\frac{Y_{1}^{(n)}}{z}+O\Big{(}\frac{1}{z^{2}}\Big{)},\ \ \text{as}\ \ z\to\infty.\]
_Then the recurrence coefficients \(a_{n}\) and \(b_{n-1}\) can be extracted from \(Y_{1}^{(n)}\) via the formulas:_
\[a_{n}=\big{(}Y_{1}^{(n)}\big{)}_{11}-\big{(}Y_{1}^{(n+1)}\big{)}_{11}\]
_and_
\[b_{n-1}^{2}=\big{(}Y_{1}^{(n)}\big{)}_{12}\big{(}Y_{1}^{(n)}\big{)}_{21}.\]
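Although the analysis below proceeds entirely through R-H problems, the recurrence coefficients themselves can be generated numerically, which is convenient for sanity checks of the conventions. The following Python sketch (illustrative only, not a method of this paper) uses the standard discretized Stieltjes procedure; with the usual identification, \(a_{n}\) and \(b_{n-1}^{2}\) of Prop. 3.2 are the coefficients of the monic three-term recurrence \(\pi_{n+1}(x)=(x-a_{n})\pi_{n}(x)-b_{n-1}^{2}\pi_{n-1}(x)\). The Legendre weight \(\widetilde{w}\equiv 1\) is used because its coefficients \(a_{n}=0\), \(b_{n-1}^{2}=n^{2}/(4n^{2}-1)\) are classical.

```python
import numpy as np

# Discretized Stieltjes procedure for the monic three-term recurrence
# pi_{n+1}(x) = (x - a_n) pi_n(x) - b_{n-1}^2 pi_{n-1}(x);
# here for the Legendre weight (wvals = 1); a different weight on [-1,1]
# would enter through wvals.
x, gl = np.polynomial.legendre.leggauss(400)
wvals = np.ones_like(x)
mu = gl * wvals

N = 20
a = np.zeros(N)                        # diagonal coefficients a_n
b2 = np.zeros(N)                       # b2[n] = b_{n-1}^2 for n >= 1
p_prev, p, nrm_prev = np.zeros_like(x), np.ones_like(x), 1.0
for n in range(N):
    nrm = np.sum(mu * p * p)
    a[n] = np.sum(mu * x * p * p) / nrm
    if n > 0:
        b2[n] = nrm / nrm_prev
    p_prev, p = p, (x - a[n]) * p - b2[n] * p_prev
    nrm_prev = nrm

k = np.arange(1, N)
print(np.max(np.abs(a)))                                   # Legendre: a_n = 0
print(np.max(np.abs(b2[1:] - k**2 / (4.0 * k**2 - 1))))    # b_{n-1}^2 = n^2/(4n^2-1)
```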
The nonlinear steepest descent analysis for weight functions supported on a single finite interval has been performed in [11], and in the following we shall repeat the conjugation and deformation steps found therein. In the first step one normalizes the R-H problem at infinity through an appropriate conjugation, viz.,
\[T(z):=\Big{(}\frac{2}{F_{\infty}}\Big{)}^{n\sigma_{3}}Y(z)\Big{(}\frac{F(z)}{ \phi^{n}(z)}\Big{)}^{\sigma_{3}},\qquad z\in\mathbb{C}\setminus[-1,1]. \tag{3.1}\]
Then \(T\) turns out to be the unique solution of the following R-H problem.
### Normalized Fokas-Its-Kitaev R-H problem for the logarithmic weight
Find a \(2\times 2\) matrix-valued function \(T=T^{(n)}\colon\mathbb{C}\setminus[-1,1]\to\mathbb{C}^{2\times 2}\) satisfying the following properties:
1. \(T(z)\) is analytic for \(z\in\mathbb{C}\setminus[-1,1]\),
2. \(T\) satisfies the jump condition (3.2) \[T_{+}(s)=T_{-}(s)\begin{pmatrix}\frac{F_{+}^{2}}{w}(s)\phi_{+}^{-2n}(s)&1\\ 0&\frac{F_{-}^{2}}{w}(s)\phi_{-}^{-2n}(s)\end{pmatrix},\qquad s\in[-1,1],\]
3. \(T(z)=I+O(z^{-1})\), \(\ \text{as}\ z\to\infty\).
4. \(T\) is bounded away from the points \(\pm 1\), and has the following behaviours near the points \(\pm 1\): \[T(z)=\begin{pmatrix}O(\log^{1/2}|z-1|)&O(\log^{3/2}|z-1|)\\ O(\log^{1/2}|z-1|)&O(\log^{3/2}|z-1|)\end{pmatrix},\qquad z\to+1\] and (3.3) \[T(z)=\begin{pmatrix}O(|z+1|^{1/2})&O(|z+1|^{-1/2})\\ O(|z+1|^{1/2})&O(|z+1|^{-1/2})\end{pmatrix},\qquad z\to-1.\]
\(\diamond\)
Note that we follow the convention found in [3] where the matrix \(T\) is conjugated by the Szego function \(F\), as in (3.1). Hence, the matrix \(T\) found in [11, Eq. 3.1] differs from ours in that respect. In our case the inclusion of the Szego function \(F\) regularizes the jump matrices (3.2) of the R-H problems, and this plays a crucial role in making the comparison argument in Section 6.1 effective.
However, as the weight function \(w_{k}\) from (1.6) is nonvanishing at \(z=-1\), the matrix \(T\) in (3.1) also differs crucially in its behaviour as \(z\to-1\) from the one found in [3, Eq. 3.7]. The reason is that our logarithmic weight function \(w\) has a simple zero at \(z=-1\), implying by item \((iv)\) in Prop. 2.1 that \(|F(z)|=O(|z+1|^{1/2})\) as \(z\to-1\). This induces the \(O(|z+1|^{-1/2})\)-behaviour in \(T\) as \(z\to-1\). Crucially, the entries of \(T_{\pm}\) (and later \(Q_{\pm}\)) will not be square integrable, meaning that the \(L^{2}\)-theory used in [3] will not be applicable directly. We will circumvent this difficulty by defining certain \({}^{\star}\)R-H problems, obtained by locally inverting with appropriate local parametrix solutions near the endpoint \(z=-1\); see Sections 5 and 6.
In the second step we will use the following factorization of the jump matrix (3.2):
\[\begin{pmatrix}\frac{F_{+}^{2}}{w}(s)\phi_{+}^{-2n}(s)&1\\ 0&\frac{F_{-}^{2}}{w}(s)\phi_{-}^{-2n}(s)\end{pmatrix}\] \[=\begin{pmatrix}1&0\\ \frac{F_{-}^{2}}{w}(s)\phi_{-}^{-2n}(s)&1\end{pmatrix}\begin{pmatrix}0&1\\ -1&0\end{pmatrix}\begin{pmatrix}1&0\\ \frac{F_{+}^{2}}{w}(s)\phi_{+}^{-2n}(s)&1\end{pmatrix}.\]
Next we introduce the oriented _lens_ jump contour \(\Sigma=\Sigma_{1}\cup\Sigma_{2}\cup(-1,1+\delta)\), see Fig. 3, where \(\delta>0\) is fixed and \(n\)-independent.
Note that the matrices
\[\begin{pmatrix}1&0\\ \frac{F_{+}^{2}}{w}(s)\phi_{+}^{-2n}(s)&1\end{pmatrix},\qquad\begin{pmatrix}1&0\\ \frac{F_{-}^{2}}{w}(s)\phi_{-}^{-2n}(s)&1\end{pmatrix},\qquad s\in(-1,1)\]
can be analytically continued to \(z\in\Omega_{1}\) and \(z\in\Omega_{2}\) respectively. Hence, we can define the following matrix-valued function \(Q\colon\mathbb{C}\setminus\Sigma\to\mathbb{C}^{2\times 2}\):
\[Q(z)=\begin{cases}T(z),\quad z\in\Omega_{0},\\ \\ T(z)\begin{pmatrix}1&0\\ -\frac{F^{2}}{w}(z)\phi^{-2n}(z)&1\end{pmatrix},\quad z\in\Omega_{1},\\ \\ T(z)\begin{pmatrix}1&0\\ \frac{F^{2}}{w}(z)\phi^{-2n}(z)&1\end{pmatrix},\qquad z\in\Omega_{2}.\end{cases} \tag{3.4}\]
Then \(Q\) will be the solution to the following R-H problem:
Figure 3. Lens-shaped jump contour \(\Sigma\).
### Logarithmic R-H problem
Find a \(2\times 2\) matrix-valued function \(Q\colon\mathbb{C}\setminus\Sigma\to\mathbb{C}^{2\times 2}\) satisfying the following properties:
1. \(Q(z)\) is analytic for \(z\in\mathbb{C}\setminus\Sigma\),
2. \(Q\) satisfies the jump condition \[Q_{+}(s)=Q_{-}(s)v(s),\qquad s\in\Sigma,\] where (3.5) \[v(s)=\begin{cases}\begin{pmatrix}1&0\\ \frac{F^{2}}{w}(s)\phi^{-2n}(s)&1\end{pmatrix},\quad\text{for $s\in\Sigma_{1} \cup\Sigma_{2}$},\\ \\ \begin{pmatrix}1&0\\ \left(\frac{F^{2}}{w_{+}}(s)+\frac{F^{2}}{w_{-}}(s)\right)\phi^{-2n}(s)&1 \end{pmatrix},\quad\text{for $s\in(1,1+\delta)$},\\ \\ \begin{pmatrix}0&1\\ -1&0\end{pmatrix},\qquad\text{for $s\in(-1,1)$},\end{cases}\]
3. \(Q(z)=I+O(z^{-1})\), as \(z\to\infty\),
4. \(Q\) is bounded away from the points \(\pm 1\), and has the following behaviours near the points \(\pm 1\): \[Q(z)=\begin{pmatrix}O(\log^{3/2}|z-1|)&O(\log^{3/2}|z-1|)\\ O(\log^{3/2}|z-1|)&O(\log^{3/2}|z-1|)\end{pmatrix},\qquad z\to+1\] and \[Q(z)=\begin{pmatrix}O(|z+1|^{-1/2})&O(|z+1|^{-1/2})\\ O(|z+1|^{-1/2})&O(|z+1|^{-1/2})\end{pmatrix},\qquad z\to-1.\]
Note that \(Q_{\pm}\not\in L^{2}(\Sigma)\) due to its behaviour as \(z\to-1\), which is caused by the simple zero of \(w\) at that point.
For the asymptotic analysis in Sect. 7 we will need the analog of the logarithmic R-H problem stated for the model weight function \(\widehat{w}\). The derivation from the Fokas-Its-Kitaev formulation remains unchanged except for the use of the functions \(\widehat{w}\) and \(\widehat{F}\) instead of \(w\) and \(F\). The behaviour near \(z\to\pm 1\) can be read off from [11, Sect. 4], after taking into account the behaviour of the Szego function \(\widehat{F}\).
### Model R-H problem
Find a \(2\times 2\) matrix-valued function \(\widehat{Q}\colon\mathbb{C}\setminus\Sigma\to\mathbb{C}^{2\times 2}\) satisfying the following properties:
1. \(\widehat{Q}(z)\) is analytic for \(z\in\mathbb{C}\setminus\Sigma\),
2. \(\widehat{Q}\) satisfies the jump condition \[\widehat{Q}_{+}(s)=\widehat{Q}_{-}(s)\widehat{v}(s),\qquad s\in\Sigma,\] where (3.6) \[\widehat{v}(s)=\begin{cases}\begin{pmatrix}1&0\\ \frac{\widehat{F}^{2}}{\widehat{w}}(s)\phi^{-2n}(s)&1\end{pmatrix},\quad\text{ for }s\in\Sigma_{1}\cup\Sigma_{2},\\ \\ \begin{pmatrix}1&0\\ 2\frac{\widehat{F}^{2}}{\widehat{w}}(s)\phi^{-2n}(s)&1\end{pmatrix},\quad \text{for }s\in(1,1+\delta),\\ \\ \begin{pmatrix}0&1\\ -1&0\end{pmatrix},\qquad\text{ for }s\in(-1,1),\end{cases}\]
3. \(\widehat{Q}(z)=I+O(z^{-1}),\) as \(z\to\infty\),
4. \(\widehat{Q}\) is bounded away from the points \(\pm 1\), and has the following behaviours near the points \(\pm 1\): \[\widehat{Q}(z)=\begin{pmatrix}O(\log|z-1|)&O(\log|z-1|)\\ O(\log|z-1|)&O(\log|z-1|)\end{pmatrix},\qquad z\to+1\] and \[\widehat{Q}(z)=\begin{pmatrix}O(|z+1|^{-1/2})&O(|z+1|^{-1/2})\\ O(|z+1|^{-1/2})&O(|z+1|^{-1/2})\end{pmatrix},\qquad z\to-1.\]
Note that the jump matrix \(\widehat{v}\) simplifies compared to \(v\), as the weight function \(\widehat{w}\) is continuous (in fact analytic) across \((1,1+\delta)\). As for \(Q\), due to the simple zero of \(\widehat{w}\) at \(z=-1\), \(\widehat{Q}\) will not be square integrable near that point.
Note that the weight function \(\widehat{w}\) falls into the class of _modified Jacobi weight functions_ considered in [11]. As such, an asymptotic series expansion in powers of \(n^{-1}\) for the recurrence coefficients \(\widehat{a}_{n}\), \(\widehat{b}_{n}\) can be explicitly computed. Note however that we use a different convention here than in [11], i.e. the roles of \(\widehat{a}_{n}\), \(\widehat{b}_{n}\) are interchanged. We write down the expansion up to the \(n^{-2}\)-term:
**Corollary 3.3**.: ([11, Theorem. 1.10]) _The recurrence coefficients \(\widehat{a}_{n}\), \(\widehat{b}_{n}\) associated to the weight function \(\widehat{w}\) satisfy:_
\[\widehat{a}_{n}=\frac{1}{4n^{2}}+O\Big{(}\frac{1}{n^{3}}\Big{)},\qquad\widehat {b}_{n}=\frac{1}{2}-\frac{1}{16n^{2}}+O\Big{(}\frac{1}{n^{3}}\Big{)}.\]
To compute the asymptotics of the recurrence coefficients \(a_{n}\), \(b_{n}\) we will first compute the asymptotics of \(a_{n}-\widehat{a}_{n}\), \(b_{n}^{2}-\widehat{b}_{n}^{2}\) and then use Cor. 3.3. Hence, we will need an analog of Prop. 3.2 above for the differences of recurrence coefficients, which we express in terms of \(Q\) and \(\widehat{Q}\).
**Proposition 3.4**.: ([3, Prop. 3.6]) _Let \(Q_{1}^{(n)}\) and \(\widehat{Q}_{1}^{(n)}\) be given through the expansion_
\[Q^{(n)}=I+\frac{Q_{1}^{(n)}}{z}+O\Big{(}\frac{1}{z^{2}}\Big{)},\ \ \text{as}\ \ z\to\infty,\]
_and_
\[\widehat{Q}^{(n)}=I+\frac{\widehat{Q}_{1}^{(n)}}{z}+O\Big{(}\frac{1}{z^{2}} \Big{)},\ \ \text{as}\ \ z\to\infty.\]
_Then the differences \(a_{n}-\widehat{a}_{n}\) and \(b_{n}^{2}-\widehat{b}_{n}^{2}\) can be expressed via_
\[a_{n}-\widehat{a}_{n}=\big{(}Q_{1}^{(n)}\big{)}_{11}-\big{(} \widehat{Q}_{1}^{(n)}\big{)}_{11}-\Big{(}\big{(}Q_{1}^{(n+1)}\big{)}_{11}- \big{(}\widehat{Q}_{1}^{(n+1)}\big{)}_{11}\Big{)}, \tag{3.7}\]
_and_
\[b_{n-1}^{2}-\widehat{b}_{n-1}^{2} =\Big{(}\big{(}Q_{1}^{(n)}\big{)}_{12}-\big{(}\widehat{Q}_{1}^{(n )}\big{)}_{12}\Big{)}\Big{(}\big{(}Q_{1}^{(n)}\big{)}_{21}-\big{(}Q_{1}^{(n+1) }\big{)}_{21}\Big{)} \tag{3.8}\] \[+\big{(}\widehat{Q}_{1}^{(n)}\big{)}_{12}\Big{[}\Big{(}\big{(}Q_ {1}^{(n)}\big{)}_{21}-\big{(}Q_{1}^{(n+1)}\big{)}_{21}\Big{)}-\Big{(}\big{(} \widehat{Q}_{1}^{(n)}\big{)}_{21}-\big{(}\widehat{Q}_{1}^{(n+1)}\big{)}_{21} \Big{)}\Big{]}.\]
The usefulness of Prop. 3.4 comes from the fact that \(Q_{1}^{(n)}-\widehat{Q}_{1}^{(n)}\) has a simple integral representation.
**Proposition 3.5**.: ([3, Prop. 4.9]) _The following formula holds:_
\[Q_{1}^{(n)}-\widehat{Q}_{1}^{(n)}=-\frac{1}{2\pi\mathrm{i}}\int_{\Sigma}Q_{-} ^{(n)}(s)(v^{(n)}(s)-\widehat{v}^{(n)}(s))[\widehat{Q}_{-}^{(n)}(s)]^{-1}ds. \tag{3.9}\]
Proof.: In the following we drop the superscript \((n)\) for better readability, i.e., write \(Q_{1}\) for \(Q_{1}^{(n)}\) and so on. Let us define the matrix-valued function \(X(z)=Q(z)[\widehat{Q}(z)]^{-1}\) for \(z\in\mathbb{C}\setminus\Sigma\). Note that
\[X(z)=I+\frac{Q_{1}-\widehat{Q}_{1}}{z}+O\Big{(}\frac{1}{z^{2}}\Big{)},\qquad \text{as }z\to\infty. \tag{3.10}\]
Moreover, \(X\) satisfies the jump condition
\[X_{+}(s)=X_{-}(s)v_{X}(s),\qquad s\in\Sigma,\]
where \(v_{X}=\widehat{Q}_{-}v\widehat{v}^{-1}\widehat{Q}_{-}^{-1}\). We claim that \(X_{\pm}\in L^{1}(\Sigma)\). To see this let us first introduce the analog of \(T\) for the weight function \(\widehat{w}\):
\[\widehat{T}(z)=\Big{(}\frac{2}{\widehat{F}_{\infty}}\Big{)}^{n\sigma_{3}} \widehat{Y}(z)\Big{(}\frac{\widehat{F}(z)}{\phi^{n}(z)}\Big{)}^{\sigma_{3}}.\]
Here \(\hat{Y}\) is the solution to the Fokas-Its-Kitaev problem for the weight function \(\widehat{w}\). Then for \(z\in\Omega_{0}\) (cf. Fig.3) we have
\[X(z)=T(z)[\widehat{T}(z)]^{-1}=O(1),\qquad z\in\Omega_{0},\quad z\to-1, \tag{3.11}\]
where we have used (3.3) and its analog (see [11, Sect. 2])
\[\widehat{T}(z)=\begin{pmatrix}O(|z+1|^{1/2})&O(|z+1|^{-1/2})\\ O(|z+1|^{1/2})&O(|z+1|^{-1/2})\end{pmatrix},\qquad z\to-1. \tag{3.12}\]
Note that the analog of (3.4) remains valid:
\[\widehat{Q}(z)=\begin{cases}\widehat{T}(z),\quad z\in\Omega_{0},\\ \\ \widehat{T}(z)\begin{pmatrix}1&0\\ -\frac{\widehat{F}^{2}}{\widehat{w}}(z)\phi^{-2n}(z)&1\end{pmatrix},\quad z\in\Omega_{1},\\ \\ \widehat{T}(z)\begin{pmatrix}1&0\\ \frac{\widehat{F}^{2}}{\widehat{w}}(z)\phi^{-2n}(z)&1\end{pmatrix},\qquad z\in\Omega_{2}.\end{cases}\]
Thus for \(z\in\Omega_{1}\cup\Omega_{2}\) we have
\[X(z)=T(z)\begin{pmatrix}1&0\\ \mp\big{(}\frac{F^{2}}{w}(z)-\frac{\widehat{F}^{2}}{\widehat{w}}(z)\big{)}\phi^{-2n}(z)&1\end{pmatrix}[\widehat{T}(z)]^{-1}=O(1),\] \[z\in\Omega_{1}\cup\Omega_{2},\quad z\to-1,\]
where the upper, resp. lower sign corresponds to \(\Omega_{1}\), resp. \(\Omega_{2}\). Apart from (3.3) and (3.12) we have used in addition (2.14). As \(X\) has at most logarithmic-type singularities for \(z\to+1\) and is bounded elsewhere on the contour, it follows that \(X_{\pm}\in L^{1}(\Sigma)\).
Next we can use the Sokhotski-Plemelj theorem to conclude that \(X\) can be represented as
\[X(z) =I+\frac{1}{2\pi\mathrm{i}}\int_{\Sigma}\frac{X_{+}(s)-X_{-}(s)}{ s-z}ds\] \[=I+\frac{1}{2\pi\mathrm{i}}\int_{\Sigma}\frac{Q_{-}(s)(v(s)[ \widehat{v}(s)]^{-1}-I)[\widehat{Q}_{-}(s)]^{-1}}{s-z}ds\] \[=I+\frac{1}{2\pi\mathrm{i}}\int_{\Sigma}\frac{Q_{-}(s)(v(s)- \widehat{v}(s))[\widehat{Q}_{-}(s)]^{-1}}{s-z}ds,\]
where we made use of the particular form of the jump matrices \(v\) and \(\widehat{v}\). This representation together with (3.10) implies (3.9).
Finally we recall the Legendre R-H problem taken from [3, Sect. 3.4]. While this R-H problem does not approximate the logarithmic R-H problem globally, it does so near the logarithmic singularity and additionally gives rise to a singular integral operator whose inverse is uniformly bounded as \(n\to\infty\) (see Theorem 4.2). The existence of an R-H problem with these properties will be crucial for the proof of Theorem 6.2. Note that the Szego function for the Legendre weight is just given by \(\widetilde{F}\equiv 1\).
### Legendre R-H problem
Find a \(2\times 2\) matrix-valued function \(\widetilde{Q}\colon\mathbb{C}\setminus\Sigma\to\mathbb{C}^{2\times 2}\) satisfying the following properties (see [3, Sect. 3.4]):
1. \(\widetilde{Q}(z)\) is analytic for \(z\in\mathbb{C}\setminus\Sigma\),
2. \(\widetilde{Q}\) satisfies the jump condition \[\widetilde{Q}_{+}(s)=\widetilde{Q}_{-}(s)\widetilde{v}(s),\qquad s\in\Sigma,\]
where
\[\widetilde{v}(s)=\begin{cases}\begin{pmatrix}1&0\\ \phi^{-2n}(s)&1\end{pmatrix},\quad\text{for $s\in\Sigma_{1}\cup\Sigma_{2}$},\\ \\ \begin{pmatrix}1&0\\ 2\phi^{-2n}(s)&1\end{pmatrix},\quad\text{for $s\in(1,1+\delta)$},\\ \\ \begin{pmatrix}0&1\\ -1&0\end{pmatrix},\qquad\text{for $s\in(-1,1)$},\end{cases}\]
3. \(\widetilde{Q}(z)=I+O(z^{-1})\), as \(z\to\infty\),
4. \(\widetilde{Q}(z)\) is bounded away from the points \(\pm 1\), and has the following behaviours near the points \(\pm 1\): \[\widetilde{Q}(z)=\begin{pmatrix}O(\log|z-1|)&O(\log|z-1|)\\ O(\log|z-1|)&O(\log|z-1|)\end{pmatrix},\qquad z\to+1\] and \[\widetilde{Q}(z)=\begin{pmatrix}O(\log|z+1|)&O(\log|z+1|)\\ O(\log|z+1|)&O(\log|z+1|)\end{pmatrix},\qquad z\to-1.\]
\(\Diamond\)
The weight function \(\widetilde{w}(x)=1\), \(x\in(-1,1)\) lies in the class of weight functions considered in [11], and it follows from the calculations therein that the solution \(\widetilde{Q}\) can be globally approximated with arbitrarily small errors. Moreover, unlike the logarithmic and model R-H problems 3.3 and 3.4, the Legendre R-H problem can be stated with a weaker \(L^{2}\)-condition instead of condition (iv) (cf. [3, Prop. 3.2]), as follows:
**Proposition 3.6**.: _The matrix-valued function \(\widetilde{Q}\) is the unique solution of the Legendre R-H problem with the condition_ (iv) _being replaced by the condition \(\widetilde{Q}_{\pm}\in L^{2}(\Sigma)\)._
Proof.: Let \(L\) be another solution of the Legendre R-H problem, but with \(L_{\pm}\in L^{2}(\Sigma)\) instead of condition (iv). Then \(\det L\) is a holomorphic function in \(\mathbb{C}\setminus\Sigma\) with \(\det L_{+}=\det L_{-}\) on \(\Sigma\) (all jump matrices have unit determinant) and \(\det L_{\pm}\in L^{1}(\Sigma)\). By Morera's theorem it follows that \(\det L\) is in fact an entire function with \(\lim_{z\to\infty}\det L(z)=1\). Hence by Liouville's theorem \(\det L\equiv 1\) in \(\mathbb{C}\).
We conclude that \(L\) is invertible in \(\mathbb{C}\setminus\Sigma\) and we can define \(\widetilde{Q}[L]^{-1}\). As with the determinant, \(\widetilde{Q}[L]^{-1}\) will have no jump across the contour \(\Sigma\): \((\widetilde{Q}[L]^{-1})_{+}=(\widetilde{Q}[L]^{-1})_{-}\). Moreover, by the \(L^{2}\)-condition for \(\widetilde{Q}\) and \(L\), we have that \((\widetilde{Q}[L]^{-1})_{\pm}\in L^{1}(\Sigma)\). It follows that \(\widetilde{Q}[L]^{-1}\) can be extended to an entire function. By Liouville's theorem we have that \(\widetilde{Q}[L]^{-1}\equiv\lim_{z\to\infty}\widetilde{Q}(z)[L(z)]^{-1}=I\), hence \(\widetilde{Q}=L\)
## 4. An explicit formula for the Legendre resolvent
In this section we will associate to the Legendre R-H problem a singular integral operator. First let us define the Cauchy operator on \(\Sigma\) by
\[\mathcal{C}_{\Sigma}\colon L^{2}(\Sigma)\to\mathcal{O}(\mathbb{C}\setminus\Sigma),\quad f(s)\mapsto\mathcal{C}_{\Sigma}(f)(z)=\frac{1}{2\pi\mathrm{i}}\int_{ \Sigma}\frac{f(s)}{s-z}ds,\]
where \(\mathcal{O}(\mathbb{C}\setminus\Sigma)\) denotes the set of analytic functions on the open set \(\mathbb{C}\setminus\Sigma\). For \(f\in L^{2}(\Sigma)\) we further define the two Cauchy boundary operators by
\[\mathcal{C}_{\Sigma}^{\pm}(f)(s)=\lim_{z\to s\pm}\mathcal{C}_{\Sigma}(f)(z). \tag{4.1}\]
In our setting the curve \(\Sigma\) is clearly a Carleson curve and hence the limit in (4.1) exists for almost all \(s\in\Sigma\) and satisfies \(\mathcal{C}_{\Sigma}^{\pm}(f)\in L^{2}(\Sigma)\); see [2] for more details. The two Cauchy boundary operators thus define maps
\[\mathcal{C}_{\Sigma}^{\pm}\colon L^{2}(\Sigma)\to L^{2}(\Sigma).\]
These are bounded operators on \(L^{2}(\Sigma)\), cf. Theorem 4.1 below. Note that \(\mathcal{C}_{\Sigma}^{\pm}\) satisfy the important identity \(\mathcal{C}_{\Sigma}^{+}-\mathcal{C}_{\Sigma}^{-}=1\).
More generally the mapping in (4.1) induces a bounded operator on certain weighted \(L^{p}\)-spaces. To be precise let \(\Gamma\) be an oriented composed locally rectifiable curve (see [2, Sect. 1]) and \(p\in(1,\infty)\). Given a weight function \(r\colon\Gamma\to\mathbb{R}\), \(r\geq 0\), define the Banach space \(L^{p}(\Gamma,r)\) of all measurable functions \(f\) on \(\Gamma\), such that the norm
\[\|f\|_{L^{p}(\Gamma,r)}=\Big{(}\int_{\Gamma}|f(s)|^{p}r(s)^{p}|ds|\Big{)}^{ \frac{1}{p}}\]
remains finite. Note that there is a \(p\)-th power of \(r\) in the integral. We say that \(r\) is a _Muckenhoupt weight_ if \(r\in L^{p}_{loc}(\Gamma)\), \(1/r\in L^{q}_{loc}(\Gamma)\) and
\[\sup_{s\in\Gamma}\,\sup_{\rho>0}\Big{(}\frac{1}{\rho}\int_{\Gamma\cap D(s,\rho )}r(s^{\prime})^{p}|ds^{\prime}|\Big{)}^{\frac{1}{p}}\Big{(}\frac{1}{\rho} \int_{\Gamma\cap D(s,\rho)}r(s^{\prime})^{-q}|ds^{\prime}|\Big{)}^{\frac{1}{q} }<\infty,\]
where \(D(s,\rho)\) is the open disc around \(s\) of radius \(\rho\) and \(1/p+1/q=1\). For any \(p\in(1,\infty)\) we denote the set of all Muckenhoupt weights by \(A_{p}(\Gamma)\). The following results holds:
**Theorem 4.1**.: _Let \(p\in(1,\infty)\) and let \(\Gamma\) be an oriented composed locally rectifiable curve. Assume \(r\colon\Gamma\to\mathbb{R}\), \(r\geq 0\) is a given weight. Then the mappings_
\[f\mapsto\lim_{z\to s\pm}\frac{1}{2\pi\mathrm{i}}\int_{\Gamma}\frac{f(s^{\prime})}{s^{\prime}-z}\,ds^{\prime} \tag{4.2}\]
_define bounded operators from \(L^{p}(\Gamma,r)\to L^{p}(\Gamma,r)\) if and only if \(r\) is a Muckenhoupt weight, i.e., \(r\in A_{p}(\Gamma)\)._
The proof can be found in [2, Theorem 4.15]; for more material on this topic with emphasis on R-H theory see [13]. We will abuse notation and denote the mapping (4.2) by \(\mathcal{C}_{\Gamma}^{\pm}\) irrespective of the choice of domain \(L^{p}(\Gamma,r)\).
Next let us define the operator
\[\mathcal{C}_{\widetilde{v}}\colon L^{2}(\Sigma)\to L^{2}(\Sigma),\quad f \mapsto\mathcal{C}_{\Sigma}^{-}(f(\widetilde{v}-I)),\]
and consider the following singular integral equation in \(L^{2}(\Sigma)\)
\[(1-\mathcal{C}_{\widetilde{v}})\widetilde{\mu}=I. \tag{4.3}\]
Note that as the contour is bounded, \(I\) is indeed an element of \(L^{2}(\Sigma)\). Equation (4.3) is in fact equivalent to the Legendre R-H problem. More explicitly, any solution \(\widetilde{\mu}\) will give rise to a solution \(L=I+\mathcal{C}_{\Sigma}(\widetilde{\mu}(\widetilde{v}-I))\) of the Legendre R-H problem with the condition \(L_{\pm}\in L^{2}(\Sigma)\) instead of condition (iv), as can be verified by direct computation. By Prop. 3.6 the solution to the Legendre R-H problem 3.5 exists and is unique, hence we must have \(\widetilde{Q}=L\) implying
\[\widetilde{Q}=I+\mathcal{C}_{\Sigma}(\widetilde{\mu}(\widetilde{v}-I)). \tag{4.4}\]
Moreover, from the Sokhotski-Plemelj formula
\[\widetilde{Q}=I+\mathcal{C}_{\Sigma}(\widetilde{Q}_{+}-\widetilde{Q}_{-})=I+ \mathcal{C}_{\Sigma}(\widetilde{Q}_{-}(\widetilde{v}-I)),\]
it follows, after taking the minus limit to the contour \(\Sigma\), that \(\widetilde{\mu}:=\widetilde{Q}_{-}\) is indeed a solution to (4.3). Moreover, any solution \(\widetilde{\mu}\) of (4.3) must be equal to \(\widetilde{Q}_{-}\) as can be seen from (4.4) and
\[\widetilde{Q}_{-}=I+\mathcal{C}_{\Sigma}^{-}(\widetilde{\mu}(\widetilde{v}-I ))=I+\mathcal{C}_{\widetilde{v}}\widetilde{\mu}=\widetilde{\mu}.\]
Together these arguments imply that (4.3) has a unique solution and hence \(1-\mathcal{C}_{\widetilde{v}}\) must be injective. In [3] it has been shown that \(1-\mathcal{C}_{\widetilde{v}}\) is in fact uniformly invertible for \(n\to\infty\), as described in the following result:
**Theorem 4.2**.: ([3, Theorem 4.5]) _The operator \(1-\mathcal{C}_{\widetilde{v}}\) is invertible for all sufficiently large \(n\). Moreover, the operator norm of \((1-\mathcal{C}_{\widetilde{v}})^{-1}\colon L^{2}(\Sigma)\to L^{2}(\Sigma)\) remains uniformly bounded as \(n\to\infty\)._
Theorem 4.2 played a central role in [3]. The uniform invertibility of the operator \(1-\mathcal{C}_{\widetilde{v}}\) will also be crucial in the approach presented here and is the motivation for introducing the Legendre R-H problem in addition to the logarithmic and model R-H problems. However in order to use Theorem 4.2 we will need to derive an explicit representation of the operator \((1-\mathcal{C}_{\widetilde{v}})^{-1}\). To accomplish this, we recall the definition of an _inhomogeneous R-H problem of the first kind_, as introduced in [8, Sect. 2.6]. This notion has been instrumental in the proof of Theorem 4.2 in [3]. In the following, \(h\) will denote a matrix-valued function on a contour \(\Gamma\), with \(h^{\pm 1}\in L^{\infty}(\Gamma)\).
### Inhomogeneous R-H problem of the first kind
For a given \(g\in L^{2}(\Gamma)\), one seeks an \(f\in L^{2}(\Gamma)\), such that \(m_{\pm}=\mathcal{C}_{\Gamma}^{\pm}(f)+g\) satisfies the jump relation:
\[m_{+}(s)=m_{-}(s)h(s),\qquad s\in\Gamma. \tag{4.5}\]
A similar notion of an inhomogeneous R-H problem of the second kind can be found in [8, Sect. 2], but will not be needed here.
The importance of the above inhomogeneous R-H problem comes from the following result proven in [8, Prop. 2.6]:
**Theorem 4.3**.: _The mapping \(1-\mathcal{C}_{h}\colon L^{2}(\Gamma)\to L^{2}(\Gamma)\) is invertible if and only if the corresponding inhomogeneous R-H problem of the first kind is uniquely solvable for each \(g\in L^{2}(\Gamma)\). Moreover, the inverse satisfies \(\|(1-\mathcal{C}_{h})^{-1}\|_{L^{2}(\Gamma)\to L^{2}(\Gamma)}\leq c\) if and only if \(\|m_{-}\|_{L^{2}(\Gamma)}\leq c\|g\|_{L^{2}(\Gamma)}\) for all \(g\in L^{2}(\Gamma)\) and the same constant \(c>0\)._
**Remark 4.4**.: _Note that by (4.5), \(m_{+}=m_{-}h\) on \(\Gamma\), hence \(\|m_{-}\|_{L^{2}(\Gamma)}\leq c\|g\|_{L^{2}(\Gamma)}\) implies \(\|m_{\pm}\|_{L^{2}(\Gamma)}\leq c^{\prime}\|g\|_{L^{2}(\Gamma)}\) with \(c^{\prime}=c\|h\|_{L^{\infty}(\Gamma)}\)._
As a corollary we obtain:
**Corollary 4.5**.: _Given a sequence \(h_{n}\) of matrix-valued functions with \(h_{n}^{\pm 1}\in L^{\infty}(\Gamma)\), the operator \((1-\mathcal{C}_{h_{n}})^{-1}\) is uniformly bounded if and only if the corresponding inhomogeneous R-H problems are uniquely solvable with \(\|m_{-}\|_{L^{2}(\Gamma)}\leq c\|g\|_{L^{2}(\Gamma)}\), and \(c>0\) independent of \(n\in\mathbb{N}\) and \(g\in L^{2}(\Gamma)\)._
Note that Theorem 4.2 is proven in [3, Sect. 4.2] via Cor. 4.5. We will use Theorems 4.2 and 4.3 to derive an explicit expression for the inverse \((1-\mathcal{C}_{\widetilde{v}})^{-1}\) in Prop. 4.6 below. This allows us to identify locally the contribution of the logarithmic singularity to the uniform boundedness of \((1-\mathcal{C}_{\widetilde{v}})^{-1}\), which is central to our approach as it enables us to prove Theorem 6.2.
**Proposition 4.6**.: _The inverse of \(1-\mathcal{C}_{\widetilde{v}}\) has the explicit form_
\[(1-\mathcal{C}_{\widetilde{v}})^{-1}\colon L^{2}(\Sigma)\to L^{2}(\Sigma), \quad g\mapsto g+\mathcal{C}_{\Sigma}^{-}(g(\widetilde{v}-I)\widetilde{Q}_{+}^ {-1})\widetilde{Q}_{-}. \tag{4.6}\]
Proof.: From Theorem 4.3 we know that for \(g\in L^{2}(\Sigma)\), the (unique) solvability of the equation
\[(1-\mathcal{C}_{\widetilde{v}})\psi=g,\quad g\in L^{2}(\Sigma), \tag{4.7}\]
in \(L^{2}(\Sigma)\) is equivalent to the (unique) solvability of the following inhomogeneous R-H problem:
### Inhomogeneous Legendre R-H problem
For a given \(g\in L^{2}(\Sigma)\), one seeks an \(f\in L^{2}(\Sigma)\), such that \(m_{\pm}=\mathcal{C}_{\Sigma}^{\pm}(f)+g\) satisfies the jump relation:
\[m_{+}(s)=m_{-}(s)\widetilde{v}(s),\qquad s\in\Sigma.\]
Note that Theorem 4.2 together with Cor. 4.5 imply that there exists a constant \(c\) independent of \(n\in\mathbb{N}\) and \(g\in L^{2}(\Sigma)\) such that the inhomogeneous Legendre R-H problem has a unique solution \(m_{\pm}\) with \(\|m_{-}\|_{L^{2}(\Sigma)}\leq c\|g\|_{L^{2}(\Sigma)}\). We shall briefly recall the exact relation between (4.7) and the inhomogeneous Legendre R-H problem (cf. [8, Sect. 2]), as it will be needed later in the proof. First, note that if we have a solution \(m_{\pm}\) to the inhomogeneous Legendre R-H problem, we must have \(f=m_{+}-m_{-}=m_{-}(\widetilde{v}-I)\). If we now set \(\psi:=m_{-}=\mathcal{C}_{\Sigma}^{-}(f)+g\in L^{2}(\Sigma)\), it follows that
\[(1-\mathcal{C}_{\widetilde{v}})\psi =\psi-\mathcal{C}_{\Sigma}^{-}(\psi(\widetilde{v}-I))\] \[=\mathcal{C}_{\Sigma}^{-}(f)+g-\mathcal{C}_{\Sigma}^{-}(m_{-}( \widetilde{v}-I)))\] \[=g.\]
On the other hand, having a solution \(\psi\) to the integral equation (4.7), we can define \(m_{\pm}:=\mathcal{C}_{\Sigma}^{\pm}(\psi(\widetilde{v}-I))+g\) and compute using the Sokhotski-Plemelj (S-P) formula
\[\begin{split} m_{+}&=\mathcal{C}_{\Sigma}^{+}(\psi( \widetilde{v}-I))+g\\ &\stackrel{{\text{S-P}}}{{=}}\psi(\widetilde{v}-I)+ \mathcal{C}_{\widetilde{v}}(\psi)+g\\ &=\psi\widetilde{v}\\ &=m_{-}\widetilde{v},\end{split} \tag{4.8}\]
as \(m_{-}=\mathcal{C}_{\widetilde{v}}\psi+g=\psi\). We can now derive an expression for the operator \((1-\mathcal{C}_{\widetilde{v}})^{-1}\). Assume \(g\in L^{2}(\Sigma)\) is given and let \(\psi\in L^{2}(\Sigma)\) be the unique solution of \((1-\mathcal{C}_{\widetilde{v}})\psi=g\). Then \(m_{\pm}=\mathcal{C}_{\Sigma}^{\pm}(f)+g\) with \(f=\psi(\widetilde{v}-I)\) solves the corresponding inhomogeneous
R-H problem, as we have seen in Eq. (4.8). We want to find an expression for \(\psi=m_{-}\) in terms of \(g\). To derive (4.6) we start with
\[\begin{split} m_{+}&=m_{-}\widetilde{v}\\ \mathcal{C}_{\Sigma}^{+}(f)+g&=(\mathcal{C}_{\Sigma}^ {-}(f)+g)\widetilde{v}\\ (\mathcal{C}_{\Sigma}^{+}(f)+g)\widetilde{Q}_{+}^{-1}& =(\mathcal{C}_{\Sigma}^{-}(f)+g)\widetilde{Q}_{-}^{-1}\\ \mathcal{C}_{\Sigma}^{+}(f)\widetilde{Q}_{+}^{-1}-\mathcal{C}_{ \Sigma}^{-}(f)\widetilde{Q}_{-}^{-1}&=g(\widetilde{Q}_{-}^{-1}- \widetilde{Q}_{+}^{-1})\end{split} \tag{4.9}\]
Observe that the left- and right-hand sides in the last line might not lie in \(L^{2}(\Sigma)\). However, using property (iv) of \(\widetilde{Q}\) from the R-H problem 3.5, we see that they lie in \(L^{2-\epsilon}(\Sigma)\) for any \(\epsilon\in(0,1)\). Hence, if we define \(H=\mathcal{C}_{\Sigma}(f)\widetilde{Q}^{-1}\), we see that \(H\) is analytic in \(\mathbb{C}\setminus\Sigma\), satisfies \(H_{\pm}\in L^{2-\epsilon}(\Sigma)\) and vanishes at \(\infty\). By the Sokhotski-Plemelj formula it follows that \(H=\mathcal{C}_{\Sigma}(H_{+}-H_{-})\). Thus, applying \(\mathcal{C}_{\Sigma}\), which is understood to act on the space \(L^{2-\epsilon}(\Sigma)\), in the last line of (4.9), we obtain:
\[\begin{split}\mathcal{C}_{\Sigma}(f)\widetilde{Q}^{-1}& =\mathcal{C}_{\Sigma}(\underbrace{g(\widetilde{Q}_{-}^{-1}-\widetilde{Q}_{+}^ {-1})}_{g(\widetilde{v}-I)\widetilde{Q}_{+}^{-1}})\\ \mathcal{C}_{\Sigma}(f)&=\mathcal{C}_{\Sigma}(g( \widetilde{v}-I)\widetilde{Q}_{+}^{-1})\widetilde{Q}\\ \mathcal{C}_{\Sigma}^{-}(f)+g&=g+\mathcal{C}_{\Sigma }^{-}(g(\widetilde{v}-I)\widetilde{Q}_{+}^{-1})\widetilde{Q}_{-}\\ \psi&=g+\mathcal{C}_{\Sigma}^{-}(g(\widetilde{v}-I) \widetilde{Q}_{+}^{-1})\widetilde{Q}_{-}.\end{split} \tag{4.10}\]
Note here that \(\mathcal{C}_{\Sigma}^{-}(g(\widetilde{v}-I)\widetilde{Q}_{+}^{-1})\widetilde{ Q}_{-}\) is a priori a function in \(L^{2-\epsilon}(\Sigma)\), however as \(\psi=(1-\mathcal{C}_{\widetilde{v}})^{-1}g\in L^{2}(\Sigma)\) by Theorem 4.2, we conclude that indeed \(\mathcal{C}_{\Sigma}^{-}(g(\widetilde{v}-I)\widetilde{Q}_{+}^{-1})\widetilde{ Q}_{-}=\psi-g\in L^{2}(\Sigma)\).
We have thus proved that \((1-\mathcal{C}_{\widetilde{v}})^{-1}\) must indeed have the form stated in (4.6).
## 5. Local parametrices around the point \(z=-1\)
In the following we will use appropriate local parametrices \(P\), \(\widehat{P}\) and \(\widetilde{P}\) to invert the three R-H problems introduced in Sect. 3, locally near \(z=-1\). We refer to the resulting modified R-H problems as \({}^{\star}\)R-H problems.
The explicit construction of the local parametrices is taken from [11, Eq. 6.50] and can be given in terms of Bessel functions and the Szego functions corresponding to the three weights. To define these parametrices, we first need a local \(n\)-dependent change of variables \(z\to\zeta\). Following [11, Sect. 6] we choose a sufficiently small neighbourhood \(U\) of the point \(z=-1\) and define the mapping
\[\xi\colon U\to\mathbb{C},\qquad z\mapsto\xi(z)=\frac{\log^{2}(-\phi(z))}{4}. \tag{5.1}\]
Note that for \(s\in U\cap[-1,1]\), \(\log^{2}(-\phi_{+}(s))=\log^{2}(-\phi_{-}^{-1}(s))=\log^{2}(-\phi_{-}(s))\) implying that \(\xi\) is indeed well-defined and holomorphic.
Now using (2.5) we see that \(\xi^{\prime}(-1)=-1/2\), meaning that for \(U\) sufficiently small, \(\xi\) will define a biholomorphic mapping between \(U\) and its image \(\xi(U)\). Introduce now \(\zeta=\zeta^{(n)}(z)=n^{2}\xi(z)\) for \(z\in U\) together with \(\Sigma_{\Psi}^{(n)}=n^{2}\xi(U\cap\Sigma)\). We can assume that \(\Sigma\) has been chosen such that \(\Sigma_{\Psi}^{(n)}\) can be extended to \(\Sigma_{\Psi}\supset\Sigma_{\Psi}^{(n)}\) consisting of three straight line segments \(\gamma_{i}\), \(i=1,2,3\), originating from \(\zeta=0\) at the angles \(\pm\frac{2\pi}{3}\) and \(\pi\), see Fig. 5. Accordingly, we will regard \(\zeta\) as a variable in the whole complex plane.
Figure 4. The change of variables \(z\to\xi\)
Figure 5. Contour for the local parametrix problems in the \(\zeta\)-plane
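As a quick numerical confirmation of (5.1) and of the statement \(\xi^{\prime}(-1)=-1/2\), one may evaluate \(\xi\) at a point close to \(z=-1\) (a minimal sketch, illustrative only):

```python
import numpy as np

phi = lambda z: z + np.sqrt(z - 1) * np.sqrt(z + 1)
xi = lambda z: np.log(-phi(z)) ** 2 / 4.0          # the map (5.1)

h = 1e-6 * np.exp(1j * np.pi / 3)                  # a point close to z = -1
print(xi(-1 + h) / h)                              # approximately -0.5, i.e. xi'(-1) = -1/2
```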
Next we shall define two piecewise holomorphic functions \(\Psi_{\nu}\colon\mathbb{C}\setminus\Sigma_{\Psi}\to\mathbb{C}^{2\times 2}\) for \(\nu=0,1\) (see [11, Eq. 6.51]):
\[\Psi_{\nu}(\zeta)=\begin{cases}\begin{pmatrix}I_{\nu}(2\zeta^{1/2})&-\frac{\mathrm{i}}{\pi}K_{\nu}(2\zeta^{1/2})\\ -2\pi\mathrm{i}\zeta^{1/2}I^{\prime}_{\nu}(2\zeta^{1/2})&-2\zeta^{1/2}K^{\prime}_{\nu}(2\zeta^{1/2})\end{pmatrix},\qquad|\arg\zeta|<\frac{2\pi}{3},\\ \\ \begin{pmatrix}\frac{1}{2}H^{(1)}_{\nu}(2(-\zeta)^{1/2})&-\frac{1}{2}H^{(2)}_{\nu}(2(-\zeta)^{1/2})\\ -\pi\zeta^{1/2}(H^{(1)}_{\nu})^{\prime}(2(-\zeta)^{1/2})&\pi\zeta^{1/2}(H^{(2)}_{\nu})^{\prime}(2(-\zeta)^{1/2})\end{pmatrix}\mathrm{e}^{\frac{1}{2}\nu\pi\mathrm{i}\sigma_{3}},\\ \qquad\qquad\frac{2\pi}{3}<\arg\zeta<\pi,\\ \\ \begin{pmatrix}\frac{1}{2}H^{(2)}_{\nu}(2(-\zeta)^{1/2})&\frac{1}{2}H^{(1)}_{\nu}(2(-\zeta)^{1/2})\\ \pi\zeta^{1/2}(H^{(2)}_{\nu})^{\prime}(2(-\zeta)^{1/2})&\pi\zeta^{1/2}(H^{(1)}_{\nu})^{\prime}(2(-\zeta)^{1/2})\end{pmatrix}\mathrm{e}^{-\frac{1}{2}\nu\pi\mathrm{i}\sigma_{3}},\\ \qquad\qquad-\pi<\arg\zeta<-\frac{2\pi}{3}.\end{cases}\]
Here the functions \(I_{\nu}\), \(K_{\nu}\) with \(\nu\in\mathbb{C}\) are the familiar _modified Bessel functions_. Generally, these are holomorphic functions in the domain \(z\in\mathbb{C}\setminus(-\infty,0]\) and have a branch cut along the negative real axis. In the special case \(\nu\in\mathbb{Z}\), \(I_{\nu}\) is entire.
Analogously, the functions \(H^{(1)}_{\nu}\), \(H^{(2)}_{\nu}\) with \(\nu\in\mathbb{C}\) are holomorphic for \(z\in\mathbb{C}\setminus(-\infty,0]\) and have a branch cut on the negative real axis. They are the _Bessel functions of the third kind_, also known as the _Hankel functions_. Properties of these function can be found in [17, Sect. 10]. In the following we will be interested in the behaviour of \(\Psi_{\nu}\) as \(\zeta\to 0\), which can be deduced from the following lemma:
**Lemma 5.1**.: ([17, Sect. 10]) _The following asymptotic formulas hold uniformly for \(\zeta\to 0\):_
\[\begin{split} I_{0}(2\zeta^{1/2}),\,K_{0}(2\zeta^{1/2}),\,H^{(1) }_{0}(2(-\zeta)^{1/2}),\,H^{(2)}_{0}(2(-\zeta)^{1/2})&=O(\log| \zeta|),\\ I^{\prime}_{0}(2\zeta^{1/2}),\,K^{\prime}_{0}(2\zeta^{1/2}),\,(H^{(1)}_{0} )^{\prime}(2(-\zeta)^{1/2}),\,(H^{(2)}_{0})^{\prime}(2(-\zeta)^{1/2})& =O\Big{(}\frac{1}{|\zeta|^{1/2}}\Big{)},\end{split} \tag{5.2}\]
_and_
\[\begin{split} I_{1}(2\zeta^{1/2}),\,K_{1}(2\zeta^{1/2}),\,H^{(1 )}_{1}(2(-\zeta)^{1/2}),\,H^{(2)}_{1}(2(-\zeta)^{1/2})&=O\Big{(} \frac{1}{|\zeta|^{1/2}}\Big{)},\\ I^{\prime}_{1}(2\zeta^{1/2}),\,K^{\prime}_{1}(2\zeta^{1/2}),\,(H^{(1) }_{1})^{\prime}(2(-\zeta)^{1/2}),\,(H^{(2)}_{1})^{\prime}(2(-\zeta)^{1/2})& =O\Big{(}\frac{1}{|\zeta|}\Big{)}.\end{split} \tag{5.3}\]
_The following asymptotic formulas hold uniformly for \(\zeta\to\infty\) in the prescribed sectors for any \(\delta>0\) and \(\nu=0,1\):_
\[\begin{split} I_{\nu}(2\zeta^{1/2})\mathrm{e}^{-2\zeta^{1/2}},\ I^{\prime}_{\nu}(2\zeta^{1/2})\mathrm{e}^{-2\zeta^{1/2}}&=O\Big{(}\frac{1}{|\zeta|^{1/4}}\Big{)},\qquad|\arg\zeta|<\pi-\delta,\\ K_{\nu}(2\zeta^{1/2})\mathrm{e}^{2\zeta^{1/2}},\ K^{\prime}_{\nu}(2\zeta^{1/2})\mathrm{e}^{2\zeta^{1/2}}&=O\Big{(}\frac{1}{|\zeta|^{1/4}}\Big{)},\qquad|\arg\zeta|<\pi-\delta,\\ H^{(1)}_{\nu}(2(-\zeta)^{1/2})\mathrm{e}^{-2\mathrm{i}(-\zeta)^{1/2}},\ (H^{(1)}_{\nu})^{\prime}(2(-\zeta)^{1/2})\mathrm{e}^{-2\mathrm{i}(-\zeta)^{1/2}}&=O\Big{(}\frac{1}{|\zeta|^{1/4}}\Big{)},\qquad\delta<|\arg\zeta|\leq\pi,\\ H^{(2)}_{\nu}(2(-\zeta)^{1/2})\mathrm{e}^{2\mathrm{i}(-\zeta)^{1/2}},\ (H^{(2)}_{\nu})^{\prime}(2(-\zeta)^{1/2})\mathrm{e}^{2\mathrm{i}(-\zeta)^{1/2}}&=O\Big{(}\frac{1}{|\zeta|^{1/4}}\Big{)},\qquad\delta<|\arg\zeta|\leq\pi.\end{split} \tag{5.4}\]
_In the formulas (5.2)-(5.4), \((\cdot)^{1/2}\) denotes the principal branch._
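The small-\(\zeta\) orders in (5.2)–(5.3), as well as the unit-determinant property of \(\Psi_{\nu}\) relevant for the parametrices constructed below, can be checked with standard Bessel routines. A brief, purely illustrative Python sketch using `scipy.special`:

```python
import numpy as np
from scipy.special import iv, kv, ivp, kvp

for r in (1e-2, 1e-4, 1e-6):
    zeta = r * np.exp(1j * np.pi / 4)        # a point with |arg zeta| < 2*pi/3
    u = 2.0 * np.sqrt(zeta)                   # principal branch
    # K_0(u) ~ -(1/2) log(zeta) and zeta*K_1'(u) -> -1/4 as zeta -> 0, cf. (5.2)-(5.3)
    print(abs(kv(0, u) / np.log(zeta)), zeta * kvp(1, u))
    # determinant of the first branch of Psi_1: equals 1 by a Wronskian identity
    Psi = np.array([[iv(1, u), -1j / np.pi * kv(1, u)],
                    [-2j * np.pi * np.sqrt(zeta) * ivp(1, u),
                     -2.0 * np.sqrt(zeta) * kvp(1, u)]])
    print(abs(np.linalg.det(Psi) - 1.0))      # ~ 0
```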
In [11, Sect. 6] it is shown that \(\Psi_{\nu}\) satisfies the following jump conditions across the contours \(\gamma_{i}\):
\[\Psi_{\nu,+}(\zeta)=\Psi_{\nu,-}(\zeta)\begin{cases}\begin{pmatrix}1&0\\ \mathrm{e}^{\nu\pi\mathrm{i}}&1\end{pmatrix},&\zeta\in\gamma_{1},\\ \\ \begin{pmatrix}0&1\\ -1&0\end{pmatrix},&\zeta\in\gamma_{2},\\ \\ \begin{pmatrix}1&0\\ \mathrm{e}^{-\nu\pi\mathrm{i}}&1\end{pmatrix},&\zeta\in\gamma_{3}.\end{cases}\]
In the following we will use \(\Psi_{\nu}\), \(\nu=0,1\) to write down three local parametrices \(P\), \(\widehat{P}\), \(\widetilde{P}\) around \(z=-1\) for the three R-H problems defined in Section 3 (see [11, Eq. 6.52]):
\[P(z) =E(z)(2\pi n)^{\sigma_{3}/2}\Psi_{1}(n^{2}\xi(z))[-\phi(z)]^{-n \sigma_{3}}\Big{(}\frac{F(z)}{W(z)}\Big{)}^{\sigma_{3}},\] \[\widehat{P}(z) =\widehat{E}(z)(2\pi n)^{\sigma_{3}/2}\Psi_{1}(n^{2}\xi(z))[- \phi(z)]^{-n\sigma_{3}}\Big{(}\frac{\widehat{F}(z)}{\widetilde{W}(z)}\Big{)} ^{\sigma_{3}},\qquad z\in U\setminus\Sigma,\] \[\widetilde{P}(z) =\widetilde{E}(z)(2\pi n)^{\sigma_{3}/2}\Psi_{0}(n^{2}\xi(z))[- \phi(z)]^{-n\sigma_{3}}. \tag{5.5}\]
Here \(W=\sqrt{-w}\), \(\widetilde{W}=\sqrt{-\widehat{w}}\) are chosen to have a branch cut on \((-1,\infty)\cap U\) and to be positive on \((-\infty,-1)\cap U\). The matrix-valued functions \(E\), \(\widehat{E}\) and \(\widetilde{E}\) are in fact holomorphic for \(z\) in \(U\). More explicitly we have (see [11, Eq. 6.53])
\[E(z)=N(z)\Big{(}\frac{W(z)}{F(z)}\Big{)}^{\sigma_{3}}\frac{1}{\sqrt{2}} \begin{pmatrix}1&\mathrm{i}\\ \mathrm{i}&1\end{pmatrix}\xi(z)^{\sigma_{3}/4}, \tag{5.6}\]
which is in fact holomorphic for \(z\in U\). Here \(N\) is the outer parametrix solution
\[N(z)=\begin{pmatrix}\frac{a(z)+a(z)^{-1}}{2}&\qquad\frac{a(z)-a(z)^{-1}}{2 \mathrm{i}}\\ \frac{a(z)-a(z)^{-1}}{-2\mathrm{i}}&\qquad\frac{a(z)+a(z)^{-1}}{2}\end{pmatrix}, \tag{5.7}\]
where
\[a(z)=\Big{(}\frac{z-1}{z+1}\Big{)}^{1/4}\]
with a branch cut on \((-1,1)\) and \(a(\infty)=1\). Similar formulae can be obtained for \(\widehat{E}\) and \(\widetilde{E}\) by replacing the pair \((F,W)\) in (5.6) with \((\widehat{F},\widetilde{W})\) and with \((\widetilde{F}\equiv 1,1)\), respectively. Crucially however, the outer parametrix \(N\) is the same for all three problems. One can check that the determinants of all three parametrices are constant equal to \(1\) inside \(U\), cf. [11, Sect. 7]. Furthermore, \(E\), \(\widehat{E}\) and \(\widetilde{E}\) are analytic and bounded in \(U\), the singularity of \(a(z)\) at \(z=-1\) being compensated by the factor \(\xi(z)^{\sigma_{3}/4}\).
**Lemma 5.2**.: _The matrix-valued functions \(P\), \(\widehat{P}\) and \(\widetilde{P}\) defined in \(U\setminus\Sigma\), satisfy the following conditions:_
1. _For_ \(s\in U\cap\Sigma\)_,_ (5.8) \[P_{+}(s)=P_{-}(s)v(s),\quad\widehat{P}_{+}(s)=\widehat{P}_{-}(s)\widehat{v} (s),\quad\widetilde{P}_{+}(s)=\widetilde{P}_{-}(s)\widetilde{v}(s).\]
2. _For_ \(s\in\partial U\) _(see_ _[_11_, Eq._ 6.41_]__),_ (5.9) \[P(s),\widehat{P}(s),\widetilde{P}(s)=N(s)+O(n^{-1}).\]
3. _For_ \(z\in U\) _we have uniformly (see_ _[_11_, Eq._ 7.10_]__)_ (5.10) \[\widetilde{Q}(z)[\widetilde{P}(z)]^{-1}=I+O(n^{-1}),\qquad\widehat{Q}(z)[ \widehat{P}(z)]^{-1}=I+O(n^{-1}).\]
4. _For_ \(z\in U\)_,_ (5.11) \[P(z),\widehat{P}(z)=O(\max\{|z+1|^{-1/4},n^{-1/2}|z+1|^{-1/2}\}),\] _and_ (5.12) \[\widetilde{P}(z)=O(|z+1|^{-1/4})\] _uniformly as_ \(n\to\infty\)_._
Proof.: A detailed derivation of the local parametrices can be found in [11, Sect. 6] together with a proof of properties \((i)\), \((ii)\); for property \((iii)\) see [11, Sect. 7]. Note that while the weight function \(w\) does not fall into the class of weight functions considered in [11] due to the logarithmic singularity at \(z=+1\), the local construction and estimation of the left parametrix near \(z=-1\) found therein remain unchanged.
Regarding point \((iv)\), we start with the properties of \(P\) and \(\widehat{P}\). Note that \(E(z)\) and \(\widehat{E}(z)\) are holomorphic, \(n\)-independent and have unit determinants; hence it is enough to consider \(E^{-1}(z)P(z)\) and \(\widehat{E}^{-1}(z)\widehat{P}(z)\) instead of \(P(z)\) and \(\widehat{P}(z)\) in (5.11). Similarly, it follows from (2.7) and (2.12) that \(\big{(}\frac{F}{W}\big{)}^{\sigma_{3}}\) and \(\big{(}\frac{\widehat{F}}{\widetilde{W}}\big{)}^{\sigma_{3}}\) are \(n\)-independent and bounded in a neighbourhood of \(z=-1\), hence they also do not contribute in (5.11).
It remains to study
\[(2\pi n)^{\sigma_{3}/2}\Psi_{1}(n^{2}\xi(z))[-\phi(z)]^{-n\sigma_{3}}\]
which is equal to both \(E^{-1}(z)P(z)\big{(}\frac{F}{W}\big{)}^{-\sigma_{3}}\) and \(\widehat{E}^{-1}(z)\widehat{P}(z)\big{(}\frac{\widehat{F}}{\widetilde{W}}\big{)}^{-\sigma_{3}}\). It follows from the definition of \(\xi\) in (5.1) that \([-\phi(z)]^{-n\sigma_{3}}=\mathrm{e}^{-2\sqrt{n^{2}\xi(z)}\sigma_{3}}\) where the square root has a branch cut along \(z>-1\). Writing \(\zeta=n^{2}\xi(z)\) and assuming \(|z+1|\gtrsim O(n^{-2})\), we see that \(|\zeta|\gtrsim O(1)\) and using the estimates in (5.4) we conclude that
\[(2\pi n)^{\sigma_{3}/2}\Psi_{1}(n^{2}\xi(z))[-\phi(z)]^{-n\sigma _{3}}=(2\pi n)^{\sigma_{3}/2}\begin{pmatrix}O(|\zeta|^{-1/4})&O(|\zeta|^{-1/4} )\\ O(|\zeta|^{1/4})&O(|\zeta|^{1/4})\end{pmatrix} \tag{5.13}\] \[=\begin{pmatrix}O(|z+1|^{-1/4})&O(|z+1|^{-1/4})\\ O(|z+1|^{1/4})&O(|z+1|^{1/4})\end{pmatrix},\qquad|z+1|\gtrsim O(n^{-2}).\]
For \(|z+1|\lesssim O(n^{-2})\) we use the estimates (5.3) instead to conclude
\[(2\pi n)^{\sigma_{3}/2}\Psi_{1}(n^{2}\xi(z))[-\phi(z)]^{-n\sigma_{3}}=(2\pi n)^{\sigma_{3}/2}\begin{pmatrix}O(|\zeta|^{-1/2})&O(|\zeta|^{-1/2})\\ O(|\zeta|^{-1/2})&O(|\zeta|^{-1/2})\end{pmatrix} \tag{5.14}\] \[=\begin{pmatrix}O(n^{-1/2}|z+1|^{-1/2})&O(n^{-1/2}|z+1|^{-1/2})\\ O(n^{-3/2}|z+1|^{-1/2})&O(n^{-3/2}|z+1|^{-1/2})\end{pmatrix},\qquad|z+1|\lesssim O(n^{-2}).\]
Note that in this case \([-\phi(z)]^{-n\sigma_{3}}=O(1)\), hence this term does not contribute. One checks that indeed for \(|z+1|\sim n^{-2}\) the bounds in (5.13) and (5.14) are of the same order.
The proof of (5.12) works in a similar fashion. Again the holomorphic prefactor \(\widetilde{E}\) can be ignored. For \(|z+1|\gtrsim O(n^{-2})\) we get as before
\[(2\pi n)^{\sigma_{3}/2}\Psi_{0}(n^{2}\xi(z))[-\phi(z)]^{-n\sigma_{3}}=\] \[\begin{pmatrix}O(|z+1|^{-1/4})&O(|z+1|^{-1/4})\\ O(|z+1|^{1/4})&O(|z+1|^{1/4})\end{pmatrix},\qquad|z+1|\gtrsim O(n^{-2}).\]
However for \(|z+1|\lesssim O(n^{-2})\), we get different asymptotics after applying (5.2). We obtain
\[(2\pi n)^{\sigma_{3}/2}\Psi_{0}(n^{2}\xi(z))[-\phi(z)]^{-n\sigma_{3}}\] \[=\begin{pmatrix}O(n^{1/2}|\log(n^{2}(z+1))|)&O(n^{1/2}|\log(n^{2} (z+1))|)\\ O(n^{-1/2})&O(n^{-1/2})\end{pmatrix},\qquad|z+1|\lesssim O(n^{-2}). \tag{5.15}\]
Now observe that for \(|z+1|\lesssim O(n^{-2})\) we have trivially \(n^{2}|z+1|\lesssim O(1)\) and thus
\[|\log(n^{2}(z+1))|\lesssim|n^{2}(z+1)|^{-1/4}.\]
This estimate, together with \(n^{-1/2}\lesssim|z+1|^{-1/4}\) (in fact \(n^{1/2}\lesssim|z+1|^{-1/4}\) holds), implies that the matrix entries in (5.15) can be bounded by \(O(|z+1|^{-1/4})\) uniformly as \(n\to\infty\), showing (5.12) and finishing the proof.
The fact that all three parametrices display the same asymptotic behaviour for \(|z+1|\gtrsim O(n^{-2})\) is consistent with the matching condition (5.9) which is the same in all three cases. Note that for a fixed \(n\), \(\widetilde{P}(z)\) has only a logarithmic singularity near \(z=-1\), but the order in (5.12) is necessary to obtain a uniform bound for \(n\to\infty\).
**Corollary 5.3**.: _For \(z\) in a neighbourhood \(U_{+1}\) of \(+1\) the matrix-valued function \(\widetilde{Q}\) satisfies the asymptotics_
\[\widetilde{Q}(z)=O(|z-1|^{-1/4}) \tag{5.16}\]
_uniformly for \(n\to\infty\). Moreover, \(\widetilde{Q}\), and its boundary values \(\widetilde{Q}_{\pm}\) on \(\Sigma\), are bounded in \(\overline{\mathbb{C}}:=\mathbb{C}\cup\{\infty\}\) away from small neighbourhoods of \(\pm 1\), uniformly for \(n\to\infty\)._
Proof.: Analogously to the local parametrix for the Legendre problem at \(z=-1\), \(\widetilde{P}_{-1}(z):=\widetilde{P}(z)\), one can construct a local parametrix \(\widetilde{P}_{+1}(z)\) for the Legendre problem near the point \(z=+1\), with local jumps inside an open neighbourhood \(U_{+1}\) of \(+1\) as depicted in Fig. 6.
In fact we have \(\widetilde{P}_{+1}(z)=\sigma_{3}\widetilde{P}_{-1}(-z)\sigma_{3}\), see [3, Eq. B.10]. Hence, it follows from Lem. 5.2\((iv)\) that
\[\widetilde{P}_{+1}(z)=O(|z-1|^{-1/4}) \tag{5.17}\]
uniformly for \(n\to\infty\). After deforming the local contour such that it matches locally with \(\Sigma\) as depicted in Fig. 7, we can analytically continue \(\widetilde{P}_{+1}\) as necessary to obtain a _deformed_ local parametrix \(\widetilde{P}_{+1}^{\rm def}\), which would satisfy locally the same jump conditions as \(\widetilde{Q}\).
Because the jump matrices \(\widetilde{v}\) and their analytic continuations are uniformly bounded near \(z=+1\), \(\widetilde{P}_{+1}^{\rm def}\) would continue to satisfy the estimate (5.17)
\[\widetilde{P}_{+1}^{\rm def}(z)=O(|z-1|^{-1/4}) \tag{5.18}\]
uniformly for \(n\to\infty\). As the contour deformations are local, we also know that the matching condition (5.10) remains unchanged, at least on \(\partial U_{+1}\):
\[\widetilde{Q}(s)[\widetilde{P}^{\mathrm{def}}_{+1}(s)]^{-1}=I+O(n^{-1}),\qquad s \in\partial U_{+1}. \tag{5.19}\]
However, as \(\widetilde{Q}[\widetilde{P}^{\mathrm{def}}_{+1}]^{-1}\) has no jumps inside \(U_{+1}\), we can extend (5.19) to all of \(U_{+1}\) by the maximum modulus principle for holomorphic functions. Together with (5.18), the estimate (5.16) follows.
Regarding the second statement, it has been shown in [11, Sect. 7] that for a wide class of weight functions including \(\widetilde{w}\), the outer parametrix solution \(N\) is an approximation to the exact R-H solution, in our case \(\widetilde{Q}\), uniformly in \(z\) as long as we stay away from the points \(\pm 1\):
\[|\widetilde{Q}(z)-N(z)|=O(n^{-1}),\qquad z\,\,\,\text{staying away from}\,\,\pm 1.\]
As can be seen from (5.7), \(N\) is \(n\)-independent and bounded away from the points \(\pm 1\). This finishes the proof.
## 6. Modified R-H problems
We are now in a position to define modified versions of the three R-H problems found in Section 3, which will be referred to as the respective \({}^{\star}\)-analogs. The three \({}^{\star}\)R-H problems are introduced implicitly by defining their respective solutions. Here \(U\) is a small neighbourhood of \(-1\), as before.
\[Q^{\star}(z) =\begin{cases}Q(z),&z\in\mathbb{C}\setminus(\Sigma\cup U)\\ Q(z)[P(z)]^{-1},&z\in U\end{cases}\] \[\widehat{Q}^{\star}(z) =\begin{cases}\widehat{Q}(z),&z\in\mathbb{C}\setminus(\Sigma\cup U )\\ \widehat{Q}(z)[\widehat{P}(z)]^{-1},&z\in U\end{cases}\] \[\widetilde{Q}^{\star}(z) =\begin{cases}\widetilde{Q}(z),&z\in\mathbb{C}\setminus(\Sigma \cup U)\\ \widetilde{Q}(z)[\widetilde{P}(z)]^{-1},&z\in U.\end{cases}\]
Note that because of (5.8), the jumps on \(U\cap\Sigma\) cancel out, which means that all three solutions can be uniquely defined on all of \(U\). We will denote the new contour, which is the same for all three problems, by \(\Sigma^{\star}=(\Sigma\setminus U)\cup\partial U\) and assume that \(\partial U\) is oriented clockwise. The corresponding jump matrices are denoted by \(v^{\star}\), \(\widehat{v}^{\star}\) and \(\widetilde{v}^{\star}\), hence
\[Q^{\star}_{+}(s)=Q^{\star}_{-}(s)v^{\star}(s),\qquad s\in\Sigma^{\star},\]
and so on. Note also that the normalization at infinity remains unchanged and the jump matrices on \(\partial U\) are just the corresponding local parametrices:
\[v^{\star}(s)=P(s),\quad\widehat{v}^{\star}(s)=\widehat{P}(s),\quad\widetilde {v}^{\star}(s)=\widetilde{P}(s),\qquad s\in\partial U.\]
Now consider the singular integral operator
\[1-\mathcal{C}_{\widetilde{v}^{\star}}\colon L^{2}(\Sigma^{\star})\to L^{2}( \Sigma^{\star}),\ \ f\mapsto f-\mathcal{C}^{-}_{\Sigma^{\star}}(f(\widetilde{v}^{\star}-I)).\]
We claim that this operator is invertible and that the inverse is given by
\[(1-\mathcal{C}_{\widetilde{v}^{\star}})^{-1}\colon L^{2}(\Sigma^{\star})\to L ^{2}(\Sigma^{\star}),\ \ f\mapsto f+\mathcal{C}^{-}_{\Sigma^{\star}}(f(\widetilde{v}^{\star}-I)[ \widetilde{Q}^{\star}_{+}]^{-1})\widetilde{Q}^{\star}_{-}.\]
At the moment it is not even clear whether the above operator is well-defined as a map from \(L^{2}(\Sigma^{\star})\) to itself, as \(\widetilde{Q}^{\star}_{\pm}\) has a logarithmic singularity near \(+1\).
To prove the claim we proceed as follows: First we partition the jump contour \(\Sigma^{\star}=\underbrace{\partial U}_{\Sigma^{\ell}}\ \cup\ \underbrace{\Sigma\setminus U}_{\Sigma^{r}}\), as shown in Fig. 8.
Next we decompose the operator \(\mathcal{C}^{-}_{\Sigma^{\star}}(\ \cdot\ (\widetilde{v}^{\star}-I)[ \widetilde{Q}^{\star}_{+}]^{-1})\widetilde{Q}^{\star}_{-}\). Setting \(\widetilde{u}^{\star}=\widetilde{v}^{\star}-I\) we obtain:
\[\mathcal{C}^{-}_{\Sigma^{\star}}(\ \cdot\ \widetilde{u}^{\star}[\widetilde{Q}^{\star}_{+}]^{-1})\widetilde{Q}^{\star}_{-} =\mathcal{C}^{-}_{\Sigma^{\star}}(\ \cdot\ \chi_{\Sigma^{\ell}}\widetilde{u}^{\star}[\widetilde{Q}^{\star}_{+}]^{-1})\widetilde{Q}^{\star}_{-}+\mathcal{C}^{-}_{\Sigma^{\star}}(\ \cdot\ \chi_{\Sigma^{r}}\widetilde{u}^{\star}[\widetilde{Q}^{\star}_{+}]^{-1})\widetilde{Q}^{\star}_{-}\] \[=\mathcal{C}^{-}_{\Sigma^{\star}}(\ \cdot\ \chi_{\Sigma^{\ell}}\widetilde{u}^{\star}[\widetilde{Q}^{\star}_{+}]^{-1})\widetilde{Q}^{\star}_{-}+\mathcal{C}^{-}_{\Sigma^{\star}}(\ \cdot\ \chi_{\Sigma^{r}}\widetilde{u}^{\star}[\widetilde{Q}^{\star}_{+}]^{-1})\widetilde{Q}^{\star}_{-}\chi_{\Sigma^{r}}\] \[\qquad+\mathcal{C}^{-}_{\Sigma^{\star}}(\ \cdot\ \chi_{\Sigma^{r}}\widetilde{u}^{\star}[\widetilde{Q}^{\star}_{+}]^{-1})\widetilde{Q}^{\star}_{-}\chi_{\Sigma^{\ell}},\]
where \(\chi_{\Sigma^{j}}\) is the characteristic function of \(\Sigma^{j}\) for \(j=\ell,r\). By definition we have \(\widetilde{Q}^{\star}_{\pm}\chi_{\Sigma^{r}}=\widetilde{Q}_{\pm}\chi_{\Sigma ^{r}}\). Note that the mapping
\[f\mapsto f\chi_{\Sigma^{\ell}}\widetilde{u}^{\star}[\widetilde{Q}^{\star}_{+} ]^{-1}\]
is an operator uniformly bounded in \(n\) from \(L^{2}(\Sigma^{\star})\to L^{2}(\Sigma^{\star},|z-1|^{-1/4})\), as \([\widetilde{Q}^{\star}_{+}]^{-1}\) converges uniformly on \(\Sigma^{\ell}=\partial U\) to the outer parametrix \(N\), see (5.9) and (5.10). As \(|z-1|^{-1/4}\in A_{2}(\Sigma^{\star})\) (see Theorem 4.1), we have that the mapping
\[f\mapsto\mathcal{C}^{-}_{\Sigma^{\star}}(f\chi_{\Sigma^{\ell}}\widetilde{u}^{ \star}[\widetilde{Q}^{\star}_{+}]^{-1})\]
defines a uniformly bounded operator from \(L^{2}(\Sigma^{\star})\to L^{2}(\Sigma^{\star},|z-1|^{-1/4})\). Finally, using \(\widetilde{Q}^{\star}=\widetilde{Q}\) in \(\mathbb{C}\setminus U\) and the estimate (5.16), we conclude that
\[f\mapsto\mathcal{C}^{-}_{\Sigma^{\star}}(f\chi_{\Sigma^{\ell}}\widetilde{u}^{ \star}[\widetilde{Q}^{\star}_{+}]^{-1})\widetilde{Q}^{\star}_{-}\]
defines a uniformly bounded operator from \(L^{2}(\Sigma^{\star})\to L^{2}(\Sigma^{\star})\).
The uniform boundedness of the mapping
\[f\mapsto\mathcal{C}^{-}_{\Sigma^{\star}}(f\chi_{\Sigma^{r}}\widetilde{u}^{ \star}[\widetilde{Q}^{\star}_{+}]^{-1})\widetilde{Q}^{\star}_{-}\chi_{\Sigma ^{r}}\]
as an operator from \(L^{2}(\Sigma^{\star})\to L^{2}(\Sigma^{\star})\) follows directly from Prop. 4.6 together with Theorem 4.2.
Finally, the mapping
\[f\mapsto f\chi_{\Sigma^{r}}\widetilde{u}^{\star}[\widetilde{Q}^{\star}_{+}]^{-1}\]
defines a uniformly bounded operator from \(L^{2}(\Sigma^{\star})\) to \(L^{2}(\Sigma^{\star},|z-1|^{1/4})\). As \(|z-1|^{1/4}\in A_{2}(\Sigma^{\star})\), the mapping
\[f\mapsto\mathcal{C}^{-}_{\Sigma^{\star}}(f\chi_{\Sigma^{r}}\widetilde{u}^{ \star}[\widetilde{Q}^{\star}_{+}]^{-1})\]
is a uniformly bounded operator from \(L^{2}(\Sigma^{\star})\to L^{2}(\Sigma^{\star},|z-1|^{1/4})\) as well. Moreover, the multiplication with \(\widetilde{Q}^{\star}_{-}\chi_{\Sigma^{\ell}}\) defines a uniformly bounded operator from \(L^{2}(\Sigma^{\star},|z-1|^{1/4})\to L^{2}(\Sigma^{\star})\), implying that
\[f\mapsto\mathcal{C}_{\Sigma^{\star}}^{-}(f\chi_{\Sigma^{r}}\widetilde{u}^{\star}[\widetilde{Q}^{\star}_{+}]^{-1})\widetilde{Q}^{\star}_{-}\chi_{\Sigma^{\ell}}\]
is a uniformly bounded operator from \(L^{2}(\Sigma^{\star})\to L^{2}(\Sigma^{\star})\). We obtain:
**Theorem 6.1**.: _The inverse of the operator \(1-\mathcal{C}_{\widetilde{v}^{\star}}\) on \(L^{2}(\Sigma^{\star})\) exists and is given by_
\[(1-\mathcal{C}_{\widetilde{v}^{\star}})^{-1}\colon L^{2}(\Sigma^{\star})\to L ^{2}(\Sigma^{\star}),\ \ f\mapsto f+\mathcal{C}_{\Sigma^{\star}}^{-}(f(\widetilde{v}^{\star}-I)[ \widetilde{Q}^{\star}_{+}]^{-1})\widetilde{Q}^{\star}_{-}.\]
_Moreover, the operator norm is uniformly bounded as \(n\to\infty\)._
Proof.: That the mapping is indeed uniformly bounded follows from the preceding argument. The fact that it coincides with the inverse of \(1-\mathcal{C}_{\widetilde{v}^{\star}}\) follows from a computation analogous to the one given in the proof of Prop. 4.6.
### Uniform invertibility of the \({}^{\star}\)-resolvents
The uniform boundedness stated in Theorem 6.1 can now be extended to the uniform invertibility of the operators \(1-\mathcal{C}_{v^{\star}}\), \(1-\mathcal{C}_{\widehat{v}^{\star}}\). Note that the jump matrices satisfy
\[\|v^{\star}-\widetilde{v}^{\star}\|_{L^{\infty}(\Sigma^{\star})},\ \|\widehat{v}^{\star}-\widetilde{v}^{\star}\|_{L^{\infty}(\Sigma^{\star})}\to 0 \tag{6.1}\]
for \(n\to\infty\). In fact, we have
\[v^{\star}-\widetilde{v}^{\star}=\begin{cases}\begin{pmatrix}0&0\\ \big{(}\frac{F^{2}}{w}(s)-1\big{)}\phi^{-2n}(s)&0\end{pmatrix},&s\in(\Sigma_{1}\cup\Sigma_{2})\cap\Sigma^{r},\\ \\ \begin{pmatrix}0&0\\ \big{(}\frac{F^{2}}{w_{+}}(s)+\frac{F^{2}}{w_{-}}(s)-2\big{)}\phi^{-2n}(s)&0\end{pmatrix},&s\in(1,1+\delta),\\ \\ 0,&s\in(-1,1)\cap\Sigma^{r},\\ \\ P(s)-\widetilde{P}(s),&s\in\Sigma^{\ell}.\end{cases} \tag{6.2}\]
and the corresponding claim in (6.1) for the difference \(v^{\star}-\widetilde{v}^{\star}\) follows from (2.3), Cor. 2.4 and (5.9). An analogous formula of the form (6.2) can be written down for \(\widehat{v}^{\star}-\widetilde{v}^{\star}\):
\[\widehat{v}^{\star}-\widetilde{v}^{\star}=\begin{cases}\begin{pmatrix}0&0\\ \big{(}\frac{\widehat{F}^{2}}{\widehat{w}}(s)-1\big{)}\phi^{-2n}(s)&0\end{pmatrix},&s\in(\Sigma_{1}\cup\Sigma_{2})\cap\Sigma^{r},\\ \\ \begin{pmatrix}0&0\\ 2\big{(}\frac{\widehat{F}^{2}}{\widehat{w}}(s)-1\big{)}\phi^{-2n}(s)&0\end{pmatrix},&s\in(1,1+\delta),\\ \\ 0,&s\in(-1,1)\cap\Sigma^{r},\\ \\ \widehat{P}(s)-\widetilde{P}(s),&s\in\Sigma^{\ell}.\end{cases}\]
Here, (6.1) follows as before after using estimate (2.11) instead of (2.6).
Thus it follows that
\[\|(1-\mathcal{C}_{v^{\star}})-(1-\mathcal{C}_{\widetilde{v}^{\star}})\|_{L^{2}( \Sigma^{\star})\to L^{2}(\Sigma^{\star})}\to 0,\]
\[\|(1-\mathcal{C}_{\widehat{v}^{\star}})-(1-\mathcal{C}_{\widetilde{v}^{\star}})\|_{L^{2}(\Sigma^{\star})\to L^{2}(\Sigma^{\star})}\to 0,\]
as \(n\to\infty\). As the operator \(1-\mathcal{C}_{\widetilde{v}^{\star}}\) is uniformly invertible, a standard argument (see e.g. [3, Theorem 4.7]) shows that the operators \(1-\mathcal{C}_{v^{\star}}\), \(1-\mathcal{C}_{\widehat{v}^{\star}}\) are also uniformly invertible for \(n\) large enough. We summarize:
**Theorem 6.2**.: _The operators \(1-\mathcal{C}_{v^{\star}}\) and \(1-\mathcal{C}_{\widehat{v}^{\star}}\) are invertible for \(n\) large enough and the operator norms of their inverses are uniformly bounded as \(n\to\infty\)._
Note that Theorem 6.2 is the analog of Theorem 4.2, initially proved in [3, Theorem 4.5], but for the logarithmic weight function and on a contour avoiding the problematic point \(z=-1\). Interestingly, Theorem 4.2 itself played a crucial role in the proof of Theorem 6.2.
## 7. Asymptotic Analysis
The following section is largely based on [3, Sect. 5] and culminates in the proof of Theorem 1.1.
### Some norm estimates
Let us define \(\mu_{n}^{\star}=Q_{-}^{\star(n)}\) and \(\widehat{\mu}_{n}^{\star}=\widehat{Q}_{-}^{\star(n)}\), where in the following we will make the \(n\)-dependence explicit. In particular, for notational convenience we will denote the jump matrices with a subscript \(n\) and the Cauchy operators with a superscript \((n)\), i.e. \(v_{n}^{\star}\), \(\mathcal{C}_{v^{\star}}^{(n)}\) and so on. We will now prove an analog of [3, Prop. 5.1] in the \({}^{\star}\)-case.
**Proposition 7.1**.: _The following estimates hold for \(n\to\infty\)._
1. \(\|\widehat{\mu}_{n}^{\star}(v_{n}^{\star}-\widehat{v}_{n}^{\star})\|_{L^{2}( \Sigma^{\star})}=O\big{(}\frac{1}{n^{1/2}\log^{2}n}\big{)}\)_,_
2. \(\|\mu_{n}^{\star}-\widehat{\mu}_{n}^{\star}\|_{L^{2}(\Sigma^{\star})}=O\big{(} \frac{1}{n^{1/2}\log^{2}n}\big{)}\)_,_
3. \(\|\widehat{\mu}_{n}^{\star}(v_{n}^{\star}-\widehat{v}_{n}^{\star})-\widehat{ \mu}_{n+1}^{\star}(v_{n+1}^{\star}-\widehat{v}_{n+1}^{\star})\|_{L^{2}(\Sigma^ {\star})}=O\big{(}\frac{1}{n^{3/2}\log^{2}n}\big{)}\)_,_
4. \(\|(\mu_{n}^{\star}-\widehat{\mu}_{n}^{\star})-(\mu_{n+1}^{\star}-\widehat{ \mu}_{n+1}^{\star})\|_{L^{2}(\Sigma^{\star})}=O\big{(}\frac{1}{n^{3/2}\log^{2} n}\big{)}\)_._
Proof.: The proof is for the most part taken from [3, Prop. B.2], with the only difference being the contribution from \(\Sigma^{\ell}=\partial U\) instead of \(\Sigma\cap U\).
For \((i)\) let us write
\[\|\widehat{\mu}_{n}^{\star}(v_{n}^{\star}-\widehat{v}_{n}^{\star})\|_{L^{2}( \Sigma^{\star})}^{2}=\|\widehat{\mu}_{n}^{\star}(v_{n}^{\star}-\widehat{v}_{n} ^{\star})\|_{L^{2}(\Sigma^{\ell})}^{2}+\|\widehat{\mu}_{n}^{\star}(v_{n}^{ \star}-\widehat{v}_{n}^{\star})\|_{L^{2}(\Sigma^{r})}^{2}. \tag{7.1}\]
As \(\widehat{\mu}_{n}|_{s\in\Sigma^{r}}=\widehat{\mu}_{n}^{\star}|_{s\in\Sigma^{r }}\), \(v_{n}|_{s\in\Sigma^{r}}=v_{n}^{\star}|_{s\in\Sigma^{r}}\) and \(\widehat{v}_{n}|_{s\in\Sigma^{r}}=\widehat{v}_{n}^{\star}|_{s\in\Sigma^{r}}\), one can conclude from [3, Eq. 5.1] that
\[\|\widehat{\mu}_{n}^{\star}(v_{n}^{\star}-\widehat{v}_{n}^{\star})\|_{L^{2}( \Sigma^{r})}=O\Big{(}\frac{1}{n^{1/2}\log^{2}n}\Big{)}. \tag{7.2}\]
Hence it remains to consider the term \(\|\widehat{\mu}_{n}^{\star}(v_{n}^{\star}-\widehat{v}_{n}^{\star})\|_{L^{2}( \Sigma^{\ell})}\). Recall that for \(s\in\Sigma^{\ell}\) we have \(v_{n}^{\star}(s)=P^{(n)}(s)\) and \(\widehat{v}_{n}^{\star}(s)=\widehat{P}^{(n)}(s)\) and so it follows from Lem. 5.2\((ii)\) that
\[\|v_{n}^{\star}-\widehat{v}_{n}^{\star}\|_{L^{\infty}(\Sigma^{\ell})}=O(n^{-1}). \tag{7.3}\]
Moreover, from Lem. 5.2\((iii)\) it also follows that \(\widehat{\mu}_{n}^{\star}(s)=\widehat{Q}^{(n)}(s)[\widehat{P}^{(n)}(s)]^{-1}\) is uniformly bounded for \(s\in\Sigma^{\ell}\). Hence we conclude
\[\|\widehat{\mu}_{n}^{\star}(v_{n}^{\star}-\widehat{v}_{n}^{\star})\|_{L^{2}( \Sigma^{\ell})}=O(n^{-1})\]
which together with (7.2) proves \((i)\).
Point \((ii)\) follows from \((i)\) by considering
\[\mu_{n}^{\star}-\widehat{\mu}_{n}^{\star} =(1-\mathcal{C}_{v^{\star}}^{(n)})^{-1}I-(1-\mathcal{C}_{\widehat {v}^{\star}}^{(n)})^{-1}I\] \[=(1-\mathcal{C}_{v^{\star}}^{(n)})^{-1}(\mathcal{C}_{v^{\star}}^ {(n)}-\mathcal{C}_{\widehat{v}^{\star}}^{(n)})(1-\mathcal{C}_{\widehat{v}^{ \star}}^{(n)})^{-1}I\] \[=(1-\mathcal{C}_{v^{\star}}^{(n)})^{-1}(\mathcal{C}_{v^{\star}}^ {(n)}-\mathcal{C}_{\widehat{v}^{\star}}^{(n)})\widehat{\mu}_{n}^{\star}\] \[=(1-\mathcal{C}_{v^{\star}}^{(n)})^{-1}\mathcal{C}_{\Sigma^{\star }}^{-}(\widehat{\mu}_{n}^{\star}(v_{n}^{\star}-\widehat{v}_{n}^{\star})).\]
As \((1-\mathcal{C}_{v^{\star}}^{(n)})^{-1}\), \(\mathcal{C}_{\Sigma^{\star}}^{-}\) are bounded operators (uniformly in \(n\)), it follows that
\[\|\mu_{n}^{\star}-\widehat{\mu}_{n}^{\star}\|_{L^{2}(\Sigma^{\star})}\lesssim \|\widehat{\mu}_{n}^{\star}(v_{n}^{\star}-\widehat{v}_{n}^{\star})\|_{L^{2}( \Sigma^{\star})}=O\Big{(}\frac{1}{n^{1/2}\log^{2}n}\Big{)}\]
showing \((ii)\).
For \((iii)\) we will again decompose the norm as in (7.1). As before, following the arguments found in [3, Prop. B.2] we conclude that
\[\|\widehat{\mu}_{n}^{\star}(v_{n}^{\star}-\widehat{v}_{n}^{\star})-\widehat{ \mu}_{n+1}^{\star}(v_{n+1}^{\star}-\widehat{v}_{n+1}^{\star})\|_{L^{2}(\Sigma^ {r})}=O\Big{(}\frac{1}{n^{3/2}\log^{2}n}\Big{)}.\]
For the remaining term we will use the fact that the local parametrices \(P^{(n)}(s)\), \(\widehat{P}^{(n)}(s)\) have an infinite series expansion in powers of \(n^{-1}\) which is uniform on \(\Sigma^{\ell}\), see [11, Eq. 8.2]:
\[v_{n}^{\star}(s) =P^{(n)}(s)\sim\Big{(}I+\sum_{k=1}^{\infty}\frac{\Delta_{k}(s)}{ n^{k}}\Big{)}N(s),\] \[\widehat{v}_{n}^{\star}(s) =\widehat{P}^{(n)}(s)\sim\Big{(}I+\sum_{k=1}^{\infty}\frac{ \widehat{\Delta}_{k}(s)}{n^{k}}\Big{)}N(s),\]
where \(\Delta_{k}\) and \(\widehat{\Delta}_{k}\) are \(n\) independent and can be explicitly computed. In particular
\[v_{n}^{\star}(s)-v_{n+1}^{\star}(s)=O(n^{-2}),\qquad\widehat{v}_{n}^{\star}(s)- \widehat{v}_{n+1}^{\star}(s)=O(n^{-2}) \tag{7.4}\]
uniformly for \(s\in\Sigma^{\ell}\). Hence using the fact \(\widehat{\mu}_{n}^{\star}(s)=I+O(n^{-1})\) uniformly for \(s\in\Sigma^{\ell}\) (see (5.10)) we can write
\[\|\widehat{\mu}_{n}^{\star}(v_{n}^{\star}-\widehat{v}_{n}^{\star })-\widehat{\mu}_{n+1}^{\star}(v_{n+1}^{\star}-\widehat{v}_{n+1}^{\star})\|_{ L^{2}(\Sigma^{\ell})}\] \[\lesssim\|(\widehat{\mu}_{n}^{\star}-\widehat{\mu}_{n+1}^{\star })(v_{n}^{\star}-\widehat{v}_{n}^{\star})\|_{L^{2}(\Sigma^{\ell})}+O(n^{-2}).\]
Additionally, it follows that \(\widehat{\mu}_{n}^{\star}(s)-\widehat{\mu}_{n+1}^{\star}(s)=O(n^{-1})\) (in fact even \(O(n^{-2})\), see [11, Eq. 8.7]) uniformly for \(s\in\Sigma^{\ell}\). Together with (7.3) we can conclude that
\[\|(\widehat{\mu}_{n}^{\star}-\widehat{\mu}_{n+1}^{\star})(v_{n}^{\star}- \widehat{v}_{n}^{\star})\|_{L^{2}(\Sigma^{\ell})}=O(n^{-2})\]
which proves \((iii)\).
In order to prove \((iv)\) we first write
\[(\mu_{n}^{\star}-\widehat{\mu}_{n}^{\star})-(\mu_{n+1}^{\star}- \widehat{\mu}_{n+1}^{\star}) =((1-\mathcal{C}_{v^{\star}}^{(n)})^{-1}I-(1-\mathcal{C}_{\widehat {v}^{\star}}^{(n)})^{-1}I)\] \[\qquad-((1-\mathcal{C}_{v^{\star}}^{(n+1)})^{-1}I-(1-\mathcal{C}_ {\widehat{v}^{\star}}^{(n+1)})^{-1}I)\] \[=(1-\mathcal{C}_{v^{\star}}^{(n)})^{-1}\mathcal{C}_{\Sigma^{ \star}}^{-}(\widehat{\mu}_{n}^{\star}(v_{n}^{\star}-\widehat{v}_{n}^{\star}))\] \[\qquad-(1-\mathcal{C}_{v^{\star}}^{(n+1)})^{-1}\mathcal{C}_{ \Sigma^{\star}}^{-}(\widehat{\mu}_{n+1}^{\star}(v_{n+1}^{\star}-\widehat{v}_{ n+1}^{\star}))\] \[=((1-\mathcal{C}_{v^{\star}}^{(n)})^{-1}-(1-\mathcal{C}_{v^{\star }}^{(n+1)})^{-1})\mathcal{C}_{\Sigma^{\star}}^{-}(\widehat{\mu}_{n}^{\star}(v _{n}^{\star}-\widehat{v}_{n}^{\star}))\] \[\quad+(1-\mathcal{C}_{v^{\star}}^{(n+1)})^{-1}\mathcal{C}_{ \Sigma^{\star}}^{-}(\widehat{\mu}_{n}^{\star}(v_{n}^{\star}-\widehat{v}_{n}^{ \star})-\widehat{\mu}_{n+1}^{\star}(v_{n+1}^{\star}-\widehat{v}_{n+1}^{\star})).\]
From the uniform boundedness of \((1-\mathcal{C}_{v^{\star}}^{(n+1)})^{-1}\), \(\mathcal{C}_{\Sigma^{\star}}^{-}\) and point \((iii)\) it follows that
\[\|(1-\mathcal{C}_{v^{\star}}^{(n+1)})^{-1}\mathcal{C}_{\Sigma^{ \star}}^{-}(\widehat{\mu}_{n}^{\star}(v_{n}^{\star}-\widehat{v}_{n}^{\star})- \widehat{\mu}_{n+1}^{\star}(v_{n+1}^{\star}-\widehat{v}_{n+1}^{\star}))\|_{L^ {2}(\Sigma^{\star})}=O\Big{(}\frac{1}{n^{3/2}\log^{2}n}\Big{)}.\]
For the remaining term we have
\[((1-\mathcal{C}_{v^{\star}}^{(n)})^{-1}-(1-\mathcal{C}_{v^{\star }}^{(n+1)})^{-1})\mathcal{C}_{\Sigma^{\star}}^{-}(\widehat{\mu}_{n}^{\star}(v_ {n}^{\star}-\widehat{v}_{n}^{\star}))\] \[=((1-\mathcal{C}_{v^{\star}}^{(n)})^{-1}(\mathcal{C}_{v^{\star}}^ {(n)}-\mathcal{C}_{v^{\star}}^{(n+1)})(1-\mathcal{C}_{v^{\star}}^{(n+1)})^{-1}) \mathcal{C}_{\Sigma^{\star}}^{-}(\widehat{\mu}_{n}^{\star}(v_{n}^{\star}- \widehat{v}_{n}^{\star})). \tag{7.5}\]
Now observe that
\[\|\mathcal{C}_{v^{\star}}^{(n)}-\mathcal{C}_{v^{\star}}^{(n+1)}\|_{L^{2}( \Sigma^{\star})\to L^{2}(\Sigma^{\star})}\lesssim\|v_{n}^{\star}-v_{n+1}^{ \star}\|_{L^{\infty}(\Sigma^{\star})}. \tag{7.6}\]
It has been shown in [3, p. 54] that \(\|v_{n}^{\star}-v_{n+1}^{\star}\|_{L^{\infty}(\Sigma^{r})}=O(n^{-1})\), hence with (7.4) it follows that
\[\|v_{n}^{\star}-v_{n+1}^{\star}\|_{L^{\infty}(\Sigma^{\star})}=O(n^{-1}),\]
which implies using the bound (7.6)
\[\|\mathcal{C}_{v^{\star}}^{(n)}-\mathcal{C}_{v^{\star}}^{(n+1)}\|_{L^{2}( \Sigma^{\star})\to L^{2}(\Sigma^{\star})}=O(n^{-1}).\]
Plugging this estimate together with \((i)\) into (7.5) we conclude that
\[\|((1-\mathcal{C}_{v^{\star}}^{(n)})^{-1}-(1-\mathcal{C}_{v^{\star}}^{(n+1)})^ {-1})\mathcal{C}_{\Sigma^{\star}}^{-}(\widehat{\mu}_{n}^{\star}(v_{n}^{\star}- \widehat{v}_{n}^{\star}))\|_{L^{2}(\Sigma^{\star})}=O\Big{(}\frac{1}{n^{3/2} \log^{2}n}\Big{)},\]
which implies \((iv)\) and finishes the proof.
We immediately get from Prop. 7.1:
**Corollary 7.2**.: _The following estimates hold for \(n\to\infty\)._
1. \(\|\widehat{\mu}_{n}(v_{n}-\widehat{v}_{n})\|_{L^{2}(\Sigma^{r})}=O\big{(} \frac{1}{n^{1/2}\log^{2}n}\big{)}\)_,_
2. \(\|\mu_{n}-\widehat{\mu}_{n}\|_{L^{2}(\Sigma^{r})}=O\big{(}\frac{1}{n^{1/2} \log^{2}n}\big{)}\)_,_
3. \(\|\widehat{\mu}_{n}(v_{n}-\widehat{v}_{n})-\widehat{\mu}_{n+1}(v_{n+1}- \widehat{v}_{n+1})\|_{L^{2}(\Sigma^{r})}=O\big{(}\frac{1}{n^{3/2}\log^{2}n} \big{)}\)_,_
4. \(\|(\mu_{n}-\widehat{\mu}_{n})-(\mu_{n+1}-\widehat{\mu}_{n+1})\|_{L^{2}(\Sigma^{r})}=O\big{(}\frac{1}{n^{3/2}\log^{2}n}\big{)}\)_._
Note that we restricted the path of integration to \(\Sigma^{r}\). The contributions coming from \(\Sigma\cap U\) turn out to be of smaller order as will be shown in Lem. 7.6.
To derive the asymptotics of the recurrence coefficients we need an asymptotic formula for (3.9), which in our current notation reads:
\[Q_{1}^{(n)}-\widehat{Q}_{1}^{(n)}=-\frac{1}{2\pi\mathrm{i}}\int_{\Sigma}\mu_{n }(v_{n}-\widehat{v}_{n})\widehat{\mu}_{n}^{-1}\,ds. \tag{7.7}\]
The key step in proving Theorem 1.1 is the following proposition.
**Proposition 7.3**.: _For \(n\to\infty\) the following estimates hold:_
\[Q_{1}^{(n)}-\widehat{Q}_{1}^{(n)}=\frac{3}{16n\log^{2}n}\begin{pmatrix}-1& \mathrm{i}\\ \mathrm{i}&1\end{pmatrix}+O\Big{(}\frac{1}{n\log^{3}n}\Big{)}\]
_and_
\[Q_{1}^{(n)}-\widehat{Q}_{1}^{(n)}-(Q_{1}^{(n+1)}-\widehat{Q}_{1}^{(n+1)})= \frac{3}{16n^{2}\log^{2}n}\begin{pmatrix}-1&\mathrm{i}\\ \mathrm{i}&1\end{pmatrix}+O\Big{(}\frac{1}{n^{2}\log^{3}n}\Big{)}. \tag{7.8}\]
We will make use of two important results stated in [3, Sect. 5.2], but restricted to the contour \(\Sigma^{r}\):
**Proposition 7.4**.: _The following estimates hold:_
\[\int_{\Sigma^{r}}\mu_{n}(v_{n}-\widehat{v}_{n})\widehat{\mu}_{n}^{-1}\,ds=\int _{\Sigma^{r}}\widehat{\mu}_{n}(v_{n}-\widehat{v}_{n})\widehat{\mu}_{n}^{-1}\, ds+O\Big{(}\frac{1}{n\log^{4}n}\Big{)}, \tag{7.9}\]
_and_
\[\int_{\Sigma^{r}}\mu_{n}(v_{n}-\widehat{v}_{n})\widehat{\mu}_{n}^ {-1}\,ds-\int_{\Sigma^{r}}\mu_{n+1}(v_{n+1}-\widehat{v}_{n+1})\widehat{\mu}_{ n+1}^{-1}\,ds\\ =\int_{\Sigma^{r}}\widehat{\mu}_{n}(v_{n}-\widehat{v}_{n})\widehat{ \mu}_{n}^{-1}\,ds-\int_{\Sigma^{r}}\widehat{\mu}_{n+1}(v_{n+1}-\widehat{v}_{n+ 1})\widehat{\mu}_{n+1}^{-1}\,ds+O\Big{(}\frac{1}{n^{2}\log^{4}n}\Big{)}. \tag{7.10}\]
Proof.: For (7.9) observe that
\[\int_{\Sigma^{r}}\mu_{n}(v_{n}-\widehat{v}_{n})\widehat{\mu}_{n}^ {-1}\,ds\\ =\int_{\Sigma^{r}}\widehat{\mu}_{n}(v_{n}-\widehat{v}_{n})\widehat {\mu}_{n}^{-1}\,ds+\int_{\Sigma^{r}}(\mu_{n}-\widehat{\mu}_{n})(v_{n}-\widehat {v}_{n})\widehat{\mu}_{n}^{-1}\,ds. \tag{7.11}\]
As \(\det\mu=\det\widehat{\mu}\equiv 1\) and \(v-\widehat{v}\) has a nonzero entry only in the \(21\)-entry, it follows through explicit calculation that \(\|(v_{n}-\widehat{v}_{n})\widehat{\mu}_{n}^{-1}\|_{L^{2}(\Sigma^{r})}=\| \widehat{\mu}_{n}(v_{n}-\widehat{v}_{n})\|_{L^{2}(\Sigma^{r})}\). Now using estimates \((i)\) and \((ii)\) from Cor. 7.2, it follows that the error term in (7.11) can be bounded by
\[\big{|}\int_{\Sigma^{r}}(\mu_{n}-\widehat{\mu}_{n})(v_{n}-\widehat {v}_{n})\widehat{\mu}_{n}^{-1}\,ds\big{|}\\ \lesssim\|\mu_{n}-\widehat{\mu}_{n}\|_{L^{2}(\Sigma^{r})}\|(v_{n} -\widehat{v}_{n})\widehat{\mu}_{n}^{-1}\|_{L^{2}(\Sigma^{r})}=O\Big{(}\frac{1 }{n\log^{4}n}\Big{)}.\]
In a similar fashion for (7.10) we can write
\[\begin{split}&\int_{\Sigma^{r}}\mu_{n}(v_{n}-\widehat{v}_{n}) \widehat{\mu}_{n}^{-1}\,ds-\int_{\Sigma^{r}}\mu_{n+1}(v_{n+1}-\widehat{v}_{n+1} )\widehat{\mu}_{n+1}^{-1}\,ds\\ &=\int_{\Sigma^{r}}\widehat{\mu}_{n}(v_{n}-\widehat{v}_{n}) \widehat{\mu}_{n}^{-1}\,ds-\int_{\Sigma^{r}}\widehat{\mu}_{n+1}(v_{n+1}- \widehat{v}_{n+1})\widehat{\mu}_{n+1}^{-1}\,ds\\ &+\int_{\Sigma^{r}}(\mu_{n}-\widehat{\mu}_{n})\big{[}(v_{n}- \widehat{v}_{n})\widehat{\mu}_{n}^{-1}-(v_{n+1}-\widehat{v}_{n+1})\widehat{\mu }_{n+1}^{-1}\big{]}\,ds\\ &+\int_{\Sigma^{r}}\big{[}(\mu_{n}-\widehat{\mu}_{n})-(\mu_{n+1} -\widehat{\mu}_{n+1})\big{]}(v_{n+1}-\widehat{v}_{n+1})\widehat{\mu}_{n+1}^{-1 }\,ds.\end{split} \tag{7.12}\]
We can now estimate the two error terms in (7.12) using Cor. 7.2,
\[\begin{split}&\big{|}\int_{\Sigma^{r}}(\mu_{n}-\widehat{\mu}_{n}) \big{[}(v_{n}-\widehat{v}_{n})\widehat{\mu}_{n}^{-1}-(v_{n+1}-\widehat{v}_{n+1 })\widehat{\mu}_{n+1}^{-1}\big{]}\,ds\big{|}\\ &\lesssim\|\mu_{n}-\widehat{\mu}_{n}\|_{L^{2}(\Sigma^{r})}\| \widehat{\mu}_{n}(v_{n}-\widehat{v}_{n})-\widehat{\mu}_{n+1}(v_{n+1}-\widehat{ v}_{n+1})\|_{L^{2}(\Sigma^{r})}=O\Big{(}\frac{1}{n^{2}\log^{4}n}\Big{)},\end{split}\]
and
\[\begin{split}&\big{|}\int_{\Sigma^{r}}\big{[}(\mu_{n}-\widehat{ \mu}_{n})-(\mu_{n+1}-\widehat{\mu}_{n+1})\big{]}(v_{n+1}-\widehat{v}_{n+1}) \widehat{\mu}_{n+1}^{-1}\,ds\big{|}\\ &\lesssim\|(\mu_{n}-\widehat{\mu}_{n})-(\mu_{n+1}-\widehat{\mu}_ {n+1})\|_{L^{2}(\Sigma^{r})}\|\widehat{\mu}_{n+1}(v_{n+1}-\widehat{v}_{n+1})\| _{L^{2}(\Sigma^{r})}=O\Big{(}\frac{1}{n^{2}\log^{4}n}\Big{)},\end{split}\]
proving (7.10).
The next result, stated in [3, Sect. 5], contains the leading order term in the integrals found in Prop. 7.4:
**Proposition 7.5**.: _The following estimates hold:_
\[\frac{1}{2\pi\mathrm{i}}\int_{\Sigma^{r}}\widehat{\mu}_{n}(v_{n}-\widehat{v}_{ n})\widehat{\mu}_{n}^{-1}\,ds=\frac{3}{16n\log^{2}n}\begin{pmatrix}1&- \mathrm{i}\\ -\mathrm{i}&-1\end{pmatrix}+O\Big{(}\frac{1}{n\log^{3}n}\Big{)}\]
_and_
\[\frac{1}{2\pi\mathrm{i}}\int_{\Sigma^{r}}\widehat{\mu}_{n}(v_{n}- \widehat{v}_{n})\widehat{\mu}_{n}^{-1}\,ds-\frac{1}{2\pi\mathrm{i}}\int_{ \Sigma^{r}}\widehat{\mu}_{n+1}(v_{n+1}-\widehat{v}_{n+1})\widehat{\mu}_{n+1}^ {-1}\,ds\] \[\qquad\qquad=\frac{3}{16n^{2}\log^{2}n}\begin{pmatrix}1&-\mathrm{ i}\\ -\mathrm{i}&-1\end{pmatrix}+O\Big{(}\frac{1}{n^{2}\log^{3}n}\Big{)}.\]
Proof.: The proof is essentially given in [3, Prop. C.3, C.4]. The main difference is the occurrence of \(\widehat{\mu}_{n}\) instead of \(\widetilde{\mu}_{n}=\widetilde{Q}_{-}^{(n)}\). We provide the full proof of both formulas in Prop. A.2.
Note that in the formula (7.7) the integral is over \(\Sigma\), while in Prop. 7.4 and 7.5 the integrals are over \(\Sigma^{r}\). It remains to show that the remaining integral over \(\Sigma^{U}=\Sigma\cap U\), which is localized around the point \(-1\), is negligible. This is the content of the next lemma:
**Lemma 7.6**.: _For \(n\to\infty\) we have_
\[\int_{\Sigma^{U}}\mu_{n}(v_{n}-\widehat{v}_{n})\widehat{\mu}_{n}^{-1}\,ds=O \Big{(}\frac{1}{n^{3}}\Big{)}. \tag{7.13}\]
Proof.: First, we need to show that \(Q^{\star}(z)=Q(z)[P(z)]^{-1}\) for \(z\in U\) is uniformly bounded as \(n\to\infty\). Note that this statement is more intricate than the corresponding statements for \(\widehat{Q}(z)[\widehat{P}(z)]^{-1}\) and \(\widetilde{Q}(z)[\widetilde{P}(z)]^{-1}\), as the logarithmic weight function is not covered in [11] and hence we do not have Eq. (5.10) for \(Q(z)[P(z)]^{-1}\) at our disposal. Recall that for \(s\in\Sigma^{\ell}=\partial U\) we have
\[\mu^{\star}(s)=Q^{\star}_{-}(s)=Q(s)[P(s)]^{-1}. \tag{7.14}\]
From (7.14) we see that on \(\partial U\), \(\mu^{\star}\) is in fact the restriction of the analytic function \(Q^{\star}(z)=Q(z)[P(z)]^{-1}\), \(z\in U\). Moreover, \(\|\mu^{\star}\|_{L^{2}(\partial U)}=O(1)\) for \(n\to\infty\), as \(\mu^{\star}=(1-\mathcal{C}_{v^{\star}})^{-1}I\) and \((1-\mathcal{C}_{v^{\star}})^{-1}\) is uniformly bounded on \(L^{2}(\Sigma^{\star})\). Thus using Cauchy's integral formula we get for \(z\in U\):
\[|Q^{\star}(z)|=\Big{|}\frac{1}{2\pi\mathrm{i}}\int_{\partial U}\frac{\mu^{ \star}(s)ds}{s-z}\Big{|}\lesssim O\Big{(}\frac{1}{\mathrm{dist}(z,\partial U)} \Big{)}.\]
We have some freedom to choose \(U\), so if necessary we can shrink it and conclude
\[|Q^{\star}(z)|=O(1),\qquad z\in U, \tag{7.15}\]
uniformly as \(n\to\infty\). Let us now rewrite (7.13), where we drop the \(n\)-dependence for better readability:
\[\int_{\Sigma^{U}}\mu(v-\widehat{v})\widehat{\mu}^{-1}\,ds=\int_{\Sigma^{U}}Q ^{\star}(s)P_{-}(s)(v(s)-\widehat{v}(s))[\widehat{P}_{-}(s)]^{-1}[\widehat{Q }^{\star}(s)]^{-1}\,ds \tag{7.16}\]
We now list bounds for each of the factors in the integrand:
* (2.5) and (2.14) imply that for some \(c>0\) \[v(s)-\widehat{v}(s)=\begin{cases}\begin{pmatrix}0&0\\ \Big{(}\frac{F^{2}}{w}(s)-\frac{\widehat{F}^{2}}{\widehat{w}}(s)\Big{)}\phi^{-2n}(s)&0\end{pmatrix}=O(|s+1|^{3/2}\mathrm{e}^{-cn|s+1|^{1/2}}),&s\in\Sigma^{U}\setminus(-1,1),\\ 0,&s\in\Sigma^{U}\cap(-1,1),\end{cases}\]
* (5.10) and (7.15) imply that \([Q^{\star}(z)]^{\pm 1},[\widehat{Q}^{\star}(z)]^{\pm 1}=O(1),\)
* and (5.11) implies that \([P_{-}(z)]^{\pm 1},[\widehat{P}_{-}(z)]^{\pm 1}=O(|z+1|^{-1/2}).\)
Taking the contributions of all factors in (7.16) into account we see that the integrand can be estimated by \(O(|z+1|^{1/2}\mathrm{e}^{-cn|z+1|^{1/2}})\), which precisely integrates to the error in (7.13).
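As a quick numerical sanity check of this last step, substituting \(u=n|s+1|^{1/2}\) shows that \(\int_{0}^{\delta}r^{1/2}\mathrm{e}^{-cnr^{1/2}}\,dr\) behaves like \(4/(c^{3}n^{3})\), i.e. precisely the \(O(n^{-3})\) rate. The following minimal sketch illustrates this; the values of \(c\) and \(\delta\) are arbitrary illustrative choices, not constants appearing in the text.

```python
import numpy as np
from scipy.integrate import quad

c, delta = 1.0, 1.0   # illustrative constants; the argument only needs some c > 0

def bound_integral(n):
    # \int_0^delta r^{1/2} exp(-c n r^{1/2}) dr; with u = n r^{1/2} this equals
    # (2/n^3) \int_0^{n sqrt(delta)} u^2 exp(-c u) du  ->  4/(c^3 n^3) as n -> infinity.
    return quad(lambda r: np.sqrt(r) * np.exp(-c * n * np.sqrt(r)),
                0.0, delta, limit=200)[0]

for n in (10, 20, 40):
    print(n, n**3 * bound_integral(n))   # approaches 4/c^3, confirming the O(n^{-3}) rate
```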
Prop. 7.4 and 7.5 together with Lemma 7.6 now readily imply Prop. 7.3.
### Asymptotics of the recurrence coefficients
Next we will derive the asymptotics for the recurrence coefficients stated in Theorem 1.1. This subsection follows the same line of reasoning as [3, Sect. 5.2].
Recall the result of Cor. 3.3, which states that the recurrence coefficients of the orthogonal polynomials with weight function \(\widehat{w}\) satisfy
\[\widehat{a}_{n}=\frac{1}{4n^{2}}+O\Big{(}\frac{1}{n^{3}}\Big{)},\qquad\widehat {b}_{n}=\frac{1}{2}-\frac{1}{16n^{2}}+O\Big{(}\frac{1}{n^{3}}\Big{)}. \tag{7.17}\]
With the help of Prop. 7.3 we can now expand (3.7):
\[a_{n}-\widehat{a}_{n} =\bigl{(}Q_{1}^{(n)}\bigr{)}_{11}-\bigl{(}\widehat{Q}_{1}^{(n)}\bigr{)}_{11}-\Bigl{(}\bigl{(}Q_{1}^{(n+1)}\bigr{)}_{11}-\bigl{(}\widehat{Q}_{1}^{(n+1)}\bigr{)}_{11}\Bigr{)}\] \[=\biggl{(}\frac{3}{16n^{2}\log^{2}n}\begin{pmatrix}-1&\mathrm{i}\\ \mathrm{i}&1\end{pmatrix}+O\Bigl{(}\frac{1}{n^{2}\log^{3}n}\Bigr{)}\biggr{)}_{11}\] \[=-\frac{3}{16n^{2}\log^{2}n}+O\Bigl{(}\frac{1}{n^{2}\log^{3}n}\Bigr{)}.\]
Now substituting the asymptotic formula (7.17) for \(\widehat{a}_{n}\) implies (1.2).
The proof of the asymptotic formula (1.3) is more involved. First recall [11, Sect. 8], in which it is shown that for \(z\) away from \(\pm 1\) we have
\[\widehat{Q}^{(n)}(z)=\Bigl{(}I+\frac{R_{1}(z)}{n}+Er(z,n)\Bigr{)}N(z), \tag{7.18}\]
where \(R_{1}(z)\), \(Er(z,n)\) are matrix-valued functions, holomorphic for \(z\in\Omega_{0}\) (cf. Fig. 3), satisfying
\[|R_{1}(z)|\leq\frac{c_{1}}{|z|},\quad|Er(z,n)|\leq\frac{c_{2}}{|z|n^{2}}\qquad z\to\infty \tag{7.19}\]
for some \(c_{1},c_{2}>0\). Importantly, \(R_{1}\) is \(n\)-independent. As a consequence of (7.18) and (7.19) we obtain
\[|\widehat{Q}^{(n+1)}(z)-\widehat{Q}^{(n)}(z)|\lesssim\frac{c_{2}}{|z|n^{2}}, \qquad z\to\infty,\]
from which
\[\widehat{Q}_{1}^{(n)}-\widehat{Q}_{1}^{(n+1)}=O\Bigl{(}\frac{1}{n^{2}}\Bigr{)} \tag{7.20}\]
follows. Additionally, direct computation leads to \(N_{12}(z)=-\frac{1}{2\mathrm{i}z}+O(z^{-2})\) implying
\[\bigl{(}\widehat{Q}_{1}^{(n)}\bigr{)}_{12}=-\frac{1}{2\mathrm{i}}+O\Bigl{(} \frac{1}{n}\Bigr{)}. \tag{7.21}\]
Now using (7.8) we conclude from (7.20) that
\[Q_{1}^{(n)}-Q_{1}^{(n+1)} =\widehat{Q}_{1}^{(n)}-\widehat{Q}_{1}^{(n+1)}+O\Bigl{(}\frac{1}{ n^{2}\log^{2}n}\Bigr{)} \tag{7.22}\] \[=O\Bigl{(}\frac{1}{n^{2}}\Bigr{)}.\]
Regarding formula (3.8) we now obtain using (7.21) and (7.22), together with Prop. 7.3:
\[b_{n}^{2}-\widehat{b}_{n}^{2} =\Bigl{(}\bigl{(}Q_{1}^{(n+1)}\bigr{)}_{12}-\bigl{(}\widehat{Q}_{1 }^{(n+1)}\bigr{)}_{12}\Bigr{)}\Bigl{(}\bigl{(}Q_{1}^{(n+1)}\bigr{)}_{21}- \bigl{(}Q_{1}^{(n+2)}\bigr{)}_{21}\Bigr{)}\] \[+\bigl{(}\widehat{Q}_{1}^{(n)}\bigr{)}_{12}\Bigl{[}\Bigl{(}\bigl{(} Q_{1}^{(n+1)}\bigr{)}_{21}-\bigl{(}Q_{1}^{(n+2)}\bigr{)}_{21}\Bigr{)}-\Bigl{(} \bigl{(}\widehat{Q}_{1}^{(n+1)}\bigr{)}_{21}-\bigl{(}\widehat{Q}_{1}^{(n+2)} \bigr{)}_{21}\Bigr{)}\Bigr{]}\] \[=\Bigl{(}\frac{3\mathrm{i}}{16n\log^{2}n}+O\Bigl{(}\frac{1}{n\log ^{3}n}\Bigr{)}\Bigr{)}O\Bigl{(}\frac{1}{n^{2}}\Bigr{)}\] \[+\Bigl{(}-\frac{1}{2\mathrm{i}}+O\Bigl{(}\frac{1}{n}\Bigr{)} \Bigr{)}\Bigl{(}\frac{3\mathrm{i}}{16n^{2}\log^{2}n}+O\Bigl{(}\frac{1}{n^{2} \log^{3}n}\Bigr{)}\Bigr{)}\] \[=-\frac{3}{32n^{2}\log^{2}n}+O\Bigl{(}\frac{1}{n^{2}\log^{3}n} \Bigr{)}. \tag{7.23}\]
Moreover we have
\[b_{n}^{2}-\widehat{b}_{n}^{2}=(b_{n}-\widehat{b}_{n})(b_{n}+\widehat {b}_{n}) =(b_{n}-\widehat{b}_{n})(2\widehat{b}_{n}+b_{n}-\widehat{b}_{n}) \tag{7.24}\] \[=(b_{n}-\widehat{b}_{n})(1+b_{n}-\widehat{b}_{n}+O(n^{-2}))\]
As \(\widehat{b}_{n}=\frac{1}{2}+O(n^{-2})\), see Cor. 3.3, and \(b_{n}>0\), we have that \((1+b_{n}-\widehat{b}_{n}+O(n^{-2}))>1/2-\epsilon\), which with (7.23) implies that
\[b_{n}-\widehat{b}_{n}=O\Big{(}\frac{1}{n^{2}\log^{2}n}\Big{)}. \tag{7.25}\]
Substituting (7.25) once again into the term \((1+b_{n}-\widehat{b}_{n}+O(n^{-2}))\), we conclude from (7.24) that in fact
\[b_{n}^{2}-\widehat{b}_{n}^{2}=(b_{n}-\widehat{b}_{n})(1+O(n^{-2})),\]
which with (7.23) implies
\[b_{n}-\widehat{b}_{n}=-\frac{3}{32n^{2}\log^{2}n}+O\Big{(}\frac{1}{n^{2}\log^ {3}n}\Big{)}.\]
This, together with the asymptotic formula (7.17), implies (1.3), finishing the proof of Theorem 1.1.
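For orientation, the size of the recurrence coefficients can also be checked numerically. The sketch below is only illustrative: it assumes the orthogonality weight \(w(x)=\log\frac{2}{1-x}\) on \((-1,1)\), the form suggested by the expressions \(w(1-t)=\log\frac{2}{t}\) used in Appendix A, and applies the discretized Stieltjes procedure on a Gauss–Legendre grid. The leading behaviour \(b_{n}\to\frac{1}{2}\) and \(a_{n}=O(n^{-2})\) is visible this way; the \(1/(n^{2}\log^{2}n)\) corrections of Theorem 1.1 are far below the accuracy of such a naive computation.

```python
import numpy as np

def stieltjes(x, q, N):
    """Discretized Stieltjes procedure: monic recurrence coefficients alpha_k, beta_k
    for the discrete inner product <f, g> = sum_k q_k f(x_k) g(x_k)."""
    alpha, beta = np.zeros(N), np.zeros(N)
    p_prev, p_curr = np.zeros_like(x), np.ones_like(x)
    nrm = np.sum(q)                      # <p_0, p_0>
    beta[0] = nrm                        # conventional choice: beta_0 = total mass
    for k in range(N):
        alpha[k] = np.sum(q * x * p_curr**2) / nrm
        if k == N - 1:
            break
        p_next = (x - alpha[k]) * p_curr - beta[k] * p_prev
        nrm_next = np.sum(q * p_next**2)
        beta[k + 1] = nrm_next / nrm
        p_prev, p_curr, nrm = p_curr, p_next, nrm_next
    return alpha, beta

# assumed logarithmic weight on (-1,1); illustrative only
w = lambda t: np.log(2.0 / (1.0 - t))
nodes, gl_weights = np.polynomial.legendre.leggauss(4000)
alpha, beta = stieltjes(nodes, gl_weights * w(nodes), 60)

# orthonormal recurrence coefficients: a_n = alpha_n, b_n = sqrt(beta_n);
# last two columns are the leading asymptotics 1/(4n^2) and 1/2 - 1/(16n^2) from (7.17)
for n in (10, 20, 40):
    print(n, alpha[n], np.sqrt(beta[n]), 1 / (4 * n**2), 0.5 - 1 / (16 * n**2))
```

Since the Gauss–Legendre rule resolves the logarithmic endpoint singularity only to limited accuracy, the printed values should be read as rough approximations rather than a verification of the constants in Theorem 1.1.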
## Appendix A Proofs of certain propositions
The following appendix contains proofs of Prop. 2.7 and Prop. 7.5. These differ in certain details from the analogous proofs in [3], hence are included here.
**Proposition A.1**.: _Let \(r_{n},\tilde{r}_{n}>0\), \(n\in\mathbb{N}\) be two sequences satisfying \(r_{n},\tilde{r}_{n}\to 0\), such that \(n\big{|}\frac{r_{n}}{\tilde{r}_{n}}-1\big{|}<R\). Then_
\[\frac{F^{2}}{w_{+}}(1+r_{n})-\frac{F^{2}}{w_{+}}(1+\tilde{r}_{n}) +\frac{F^{2}}{w_{-}}(1+r_{n})-\frac{F^{2}}{w_{-}}(1+\tilde{r}_{n})\] (A.1) \[=O(r_{n}\log|\log r_{n}|)+O\Big{(}\frac{1}{n\log^{3}r_{n}}\Big{)} +O\Big{(}\frac{1}{n^{2}}\Big{)},\]
_where the implied constants in the \(O\)-terms depend only on \(R\)._
Proof.: Let us assume without loss of generality \(r_{n}\geq\tilde{r}_{n}\). It follows from definition (2.2) that
\[\log F(1+r_{n}) =((1+r_{n})^{2}-1)^{1/2}\frac{1}{2\pi\mathrm{i}}\int_{-1}^{1} \frac{\log w(s)}{(s^{2}-1)_{+}^{1/2}}\frac{ds}{s-(1+r_{n})}\] \[=(\sqrt{2}r_{n}^{1/2}+O(r_{n}^{3/2}))\frac{1}{2\pi\mathrm{i}}\int _{-1}^{1}\frac{\log w(s)}{(s^{2}-1)_{+}^{1/2}}\frac{ds}{s-(1+r_{n})}.\]
Moreover, by [3, Eq. A.11] we have
\[\int_{-1}^{1}\frac{\log w(s)}{(s^{2}-1)_{+}^{1/2}}\frac{ds}{s-(1+r_{n})}=O \Big{(}\frac{\log|\log r_{n}|}{r_{n}^{1/2}}\Big{)},\]
implying that in fact
\[\log F(1+r_{n})=\sqrt{2}r_{n}^{1/2}\frac{1}{2\pi\mathrm{i}}\int_{-1}^{1}\frac {\log w(s)}{(s^{2}-1)_{+}^{1/2}}\frac{ds}{s-(1+r_{n})}+O(r_{n}\log|\log r_{n}|).\]
In particular, we obtain
\[\log\frac{F(1+r_{n})}{F(1+\tilde{r}_{n})}=\] \[\sqrt{2}r_{n}^{1/2}\frac{1}{2\pi{\rm i}}\int_{0}^{1}\frac{\log w(s) }{(s^{2}-1)_{+}^{1/2}}\frac{ds}{s-(1+r_{n})}-\sqrt{2}\tilde{r}_{n}^{1/2}\frac{1 }{2\pi{\rm i}}\int_{0}^{1}\frac{\log w(s)}{(s^{2}-1)_{+}^{1/2}}\frac{ds}{s-(1+ \tilde{r}_{n})}\] (A.2) \[\quad+\frac{1}{2\pi{\rm i}}\int_{-1}^{0}\frac{\log w(s)}{(s^{2}-1 )_{+}^{1/2}}\Bigg{[}\frac{\sqrt{2}r_{n}^{1/2}}{s-(1+r_{n})}-\frac{\sqrt{2} \tilde{r}_{n}^{1/2}}{s-(1+\tilde{r}_{n})}\Bigg{]}ds+O(r_{n}\log|\log r_{n}|).\]
The term in the bracket can be estimated as follows:
\[\Bigg{|}\frac{\sqrt{2}r_{n}^{1/2}}{s-(1+r_{n})}-\frac{\sqrt{2} \tilde{r}_{n}^{1/2}}{s-(1+\tilde{r}_{n})}\Bigg{|} =\Bigg{|}\sqrt{2}\frac{r_{n}^{1/2}(s-(1+\tilde{r}_{n}))-\tilde{r}_ {n}^{1/2}(s-(1+r_{n}))}{(s-(1+r_{n}))(s-(1+\tilde{r}_{n}))}\Bigg{|}\] \[=\Bigg{|}\sqrt{2}\frac{(s-1+r_{n}^{1/2}\tilde{r}_{n}^{1/2})(r_{n} ^{1/2}-\tilde{r}_{n}^{1/2})}{(s-(1+r_{n}))(s-(1+\tilde{r}_{n}))}\Bigg{|}\] \[=\underbrace{\Bigg{|}\sqrt{2}\frac{(s-1+r_{n}^{1/2}\tilde{r}_{n} ^{1/2})\tilde{r}_{n}^{1/2}}{(s-(1+r_{n}))(s-(1+\tilde{r}_{n}))}\Bigg{|}}_{=\,O (r_{n}^{1/2})}\cdot\underbrace{\Big{|}\Big{(}\frac{r_{n}}{\tilde{r}_{n}}\Big{)} ^{1/2}-1\Big{|}}_{<\frac{R}{n}}\] \[=O\Big{(}\frac{r_{n}^{1/2}}{n}\Big{)},\]
where we used that \(s\in(-1,0)\). Thus (A.2) can be rewritten as
\[\log\frac{F(1+r_{n})}{F(1+\tilde{r}_{n})}=\] \[\sqrt{2}r_{n}^{1/2}\frac{1}{2\pi{\rm i}}\int_{0}^{1}\frac{\log w (s)}{(s^{2}-1)_{+}^{1/2}}\frac{ds}{s-(1+r_{n})}-\sqrt{2}\tilde{r}_{n}^{1/2} \frac{1}{2\pi{\rm i}}\int_{0}^{1}\frac{\log w(s)}{(s^{2}-1)_{+}^{1/2}}\frac{ ds}{s-(1+\tilde{r}_{n})}\] \[\qquad\qquad+O(r_{n}\log|\log r_{n}|)+O\Big{(}\frac{r_{n}^{1/2}} {n}\Big{)}\]
Now after the change of variables \(t=1-s\) and some algebraic manipulation we get (cf. proof of [3, Prop. A.4])
\[\log\frac{F(1+r_{n})}{F(1+\tilde{r}_{n})} =\frac{r_{n}^{1/2}}{2\pi}\int_{0}^{1}\frac{\log w(1-t)}{\sqrt{t}} \frac{dt}{t+r_{n}}-\frac{\tilde{r}_{n}^{1/2}}{2\pi}\int_{0}^{1}\frac{\log w(1- t)}{\sqrt{t}}\frac{dt}{t+\tilde{r}_{n}}\] (A.3) \[+\frac{1}{2\pi}H(r_{n},\tilde{r}_{n})+O(r_{n}\log|\log r_{n}|)+O \Big{(}\frac{r_{n}^{1/2}}{n}\Big{)},\]
where
\[H(r_{n},\tilde{r}_{n}) =(r_{n}^{1/2}-\tilde{r}_{n}^{1/2})\int_{0}^{1}\log w(1-t)\Bigg{[} \frac{\sqrt{t}}{\sqrt{2}\sqrt{2-t}+(2-t)}\Bigg{]}\frac{dt}{t+r_{n}}\] \[+\tilde{r}_{n}^{1/2}\int_{0}^{1}\log w(1-t)\Bigg{[}\frac{\sqrt{t} }{\sqrt{2}\sqrt{2-t}+(2-t)}\Bigg{]}\Bigg{(}\frac{1}{t+r_{n}}-\frac{1}{t+ \tilde{r}_{n}}\Bigg{)}dt.\]
Thus we can estimate
\[|H(r_{n},\tilde{r}_{n})| \leq|r_{n}^{1/2}-\tilde{r}_{n}^{1/2}|\int_{0}^{1}|\log w(1-t)|\Bigg{[} \frac{\sqrt{t}}{\sqrt{2}\sqrt{2-t}+(2-t)}\Bigg{]}\frac{dt}{t}\] \[+\tilde{r}_{n}^{1/2}\int_{0}^{1}|\log w(1-t)|\Bigg{[}\frac{\sqrt{t }}{\sqrt{2}\sqrt{2-t}+(2-t)}\Bigg{]}\frac{|\tilde{r}_{n}-r_{n}|dt}{|t+r_{n}| \cdot|t+\tilde{r}_{n}|}\] \[\leq c\tilde{r}_{n}^{1/2}\Big{|}\frac{r_{n}^{1/2}}{\tilde{r}_{n}^ {1/2}}-1\Big{|}+\tilde{r}_{n}^{1/2}\Big{|}\frac{\tilde{r}_{n}-r_{n}}{\tilde{r} _{n}}\Big{|}\int_{0}^{1}|\log w(1-t)|\Bigg{[}\frac{\sqrt{t}}{\sqrt{2}\sqrt{2-t }+(2-t)}\Bigg{]}\frac{dt}{t}\] \[\leq c\tilde{r}_{n}^{1/2}\underbrace{\Big{|}\frac{r_{n}^{1/2}}{ \tilde{r}_{n}^{1/2}}-1\Big{|}}_{<\frac{R}{n}}+c\tilde{r}_{n}^{1/2}\underbrace{ \Big{|}\frac{r_{n}}{\tilde{r}_{n}}-1\Big{|}}_{<\frac{R}{n}}=O\Big{(}\frac{r_{ n}^{1/2}}{n}\Big{)}.\]
So we see that \(H(r_{n},\tilde{r}_{n})\) can be included in the error term \(O\big{(}\frac{r_{n}^{1/2}}{n}\big{)}\).
For the remaining integrals in (A.3) we obtain after performing the change of variables \(t\to r_{n}t\) and \(t\to\tilde{r}_{n}t\) in the first and second integral respectively:
(A.4) \[r_{n}^{1/2}\int_{0}^{1}\frac{\log w(1-t)}{\sqrt{t}}\frac{dt}{t+r_ {n}}-\tilde{r}_{n}^{1/2}\int_{0}^{1}\frac{\log w(1-t)}{\sqrt{t}}\frac{dt}{t+ \tilde{r}_{n}}\] \[=\int_{0}^{1/r_{n}}\frac{\log w(1-r_{n}t)}{\sqrt{t}}\frac{dt}{t+1 }-\int_{0}^{1/\tilde{r}_{n}}\frac{\log w(1-\tilde{r}_{n}t)}{\sqrt{t}}\frac{dt} {t+1}\] \[=\int_{0}^{1/r_{n}}\frac{\big{[}\log w(1-r_{n}t)-\log w(1-\tilde{ r}_{n}t)\big{]}}{\sqrt{t}}\frac{dt}{t+1}-\int_{1/r_{n}}^{1/\tilde{r}_{n}} \frac{\log w(1-\tilde{r}_{n}t)}{\sqrt{t}}\frac{dt}{t+1}.\]
Using the fact that \(|\log w(1-\tilde{r}_{n}t)|\) is uniformly bounded for \(t\in[\frac{1}{r_{n}},\frac{1}{\tilde{r}_{n}}]\), the last integral in (A.4) can be estimated via:
\[\Bigg{|}\int_{1/\tilde{r}_{n}}^{1/r_{n}}\frac{\log w(1-\tilde{r}_ {n}t)}{\sqrt{t}}\frac{dt}{t+1}\Bigg{|} \leq\Bigg{\|}\frac{1}{\sqrt{t}}\frac{1}{t+1}\Bigg{\|}_{L^{\infty}( 1/r_{n},1/\tilde{r}_{n})}\int_{1/\tilde{r}_{n}}^{1/r_{n}}|\log w(1-\tilde{r}_{ n}t)|dt\] \[\leq cr_{n}^{3/2}\Big{|}\frac{1}{\tilde{r}_{n}}-\frac{1}{r_{n}} \Big{|}=cr_{n}^{1/2}\Big{|}\frac{r_{n}}{\tilde{r}_{n}}-1\Big{|}=O\Big{(}\frac{r_ {n}^{1/2}}{n}\Big{)},\]
and hence can again be included in the \(O\big{(}\frac{r_{n}^{1/2}}{n}\big{)}\)-term in (A.3).
Let us now consider the remaining integral in the last line of (A.4):
(A.5) \[\int_{0}^{1/r_{n}}\big{[}\log w(1-r_{n}t) -\log w(1-\tilde{r}_{n}t)\big{]}\frac{dt}{t^{3/2}+t^{1/2}}\] \[=\int_{0}^{1/r_{n}}\log\Big{(}\frac{w(1-r_{n}t)}{w(1-\tilde{r}_ {n}t)}\Big{)}\frac{dt}{t^{3/2}+t^{1/2}}\]
Now define \(a=a(r_{n},\tilde{r}_{n};n):=n(\frac{r_{n}}{\tilde{r}_{n}}-1)\in[-R,R]\). Note that for \(t\in(0,\frac{1}{r_{n}})\) we have \(\log 2\leq\log\frac{2}{\tilde{r}_{n}t}\), hence
\[\frac{w(1-r_{n}t)}{w(1-\tilde{r}_{n}t)} =\frac{\log\frac{2}{r_{n}t}}{\log\frac{2}{\tilde{r}_{n}t}}=1+\frac{\log\frac{2}{r_{n}t}-\log\frac{2}{\tilde{r}_{n}t}}{\log\frac{2}{\tilde{r}_{n}t}}=1+\frac{\log\frac{\tilde{r}_{n}}{r_{n}}}{\log\frac{2}{\tilde{r}_{n}t}}\] \[=1+\frac{\log(1+\frac{a}{n})}{\log\frac{2}{\tilde{r}_{n}t}}=1+\frac{a}{n\log\frac{2}{\tilde{r}_{n}t}}+O\Big{(}\frac{1}{n^{2}}\Big{)}.\]
Thus we can estimate the integrand of (A.5) by
\[\log\frac{w(1-r_{n}t)}{w(1-\tilde{r}_{n}t)}=\frac{a}{n\log\frac{2}{\tilde{r}_{n}t}}+O\Big{(}\frac{1}{n^{2}}\Big{)},\]
where the \(O\big{(}\frac{1}{n^{2}}\big{)}\)-term is uniform for \(t\in[0,\frac{1}{r_{n}}]\). Substituting this into (A.5) we obtain
(A.6) \[\int_{0}^{1/r_{n}}\Bigg{(}\frac{a}{n\log\frac{2}{\tilde{r}_{n}t}}+O\Big{(}\frac{1}{n^{2}}\Big{)}\Bigg{)}\frac{dt}{t^{3/2}+t^{1/2}}\] \[=\Bigg{(}\int_{0}^{r_{n}^{1/2}}+\int_{r_{n}^{1/2}}^{1/r_{n}^{1/2}}+\int_{1/r_{n}^{1/2}}^{1/r_{n}}\Bigg{)}\frac{a}{n\log\frac{2}{\tilde{r}_{n}t}}\frac{dt}{t^{3/2}+t^{1/2}}+O\Big{(}\frac{1}{n^{2}}\Big{)}.\]
Note that we can assume w.l.o.g. that \(r_{n}\leq 1\), as in the case \(r_{n}>1\) the estimate in (A.1) is trivial. Two of the integrals in (A.6) can be estimated by
\[\Bigg{|}\int_{0}^{r_{n}^{1/2}}\frac{a}{n\log\frac{2}{\tilde{r}_{n}t}}\frac{dt}{t^{3/2}+t^{1/2}}\Bigg{|}\leq\Bigg{|}\int_{0}^{r_{n}^{1/2}}\frac{c_{1}}{n\log r_{n}}\frac{dt}{t^{1/2}}\Bigg{|}=\frac{2c_{1}r_{n}^{1/4}}{n|\log r_{n}|}=O\Big{(}\frac{r_{n}^{1/4}}{n\log r_{n}}\Big{)}\]
and
\[\Bigg{|}\int_{1/r_{n}^{1/2}}^{1/r_{n}}\frac{a}{n\log\frac{2}{\tilde{r}_{n}t}}\frac{dt}{t^{3/2}+t^{1/2}}\Bigg{|}\leq\Bigg{|}\int_{1/r_{n}^{1/2}}^{1/r_{n}}\frac{c_{2}}{n}\frac{dt}{t^{3/2}}\Bigg{|}\leq\frac{c_{2}r_{n}^{1/4}}{n}=O\Big{(}\frac{r_{n}^{1/4}}{n}\Big{)}.\]
For the remaining integral we have
\[\int_{r_{n}^{1/2}}^{1/r_{n}^{1/2}}\frac{a}{n\log\frac{2}{\tilde{r}_{n}t}}\frac{dt}{t^{3/2}+t^{1/2}} =\int_{r_{n}^{1/2}}^{1/r_{n}^{1/2}}\frac{a}{n\log\frac{2}{\tilde{r}_{n}}-n\log t}\frac{dt}{t^{3/2}+t^{1/2}}\] (A.7) \[=\int_{r_{n}^{1/2}}^{1/r_{n}^{1/2}}\frac{a}{n\log\frac{2}{\tilde{r}_{n}}}\Bigg{(}\frac{1}{1-\frac{\log t}{\log\frac{2}{\tilde{r}_{n}}}}\Bigg{)}\frac{dt}{t^{3/2}+t^{1/2}}.\]
Note that because \(t\in[r_{n}^{1/2},r_{n}^{-1/2}]\) we have \(|\log t|\leq\frac{1}{2}\log\frac{1}{r_{n}}<\frac{1}{2}\log\frac{2}{\tilde{r}_{n}}\) for \(n\) sufficiently large depending on \(R\), so the last integral in (A.7) can be estimated by
\[\frac{a}{n\log\frac{2}{\tilde{r}_{n}}}\int_{r_{n}^{1/2}}^{1/r_{n}^{1/2}}\Bigg{(}1+\frac{\log t}{\log\frac{2}{\tilde{r}_{n}}}+O\Bigg{(}\frac{\log^{2}t}{\log^{2}\frac{2}{\tilde{r}_{n}}}\Bigg{)}\Bigg{)}\frac{dt}{t^{3/2}+t^{1/2}}.\]
Making the change of variables \(\gamma=t^{1/2}\) this can be rewritten as
\[\frac{2a}{n\log\frac{2}{\tilde{r}_{n}}}\int_{r_{n}^{1/4}}^{1/r_{n}^{1/4}}\Bigg{(}1+\frac{2\log\gamma}{\log\frac{2}{\tilde{r}_{n}}}+O\Bigg{(}\frac{\log^{2}\gamma}{\log^{2}\frac{2}{\tilde{r}_{n}}}\Bigg{)}\Bigg{)}\frac{d\gamma}{\gamma^{2}+1}\] \[=\frac{2a}{n\log\frac{2}{\tilde{r}_{n}}}\Bigg{(}\int_{r_{n}^{1/4}}^{1/r_{n}^{1/4}}\frac{d\gamma}{\gamma^{2}+1}+\int_{r_{n}^{1/4}}^{1/r_{n}^{1/4}}\frac{2\log\gamma}{\log\frac{2}{\tilde{r}_{n}}}\frac{d\gamma}{\gamma^{2}+1}\Bigg{)}+O\Big{(}\frac{1}{n\log^{3}r_{n}}\Big{)}\] \[=\frac{2a}{n\log\frac{2}{\tilde{r}_{n}}}\Bigg{(}\int_{0}^{\infty}\frac{d\gamma}{\gamma^{2}+1}+\int_{0}^{\infty}\frac{2\log\gamma}{\log\frac{2}{\tilde{r}_{n}}}\frac{d\gamma}{\gamma^{2}+1}\Bigg{)}+O\Big{(}\frac{1}{n\log^{3}r_{n}}\Big{)}.\]
Note that
\[\int_{0}^{\infty}\frac{d\gamma}{\gamma^{2}+1}=\arctan\gamma\Big{|}_{0}^{\infty} =\frac{\pi}{2},\]
while the substitution \(\eta=\gamma^{-1}\) yields
\[\int_{0}^{\infty}\frac{\log(\gamma)d\gamma}{\gamma^{2}+1}=-\int_{0}^{\infty} \frac{\log(\eta)d\eta}{\eta^{2}+1}\]
implying \(\int_{0}^{\infty}\frac{\log(\gamma)d\gamma}{\gamma^{2}+1}=0\).
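Both integrals are classical and can also be confirmed numerically; a minimal sketch (illustrative only):

```python
import numpy as np
from scipy.integrate import quad

# \int_0^infty dgamma/(gamma^2+1) = pi/2
i1 = quad(lambda g: 1.0 / (g**2 + 1.0), 0.0, np.inf)[0]

# \int_0^infty log(gamma)/(gamma^2+1) dgamma = 0: split at gamma = 1; the
# contribution of the integrable log singularity at 0 cancels the tail exactly.
i2 = (quad(lambda g: np.log(g) / (g**2 + 1.0), 0.0, 1.0)[0]
      + quad(lambda g: np.log(g) / (g**2 + 1.0), 1.0, np.inf)[0])

print(i1 - np.pi / 2, i2)   # both are ~0 up to quadrature tolerance
```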
Summarizing, we have shown that
\[\int_{0}^{1/r_{n}}\big{[}\log w(1-r_{n}t) -\log w(1-\tilde{r}_{n}t)\big{]}\frac{dt}{t^{3/2}+t^{1/2}}\] \[=\frac{a\pi}{n\log(\frac{2}{\tilde{r}_{n}})}+O\Big{(}\frac{1}{n^{ 2}}\Big{)}+O\Big{(}\frac{1}{n\log^{3}r_{n}}\Big{)}.\]
We can substitute this estimate in (A.4) and then (A.3) to obtain
(A.8) \[\log\frac{F^{2}(1+r_{n})}{F^{2}(1+\tilde{r}_{n})}=2\log\frac{F(1+r_{n})}{F(1+ \tilde{r}_{n})}=\frac{a}{n\log\frac{2}{\tilde{r}_{n}}}+\Theta(r_{n},n),\]
where \(\Theta(r_{n},n)\) is short for \(O(r_{n}\log|\log r_{n}|)+O\big{(}\frac{1}{n\log^{3}r_{n}}\big{)}+O\big{(}\frac {1}{n^{2}}\big{)}\). Exponentiating the expression (A.8) leads to
\[\frac{F^{2}(1+r_{n})}{F^{2}(1+\tilde{r}_{n})}=1+\frac{a}{n\log\frac{2}{\tilde{ r}_{n}}}+\Theta(r_{n},n).\]
Moreover,
(A.9) \[\begin{split}&\frac{F^{2}}{w_{\pm}}(1+r_{n})-\frac{F^{2}}{w_{\pm}}(1+ \tilde{r}_{n})\\ &\quad=\frac{F^{2}(1+r_{n})-F^{2}(1+\tilde{r}_{n})}{w_{\pm}(1+r_{ n})}+F^{2}(1+\tilde{r}_{n})\Big{(}\frac{1}{w_{\pm}(1+r_{n})}-\frac{1}{w_{\pm}(1+ \tilde{r}_{n})}\Big{)}\\ &\quad=\frac{F^{2}(1+\tilde{r}_{n})}{w_{\pm}(1+r_{n})}\Big{(} \frac{F^{2}(1+r_{n})}{F^{2}(1+\tilde{r}_{n})}-1\Big{)}+\frac{F^{2}(1+\tilde{r }_{n})}{w_{\pm}(1+r_{n})}\Big{(}1-\frac{w_{\pm}(1+r_{n})}{w_{\pm}(1+\tilde{r} _{n})}\Big{)}\\ &\quad=\frac{F^{2}(1+\tilde{r}_{n})}{w_{\pm}(1+r_{n})}\Big{(} \frac{a}{n\log\frac{2}{\tilde{r}_{n}}}+\Theta(r_{n},n)+1-\frac{w_{\pm}(1+r_{n} )}{w_{\pm}(1+\tilde{r}_{n})}\Big{)}\end{split}\]
For the ratio of the weight functions we have (here \(\pm\) refers to the limit from \(\mathbb{C}_{\pm}\)):
\[\begin{split}\frac{w_{\pm}(1+r_{n})}{w_{\pm}(1+\tilde{r}_{n})}& =\frac{\log\frac{2}{r_{n}}\pm\pi\mathrm{i}}{\log\frac{2}{\tilde{ r}_{n}}\pm\pi\mathrm{i}}=1+\frac{\log\frac{2}{r_{n}}-\log\frac{2}{\tilde{r}_{n}}}{ \log\frac{2}{\tilde{r}_{n}}\pm\pi\mathrm{i}}\\ &=1+\frac{\log\frac{\tilde{r}_{n}}{r_{n}}}{\log\frac{2}{\tilde{r}_ {n}}\pm\pi\mathrm{i}}=1+\frac{a}{n(\log\frac{2}{\tilde{r}_{n}}\pm\pi\mathrm{i} )}+O\Big{(}\frac{1}{n^{2}}\Big{)},\end{split}\]
so
(A.10) \[\begin{split}\frac{a}{n\log\frac{2}{\tilde{r}_{n}}}+1-\frac{w_{ \pm}(1+r_{n})}{w_{\pm}(1+\tilde{r}_{n})}&=\frac{a}{n\log\frac{2}{ \tilde{r}_{n}}}-\frac{a}{n(\log\frac{2}{\tilde{r}_{n}}\pm\pi\mathrm{i})}+O \Big{(}\frac{1}{n^{2}}\Big{)}\\ &=\pm\frac{\pi\mathrm{i}a}{n\log\frac{2}{\tilde{r}_{n}}(\log \frac{2}{\tilde{r}_{n}}\pm\pi\mathrm{i})}+O\Big{(}\frac{1}{n^{2}}\Big{)}\\ &=\pm\frac{\pi\mathrm{i}a}{n\log^{2}\frac{2}{\tilde{r}_{n}}}+O \Big{(}\frac{1}{n\log^{3}r_{n}}\Big{)}+O\Big{(}\frac{1}{n^{2}}\Big{)}.\end{split}\]
Note that both error terms in (A.10) already appear in \(\Theta(r_{n},n)\). Substituting (A.10) into (A.9) and using (2.6) leads to
\[\frac{F^{2}}{w_{\pm}}(1+r_{n})-\frac{F^{2}}{w_{\pm}}(1+\tilde{r}_{n}) =\frac{F^{2}(1+\tilde{r}_{n})}{w_{\pm}(1+r_{n})}\Big{(}\pm\frac{ \pi\mathrm{i}a}{n\log^{2}\frac{2}{\tilde{r}_{n}}}+\Theta(r_{n},n)\Big{)}\] \[=\Big{(}1+O\Big{(}\frac{1}{\log r_{n}}\Big{)}\Big{)}\Big{(}\pm \frac{\pi\mathrm{i}a}{n\log^{2}\frac{2}{\tilde{r}_{n}}}+\Theta(r_{n},n)\Big{)}\] \[=\pm\frac{\pi\mathrm{i}a}{n\log^{2}\frac{2}{\tilde{r}_{n}}}+ \Theta(r_{n},n).\]
In particular,
\[\frac{F^{2}}{w_{+}}(1+r_{n}) -\frac{F^{2}}{w_{+}}(1+\tilde{r}_{n})+\frac{F^{2}}{w_{-}}(1+r_{n })-\frac{F^{2}}{w_{-}}(1+\tilde{r}_{n})\] \[=\Theta(r_{n},n)=O(r_{n}\log|\log r_{n}|)+O\Big{(}\frac{1}{n\log ^{3}r_{n}}\Big{)}+O\Big{(}\frac{1}{n^{2}}\Big{)},\]
finishing the proof.
**Proposition A.2**.: _The following estimates hold:_
(A.11) \[\frac{1}{2\pi\mathrm{i}}\int_{\Sigma^{r}}\widehat{\mu}_{n}(v_{n}-\widehat{v}_ {n})\widehat{\mu}_{n}^{-1}\,ds=\frac{3}{16n\log^{2}n}\begin{pmatrix}1&- \mathrm{i}\\ -\mathrm{i}&-1\end{pmatrix}+O\Big{(}\frac{1}{n\log^{3}n}\Big{)}\]
_and_
\[\frac{1}{2\pi\mathrm{i}}\int_{\Sigma^{r}}\widehat{\mu}_{n}(v_{n} -\widehat{v}_{n})\widehat{\mu}_{n}^{-1}\,ds-\frac{1}{2\pi\mathrm{i}}\int_{ \Sigma^{r}}\widehat{\mu}_{n+1}(v_{n+1}-\widehat{v}_{n+1})\widehat{\mu}_{n+1}^ {-1}\,ds\] (A.12) \[=\frac{3}{16n^{2}\log^{2}n}\begin{pmatrix}1&-\mathrm{i}\\ -\mathrm{i}&-1\end{pmatrix}+O\Big{(}\frac{1}{n^{2}\log^{3}n}\Big{)}.\]
Proof.: Following [3, Prop. C.3] one can show that the contributions to the integrals from the contour \(\Sigma^{r}\setminus(1,1+1/n)\) are exponentially small for \(n\to\infty\). Thus we see that for \(n\) large enough, up to an exponentially small error, the integral in (A.11) takes on the form
(A.13) \[-\frac{1}{2\pi\mathrm{i}}\int_{1}^{1+1/n}\widehat{\mu}_{n}(v_{n}-\widehat{v}_ {n})\widehat{\mu}_{n}^{-1}\,ds,\]
as \(\Sigma^{r}\cap(1,1+1/n)=(1,1+1/n)\) is oriented right to left. Hence, we need to consider the behaviour of \(\widehat{\mu}_{n}\) near the point \(+1\). In fact, as \(v_{n}-\widehat{v}_{n}\) is nonzero only in the 21-entry, see (3.5) and (3.6), we just need to consider the second column of \(\widehat{\mu}_{n}\) which we will denote by \(\widehat{\mu}_{2,n}\):
\[\widehat{\mu}_{2,n}=\begin{pmatrix}\widehat{\mu}_{12,n}\\ \widehat{\mu}_{22,n}\end{pmatrix}.\]
It follows from [11, Sect. 6, 7] (cf. [3, Eq. B.12]), that for \(s\in(1,1+1/n)\), \(\widehat{\mu}_{2,n}(s)\) takes on the form
(A.14) \[\widehat{\mu}_{2,n}(s)=\mathcal{R}^{(n)}(s)\mathcal{E}(s)(2\pi n)^{\sigma_{3}/ 2}\mathcal{K}(n^{2}f(s))\phi^{n}(s)\Big{(}\frac{\widehat{F}}{\mathcal{W}}(s) \Big{)}^{\sigma_{3}}.\]
Here
\[\mathcal{K}(\zeta)=\begin{pmatrix}\mathcal{K}_{1}(\zeta)\\ \mathcal{K}_{2}(\zeta)\end{pmatrix}=\begin{pmatrix}\frac{\mathrm{i}}{\pi}K_{0} (2\zeta^{1/2})\\ -2\zeta^{1/2}K_{0}^{\prime}(2\zeta^{1/2})\end{pmatrix},\]
where \(K_{0}\) is a special solution to the modified Bessel differential equation, see [17, Sect. 10.25], characterized by the condition
\[K_{0}(u)\sim\sqrt{\frac{\pi}{2u}}\mathrm{e}^{-u},\quad\text{for}\ \ u\to\infty\ \ \text{with}\ \ |\arg u|<\frac{3\pi}{2}-\varepsilon,\quad\varepsilon>0,\]
and \(f(z)=\frac{\log^{2}\phi(z)}{4}\) locally around \(z=+1\). The matrix-valued function \(\mathcal{R}^{(n)}\) is holomorphic in a fixed neighbourhood \(U_{+1}\) of \(+1\), where it satisfies uniformly:
\[\mathcal{R}^{(n)}(z)=I+O\Big{(}\frac{1}{n}\Big{)},\qquad z\in U_{+1},\quad n\to\infty.\]
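As an aside, the normalization of \(K_{0}\) quoted above, together with \(K_{0}^{\prime}=-K_{1}\), is easy to check numerically; a small sketch using `scipy.special` (illustrative only):

```python
import numpy as np
from scipy.special import k0, k1   # modified Bessel functions K_0 and K_1 = -K_0'

for u in (5.0, 20.0, 80.0):
    leading = np.sqrt(np.pi / (2.0 * u)) * np.exp(-u)
    # both ratios tend to 1 as u -> infinity, with O(1/u) corrections
    print(u, k0(u) / leading, k1(u) / leading)
```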
For \(\mathcal{E}\) we have
\[\mathcal{E}(z)=N(z)\Big{(}\frac{\mathcal{W}}{\widehat{F}}(z)\Big{)}^{\sigma_ {3}}\frac{1}{\sqrt{2}}\begin{pmatrix}1&-\mathrm{i}\\ -\mathrm{i}&1\end{pmatrix}f(z)^{\sigma_{3}/4},\qquad z\in U_{+1},\]
where \(\mathcal{W}=\sqrt{\widehat{w}}\) is holomorphic in \(U_{+1}\) as \(\widehat{w}\) is nonvanishing close to \(+1\) and \(N\) is taken from (5.7). One sees easily that \(\mathcal{E}\) is holomorphic in \(U_{+1}\) and satisfies
\[\mathcal{E}(1)=\frac{1}{\sqrt{2}}\begin{pmatrix}1&*\\ -\mathrm{i}&*\end{pmatrix},\]
cf. [3, Prop. C.2] and (2.11). Using the explicit form of \(\widehat{\mu}_{2,n}\) in (A.14) and abbreviating \(\mathcal{K}_{j}=\mathcal{K}_{j}(n^{2}f(s))\), \(j=1,2\), the integral in (A.13) without the \(\frac{1}{2\pi\mathrm{i}}\)-prefactor can be written as
(A.15) \[-\int_{1}^{1+1/n}\widehat{\mu}_{n}(v_{n}-\widehat{v}_{n}) \widehat{\mu}_{n}^{-1}\,ds\] \[=-\int_{1}^{1+1/n}\mathcal{R}^{(n)}(s)\mathcal{E}(s)(2\pi n)^{ \sigma_{3}/2}\begin{pmatrix}*&\mathcal{K}_{1}\\ *&\mathcal{K}_{2}\end{pmatrix}\phi^{-n\sigma_{3}}(s)\Big{(}\frac{\widehat{F}} {\mathcal{W}}(s)\Big{)}^{\sigma_{3}}\] \[\times\begin{pmatrix}0&0\\ (\frac{F^{2}}{w_{+}}(s)+\frac{F^{2}}{w_{-}}(s)-2\frac{\widehat{F}^{2}}{ \widehat{w}}(s)\big{)}\phi^{-2n}(s)&0\end{pmatrix}\Big{(}\frac{\widehat{F}}{ \mathcal{W}}(s)\Big{)}^{-\sigma_{3}}\phi^{n\sigma_{3}}(s)\] \[\begin{pmatrix}\mathcal{K}_{2}&-\mathcal{K}_{1}\\ *&*\end{pmatrix}(2\pi n)^{-\sigma_{3}/2}\mathcal{E}^{-1}(s)[\mathcal{R}^{(n)} (s)]^{-1}ds\] \[=-\int_{1}^{1+1/n}\Big{(}I+O\Big{(}\frac{1}{n}\Big{)}\Big{)} \mathcal{E}(s)(2\pi n)^{\sigma_{3}/2}\begin{pmatrix}*&\mathcal{K}_{1}\\ *&\mathcal{K}_{2}\end{pmatrix}\begin{pmatrix}0&0\\ 1&0\end{pmatrix}\begin{pmatrix}\mathcal{K}_{2}&-\mathcal{K}_{1}\\ *&*\end{pmatrix}\] \[\times(2\pi n)^{-\sigma_{3}/2}\mathcal{E}^{-1}(s)\Big{(}I+O \Big{(}\frac{1}{n}\Big{)}\Big{)}\Big{(}\frac{\mathcal{W}}{\widehat{F}}(s) \Big{)}^{2}\Big{(}\frac{F^{2}}{w_{+}}(s)+\frac{F^{2}}{w_{-}}(s)-2\frac{\widehat {F}^{2}}{\widehat{w}}(s)\Big{)}ds\] \[=-\int_{1}^{1+1/n}\Big{(}\mathcal{E}(1)+O\Big{(}\frac{1}{n}\Big{)} \Big{)}(2\pi n)^{\sigma_{3}/2}\begin{pmatrix}\mathcal{K}_{1}\mathcal{K}_{2}& -\mathcal{K}_{1}^{2}\\ \mathcal{K}_{2}^{2}&-\mathcal{K}_{1}\mathcal{K}_{2}\end{pmatrix}\] \[\times(2\pi n)^{-\sigma_{3}/2}\Big{(}\mathcal{E}^{-1}(1)+O \Big{(}\frac{1}{n}\Big{)}\Big{)}\Big{(}\frac{\mathcal{W}}{\widehat{F}}(s) \Big{)}^{2}\Big{(}\frac{F^{2}}{w_{+}}(s)+\frac{F^{2}}{w_{-}}(s)-2\frac{\widehat {F}^{2}}{\widehat{w}}(s)\Big{)}ds\] \[=-2\pi n\int_{1}^{1+1/n}\Big{(}\mathcal{E}(1)+O\Big{(}\frac{1}{n} \Big{)}\Big{)}\begin{pmatrix}1&0\\ 0&\frac{1}{2\pi n}\end{pmatrix}\begin{pmatrix}\mathcal{K}_{1}\mathcal{K}_{2}& -\mathcal{K}_{1}^{2}\\ \mathcal{K}_{2}^{2}&-\mathcal{K}_{1}\mathcal{K}_{2}\end{pmatrix}\] \[\times\begin{pmatrix}\frac{1}{2\pi n}&0\\ 0&1\end{pmatrix}\Big{(}\mathcal{E}^{-1}(1)+O\Big{(}\frac{1}{n}\Big{)}\Big{)} \Big{(}\frac{\mathcal{W}}{\widehat{F}}(s)\Big{)}^{2}\Big{(}\frac{F^{2}}{w_{+}} (s)+\frac{F^{2}}{w_{-}}(s)-2\frac{\widehat{F}^{2}}{\widehat{w}}(s)\Big{)}ds.\]
Here we used that the matrix \(\begin{pmatrix}*&\mathcal{K}_{1}\\ *&\mathcal{K}_{2}\end{pmatrix}\) has determinant equal to \(1\), cf. [11, Remark 7.1]. Note that all \(O\big{(}\frac{1}{n}\big{)}\)-terms in (A.15) for \(s\in(1,1+\frac{1}{n})\) are bounded by \(\frac{c}{n}\), where \(c>0\) is fixed. Hence, using (2.11) and (2.13) in the second-to-last line, we obtain
(A.16) \[-\int_{1}^{1+1/n}\widehat{\mu}_{n}(v_{n}-\widehat{v}_{n})\widehat{\mu}_{n}^{-1}\,ds\] \[=-2\pi n\int_{1}^{1+1/n}\mathcal{E}(1)\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\begin{pmatrix}\mathcal{K}_{1}\mathcal{K}_{2}&-\mathcal{K}_{1}^{2}\\ \mathcal{K}_{2}^{2}&-\mathcal{K}_{1}\mathcal{K}_{2}\end{pmatrix}\begin{pmatrix}0&0\\ 0&1\end{pmatrix}\mathcal{E}^{-1}(1)\] \[\times\Big{(}\frac{\mathcal{W}}{\widehat{F}}(s)\Big{)}^{2}\Big{(}\frac{F^{2}}{w_{+}}(s)+\frac{F^{2}}{w_{-}}(s)-2\frac{\widehat{F}^{2}}{\widehat{w}}(s)\Big{)}ds\] \[+\underbrace{O\Bigg{\|}\begin{pmatrix}\mathcal{K}_{1}\mathcal{K}_{2}&-\mathcal{K}_{1}^{2}\\ \mathcal{K}_{2}^{2}&-\mathcal{K}_{1}\mathcal{K}_{2}\end{pmatrix}\Bigg{(}-\frac{3\pi^{2}}{\log^{2}\frac{2}{s-1}}+O\Big{(}\frac{1}{\log^{3}(s-1)}\Big{)}\Bigg{)}\Bigg{\|}_{L^{1}(1,1+1/n)}}_{=O\big{(}\frac{1}{n^{2}\log^{2}n}\big{)}\text{ by arguments as in }[3,\text{ Prop. C.1}]}\] \[=-2\pi n\int_{1}^{1+1/n}\mathcal{E}(1)\begin{pmatrix}0&-\mathcal{K}_{1}^{2}\\ 0&0\end{pmatrix}\mathcal{E}^{-1}(1)\Big{(}\frac{\mathcal{W}}{\widehat{F}}(s)\Big{)}^{2}\Big{(}\frac{F^{2}}{w_{+}}(s)+\frac{F^{2}}{w_{-}}(s)-2\frac{\widehat{F}^{2}}{\widehat{w}}(s)\Big{)}ds\] \[\quad+O\Big{(}\frac{1}{n^{2}\log^{2}n}\Big{)}\] \[=\pi n\begin{pmatrix}\mathrm{i}&1\\ 1&-\mathrm{i}\end{pmatrix}\underbrace{\int_{1}^{1+1/n}\mathcal{K}_{1}^{2}\Big{(}-\frac{3\pi^{2}}{\log^{2}\frac{2}{s-1}}+O\Big{(}\frac{1}{\log^{3}(s-1)}\Big{)}\Big{)}ds}_{=\frac{3}{8n^{2}\log^{2}n}+O\big{(}\frac{1}{n^{2}\log^{3}n}\big{)}\text{ by }[3,\text{ Prop. C.1}]}+O\Big{(}\frac{1}{n^{2}\log^{2}n}\Big{)}\] \[=\frac{3\pi\mathrm{i}}{8n\log^{2}n}\begin{pmatrix}1&-\mathrm{i}\\ -\mathrm{i}&-1\end{pmatrix}+O\Big{(}\frac{1}{n\log^{3}n}\Big{)},\]
which after dividing by \(2\pi\mathrm{i}\) is equal to (A.11).
To show (A.12) we proceed just as before, but use a more precise expansion of \(\mathcal{R}^{(n)}\), see [11, Eq. 8.7]:
\[\mathcal{R}^{(n)}(z)=I+\frac{\mathcal{R}_{1}(z)}{n}+O\Big{(}\frac{1}{n^{2}} \Big{)},\]
where the \(O\big{(}\frac{1}{n^{2}}\big{)}\)-term is uniform in \(z\). We use this expansion in (A.15) to obtain
(A.17) \[\begin{split}&-\int_{1}^{1+1/n}\widehat{\mu}_{n}(v_{n}-\widehat{ v}_{n})\widehat{\mu}_{n}^{-1}\,ds\\ &=-2\pi n\int_{1}^{1+1/n}\Big{(}I+\frac{\mathcal{R}_{1}(s)}{n}+O \Big{(}\frac{1}{n^{2}}\Big{)}\Big{)}\mathcal{E}(s)\begin{pmatrix}1&0\\ 0&\frac{1}{2\pi n}\end{pmatrix}\\ &\times\begin{pmatrix}\mathcal{K}_{1}\mathcal{K}_{2}&-\mathcal{K}_{1}^{2}\\ \mathcal{K}_{2}^{2}&-\mathcal{K}_{1}\mathcal{K}_{2}\end{pmatrix}\begin{pmatrix} \frac{1}{2\pi n}&0\\ 0&1\end{pmatrix}\mathcal{E}^{-1}(s)\Big{(}I-\frac{\mathcal{R}_{1}(s)}{n}+O \Big{(}\frac{1}{n^{2}}\Big{)}\Big{)}\\ &\times\Big{(}\frac{\mathcal{W}}{\widehat{F}}(s)\Big{)}^{2}\Big{(} \frac{F^{2}}{w_{+}}(s)+\frac{F^{2}}{w_{-}}(s)-2\frac{\widehat{F}^{2}}{ \widehat{w}}(s)\Big{)}ds.\end{split}\]
A similar expression holds for the second integral in (A.12) but with \(n+1\) instead of \(n\). Next define \(y_{1}=n^{2}f(s)\) and \(y_{2}=(n+1)^{2}f(s)\). Note that for \(s\in(1,1+1/n)\), we have \(0<y_{1},y_{2}<cn\) for some \(c>0\). Let us denote by \(f_{1}^{-1}\) the local inverse of \(f\) around \(1\). Then we have with some constant \(a\)
\[\begin{split}& ds=\frac{1}{n^{2}}(f_{1}^{-1})^{\prime}\Big{(} \frac{y_{1}}{n^{2}}\Big{)}dy_{1}=\frac{2dy_{1}}{n^{2}}\Big{(}1+\frac{ay_{1}}{n ^{2}}+O\Big{(}\frac{1}{n^{2}}\Big{)}\Big{)},\\ & ds=\frac{1}{(n+1)^{2}}(f_{1}^{-1})^{\prime}\Big{(}\frac{y_{2}}{( n+1)^{2}}\Big{)}dy_{2}=\frac{2dy_{2}}{(n+1)^{2}}\Big{(}1+\frac{ay_{2}}{(n+1)^{2}}+O \Big{(}\frac{1}{n^{2}}\Big{)}\Big{)}.\end{split}\]
Now performing the substitution \(y_{1}=n^{2}f(s)\) in the expression (A.17) we get
(A.18) \[\begin{split}&-\int_{1}^{1+1/n}\widehat{\mu}_{n}(v_{n}- \widehat{v}_{n})\widehat{\mu}_{n}^{-1}\,ds\\ &=-\frac{4\pi}{n}\int_{0}^{n^{2}f(1+1/n)}\Big{(}I+\frac{\mathcal{ R}_{1}\big{(}f_{1}^{-1}\big{(}\frac{y_{1}}{n^{2}}\big{)}\big{)}}{n}+O\Big{(} \frac{1}{n^{2}}\Big{)}\Big{)}\mathcal{E}\Big{(}f_{1}^{-1}\Big{(}\frac{y_{1}}{ n^{2}}\Big{)}\Big{)}\\ &\times\begin{pmatrix}1&0\\ 0&\frac{1}{2\pi n}\end{pmatrix}\begin{pmatrix}\mathcal{K}_{1}(y_{1})\mathcal{ K}_{2}(y_{1})&-\mathcal{K}_{1}^{2}(y_{1})\\ \mathcal{K}_{2}^{2}(y_{1})&-\mathcal{K}_{1}(y_{1})\mathcal{K}_{2}(y_{1})\end{pmatrix} \begin{pmatrix}\frac{1}{2\pi n}&0\\ 0&1\end{pmatrix}\\ &\times\mathcal{E}^{-1}\Big{(}f_{1}^{-1}\Big{(}\frac{y_{1}}{n^{2}}\Big{)} \Big{)}\Big{(}I-\frac{\mathcal{R}_{1}\big{(}f_{1}^{-1}\big{(}\frac{y_{1}}{n^{2} }\big{)}\big{)}}{n}+O\Big{(}\frac{1}{n^{2}}\Big{)}\Big{)}\\ &\times\Big{(}\frac{\mathcal{W}}{\widehat{F}}\Big{(}f_{1}^{-1} \Big{(}\frac{y_{1}}{n^{2}}\Big{)}\Big{)}\Big{)}^{2}\Big{(}\frac{F^{2}}{w_{+}} \Big{(}f_{1}^{-1}\Big{(}\frac{y_{1}}{n^{2}}\Big{)}\Big{)}+\frac{F^{2}}{w_{-}} \Big{(}f_{1}^{-1}\Big{(}\frac{y_{1}}{n^{2}}\Big{)}\Big{)}-2\frac{\widehat{F}^{2} }{\widehat{w}}\Big{(}f_{1}^{-1}\Big{(}\frac{y_{1}}{n^{2}}\Big{)}\Big{)}\Big{)} \\ &\times\Big{(}1+\frac{ay_{1}}{n^{2}}+O\Big{(}\frac{1}{n^{2}}\Big{)} \Big{)}dy_{1}.\end{split}\]
We get a similar expression for the second integral in (A.12) after the change of variables \(s=f_{1}^{-1}(\frac{y_{2}}{(n+1)^{2}})\):
(A.19) \[-\int_{1}^{1+1/(n+1)}\widehat{\mu}_{n+1}(v_{n+1}-\widehat{v}_{n+1})\widehat{\mu}_{n+1}^{-1}\,ds\] \[=-\frac{4\pi}{n+1}\int_{0}^{(n+1)^{2}f(1+1/(n+1))}\Big{(}I+\frac{\mathcal{R}_{1}\big{(}f_{1}^{-1}\big{(}\frac{y_{2}}{(n+1)^{2}}\big{)}\big{)}}{n+1}+O\Big{(}\frac{1}{n^{2}}\Big{)}\Big{)}\mathcal{E}\Big{(}f_{1}^{-1}\Big{(}\frac{y_{2}}{(n+1)^{2}}\Big{)}\Big{)}\] \[\times\begin{pmatrix}1&0\\ 0&\frac{1}{2\pi(n+1)}\end{pmatrix}\begin{pmatrix}\mathcal{K}_{1}(y_{2})\mathcal{K}_{2}(y_{2})&-\mathcal{K}_{1}^{2}(y_{2})\\ \mathcal{K}_{2}^{2}(y_{2})&-\mathcal{K}_{1}(y_{2})\mathcal{K}_{2}(y_{2})\end{pmatrix}\begin{pmatrix}\frac{1}{2\pi(n+1)}&0\\ 0&1\end{pmatrix}\] \[\times\mathcal{E}^{-1}\Big{(}f_{1}^{-1}\Big{(}\frac{y_{2}}{(n+1)^{2}}\Big{)}\Big{)}\Big{(}I-\frac{\mathcal{R}_{1}\big{(}f_{1}^{-1}\big{(}\frac{y_{2}}{(n+1)^{2}}\big{)}\big{)}}{n+1}+O\Big{(}\frac{1}{n^{2}}\Big{)}\Big{)}\] \[\times\Big{(}\frac{\mathcal{W}}{\widehat{F}}\Big{(}f_{1}^{-1}\Big{(}\frac{y_{2}}{(n+1)^{2}}\Big{)}\Big{)}\Big{)}^{2}\Big{(}\frac{F^{2}}{w_{+}}\Big{(}f_{1}^{-1}\Big{(}\frac{y_{2}}{(n+1)^{2}}\Big{)}\Big{)}+\frac{F^{2}}{w_{-}}\Big{(}f_{1}^{-1}\Big{(}\frac{y_{2}}{(n+1)^{2}}\Big{)}\Big{)}\] \[-2\frac{\widehat{F}^{2}}{\widehat{w}}\Big{(}f_{1}^{-1}\Big{(}\frac{y_{2}}{(n+1)^{2}}\Big{)}\Big{)}\Big{)}\Big{(}1+\frac{ay_{2}}{(n+1)^{2}}+O\Big{(}\frac{1}{n^{2}}\Big{)}\Big{)}dy_{2}.\]
Now for \(0<y<cn\) we have uniformly
\[\frac{\mathcal{R}_{1}\big{(}f_{1}^{-1}\big{(}\frac{y}{n^{2}}\big{)} \big{)}}{n}-\frac{\mathcal{R}_{1}\big{(}f_{1}^{-1}\big{(}\frac{y}{(n+1)^{2}} \big{)}\big{)}}{n+1} =O\Big{(}\frac{1}{n^{2}}\Big{)},\] \[\mathcal{E}\Big{(}f_{1}^{-1}\Big{(}\frac{y}{n^{2}}\Big{)}\Big{)}- \mathcal{E}\Big{(}f_{1}^{-1}\Big{(}\frac{y}{(n+1)^{2}}\Big{)}\Big{)} =O\Big{(}\frac{y}{n^{3}}\Big{)}\lesssim O\Big{(}\frac{1}{n^{2}} \Big{)},\] \[\frac{ay}{n^{2}}-\frac{ay}{(n+1)^{2}} =O\Big{(}\frac{y}{n^{3}}\Big{)}\lesssim O\Big{(}\frac{1}{n^{2}} \Big{)},\] \[\frac{\mathcal{W}}{\widehat{F}}\Big{(}f_{1}^{-1}\Big{(}\frac{y}{n^ {2}}\Big{)}\Big{)}-\frac{\mathcal{W}}{\widehat{F}}\Big{(}f_{1}^{-1}\Big{(} \frac{y}{(n+1)^{2}}\Big{)}\Big{)} =O\Big{(}\frac{y^{1/2}}{n^{2}}\Big{)},\]
where we used [11, Lem. 6.4] in the last estimate. Note that all these error terms can be uniformly bounded by \(O\big{(}\frac{1+y^{1/2}}{n^{2}}\big{)}\). Additionally we can choose \(r_{n}=f_{1}^{-1}\big{(}\frac{y}{n^{2}}\big{)}-1=O\big{(}\frac{y}{n^{2}}\big{)}\), \(\tilde{r}_{n}=f_{1}^{-1}\big{(}\frac{y}{(n+1)^{2}}\big{)}-1=O\big{(}\frac{y}{n^ {2}}\big{)}\) in Prop. 2.7 to obtain:
\[\frac{F^{2}}{w_{+}}\Big{(}f_{1}^{-1}\Big{(}\frac{y}{n^{2}}\Big{)} \Big{)}+\frac{F^{2}}{w_{-}}\Big{(}f_{1}^{-1}\Big{(}\frac{y}{n^{2}}\Big{)}\Big{)} -\frac{F^{2}}{w_{+}}\Big{(}f_{1}^{-1}\Big{(}\frac{y}{(n+1)^{2}}\Big{)}\Big{)}\] \[-\frac{F^{2}}{w_{-}}\Big{(}f_{1}^{-1}\Big{(}\frac{y}{(n+1)^{2}} \Big{)}\Big{)}=O\Big{(}\frac{y}{n^{2}}\log\Big{|}\log\Big{(}\frac{y}{n^{2}} \Big{)}\Big{|}\Big{)}+O\Big{(}\frac{1}{n\log^{3}n}\Big{)}.\]
Note that indeed \(n\big{|}\frac{r_{n}}{\tilde{r}_{n}}-1\big{|}<R\) for an appropriate \(R>0\) as long as \(0<y<cn\).
From [17, Eq. 10.40] we have that
(A.20) \[K_{0}(u)\sim\sqrt{\frac{\pi}{2u}}\mathrm{e}^{-u},\quad K_{0}^{\prime}(u)\sim- \sqrt{\frac{\pi}{2u}}\mathrm{e}^{-u}\]
for \(u\to\infty\) with \(|\arg u|<\frac{3\pi}{2}-\varepsilon\) and \(\varepsilon>0\), implying that the \(\mathcal{K}_{j}(y)\) decay exponentially for \(y\to\infty\). Therefore changing the limit of integration from \((n+1)^{2}f(1+1/(n+1))\) to \(n^{2}f(1+1/n)\) in (A.19) will only introduce an exponentially
small error which we will neglect. We are now in a position to evaluate (A.12), by taking the difference of (A.18) and (A.19):
(A.21) \[=-\int_{\Sigma^{r}}\widehat{\mu}_{n}(v_{n}-\widehat{v}_{n})\widehat {\mu}_{n}^{-1}\,ds+\int_{\Sigma^{r}}\widehat{\mu}_{n+1}(v_{n+1}-\widehat{v}_{n+ 1})\widehat{\mu}_{n+1}^{-1}\,ds\] \[=\Big{(}-\frac{4\pi}{n}+\frac{4\pi}{n+1}\Big{)}\int_{0}^{n^{2}f(1 +1/n)}\Big{(}I+O\Big{(}\frac{1}{n}\Big{)}\Big{)}\mathcal{E}\Big{(}f_{1}^{-1} \Big{(}\frac{y}{n^{2}}\Big{)}\Big{)}\] \[\times\begin{pmatrix}1&0\\ 0&\frac{1}{2\pi n}\end{pmatrix}\begin{pmatrix}\mathcal{K}_{1}(y)\mathcal{K}_{ 2}(y)&-\mathcal{K}_{1}^{2}(y)\\ \mathcal{K}_{2}^{2}(y)&-\mathcal{K}_{1}(y)\mathcal{K}_{2}(y)\end{pmatrix} \begin{pmatrix}\frac{1}{2\pi n}&0\\ 0&1\end{pmatrix}\] \[\times\Big{(}\frac{F^{2}}{w_{+}}\Big{(}f_{1}^{-1}\Big{(}\frac{y} {n^{2}}\Big{)}\Big{)}+\frac{F^{2}}{w_{-}}\Big{(}f_{1}^{-1}\Big{(}\frac{y}{n^{2 }}\Big{)}\Big{)}\Big{)}-2\frac{\widehat{F}^{2}}{\widehat{w}}\Big{(}f_{1}^{-1} \Big{(}\frac{y}{n^{2}}\Big{)}\Big{)}\Big{)}\frac{1}{2}(f_{1}^{-1})^{\prime} \Big{(}\frac{y}{n^{2}}\Big{)}dy\] \[+\frac{4\pi}{n}O\Bigg{(}\Big{\|}\begin{pmatrix}\mathcal{K}_{1}(y) \mathcal{K}_{2}(y)&-\mathcal{K}_{1}^{2}(y)\\ \mathcal{K}_{2}^{2}(y)&-\mathcal{K}_{1}(y)\mathcal{K}_{2}(y)\end{pmatrix} \Big{(}\frac{1+y^{1/2}}{n^{2}}\Big{)}\Big{\|}_{L^{1}(0,n^{2}f(1+1/n))}\Bigg{)}\] \[+\frac{4\pi}{n}\Big{\|}\Big{[}O\Big{(}\frac{y}{n^{2}}\log\Big{|} \log\Big{(}\frac{y}{n^{2}}\Big{)}\Big{|}\Big{)}+O\Big{(}\frac{1}{n\log^{3}n} \Big{)}\Big{]}\] \[\times\begin{pmatrix}\mathcal{K}_{1}(y)\mathcal{K}_{2}(y)&- \mathcal{K}_{1}^{2}(y)\\ \mathcal{K}_{2}^{2}(y)&-\mathcal{K}_{1}(y)\mathcal{K}_{2}(y)\end{pmatrix} \Big{\|}_{L^{1}(0,n^{2}f(1+1/n))}.\]
Note that the \(y^{1/2}\) in the first \(L^{1}\)-norm is absorbed by the exponential decay of \(\mathcal{K}_{j}(y)\), which are in \(L^{2}(\mathbb{R}_{+})\) because of (5.2) and (A.20), implying that this norm is finite and of order \(O\big{(}\frac{1}{n^{2}}\big{)}\). Now observe that for \(\mathrm{e}^{-1}\leq y<cn\) we have
\[\log\Big{|}\log\Big{(}\frac{y}{n^{2}}\Big{)}\Big{|}=\log(-\log y+\log n^{2}) \leq\log(1+\log n^{2})\lesssim O(\log\log n),\]
by the monotonicity of the logarithm, while for \(0<y<\mathrm{e}^{-1}\) we have
\[\log\Big{|}\log\Big{(}\frac{y}{n^{2}}\Big{)}\Big{|}=\log(-\log y+ \log n^{2}) \leq\log(-2\log y)+\log(2\log n^{2})\] \[\lesssim O(\log|\log y|)+O(\log\log n),\]
again by the monotonicity of the logarithm.
Hence for \(0<y<cn\) we have
\[O\Big{(}\frac{y}{n^{2}}\log\Big{|}\log\Big{(}\frac{y}{n^{2}}\Big{)} \Big{|}\Big{)} \lesssim O\Big{(}\frac{y}{n^{2}}\Big{[}\log|\log y|\chi_{(0, \mathrm{e}^{-1})}+\log\log n\Big{]}\Big{)}\] \[\lesssim O\Big{(}\frac{\log\log n}{n^{2}}\Big{)}\big{(}O(y)+1 \big{)}.\]
We see that the growth of \(O(y)\) can be again absorbed into the exponential decay of \(\mathcal{K}_{j}(y)\), implying that the second \(L^{1}\)-norm in (A.21) can be bounded by \(O\big{(}\frac{1}{n\log^{3}n}\big{)}\).
We summarize:
\[=-\int_{\Sigma^{r}}\widehat{\mu}_{n}(v_{n}-\widehat{v}_{n})\widehat{ \mu}_{n}^{-1}\,ds+\int_{\Sigma^{r}}\widehat{\mu}_{n+1}(v_{n+1}-\widehat{v}_{n+1 })\widehat{\mu}_{n+1}^{-1}\,ds\] \[=-\frac{4\pi}{n(n+1)}\int_{0}^{n^{2}f(1+1/n)}\Big{(}I+O\Big{(} \frac{1}{n}\Big{)}\Big{)}\mathcal{E}\Big{(}f_{1}^{-1}\Big{(}\frac{y}{n^{2}} \Big{)}\Big{)}\] \[\times\begin{pmatrix}1&0\\ 0&\frac{1}{2\pi n}\end{pmatrix}\begin{pmatrix}\mathcal{K}_{1}(y)\mathcal{K}_{2 }(y)&-\mathcal{K}_{1}^{2}(y)\\ \mathcal{K}_{2}^{2}(y)&-\mathcal{K}_{1}(y)\mathcal{K}_{2}(y)\end{pmatrix} \begin{pmatrix}\frac{1}{2\pi n}&0\\ 0&1\end{pmatrix}\] \[\times\mathcal{E}^{-1}\Big{(}f_{1}^{-1}\Big{(}\frac{y}{n^{2}} \Big{)}\Big{)}\Big{(}I+O\Big{(}\frac{1}{n}\Big{)}\Big{)}\Big{(}\frac{\mathcal{ W}}{\widehat{F}}\Big{(}f_{1}^{-1}\Big{(}\frac{y}{n^{2}}\Big{)}\Big{)}\Big{)}^{2}\] \[\times\Big{(}\frac{F^{2}}{w_{+}}\Big{(}f_{1}^{-1}\Big{(}\frac{y}{ n^{2}}\Big{)}\Big{)}+\frac{F^{2}}{w_{-}}\Big{(}f_{1}^{-1}\Big{(}\frac{y}{n^{2}} \Big{)}\Big{)}-2\frac{\widehat{F}^{2}}{\widehat{w}}\Big{(}f_{1}^{-1}\Big{(} \frac{y}{n^{2}}\Big{)}\Big{)}\Big{)}\frac{1}{2}(f_{1}^{-1})^{\prime}\Big{(} \frac{y}{n^{2}}\Big{)}dy\] \[+O\Big{(}\frac{1}{n^{2}\log^{3}n}\Big{)}.\]
Inverting the change of variables \(y=n^{2}f(s)\) and Taylor expanding \(\mathcal{E}\), this becomes equal to
(A.22) \[-\frac{2\pi n}{n+1}\int_{1}^{1+1/n}\Big{(}\mathcal{E}(1)+O\Big{(} \frac{1}{n}\Big{)}\Big{)}\begin{pmatrix}1&0\\ 0&\frac{1}{2\pi n}\end{pmatrix}\] \[\times\begin{pmatrix}\mathcal{K}_{1}\mathcal{K}_{2}&-\mathcal{K}_ {1}^{2}\\ \mathcal{K}_{2}^{2}&-\mathcal{K}_{1}\mathcal{K}_{2}\end{pmatrix}\begin{pmatrix} \frac{1}{2\pi n}&0\\ 0&1\end{pmatrix}\Big{(}\mathcal{E}^{-1}(1)+O\Big{(}\frac{1}{n}\Big{)}\Big{)}\] \[\times\Big{(}\frac{\mathcal{W}}{\widehat{F}}(s)\Big{)}^{2}\Big{(} \frac{F^{2}}{w_{+}}(s)+\frac{F^{2}}{w_{-}}(s)-2\frac{\widehat{F}^{2}}{\widehat {w}}(s)\Big{)}ds+O\Big{(}\frac{1}{n^{2}\log^{3}n}\Big{)}.\]
However, (A.22) is precisely \(\frac{1}{n+1}\)-times the integral in the last line of (A.15), which was shown to be equal to \(\frac{3\pi{\rm i}}{8n\log^{2}n}\begin{pmatrix}1&-{\rm i}\\ -{\rm i}&-1\end{pmatrix}+O\big{(}\frac{1}{n\log^{3}n}\big{)}\) in Eq. (A.16). Thus, the expression in (A.22) is equal to
\[\frac{1}{n+1}\Bigg{(}\frac{3\pi{\rm i}}{8n\log^{2}n}\begin{pmatrix} 1&-{\rm i}\\ -{\rm i}&-1\end{pmatrix}+O\Big{(}\frac{1}{n\log^{3}n}\Big{)}\Bigg{)}+O\Big{(} \frac{1}{n^{2}\log^{3}n}\Big{)}\] \[=\frac{3\pi{\rm i}}{8n^{2}\log^{2}n}\begin{pmatrix}1&-{\rm i}\\ -{\rm i}&-1\end{pmatrix}+O\Big{(}\frac{1}{n^{2}\log^{3}n}\Big{)}.\]
Taking into account the \(\frac{1}{2\pi{\rm i}}\)-prefactor, this is seen to be equal to the right hand side of (A.12), finishing the proof.
**Acknowledgments.** The work of P.D. was supported in part by a Silver Grant from NYU. The work of M.P. was supported by the Methusalem grant METH/21/03 - long term structural funding of the Flemish Government and the National Science Foundation under Grant No. 1440140. The authors also gratefully acknowledge the support of MSRI (now the Simons Laufer Mathematical Sciences Institute) where the work in this paper was begun when the authors attended a semester program in Fall 2021.
|
2306.01502 | Ruin probability for renewal risk models with neutral net profit
condition | In ruin theory, the net profit condition intuitively means that the incurred
random claims on average do not occur more often than premiums are gained. The
breach of the net profit condition causes guaranteed ruin in few but simple
cases when both the claims' inter-occurrence time and random claims are
degenerate. In this work, we give a simplified argumentation for the
unavoidable ruin when the incurred claims on average occur equally as the
premiums are gained. We study the discrete-time risk model with
$N\in\mathbb{N}$ periodically occurring independent distributions, the
classical risk model, also known as the Cram\'er-Lundberg risk process, and the
more general E. Sparre Andersen model. | Andrius Grigutis, Arvydas Karbonskis, Jonas Šiaulys | 2023-06-02T12:49:58Z | http://arxiv.org/abs/2306.01502v1 | ###### Abstract
In ruin theory, the net profit condition intuitively means that the incurred random claims on average do not occur more often than premiums are gained. The breach of the net profit condition causes guaranteed ruin in few but simple cases when both the claims' inter-occurrence time and random claims are degenerate. In this work, we give a simplified argumentation for the unavoidable ruin when the incurred claims on average occur equally as the premiums are gained. We study the discrete-time risk model with \(N\in\mathbb{N}\) periodically occurring independent distributions, the classical risk model, also known as Cramer-Lundberg risk process, and the more general E. Sparre Andersen model.
**Keywords:** net profit condition, ruin probability, discrete-time risk model, classical risk model, E.S. Andersen risk model, random walk.
**MSC 2020:** 60G50, 60J80, 91G05.
**Ruin probability for renewal risk models**
**with neutral net profit condition**
**Andrius Grigutis, Arvydas Karbonskis, Jonas Siaulys**
Institute of Mathematics, Vilnius University,
Naugarduko 24, Vilnius, Lithuania, LT-03225
[email protected], [email protected],
[email protected]
## 1 Introduction
In 1957, during the 15th International Congress of Actuaries, E. Sparre Andersen [1] proposed to use a _renewal risk model_ to describe the behavior of the insurer's surplus. According to Andersen's proposed model, the insurer's surplus process \(W\) admits the following representation
\[W(t)=u+ct-\sum_{i=1}^{\Theta(t)}X_{i},\ t\geqslant 0, \tag{1}\]
where:
\(\bullet\)\(u\geqslant 0\) denotes the initial insurer's surplus, \(W(0)=u\);
\(\bullet\)\(c>0\) denotes the premium rate per unit of time;
\(\bullet\) the cost of claims \(X_{1},\,X_{2},\,\ldots\) are independent copies of a non-negative random variable \(X\);
\(\bullet\) the inter-occurrence times \(\theta_{1},\,\theta_{2},\,\ldots\) between claims form another sequence of independent copies of a non-negative random variable \(\theta\) which is not degenerate at zero, i.e. \(\mathbb{P}(\theta=0)<1\);
\(\bullet\) the sequences \(\{X_{1},\,X_{2},\,\ldots\}\) and \(\{\theta_{1},\,\theta_{2},\,\ldots\}\) are mutually independent;
\(\bullet\)\(\Theta(t)=\#\{n\geqslant 1:T_{n}\in[0,t]\}\) is the renewal process generated by the random variable \(\theta\), where \(T_{n}=\theta_{1}+\theta_{2}+\ldots+\theta_{n}\).
The main critical characteristics of the defined renewal risk model (1) are the _time of ruin_
\[\tau_{u}=\begin{cases}&\inf\{t\geqslant 0:W(t)<0\},\\ &\infty,\ \text{if}\ \ W(t)\geqslant 0\ \ \text{for all}\ \ t\geqslant 0\end{cases}\]
and the _ultimate time ruin probability_ (or just the _ruin probability_)
\[\psi(u)=\mathbb{P}(\tau_{u}<\infty).\]
The model (1) and the definition of \(\psi(u)\) imply that for all \(u\geqslant 0\)
\[\psi(u) =\mathbb{P}\left(\bigcup_{t\geqslant 0}\left\{W(t)<0\right\}\right)\] \[=\mathbb{P}\left(\inf_{n\geqslant 1}\left\{u+cT_{n}-\sum_{i=1}^{n}X_{i}\right\}<0\right)\] \[=\mathbb{P}\left(\sup_{n\geqslant 1}\sum_{k=1}^{n}(X_{k}-c\theta_{k})>u\right). \tag{2}\]
Thus, the ultimate time ruin probability \(\psi(u)\) is nothing but the tail of the distribution function of the random variable \(\sup_{n\geqslant 1}\sum_{k=1}^{n}(X_{k}-c\theta_{k})\). In ruin theory the difference \(\mathbb{E}X-c\,\mathbb{E}\theta\) describes the so-called _net profit condition_. It is well known that \(\psi(u)=1\) for any \(u\geqslant 0\) if \(\mathbb{E}X-c\,\mathbb{E}\theta>0\); this fact is easily implied by the strong law of large numbers, see [12, Prop. 7.2.3]. Also, \(\psi(u)=1\) for any \(u\geqslant 0\) if \(\mathbb{E}X-c\,\mathbb{E}\theta=0\) (see [12, pp. 559-564]), except in some simple cases when both random variables \(X\) and \(\theta\) are degenerate. We call the net profit condition _neutral_ if \(\mathbb{E}X-c\,\mathbb{E}\theta=0\) and say that it holds if \(\mathbb{E}X-c\,\mathbb{E}\theta<0\).
\[\mathbb{E}X-c\,\mathbb{E}\theta=0\quad\Rightarrow\quad\psi(u)=1 \tag{3}\]
for all \(u\geqslant 0\), can be deduced from a deep study of random walks, see for example [7], [12], [16]. Therefore, mathematical curiosity drives us to derive (3) using simpler arguments.
In [3], the authors basically use the Silverman-Toeplitz theorem to prove (3) for the discrete-time and classical risk models. The proofs presented for both models are significantly simpler than those presented in [7], [12], [16]. In this article, we show that the proof of implication (3) can be simplified even further, although in some instances we use the Pollaczek-Khinchine formula. The desired simplification of the proof can
be achieved by defining the random vector \((X^{*},\,X)\), where \(X^{*}\) is a new random variable which is arbitrarily close to \(X\) and satisfies \(\mathbb{P}(X^{*}\leqslant X)=1\).1 This approach is similar to the probabilistic proof of Turan's theorem given in [2]2. For the defined random variable \(X^{*}\) the net profit condition \(\mathbb{E}X^{*}-c\mathbb{E}\theta<0\) is satisfied, and we show that the known algorithms of ruin probability calculation under the net profit condition imply \(\psi(u)=1\) for all \(u\geqslant 0\) as \(X^{*}\) approaches \(X\).
Footnote 1: Originally the idea was raised by Justas Klimavicius, a fourth-year student of the Faculty of Mathematics and Informatics, in 2017.
Footnote 2: We thank Professor Eugenijus Manstavicius for pointing to this fact.
In Section 3 we derive (3) for the more general discrete-time risk model when \(\theta\equiv 1\), \(c\in\mathbb{N}\) and non-negative independent integer-valued random variables \(X_{i}\stackrel{{ d}}{{=}}X_{i+N}\) for all \(i\in\mathbb{N}\) and some fixed natural \(N\), i.e. we allow the random variables \(X_{1},\,X_{2},\,\ldots\) in model (1) to be independent but not necessarily identically distributed. Obviously, if \(N=1\) then we get that r.v.s \(X_{1},\,X_{2},\,\ldots\) are identically distributed. In Section 4, we derive (3) for the classical risk model when \(\Theta(t)\) in (1) is assumed to be a Poisson process with intensity \(\lambda>0\). Recall that in this case
\[\mathbb{P}(\Theta(t+s)-\Theta(s)=n)=e^{-\lambda t}\frac{(\lambda t)^{n}}{n!}\]
for all \(n\in\mathbb{N}\) and \(t,\,s>0\). In the last Section 5, we consider the most general E.S. Andersen's model (1) in terms of proving (3) by the known facts of ruin probability calculation under the net profit condition. More precisely, we reformulate and give different proofs than the existing ones to the following three theorems.
**Theorem 1**.: _Suppose the insurer's surplus process \(W(t)\) varies according to the discrete-time risk model (4) with \(N\) periodically occurring independent discrete and integer-valued non-negative r.v.s \(X_{i}\stackrel{{ d}}{{=}}X_{i+N}\) and \(\theta\equiv 1\). Let \(S_{N}=X_{1}+X_{2}+\ldots+X_{N}\). If the net profit condition is neutral \(cN-\mathbb{E}S_{N}=0\) and \(\mathbb{P}(S_{N}=cN)<1\), the ultimate time ruin probability \(\psi(u)=1\) for all \(u\in\mathbb{N}\cup\{0\}\)._
**Theorem 2**.: _Let \(W(t),\,t\geqslant 0\) be a surplus process of the classical risk model generated by a random claim amount \(X\), an exponentially distributed inter-occurrence time \(\theta\) with mean \(\mathbb{E}\theta=1/\lambda,\,\lambda>0\), and a constant premium rate \(c>0\). If the net profit condition is neutral \(\lambda\mathbb{E}X=c\), then \(\psi(u)=1\) for all \(u\geqslant 0\)._
**Theorem 3**.: _Let \(W(t),\,t\geqslant 0\) be a surplus process of E. Sparre Andersen model generated by a random claim amount \(X\), inter-occurrence time \(\theta\), and a constant premium rate \(c>0\). If the net profit condition is neutral \(\mathbb{E}X/\mathbb{E}\theta=c\) and \(\mathbb{P}(X=c\theta)<1\), then \(\psi(u)=1\) for all \(u\geqslant 0\)._
## 2 One auxiliary statement
Proving Theorems 2 and 3 we use the Pollaczek-Khinchine formula. This raises the need for the following statement.
**Lemma 4**.: _Let \(\eta_{1},\,\eta_{2},\,\ldots\) be independent identically distributed non-negative random variables which are not degenerate at zero. Then_
\[\sum_{n=1}^{\infty}\mathbb{P}(\eta_{1}+\ldots+\eta_{n}\leqslant x)<\infty\]
_for any \(x\geqslant 0\)._
Proof.: Let \(t\) be some small positive number and say that the non-negative random variables \(\eta_{1},\,\eta_{2},\,\ldots\) are independent copies of \(\eta\). Then, rearranging and using Markov's inequality, we obtain
\[\sum_{n=1}^{\infty}\mathbb{P}(\eta_{1}+\ldots+\eta_{n}\leqslant x)=\sum_{n=1}^ {\infty}\mathbb{P}\left(e^{-t(\eta_{1}+\ldots+\eta_{n})}\geqslant e^{-tx} \right)\leqslant e^{tx}\sum_{n=1}^{\infty}\left(\mathbb{E}e^{-t\eta}\right)^{ n}<\infty,\]
since \(\mathbb{E}e^{-t\eta}<1\) under the considered conditions.
Of course, the upper bound of the sum \(\sum_{n=1}^{\infty}\mathbb{P}(\eta_{1}+\ldots+\eta_{n}\leqslant x)\) can be improved compared to the given one; see for instance [11, Proof of lem. 8] and other literature on concentration inequalities.
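For a quick numerical illustration of this bound, one may take \(\eta\sim\mathrm{Exp}(1)\), so that \(q=\mathbb{E}\mathrm{e}^{-t\eta}=1/(1+t)<1\) and the \(n\)-fold sums are Gamma distributed; the following minimal sketch compares the truncated series with the bound \(\mathrm{e}^{tx}q/(1-q)\) from the proof.

```python
# A minimal numeric illustration of Lemma 4, assuming eta ~ Exp(1),
# so that q = E exp(-t*eta) = 1/(1+t) < 1 for every t > 0.
from scipy.stats import gamma
import numpy as np

x, t = 3.0, 0.5                        # threshold x and an arbitrary t > 0
q = 1.0 / (1.0 + t)

# Truncated series sum_{n>=1} P(eta_1 + ... + eta_n <= x); for Exp(1)
# increments the n-fold sum is Gamma(n, 1)-distributed.
series = sum(gamma.cdf(x, a=n) for n in range(1, 200))

bound = np.exp(t * x) * q / (1.0 - q)  # the Markov/geometric bound from the proof

print(round(series, 4))  # ~ 3.0 (the renewal function at x for Exp(1) summands)
print(round(bound, 4))   # ~ 8.96, finite as Lemma 4 asserts
```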
## 3 Discrete-time risk model
Let us consider the model (1). Suppose \(c\in\mathbb{N}\), \(\theta\equiv 1\), the independent random variables \(X_{1},\,X_{2},\,\ldots\) are non-negative integer-valued and follow the N-seasonal pattern, i.e. \(X_{i}\stackrel{{ d}}{{=}}X_{i+N}\) for all \(i\in\mathbb{N}\) and some fixed \(N\in\mathbb{N}\). If these requirements are satisfied, then the general E. S. Andersen's renewal risk model (1) becomes the discrete-time risk model
\[W(t)=u+ct-\sum_{i=1}^{\lfloor t\rfloor}X_{i},\,t\geqslant 0, \tag{4}\]
where the symbol \(\lfloor\cdot\rfloor\) denotes the floor function. Then, it is sufficient to consider (4) (in terms of \(W(t)<0\) for at least one \(t\geqslant 0\)) for \(u\in\{0,\,1,\,2,\,\ldots\}=:\mathbb{N}_{0}\) and \(t\in\mathbb{N}\) only. In this case, the ruin time and the ultimate time ruin probability have the following standard expressions
\[\tau_{u}=\begin{cases}&\min\{t\in\mathbb{N}:W(t)<0\},\\ &\infty,\,\,\mbox{if}\,\,\,\,W(t)\geqslant 0\,\,\,\,\mbox{for all}\,\,\,\,t\in\mathbb{N}, \end{cases}\] \[\psi(u)=\mathbb{P}\left(\tau_{u}<\infty\right)=\mathbb{P}\left( \sup_{k\geqslant 1}\sum_{i=1}^{k}(X_{i}-c)>u\right),\,u\in\mathbb{N}_{0}. \tag{5}\]
If we denote \(\varphi=1-\psi\) the ultimate time survival probability, then, according to (5),
\[\varphi(u)=\mathbb{P}\left(\sup_{k\geqslant 1}\sum_{i=1}^{k}(X_{i}-c)\leqslant u \right),\,u\in\mathbb{N}_{0}. \tag{6}\]
In [8] and various other papers, the survival probability is studied according to a slightly different definition than (6), i.e.
\[\hat{\varphi}(u)=\mathbb{P}\left(\sup_{k\geqslant 1}\sum_{i=1}^{k}(X_{i}-c)<u \right). \tag{7}\]
It is easy to see that
\[\varphi(u)=\hat{\varphi}(u+1) \tag{8}\]
for all \(u\in\mathbb{N}_{0}\). We now prove Theorem 1.
Proof of Theorem 1.: We first demonstrate the proof for the simplest version of the homogeneous discrete-time risk model (4) when \(c=1\) and \(N=1\). Let \(h_{k}=\mathbb{P}(X=k)\), \(k\in\mathbb{N}_{0}\), and observe that the conditions \(\mathbb{E}X=1\) and \(\mathbb{P}(X=1)=h_{1}<1\) imply \(h_{l}>0\) for some \(l\geqslant 2\). Indeed,
\[\mathbb{E}X=h_{1}+2h_{2}+3h_{3}+\ldots=1\]
and \(h_{1}<1\) means that at least one probability out of \(h_{2},\,h_{3},\,\ldots\) is positive. In addition, conditions \(h_{1}<1\) and \(\mathbb{E}X=1\) imply \(h_{0}>0\). Indeed, if \(h_{0}=0\), then \(h_{1}+h_{2}+h_{3}+\ldots=1\) and
\[1=\mathbb{E}X=h_{1}+2h_{2}+3h_{3}+\ldots>h_{1}+h_{2}+h_{3}+\ldots=1\]
leads to the contradiction.
Let us choose \(l\geqslant 2\) such that \(h_{l}=\mathbb{P}(X=l)>0\) and define the distribution of an integer-valued random vector \((X^{*},\,X)\) by the following equalities:
\[\mathbb{P}(X^{*}=k,\,X=k)=h_{k},\,k\in\mathbb{N}_{0},k\neq l,\] \[\mathbb{P}(X^{*}=l,\,X=l)=h_{l}-\frac{\varepsilon}{l},\] \[\mathbb{P}(X^{*}=0,\,X=l)=\frac{\varepsilon}{l},\] \[\mathbb{P}(X^{*}=k,\,X=m)=0,\,\{k,\,m\}\in\mathbb{N}_{0}^{2},\, \{k,\,m\}\neq\{0,\,l\},\,k\neq m,\]
where \(\varepsilon\in(0,\,lh_{l})\) is arbitrarily small.
Visually, vector's \((X^{*},\,X)\) distribution is the following
\begin{tabular}{|c|c|c|c|c|c|c|c||c||c|} \hline \(X^{*}\backslash X\) & 0 & 1 & 2 & \(\ldots\) & \(l-1\) & \(l\) & \(l+1\) & \(\ldots\) & \(\Sigma\) \\ \hline
0 & \(h_{0}\) & 0 & 0 & \(\ldots\) & 0 & \(\varepsilon/l\) & 0 & \(\ldots\) & \(h_{0}+\varepsilon/l\) \\ \hline
1 & 0 & \(h_{1}\) & 0 & \(\ldots\) & 0 & 0 & 0 & \(\ldots\) & \(h_{1}\) \\ \hline
2 & 0 & 0 & \(h_{2}\) & \(\ldots\) & 0 & 0 & 0 & \(\ldots\) & \(h_{2}\) \\ \hline \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\ddots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\ddots\) & \(\vdots\) \\ \hline \(l-1\) & 0 & 0 & 0 & \(\ldots\) & \(h_{l-1}\) & 0 & 0 & \(\ldots\) & \(h_{l-1}\) \\ \hline \(l\) & 0 & 0 & 0 & \(\ldots\) & 0 & \(h_{l}-\varepsilon/l\) & 0 & \(\ldots\) & \(h_{l}-\varepsilon/l\) \\ \hline \(l+1\) & 0 & 0 & 0 & \(\ldots\) & 0 & 0 & \(h_{l+1}\) & \(\ldots\) & \(h_{l+1}\) \\ \hline \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\ddots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\ddots\) & \(\vdots\) \\ \hline \hline \(\Sigma\) & \(h_{0}\) & \(h_{1}\) & \(h_{2}\) & \(\ldots\) & \(h_{l-1}\) & \(h_{l}\) & \(h_{l+1}\) & \(\ldots\) & 1 \\ \hline \end{tabular}
It is easy to see that \(\mathbb{E}X^{*}=1-\varepsilon<1\), and
\[\mathbb{P}(X^{*}\leqslant X) =\sum_{k=0}^{\infty}\sum_{m=k}^{\infty}\mathbb{P}(X^{*}=k,\,X=m)\] \[=\mathbb{P}(X^{*}=0,\,X=l)+\sum_{k=0}^{\infty}\mathbb{P}(X^{*}=k, \,X=k)=1.\]
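The defined coupling is easy to check numerically. The following minimal sketch uses a hypothetical toy law \(h_{0}=h_{2}=1/2\) (so that \(\mathbb{E}X=1\) and \(l=2\)) and verifies that \(\mathbb{E}X^{*}=1-\varepsilon\), \(\mathbb{P}(X^{*}\leqslant X)=1\), and that \(X\) keeps its original marginal distribution.

```python
# A toy numeric check of the coupling (X*, X), assuming the hypothetical law
# h_0 = h_2 = 1/2 (so E X = 1 and l = 2) and a small epsilon in (0, l*h_l).
eps, l = 0.1, 2
h = {0: 0.5, 2: 0.5}                           # distribution of X

joint = {(k, k): p for k, p in h.items()}      # diagonal mass P(X* = k, X = k)
joint[(l, l)] = h[l] - eps / l                 # reduce mass on (l, l) ...
joint[(0, l)] = eps / l                        # ... and move it to (0, l)

EX_star = sum(k * p for (k, _), p in joint.items())
P_leq = sum(p for (k, m), p in joint.items() if k <= m)
marginal_X = {m: sum(p for (_, mm), p in joint.items() if mm == m) for m in h}

print(EX_star)      # 0.9 = 1 - eps, so the net profit condition holds for X*
print(P_leq)        # 1.0, i.e. P(X* <= X) = 1
print(marginal_X)   # {0: 0.5, 2: 0.5}, X keeps its original distribution
```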
Let \((X^{*}_{j},\,X_{j})\), \(j\in\mathbb{N}\), be independent copies of random vector \((X^{*},\,X)\). We have that \(\mathbb{P}(X^{*}_{j}\leqslant X_{j})=1\) for each \(j\in\mathbb{N}\). Therefore,
\[\mathbb{P}(X^{*}_{1}+X^{*}_{2}\leqslant X_{1}+X_{2}) =\sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\mathbb{P}(X^{*}_{1}+k \leqslant X_{1}+l)\mathbb{P}(X^{*}_{2}=k,\,X_{2}=l)\] \[=\sum_{k=0,\ k\neq l}^{\infty}\mathbb{P}(X^{*}_{1}+k\leqslant X_{1 }+k)h_{k}\] \[+\ \mathbb{P}(X^{*}_{1}+l\leqslant X_{1}+l)\left(h_{l}-\frac{ \varepsilon}{l}\right)\] \[+\ \mathbb{P}(X^{*}_{1}\leqslant X_{1}+l)\frac{\varepsilon}{l}=1,\]
due to \(\mathbb{P}(X^{*}_{1}\leqslant X_{1})=1\).
We now use the mathematical induction to show
\[\mathbb{P}\left(\sum_{k=1}^{n}X^{*}_{k}\leqslant\sum_{k=1}^{n}X_{k}\right)=1,\,n\in\mathbb{N}. \tag{9}\]
Indeed, if \(\mathbb{P}\left(\sum_{k=1}^{n}X_{k}^{*}\leqslant\sum_{k=1}^{n}X_{k}\right)=1\) up to some natural \(n\), then we get that
\[\mathbb{P}\left(\sum_{k=1}^{n+1}X_{k}^{*}\leqslant\sum_{k=1}^{n+1} X_{k}\right) = \sum_{k=0,\ k\neq l}^{\infty}\mathbb{P}\left(\sum_{k=1}^{n}X_{k}^ {*}\leqslant\sum_{k=1}^{n}X_{k}\right)h_{k}\] \[+\ \mathbb{P}\left(\sum_{k=1}^{n}X_{k}^{*}\leqslant\sum_{k=1}^{n} X_{k}\right)\left(h_{l}-\frac{\varepsilon}{l}\right)\] \[+\ \mathbb{P}\left(\sum_{k=1}^{n}X_{k}^{*}\leqslant\sum_{k=1}^{n} X_{k}+l\right)\frac{\varepsilon}{l}=1.\]
For \(u\in\mathbb{N}_{0}\), the equality (9) implies that
\[\psi(u) = \mathbb{P}\left(\sup_{n\geqslant 1}\left\{\sum_{k=1}^{n}X_{k}-n \right\}>u\right)=\mathbb{P}\left(\bigcup_{n=1}^{\infty}\left\{\sum_{k=1}^{n} X_{k}>n+u\right\}\right)\] \[\geqslant \mathbb{P}\left(\bigcup_{n=1}^{\infty}\left\{\sum_{k=1}^{n}X_{k} ^{*}>n+u\right\}\right)=\mathbb{P}\left(\sup_{n\geqslant 1}\left\{\sum_{k=1}^{n}X_{k}^ {*}-n\right\}>u\right)\] \[=:\psi_{\varepsilon}^{*}(u),\]
or, equivalently,
\[\varphi(u)\leqslant\varphi_{\varepsilon}^{*}(u), \tag{10}\]
for all \(u\in\mathbb{N}_{0}\), where \(\varphi=1-\psi\) and \(\varphi_{\varepsilon}^{*}=1-\psi_{\varepsilon}^{*}\) are the model's survival probabilities.
Let \(s\in\mathbb{C}\) and \(h_{k}^{*}=\mathbb{P}(X^{*}=k)\), \(k\in\mathbb{N}_{0}\). Since \(\mathbb{E}X^{*}=1-\varepsilon<1\), Corollary 3.2 of [9] implies that the probability generating function of the survival probability \(\varphi^{*}\) satisfies the following equation
\[\varphi^{*}(0)+\varphi^{*}(1)s+\varphi^{*}(2)s^{2}+\ldots=\frac{1-\mathbb{E}X ^{*}}{G_{X^{*}}(s)-s}=\frac{\varepsilon}{G_{X^{*}}(s)-s},\,|s|<1, \tag{11}\]
where \(G_{X^{*}}(s)\) is the probability generating function of r.v. \(X^{*}\), i.e.
\[G_{X^{*}}(s)=h_{0}^{*}+h_{1}^{*}s+h_{2}^{*}s^{2}+\ldots,\,|s|\leqslant 1.\]
Inequality (10) and equation (11) imply that
\[0\leqslant\varphi(0)\leqslant\frac{\varepsilon}{h_{0}^{*}}=\frac{\varepsilon }{h_{0}+\varepsilon/l},\ \ 0\leqslant\varphi(1)\leqslant\frac{\varepsilon(1-h_{1})}{(h_{0}+\varepsilon/l)^{ 2}},\]
and, in general,
\[0\leqslant\varphi(n)\leqslant\varepsilon\cdot\frac{1}{n!}\lim_{s\to 0}\frac{d^{n}}{ ds^{n}}\left(\frac{1}{G_{X^{*}}(s)-s}\right)\]
for all \(n\in\mathbb{N}_{0}\). Since \(\varepsilon\) can be arbitrarily small, we conclude that \(\varphi(u)=0\) or, equivalently, \(\psi(u)=1\) for all \(u\in\mathbb{N}_{0}\).
It is worth mentioning that, having \(\varphi(0)=0\), the equality \(\varphi(u)=0\) for all \(u\in\mathbb{N}\) can be concluded from the following recurrence formula (see, for instance, [4, Section 6], [5], [14], [15])
\[\varphi(u)=\frac{1}{h_{0}}\left(\varphi(u-1)-\sum_{k=1}^{u}\varphi(u-k)h_{k} \right),\,u\in\mathbb{N}. \tag{12}\]
Indeed, the recurrence (12) yields that \(\varphi(u),\,u\in\mathbb{N}_{0}\), is a multiple of \(\varphi(0)=1-\mathbb{E}X\). More precisely,
\[\varphi(u)=\alpha_{u}\varphi(0),\]
with
\[\alpha_{0}=1,\,\alpha_{u}=\frac{1}{h_{0}}\left(\alpha_{u-1}-\sum_{k=1}^{u} \alpha_{u-k}h_{k}\right),\,u\in\mathbb{N}.\]
The latter expression can be verified by mathematical induction. So, the particular case with \(c=1\) and \(N=1\) in Theorem 1 is proved.
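For illustration, the recurrence (12) is straightforward to evaluate numerically; the following minimal sketch (with the hypothetical law \(h_{0}=h_{2}=1/2\)) shows how \(\varphi(0)=0\) forces all computed values \(\varphi(u)\) to vanish.

```python
# A minimal sketch of recurrence (12) for c = 1, N = 1: given phi(0) and the
# probabilities h_k = P(X = k), it returns phi(0), ..., phi(U); with phi(0) = 0
# every further value vanishes, in line with the argument above.
def survival_probabilities(phi0, h, U):
    phi = [phi0]
    for u in range(1, U + 1):
        s = sum(phi[u - k] * h.get(k, 0.0) for k in range(1, u + 1))
        phi.append((phi[u - 1] - s) / h[0])
    return phi

h = {0: 0.5, 2: 0.5}                       # hypothetical law with E X = 1
print(survival_probabilities(0.0, h, 5))   # [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```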
The general case when \(c\in\mathbb{N}\) and \(N\in\mathbb{N}\) in the discrete-time risk model (4) can be considered by the same means. Let us explain how.
Let us suppose the model (4) is generated by \(X_{1},\,X_{2},\,\dots,\,X_{N}\) periodically occurring independent non-negative and integer-valued random variables, i.e.
\(X_{i+N}\stackrel{{ d}}{{=}}X_{i}\) for all \(i\in\mathbb{N}\) and some fixed \(N\in\mathbb{N}\). In such a case we can choose any random variable from \(\{X_{1},\,X_{2},\,\dots,\,X_{N}\}\) and define the random vector \((X_{j}^{*},\,X_{j})\) such that \(\mathbb{P}(X_{j}^{*}\leqslant X_{j})=1\), where \(j\in\{1,\,2,\,\dots,\,N\}\) is some fixed number. Obviously, the random vector \((X_{j}^{*},\,X_{j})\) must be defined in a similar way as the vector \((X^{*},\,X)\) before: both random variables \(X_{j}^{*}\) and \(X_{j}\) attain the same values, the probability of some smaller value of \(X_{j}^{*}\) gets enlarged by some arbitrarily small amount, and the probability of some larger value of \(X_{j}^{*}\) gets reduced by the same amount. Note that the conditions \(\mathbb{P}(X_{j}\geqslant c)=1\) and \(\mathbb{P}(S_{N}=cN)<1\) would imply the estimate \(cN-\mathbb{E}S_{N}<0\), which is not the case under consideration. Hence, there always exists at least one value of r.v. \(X_{j}\) in the set \(\{0,\,1,\,\dots,\,c-1\}\) whose probability we can choose to enlarge when defining \(X_{j}^{*}\). Then we achieve
\[\varepsilon:=cN-\mathbb{E}S_{N}^{*}>cN-\mathbb{E}S_{N}=0,\]
where \(S_{N}^{*}=X_{1}+\dots+X_{j}^{*}+\dots+X_{N}\). By the same arguments as deriving inequality (10), we get that \(0\leqslant\varphi(0)\leqslant\varphi_{\varepsilon}^{*}(0)\), where \(\varphi_{\varepsilon}^{*}(0)\) is the ultimate time survival probability at \(u=0\) for the model in which r.v. \(X_{j}^{*}\) replaces r.v. \(X_{j}\) for some \(j\in\{1,2,\dots,N\}\). According to [8, Thm. 4] we obtain
\[\varphi_{\varepsilon}^{*}(0)=\frac{m_{0}^{*(1)}}{\mathbb{P}(S_{N}^{*}=0)} \tag{13}\]
if \(\mathbb{P}(S_{N}^{*}=0)>0\), where \(m_{0}^{*(1)}\) is the first component of the solution of the following system of linear equations
\[M_{cN\times cN}\times\begin{pmatrix}m_{0}^{*(1)}\\ m_{1}^{*(1)}\\ \ldots\\ m_{c-1}^{*(1)}\\ m_{0}^{*(2)}\\ m_{1}^{*(2)}\\ \ldots\\ m_{c-1}^{*(2)}\\ \ldots\\ m_{0}^{*(N)}\\ m_{1}^{*(N)}\\ \ldots\\ m_{c-1}^{*(N)}\end{pmatrix}\ =\begin{pmatrix}0\\ \vdots\\ 0\\ cN-\mathbb{E}S_{N}^{*}\end{pmatrix}_{cN\times 1}=\begin{pmatrix}0\\ \vdots\\ 0\\ \varepsilon\end{pmatrix}_{cN\times 1}, \tag{14}\]
where \(M_{cN\times cN}\) is a certain matrix with elements related to the roots of equation \(G_{S_{N}^{*}}(s)=s^{cN},\,|s|\leqslant 1\) (see [8, Sec. 3]). Letting \(\varepsilon\to 0^{+}\) we derive from the system (14) that \(\varphi_{\varepsilon}^{*}(0)\to 0\) because of (13). Consequently \(\varphi(0)=0\) due to the estimate \(0\leqslant\varphi(0)\leqslant\varphi_{\varepsilon}^{*}(0)\) provided for an arbitrary \(\varepsilon>0\). It should be noted that the requirement \(\mathbb{P}(S_{N}^{*}=0)>0\) for equality (13) does not reduce generality, because \(\mathbb{P}(S_{N}^{*}=0)\) can be replaced by the probability of the smallest value of \(S_{N}^{*}\) if \(\mathbb{P}(S_{N}^{*}=0)=0\), see the comments in [8, Sec. 4]. In addition, the non-singularity of the matrix \(M_{cN\times cN}\) in (14) is not known in general, see [8, Sec. 4] and [9], also [10]. On the other hand, if \(c\in\mathbb{N}\), \(N=1\) and the roots of \(G_{X^{*}}=s^{c}\) are simple, the solution of (14) admits the closed-form expression and, obviously, \(m_{0}^{*(1)}\) is the multiple of \(c-\mathbb{E}X^{*}=\varepsilon\), see [9]. In cases when the non-singularity of the matrix \(M_{cN\times cN}\) in (14) remains questionable, we can refer to [8, Thm. 3], for the different proof that \(\varphi(0)=0\) if the net profit condition is neutral \(\mathbb{E}S_{N}=cN\) and \(\mathbb{P}(S_{N}=cN)<1\).
Having \(\varphi(0)=0\), the remaining values \(\varphi(u)=0,\,u\in\mathbb{N}\) can be obtained by the recurrence relation
\[\varphi(u)=\sum_{\begin{subarray}{c}i_{1}\leqslant u+c\\ i_{1}+i_{2}\leqslant u+2c\\ \ldots\\ i_{1}+i_{2}+\ldots+i_{N}\leqslant u+cN\end{subarray}}\mathbb{P}(X_{1}=i_{1})\mathbb{P}(X_{2}=i_{2})\cdots\mathbb{P}(X_{N}=i_{N})\,\varphi\left(u+cN-\sum_{j=1}^{N}i_{j}\right),\]
see [8, eq. (5)] or by the following expression of survival probability generating function (see [8, Thm. 2])
\[\varphi_{\varepsilon}^{*}(0)+\varphi_{\varepsilon}^{*}(1)s+\varphi_{\varepsilon}^{*}(2)s^{2}+\ldots=\frac{\mathfrak{u}^{T}\mathfrak{v}}{G_{S_{N}^{*}}(s)-s^{cN}},\]
where, having in mind that some \(X_{j}\) from \(\{X_{1},\,\ldots,\,X_{n}\}\) is replaced by \(X_{j}^{*}\),
\[\mathfrak{u}=\begin{pmatrix}s^{c(N-1)}\\ s^{c(N-2)}G_{S_{1}^{*}}(s)\\ s^{c(N-3)}G_{S_{2}^{*}}(s)\\ \vdots\\ s^{c}G_{S_{N-2}^{*}}(s)\\ G_{S_{N-1}^{*}}(s)\end{pmatrix},\,\,\mathfrak{v}=\begin{pmatrix}\sum\limits_{i=0}^{c-1}m_{i}^{*(2)}\sum\limits_{k=i}^{c-1}s^{k}F_{X_{1}}(k-i)\\ \sum\limits_{i=0}^{c-1}m_{i}^{*(3)}\sum\limits_{k=i}^{c-1}s^{k}F_{X_{2}}(k-i)\\ \vdots\\ \sum\limits_{i=0}^{c-1}m_{i}^{*(j+1)}\sum\limits_{k=i}^{c-1}s^{k}F_{X_{j}^{*}}(k-i)\\ \vdots\\ \sum\limits_{i=0}^{c-1}m_{i}^{*(N)}\sum\limits_{k=i}^{c-1}s^{k}F_{X_{N-1}}(k-i)\\ \sum\limits_{i=0}^{c-1}m_{i}^{*(1)}\sum\limits_{k=i}^{c-1}s^{k}F_{X_{N}}(k-i)\end{pmatrix},\]
and
\[G_{S_{l}^{*}}(s),\,|s|\leqslant 1,\,l\in\{1,\,2,\,\ldots,\,N-1\}\]
is the probability generating function of random variable
\[S_{l}^{*}=\begin{cases}X_{1}+\ldots+X_{l}&\text{if}\,\,l<j,\\ X_{1}+\ldots+X_{j-1}+X_{j}^{*}+X_{j+1}+\ldots+X_{l}&\text{if}\,\,l\geqslant j, \end{cases}\]
\(F_{X_{l}}\) is the distribution function of \(X_{l}\) and the collection
\[\left\{m_{0}^{*(1)},\,m_{1}^{*(1)},\ldots,m_{c-1}^{*(1)},\,m_{0}^{*(2)},\,m_{1 }^{*(2)},\ldots,\,m_{c-1}^{*(2)},\ldots,m_{0}^{*(N)},\,m_{1}^{*(N)},\,\ldots, \,m_{c-1}^{*(N)}\right\}\]
satisfies the system (14), each component being a multiple of \(cN-\mathbb{E}S_{N}^{*}\).
## 4 Classical risk model
In this section we prove Theorem 2.
Proof of Theorem 2.: Since the random variable \(X\) in model (1) is non-negative and the case \(X\equiv 0\) is excluded for the considered stochastic process, we have \(\mathbb{E}X>0\) and there exists \(a>0\) such that \(\mathbb{P}(X>a)>0\). Similarly to the proof of Theorem 1, we now define the pair of dependent random variables \((X^{*},\,X)\), where for any \(\varepsilon\in(0,a)\) the random variable \(X^{*}\) is
\[X^{*}=\begin{cases}X-\varepsilon&\text{if}\,\,X>a,\\ X&\text{if}\,\,X\leqslant a.\end{cases}\]
For this new r.v.
\[\mathbb{E}X^{*} =\mathbb{E}X-\varepsilon\mathbb{P}(X>a)<\mathbb{E}X,\] \[\mathbb{P}(X^{*}\leqslant X) =\mathbb{P}(X^{*}\leqslant X,X>a)+\mathbb{P}(X^{*}\leqslant X,X \leqslant a)=1.\]
Let \((X_{j}^{*},\,X_{j})\), \(j=1,\,2,\,\ldots\) be independent copies of \((X^{*},\,X)\). Then we have:
\[\mathbb{P}\left(X_{j}^{*}\leqslant X_{j}\right)=1,\text{ for all }j \in\mathbb{N},\] \[\mathbb{P}\left(\sum_{j=1}^{n}X_{j}^{*}\leqslant\sum_{j=1}^{n}X_{ j}\right)=1,\text{ for all }n\in\mathbb{N},\] \[\mathbb{P}\left(\sum_{j=1}^{n}(X_{j}^{*}-c\theta_{j})\leqslant \sum_{j=1}^{n}(X_{j}-c\theta_{j})\right)=1,\text{ for all }n\in\mathbb{N},\] \[\mathbb{P}\left(\sup_{n\geqslant 1}\sum_{j=1}^{n}(X_{j}^{*}-c \theta_{j})\leqslant\sup_{n\geqslant 1}\sum_{j=1}^{n}(X_{j}-c\theta_{j}) \right)=1,\]
and, by similar arguments as in (10), \(\psi_{\varepsilon}^{*}(u)\leqslant\psi(u)\leqslant 1\) for all \(u\geqslant 0\). Conditions
\[\mathbb{E}X^{*}=\mathbb{E}X-\varepsilon\mathbb{P}(X>a),\,\,\lambda\mathbb{E}X/ c=1\]
and the well-known formula for \(\psi_{\varepsilon}^{*}(0)\) (see, for example, [13] or many other sources for the Pollaczek-Khinchine formula) imply that
\[\psi_{\varepsilon}^{*}(0)=\frac{\lambda\mathbb{E}X^{*}}{c}=1-\frac{\lambda \varepsilon\mathbb{P}(X>a)}{c}\leqslant\psi(0)\leqslant 1.\]
By letting \(\varepsilon\to 0^{+}\) in the last inequalities, we get \(\psi(0)=1\), or equivalently \(\varphi(0)=0\). Then, \(\psi(u)=1\) for all \(u\geqslant 0\) is implied by the same Pollaczek-Khinchine formula, observing that \(\varphi_{\varepsilon}^{*}(u)\) is a multiple of \(\varphi_{\varepsilon}^{*}(0)\). Indeed,
\[\varphi_{\varepsilon}^{*}(u) =\left(1-\frac{\lambda\mathbb{E}X^{*}}{c}\right)\left(1+\sum_{n =1}^{\infty}\left(\frac{\lambda\mathbb{E}X^{*}}{c}\right)^{n}F_{I}^{*n}(u)\right)\] \[=\varphi_{\varepsilon}^{*}(0)\left(1+\sum_{n=1}^{\infty}\left( \psi_{\varepsilon}^{*}(0)\right)^{n}F_{I}^{*n}(u)\right),\,u\geqslant 0,\]
where
\[F_{I}(u)=\frac{1}{\mathbb{E}X^{*}}\int_{0}^{u}\mathbb{P}(X^{*}>x)\,\mathrm{d}x\]
and \(F_{I}^{*n}\) denotes the \(n\)-fold convolution of \(F_{I}\). Here
\[\sum_{n=1}^{\infty}\left(\psi^{*}(0)\right)^{n}F_{I}^{*n}(u)\leqslant\sum_{n =1}^{\infty}F_{I}^{*n}(u)=\sum_{n=1}^{\infty}\mathbb{P}(\eta_{1}+\ldots+\eta_{ n}\leqslant u)<\infty,\]
because of Lemma 4, where the non-negative independent and identically distributed random variables \(\eta_{1},\,\eta_{2},\,\ldots\) are described by the distribution function \(F_{I}\).
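The conclusion of Theorem 2 can also be observed empirically. The following crude Monte Carlo sketch assumes \(c=\lambda=1\) and \(X\sim\mathrm{Exp}(1)\), so that \(\lambda\mathbb{E}X=c\), and estimates the frequency of ruin within a growing number of claims; the frequency approaches \(1\) as the horizon grows.

```python
# A crude Monte Carlo illustration of Theorem 2, assuming c = lambda = 1 and
# X ~ Exp(1), so that lambda * E X = c (the neutral net profit condition).
import numpy as np

rng = np.random.default_rng(0)

def ruined(u, n_claims):
    theta = rng.exponential(1.0, n_claims)               # inter-occurrence times
    claims = rng.exponential(1.0, n_claims)              # claim sizes
    surplus = u + np.cumsum(theta) - np.cumsum(claims)   # W at claim epochs, c = 1
    return np.any(surplus < 0)

u, paths = 2.0, 2000
for horizon in (10, 100, 1000, 10000):
    freq = np.mean([ruined(u, horizon) for _ in range(paths)])
    print(horizon, round(freq, 3))   # the ruin frequency creeps towards 1
```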
## 5 E.S. Andersen's model
In this section we prove Theorem 3.
Proof of Theorem 3.: Arguing in the same way as in the proof of Theorem 2 in Section 4, we can define the random vector \((X^{*},\,X)\), its independent copies \((X^{*}_{1},\,X_{1})\), \((X^{*}_{2},\,X_{2})\), \(\dots\) and show that \(\psi^{*}_{\varepsilon}(u)\leqslant\psi(u)\leqslant 1\) for all \(u\geqslant 0\). Let \(S^{*}_{n}=\sum_{i=1}^{n}(X^{*}_{i}-c\theta_{i})\) and \(S_{n}=\sum_{i=1}^{n}(X_{i}-c\theta_{i})\) for all \(n\in\mathbb{N}\). Then, see [6, eq. (10)],
\[\psi^{*}_{\varepsilon}(0)=1-\exp\left\{-\sum_{n=1}^{\infty}\frac{\mathbb{P}(S ^{*}_{n}>0)}{n}\right\},\]
because of the net profit condition \(\mathbb{E}X^{*}-c\mathbb{E}\theta=-\varepsilon\mathbb{P}(X>a)<0\).
It is known that, see [17, Thm. 4.1], \(\mathbb{E}(X^{*}-c\theta)<0\) implies
\[\sum_{n=1}^{\infty}\frac{\mathbb{P}(S^{*}_{n}>0)}{n}<\infty,\]
while \(\mathbb{E}(X-c\theta)=0\) implies
\[\sum_{n=1}^{\infty}\frac{\mathbb{P}(S_{n}>0)}{n}=\infty.\]
Therefore
\[\varphi(0)\leqslant\varphi^{*}_{\varepsilon}(0)\leqslant\exp\left\{-\sum_{n= 1}^{N}\frac{\mathbb{P}(S^{*}_{n}>0)}{n}\right\}\]
for any \(N\in\mathbb{N}\). By letting \(\varepsilon\to 0^{+}\) in the last inequalities, we obtain
\[\varphi(0)\leqslant\exp\left\{-\sum_{n=1}^{N}\frac{\mathbb{P}(S_{n}>0)}{n}\right\}\]
and consequently \(\varphi(0)=0\) as \(N\) can be arbitrarily large and the series
\[\sum_{n=1}^{\infty}\frac{\mathbb{P}(S_{n}>0)}{n}\]
diverges. The equality \(\psi(u)=1\) for all \(u\geqslant 0\) is implied by the fact that \(\varphi^{*}_{\varepsilon}(u)\) is the multiple of \(\varphi^{*}_{\varepsilon}(0)\). Indeed, by the Pollaczek-Khinchine formula (see [6, eq. (10)])
\[\varphi^{*}_{\varepsilon}(u) =e^{-A}\left(1+\sum_{n=1}^{\infty}\left(1-e^{-A}\right)^{n}H^{*n} (u)\right)\] \[=\varphi^{*}_{\varepsilon}(0)\left(1+\sum_{n=1}^{\infty}\left( \psi^{*}_{\varepsilon}(0)\right)^{n}H^{*n}(u)\right),\,u\geqslant 0,\]
where
\[A=\sum_{n=1}^{\infty}\frac{\mathbb{P}(S_{n}^{*}>0)}{n},\] \[H(u)=\frac{F_{+}(u)}{F_{+}(\infty)},\] \[F_{+}(u)=\mathbb{P}(S_{N^{+}}^{*}\leqslant u)\] \[N^{+}=\inf\{n\geqslant 1:S_{n}^{*}>0\},\] \[S_{n}^{*}=\sum_{i=1}^{n}\left(X_{i}^{*}-c\theta_{i}\right),\]
and \(H^{*n}\) denotes the \(n\)-fold convolution of \(H\). The proof of the considered theorem now follows from the comments at the end of the proof of Theorem 2.
|
2305.18503 | From Adversarial Arms Race to Model-centric Evaluation: Motivating a
Unified Automatic Robustness Evaluation Framework | Textual adversarial attacks can discover models' weaknesses by adding
semantic-preserved but misleading perturbations to the inputs. The long-lasting
adversarial attack-and-defense arms race in Natural Language Processing (NLP)
is algorithm-centric, providing valuable techniques for automatic robustness
evaluation. However, the existing practice of robustness evaluation may exhibit
issues of incomprehensive evaluation, impractical evaluation protocol, and
invalid adversarial samples. In this paper, we aim to set up a unified
automatic robustness evaluation framework, shifting towards model-centric
evaluation to further exploit the advantages of adversarial attacks. To address
the above challenges, we first determine robustness evaluation dimensions based
on model capabilities and specify the reasonable algorithm to generate
adversarial samples for each dimension. Then we establish the evaluation
protocol, including evaluation settings and metrics, under realistic demands.
Finally, we use the perturbation degree of adversarial samples to control the
sample validity. We implement a toolkit RobTest that realizes our automatic
robustness evaluation framework. In our experiments, we conduct a robustness
evaluation of RoBERTa models to demonstrate the effectiveness of our evaluation
framework, and further show the rationality of each component in the framework.
The code will be made public at \url{https://github.com/thunlp/RobTest}. | Yangyi Chen, Hongcheng Gao, Ganqu Cui, Lifan Yuan, Dehan Kong, Hanlu Wu, Ning Shi, Bo Yuan, Longtao Huang, Hui Xue, Zhiyuan Liu, Maosong Sun, Heng Ji | 2023-05-29T14:55:20Z | http://arxiv.org/abs/2305.18503v1 | # From Adversarial Arms Race to Model-centric Evaluation
###### Abstract
Textual adversarial attacks can discover models' weaknesses by adding semantic-preserved but misleading perturbations to the inputs. The long-lasting adversarial attack-and-defense arms race in Natural Language Processing (NLP) is algorithm-centric, providing valuable techniques for automatic robustness evaluation. However, the existing practice of robustness evaluation may exhibit issues of incomprehensive evaluation, impractical evaluation protocol, and invalid adversarial samples. In this paper, we aim to set up a unified automatic robustness evaluation framework, shifting towards model-centric evaluation to further exploit the advantages of adversarial attacks. To address the above challenges, we first determine robustness evaluation dimensions based on model capabilities and specify the reasonable algorithm to generate adversarial samples for each dimension. Then we establish the evaluation protocol, including evaluation settings and metrics, under realistic demands. Finally, we use the perturbation degree of adversarial samples to control the sample validity. We implement a toolkit **RobTest** that realizes our automatic robustness evaluation framework. In our experiments, we conduct a robustness evaluation of RoBERTa models to demonstrate the effectiveness of our evaluation framework, and further show the rationality of each component in the framework. The code will be made public at [https://github.com/thunlp/RobTest](https://github.com/thunlp/RobTest).
## 1 Introduction
Pre-trained language models (PLMs) are vulnerable to textual adversarial attacks that fool the models by adding semantic-preserved perturbations to the inputs (Zhang et al., 2020). Compared to the static evaluation benchmarks (Wang et al., 2018, 2019), attack methods can continually generate diverse adversarial samples to reveal models' weaknesses, rendering a more comprehensive and rigorous model evaluation. Previous work explores adversarial NLP in both the attack (Gao et al., 2018; Alzantot et al., 2018) and the defense (Mozes et al., 2021; Huang et al., 2019) sides, leading to a long-lasting adversarial arms race.
The arms race is algorithm-centric. It continually motivates stronger attack and defense methods to explore and fix models' weaknesses, providing useful techniques for robustness evaluation. However, existing work on model robustness evaluation naturally follows the previous evaluation practice, and doesn't fully consider the real-world needs of robustness evaluation (Zeng et al., 2021; Wang et al., 2021; Goel et al., 2021) (See Figure 1). We identify three weaknesses in previous robustness evaluation: (1) Relying on a single attack method (Zang et al., 2020) or static challenging datasets (Nie et al., 2019; Wang et al., 2021), which can only measure a limited number of aspects of models' capabilities; (2) Directly inheriting the evaluation settings and metrics in the arms race era, which may result in impractical evaluation (Zeng et al., 2021; Morris et al., 2020); (3) Designing
Figure 1: The original evaluation pipeline. The attacker is usually selected by intuition and practitioners get little information from scores.
invalid adversarial sample1 filtering rules based on certain thresholds (e.g., sentence similarity), which cannot generalize to all kinds of adversarial samples Wang et al. (2021); Zeng et al. (2021).
Footnote 1: Detailed explanation for validity is in Appendix A.
Thus, we propose to shift towards the model-centric evaluation, which should satisfy the following characteristics accordingly: (1) **Comprehensively** measuring NLP models' robustness; (2) Establishing a **reasonable** evaluation protocol considering practical scenarios; (3) Filtering out invalid adversarial samples for **reliable** robustness estimation. Given these challenges, a standard and acknowledged framework for employing adversarial attacks to automatically measure and compare NLP models' robustness is lacking (See Figure 7).
In this paper, we motivate a unified model-centric automatic robustness evaluation framework based on the foundation of the adversarial arms race. To achieve **comprehensive** evaluation, we define eight robustness dimensions from the top down, constituting a multi-dimensional robustness evaluation covering sentence-level, word-level, and char-level transformations. For each robustness dimension, we specify the concrete algorithm to generate adversarial samples. Then we set up a **reasonable** evaluation protocol by specifying evaluation settings and metrics under realistic demands. Finally, we rely on the perturbation degree to control the validity of generated adversarial samples for more **reliable** robustness evaluation. Our intuition is that adversarial samples with smaller perturbation degrees are more likely to be valid, which is justified through human annotation experiments.
We implement a toolkit **RobTest** to realize our robustness evaluation framework (See Figure 6). We highlight four core features in RobTest, including basic adversarial attack methods, robustness report generation, general user instructions, and adversarial data augmentation. In experiments, we use RobTest to measure the robustness of RoBERTa models Liu et al. (2019) to demonstrate the effectiveness of our evaluation framework in addressing the core challenges. Further, we show the rationality of each component in our robustness evaluation framework through detailed analysis.
## 2 Model-centric Robustness Evaluation
In this section, we motivate the first model-centric automatic robustness evaluation framework. We first define robustness evaluation dimensions and specify corresponding attack algorithms (Sec. 2.1). Then we discuss the evaluation protocol under realistic demands (Sec. 2.2). Finally, we provide solutions to filter out invalid adversarial samples for more reliable robustness evaluation (Sec. 2.3).
### Robustness Evaluation Dimension
Motivation.Existing research designs adversarial attacks based on observations Le et al. (2022) or intuitions Li et al. (2020) and adopts the proposed method to test the robustness of evaluated models. In this procedure, the robustness evaluation is restricted to the specific attack method without considering samples from other potential distributions. We argue that considering only one single dimension cannot comprehensively describe the models' robustness (See Sec. 4.3 for verification).
Selection Criteria.We build our model-centric robustness evaluation framework based on the foundation of adversarial NLP but aim to cover a more comprehensive set of robustness dimensions. We integrate previous adversarial attack methods in a systematic way. We focus on task-agnostic robustness dimensions2, and define them from top to down (See Table 1). The selection criteria of robustness evaluation dimensions and attack methods are: (1) **Important and practical**: Methods that can reasonably simulate common inputs from real-world users or attackers; (2) **Representative**: Methods that have been studied for a long time in the adversarial arms race stage and have many homogeneous counterparts; (3) **Diversified**: Methods that explore various aspects of model capabilities.
Footnote 2: Task-specific robustness dimensions are also essential, and we leave it for future work.
Note that we don't consider the "imperceptible perturbations" requirement in the selection of robustness dimensions, although previous work repeatedly emphasizes this requirement Goodfellow et al. (2014); Ren et al. (2019); Zang et al. (2020). We give our justification in Appendix B.
Dimensions.We start from a high-level categorization, considering char-level, word-level, and sentence-level transformations, differing in the perturbation granularity (See Table 1). **Char-level** transformations add perturbations to characters in the word units. We include the following dimensions in our framework: (1) **Typo**Li et al. (2018); Eger and Benz (2020) considers five basic operations to add typos in the inputs, including randomly
delete, insert, replace, swap, or repeat one character; (2) **Glyph**Li et al. (2018); Eger et al. (2019) replaces characters with visually-similar ones; (3) **Phonetic**Le et al. (2022) replaces characters but makes the whole word sound similar to the origin. **Word-level** transformations modify word units as a whole. We include the following dimensions in our framework: (1) **Synonym**Ren et al. (2019); Zang et al. (2020) replaces words with their synonymous substitutes according to external knowledge sources. We consider WordNet Miller (1995) and HowNet Dong and Dong (2003) in our implementation; (2) **Contextual**Li et al. (2020); Garg and Ramakrishnan (2020) replaces words with their context-similar substitutes, which are generated by masked language models; (3) **Inflection**Tan et al. (2020) perturbs the inflectional morphology of words. **Sentence-level** transformations generate adversarial samples directly from the entire original sentences. We include the following dimensions in our framework: (1) **Syntax**Iyyer et al. (2018); Huang and Chang (2021); Sun et al. (2021) transforms the syntactic patterns of original samples; (2) **Distraction**Naik et al. (2018); Ribeiro et al. (2020); Chen et al. (2022) appends some irrelevant contents to the end of sentences.
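To make the char-level Typo dimension concrete, a minimal sketch of its five basic operations might look as follows (an illustration only, not the RobTest implementation).

```python
# A minimal sketch of the five basic Typo operations (randomly delete, insert,
# replace, swap, or repeat one character); an illustration only, not the
# RobTest implementation.
import random
import string

def typo(word, rng=random):
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    op = rng.choice(["delete", "insert", "replace", "swap", "repeat"])
    c = rng.choice(string.ascii_lowercase)
    if op == "delete":
        return word[:i] + word[i + 1:]
    if op == "insert":
        return word[:i] + c + word[i:]
    if op == "replace":
        return word[:i] + c + word[i + 1:]
    if op == "swap":
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    return word[:i] + word[i] + word[i:]    # repeat the i-th character

print(typo("playful", random.Random(0)))
```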
Malicious & General Tags.For each robustness dimension, we also attach the general or malicious tag to characterize the intended simulated agents. The general (malicious) tag indicates that the generated samples mainly come from benign users (malicious attackers). For example, Synonym and Distraction are representative types of general and malicious dimensions respectively. Note that we attach both tags to three char-level transformations since both benign users and malicious attackers can produce these kinds of samples.
### Evaluation Protocol
Motivation.Previous work in adversarial NLP naturally follows the early attempts Szegedy et al. (2013); Goodfellow et al. (2014); Liang et al. (2017); Gao et al. (2018) to establish the evaluation protocol. However, Chen et al. (2022) categorize and summarize four different roles of textual adversarial samples, and argue for a different evaluating protocol for each role. In our framework, we reconsider the robustness evaluation protocol when employing adversarial attack methods for model evaluation. We first describe the evaluation setting, and then the evaluation metrics in our framework.
Evaluation Setting (available information from evaluated models).Most existing attack methods assume access to confidence scores only Alzantot et al. (2018); Ren et al. (2019); Zang et al. (2020); Li et al. (2020); Chen et al. (2021). We acknowledge the rationality of this assumption since models may become very large nowadays Radford et al. (2019); Brown et al. (2020), resulting in inefficient evaluation if gradient information is also required for adversarial sample generation Goodfellow et al. (2014). However, in practice, we as practitioners mostly have full access to the evaluated models, including the parameters and gradient information, for better robustness evaluation.
Thus, we implement three evaluation settings in our framework, assuming different available information from evaluated models. The settings include rule-based, score-based, and gradient-based attacks. Rule-based attacks don't assume any information from the evaluated models and generate adversarial samples based on pre-defined rules. Score-based and gradient-based attacks assume access to the confidence scores and gradients information respectively from evaluated models for more rigorous evaluation. They first compute the saliency maps that give the importance scores to each word for samples and then perform selective perturbations based on the scores. Specifically, for score-based attacks, we employ the difference in confidence scores when iteratively masking each word as the
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Granularity & Dimension & General? & Malicious? & Case \\ \hline \multirow{2}{*}{Char-level} & Typo & Yes & Yes & I watch a smart, sweet a dish playful romantic comedy. \\ & Glyph & Yes & Yes & I watch a smart, sweet and playful romantic comedy. \\ & Phonetic & Yes & Yes & I watch a smart, sweet and playful romantic comedy. \\ \hline \multirow{3}{*}{Word-level} & Synonym & Yes & No & I watch a smart, sweet and nashyf romantic comedy. \\ & Contextual & Yes & No & We watch a smart, sweet and playful romantic teleplay. \\ & Inflection & Yes & No & I watched a smart, sweet and playful romantic comedies. \\ \hline \multirow{2}{*}{Sentence-level} & Syntax & Yes & No & In my eyes will be a witry, sweet romantic comedy. \\ & Distraction & No & Yes & I watch a smart, sweet and playful romantic comedy. True is not False. \\ \hline \hline \end{tabular}
\end{table}
Table 1: The robustness dimensions included in our framework. We also attach general and malicious robustness tags to each dimension. The original sentence is “I watch a smart, sweet and playful romantic comedy.”
importance score for that word. For gradient-based attacks, we employ integrated gradient (IG) (Sundararajan et al., 2017) to compute the saliency map. IG computes the average gradient along the linear path of varying the input from a baseline value to itself. Besides, we use greedy search since it can achieve satisfying performance within a reasonable time (Yoo et al., 2020).
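A minimal sketch of the score-based saliency computation is given below; `predict_proba` is a hypothetical stand-in for the evaluated classifier, and the greedy attack then perturbs words in decreasing order of importance.

```python
# A minimal sketch of the score-based saliency map: the importance of a word is
# the drop in the gold-label confidence when that word is masked.  Here
# `predict_proba` is a hypothetical stand-in for the evaluated classifier,
# returning a label -> probability mapping for a text.
def word_importance(words, label, predict_proba, mask="[MASK]"):
    base = predict_proba(" ".join(words))[label]
    scores = []
    for i in range(len(words)):
        masked = words[:i] + [mask] + words[i + 1:]
        scores.append(base - predict_proba(" ".join(masked))[label])
    return scores

def greedy_order(words, label, predict_proba):
    # greedy search perturbs words in decreasing order of importance
    scores = word_importance(words, label, predict_proba)
    return sorted(range(len(words)), key=lambda i: -scores[i])
```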
Evaluation Metrics.Most previous work considers the "is robust" problem (Li et al., 2020, 2021; Chen et al., 2021). They generate adversarial samples for each original sample and test if at least one of them can successfully attack the evaluated models. Then the final score is computed as the percentage of samples that are not attacked successfully. This is the **worst performance estimation**, requiring models to be robust to all potential adversarial samples in order to score. In our framework, we introduce the **average performance estimation** for a more comprehensive robustness evaluation. Specifically, for each original sample, we compute the percentage of cases that models can correctly classify among all potential adversarial samples. Then we average over all original samples to get the average performance estimation score.
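The two metrics can be sketched as follows, assuming a hypothetical data layout in which `results` maps each original sample to the correctness flags obtained on its adversarial samples.

```python
# A minimal sketch of the two metrics, assuming `results` maps each original
# sample to the list of correctness flags (True/False) that the evaluated model
# obtains on its adversarial samples (a hypothetical data layout).
def worst_performance(results):
    # a sample scores only if every adversarial variant is classified correctly
    return sum(all(flags) for flags in results.values()) / len(results)

def average_performance(results):
    # fraction of correctly classified variants, averaged over original samples
    return sum(sum(flags) / len(flags) for flags in results.values()) / len(results)

results = {"s1": [True, True, False], "s2": [True, True, True]}  # toy input
print(worst_performance(results), average_performance(results))  # 0.5 0.833...
```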
### Reliable Robustness Evaluation
Motivation.Previous work chases a higher attack success rate, while the validity of adversarial samples may be sacrificed3. The consequence of this practice is unreliable and inaccurate robustness evaluation. We showcase adversarial samples crafted by three popular methods on SST-2 (Socher et al., 2013) in Table 2. While all samples successfully flip the predictive label, they are not good choices for robustness evaluation because the ground truth label is changed (e.g., BERT-Attack) or the meaning of the original sentence is changed (e.g., GA, Textbugger). Morris et al. (2020); Wang et al. (2021); Hauser et al. (2021) show that there are many such invalid cases in adversarial samples that successfully mislead models' predictions. We further conduct a human evaluation to support this conclusion. We hire annotators to evaluate the validity of adversarial samples for three representative attack methods, namely contextual-based (Li et al., 2020), synonym-based (Zang et al., 2020), and typo-based attacks (Karpukhin et al., 2019). The results show that on average only **25.5%**, **20.0%**, and **31.5%** of the generated samples are valid. Thus, if original adversarial samples are directly employed for robustness evaluation, the results are unreliable and don't convey much useful information to practitioners.
Footnote 3: We give a detailed explanation for adversarial samples validity in Appendix A.
Potential Solutions.For reliable robustness evaluation, we need to consider how to ensure the validity of constructed adversarial samples. We can approach this problem in two different ways: (1) Verifying generated adversarial samples; (2) Incorporating the validity criterion into robustness evaluation. All existing work focuses on verification. For example, in the implementation of OpenAttack (Zeng et al., 2021) and TextFlint (Wang et al., 2021), an embedding similarity threshold is set for filtering adversarial samples. However, we argue that **a unified sample selection standard that ignores the specific traits of the attack method cannot perform effective filtering.** For example, consider the adversarial sample crafted by adding typos: "I love the way that it took chanes and really asks you to takke these great leaps of faith and pays off." This sample may be filtered out by the similarity or perplexity threshold due to its unnatural expression. However, it faithfully simulates input from real-world users and retains the original meaning, and thus should be considered in the evaluation.
Our Method.In our framework, we consider incorporating the validity criterion into robustness evaluation. We hold a basic intuition that there is an inverse correlation between the perturbation degree and the validity of adversarial samples. Thus, we rely on the perturbation degree to measure the adversarial sample validity. Note that the perturbation degree is defined according to the concrete transformation level4. We justify our intuition and demonstrate the superiority of this filtering strategy compared to previous heuristic rules (e.g., grammar
\begin{table}
\begin{tabular}{l l} \hline \hline Original & I love the way that it took chances and really asks you to take these great leaps of faith and pays off. \\ \hline BERT-Attack (Li et al., 2020) & I hate the way that it took chances and jesus asking you to take these grand leaps of faith and pays off. \\ \hline GA (Alzantot et al., 2018) & I screw the way that it read chances and really asks you to remove these great leaps of faith and pays off. \\ \hline Textbugger (Li et al., 2018) & I lve the way that it took chances and really asks you to take these great leaps of faith and pays off. \\ \hline \hline \end{tabular}
\end{table}
Table 2: Cases of invalid adversarial samples crafted by three popular attack methods. The original label is positive.
error, sentence similarity, perplexity) in Sec. 4.3.
**We propose to measure models' robustness under the specific attack method in various perturbation degrees and compute a robustness score for each degree.** The robustness score is the model's worst performance estimation or average performance estimation. We put more emphasis on the robustness scores computed at lower perturbation degrees5 and employ the exponentially weighted moving average Hunter (1986) to compute the final score for each robustness dimension. Formally, we use \(\theta_{1},\theta_{2},...,\theta_{n}\) to denote robustness scores computed at \(n\) perturbation degrees from high to low. Set \(\mathcal{V}_{1}=\theta_{1}\). To compute the **final robustness score**\(\mathcal{V}_{n}\):
Footnote 5: Note that the perturbation degree computation methods are different for different dimensions (See Appendix C).
\[\mathcal{V}_{t}=\beta*\mathcal{V}_{t-1}+(1-\beta)*\theta_{t},\quad t=2,...,n, \tag{1}\]
where \(\beta\) controls the weights on scores computed at different degrees. Empirically, it should be chosen depending on the risk level of the considered task, and smaller \(\beta\) will more emphasize the importance of evaluation on high-perturbed samples, which is essential for high-stake applications. In our framework, we set \(\beta\)=0.5 for demonstration.
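A minimal sketch of this final-score computation (our own illustration with made-up scores; the function name is ours, not the toolkit's):

```python
def final_robustness_score(theta, beta=0.5):
    """Final robustness score of Eq. (1).
    `theta` lists the robustness scores from the highest to the lowest perturbation degree;
    at each step the accumulated value gets weight beta and the new score gets weight 1 - beta."""
    v = theta[0]
    for t in theta[1:]:
        v = beta * v + (1 - beta) * t
    return v

# e.g. scores measured at perturbation degrees 0.30, 0.15, 0.05 (high -> low)
print(final_robustness_score([0.42, 0.70, 0.88]))  # approximately 0.72
```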
## 3 RobTest
We implement an automatic robustness evaluation toolkit named **RobTest** to realize our proposed framework. We highlight four features of RobTest.
Basic Adversarial Attack Methods.We implement eight attack methods, corresponding to eight robustness evaluation dimensions in our framework. We also include three attack types that assume different information available from evaluated models, namely rule-based, score-based, and gradient-based attacks. RobTest allows practitioners to customize evaluated models and datasets and design new attack methods to test specified robustness dimensions. Also, it supports the multi-process running of adversarial attacks for efficiency.
Robustness Report.RobTest provides comprehensive robustness reports for evaluated models. See Figure 2 and Appendix G for examples of single-model robustness reports. See Figure 3 and Appendix H for examples of the robustness comparison of the two models. We further discuss the details of robustness reports in Sec. 4.
General Instructions.Existing toolkits that implement various attack methods don't provide detailed guidance on how to conduct robustness evaluation Morris et al. (2020); Zeng et al. (2021); Wang et al. (2021). In RobTest, we provide general instructions for practitioners. Two kinds of instructions are included: (1) How to select appropriate robustness dimensions to evaluate, and which accessibility (e.g., score-based) should be considered. We introduce detailed descriptions of all robustness dimensions in RobTest, including the real-world distributions they consider; (2) How to understand the robustness report. We give detailed explanations for the figures and tables in the report.
Data Augmentation.Practitioners may identify several weak robustness dimensions of evaluated models. RobTest supports generating adversarial samples under the specified perturbation degree for data augmentation to improve the robustness.
## 4 Experiment
We conduct experiments to demonstrate the effectiveness of our automatic robustness evaluation framework using RobTest. We aim to show how our framework fulfills the characteristics of model-centric robustness evaluation6.
Footnote 6: We leave the detailed evaluation and analysis of various model architectures and robustness-enhanced algorithms for future work.
### Experimental Setting
Dataset and Evaluated Model.In our experiments, we choose the general, common, and application-driven tasks that our task-agnostic robustness dimensions can be applied to7. We consider sentiment analysis, news classification, and hate-speech detection tasks. We choose SST-2 Socher et al. (2013), AG's News Zhang et al. (2015), and Jigsaw8 as evaluation datasets. We choose RoBERTa-base and RoBERTa-large Liu et al. (2019) as evaluated models.
Footnote 7: Task-specific robustness dimensions can be designed for certain tasks, e.g., name entity robustness for reading comprehension Yan et al. (2021). We leave it for future work.
Evaluation Setting.For each dataset, we sample 1,000 samples from the test set for experiments and generate at least 100 testing cases for each sample under each perturbation degree. In pilot experiments, we found no advantage of employing gradient information to generate saliency maps, and
thus we only consider rule-based and score-based accessibility in experiments. Further research is needed for more effective utilization of gradients.
### Robustness Evaluation
We consider two kinds of robustness evaluation: (1) Robustness evaluation of a given model; (2) Robustness comparison of two models. This can be easily extended to three or more models included.
Single-model Robustness Evaluation.We generate robustness evaluation reports for given evaluated models. Figure 2 shows an example of one single page of the robustness report of RoBERTa-base on SST-2, considering the Typo (Malicious) dimension. Full reports for all datasets and models are in Appendix G. For each dimension, we show the robustness score computed at each robustness level considering two evaluation settings and two metrics, in both figures and the table. We can observe that on average, the model can tolerate inputs with very small perturbation degrees (e.g., 0.05), but its performance degrades significantly in the worst performance estimation. This indicates that the model will be misled if malicious attackers try a little longer, even in small perturbation degrees. The final robustness scores for this dimension are derived by averaging over all robustness scores using Eq. 1, which will serve as overall estimations of the model's robustness in this dimension considering the validity criterion. Also, we adopt the radar map to record the final robustness scores for all robustness dimensions, from which we can easily observe which dimension models fail. For example, we can observe from the radar map in Figure 2 that RoBERTa-base fails frequently when users use various syntactic structures in their expressions or char-level transformations have been adopted for malicious attacks. The implications are: (1) Practitioners should improve the model's capacity to capture syntax patterns or have extra mechanisms to deal with inputs with complex syntactic structures; (2) Practitioners should avoid deploying the model on security-related applications (e.g., hate-speech detection) to prevent hidden dangers.
Robustness Comparison.We can also generate reports to compare the two models' robustness. Figure 3 shows the core part of the report that compares the robustness of RoBERTa-base and RoBERTa-large considering all dimensions on SST-2. We also employ radar maps to clearly show the robustness gap between the two models. The full report is in Appendix H for demonstration.
Figure 3: Radar map to compare the robustness of RoBERTa-base and -large considering all dimensions on SST-2. We use Base- and Large- to denote two models, and other denotations are the same as Figure 2.
Figure 2: Example of one single page of the robustness report of RoBERTa-base on SST-2, regarding the Typo (Malicious) dimension. The full report is shown in Figure 10. We use Rule- and Score- to denote two evaluation settings, and use -Average and -Worst to denote two metrics.
We observe that RoBERTa-large consistently shows better robustness in all dimensions compared to RoBERTa-base. This can be attributed to two potential factors: a) Larger models can generalize better beyond simple patterns (e.g., spurious correlations) in the in-distribution training dataset, and are thus more robust to distribution shifts Tu et al. (2020); b) Given the strong correlation between in-distribution and out-of-distribution performance Miller et al. (2021), the robustness of larger models can be partially explained by better performance on in-distribution data. The quantification of these two factors is left for future work since the experiments in this paper are mainly for demonstration purposes.
### Analysis of Framework Components
In this section, we analyze and prove the rationality of each component in our framework, including eight robustness dimensions, evaluation protocol, and our method to tackle the validity of adversarial samples. For better demonstrations, we aggregate the results of eight dimensions considering two model sizes, two evaluation settings, and two metrics. The results on SST-2 are in Figure 4. The results on AG's News and Jigsaw are in Appendix E.
Robustness Dimensions.We observe that models exhibit different capacities across all robustness dimensions, evidenced by substantially different robustness scores. This indicates the insufficiency in previous practice that adopts one single attack method to evaluate models' robustness. For example, only showing models' robustness to morphology inflection doesn't guarantee the same robustness transfer to inputs containing typos. Thus, a multi-dimensional robustness evaluation in our framework is needed to reveal models' vulnerability in various circumstances, ensuring a more comprehensive evaluation of model capacities.
Evaluation Protocol.Our evaluation protocol includes two evaluation metrics (average and worst performance estimation) and two evaluation settings (rule-based and score-based). We show that the average performance estimation is complementary to the worst performance estimation, showing the models' average success rates on the corresponding robustness dimension. Thus, it can better reflect models' capacities, since most attack methods can reduce models' worst performance estimation to near zero at high perturbation degrees, making it hard to compare different models.
Also, score-based and rule-based attacks consider different evaluation settings. The score-based attacks are more effective than rule-based attacks considering average performance estimation. But the opposite is true considering worst performance
Figure 4: Comprehensive results of RoBERTa-base (Base) and RoBERTa-large (Large) on SST-2. We consider rule-based (Rule) and score-based (Score) attacks, and worst (Worst) and average (Average) performance estimation.
estimation, probably because score-based attacks only perturb certain important words, limiting the search space. Thus, incorporating these two evaluation settings is essential in robustness evaluation.
Invalid Adversarial Samples Filtering.We observe that robustness scores drop along with the increase in the perturbation degrees across different models, datasets, and attack methods. However, as we argue, the robustness scores in higher perturbation degrees underestimate models' robustness since many successful but invalid adversarial samples exist. Thus, directly looking into the robustness curves without considering the influence of perturbation degrees on validity is unreliable.
We justify our solution of incorporating the validity criterion into the robustness estimation process. The basic intuition is that adversarial samples with higher perturbation degrees are more likely to become invalid. We conduct human annotation to verify it (See Table 3). The annotation details are in Appendix D. We can observe that (1) attack methods have a large impact on sample validity, and (2) our intuition is justifiable since mostly a larger perturbation degree substantially harms the validity.
Also, we compare with previous heuristic filtering rules based on grammar errors (Grammar) (Zang et al., 2020; Chen et al., 2021), sentence similarity (USE) (Li et al., 2020; Morris et al., 2020; Wang et al., 2021; Zeng et al., 2021), and perplexity (Perplexity) (Qi et al., 2021). We compute predictive validity scores for each adversarial sample based on the filtering rules (e.g., the perplexity rule will assign low validity scores to samples with high perplexity). For each filtering rule, we divide generated adversarial samples into five validity levels based on their validity scores and compute the average human annotated validity score of samples in five levels respectively (See Figure 5). Our method based on the perturbation degree better aligns with the ideal trend, while previous filtering methods show inconsistent trends and cannot effectively distinguish invalid cases.
## 5 Related Work
Standard evaluation benchmarks (Wang et al., 2018, 2019) follow the Independently Identical Distribution hypothesis that assumes the training and testing data come from the same distribution. However, there is no such guarantee in practice, motivating the requirement to evaluate models' robustness beyond the standard accuracy. Various approaches have been proposed to simulate distribution shifts to construct static robustness evaluation benchmarks, including stress test (Naik et al., 2018), identifying and utilizing spurious correlations (McCoy et al., 2019; Zhang et al., 2019), and domain shifts construction (Hendrycks et al., 2020; Yang et al., 2022). Also, adversarial samples have been involved in robustness benchmarks, including machine-generated (Wang et al., 2021) or human-in-the-loop generated (Wallace et al., 2019, 2021; Kiela et al., 2021) samples.
Compared to static benchmarks, we advocate employing automatic attack methods to evaluate models' robustness dynamically, which is more comprehensive and rigorous. Our work is built upon the long-lasting attack-and-defense arms race in adversarial NLP (Wang et al., 2019; Zhang et al., 2020), mainly absorbing various attack methods. The attack methods can be roughly categorized into char-level, word-level, and sentence-level attacks, corresponding to the hierarchy in our framework. Char-level attacks perturb the text at the finest granularity, including deleting, inserting, replacing, swapping, and repeating characters (Karpukhin et al., 2019; Gao et al., 2018). Word-level attacks search for an optimal solution for word substitutions, using external knowledge bases (Ren et al., 2019; Zang et al., 2020) or contextual information (Li et al., 2020; Garg and Ramakrishnan, 2020; Yuan et al., 2021). Sentence-level attacks transform the text considering syntactic patterns (Iyyer et al., 2018), text styles (Qi et al., 2021), and domains (Wang et al., 2020).
## 6 Conclusion
We present a unified framework, providing solutions to three core challenges in automatic robustness evaluation. We give a further discussion about robustness evaluation in Appendix F. In the future,
Figure 5: Results of the validity prediction. An ideal prediction should ensure the annotation validity score is proportional to the predicted validity level.
we will selectively include more robustness dimensions in our framework.
## Limitation
Although we explore diverse robustness dimensions, there are more possible dimensions to cover, and we highly encourage future researchers to extend our paradigm for more comprehensive robustness evaluations. Moreover, our sample selection strategy is based on the perturbation degree. While effective, this strategy is an approximate, suboptimal solution to the problem. We leave finding better selection strategies as future work.
## Ethical Consideration
In this section, we discuss the intended use and energy saving considered in our paper.
Intended Use.In this paper, we look beyond the textual attack-and-defense arms race and highlight the role of adversarial attacks in robustness evaluation. We design a systematic robustness evaluation paradigm to employ adversarial attacks for robustness evaluation. We first summarize deficiencies in current works that limit the further use of adversarial attacks in practical scenarios. Then we propose a standardized paradigm to evaluate the robustness of models using adversarial attacks. We also develop an extensible toolkit to instantiate our paradigm.
Energy Saving.We describe our experimental details to prevent other researchers from unnecessary hyper-parameter adjustments and to help them quickly reproduce our results. We will also release all models we use in our experiments.
## Acknowledgements
This work is supported by the National Key R&D Program of China (No. 2020AAA0106502), the Major Project of the National Social Science Foundation of China (No. 22&ZD298), and the Institute Guo Qiang at Tsinghua University.
Yangyi Chen and Ganqu Cui made the original research proposal and wrote the paper. Hongcheng Gao conducted experiments and helped to organize the paper. Lifan Yuan initiated the codebase and contributed to the proposal. Everyone else participated in the discussion, experiments, and paper writing of this study.
|
2305.01052 | Activity-driven phase transition causes coherent flows of chromatin | We discover a new type of nonequilibrium phase transition in a model of
chromatin dynamics, which accounts for the coherent motions that have been
observed in experiment. The coherent motion is due to the long-range
cooperation of molecular motors tethered to chromatin. Cooperation occurs if
each motor acts simultaneously on the polymer and the surrounding solvent,
exerting on them equal and opposite forces. This drives the flow of solvent
past the polymer, which in turn affects the orientation of nearby motors and,
if the drive is strong enough, an active polar (``ferromagnetic'') phase of
motors can spontaneously form. Depending on boundary conditions, either
transverse flows, or sustained longitudinal oscillations and waves are
possible. Predicted time and length scales are consistent with experiments. We
now have in hand a coarse-grained description of chromatin dynamics which
reproduces the directed coherent flows of chromatin seen in experiments. This
field-theoretic description can be analytically coupled to other features of
the nuclear environment such as fluctuating or porous boundaries, local
heterogeneities in the distribution of chromatin or its activity, leading to
insights on the effects of activity on the cell nucleus and its contents. | Iraj Eshghi, Alexanda Zidovska, Alexander Y. Grosberg | 2023-05-01T19:32:11Z | http://arxiv.org/abs/2305.01052v2 | # Activity-driven phase transition causes coherent flows of chromatin
###### Abstract
We discover a new type of nonequilibrium phase transition in a model of chromatin dynamics, which accounts for the coherent motions that have been observed in experiment. The coherent motion is due to the long-range cooperation of molecular motors tethered to chromatin. Cooperation occurs if each motor acts simultaneously on the polymer and the surrounding solvent, exerting on them equal and opposite forces. This drives the flow of solvent past the polymer, which in turn affects the orientation of nearby motors and, if the drive is strong enough, an active polar ("ferromagnetic") phase of motors can spontaneously form. Depending on boundary conditions, either transverse flows, or sustained longitudinal oscillations and waves are possible. Predicted time and length scales are consistent with experiments. We now have in hand a coarse-grained description of chromatin dynamics which reproduces the directed coherent flows of chromatin seen in experiments. This field-theoretic description can be analytically coupled to other features of the nuclear environment such as fluctuating or porous boundaries, local heterogeneities in the distribution of chromatin or its activity, leading to insights on the effects of activity on the cell nucleus and its contents.
_Introduction -_ Chromatin is the functional form of DNA in living cells, with a variety of active processes such as transcription, replication and DNA repair, taking place directly on the chromatin fiber [1; 2; 3]. Active forces from these processes affect the organization and dynamics of chromatin [4; 5; 6]. Through Displacement Correlation Spectroscopy (DCS), chromatin motions were simultaneously mapped across the entire nucleus in live cells, revealing that chromatin exhibits fast uncorrelated motions at short times (\(<1\) s) and slow correlated motions at longer times [7]. The correlated chromatin motions are coherent over 3-5 \(\mu\)m for several seconds, before the coherent domains break up and new ones form, resembling an oscillatory-like behavior [7]. Furthermore, while the uncorrelated motions were shown to be thermal-like, the coherent chromatin flows were eliminated upon ATP depletion or inhibition of major nuclear enzymes such as RNA polymerase II, DNA polymerase and topoisomerase II, demonstrating active, energy-dissipating and nonequilibrium nature of the coherent chromatin flows [7; 8; 9].
From the active matter perspective, hydrodynamics of systems with activity was the subject of many studies, as reviewed in [10]. Depending on the role of solvent and the symmetry of the order parameter [11; 12], active hydrodynamics exhibit phenomena ranging from coherent instabilities [13; 14], to nematic or polar order [15; 16], to treadmilling [17; 18]. In many works, e.g., on active nematics, the idea is that nematic order is formed as in the usual passive system, due to interactions between, say, elongated molecules, and then activity drives spectacularly interesting dynamics (see [19]).
In the context of chromatin, molecular motors driving active dynamics, such as RNA polymerases, do not appear to be close enough to form a long-range order due to direct contact with each other [20]. At the same time, hydrodynamic treatment of chromatin finds that coherent chromatin dynamics can be sustained only in the presence of the ordered orientations of force dipoles [8]. In alternative hydrodynamics-free approaches, computationally reproducing coherent chromatin motions required the use of artificial long-range interactions [21; 22; 23]. An important hint came from hydrodynamic simulations work, where large-scale coherent chromatin dynamics as well as strong nematic order of chromatin fiber was observed, without inserting any artificial long-range forces [9]. Instead, this model relies on the non-specific effects of hydrodynamics to mediate such interactions. In our earlier study, we identified motors, which exert equal but opposite forces on the polymer and solvent, as responsible for the large-scale hydrodynamic flows in the chromatin-nucleoplasm two-fluid system [24]. Here, we aim to develop a coarse-grained hydrodynamic model, which reproduces the development of the coherent chromatin phase. We hypothesize that there can be an ordering phase transition when the force of the motors exceeds a threshold value. We seek to analyze which properties of the chromatin-nucleoplasm system govern this phase transition as well as the structure of ordered phase.
_The model and equations of motion: linear response -_ Following earlier work [8; 24] we describe chromatin using the two-fluid model originally by Doi and Onuki [25]. The dynamics of the system in this model is described by the fields of polymer velocity \(\mathbf{v}^{\rm p}(\mathbf{r},t)\), polymer volume fraction \(\phi(\mathbf{r},t)\), and the solvent velocity \(\mathbf{v}^{\rm s}(\mathbf{r},t)\), while the solvent volume fraction is \(1-\phi(\mathbf{r},t)\) because of overall incompressibility. To describe the onset of spontaneous symmetry breaking and formation of polar ordered domains, we start with the assumption of linear and local rheological response of the polymer. This implies that the velocities are small, as are the deviations from the average density, \(\phi(\mathbf{r},t)=\phi_{0}+\delta\phi(\mathbf{r},t)\). This implies further that polymer osmotic pressure is \(\Pi\simeq K\delta\phi(\mathbf{r},t)\), with
osmotic modulus \(K\), while the force resulting from polymer viscous stress is \(\eta^{\rm p}\star\nabla^{2}\mathbf{v}^{\rm p}(\mathbf{r},t)\), where polymer viscosity may have some time memory kernel and \(\star\) means convolution (see below about neglect of extensional viscosity and terms \(\sim\nabla(\nabla\cdot\mathbf{v}^{\rm p})\)). In this approximation, equations of motion of the model are conveniently written in the Fourier-transformed frequency domain (with sign convention \(\partial/\partial t\rightarrow-i\omega\)) as follows:
\[\zeta(\mathbf{v}_{\omega}^{\rm p}-\mathbf{v}_{\omega}^{\rm s}) =\eta_{\omega}^{\rm p}\nabla^{2}\mathbf{v}_{\omega}^{\rm p}-K \nabla\delta\phi_{\omega}-\phi_{0}\nabla P_{\omega}+\mathbf{F}_{\omega}^{\rm p} \tag{1a}\] \[\zeta(\mathbf{v}_{\omega}^{\rm s}-\mathbf{v}_{\omega}^{\rm p}) =\eta^{\rm s}\nabla^{2}\mathbf{v}_{\omega}^{\rm s}-(1-\phi_{0}) \nabla P_{\omega}+\mathbf{F}_{\omega}^{\rm s}\] (1b) \[i\omega\delta\phi_{\omega} =\phi_{0}\nabla\cdot\mathbf{v}_{\omega}^{\rm p}=-(1-\phi_{0}) \nabla\cdot\mathbf{v}_{\omega}^{\rm s} \tag{1c}\]
The first two equations represent force balance conditions for polymer and solvent respectively, while the last two are continuity conditions for these two components. Here \(\zeta\) is the friction coefficient of polymer against solvent, per unit volume, \(\eta^{\rm s}\) is the viscosity of the solvent, \(P_{\omega}\) is the hydrostatic pressure.
The heart of the problem is the understanding of active force densities \(\mathbf{F}^{\rm p}\) and \(\mathbf{F}^{\rm s}\) generated by active motors. Typical size of every motor, which we denote \(a\), is on the order of or smaller than the mesh size \(\lambda\). As explained above, we focus on motors exerting equal and opposite forces on polymer and on solvent, which to the first approximation, means \(\mathbf{F}^{\rm p}=-\mathbf{F}^{\rm s}=f\rho\mathbf{m}(\mathbf{r},t)\), where \(\rho\) is the number density of motors, while \(\mathbf{m}(\mathbf{r},t)=\langle\hat{\mathbf{n}}\rangle\) is the average orientation. With \(f>0\), this describes extensile force dipoles, contractile ones correspond to \(f<0\). Remaining within linear response, we assume \(|\mathbf{m}|\) small and neglect the change of motor density associated with changing polymer density \(\delta\phi\). Note that every motor has, generally, some finite processivity, stemming from its on- and off-rates; density \(\rho\) includes only those motors that are simultaneously working. The geometry of the source is illustrated in Fig. 1A.
Since the body of the force-exerting motor is tethered to the polymer at one end and experiences friction from the solvent, there must be a torque acting on the motor and proportional to the relative velocity \(\mathbf{v}^{\rm p}-\mathbf{v}^{\rm s}=\mathbf{w}\), leading to the following dynamics of the \(\mathbf{m}\) field (see Appendix, Section A for detailed derivation):
\[-i\omega\mathbf{m}_{\omega}=\frac{2}{3a}\mathbf{w}_{\omega}-2\frac{T}{\gamma} \mathbf{m}_{\omega}\, \tag{2}\]
where \(\gamma\) is the rotational drag coefficient for the motor.
Apart from nonlinearities (considered below), we neglect in Eq. (2) coupling of motor orientation to polymer concentration gradient (because motor size is smaller than or comparable to polymer mesh); don't consider renormalization of active force due to the flow itself (because \(f\) is large enough); ignore the possibility of the induced nematicity of the polymer and corresponding active stress. Our theory is in some ways similar to that of Adar and Joanny [16], as they also examine coupling between flow and polarization in a two fluid model, but they focus on the regime of strong polarization which can only rotate in response to the flow, while we concentrate on the chromatin-relevant opposite regime of weak polarization which only arises due to the flow.
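As a quick consistency check, setting \(-i\omega\to 0\) in Eq. (2) gives the steady-state polarization sustained by a given relative flow, \[0=\frac{2}{3a}\mathbf{w}-2\frac{T}{\gamma}\mathbf{m}\quad\Longrightarrow\quad\mathbf{m}=\frac{\gamma}{3aT}\,\mathbf{w}\,\qquad f\rho\,\mathbf{m}=\frac{f\rho\gamma}{3a\zeta T}\,\zeta\mathbf{w}\;,\] so the active force density exceeds the mutual friction \(\zeta\mathbf{w}\), and the very flow that orients the motors is amplified, precisely when \(f\rho\gamma/3a\zeta T>1\).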
Along with Eq. (2), it is convenient to recast the equations of motion (1) in terms of the above defined relative velocity \(\mathbf{w}_{\omega}=\mathbf{v}_{\omega}^{\rm p}-\mathbf{v}_{\omega}^{\rm s}\) and viscosity-weighted average velocity \(\mathbf{u}_{\omega}=\left(\eta_{\omega}^{\rm p}\mathbf{v}_{\omega}^{\rm p}+\eta^{\rm s}\mathbf{v}_{\omega}^{\rm s}\right)/\left(\eta_{\omega}^{\rm p}+\eta^{\rm s}\right)\) (see Appendix, Section C). Doing so, one can easily see that the relative velocity \(\mathbf{w}\) is driven by \(\mathbf{m}\), i.e., mathematically by force monopoles rather than dipoles. A similar mathematical structure appeared in the work [26], albeit in an entirely different physics context. This explains why hydrodynamic interactions are so important in our active system, despite the fact that in passive polymers they are screened at distances not far exceeding the mesh size [27]. Another remarkable feature of the full set of equations is that they allow for simultaneous Helmholtz decomposition of the three vector fields \(\mathbf{m}\), \(\mathbf{u}\), and \(\mathbf{w}\) into uncoupled divergence-free (transverse, \(\perp\)) and curl-free (longitudinal, \(\parallel\)) modes.
_Threshold of instability, divergence free (transverse) modes_ - Transverse modes do not involve density change, \(\delta\phi=0\), and, accordingly, no pressure gradient, \(\nabla P=0\). This leaves us with just two equations which are easily combined into one (see Appendix, Section C):
\[-i\omega\tau\left(1-\lambda^{2}\nabla^{2}\right)\mathbf{w}_{\omega\perp}=2 \left(\frac{f\rho\gamma}{3a\zeta T}-1+\lambda^{2}\nabla^{2}\right)\mathbf{w}_{ \omega\perp}\, \tag{3}\]
where we introduced short hand notations
\[\tau=\frac{\gamma}{T}\,\ \ \text{and}\ \ \ \lambda^{2}=\frac{\eta_{\omega}^{\rm p }\eta^{\rm s}/\zeta}{\eta_{\omega}^{\rm p}+\eta^{\rm s}}\simeq\frac{\eta^{\rm s} }{\zeta}\, \tag{4}\]
and in the last transformation we took into account the fact that \(\eta_{\omega}^{\rm p}\gg\eta^{\rm s}\), by several orders of magnitude, over the entire frequency range of interest [28; 29; 30; 31; 32; 33; 34; 35; 36]. Clearly, \(\tau\)
Figure 1: Sketch of our model and the two types of solutions. A: Example of a region of disordered polymer and attached force dipoles, with a zoomed-in section where the parameters describing the microscopic features of the motors are shown. B: Sketch of the transverse solution in a spherical domain, showing the polar alignment of the sources and the sustained solvent flow being pumped in the opposite direction of their orientation. C: Sketch of the longitudinal, oscillatory solution to the equations of motion. Dashed arrows show the relaxational (osmotic) flow of polymer in the absence of active forces, which seeks to even out density fluctuations. Solid arrows show the active polymer flow induced by the sources. Time goes from the upper panel to the lower one, with time per frame given in Eq. (9).
is the characteristic time of passive re-orientation by a single motor, while \(\lambda\) is the length scale of the mesh size.
In an infinite domain, the modes are just plane waves, \(\nabla^{2}\rightarrow-q^{2}\), and we see that modes become unstable when \(\frac{f\rho\gamma}{3a\zeta T}>1+\lambda^{2}q^{2}\). The fact that the length scale \(1/q\) of the unstable modes diverges as we approach from above the critical force level at which \(\frac{f\rho\gamma}{3a\zeta T}-1=0\) is reminiscent of a second-order phase transition, similar to that in a magnet, with \(\mathbf{w}_{\perp}\) playing the role of (self-consistent) magnetic field and \(\mathbf{m}_{\perp}\) the local averaged spin. The critical parameter \(\epsilon=\frac{f\rho\gamma}{3a\zeta T}-1\) describes a competition between the velocity produced by the cooperatively acting motors \(\frac{f\rho}{\zeta}\), and the characteristic velocity needed to align a motor, \(aT/\gamma=a/\tau\).
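For illustration, a minimal numerical sketch of the plane-wave growth rate implied by Eq. (3); the parameter values below are rough, made-up numbers of the right order of magnitude, not fitted to data.

```python
import numpy as np

def transverse_growth_rate(q, eps, lam, tau):
    """Growth rate s(q) of a transverse plane wave in Eq. (3):
    substitute nabla^2 -> -q^2 and -i*omega -> s, with
    eps = f*rho*gamma/(3*a*zeta*T) - 1 the critical parameter."""
    return 2.0 * (eps - (lam * q) ** 2) / (tau * (1.0 + (lam * q) ** 2))

lam, tau = 50e-9, 1e-6             # mesh size ~50 nm, reorientation time ~1 microsecond (rough)
q = np.linspace(1e3, 1e8, 10_000)  # wavenumbers, 1/m
for eps in (-0.1, 0.0, 0.1):
    s = transverse_growth_rate(q, eps, lam, tau)
    print(f"eps = {eps:+.1f}: max growth rate = {s.max():+.3e} 1/s "
          f"(unstable only if positive; long-wavelength modes go unstable first)")
```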
If the system is confined in a finite domain of size \(R\), then modes have a more elaborate structure and discrete spectrum. Although the stability analysis for this case may require a separate study [37], the qualitative estimate of the amount of force needed to generate instability can be obtained by just setting \(q\sim 1/R\) (see Fig. 2):
\[f\rho>\frac{3a\zeta T}{\gamma}+\frac{a\eta_{s}T}{\gamma R^{2}}. \tag{5}\]
The meaning of this condition becomes transparent if we imagine an arrangement of motors in a typical transverse mode in a round domain (depicted in Fig. 1B). These motors acting together have to be strong enough to overcome the friction of the solvent pumped through the network (the first term) and additional friction against the boundary (the second term).
_Threshold of instability, curl free (longitudinal) modes_ - The longitudinal waves involve density fluctuations, which is why their description is more complicated. Nevertheless, even in this case, the problem is reduced to a single equation for the field \(\mathbf{w}_{\parallel}\) (see Appendix, Section C for derivation):
\[\begin{split}&\left[1-\lambda_{s}^{2}\nabla^{2}\right]\tau^{2} \partial_{t}^{2}\mathbf{w}_{\parallel}-4\lambda_{d}^{2}\nabla^{2}\mathbf{w}_{ \parallel}+\\ &+2\left[1-\left(\lambda_{s}^{2}+\lambda_{d}^{2}\right)\nabla^{2}- \frac{f\rho\gamma}{3a\zeta T}\right]\tau\partial_{t}\mathbf{w}_{\parallel}=0\,\end{split} \tag{6}\]
where in addition to (4) we introduced two new length scales, their complete expressions are cumbersome (see Appendix, Eq. C7), but in simplified form (due to \(\eta_{\omega}^{\mathrm{p}}\gg\eta^{\mathrm{s}}\)) they are as follows:
\[\lambda_{s}^{2}\simeq\frac{\eta^{\mathrm{p}}\left(1-\phi_{0}\right)^{2}}{ \zeta}\ \ \text{and}\ \ \lambda_{d}^{2}\simeq\frac{K\phi_{0}\left(1-\phi_{0}\right)^{2}\gamma}{2\zeta T }. \tag{7}\]
In equation (6), we returned to time domain (\(-i\omega\rightarrow\partial_{t}\)), making the oscillator structure of the equation more transparent. This is possible only as long as polymer viscosity, \(\eta_{\omega}^{\mathrm{p}}\), is only smoothly dependent on frequency.
As in the transverse case before, in an infinite domain the modes are just plane waves, \(\nabla^{2}\rightarrow-q^{2}\), and Eq. (6) becomes that of a damped harmonic oscillator. Remarkably, active driving force comes only in the friction term. In particular, sufficiently strong and numerous motors can lead to the flipped sign of friction, making the oscillator unstable. As before, structure of modes for a finite size domain of size \(R\) requires special analysis [37], but qualitatively we can estimate the instability threshold by just replacing \(q\to 1/R\) (see Fig. 2):
\[f\rho>\frac{3a\zeta T}{\gamma}+\frac{(1-\phi_{0})^{2}a}{R^{2}}\left[\frac{3T \eta^{\mathrm{p}}}{\gamma}+\frac{3}{2}K\phi_{0}\right]. \tag{8}\]
Similar to the formula (5) for the transverse case, Eq. (8) means that motors have to be strong enough to overcome friction, which this time involves moving and deforming polymers, thus dependent on \(\eta_{\omega}^{\mathrm{p}}\) and \(K\), respectively. This implies that a larger force is needed to generate longitudinal modes compared to the transverse ones (and the extensional viscosity of the polymer can further increase this threshold).
When the force is exactly equal to the threshold value for some \(q\), this mode exhibits a sustained oscillation with frequency such that \(\left(\omega\tau\right)^{2}=2\lambda_{d}^{2}q^{2}/\left(1+\lambda_{s}^{2}q^{2}\right)\). In particular, the small \(q\) modes (\(q\lambda_{s}\ll 1\)) are just propagating waves with \(\omega\propto q\) and with velocity \(\sim\lambda_{d}/\tau\sim K/\zeta\). Numerically generated movies illustrating possible wave packet dynamics can be found in Appendix, Section D.
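A minimal numerical sketch of the plane-wave analysis of Eq. (6) is given below (parameter values are illustrative, order-of-magnitude guesses): the two roots of the characteristic polynomial show damped, sustained, or growing oscillations depending on the sign of the effective friction term.

```python
import numpy as np

def longitudinal_roots(q, F, lam_s, lam_d, tau):
    """Roots s of the characteristic polynomial obtained from Eq. (6) with a plane wave
    (nabla^2 -> -q^2, d/dt -> s), where F = f*rho*gamma/(3*a*zeta*T):
    (1 + lam_s^2 q^2) tau^2 s^2 + 2 [1 + (lam_s^2 + lam_d^2) q^2 - F] tau s + 4 lam_d^2 q^2 = 0."""
    a2 = (1.0 + (lam_s * q) ** 2) * tau ** 2
    a1 = 2.0 * (1.0 + (lam_s ** 2 + lam_d ** 2) * q ** 2 - F) * tau
    a0 = 4.0 * (lam_d * q) ** 2
    return np.roots([a2, a1, a0])

lam_s, lam_d, tau, q = 1e-6, 1e-7, 1e-6, 1e6   # rough, illustrative values (SI units)
F_star = 1.0 + (lam_s ** 2 + lam_d ** 2) * q ** 2
for F in (0.5 * F_star, F_star, 1.5 * F_star):
    print(f"F/F* = {F / F_star:.1f}:", longitudinal_roots(q, F, lam_s, lam_d, tau))
# below threshold: Re(s) < 0 (damped oscillation); at threshold: purely imaginary roots
# (sustained oscillation); above threshold: Re(s) > 0 (growing oscillation).
```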
This can be rationalized in an interesting way. Let us define rate \(\tau_{q}^{-1}\sim(K/\zeta)q^{2}\); given that \(K/\zeta\) has dimensionality of a diffusion coefficient, \(\tau_{q}\) is the characteristic relaxation time of a density wave of length \(1/q\) by cooperative diffusion, driven by polymer elasticity (\(K\)) against friction (\(\zeta\)). In terms of \(\tau_{q}\), we can write mode \(q\) frequency as the geometric mean of two rates:
\[\omega\sim\left(\tau\tau_{q}\right)^{-1/2}\,\ \ \text{with}\ \ \tau_{q}^{-1}\sim(K/\zeta)q^{2}. \tag{9}\]
The mathematical structure of frequency as the geometric mean of two rates is analogous to that which arises in the Lotka-Volterra equations [38; 39], which is the geometric mean of the growth rate of the prey and the death rate of the predator. This structure reflects the physical nature of the oscillator: by the time some dense region of size \(1/q\) relaxes, it will have generated a velocity field which locally aligns the field \(\mathbf{m}_{\parallel}\). This field has a persistence time \(\tau\), and pumps the polymer in the same direction in which it was relaxing. This causes a new dense region to develop, until the dipoles lose their alignment in turn after a time \(\tau\), and the polymer relaxation begins yet again at a rate \(1/\tau_{q}\) in the opposite direction. This is illustrated in Fig. 1C.
If the force is slightly above or slightly below the threshold (8), then the oscillator is either slowly decaying (below) or slowly growing (above), with a characteristic time that diverges at the threshold, again reminiscent of the standard critical slowing down in phase transitions.
_Beyond linear response -_ Once the driving force exceeds the threshold value, unstable modes grow exponentially, leave the linear response range, and then non-linearity comes to the rescue and eventually arrests the growth. There are many possible non-linear effects, including non-linear osmotic and/or rheological behavior and advection of motors, but we will focus on the most basic and omnipresent one, namely the fact that the orientational order of motors is limited such that \(\left|\mathbf{m}\right|\leq 1\): the most the motors can do together is to align completely.
Complete description of orientation dynamics in an orienting field is rather cumbersome (see Appendix, Section A). We will restrict ourselves to the simplest estimate, assuming that the polarization vector \(\mathbf{m}\) beyond the linear regime (2) evolves according to
\[\begin{split}\tau\partial_{t}\mathbf{m}&=2\left( \mathbf{m}_{\text{eq}}\left(\mathbf{w}\right)-\mathbf{m}\right)\,\\ &\text{with}\ \mathbf{m}_{\text{eq}}\left(\mathbf{w}\right)\simeq \mathbf{w}\frac{\tau}{3a}\left(1-\frac{\left(\mathbf{w}\tau/a\right)^{2}}{15} \right)\.\end{split} \tag{10}\]
Here \(\mathbf{m}_{\text{eq}}\left(\mathbf{w}\right)\) is the equilibrium value that would be achieved in a constant flow \(\mathbf{w}\); similar to classical orientation of dipoles, \(m_{\text{eq}}(w)=\coth\left(w\gamma/aT\right)-aT/\gamma w\), and we use the first non-linear term of expansion. Eq. (10) is not exact, but captures main qualitative features.
Once the dynamics is nonlinear, separation of longitudinal and transverse modes is not possible. Nevertheless, neglecting frequency dependence of \(\eta^{\text{p}}\) (and, therefore, \(\lambda_{s}\)), we can reduce equations of motion to a single equation (see Appendix, Section C):
\[\begin{split}&\tau^{2}\partial_{t}^{2}\left[1-\lambda_{s}^{2} \nabla\nabla\cdot+\lambda^{2}\nabla\times\nabla\times\right]\mathbf{w}-4\left[ \lambda_{d}^{2}\nabla\nabla\cdot\right]\mathbf{w}\\ &+2\tau\partial_{t}\left[1-\left(\lambda_{s}^{2}+\lambda_{d}^{2} \right)\nabla\nabla\cdot+\lambda^{2}\nabla\times\nabla\times-\right.\\ &\qquad\qquad\qquad\qquad\left.-\frac{f\rho\gamma}{3a\zeta T} \left(1-\frac{\tau^{2}}{15a^{2}}\mathbf{w}^{2}\right)\right]\mathbf{w}=0\.\end{split} \tag{11}\]
Equation (11) is instructive. First of all, if we drop the nonlinear term, then it is reduced to either Eq. (3) or Eq. (6) if the field \(\mathbf{w}\) is divergence-free or curl-free, respectively [40]. Of course, full non-linear equation is difficult to analyze. Nevertheless, Eq. (11) is still similar to that for an oscillator (specifically, van der Pol oscillator [41; 42]), with both active forces and non-linear saturation contributing to the friction term (with first time derivative); All types of second spatial derivatives, arising from viscous stresses, are controlled by the domain size and estimated as \(1/R^{2}\), although the detailed shape of the vector field \(\mathbf{w}\) is sensitive to the domain shape and boundary conditions. For an estimate, we just say that modes start to grow when force makes friction term in Eq. (11) negative and then \(\left|\mathbf{w}\right|\) grows until friction term becomes positive again. If the force threshold for instability is \(f^{*}\) (determined, e.g., by Eq. (8)), then the steady velocity amplitude scales as \(w^{2}\sim\left(15a^{2}/\tau^{2}\right)\left(f-f^{*}\right)/f^{*}\), and corresponding density variations amplitude is \(\delta\phi^{2}\sim\left(15a^{2}\zeta/K\tau\right)\left(f-f^{*}\right)/f^{*}\). Corresponding numerical solutions are shown in Appendix, Section D.
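Setting the effective friction bracket in Eq. (11) to zero, with all the gradient terms lumped into the threshold force \(f^{*}\), makes the quoted amplitude explicit: \[\frac{f^{*}\rho\gamma}{3a\zeta T}=\frac{f\rho\gamma}{3a\zeta T}\left(1-\frac{\tau^{2}w^{2}}{15a^{2}}\right)\quad\Longrightarrow\quad w^{2}=\frac{15a^{2}}{\tau^{2}}\,\frac{f-f^{*}}{f}\simeq\frac{15a^{2}}{\tau^{2}}\,\frac{f-f^{*}}{f^{*}}\quad\text{close to the threshold}.\]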
_Discussion -_ Our model predicts three phases for chromatin dynamics: disordered, and two types of polar order - transverse flows and oscillatory regime. These are controlled by the active force density \(f\rho\) and the domain size \(R\) (Fig. 2).
Our results are consistent with extensive simulations reported in [9], showing that extensile motors (\(f>0\)), if present in sufficient density \(f\rho\), produce polar ordered state and coherent motion. An additional feature of the computational model [9] is that they observe nematic ordering of polymer itself; we speculate that nematicity of the polymer may be a consequence of the polar order of motors, because the motors in the simulations were tied to local direction of the polymer.
Speaking about chromatin _in vivo_, we consider RNA polymerase II as a likely motor driving chromatin dynamics, as it binds to chromatin and pushes RNA into the solvent [1], although many other nuclear enzymes can also mechanically couple chromatin fiber to the nucleoplasm, e.g., loop extruding condensin [43]. For these motors, density \(\rho\gtrsim 10^{2}\,\mu\text{m}^{-3}\)[44], force \(f\sim 25\,\text{pN}\)[45], size \(a\sim 20\,\text{nm}\)[46]. At full cooperation, when perfectly aligned, these motors can drive solvent past chromatin at a very large speed \(w_{\text{max}}\sim f\rho/\zeta\sim 10^{7}\,\text{nm/s}\); here, we used \(\zeta=\eta^{\text{s}}/\lambda^{2}\), assuming nucleoplasm viscosity similar to that of water, \(\eta^{\text{s}}\sim 10^{-3}\,\text{Pa}\cdot\text{s}\)[35; 34], and taking chromatin mesh size \(\lambda\sim 50\,\text{nm}\) (\(30-100\,\text{nm}\) reported in experiments [47; 48]). Of course, polymer moves with a smaller speed, reduced by a factor of ratio of viscosities, \(v^{\text{p}}\sim\left(\eta^{\text{s}}/\eta^{\text{p}}\right)w\).
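As a sanity check of these numbers, a rough back-of-the-envelope script using the same literature values quoted above:

```python
# Order-of-magnitude check of w_max ~ f*rho/zeta with zeta = eta_s / lambda^2,
# using the rough literature values quoted in the text (SI units).
f     = 25e-12           # force per motor, N  (~25 pN)
rho   = 1e2 * 1e18       # working-motor density, 1/m^3  (~10^2 per cubic micron)
eta_s = 1e-3             # nucleoplasm viscosity, Pa*s (water-like)
lam   = 50e-9            # chromatin mesh size, m
zeta  = eta_s / lam**2   # polymer-solvent friction per unit volume, Pa*s/m^2
w_max = f * rho / zeta   # maximal relative speed at full motor alignment, m/s
print(f"zeta ~ {zeta:.1e} Pa*s/m^2,  w_max ~ {w_max * 1e9:.1e} nm/s")  # ~10^7 nm/s
```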
Unfortunately, the ratio of viscosities is difficult to measure directly. Using experimentally measured values of \(\eta^{\text{p}}\) and \(\eta^{\text{s}}\)[28; 29; 30; 31; 32; 33; 34; 35; 36], we estimate the ratio to be in the range \(10^{-2}\) to \(10^{-6}\). The latter figure would be in agreement with experimentally measured polymer speed in slow coherent motion about \(10\,\text{nm/s}\)[7]. If the actual ratio of viscosities is not quite that small, then we will
Figure 2: Phase diagram of the instabilities and the regions of parameter space where they develop, as a function of the active force density and the size of the container. In the left (blue) region of the diagram, the forcing is insufficient to drive instabilities and the system remains disordered. The middle (red) region is where the forces are sufficient to drive transverse flows but not strong enough to cause polymer density fluctuations. Finally, the bottom-right (purple) region of parameter space is where both longitudinal oscillations and transverse flows are possible. The lines separating the regions correspond to the conditions (5,8) respectively.
have to conclude that chromatin _in vivo_ operates close to criticality, where our model predicts reduction of velocity by a factor \(\left(f\rho/\left(f\rho\right)^{*}-1\right)^{1/2}\).
To estimate actual closeness to criticality in the case of transverse flows, it is convenient to rewrite the critical conditions Eq. (5) in terms of the above mentioned maximal speed at full cooperation: \(f\rho/\zeta>\left(f\rho\right)^{*}/\zeta=\left(3a/\tau\right)\left[1+\lambda^{2 }/R^{2}\right]\). Here \(\lambda/R\) is completely negligible for realistic nucleus size of about \(R\sim 10\,\mu\)m [1; 3], while passive reorientation time of a motor we calculate as \(\tau\gtrsim 10^{-6}\,\)s (see Appendix, Section E). The actual value of \(\tau\) could be significantly higher, since we underestimated the dissipative coupling between motor and polymer. Current estimate yields \(3a/\tau\lesssim 10^{7}\,\)nm/s, similar to \(w_{\text{max}}\) above. This suggests that transverse flows could indeed be responsible for the coherent chromatin flows in live cells.
In the oscillatory regime, required critical force density is larger, \(f\rho/\zeta>\left(f\rho\right)^{*}/\zeta=\left(3a/\tau\right)\left[1+(\eta^{ \text{p}}/\eta^{\text{s}})\lambda^{2}/R^{2}\right]\), see Eq. (8). Given the uncertainties in the estimates of \(\tau\) and, most importantly, ratio of viscosities, it is difficult at the present time to make definitive statements about feasibility of this regime for _in vivo_ chromatin. A similar uncertainty exists about our predictions of running waves speed and oscillations period (Eq. 9), which is poorly constrained, but seems significantly shorter than measured lifetime of coherent chromatin flows in cells of \(\sim 5-10\,\)s [7]. Importantly, a set of parameters consistent with current knowledge can be chosen that yields physiologically relevant results, yet such a choice cannot be presently motivated.
Overall, our model might be consistent with current measurements, although the significant approximations in our theory and uncertainties in parameters call for future efforts towards more detailed modeling. This will require consideration of the boundary conditions [37], including solvent permeation through the nuclear envelope [49], coupling of chromatin to lamin [50; 51; 52] and to nuclear envelope fluctuations [53]. Another promising direction is to account for a nonuniform distribution of active motors in the nucleus and along the chromatin fiber, such as active motors preferentially residing in transcriptionally active euchromatin [52; 54; 21]. But already now our theory makes predictions that beg for experimental tests, in particular for solvent motions, which unlike chromatin motions have not been measured before.
AZ is grateful for support from the NSF Grants CAREER PHY-1554880, CMMI-1762506, PHY-2210541, DMS-2153432 and NYU MRSEC DMR-1420073. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. AZ and AYG acknowledge useful discussions with participants of the 2020 virtual KITP program on "Biological Physics of Chromosomes". AYG thanks S. Ramaswamy for useful discussions.
**Appendices**
### Single motor dynamics
Since the body of the force-exerting motor is tethered to the polymer at one end and experiences friction from the solvent, there must be a torque acting on the motor and proportional to the relative speed of polymer past solvent, \(\mathbf{v}^{\text{p}}-\mathbf{v}^{\text{s}}=\mathbf{w}\). This leads to the following Langevin equation describing the stochastic dynamics of the motor orientation vector \(\hat{\mathbf{n}}\):
\[\begin{split}&\partial_{t}\hat{\mathbf{n}}=\left(\mathcal{I}- \hat{\mathbf{n}}\hat{\mathbf{n}}\right)\cdot\left[\frac{\mathbf{w}}{a}+\sqrt{ \frac{2T}{\gamma}}\ \mathbf{\xi}\right]\\ &\langle\xi_{i}(t)\xi_{j}(t^{\prime})\rangle=\delta_{ij}\delta(t-t ^{\prime})\;.\end{split} \tag{10}\]
We assume here that the dipoles experience rotational friction with coefficient \(\gamma\). We also assume the presence of Gaussian white noise that obeys fluctuation-dissipation theorem and thus has variance \(2\gamma T\), with \(T\) temperature (although we do not _a priori_ exclude the possibility that \(T\) may be some sort of an effective temperature). \(\mathcal{I}-\hat{\mathbf{n}}\hat{\mathbf{n}}\) (with \(\mathcal{I}\) the identity matrix) projects the expression in square brackets onto the plane perpendicular to \(\hat{\mathbf{n}}\) and thus ensures that the dynamics does not change the length and only rotates the \(\hat{\mathbf{n}}\) vector. It is worth noting that equation (10) is identical to the equation of motion for Langevin dipoles in an external electric field at finite temperature, where in our case \(\mathbf{w}\) plays the role of orienting field [55].
Equation (10) is an embodiment of our minimal model of force dipole dynamics. It certainly neglects a number of potentially relevant factors, two of which we mention. First, we do not take into account any direct interaction between motors (e.g., excluded volume), assuming they are sufficiently far apart. Second, we assume that motor is attached to a polymer by a swivel and thus turning the motor does not cause polymer to bend; in other words, motor direction \(\hat{\mathbf{n}}\) is assumed independent of the local direction of the polymer backbone (note that the computational model in the work [9] makes essentially the opposite assumption that these two vectors are the same).
To describe the onset of polar order, we consider the dynamics of the coarse-grained orientation field \(\mathbf{m}(\mathbf{r},t)\). We define the coarse-graining to take place inside a ball \(\mathcal{B}\) centered at \(\mathbf{r}\), such that this ball is large enough to contain many dipoles, while still being smaller than the relevant dynamic length scales over which gradients develop in the system. Within this ball, we define \(\mathbf{m}(\mathbf{r})=\frac{1}{N}\sum_{i}\hat{\mathbf{n}}_{i}\), the average orientation of the \(N\) dipoles inside \(\mathcal{B}\). Thus, \(\mathbf{m}\) is a vector with length \(0\leq|\mathbf{m}|\leq 1\).
Then, to derive the equation of motion for \(\mathbf{m}\), we consider the distribution of directions of motors in \(\mathcal{B}\). We can assume that the external field \(\mathbf{w}\) is constant in this region, and we orient our coordinate system such that it
points in the \(\hat{\mathbf{z}}\) direction. Then, the external field leads to an effective potential which in spherical coordinates is proportional to \(\cos(\theta)\). The resulting Fokker-Planck equation for the distribution of orientation angles is
\[\partial_{t}p(\hat{\mathbf{n}})=-\frac{1}{a}\nabla\cdot\left(\nabla\left(\mathbf{ w}\cdot\hat{\mathbf{n}}\right)p\right)+\frac{T}{\gamma}\nabla^{2}p \tag{10}\]
### Linear response
Linear dynamics of \(\mathbf{m}\) corresponds to the situation where the distribution \(p(\hat{\mathbf{n}})\) deviates weakly from isotropic, which is equivalent to assuming \(\frac{|\mathbf{w}|\gamma}{aT}\ll 1\). The equation of motion for \(\mathbf{m}\) can be found by multiplying equation (10) by \(\hat{\mathbf{n}}\) and integrating over the unit sphere:
\[\partial_{t}\mathbf{m}=-2\frac{T}{\gamma}\mathbf{m}-\frac{1}{a}\int\nabla \cdot\left(\nabla\left(\mathbf{w}\cdot\hat{\mathbf{n}}\right)p\right)\hat{ \mathbf{n}}d\Omega\;. \tag{11}\]
Here, we used \(\nabla^{2}\hat{\mathbf{n}}=-2\hat{\mathbf{n}}\) to simplify the Laplacian term. The integral on the right-hand side can be performed by parts in spherical coordinates, and then using \(\langle P_{2}\rangle=\frac{1}{2}\int\left(3\cos^{2}\theta-1\right)p\,d\Omega\) we get
\[\partial_{t}\mathbf{m}=-2\frac{T}{\gamma}\mathbf{m}+\frac{2\mathbf{w}}{3a}\left(1-\langle P_{2}\rangle\right) \tag{12}\]
In the linear response regime, the distribution in the second term should be assumed to be isotropic, leading to \(\langle P_{2}\rangle=0\) and we get
\[\partial_{t}\mathbf{m}=-2\frac{T}{\gamma}\mathbf{m}+\frac{2\mathbf{w}}{3a} \tag{13}\]
which is equation (2) used in the main text.
### Beyond linear response
Going beyond the linear response regime, we still assume the distribution to be axially symmetric about \(\mathbf{w}\). In this approximation, the Fokker-Planck equation (10) can be evaluated more exactly by considering the time-dependence of each Legendre mode \(m_{l}=\langle P_{l}(\cos(\theta))\rangle\), where \(\cos\theta=\hat{\mathbf{w}}\cdot\hat{\mathbf{n}}\). To do this, we multiply equation (10) by \(P_{l}\) and integrate over the unit sphere. Because of axial symmetry, only the \(\hat{\mathbf{z}}\) component is relevant:
\[\tau\partial_{t}m_{l}=-l(l+1)m_{l}-\frac{|\mathbf{w}|\tau}{a}\int\nabla\cdot \left(\nabla\left(\cos(\theta)\right)p\right)P_{l}d\Omega\;, \tag{14}\]
where we have inserted the definition \(\tau=\gamma/T\). The integral on the right-hand side can be evaluated by integration by parts, which leads to the following sequence of differential equations for \(m_{l}\):
\[\frac{\tau}{l(l+1)}\partial_{t}m_{l}=-m_{l}+\frac{|\mathbf{w}|\tau/a}{2l+1} \left(m_{l-1}-m_{l+1}\right) \tag{15}\]
This should be complemented with the initial condition that \(m_{0}=1\), ensuring that this can be solved sequentially. Note that the equation for \(m_{1}\) involves \(m_{2}\), a fact which is negligible in the linear response treatment, but important beyond linear response.
In equilibrium, the time derivatives are all \(0\), and the resulting recurrence relation for the equilibrium \(m_{l}^{eq}\) can be identified as the Bessel recurrence relation. Requiring regularity at \(w=0\), one can obtain the following result:
\[m_{l}^{eq}=\sqrt{\frac{\pi\alpha}{2}}\frac{I_{l+1/2}(\alpha)}{\sinh\alpha}\;, \tag{16}\]
where \(\alpha=|\mathbf{w}|\tau/a\), and \(I_{\nu}\) is the modified Bessel function of the first kind. In particular, if \(l=1\), we obtain the well-known result \(m_{1}^{eq}=\coth\alpha-1/\alpha=\mathcal{L}(\alpha)\), also known as the Langevin function.
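A quick numerical check of Eq. (16) for \(l=1\) (a throwaway script, not part of the derivation; it only uses scipy's modified Bessel function):

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind, I_nu

def m_eq(l, alpha):
    """Equilibrium Legendre moments of Eq. (16)."""
    return np.sqrt(np.pi * alpha / 2.0) * iv(l + 0.5, alpha) / np.sinh(alpha)

alpha = 0.7
langevin = 1.0 / np.tanh(alpha) - 1.0 / alpha   # coth(alpha) - 1/alpha
print(m_eq(1, alpha), langevin)                 # both ~0.226, as expected for l = 1
```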
The dynamical behavior of this system can be investigated perturbatively in the small parameter \(\alpha\). Indeed, the leading behavior for the moment \(m_{l}\) is proportional to \(\alpha^{l}\). Therefore, we may consider the leading dynamical behavior by truncating the sequence after \(l=2\):
\[\begin{split}\tau\partial_{t}m_{1}&=-2m_{1}+\frac{ 2}{3}\alpha(1-m_{2})\\ \tau\partial_{t}m_{2}&=-6m_{2}+\frac{6}{5}\alpha m _{1}\;.\end{split} \tag{17}\]
These can be combined into one equation for \(\mathbf{m}=m_{1}\hat{\mathbf{z}}\):
\[\frac{3}{2}\tau^{2}\partial_{t}^{2}\mathbf{m}+12\tau\partial_{t}\mathbf{m}+\left(18+\frac{6}{5}\alpha^{2}\right)\mathbf{m}=6\alpha\hat{\mathbf{z}} \tag{18}\]
This is the equation for an overdamped harmonic oscillator, around the equilibrium value \(\mathbf{m}_{eq}=\frac{\alpha}{3+\alpha^{2}/5}\hat{\mathbf{z}}\simeq\left(\frac{\alpha}{3}-\frac{\alpha^{3}}{45}\right)\hat{\mathbf{z}}\). The oscillator is overdamped for any \(\alpha^{2}<5\), and since we are in the regime \(\alpha\ll 1\) we can safely neglect inertia.
In the main text, we simplify this overdamped equation of motion by writing it as
\[\tau\partial_{t}\mathbf{m}=2\left(\mathbf{m}_{eq}(\mathbf{w})-\mathbf{m} \right)\;, \tag{19}\]
where \(\mathbf{m}_{eq}=\frac{\mathbf{w}\tau}{3a}\left(1-\frac{\left(\mathbf{w}\tau/a \right)^{2}}{15}\right)\)
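The truncated moment system (17) and the equilibrium value quoted above can also be checked by direct numerical integration. The following sketch (ours; it assumes \(\tau=1\) and an illustrative value of \(\alpha\)) relaxes the two-mode system to its fixed point and compares it with \(\alpha/(3+\alpha^{2}/5)\).

```python
# Illustrative integration of the truncated moment equations (17); tau = 1 by choice.
import numpy as np
from scipy.integrate import solve_ivp

alpha = 0.5  # |w| tau / a, an assumed value for the demonstration

def rhs(t, y):
    m1, m2 = y
    dm1 = -2.0 * m1 + (2.0 / 3.0) * alpha * (1.0 - m2)
    dm2 = -6.0 * m2 + (6.0 / 5.0) * alpha * m1
    return [dm1, dm2]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], rtol=1e-8)
m1_final = sol.y[0, -1]
m1_eq = alpha / (3.0 + alpha**2 / 5.0)   # equilibrium quoted below Eq. (18)
print(m1_final, m1_eq)                    # the two values should agree
```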
### Derivation of equations of motion in terms of \(\mathbf{u}\) and \(\mathbf{w}\)
In the main text, we define the velocity fields \(\mathbf{w}=\mathbf{v}^{\mathrm{p}}-\mathbf{v}^{\mathrm{s}}\) and \(\mathbf{u}_{\omega}=\left(\eta_{\omega}^{\mathrm{p}}\mathbf{v}_{\omega}^{\mathrm{p}}+\eta^{\mathrm{s}}\mathbf{v}_{\omega}^{\mathrm{s}}\right)/\left(\eta_{\omega}^{\mathrm{p}}+\eta^{\mathrm{s}}\right)\). Here we show how to transform the two-fluid equations of motion into these new variables. We begin with the force-balance equations
\[\zeta(\mathbf{v}_{\omega}^{\mathrm{p}}-\mathbf{v}_{\omega}^{\mathrm{s}})=\eta_{\omega}^{\mathrm{p}}\nabla^{2}\mathbf{v}_{\omega}^{\mathrm{p}}-K\nabla\delta\phi_{\omega}-\phi_{0}\nabla P_{\omega}+\mathbf{F}_{\omega}^{\mathrm{p}} \tag{16a}\] \[\zeta(\mathbf{v}_{\omega}^{\mathrm{s}}-\mathbf{v}_{\omega}^{\mathrm{p}})=\eta^{\mathrm{s}}\nabla^{2}\mathbf{v}_{\omega}^{\mathrm{s}}-(1-\phi_{0})\nabla P_{\omega}+\mathbf{F}_{\omega}^{\mathrm{s}}\;. \tag{16b}\]
To get the force-balance equation for \(\mathbf{w}\), we divide equation (16a) by \(\eta^{\mathrm{p}}\) and equation (16b) by \(\eta^{\mathrm{s}}\) before taking their difference. This gives us
\[\begin{split}\zeta\left(\frac{1}{\eta^{\mathrm{p}}_{\omega}}+\frac{1 }{\eta^{\mathrm{s}}}\right)\mathbf{w}_{\omega}=\nabla^{2}\mathbf{w}_{\omega}+ \left(\frac{1}{\eta^{\mathrm{p}}_{\omega}}+\frac{1}{\eta^{\mathrm{s}}}\right) f\rho\mathbf{m}_{\omega}\\ +\left(\frac{1-\phi_{0}}{\eta^{\mathrm{s}}}-\frac{\phi_{0}}{\eta ^{\mathrm{p}}_{\omega}}\right)\nabla P_{\omega}-\frac{K}{\eta^{\mathrm{p}}_{ \omega}}\nabla\delta\phi_{\omega}\;.\end{split} \tag{17}\]
The equation for \(\mathbf{u}\) is obtained by simply taking the sum of equations (16a) and (16b), and using the fact that in our model \(\mathbf{F}^{\mathrm{p}}+\mathbf{F}^{\mathrm{s}}=0\):
\[\left(\eta^{\mathrm{p}}_{\omega}+\eta^{\mathrm{s}}\right)\nabla^{2}\mathbf{u} _{\omega}=K\nabla\delta\phi_{\omega}+\nabla P_{\omega}\;. \tag{18}\]
Next we turn our attention to the equation of continuity
\[i\omega\delta\phi_{\omega}=\phi_{0}\nabla\cdot\mathbf{v}^{\mathrm{p}}_{\omega }=-(1-\phi_{0})\nabla\cdot\mathbf{v}^{\mathrm{s}}_{\omega}\;. \tag{19}\]
In the above, we solve for \(\nabla\cdot\mathbf{v}^{\mathrm{s}}\), \(\nabla\cdot\mathbf{v}^{\mathrm{p}}\) and insert them into the definitions \(\nabla\cdot\mathbf{w}=\nabla\cdot\mathbf{v}^{\mathrm{p}}-\nabla\cdot\mathbf{ v}^{\mathrm{s}}\), and \(\nabla\cdot\mathbf{u}=\left(\eta^{\mathrm{p}}\nabla\cdot\mathbf{v}^{\mathrm{p}}+ \eta^{\mathrm{s}}\nabla\cdot\mathbf{v}^{\mathrm{s}}\right)/(\eta^{\mathrm{p}} +\eta^{\mathrm{s}})\). This yields the continuity equation in terms of the two new fields
\[\nabla\cdot\mathbf{w}_{\omega} =i\omega\delta\phi_{\omega}\left(\frac{1}{\phi_{0}}+\frac{1}{1- \phi_{0}}\right)\;, \tag{20a}\] \[\left(\eta^{\mathrm{p}}_{\omega}+\eta^{\mathrm{s}}\right)\nabla \cdot\mathbf{u}_{\omega} =i\omega\delta\phi_{\omega}\left(\frac{\eta^{\mathrm{p}}_{\omega }}{\phi_{0}}-\frac{\eta^{\mathrm{s}}}{1-\phi_{0}}\right)\;. \tag{20b}\]
We now have the full set of equations of motion, in terms of the new velocity fields \(\mathbf{u},\mathbf{w}\).
### Full derivation of equation (6, main text) for the longitudinal case, as well as the more general equation (11, main text)
In the main text, we used equations (3,6, main text) for the transverse and longitudinal flows in the linear regime. Later in the paper, when we turned our attention to non-linear dynamics, we used equation (11, main text). Here we will derive a more general result from which both of the linear-response equations of motion, as well as the nonlinear equation of motion may be derived as a particular case, after inserting the appropriate dynamics for \(\mathbf{m}\):
\[\begin{split}-i\omega\tau\left(1+\lambda^{2}\nabla\times\nabla \times-\lambda^{2}_{s}\nabla\nabla\cdot\right)\mathbf{w}_{\omega}\\ -2\lambda^{2}_{d}\nabla\nabla\cdot\mathbf{w}_{\omega}=-i\omega \frac{f\rho\tau}{\zeta}\mathbf{m}_{\omega}\;.\end{split} \tag{21}\]
This is the linear response relation describing the response of \(\mathbf{w}\) to the forcing \(\mathbf{m}\).
We begin the derivation of formula (21) with the force-balance equation of motion derived above
\[\begin{split}&\left(\zeta\left(\frac{1}{\eta^{\mathrm{p}}_{\omega}}+ \frac{1}{\eta^{\mathrm{s}}}\right)-\nabla^{2}\right)\mathbf{w}_{\omega}= \left(\frac{1}{\eta^{\mathrm{p}}}+\frac{1}{\eta^{\mathrm{s}}}\right)f\rho \mathbf{m}_{\omega}\\ &-\left(\frac{\phi_{0}}{\eta^{\mathrm{p}}_{\omega}}-\frac{1-\phi_ {0}}{\eta^{\mathrm{s}}}\right)\nabla P_{\omega}-\frac{K}{\eta^{\mathrm{p}}_{ \omega}}\nabla\delta\phi_{\omega}\;.\end{split} \tag{22}\]
We then eliminate the pressure gradient \(\nabla P_{\omega}\) by solving for it in equation (18). Notice that the divergence-free part of \(\mathbf{u}\) does not couple to any other fields, so we can safely assume \(\nabla\times\mathbf{u}=0\). Then, using the identity
\[\nabla^{2}\mathbf{v}=\nabla\left(\nabla\cdot\mathbf{v}\right)-\nabla\times \left(\nabla\times\mathbf{v}\right)\;, \tag{23}\]
which is valid for any vector field \(\mathbf{v}\), we can write \(\nabla^{2}\mathbf{u}_{\omega}=\nabla\left(\nabla\cdot\mathbf{u}_{\omega}\right)\). Thus we obtain
\[\begin{split}&\nabla P_{\omega}=-K\nabla\delta\phi_{\omega}+\left( \eta^{\mathrm{p}}_{\omega}+\eta^{\mathrm{s}}\right)\nabla^{2}\mathbf{u}_{ \omega}\\ &=\left(-K+i\omega\left(\frac{\eta^{\mathrm{p}}_{\omega}}{\phi_{0} }-\frac{\eta^{\mathrm{s}}}{1-\phi_{0}}\right)\right)\nabla\delta\phi_{\omega} \end{split} \tag{24}\]
where we used the continuity equation (20b). The equations can be closed by relating \(\delta\phi\) back to \(\mathbf{w}\) using equation (20a). We have \(\nabla\delta\phi_{\omega}=\frac{\phi_{0}\left(1-\phi_{0}\right)}{i\omega}\nabla \left(\nabla\cdot\mathbf{w}_{\omega}\right)\), thus leading to the full equation of motion
\[\begin{split}&-i\omega\left(\zeta-\frac{\eta^{\mathrm{p}}\eta^{ \mathrm{s}}}{\eta^{\mathrm{p}}+\eta^{\mathrm{s}}}\nabla^{2}\right)\mathbf{w}_{ \omega}=-i\omega f\rho\mathbf{m}_{\omega}\\ &+K\phi_{0}(1-\phi_{0})^{2}\nabla\left(\nabla\cdot\mathbf{w}_{ \omega}\right)\\ &-i\omega\frac{\left((1-\phi_{0})\eta^{\mathrm{p}}-\phi_{0}\eta^{ \mathrm{s}}\right)^{2}}{\eta^{\mathrm{s}}+\eta^{\mathrm{p}}}\nabla\left( \nabla\cdot\mathbf{w}_{\omega}\right)\;.\end{split} \tag{25}\]
We can collect terms and rewrite the equation as follows
\[\begin{split}-i\omega\tau\left(1-\lambda^{2}\nabla^{2}-(\lambda^{2}_ {s}-\lambda^{2})\nabla\nabla\cdot\right)\mathbf{w}_{\omega}=\\ & 2\lambda^{2}_{d}\nabla\nabla\cdot\mathbf{w}_{\omega}-i\omega\frac{f \rho\tau}{\zeta}\mathbf{m}_{\omega}\;,\end{split} \tag{26}\]
where we have defined the length scales
\[\begin{split}\lambda^{2}&=\frac{\eta^{\mathrm{p}}_{ \omega}\eta^{\mathrm{s}}/\zeta}{\eta^{\mathrm{p}}_{\omega}+\eta^{\mathrm{s}}} \simeq\frac{\eta^{\mathrm{s}}}{\zeta}\;,\\ \lambda^{2}_{s}&=\frac{\eta^{\mathrm{p}}_{\omega}\eta^{ \mathrm{s}}+(\eta^{\mathrm{p}}_{\omega}(1-\phi_{0})-\eta^{\mathrm{s}}\phi_{0})^{2} }{\zeta(\eta^{\mathrm{p}}_{\omega}+\eta^{\mathrm{s}})}\simeq\\ &\simeq\frac{\eta^{\mathrm{p}}_{\omega}\left(1-\phi_{0}\right)^{2}}{ \zeta}\;,\\ \text{and}\\ \lambda^{2}_{d}&=\frac{K\phi_{0}\left(1-\phi_{0}\right)^{2} \gamma}{2\zeta T}\;.\end{split} \tag{27}\]
Using the identity (23), equation (26) can be finally transformed into the desired equation (21). It is the general equation of motion for the velocity field \(\mathbf{w}\), agnostic to the specific dynamics that \(\mathbf{m}\) obeys. It is valid for both longitudinal and transverse modes, as well as any combination of them.
### Derivation of the linearized equation of motion
As we mentioned, equation (21) represents the linear response of \(\mathbf{w}\) given some source \(\mathbf{m}\). It can be formally solved for the Green's function of the velocity field for a
given orientation field. The physical description of the system is complete once we introduce the feedback of \(\mathbf{w}\) on \(\mathbf{m}\), which is done through the linear equation (100), as long as we are in the linear response regime. Altogether, this gives a closed linear equation of motion for \(\mathbf{w}\):
\[\begin{split}&\left(i\omega\tau\right)^{2}(1+\lambda^{2}\nabla \times\nabla\times-\lambda_{s}^{2}\nabla\nabla\cdot)\mathbf{w}_{\omega}\\ &-2\left(i\omega\tau\right)\left(1-\frac{f\rho\gamma}{3a\zeta T} +\lambda^{2}\nabla\times\nabla\times-(\lambda_{s}^{2}+\lambda_{d}^{2})\nabla \nabla\cdot\right)\mathbf{w}_{\omega}\\ &-4\lambda_{d}^{2}\nabla\nabla\cdot\mathbf{w}_{\omega}=0\.\end{split} \tag{101}\]
The beauty of this equation is that it automatically produces equations (3, main text) and (6, main text). For the transverse case, when we take \(\mathbf{w}=\mathbf{w}_{\perp}\), since \(\nabla\cdot\mathbf{w}_{\perp}=0,\ \nabla\times\nabla\times\mathbf{w}_{\perp}=- \nabla^{2}\mathbf{w}_{\perp}\), this produces (3, main text). Conversely, when we take the longitudinal component \(\mathbf{w}=\mathbf{w}_{\parallel}\), then \(\nabla\nabla\cdot\mathbf{w}_{\parallel}=\nabla^{2}\mathbf{w}_{\parallel}\), and \(\nabla\times\mathbf{w}_{\parallel}=0\), and we get (6, main text).
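To make the mode structure of Eq. (101) concrete, one can substitute plane waves and solve the resulting quadratic in the growth rate. The sketch below is ours: it assumes the convention \(\tau\partial_{t}\to s\) for a plane wave \(\propto e^{iqx}\), writes \(\epsilon=f\rho\gamma/(3a\zeta T)-1\) as in the main text, and uses hypothetical dimensionless parameter values.

```python
# Illustrative plane-wave analysis of the closed equation of motion (101).
import numpy as np

lam, lam_s, lam_d, eps = 0.05, 0.05, 0.02, 0.1   # assumed, dimensionless values

def transverse_rate(q):
    # nontrivial root of s*(1 + lam^2 q^2) + 2*(lam^2 q^2 - eps) = 0
    return 2.0 * (eps - lam**2 * q**2) / (1.0 + lam**2 * q**2)

def longitudinal_rates(q):
    # roots of s^2 (1 + lam_s^2 q^2) + 2 s ((lam_s^2 + lam_d^2) q^2 - eps) + 4 lam_d^2 q^2 = 0
    a = 1.0 + lam_s**2 * q**2
    b = 2.0 * ((lam_s**2 + lam_d**2) * q**2 - eps)
    c = 4.0 * lam_d**2 * q**2
    return np.roots([a, b, c])

for q in (1.0, 5.0, 20.0):
    print(q, transverse_rate(q), longitudinal_rates(q))
# Complex longitudinal roots signal the oscillatory (wave-like) modes discussed above.
```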
It is worth noting that this structure, of one equation describing response and the other feedback, each with their own timescale, is strongly reminiscent of the structure of the Lotka-Volterra equations for predator-prey dynamics. As we have noted in the main text, the resonant timescale of the oscillator we obtain for the longitudinal modes is the geometric mean of the two underlying relaxation times, just as in the Lotka-Volterra model [38; 39].
Instead of excluding \(\mathbf{m}\) and writing the equation of motion for \(\mathbf{w}\) (101), we can equally well exclude \(\mathbf{w}\) and write the equation of motion for \(\mathbf{m}\). This happens to have identically the same form as equation (101). Mathematically, this is due to the fact that the differential operators that relate \(\mathbf{m}\) and \(\mathbf{w}\) commute with one another.
We can also derive a closed equation of motion for \(\delta\phi\). Again, we begin with the linear response relation for \(\delta\phi\) given \(\mathbf{m}\), which may be derived by taking the divergence of equation (100) and using the continuity equation (100a)
\[\begin{split}\left(1-\lambda_{s}^{2}\nabla^{2}\right)\tau\partial_ {t}\delta\phi=& 2\lambda_{d}^{2}\nabla^{2}\delta\phi\\ &+\frac{f\rho\tau\phi_{0}(1-\phi_{0})}{\zeta}\nabla\cdot\mathbf{ m}\.\end{split} \tag{102}\]
Within linear response, taking the feedback equation in linear form (100), this produces
\[\begin{split}&(1-\lambda_{s}^{2}\nabla^{2})\tau^{2}\partial_{t}^{ 2}\delta\phi-4\lambda_{d}^{2}\nabla^{2}\delta\phi\\ &+2\left(1-\frac{f\rho\tau}{3a\zeta}-(\lambda_{s}^{2}+\lambda_{d }^{2})\nabla^{2}\right)\tau\partial_{t}\delta\phi=0\.\end{split} \tag{103}\]
As before, at any wave vector \(q\), this is an oscillator equation with friction term affected by the force. Analysis of this equation, therefore, leads to the same conclusions as before.
### Nonlinear regime
Beyond linear response when nonlinearities are at play, we cannot resort to Fourier modes, so we must work with a version of equation (100) in the time domain
\[\begin{split}\left(1+\lambda^{2}\nabla\times\nabla\times-\lambda _{s}^{2}\nabla\nabla\cdot\right)\tau\partial_{t}\mathbf{w}=& 2\lambda_{d}^{2}\nabla\nabla\cdot \mathbf{w}\\ &+\frac{f\rho\tau}{\zeta}\partial_{t}\mathbf{m}\.\end{split} \tag{104}\]
We formally write the solution of the nonlinear equation for \(\mathbf{m}\) as
\[\mathbf{m}=\left(2+\tau\partial_{t}\right)^{-1}2\mathbf{m}_{\text{eq}}( \mathbf{w})\, \tag{105}\]
plug this solution into equation (104), and then use the fact that the operator \((2+\tau\partial_{t})\) commutes with both spatial and time derivatives in equation (104). As a result, we arrive at
\[\begin{split}&\tau^{2}\partial_{t}^{2}\left[1-\lambda_{s}^{2}\nabla \nabla\cdot+\lambda^{2}\nabla\times\nabla\times\right]\mathbf{w}-4\left[ \lambda_{d}^{2}\nabla\nabla\cdot\right]\mathbf{w}\\ &+2\tau\partial_{t}\left[1-\left(\lambda_{s}^{2}+\lambda_{d}^{2} \right)\nabla\nabla\cdot+\lambda^{2}\nabla\times\nabla\times-\right.\\ &\left.-\frac{f\rho\gamma}{3a\zeta T}\left(1-\frac{\tau^{2}}{15a^ {2}}\mathbf{w}^{2}\right)\right]\mathbf{w}=0\,\end{split} \tag{106}\]
which is equation (11) in the main text.
### List of possible nonlinearities
In our analysis of the nonlinear regime above, we investigated the effects of the saturation of the orientation field \(\mathbf{m}\), which cannot take values \(|\mathbf{m}|>1\). We deem this to be an important nonlinearity to consider, as otherwise the system quickly diverges into states which violate the very definition of \(\mathbf{m}\) as an average of unit vectors, rendering the model inconsistent. In addition however, there are a number of deviations from the linear response regime which could be taken into account, but which we choose to neglect for simplicity. These include:
* The advection of force dipoles by the surrounding fluid flow, which would result in a term proportional to \(\mathbf{v}^{\text{p}}\cdot\nabla\mathbf{m}\) added to Equation (102).
* The nonlinear osmotic pressure \(\Pi\) due to large variations in concentration, proportional to \(\delta\phi^{2}\) and higher powers.
* Nonlinear rheology (dependence of stress tensor \(\sigma^{\text{p}}\) on velocities), such as shear-thinning or thickening effects, as well as non-local rheological response (so-called \(q-\)dependent rheology [56]).
* Nematic contribution to the stress tensor, proportional to \(\mathbf{w}\mathbf{w}\).
* Active nematic contribution to the stress tensor, proportional to \(f\mathbf{w}\mathbf{w}\).
* Dependence of activity on density, as has been observed in the case of bacterial swarms for example. This would have a generic nonlinear effect on the microscopic forcing of the dipoles \(f(\delta\phi)\).
* Extra osmotic pressure due to activity, be it due to resulting ATP concentration gradients or other chemical fuels and waste resulting from activity.
We choose to neglect these so that our model may be tractable analytically, however they may be included in future numerical studies of this model.
### Stability and conservation of mass
Although by construction our equations describe only the redistribution of chromatin driven by motors, and do not involve either change in the amount of material or spontaneous motion of chromatin, it is technically useful and important to see how these properties are implemented in the final equations of motion, like Eq. (18). Furthermore, it will be useful for us to ensure that our numerical scheme detailed in Section D indeed satisfies these constraints.
Consider, for instance, the linear response equation (102) for \(\delta\phi\). If there is no drive, i.e. \(\mathbf{m}=0\), but \(\delta\phi\) happens to be nonzero at \(t=0\), then (102) guarantees that \(\delta\phi\) will decay stably to \(0\). This follows from the fact that the Laplacian operator has negative eigenvalues. For instance, in an infinite domain where we can write \(\nabla^{2}\to-q^{2}\), we would have
\[\delta\phi_{q}(t)=\delta\phi_{q}(0)\exp\left(-\frac{2\lambda_{d}^{2}q^{2}}{1+ \lambda_{s}^{2}q^{2}}t\right)\;. \tag{19}\]
Consider now the more interesting case where there is a drive, \(\mathbf{m}\neq 0\). Suppose first that the domain is very large but \(\mathbf{m}\) is only located in some part of this domain, while far away both \(\mathbf{m}\) and \(\delta\phi\) are \(0\). Then, integrating equation (102) over the whole volume gives
\[\partial_{t}\int\delta\phi dV=0\;, \tag{20}\]
which means the total amount of polymer material is conserved, as expected.
In the case of a finite domain \(\Omega\) where activity may happen close to the boundary, we still expect a boundary condition \(\mathbf{w}=0,\;\mathbf{m}=0\) at the boundary (or, if there is hydrodynamic slip on the boundary, then only the normal components are \(0\), which does not affect our conclusions). Then, integrating equation (102) over \(\Omega\), we are left with
\[\tau\partial_{t}\int_{\Omega}\delta\phi\mathrm{d}V=\oint_{\partial \Omega}\left(\lambda_{d}^{2}+\lambda_{s}^{2}\tau\partial_{t}\right)\nabla \delta\phi\cdot\mathrm{d}\mathbf{S}\] \[\quad=\frac{\phi_{0}(1-\phi_{0})^{2}\tau}{\zeta}\oint_{\partial \Omega}\left(K\nabla\delta\phi-\eta^{\mathrm{p}}\nabla\nabla\cdot\mathbf{v}^ {\mathrm{p}}\right)\cdot\mathrm{d}\mathbf{S}\;, \tag{21}\]
where we have used divergence theorem on the right-hand side integral, followed by using the continuity equation (17a). The term on the right-hand side must therefore be \(0\) to guarantee the conservation of \(\delta\phi\). This is seen by remembering the force-balance condition for the polymer at the boundary: since \(\mathbf{v}^{\mathrm{p}}=\mathbf{m}=0\) at the boundary, the only forces are due to viscosity and osmotic pressure, which must exactly cancel out. Thus, the integrand in the right-hand-side of (21) is exactly \(0\) everywhere at the boundary.
Finally, the numerical scheme we show in Section D has no boundaries and assumes a periodic domain, so it will automatically conserve \(\int\delta\phi dV\).
### Numerical Solutions
To investigate the solutions to the nonlinear equation of motion (106), we wrote a simple numerical scheme to solve the system in one dimension. We write the equations using a non-dimensional version of the velocity field \(\tilde{w}(x,t)=\frac{\tau}{3a}w(x,t)\). We use the two first-order equations (104) and (19) instead of their combination, which allows us to numerically integrate only first-order differential equations in time. In one-dimensional form, these equations read
\[\begin{split}(1-\lambda_{s}^{2}\partial_{x}^{2})\partial_{t}\tilde {w}&=2\lambda_{d}^{2}\partial_{x}^{2}\tilde{w}+(\epsilon+1) \partial_{t}m\\ \partial_{t}m&=-2m+2\tilde{w}\left(1-\beta\frac{3}{ 5}\tilde{w}^{2}\right)\;.\end{split} \tag{22}\]
Here, \(\beta\) is a parameter which we set to \(0\) or \(1\) depending on whether we want to consider nonlinear effects. We have also set the characteristic time \(\tau=1\). We solve these equations using an explicit forward time-stepping scheme, and treat the spatial derivatives with Fast Fourier Transform by assuming the domain is periodic. It is worth noting that since these two equations are equivalent to one second-order differential equation in time for \(\tilde{w}\), we must specify two initial conditions. Either we set \(\tilde{w}(x,t=0),m(x,t=0)\), or one of these must be specified along with its time derivative. For all of the solutions below, we set the screening length to be much smaller than the domain size, \(\lambda_{s}/L=10^{-4}\), since we are interested in the large-scale near-critical dynamics of this system.
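For concreteness, the following is a minimal sketch (ours, not the authors' code) of such a scheme for equations (22): spatial derivatives are taken pseudo-spectrally with the FFT on a periodic domain and time stepping is explicit forward Euler. The value of \(\lambda_{d}\), the initial condition amplitude and the step sizes are illustrative assumptions.

```python
# Minimal 1D pseudo-spectral / forward-Euler sketch for equations (22), tau = 1.
import numpy as np

L, N = 10.0, 256
dx = L / N
x = np.arange(N) * dx
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)      # wave numbers on the periodic domain

eps, beta = 0.2, 1.0
lam_s, lam_d = 1e-4 * L, 0.05 * L              # screening and osmotic lengths (lam_d assumed)
dt, n_steps = 2e-4, 250000                      # explicit Euler, stable for these parameters

w = np.zeros(N)                                 # dimensionless velocity field w_tilde
m = -(x - L / 2) * np.exp(-(x - L / 2) ** 2 / (2 * 0.2 ** 2))   # derivative-of-Gaussian kick

for _ in range(n_steps):
    dm = -2.0 * m + 2.0 * w * (1.0 - beta * 0.6 * w ** 2)        # second line of Eq. (22)
    rhs_hat = -2.0 * lam_d ** 2 * k ** 2 * np.fft.fft(w) + (eps + 1.0) * np.fft.fft(dm)
    dw = np.fft.ifft(rhs_hat / (1.0 + lam_s ** 2 * k ** 2)).real  # invert (1 - lam_s^2 d_xx)
    w += dt * dw
    m += dt * dm

print(w.max(), np.sqrt(5 * eps / (3 * (1 + eps))))  # saturated amplitude vs. steady-state prediction
```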
### Linear dynamics
First we set \(\beta=0\) and investigate the linear dynamics. As expected, when \(\epsilon>0\) the solutions diverge in time, but keeping this parameter close to \(0\) (we chose \(\epsilon=0.02\)), we observe interesting transient dynamics. We initialize \(\tilde{w}(x,t=0)=0\), and set \(m(x,t=0)\) to be a localized perturbation in the form of the derivative of a Gaussian with width \(0.2\), while the size of the whole domain is \(L=10\). This initial condition does not select a direction of propagation, which is why we observe it splitting into
two wave packets which move away from each other at a constant speed. In the main text, we identified this speed as being set by the combination \(\lambda_{d}/\tau\). This is shown in Supplemental Movie 1.
We also initialized the dynamics with an initial condition which does set the direction of propagation, by initializing \(\tilde{w}\) and \(m\) as shown in Fig. 3. When the system is thus initialized, the packet moves to the left at a constant speed and its shape is conserved. We first considered these dynamics for small \(\epsilon\) which leads to instabilities developing very slowly. Thus, this wavepacket keeps its shape for the duration of the numerical integration. This is shown in Supplemental Movie 2.
### Nonlinear dynamics
After turning on the nonlinearity, we increased the critical parameter to \(\epsilon=0.2\) so the system quickly reaches the nonlinear regime. We initialize the fields with \(\tilde{w}=0\), and \(m(x,t=0)\) also corresponding to the derivative of a Gaussian with width \(0.2\). After some complex developments, the system settles into a steady evolution where a near-square wave propagates at constant speed, which we checked to be close to \(2\lambda_{d}/\tau\), shown in Fig. 5. A movie showing the development of such nonlinear waves is shown in Supplemental Movie 3. The amplitude of the waves scales as \(\sqrt{\epsilon}\), as shown in Fig. 4, where we scanned multiple values of \(\epsilon\) and measured the amplitude of the resulting steady waves. When \(\epsilon\) gets large, the amplitude slightly deviates from the simple power-law behavior, and instead follows \(\tilde{w}=\sqrt{\frac{5\epsilon}{3(1+\epsilon)}}\). The latter relation can be found by solving for a constant steady-state in the equations (14). In Supplemental Movie 3, these predicted amplitudes are shown as black dashed lines. We also numerically verified that the wave speed in these nonlinear waves scales linearly with \(\lambda_{d}\). Over a range of values for this parameter, we tracked the maximum of a traveling pulse and recorded its velocity. These velocities grow linearly with \(\lambda_{d}\) as expected, shown in Fig. 5.
Figure 3: Initial condition for \(m,\tilde{w}\) used in Supplemental Movie 2, which leads to a conserved wave-packet moving to the left at constant speed.

Figure 4: Scaling of the wave amplitude with the critical parameter \(\epsilon\), as determined by numerical integration of equations (14). For small values of \(\epsilon\), the scaling of the amplitude follows the expected \(\epsilon^{1/2}\). At larger values there is a deviation. This can be explained by solving for the steady-state of the equations of motion, which give an expected amplitude of \(\sqrt{5\epsilon/3(1+\epsilon)}\).

Figure 5: Scaling of pulse velocity with the osmotic lengthscale \(\lambda_{d}\), as measured from numerical integration of (14). From our equations of motion we expected the velocity to scale linearly with \(\lambda_{d}\), and indeed here the line \(v=2\lambda_{d}\) goes through the data, confirming our expectations.

### Estimates

In the main text, we make the claim that molecular motors are too far from one another to form long-range order due to their direct contacts. There are approximately \(10^{5}\) RNA polymerase II molecules in a HeLa cell [44], whose nucleus is approximately \(10\,\mu\)m in diameter, leading to a density around \(10^{2}\,\)molecules/\(\mu\)m\({}^{3}\), which corresponds to the average distance of the order of \(\sim 200\,\)nm. In other words, if we take the size of RNA polymerase to be on the order of \(10\,\)nm in each dimension [3], then we obtain a small volume fraction \(\phi_{\text{RNAPII}}\sim 10^{-4}\). This sparseness is also seen, despite some local functional clustering, in superresolution microscopy experiments [20].
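These numbers are easy to reproduce; the short arithmetic check below (ours) uses only the figures quoted above.

```python
# Back-of-the-envelope check of the RNA polymerase II estimates quoted above.
import numpy as np

n_pol = 1e5                                    # RNAPII copies per HeLa nucleus [44]
r_nucleus_um = 5.0                             # nucleus diameter ~10 um
volume_um3 = 4.0 / 3.0 * np.pi * r_nucleus_um ** 3
density = n_pol / volume_um3                   # ~2e2 molecules per um^3
spacing_nm = 1e3 * density ** (-1.0 / 3.0)     # mean spacing ~ density^(-1/3), ~170-200 nm
phi_rnapii = density * 0.010 ** 3              # motor volume ~(10 nm)^3 = 1e-6 um^3, i.e. of order 1e-4
print(density, spacing_nm, phi_rnapii)
```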
We had previously estimated [24] the relevant length scales, and we will repeat these estimates here so that this paper may be self-contained. We expect the mesh size, \(\lambda\), to range from around \(30\,\)nm to \(100\,\)nm [47; 48]. In contrast, the ratio of viscosities \(\eta^{\text{p}}/\eta^{\text{s}}\) is harder to estimate. Bare nucleoplasm has been measured to have a viscosity on the same order as that of water [34; 35], \(\eta^{\text{s}}\sim 10^{-3}\,\text{Pa}\cdot\text{s}\), whereas a wide range of chromatin viscosities has been measured, \(\eta^{\text{p}}\approx 0.6-3000\,\text{Pa}\cdot\text{s}\)[28; 29; 30; 31; 32; 33; 34; 35; 36], reflecting in part the complicated nature of this quantity. Thus, experimental ranges for \(\eta^{\text{p}}/\eta^{\text{s}}\) lie between \(10^{2}\) and \(10^{6}\). The screening length scale relevant in this paper is \(\lambda_{s}=\sqrt{\eta^{\text{p}}/\zeta}=\lambda\sqrt{\eta^{\text{p}}/\eta^{\text{s}}}\). At the upper limit of the estimates, this length scale becomes much larger than the size of the nucleus, making it irrelevant for our system of interest. The lower limit is \(\approx 300\,\)nm, which is more consistent with the length scales relevant in the context of chromatin.
To estimate the length scale \(\lambda_{d}\), we assume \(K\simeq\frac{T}{\lambda^{3}}\)[56], and \(\gamma\simeq C\eta^{\text{s}}a^{3}\), where the constant \(C\) is an unknown parameter, resulting from the fact that it is unclear whether the motors experiencing the rotational friction \(\gamma\) are able to "feel" the polymer viscosity or whether they are small enough that the only relevant viscous dissipation is that of the solvent. As their size is about \(20\) nm, comparable to the mesh size [46], the constant \(C\) can be assumed to be \(C\ll\eta^{\text{p}}/\eta^{\text{s}}\sim(10^{2}-10^{6})\). From these assumptions, we obtain \(\lambda_{d}\sim\sqrt{\frac{a^{3}}{\lambda}}\sqrt{C}\ll\lambda_{s}\), resulting in roughly \(10-20\,\)nm. Finally, we estimate the dipole relaxation time \(\tau\simeq C\eta^{\text{s}}a^{3}/T\sim C10^{-6}\,\)s, and so the expected speed for traveling polymer waves is on the order of \(\lambda_{d}/\tau\sim 10^{7}\,\)nm/s.
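As a numerical illustration of these scalings (with \(C=1\) assumed, since the prefactor is unknown):

```python
# Order-of-magnitude evaluation of lambda_d, tau and the wave speed quoted above.
import numpy as np

kT = 4.1e-21       # J, thermal energy at room temperature
eta_s = 1e-3       # Pa*s, solvent (nucleoplasm) viscosity
a = 20e-9          # m, motor size
mesh = 50e-9       # m, chromatin mesh size lambda (30-100 nm range)
C = 1.0            # unknown prefactor in gamma ~ C * eta_s * a^3 (assumed here)

lambda_d = np.sqrt(a ** 3 / mesh) * np.sqrt(C)   # ~1.3e-8 m, i.e. 10-20 nm
tau = C * eta_s * a ** 3 / kT                    # ~2e-6 s
print(lambda_d * 1e9, tau, lambda_d / tau * 1e9) # nm, s, nm/s (~1e7 nm/s)
```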
|
2306.08240 | Semi-supervised Cell Recognition under Point Supervision | Cell recognition is a fundamental task in digital histopathology image
analysis. Point-based cell recognition (PCR) methods normally require a vast
number of annotations, which is extremely costly, time-consuming and
labor-intensive. Semi-supervised learning (SSL) can provide a shortcut to make
full use of cell information in gigapixel whole slide images without exhaustive
labeling. However, research into semi-supervised point-based cell recognition
(SSPCR) remains largely overlooked. Previous SSPCR works are all built on
density map-based PCR models, which suffer from unsatisfactory accuracy, slow
inference speed and high sensitivity to hyper-parameters. To address these
issues, end-to-end PCR models are proposed recently. In this paper, we develop
a SSPCR framework suitable for the end-to-end PCR models for the first time.
Overall, we use the current models to generate pseudo labels for unlabeled
images, which are in turn utilized to supervise the models training. Besides,
we introduce a co-teaching strategy to overcome the confirmation bias problem
that generally exists in self-training. A distribution alignment technique is
also incorporated to produce high-quality, unbiased pseudo labels for unlabeled
data. Experimental results on four histopathology datasets concerning different
types of staining styles show the effectiveness and versatility of the proposed
framework. Code is available at
\textcolor{magenta}{\url{https://github.com/windygooo/SSPCR} | Zhongyi Shui, Yizhi Zhao, Sunyi Zheng, Yunlong Zhang, Honglin Li, Shichuan Zhang, Xiaoxuan Yu, Chenglu Zhu, Lin Yang | 2023-06-14T04:56:31Z | http://arxiv.org/abs/2306.08240v1 | # Semi-supervised Cell Recognition under Point Supervision+
###### Abstract
Cell recognition is a fundamental task in digital histopathology image analysis. Point-based cell recognition (PCR) methods normally require a vast number of annotations, which is extremely costly, time-consuming and labor-intensive. Semi-supervised learning (SSL) can provide a shortcut to make full use of cell information in gigapixel whole slide images without exhaustive labeling. However, research into semi-supervised point-based cell recognition (SSPCR) remains largely overlooked. Previous SSPCR works are all built on density map-based PCR models, which suffer from unsatisfactory accuracy, slow inference speed and high sensitivity to hyper-parameters. To address these issues, end-to-end PCR models are proposed recently. In this paper, we develop a SSPCR framework suitable for the end-to-end PCR models for the first time. Overall, we use the current models to generate pseudo labels for unlabeled images, which are in turn utilized to supervise the models training. Besides, we introduce a co-teaching strategy to overcome the confirmation bias problem that generally exists in self-training. A distribution alignment technique is also incorporated to produce high-quality, unbiased pseudo labels for unlabeled data. Experimental results on four histopathology datasets concerning different types of staining styles show the effectiveness and versatility of the proposed framework. The code is available at [https://github.com/windygooo/SSPCR](https://github.com/windygooo/SSPCR).
Keywords:semi-supervised learning cell recognition microscopy image.
## 1 Introduction
Cell recognition, which aims to localize and classify cells in histopathology images, is fundamental for numerous downstream tasks including whole slide image (WSI) classification [3], tumor microenvironment analysis [10] and cancer prognosis prediction [8]. Recently, point-based cell recognition (PCR) has attracted much attention because of its low annotation cost [9, 19, 28]. In general, the histopathology images for the training of PCR models are cropped from
WSIs. As exhaustively labeling hundreds of thousands of cells in a gigapixel WSI is extremely expensive, it is a common practice to crop a few region-of-interest (ROI) patches from WSIs for annotation, leaving a large proportion of cells unused (see Fig. 1). Without a shadow of doubt, exploiting these unlabeled cells effectively would improve the performance of full-supervised PCR models. However, under the point annotation setting, the study on how to use these unlabeled cells remains largely under-explored.
Semi-supervised learning (SSL) that intends to use labeled as well as unlabeled data to perform specific learning tasks [24] provides a shortcut to make full use of cells in WSIs. To the best of our knowledge, there are three papers [1; 22; 23] exploring the pathways of semi-supervised point-based cell recognition (SSPCR) so far. In [1], the model is retrained with the predicted density map for unlabeled images and this process is repeated several times. [22] performs SSL via global consistency regularization and local consistency adversarial learning. [23] utilizes unlabeled samples via location-aware adversarial image reconstruction. However, these three frameworks are all built on density map-based PCR models [18; 26; 28], which inevitably suffer from unsatisfactory accuracy, low inference efficiency and extensive, data-specific hyper-parameter tuning due to the need for pre- and post-processing [19]. To address these problems, recent studies [13; 19; 20; 21] propose end-to-end PCR models that can directly output the coordinates and categories of cells, exhibiting superior cell recognition accuracy and efficiency over density map-based PCR models [19]. However, the existing SSPCR frameworks are incompatible with the end-to-end PCR models.
Figure 1: Due to limited human resources in actual scenes, only a small portion of cells, marked by the red boxes, is annotated for developing PCR models. The large number of unlabeled cells outside the boxes contains much valuable information for cell recognition, but is commonly ignored.

In this paper, we contribute a SSPCR framework applicable to the state-of-the-art (SOTA) end-to-end PCR models for the first time. Overall, the proposed framework follows the teacher-student self-training paradigm, where the teacher model is updated by the student model in an Exponential Moving Average (EMA) manner and meanwhile provides pseudo point annotations on unlabeled images for the student model training. Moreover, we introduce co-teaching [11, 14] and distribution alignment [7, 2] techniques to improve the effectiveness of our framework. Extensive experiments on four histopathology datasets with different types of staining styles (HE, Ki-67, PD-L1 and HER2) show that our method improves the performance of the end-to-end PCR models significantly under various labeling ratios.
## 2 Semi-Supervised Learning Framework
In SSPCR, a set of labeled images \(\mathcal{D}_{l}=\{(x_{i}^{l},y_{i}^{l})\}_{i=1}^{N_{l}}\) and a set of unlabeled images \(\mathcal{D}_{u}=\{x_{i}^{u}\}_{i=1}^{N_{u}}\) are available, where \(N_{l}\) and \(N_{u}\) denote the number of labeled and unlabeled data, respectively. Usually, \(N_{u}\gg N_{l}\). For each labeled image \(x_{i}^{l}\), the annotation \(y_{i}^{l}\) comprises locations and cell categories of all points.
We illustrate the proposed framework in Fig. 2. In the following sections, we first describe the teacher-student mutual learning scheme and then introduce the co-teaching strategy [11, 14] where two paired teacher-student models are built to provide pseudo labels crossly. Finally, we elaborate how to generate unbiased pseudo labels using the distribution alignment technique [2, 7].
Figure 2: Overview of our proposed SSPCR framework. (a) Schematic of the co-teaching strategy. S and T represent student and teacher models, respectively. (b) Detailed framework. The symbol # indicates that the strong augmentations applied on labeled and unlabeled images are different. \(\mathcal{L}_{s}\) and \(\mathcal{L}_{u}\) represent the training losses calculated with ground-truth and pseudo labels separately.

**Teacher-student Mutual Learning.** The proposed framework adopts the pseudo-labeling method to utilize unlabeled samples. As the quality of pseudo labels is critical to the performance of our method, we maintain a teacher model, which can be regarded as a temporal ensemble of the student models at different training iterations, to generate accurate pseudo labels [15]. To be specific, the student model is optimized by back-propagation with both ground-truth and pseudo labels, whereas the teacher model is slowly updated via Exponential Moving Average (EMA):
\[\begin{split}\theta_{s}&=\theta_{s}-\eta\frac{\partial\mathcal{L}}{\partial\theta_{s}}\\ \theta_{t}&=\alpha\theta_{t}+(1-\alpha)\theta_{s}\end{split} \tag{1}\]
where \(\theta_{s}\) and \(\theta_{t}\) represent the parameters of the student and teacher models, respectively. \(\mathcal{L}\) is the training loss and \(\eta\) denotes the learning rate. \(\alpha\in[0,1]\) is a free parameter controlling the update speed of the teacher model.
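In a typical deep-learning framework the EMA update of Eq. (1) amounts to a few lines; the following PyTorch-style sketch (ours, purely illustrative) assumes the teacher and student share the same architecture, with the student updated by the optimizer as usual.

```python
# Illustrative EMA teacher update corresponding to Eq. (1).
import torch

def ema_update(teacher, student, alpha=0.99):
    """teacher <- alpha * teacher + (1 - alpha) * student, parameter by parameter."""
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)
```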
In the training process, a mini-batch is composed of \(n_{l}\) labeled and \(n_{u}\) unlabeled images. We apply strong augmentation on the labeled images to increase the diversity of the student model so that the performance of the teacher model can be progressively improved [15]. The unlabeled images processed by two data augmentations with different strengths are used as input of the student and teacher models, respectively. Concretely, the teacher model takes weakly augmented unlabeled images as input to produce reliable pseudo labels while the strongly perturbed ones are fed into the student model for consistency learning [15, 25].
**Co-teaching.** As revealed in [11], the teacher model is prone to make similar predictions with the student model as training proceeds, leading to error accumulation once incorrect pseudo labels are injected. To alleviate this confirmation bias problem, we introduce a co-teaching strategy from [11, 14]. Specifically, we train two student models (S1, S2) with different initializations simultaneously, each of which maintains its paired teacher model (T1 or T2) via EMA. T1 and T2 are separately used to generate pseudo labels for the training of S2 and S1, which allows the rectification of incorrect pseudo labels when at least one teacher model gives the correct prediction for an unlabeled sample.
**Unbiased Pseudo Label Generation.** Class imbalance is a widespread problem in cell recognition applications. It is well known that deep learning models trained on an imbalanced dataset would produce predictions biased toward the dominant categories [7]. Therefore, the class distribution of pseudo labels deviates from the true distribution. Furthermore, the imbalance ratio would further increase if a single confidence threshold is applied upon all categories of predictions to filter out low-quality pseudo labels. Training with such biased pseudo labels can impair the performance of PCR models significantly [15].
To generate unbiased pseudo labels using a teacher model, we introduce a distribution alignment technique [2, 7] to customize class-specific thresholds \(T=\{t_{i}\}_{i=1}^{C}\). Specifically, the thresholds are calculated through the following equations:
\[\hat{n}_{i}^{u}=\frac{N_{u}}{N_{l}}\cdot n_{i}^{l},\ i=1,\cdots,C \tag{2}\]
where \(n_{i}^{l}\) is the number of labeled cells of category \(i\). \(\hat{n}_{i}^{u}\) denotes the number of cells predicted as the \(i\)-th class with confidence scores higher than \(t_{i}\) in the unlabeled data. With the support of \(T\), we can align the class distribution of
pseudo labels generated by a teacher model with that of labeled data, which greatly mitigates the side effect of class imbalance. It is worth noting that we refresh \(T\) at the start of each epoch to ensure its timeliness.
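In practice, each threshold \(t_{i}\) can be obtained by sorting the teacher's confidence scores for class \(i\) and keeping the top \(\hat{n}_{i}^{u}\) predictions. A possible implementation is sketched below (ours; function and variable names are illustrative).

```python
# Sketch of the class-specific thresholds implied by Eq. (2).
import numpy as np

def class_thresholds(scores_per_class, n_labeled_per_class, n_labeled_imgs, n_unlabeled_imgs):
    """scores_per_class[i]: 1-D array of teacher confidence scores predicted as class i."""
    thresholds = []
    for scores, n_l in zip(scores_per_class, n_labeled_per_class):
        n_keep = int(round(n_unlabeled_imgs / n_labeled_imgs * n_l))   # n_hat_i^u from Eq. (2)
        s = np.sort(np.asarray(scores))[::-1]                          # descending order
        if n_keep == 0 or s.size == 0:
            thresholds.append(1.0)                                     # keep nothing for this class
        else:
            thresholds.append(float(s[min(n_keep, s.size) - 1]))       # keep the n_keep best scores
    return thresholds

# toy usage: two classes, 100 labeled and 400 unlabeled images
print(class_thresholds([np.random.rand(5000), np.random.rand(300)], [200, 10], 100, 400))
```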
**Loss Function.** The end-to-end PCR methods [13, 20, 19, 21] are optimized by minimizing the weighted sum of classification and regression losses under fully supervised conditions. Prior semi-supervised object detection studies [2, 11, 14, 15] show that locations of pseudo boxes are seriously noisy. We also find this positional noise under the setting of point annotation. Therefore, we only utilize the unlabeled data to supervise the classification learning of the student models. Overall, the loss function of the proposed SSPCR framework is:
\[\mathcal{L}=\mathcal{L}_{s}^{cls}+\lambda\mathcal{L}_{s}^{reg}+\beta\mathcal{ L}_{u}^{cls} \tag{3}\]
where \(\mathcal{L}_{s}^{cls}\) and \(\mathcal{L}_{u}^{cls}\) represent the classification loss calculated with labeled and unlabeled data, respectively. \(\mathcal{L}_{s}^{reg}\) denote the regression loss calculated with labeled data. \(\lambda\) and \(\beta\) are weighting factors. The calculation details about \(\mathcal{L}^{reg}\) and \(\mathcal{L}^{cls}\) can be found in [20].
## 3 Experiments
### Dataset description and experimental settings
**Datasets.** We conduct experiments on four histopathology datasets with different staining styles (HE, Ki-67, PD-L1 and HER2), where 569466, 138644, 466200 and 833807 cell instances are labeled, respectively. The HE [4, 5], Ki-67, PD-L1 and HER2 stained datasets separately contain six, six, ten and six types of cell annotations. More information about these datasets including data sources, cell classes and image resolutions can be found in the supplementary material. We divide each dataset randomly into training, validation and test subsets in a 6:2:2 ratio. The effectiveness of the proposed SSPCR framework is validated with 5%, 10%, 15% and 20% ground-truth labels available in the training subset.
**Implementation Details.** We use DPA-P2PNet [20] with the backbone of ResNet-50 [6] as our cell recognizer. The AdamW optimizer [17] with a fixed learning rate of 1e-4 is adopted to optimize the student models. The number of labeled and unlabeled images in a mini-batch is set to 4. Note that only the labeled data are used in the first 50 epochs to pre-train the models. Then, both labeled and unlabeled data are used for training in the rest 150 epochs. We set the EMA rate \(\alpha\) to 0.99. In the loss function, \(\lambda\) is set to 2e-3, and \(\beta\) is set to 1. We apply data augmentation of RandomGridShuffle, RandomHorizontalFlip, RandomVerticalFlip, RandomBrightness and RandomContrast on labeled data to boost the model performance. Two augmentation pipelines with different strengths are constructed for the unlabeled data. The weak one that works for the teacher models only comprises RandomHorizontalFlip, while the strong one customized for the student models is composed of ColorJitter and GaussianBlur [15].
**Evaluation Metric.** As in previous PCR works [20, 19, 28], we use macro-average precision (P), recall (R) and F1 to evaluate all models. A predicted point is regarded as true positive (TP) if it is within the region of a ground truth point with a predefined distance threshold \(T_{m}\). Following [19, 20, 28], we set \(T_{m}\) to 6 on the HE stained dataset obtained at \(20\times\) magnification while 12 on the other three IHC stained datasets collected at \(40\times\) magnification.
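The matching criterion can be implemented with a simple one-to-one assignment; the sketch below (ours) uses a greedy nearest-unmatched strategy, since the paper does not specify the exact assignment algorithm.

```python
# Illustrative point-matching evaluation for a single class.
import numpy as np

def match_points(pred, gt, t_m):
    """Count true positives: each prediction may claim at most one unmatched ground-truth point within t_m."""
    if len(pred) == 0 or len(gt) == 0:
        return 0
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    used, tp = set(), 0
    for i in np.argsort(d.min(axis=1)):                 # handle the closest predictions first
        for j in np.argsort(d[i]):
            if d[i, j] > t_m:
                break
            if j not in used:
                used.add(j)
                tp += 1
                break
    return tp

def precision_recall_f1(pred, gt, t_m=6.0):
    tp = match_points(np.asarray(pred, float), np.asarray(gt, float), t_m)
    p = tp / max(len(pred), 1)
    r = tp / max(len(gt), 1)
    return p, r, 2 * p * r / max(p + r, 1e-9)
```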
### Experimental results
We present the performance gains brought by the proposed SSL method under different datasets and labeling ratios in Table 1. The experimental results show that the F1 score of the cell recognizer is improved by 1-4 points in both detection and classification, which demonstrates the effectiveness and versatility of our method. It is worth noting that the model trained on 10% and 15% labeled data using our framework outperforms the full supervised baselines trained on 15% and 20% labeled data separately on the HE, Ki-67 and PD-L1 datasets. An interesting finding is that the performance gain varies with the labeling ratio non-monotonically, which could be attributed to the combined effect of quality of pseudo labels and volume of unlabeled data. In general, with the increase of the labeling ratio, the pre-trained models could generate pseudo labels with higher quality.
To further validate the versatility of the proposed SSPCR framework, we replace ResNet-50 with another two representative backbones (i.e., ConvNext-B [16] and ViTDet-B [12]). Due to the page limitation, we only report the experimental results on the HE stained dataset in Table 2. It can been seen that the proposed SSPCR framework also works well for these two backbones. Surprisingly, we find that though ConvNext-B achieves higher baseline performance than ResNet-50, the increase of classification F1 is even larger using our method with 10%, 15% and 20% labels accessible. This can be explained by better pseudo labels and that the performance may be far from saturation on this dataset. We also notice that ViTDet-B has worse performance compared to the convolutional neural network (CNN) based ResNet-50 and ConvNext-B. This is because vision transformer (ViT) based models have weaker inductive bias than CNNs in modeling visual structures and thus require much more labeled data to learn such bias implicitly [27]. In fact, the requirement for large-scale supervised data limits the application of ViTs in medical image analysis tasks where accurate data annotation takes tremendous effort of experienced doctors. Fortunately, the experimental results show that the proposed SSL method improves the performance of ViTDet-B substantially by 3-5% on classification F1, which provides a solution to unleash the power of ViTs in the PCR task. The generality of our framework is also verified using another end-to-end PCR model (i.e., P2PNet [21]) as the cell recognizer. In general, the performance of P2PNet is consistently improved using the proposed SSPCR framework. The detailed experimental results can be found in the supplementary material.
### Ablation Study
We ablate the effects of teacher-student mutual learning (TSML), distribution alignment (DA) and co-teaching (CT) in the case of 5% labeled HE data. As
\begin{table}
\begin{tabular}{c|c|c|c c c c|c c c c} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{
\begin{tabular}{c} Labeling \\ ratio \\ \end{tabular} } & \multirow{2}{*}{SSL} & \multicolumn{4}{c|}{Detection} & \multicolumn{4}{c}{Classification} \\ \cline{5-10} & & & P & R & F1 & \(\Delta\)F1 & P & R & F1 & \(\Delta\)F1 \\ \hline \multirow{10}{*}{HE} & \multirow{2}{*}{5\%} & ✗ & 80.00 & 76.65 & 78.29 & \multirow{2}{*}{**+2.04**} & 47.34 & 45.92 & 46.48 \\ & & ✓ & 77.02 & 83.94 & 80.33 & & 48.50 & 50.67 & 49.27 \\ \cline{2-10} & \multirow{2}{*}{10\%} & ✗ & 79.28 & 83.77 & 81.46 & \multirow{2}{*}{**+1.35**} & 51.60 & 51.35 & 51.13 \\ & & ✓ & 80.75 & 84.97 & 82.81 & & 53.85 & 53.46 & 53.48 \\ \cline{2-10} & \multirow{2}{*}{15\%} & ✗ & 81.87 & 83.59 & 82.72 & \multirow{2}{*}{**+1.03**} & 52.68 & 53.09 & 52.84 \\ & & ✓ & 80.04 & 87.83 & 83.75 & & 54.26 & 56.30 & 55.12 \\ \cline{2-10} & \multirow{2}{*}{20\%} & ✗ & 79.55 & 87.49 & 83.33 & & 53.73 & 54.91 & 54.00 \\ & & ✓ & 81.59 & 87.44 & 84.41 & & 55.61 & 56.51 & 55.85 \\ \cline{2-10} & \multirow{2}{*}{100\%} & ✗ & 82.56 & 89.41 & 85.85 & & 60.73 & 60.58 & 60.38 \\ \hline \multirow{10}{*}{Ki-67} & \multirow{2}{*}{5\%} & ✗ & 66.57 & 71.21 & 68.81 & \multirow{2}{*}{**+2.77**} & 45.13 & 46.44 & 45.13 \\ & & ✓ & 68.57 & 74.87 & 71.58 & & 46.61 & 50.60 & 48.27 \\ \cline{2-10} & \multirow{2}{*}{10\%} & ✗ & 70.06 & 75.72 & 72.78 & \multirow{2}{*}{**+1.28**} & 47.26 & 50.99 & 48.86 \\ & & ✓ & 71.91 & 76.36 & 74.06 & & 50.46 & 52.10 & 50.96 \\ \cline{2-10} & \multirow{2}{*}{15\%} & ✗ & 71.33 & 75.55 & 73.38 & \multirow{2}{*}{**+1.49**} & 49.97 & 50.03 & 49.48 \\ & & ✓ & 71.71 & 78.32 & 74.87 & & 50.50 & 54.31 & 52.15 \\ \cline{2-10} & \multirow{2}{*}{20\%} & ✗ & 70.22 & 78.25 & 74.02 & \multirow{2}{*}{**+1.75**} & 49.29 & 53.24 & 50.63 \\ & & ✓ & 72.53 & 79.30 & 75.77 & & 51.05 & 55.15 & 52.81 \\ \cline{2-10} & \multirow{2}{*}{100\%} & ✗ & 76.43 & 81.35 & 78.81 & & 57.55 & 60.97 & 59.08 \\ \hline \multirow{10}{*}{PD-L1} & \multirow{2}{*}{5\%} & ✗ & 60.24 & 68.76 & 64.22 & \multirow{2}{*}{**+2.87**} & 32.11 & 31.80 & 29.85 \\ & & ✓ & 65.47 & 68.80 & 67.09 & & 34.86 & 32.50 & 32.39 \\ \cline{1-1} \cline{2-10} & \multirow{2}{*}{10\%} & ✗ & 67.04 & 70.00 & 68.49 & & 38.26 & 35.92 & 36.03 \\ & & ✓ & 70.73 & 71.66 & 71.20 & & 42.22 & 39.32 & 39.47 \\ \cline{1-1} \cline{2-10} & \multirow{2}{*}{15\%} & ✗ & 61.91 & 78.67 & 69.29 & \multirow{2}{*}{**+3.15**} & 36.77 & 40.48 & 37.16 \\ & & ✓ & 72.63 & 72.24 & 72.44 & & 43.47 & 41.45 & 40.49 \\ \cline{1-1} \cline{2-10} & \multirow{2}{*}{20\%} & ✗ & 64.70 & 75.02 & 69.48 & & 40.02 & 40.65 & 39.56 \\ & & ✓ & 69.31 & 73.93 & 71.54 & & 41.53 & 44.10 & 42.29 \\ \cline{1-1} \cline{2-10} & \multirow{2}{*}{100\%} & ✗ & 71.16 & 82.34 & 76.34 & & 50.70 & 57.26 & 53.66 \\ \hline \multirow{10}{*}{HER2} & \multirow{2}{*}{5\%} & ✗ & 75.53 & 76.64 & 76.08 & \multirow{2}{*}{**+1.08**} & 51.53 & 52.84 & 51.95 \\ & & ✓ & 73.57 & 81.11 & 77.16 & & 53.82 & 53.95 & 53.16 \\ \cline{1-1} \cline{2-10} & \multirow{2}{*}{10\%} & ✗ & 76.83 & 80.06 & 78.41 & & 56.00 & 55.89 & 55.52 \\ & & ✓ & 76.56 & 83.49 & 79.88 & & 57.19 & 58.46 & 57.51 \\ \cline{1-1} \cline{2-10} & \multirow{2}{*}{15\%} & ✗ & 77.68 & 81.80 & 79.69 & & **+1.48** & 57.93 & 58.04 & 57.75 \\ & & ✓ & 79.38 & 83.03 & 81.17 & & 60.51 & 60.51 & 60.21 \\ \cline{1-1} \cline{2-10} & \multirow{2}{*}{20\%} & ✗ & 80.02 & 81.43 & 80.72 & & **+1.23** & 61.92 & 58.91 & 60.06 \\ & & ✓ & 80.97 & 82.96 & 81.95 & & 64.09 & 60.87 & 62.04 \\ \cline{1-1} \cline{2-10} & \multirow{2}{*}{100\%} & ✗ & 84.51 & 83.22 & 83.86 & & 71.15 & 70.23 & 70.66 \\ \hline \end{tabular}
\end{table}
Table 1: Experimental results on four histopathology datasets. The labeling ratio indicates the percentage of available ground-truth labels in the training subset. SSL means that the unlabeled training images are exploited with our framework to improve the model performance.
shown in Table. 3, TSML only promotes the classification F1 by 0.17%, which can be attributed to the severe class imbalance in pseudo labels. To be specific, the imbalance ratio is 73 in ground-truth labels while 190 in pseudo labels. By further inclusion of CT to mitigate the confirmation bias issue, the model outperforms the baseline by 1.14% on classification F1. DA leads to a significant improvement (1.90%) as it generates nearly unbiased pseudo labels, where the imbalance ratio is reduced from 190 to 84. The performance gain reaches 2.79% by combing these three techniques together.
## 4 Conclusion
In this paper, we design a semi-supervised learning framework adapted to the SOTA end-to-end PCR models for the first time. The proposed SSPCR framework adopts the pseudo-labeling paradigm. Moreover, it incorporates the co
\begin{table}
\begin{tabular}{c|c|c|c c c|c c c c} \hline \multirow{2}{*}{Backbone} & \multirow{2}{*}{
\begin{tabular}{c} Labeling \\ ratio \\ \end{tabular} } & \multirow{2}{*}{SSL} & \multicolumn{4}{c|}{Detection} & \multicolumn{4}{c}{Classification} \\ \cline{6-10} & & & P & R & F1 & \(\Delta\)F1 & P & R & F1 & \(\Delta\)F1 \\ \hline \multirow{10}{*}{ConvNeXt-B} & \multirow{2}{*}{5\%} & ✗ & 78.08 & 82.68 & 80.31 & \multirow{2}{*}{**+0.85**} & 47.67 & 48.28 & 47.63 & **+2.18** \\ & & ✓ & 77.16 & 85.60 & 81.16 & & 48.25 & 51.99 & 49.81 & **+2.18** \\ \cline{2-10} & \multirow{2}{*}{10\%} & ✗ & 82.26 & 82.33 & 82.29 & \multirow{2}{*}{**+1.49**} & 53.95 & 51.68 & 52.53 & **+2.93** \\ & & ✓ & 82.24 & 85.39 & 83.78 & & 55.41 & 55.62 & 55.46 & **+2.93** \\ \cline{2-10} & \multirow{2}{*}{15\%} & ✗ & 79.31 & 87.68 & 83.29 & \multirow{2}{*}{**+1.41**} & 53.21 & 56.60 & 54.70 & **+2.39** \\ & & ✓ & 82.23 & 87.32 & 84.70 & & 57.24 & 57.20 & 57.09 & **+2.39** \\ \cline{2-10} & \multirow{2}{*}{20\%} & ✗ & 82.78 & 85.85 & 84.29 & \multirow{2}{*}{**+1.17**} & 55.55 & 56.05 & 55.75 & **+2.15** \\ & & ✓ & 83.20 & 87.85 & 85.46 & & 58.08 & 58.05 & 57.90 & **+2.15** \\ \cline{2-10} & \multirow{2}{*}{100\%} & & 86.55 & 88.35 & 87.44 & & 64.51 & 63.14 & 63.73 & \\ \hline \multirow{10}{*}{ViTDet-B} & \multirow{2}{*}{5\%} & ✗ & 76.12 & 73.70 & 74.90 & \multirow{2}{*}{**+2.42**} & 38.90 & 38.18 & 38.49 & **+4.82** \\ & & ✓ & 73.87 & 81.11 & 77.32 & & 42.30 & 44.70 & 43.31 & **+4.82** \\ \cline{2-10} & \multirow{2}{*}{10\%} & ✗ & 75.98 & 79.25 & 77.58 & \multirow{2}{*}{**+1.36**} & 42.66 & 44.72 & 43.52 & **+4.74** \\ & & ✓ & 76.82 & 81.19 & 78.94 & & 49.09 & 47.94 & 48.26 & **+4.74** \\ \cline{2-10} & \multirow{2}{*}{15\%} & ✗ & 77.83 & 79.55 & 78.68 & \multirow{2}{*}{**+1.55**} & 46.69 & 45.17 & 45.82 & **+3.62** \\ & & ✓ & 77.11 & 83.60 & 80.23 & & 49.69 & 49.71 & 49.44 & **+3.62** \\ \cline{2-10} & \multirow{2}{*}{20\%} & ✗ & 78.49 & 80.40 & 79.44 & \multirow{2}{*}{**+1.43**} & 47.09 & 46.78 & 46.75 & **+3.87** \\ & & ✓ & 77.92 & 84.04 & 80.87 & & 50.39 & 51.18 & 50.62 & **+3.87** \\ \cline{2-10} & \multirow{2}{*}{100\%} & ✗ & 77.21 & 88.28 & 82.38 & & 51.96 & 56.80 & 54.18 & \\ \hline \end{tabular}
\end{table}
Table 2: Performance of our framework with different backbones. The experiments are conducted on the HE stained dataset.
\begin{table}
\begin{tabular}{c|c|c|c c c|c c c} \hline \multirow{2}{*}{TSML} & \multirow{2}{*}{CT} & \multirow{2}{*}{DA} & \multicolumn{3}{c|}{Detection} & \multicolumn{3}{c}{Classification} \\ \cline{4-9} & & & P & R & F1 & \(\Delta\)F1 & P & R & F1 & \(\Delta\)F1 \\ \hline \multirow{4}{*}{✓} & & & 80.00 & 76.65 & 78.29 & & 47.34 & 45.92 & 46.48 & \\ & & & 75.44 & 83.17 & 79.11 & **+0.82** & 48.78 & 46.94 & 46.65 & **+0.17** \\ & & ✓ & 73.59 & 87.38 & 79.90 & **+1.61** & 48.75 & 49.71 & 47.62 & **+1.14** \\ & & ✓ & 76.76 & 83.58 & 80.02 & **+1.73** & 48.47 & 49.17 & 48.38 & **+1.90** \\ & & ✓ & 77.02 & 83.94 & 80.33 & **+2.04** & 48.50 & 50.67 & 49.27 & **+2.79** \\ \hline \end{tabular}
\end{table}
Table 3: Ablation study on teacher-student mutual learning (TSML), co-teaching (CT) and distribution alignment (DA) in the case of 5% labeled HE data.
teaching and distribution alignment techniques to overcome the confirmation bias problem and construct unbiased pseudo labels, respectively. Experimental results on four histopathology datasets demonstrate the effectiveness and generality of our proposed framework. The ablation studies validate the efficacy of the framework components. In our future work, we will explore the solutions to promote the localization accuracy of the cell recognizer via unlabeled data.
|
2310.15291 | Nuclear charge radius of $^{26m}$Al and its implication for V$_{ud}$ in
the quark-mixing matrix | Collinear laser spectroscopy was performed on the isomer of the aluminium
isotope $^{26m}$Al. The measured isotope shift to $^{27}$Al in the
$3s^{2}3p\;^{2}\!P^\circ_{3/2} \rightarrow 3s^{2}4s\;^{2}\!S_{1/2}$ atomic
transition enabled the first experimental determination of the nuclear charge
radius of $^{26m}$Al, resulting in $R_c$=\qty{3.130\pm.015}{\femto\meter}. This
differs by 4.5 standard deviations from the extrapolated value used to
calculate the isospin-symmetry breaking corrections in the superallowed $\beta$
decay of $^{26m}$Al. Its corrected $\mathcal{F}t$ value, important for the
estimation of $V_{ud}$ in the CKM matrix, is thus shifted by one standard
deviation to \qty{3071.4\pm1.0}{\second}. | P. Plattner, E. Wood, L. Al Ayoubi, O. Beliuskina, M. L. Bissell, K. Blaum, P. Campbell, B. Cheal, R. P. de Groote, C. S. Devlin, T. Eronen, L. Filippin, R. F. García Ruíz, Z. Ge, S. Geldhof, W. Gins, M. Godefroid, H. Heylen, M. Hukkanen, P. Imgram, A. Jaries, A. Jokinen, A. Kanellakopoulos, A. Kankainen, S. Kaufmann, K. König, Á. Koszorús, S. Kujanpää, S. Lechner, S. Malbrunot-Ettenauer, P. Müller, R. Mathieson, I. Moore, W. Nörtershäuser, D. Nesterenko, R. Neugart, G. Neyens, A. Ortiz-Cortes, H. Penttilä, I. Pohjalainen, A. Raggio, M. Reponen, S. Rinta-Antila, L. V. Rodríguez, J. Romero, R. Sánchez, F. Sommer, M. Stryjczyk, V. Virtanen, L. Xie, Z. Y. Xu, X. F. Yang, D. T. Yordanov | 2023-10-23T18:56:56Z | http://arxiv.org/abs/2310.15291v1 | # The nuclear charge radius of \({}^{26m}\)Al and its implication for V\({}_{ud}\) in the CKM matrix
###### Abstract
Collinear laser spectroscopy was performed on the isomer of the aluminium isotope \({}^{26m}\)Al. The measured isotope shift to \({}^{27}\)Al in the \(3s^{2}3p\)\({}^{2}\)\(P_{3/2}^{\circ}\to 3s^{2}4s\)\({}^{2}\)S\({}_{1/2}\) atomic transition enabled the first experimental determination of the nuclear charge radius of \({}^{26m}\)Al, resulting in \(R_{c}\)=3.130(15) fm. This differs by 4.5 standard deviations from the extrapolated value used to calculate the isospin-symmetry breaking corrections in the superallowed \(\beta\) decay of \({}^{26m}\)Al. Its corrected \({\cal F}t\) value, important for the estimation of \(V_{ud}\) in the CKM matrix, is thus shifted by one standard deviation to 3071.4(10) s.
Introduction.--The Cabibbo-Kobayashi-Maskawa (CKM) matrix is a central cornerstone in the formulation of the Standard Model of particle physics. It connects the quarks' mass with weak eigenstates and, thus, characterises the strength of quark-flavour mixing through the weak interaction. The first element in the top row of the matrix, \(V_{ud}\), manifests in the \(\beta\) decay of pions, neutrons or radioactive nuclei. While individual entries of the quark mixing matrix cannot be predicted within the Standard Model, the CKM matrix is required to be unitary - a tenet which is the subject of intense experimental scrutiny.
In recent years, the unitarity test of the top-row elements:
\[|V_{ud}|^{2}+|V_{us}|^{2}+|V_{ub}|^{2}=1-\Delta_{CKM}\]
has received significant attention. The unitarity of the CKM matrix demands the residual \(\Delta_{CKM}\) to vanish. However, recent advances in the theoretical description of (inner) radiative corrections [1; 2; 3; 4; 5; 6] to \(\beta\) decays resulted in a notable shift in \(V_{ud}\) and, thus, to a tension with respect to CKM unitarity. Following recommended values by the Particle Data Group [7], \(\Delta_{CKM}=15(7)\times 10^{-4}\) hints at a \(\approx 2\,\sigma\) deviation from unitarity although this discrepancy could be as large as 5.5 \(\sigma\), depending on which calculation of (nuclear-structure dependent [2; 8; 9; 10] and universal [1; 2; 3; 6]) radiative corrections are used in the determination of \(V_{ud}\) and which decay is considered to obtain \(V_{us}\)[7; 11; 12; 13; 14; 15; 16].
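For orientation, the quoted residual follows directly from the top-row matrix elements; the snippet below (ours) uses illustrative input values of the size currently recommended, not results of this work.

```python
# Illustrative evaluation of the top-row unitarity residual Delta_CKM.
v_ud, v_us, v_ub = 0.9737, 0.2243, 0.0038    # representative magnitudes, for illustration only
delta_ckm = 1.0 - (v_ud**2 + v_us**2 + v_ub**2)
print(delta_ckm)                              # ~1.5e-3, i.e. of the order of the 15e-4 quoted above
```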
At present, superallowed \(0^{+}\to 0^{+}\) nuclear \(\beta\) decays remain the most precise way to access \(V_{ud}\)[10]. For these cases, the experimentally measured \(ft\) value, characterising a \(\beta\) decay, can be related to a corrected \({\cal F}t\) value:
\[{\cal F}t=ft\cdot(1+\delta^{\prime}_{R})(1+\delta_{NS}-\delta_{C}), \tag{1}\]
where \(\delta^{\prime}_{R}\) and \(\delta_{NS}\) constitute the transition-dependent contributions to the radiative corrections while \(\delta_{C}\) are the isospin-symmetry breaking (ISB) corrections. According to the conserved vector-current hypotheses, the \({\cal F}t\) values should be identical for all superallowed \(\beta\) decays. When
averaged over all 15 precision cases, they serve to extract \(V_{ud}\).
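Equation (1) is straightforward to apply once the corrections are specified; the sketch below (ours) uses placeholder numbers of a realistic magnitude rather than the evaluated corrections for any particular transition.

```python
# Illustrative use of Eq. (1); the correction values are placeholders, not evaluated inputs.
def corrected_Ft(ft, delta_R_prime, delta_NS, delta_C):
    return ft * (1.0 + delta_R_prime) * (1.0 + delta_NS - delta_C)

print(corrected_Ft(ft=3037.6, delta_R_prime=0.0148, delta_NS=-0.0001, delta_C=0.0031))
# corrections of order 1% shift a ~3040 s ft value to a corrected Ft value near 3070 s
```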
While the experimental dataset on \(ft\) values of super-allowed \(\beta\) decays robustly builds on 222 individual measurements [10], theoretical corrections are under scrutiny. As part of this process, the uncertainties in the nuclear-structure dependent radiative corrections \(\delta_{NS}\) have recently been inflated by a factor of \(\approx 2.6\)[10]. Moreover, the ISB corrections \(\delta_{C}\), which are also nuclear-structure dependent, remain an ongoing focus of research which has stimulated new theoretical calculations [17; 18; 19; 20] as well as experimental benchmarks [21; 22; 23; 24; 25; 26].
For the determination of \(V_{ud}\), \({}^{26m}\)Al is of particular importance. The nuclear-structure dependent corrections, \(\delta_{NS}-\delta_{C}\), in \({}^{26m}\)Al are the smallest in size among all superallowed \(\beta\) emitters [10]. The same holds true for the combined experimental and theoretical uncertainties in the \(\mathcal{F}t\) value of \({}^{26m}\)Al [10]. Its extraordinary precision is thus almost on par with all other precision cases combined. In times of tension with CKM unitarity and rigorous examination of all involved theoretical corrections, it is, therefore, unsettling that one critical input parameter for the calculation of \(\delta_{C}\), i.e. the nuclear charge radius, is, in the case of \({}^{26m}\)Al, based on an extrapolated but experimentally unknown value [27; 28].
In this Letter, we report on isotope-shift measurements obtained via collinear laser spectroscopy (CLS) that put the nuclear charge radius of \({}^{26m}\)Al on a solid experimental footing. Implications for its \(\mathcal{F}t\) value and, thus, \(V_{ud}\) are discussed.
_Experiment. --_ Two independent experiments were performed, one at the COLLAPS beamline [29] at ISOLDE/CERN [30] and the other at the IGISOL CLS beamline [31] in Jyvaskyla/Finland. Details of the campaign on aluminium isotopes at COLLAPS are described in Ref. [32]. In short, radioactive aluminium atoms were synthesised by bombarding a uranium carbide target with 1.4-GeV protons from CERN's PS booster. Once released from the production target, the Al\({}^{+}\) ion beam was formed via resonant laser ionisation [33], subsequent electrostatic acceleration to 30 keV, and final mass selection via ISOLDE's magnetic high-resolution separator [34].
At IGISOL [35], the radionuclides of interest were produced in \({}^{27}\)Al(p,d) reactions at 25-MeV proton energy. After their release from a thin foil target and extraction from the He-gas filled gas cell, the Al ions were guided towards the high vacuum region of the mass separator via a sextupole ion guide, accelerated to 30 keV and mass separated by a 55\({}^{\circ}\) dipole magnet.
In both experiments, the ions were stopped, cooled and accumulated in a buffer-gas filled radio-frequency-quadrupole cooler buncher [36; 37] before they were delivered in
30-keV ion bunches to the respective CLS beamline. There, the ion beam was spatially super-imposed with the laser beam in collinear (COLLAPS) or anti-collinear (IGISOL) fashion. The ions' velocity was adjusted by a Doppler-tuning voltage applied before the neutralisation in a charge exchange cell filled with sodium vapour. In this manner, the laser frequency experienced in the rest frame of the neutral Al atoms could be scanned via Doppler tuning. Once on resonance with the selected transition, fluorescence was detected using a series of photomultiplier tubes and their associated lens systems which surrounded the laser-atom interaction region [38; 39].
In both campaigns, the main spectroscopic transition was the atomic \(3s^{2}3p\;^{2}P_{3/2}^{\circ}\to 3s^{2}4s\;^{2}S_{1/2}\) transition. Example resonance spectra of \({}^{26,26m}\)Al are shown in Fig. 1a (IGISOL) and Fig. 1b,c (COLLAPS), the latter recorded with ions extracted 0 s and 6 s after the proton impact on the ISOLDE target. The resonance intensity of the long-
lived ground state (green) changed only slightly between these two data sets, likely because of a small time dependence in the Al release. The much stronger decrease in isomer intensity (red) between the first and second 6 s of data taking was consistent with the isomer's half-life when each is normalised to the corresponding ground-state intensity.
Direct comparison of the spectra shows a higher overall rate and thus better statistics for the COLLAPS data set. This statement holds true for both measurements of \({}^{26,26m}\)Al as well as stable \({}^{27}\)Al, see Fig. 1d and 1e, which were interleaved with online data as reference measurements. On the other hand, the data from IGISOL benefits from a more favorable isomer-to-ground state ratio, compare Fig. 1a and 1b. The complementarity of the COLLAPS and IGISOL data sets in terms of high statistics versus better isomer-ground state ratio was further strengthened by their distinct control and evaluation of systematic uncertainties. Most importantly, the determination of the ion-acceleration voltage at COLLAPS was achieved by a high-precision voltage divider. At IGISOL, it was calibrated by CLS measurements of stable magnesium (Mg) ions with respect to their precisely known isotope shifts.
_Analysis and Results._ -- The measured resonance spectra of \({}^{26,26m,27}\)Al were fitted to the theoretical model of the hyperfine spectra using the SATLAS package [42]. To constrain the fit in the present work, the ratio of the hyperfine parameters \(A(P_{3/2})/A(S_{1/2})\) was fixed to the precise value of 4.5701(14), obtained in previous work on Al isotopes at COLLAPS [32]. However, this constraint was not applied to the present \({}^{27}\)Al part of the COLLAPS analysis as the analysed spectra were a subset of the measurements examined in Ref. [32].
For \({}^{26,26m}\)Al, a model of the \(I=5\) ground state and one of the \(I=0\) isomeric state were superimposed. Within each experimental campaign, all \({}^{26,26m}\)Al resonance spectra were fitted simultaneously with the same, shared hyperfine parameters as long as a parameter was not otherwise constrained, see above. Similarly, the isomer shift between ground and isomeric state in \({}^{26}\)Al was implemented as a shared fit parameter across a campaign's entire data set. The isomer centroid \(\nu_{0}^{26m}\) itself was freely varied for each individual spectrum. For the determination of \(\nu_{0}^{26m}\), the Doppler-tuning voltage was converted into frequency based on the isomer's ionic mass. It was verified in fits of simulated spectra that this approach led to accurate results despite the peak overlap with the resonance spectrum of the ground state.
Figure 1: **(a)** Example of a resonance spectrum of the main spectroscopic transition \(3s^{2}3p\;^{2}P_{3/2}^{\circ}\to 3s^{2}4s\;^{2}S_{1/2}\) obtained in the CLS measurements of \({}^{26,26m}\)Al at IGISOL. The inset demonstrates the isomer's presence (red) due to well separated ground and isomer states in the \(3s^{2}3p\;^{2}P_{1/2}^{\circ}\to 3s^{2}3d\;^{2}D_{3/2}\) transition. **(b,c)** The spectra of \({}^{26,26m}\)Al in the main transition at COLLAPS. Ions have been extracted 0 s (b) and 6 s (c) after the proton impact on the ISOLDE target, demonstrating the isomer's presence due to the decrease in intensity consistent with the isomer's half-life. **(d,e)** Examples of resonance spectra of the \({}^{27}\)Al references studied using the main transition at IGISOL (d) and COLLAPS (e). **(f)** Extracted isotope shifts (points) and the resulting weighted average (a horizontal line), including systematic uncertainties.
Voigt profiles were chosen for the lineshapes of individual resonance peaks with no intensity constraints in the ground state. The Lorentzian and Gaussian widths were shared between ground-state and isomer peaks within each individual spectrum but not shared overall. Due to inelastic collisions in the charge-exchange cell [43; 44; 38], four equidistant side peaks were considered in the analysis of the COLLAPS data [32]. The energy offset of these side peaks was determined empirically and the relative intensities were constrained by Poisson's law. Because of lower statistics, the IGISOL data were found to be insensitive to the inclusion of these side peaks; thus, they were not considered in the analysis.
Each spectrum of \({}^{26,26m}\)Al was measured in sequence with an independent \({}^{27}\)Al reference measurement. The isotope shift \(\delta\nu^{27,26m}=\nu_{0}^{26m}-\nu_{0}^{27}\) of each measurement pair was calculated from the frequency centroid \(\nu_{0}^{26m}\) of \({}^{26m}\)Al with respect to the frequency centroid \(\nu_{0}^{27}\) of the closest \({}^{27}\)Al reference measurement. The results of all individual \(\delta\nu^{27,26m}\) determinations are shown in Fig. 1f. Weighted averages in \(\delta\nu^{27,26m}\) are calculated separately for the COLLAPS and IGISOL data sets, see Tab. 1.
Systematic uncertainties in CLS for measurements of isotope shifts are well understood [45; 46; 47; 48] and are dominated by the imperfect knowledge of the beam energy. The acceleration voltage from the cooler-buncher at IGISOL was calibrated by matching measured isotope shifts in the D1 and D2 lines for singly-charged ions of stable magnesium isotopes to their precisely known literature values in Ref. [49]. The remaining uncertainty in beam energy was 1.8 eV. An additional \(1\times 10^{-4}\) relative uncertainty was assigned to the scanning voltage in the Doppler tuning. For the COLLAPS data, a \(1.5\times 10^{-4}\) relative uncertainty of the incoming ion beam energy was assigned following the specifications of the employed voltage divider (Ohmlabs KV-30A). This was combined with the uncertainties of the calibrated JRL KV10 voltage divider used to measure the scanning voltage and of the employed voltmeters (Agilent 34661A).
Since the systematic uncertainties at COLLAPS and IGISOL were fully independent, statistical and systematic uncertainties of each measurement campaign were first added in quadrature before the weighted average of both measurement results was calculated, see Tab. 1. Our final value for the isotope shift between \({}^{26m}\)Al and \({}^{27}\)Al is \(\delta\nu^{27,26m}\)=377.5(34) MHz.
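As an illustration of this combination procedure (not a reproduction of the published error budget), the short Python sketch below adds statistical and systematic uncertainties in quadrature for each campaign and then forms the inverse-variance weighted average. The statistical inputs are the values listed in Tab. 1; the per-campaign systematic terms are placeholder numbers chosen only for illustration.

```python
import numpy as np

def total_uncertainty(stat, syst):
    # per campaign: statistical and systematic uncertainties added in quadrature
    return np.hypot(stat, syst)

# isotope-shift results in MHz: (value, statistical, systematic)
# statistical entries follow Tab. 1; the systematic entries are illustrative placeholders
campaigns = {
    "COLLAPS": (376.5, 1.7, 3.0),
    "IGISOL":  (379.7, 5.5, 2.0),
}

values = np.array([v for v, _, _ in campaigns.values()])
errors = np.array([total_uncertainty(s, y) for _, s, y in campaigns.values()])

weights = 1.0 / errors**2
mean = np.sum(weights * values) / np.sum(weights)
err = 1.0 / np.sqrt(np.sum(weights))
print(f"weighted average isotope shift: {mean:.1f} +/- {err:.1f} MHz")
```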
With knowledge of the isotope shift \(\delta\nu^{27,26m}\) the difference in mean square nuclear charge radii \(\delta\langle r^{2}\rangle\) between the two isotopes could be calculated according to [50]:
\[\delta\nu^{27,26m}=F\delta\langle r^{2}\rangle^{27,26m}+M\frac{m_{26m}-m_{27} }{m_{27}(m_{26m}+m_{e})},\]
where \(m_{e}\) is the electron mass [51] and \(m_{A}\) are the nuclear masses obtained when 13 electrons are subtracted from the atomic masses [52] and an excitation energy of 228.305 keV [53] is added for \({}^{26m}\)Al. Precision atomic-physics calculations were performed in a multiconfiguration Dirac-Hartree-Fock framework to evaluate the field and mass shift factors \(F\) and \(M\) of the investigated atomic transition [54; 32]. Combining the adopted values of \(F\)=76.2(22) MHz/fm\({}^{2}\) and \(M\)=\(-\)243(4) GHz u with the isotope shift \(\delta\nu^{27,26m}\) of the present work yields \(\delta\langle r^{2}\rangle_{27,26m}=0.429(88)\) fm\({}^{2}\), see Tab. 1. Finally, the root mean square (rms) nuclear charge radius of \({}^{26m}\)Al can be derived:
\[R_{c}(^{26m}\text{Al})\equiv\langle r^{2}\rangle_{26m}^{1/2}=\sqrt{R_{c}(^{27 }\text{Al})^{2}+\delta\langle r^{2}\rangle^{27,26m}}.\]
Using the previously evaluated rms charge radius of \({}^{27}\)Al, \(R_{c}(^{27}\text{Al})\)=3.061(6) fm [32], a value of \(R_{c}(^{26m}\text{Al})\)=3.130(15) fm is obtained, see Tab. 2.
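For illustration, the numerical chain from the measured isotope shift to the charge radius can be reproduced (central values only) with the short sketch below. The factors \(F\) and \(M\) and the \({}^{27}\)Al reference radius are the values quoted above; the atomic masses, electron mass and keV-to-u conversion are approximate literature values inserted here for the example.

```python
import math

# inputs quoted in the text
dnu  = 377.5      # isotope shift delta nu^{27,26m} in MHz
F    = 76.2       # field-shift factor in MHz/fm^2
M    = -243e3     # mass-shift factor in MHz*u
Rc27 = 3.061      # rms charge radius of 27Al in fm

# approximate atomic masses (u); 13 electron masses are subtracted and the
# 228.305 keV isomer excitation energy is added for 26mAl
m_e, u_keV = 0.000548579909, 931494.10
m27  = 26.98153853 - 13 * m_e
m26m = 25.986891904 - 13 * m_e + 228.305 / u_keV

mass_shift = M * (m26m - m27) / (m27 * (m26m + m_e))
dr2 = (dnu - mass_shift) / F                 # delta<r^2> in fm^2
Rc26m = math.sqrt(Rc27**2 + dr2)             # rms charge radius of 26mAl in fm

print(f"delta<r^2> = {dr2:.3f} fm^2, R_c(26mAl) = {Rc26m:.3f} fm")
# reproduces the quoted 0.429 fm^2 and 3.130 fm at the level of the rounding
```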
_Discussion. --_ Nuclear charge radii of superallowed \(\beta\) emitters are essential input parameters for the calculation of the ISB corrections \(\delta_{C}\) when a nuclear shell-model approach with Woods-Saxon radial wavefunctions is employed [27; 28]. Currently, these \(\delta_{C}\) calculations are the only ones considered to be sufficiently reliable to evaluate \(\mathcal{F}t\) values and thus \(V_{ud}\)[10]. In the shell-model approach, the ISB corrections are separated into two components, \(\delta_{C}=\delta_{C1}+\delta_{C2}\). The former is associated with the configuration mixing within the restricted shell model space while the latter, known as the radial overlap correction, is derived from a phenomenological Woods-Saxon potential and it depends on the nuclear charge radius \(R_{c}\).
Since \(R_{c}(^{26m}\text{Al})\) was previously unknown, the calculation of \(\delta_{C2}\) used \(R_{c}\)=3.040(20) fm [27], an extrapolation based on other, known nuclear charge radii. Our experimental result, \(R_{c}(^{26m}\text{Al})\)=3.130(15) fm, deviates from this extrapolation by 4.5 standard deviations. This significantly impacts the radial overlap correction which is updated to \(\delta_{C2}\)=0.310(14) % [55] compared to the previous 0.280(15) % [10]. The impact of this sizable change in \(\delta_{C2}\) is summarised in Fig. 2a and in Tab. 2.
\begin{table}
\begin{tabular}{r l l} & \(\delta\nu^{27,26m}\) [MHz] & \(\delta\langle r_{c}^{2}\rangle^{27,26m}\) [fm\({}^{2}\)] \\ \hline COLLAPS & 376.5(17) & \\ IGISOL & 379.7(55) & \\ weighted average & 377.5(34) & 0.429(45)(76) \\ \end{tabular}
\end{table}
Table 1: Measured isotope shift \(\delta\nu^{27,26m}\) between \({}^{27}\)Al and \({}^{26m}\)Al obtained at the IGISOL facility and at COLLAPS/ISOLDE. The weighted average of the two measurements and the resulting difference in mean square charge radius \(\delta\langle r_{c}^{2}\rangle^{27,26m}\) are listed.
Despite \({}^{26m}\)Al being the most accurately studied superallowed \(\beta\) emitter, the corrected \(\mathcal{F}t\) value is shifted by almost one full standard deviation to 3071.4(10) s. Its high precision is maintained but, in terms of \(R_{c}\) in the calculation of \(\delta_{C}\), the value now stands on a solid experimental basis. The updated \(\mathcal{F}t\) value of \({}^{26m}\)Al also affects the \(\overline{\mathcal{F}t}\) value, i.e. the weighted average over all 15 precisely studied superallowed \(\beta\) emitters, which is shifted by one half of its statistical uncertainty, see inset in Fig. 2a. To our knowledge, this represents the largest shift in the \(\overline{\mathcal{F}t}\) value since 2009, see Fig. 2b. This is a remarkable influence of a single experimental result on a quantity which is based on more than 200 individual measurements and which is dominated in its uncertainty by theoretical corrections.
Amounting to 0.57 s, this statistical uncertainty contains all experimental as well as those theoretical errors which scatter 'randomly' from one superallowed transition to another. Previously, a single systematic theoretical uncertainty of 0.36 s due to \(\delta^{\prime}_{R}\) had to be added affecting all superallowed \(\beta\) emitters alike [56]. In these circumstances, the shift in the \(\overline{\mathcal{F}t}\) value caused by the new charge radius of \({}^{26m}\)Al would have corresponded to \(\approx\) 40% of its total uncertainty. In the latest survey of superallowed \(\beta\) decays [10], however, a systematic theoretical uncertainty of 1.73 s in \(\delta_{NS}\) was newly introduced, reflecting uncertainties due to previously unaccounted contributions to the nuclear-structure dependent radiative corrections. This represents an almost three-fold increase of the theoretical error associated with \(\delta_{NS}\) which now dominates the uncertainty in the \(\overline{\mathcal{F}t}\) value. Considering our new charge radius of \({}^{26m}\)Al, one thus obtains an \(\overline{\mathcal{F}t}\) value of 3071.96(185) s.
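A first-order estimate of how the updated radial overlap correction propagates to the \(\mathcal{F}t\) value of \({}^{26m}\)Al is sketched below: since \(\mathcal{F}t\propto(1+\delta_{NS}-\delta_{C})\), a shift \(\Delta\delta_{C2}\) rescales \(\mathcal{F}t\) by \((1-\Delta\delta_{C2})\) to leading order. The inputs are the values quoted in Tab. 2; the full survey evaluation can differ from this simple estimate at the 0.1 s level.

```python
# first-order propagation of the delta_C2 update to the Ft value of 26mAl
Ft_old  = 3072.4       # s, previous Ft value of 26mAl
dC2_old = 0.280e-2     # previous radial overlap correction
dC2_new = 0.310e-2     # updated value using R_c(26mAl) = 3.130(15) fm

Ft_new = Ft_old * (1.0 - (dC2_new - dC2_old))
print(f"Ft(26mAl) shifts from {Ft_old:.1f} s to {Ft_new:.1f} s")
# close to the corrected 3071.4(10) s quoted above
```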
The present work further implies a \(\Delta_{CKM}\) in the unitarity test of the first row of the CKM matrix which is brought by \(\approx 1/10\)\(\sigma\) closer towards unitarity. Although the magnitude of this change is too small to resolve the tension to CKM unitarity, it illustrates the importance of a comprehensive examination of all relevant ingredients to \(V_{ud}\), especially theoretical corrections which involve nuclear-structure dependencies such as radiative and ISB corrections. In terms of \(\delta_{C2}\), there remain seven superallowed \(\beta\) emitters in which the nuclear charge radius is experimentally undetermined [57; 58]. Among those, \({}^{10}\)C and \({}^{14}\)O are of specific interest given their sensitivity to the Fierz interference term which relates to scalar contributions in \(\beta\) decays. Moreover, it has recently been proposed to constrain models of ISB corrections by new, more precise measurements of charge radii in triplets of the isobaric analog states, e.g. \({}^{38}\)Ca - \({}^{38m}\)K - \({}^{38}\)Ar [20].
_Summary. --_ Collinear laser spectroscopy has been performed to determine the nuclear charge radius of \({}^{26m}\)Al, the most precisely studied superallowed \(\beta\) emitter. The obtained value differs by 4.5 standard deviations from the extrapolation used in the calculation of the isospin-symmetry-breaking corrections [10; 27]. This notably impacts the corrected \(\mathcal{F}t\) value in \({}^{26m}\)Al and, thus, the average of all \(\mathcal{F}t\) values used in the extraction of V\({}_{ud}\). As demanded by the tension in CKM unitarity, this work contributes to the thorough examination of all nuclear-structure dependent corrections in superallowed \(\beta\) decays. Stimulated by the present results, efforts to measure experimentally undetermined charge radii of other cases, for example \({}^{54}\)Co at IGISOL/Jyvaskyla, are currently ongoing.
We would like to express our gratitude to the ISOLDE collaboration and the ISOLDE technical teams, as well as the IGISOL collaboration and IGISOL technical teams for their support in the preparation and successful realisation of the experiments. We are thankful for all input and discussions that we received from Ian S. Towner to support this work. S.M-E. is grateful for fruitful discussions with G. Ball.
\begin{table}
\begin{tabular}{c c c} quantity & previous value & this work \\ \hline \(R_{c}\) & 3.040(20) fm [27] & 3.130(15) fm \\ \(\delta_{C2}\) & 0.280(15) \% [10] & 0.310(14) \% \\ \(\mathcal{F}t(^{26m}Al)\) & 3072.4(11) s [10] & 3071.4(10) s \\ \(\overline{\mathcal{F}t}\) & 3072.24(185) s [10] & 3071.96(185) s \\ \(\Delta_{CKM}\) & \(152(70)\times 10^{-5}\) [7] & \(144(70)\times 10^{-5}\) \\ \end{tabular}
\end{table}
Table 2: Summary of the rms charge radius \(R_{c}\), the radial overlap correction \(\delta_{C2}\) and the \(\mathcal{F}t\) value of \({}^{26m}\)Al, the weighted average of the 15 superallowed \(\beta\) emitters \(\overline{\mathcal{F}t}\) and the result of the CKM unitarity test.
Figure 2: **(a)** \(\mathcal{F}t\) values of the 15 superallowed \(\beta\) emitters used to determine V\({}_{ud}\). The values in black, taken from [10], include experimental as well as ‘statistical’ theoretical errors. The previously determined \(\mathcal{F}t\) value for \({}^{26m}\)Al [10] (blue) is compared to the one (orange) when considering the experimental nuclear charge radius of the present work. The weighted averages for the 15 superallowed \(\beta\) emitters are shown as horizontal bars in the inset (without considering additional, systematic theoretical uncertainties). **(b)** Evolution of the \(\overline{\mathcal{F}t}\) value with statistical uncertainties in previous reviews [10; 56; 59; 60; 61; 62; 63] (black) compared to this work (orange). The vertical line to guide the eye corresponds to the value from 2020 [10].
We acknowledge funding from the Federal Ministry of Education and Research under Contract No. 05P15RDCIA and 05P21RDCI1 and the Max-Planck Society, the Helmholtz International Center for FAIR (HIC for FAIR), and the EU Horizon 2020 research and innovation programme through ENSAR2 (Grant No. 654002), grant agreement no. 771036 (ERC CoG MAIDEN) and grant agreement no. 861198-LISA-H2020-MSCA-ITN-2019. We acknowledge the funding provided by the UK Science and Technology Facilities Council (STFC) Grants No. ST/P004598/1 and ST/L005794/1. This work was supported by the FWO Vlaanderen and KU Leuven project C14/22/104. TRIUMF receives federal funding via a contribution agreement with the National Research Council of Canada. A significant share of the research work described herein originates from R&D carried out in the frame of the FAIR Phase-0 program of LASPEC/NUSTAR.
# Unfolding Particle Physics Hierarchies with Supersymmetry and Extra Dimensions

Raman Sundrum (2023-06-12), http://arxiv.org/abs/2306.07173v2
###### Abstract:
This is a written version of lectures delivered at TASI 2022 "Ten Years After the Higgs Discovery: Particle Physics Now and Future". Mechanisms and symmetries beyond the Standard Model (BSM) are presented capable of elegantly and robustly generating the striking hierarchies we observe in particle physics. They are shown to be among the central archetypes of quantum effective field theory and to strongly resonate with the tight structure and phenomenology of the Standard Model itself, allowing one to motivate, develop and test a worthy successor. The (Little) Hierarchy Problem is discussed within this context. The lectures culminate in specific BSM case-studies, gaugino-mediated (dynamical) supersymmetry breaking to generate the weak/Planck hierarchy, and (in less detail) extra-dimensional wavefunction overlaps to generate flavor hierarchies.
###### Contents
* 1 Introduction
* 1.1 The End of Particle Physics
* 1.2 The View from the Top
* 1.3 Fundamental Physics is Hierarchical!
* 1.4 Exponential Hierarchy from Non-Perturbative Physics
* 2 Spinning Tales
* 2.1 Spin-0
* 2.2 Spin-1/2
* 2.3 Spin-1
* 2.4 Spin-2
* 2.5 Spin \(>2\) (the other end of particle physics)
* 2.6 Spin-3/2
* 3 The Supersymmetry Charge Algebra
* 4 Superpartners
* 5 Supergravity, SUSY Breaking and the \(G_{N}\to 0\) Limit
* 6 Higher Powers and Hierarchies from Higher Dimensions
* 6.1 Compactification of the Extra Dimension
* 6.2 Emergence of (Yukawa) Coupling Hierarchies
* 6.3 Boundary-localized fields and Sequestering
* 6.4 Non-renormalizability of Higher-dimensional EFT
* 6.5 The Ultimate \(m\to 0\) Limit
* 7 Superspace and Superfields
* 8 Chiral Superspace and Chiral Superfields
* 9 The Wess-Zumino Model
* 9.1 Robust interacting massless scalar
* 9.2 R-symmetries
* 9.3 Non-renormalizable SUSY EFT
* 10 An EFT of Spontaneous SUSY Breaking
* 10.1 The Goldstino and the Gravitino
* 11 The Renormalizable Minimal Supersymmetric SM (MSSM) [8]
* 11.1 Gauge Superfields
* 11.2 Wess-Zumino Gauge
* 11.3 Gauge field strength and gauge field action
* 11.4 Component form of charged field gauged kinetic terms
* 11.5 Renormalizable Feynman rules
* 12 Soft SUSY breaking from Spontaneous SUSY breaking
* 13 The "\(\mu\)-Problem" and the Giudice-Masiero Mechanism
* 14 SUSY Phenomenology
* 14.1 Importance of the 125 GeV Higgs Sector
* 14.2 Direct LHC Searches
* 14.3 WIMP Dark Matter Direct Detection
* 15 Flavor and CP Problems of SUSY (and BSM more generally)
* 15.1 SM FCNCs and CP-Violation
* 15.2 Superpartner-mediated FCNCs
* 15.3 Superpartner-mediated CP-violation
* 15.4 Moral for BSM from flavor and CP considerations
* 16 Combining "bosonic" (\(x_{5}\)) and "fermionic" (\(\theta,\bar{\theta}\)) extra dimensions
* 16.1 SUSY Grand Unification and the Size of the Fifth Dimension
* 16.2 4D Renormalization Group (RG) evolution below the KK scale
* 16.3 Parameter Space
* 16.4 Radiative EWSB
* 17 The (Little) Hierarchy Problem
* 18 The UnSequestered
* 19 Realistic Gaugino(-Higgs) Mediated SUSY Breaking
* 19.1 Higgs superfields in the Bulk
* 19.2 A note on the different UV scales
* 20 Dynamical SuperSymmetry Breaking (DSSB)
* 21 Conclusions
## 1 Introduction
The Standard Model (SM) describes an orchestra of elementary particles playing tightly interwoven melodies in the symphony of Nature [1]. And yet, it is an unfinished symphony. The SM gauge and Yukawa couplings display intriguing patterns that require new mechanisms to explain. The enigmas of Dark Matter and the origins of the matter-antimatter asymmetry also point beyond the SM. The incorporation of a fully realistic quantum gravity represents a significant challenge. Against the backdrop of plausible BSM physics at far-UV scales, electroweak symmetry breaking (EWSB) is very fragile, posing another thorny mystery, the Hierarchy Problem.
In these lectures, I want to survey the direct and indirect evidence for the very hierarchical structure of fundamental physics and to provide powerful and overarching quantum field theory (QFT) mechanisms beyond the Standard Model (BSM) capable of elegantly generating this structure. Part of the job is to fully appreciate the beautiful themes already at work within the SM, so as to guide us in how the symphony might extend further. New themes should harmonize with the old. Of course, we will need to carefully account for the state of play along different experimental frontiers and the prospects for their improvement. It is in the context of this ambitious BSM undertaking that the (in)famous Hierarchy Problem will be discussed intuitively, rather than as a philosophically dubious concern of the SM in isolation.
The lectures will culminate in a specific BSM structure, not because I think it is the inevitable successor to the SM but because it makes a good "case study", illustrating robust QFT principles, methodology and phenomenological detective work along the way. The key new ingredients are extensions of relativistic spacetime, supersymmetry (SUSY) and extra dimensions, which will be strongly motivated from both top-down and bottom-up considerations. The key old ingredient to be recycled is dimensional transmutation as seen in QCD, capable of generating exponential hierarchies. The Hierarchy Problem will be solved (modulo the Little Hierarchy Problem) within the framework of "Gaugino-Mediated SUSY Breaking" [2], where SUSY is ultimately broken "dynamically" (via dimensional transmutation). Along the way, I will give a low-resolution introduction to the extra-dimensional wavefunction overlap mechanism for generating flavor (Yukawa-coupling) hierarchies [3].
Apology: The goal here is to present a coherent conceptual framework for particle physics, but of course to do that concretely requires equations, which necessarily involve factors of \(2,\pi,i\) and minus signs. I have done a modest job of trying to self-consistently get the right factors of 2 and minus signs in the time I had, but I am sure that there are still several errors. I have done a better job with factors of \(\pi\) and \(i\). I hope this still allows the lectures to be readily comprehensible, and the reader can go through derivations more carefully for themselves or consult the more careful references provided (accounting for slight differences of convention). Since the lectures are founded on the profound mathematical identity,
\[e^{\rm moderate}={\rm Large}, \tag{1}\]
I have been especially careful to ensure that I have no mistakes in the exponents that appear.
### The End of Particle Physics
Let me begin with a few things you all know, but I want to look again with fresh eyes and marvel at the enigmas of fundamental physics. Elementary particles are categorized in terms of two spacetime quantum numbers, Mass and Spin, as well as some internal quantum numbers. Both the mass and spin have maximum allowed values, in each case involving General Relativity in interesting but different ways. These are the "ends" of particle physics in the mass and spin directions.
Since \(E=mc^{2}=\$\) is the central consideration in particle physics, we begin with mass. A point particle has a classical Schwarzschild radius \(\sim G_{N}m\) as well as effectively a quantum mechanical "size" given by its Compton wavelength \(1/m\). When the former is larger, the particle effectively is within its Schwarzschild radius and is predominantly a classical black hole rather than an elementary quantum particle. This happens when \(m\gg 1/\sqrt{G_{N}}\), so that the Planck scale \(M_{Planck}\equiv 1/\sqrt{8\pi G_{N}}=2\times 10^{18}\) GeV marks the high end of particle physics in the mass direction, and the onset of black hole physics.

Relatedly, particle physics is the exploration of the smallest distances \(\ell\), which by the uncertainty principle and relativity requires concentrating energy \(E\geq|\vec{p}\,|>1/\ell\). And yet if this concentration of energy is within its Schwarzschild radius \(\ell\ll G_{N}E\), it will again gravitationally form a black hole. We are therefore unable to probe distances shorter than the Planck length \(\ell_{Planck}=\sqrt{G_{N}}\sim 1/M_{Pl}\).
Given these considerations, we are led to the following paradigm. A full dynamics of quantum gravity operating at the highest energy/mass scales _matches onto_ (reduces to) a quantum effective field theory (EFT) below \(M_{Pl}\), describing matter, radiation and general relativity with pointlike quanta. This EFT unfolds as we follow its renormalization group (RG) flow to lower energies and larger distances. A roughly parallel unfolding takes place in cosmic history as the universe expands and cools from high temperatures at the Big Bang to the cold temperatures of outer space today.
### The View from the Top
Superstring theory offers a UV-complete formulation of quantum gravity [4]. It has a tight internal consistency, great beauty and unity, and the virtue of concreteness of its perturbative structure. String constructions can have quasi-realistic features. And who knows, maybe some incarnation of string theory is even true. But it at least gives us a concrete means to envision the quantum gravity heights and what might descend from there into EFT.
The fundamental objects are one-dimensional strings which approximate point-particles on distances longer than the string length parameter \(\ell_{\rm string}>\ell_{\rm Planck}\). The lightest vibrational modes of the superstring appear as pointlike gravitons, gauge bosons and charged particles at low energies, but excited vibrational modes of the string appear as high-mass (\(\sim m_{\rm string}\equiv 1/\ell_{\rm string}<M_{Pl}\)), higher-spin resonances. The non-renormalizable perturbative expansion of quantum general relativity, in terms of the dimensionless coupling \(G_{N}\cdot{\rm Energy}^{2}\), threatens to exit perturbative control as energies grow towards \(M_{Pl}\). But in string theory this UV growth of the effective coupling is cut off while it is still weak, by energies \(\sim m_{\rm string}\), retaining perturbative control.
Remarkably, the simplest string theory constructions require higher-dimensional spacetime and SUSY for their self-consistency. There is a stringy analog of the kind of gauge-anomaly cancellation requirement familiar in chiral gauge theories such as the SM itself, which restricts the
possible field content. But in string theory, spacetime dimensions are themselves fields (on the string worldsheet), and anomaly cancellation determines their number to be 10! Furthermore, for the stringy vibrational modes to include fermionic effective particles and to ensure vacuum stability (absence of tachyons), SUSY is required. In this way, extra spacetime dimensions and SUSY are strongly motivated from a top-down quantum gravity perspective. The only question is how low in energies these exotic ingredients and their associated higher symmetries are manifest, and then ultimately hidden at current experimental energies.
### Fundamental Physics is Hierarchical!
Figure 1: Cartoon of the hierarchical structure of fundamental physics

Consider the cartoon of fundamental physics in figure 1, laid out in terms of energy/mass scales. It is meant to be like an ancient map, starting with actual experimental data but trailing off into broad theoretical prejudices and guesswork, and far from complete. It is an explorer's map for those who hope to sail to its extreme reaches by every means possible. This kind of synthesis of precision data and plausible theory is illustrated in figure 2. Starting with the disparate measured values of Standard Model (SM) gauge couplings at the weak scale, their SM RG evolution shows a striking "near" coincidence at extremely high energies, suggesting a common origin. Indeed, on closer inspection, the different SM fields and their quantum numbers fit neatly like puzzle pieces into "grand unified theories" (GUTs), where some missing pieces have gotten very large masses \(\sim M_{GUT}\) by a grand version of the Higgs mechanism. See [5] for a review. GUTs would explain why the gauge couplings run up to a nearly unified value at \(\sim M_{GUT}\). As you probably know, there are attractive realizations of GUTs involving Supersymmetry (SUSY) and the extra puzzle pieces it necessitates, but here I wanted to show that just the data and SM extrapolation already suggests something interesting is going on orders of magnitude above collider energies.
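To make the running behind figure 2 concrete, here is a minimal one-loop sketch in Python. The \(M_{Z}\)-scale inputs are approximate, and the hypercharge coupling is taken in the GUT normalization \(\alpha_{1}=(5/3)\alpha_{Y}\); none of these numbers are meant to be precise, only to display the near-unification trend.

```python
import numpy as np

# One-loop SM running of the inverse gauge couplings: d(alpha_i^-1)/dln(mu) = -b_i/(2 pi)
alpha_em_inv, sin2w, alpha_s = 127.9, 0.2312, 0.118   # approximate MZ-scale inputs
alpha_inv_MZ = np.array([
    0.6 * alpha_em_inv * (1.0 - sin2w),   # alpha_1^-1 (GUT normalization)
    alpha_em_inv * sin2w,                 # alpha_2^-1
    1.0 / alpha_s,                        # alpha_3^-1
])
b = np.array([41.0 / 10.0, -19.0 / 6.0, -7.0])        # SM one-loop b-coefficients
MZ = 91.19                                            # GeV

for mu in (1e3, 1e10, 1e13, 2e16):
    alpha_inv = alpha_inv_MZ - b / (2.0 * np.pi) * np.log(mu / MZ)
    print(f"mu = {mu:.0e} GeV : alpha_i^-1 = {np.round(alpha_inv, 1)}")
```

The three inverse couplings, starting from roughly 59, 30 and 8.5 at \(M_{Z}\), approach each other between \(10^{13}\) and \(10^{17}\) GeV, which is the "near" coincidence referred to above.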
From the gargantuan size of our universe to the Planck scale, and everything in between, fundamental physics seems remarkably hierarchical. What powerful and economical mechanisms might underlie such hierarchical structure? One goal of these lectures is to show how supersymmetric and higher-dimensional dynamics can robustly and elegantly generate the observed hierarchical structure in particle physics, at least in its non-gravitational aspects.
### Exponential Hierarchy from Non-Perturbative Physics
Rather than giving some sort of formal definition of what it means to successfully "explain" hierarchical structure, I want to just remind you of a beautiful mechanism that captures its spirit, namely Dimensional Transmutation. Our goal will then be to generalize this in some way that applies it to the broader set of particle physics hierarchies. Let us specialize to the case of QCD, for simplicity approximating the light "up" and "down" quarks as massless, and neglecting the other quark flavors altogether. We can imagine this arising as an EFT just below the Planck Scale and ask how robust it is that the observed proton mass is so many orders of magnitude lighter.
Massless QCD is parametrized by \(\alpha_{QCD}(\mu)\) where \(\mu\) is the RG scale. The physical proton mass must be an RG-invariant function of \(\alpha_{QCD}(\mu)\) and \(\mu\), which uniquely determines its form,
\[m_{proton}=\mu\ e^{-\int_{x_{0}}^{\alpha(\mu)}dx\ (1/\beta(x))}. \tag{2}\]
Figure 2: Running of SM gauge couplings at one loop, hinting at a grand unification at extremely high energies

The RG flow is given by \(\frac{d\,\alpha}{d\ln\mu}\equiv\beta(\alpha)\). The parameter \(x_{0}\) is just an order one integration constant
of the RG solution. From the perspective of the Planck scale,
\[m_{proton}=M_{Pl}\,e^{-\int_{\alpha_{0}}^{\alpha(M_{Pl})}dx\ \left(\nicefrac{{1} }{{-bx^{2}\cdots}}\right)}\sim{\cal O}(M_{Pl})\ \,e^{-\nicefrac{{1}}{{-b\alpha(M_{Pl})}}}\,, \tag{3}\]
where we have chosen \(\mu\sim M_{Pl}\) and used asymptotic freedom of QCD at such high scales to 1-loop approximate \(\beta(\alpha)\approx-b\alpha^{2}\). Here \(b=29/(6\pi)\) is the gauge-algebraically determined 1-loop coefficient for 2-flavor QCD.
We see that \(m_{proton}\approx\) GeV, \(m_{proton}/M_{Pl}\sim 10^{-18}\) emerges if \(\alpha_{QCD}(M_{Pl})\sim{\rm few}\times 10^{-2}\). If we consider any value of \(\alpha_{QCD}(M_{Pl})\) between say 0 and \(\sim 1\) to have been equally likely to have emerged upon matching to some unspecified Planckian quantum gravity a priori, then we only need to be "lucky" enough that \(\alpha_{QCD}\) is modestly small in order to understand why the proton is orders of magnitude lighter than \(M_{Pl}\).2 Note that this powerful mechanism is fundamentally non-perturbative in nature: \(e^{-{\rm constant}/\alpha}\) is the classic function whose perturbative Taylor expansion in \(\alpha^{n}\) vanishes to all orders.
Footnote 2: We could more generally consider any other smooth likelihood for \(\alpha_{QCD}\) in this range.
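The exponential sensitivity in Eq. (3) is easy to see numerically. The following minimal sketch (the sampled values of \(\alpha_{QCD}(M_{Pl})\) are purely illustrative) evaluates \(M_{Pl}\,e^{-1/(b\alpha)}\) for 2-flavor QCD.

```python
import numpy as np

# Dimensional transmutation, Eq. (3): scale ~ M_Pl * exp(-1/(b*alpha(M_Pl)))
b = 29.0 / (6.0 * np.pi)   # 1-loop coefficient for 2-flavor QCD
M_Pl = 2e18                # reduced Planck mass in GeV

for alpha in (0.010, 0.015, 0.020, 0.030):   # illustrative Planck-scale couplings
    scale = M_Pl * np.exp(-1.0 / (b * alpha))
    print(f"alpha(M_Pl) = {alpha:.3f}  ->  dynamical scale ~ {scale:.1e} GeV")
```

Modest changes of the Planck-scale coupling move the dynamical scale by many orders of magnitude, with \(m_{proton}\sim\) GeV corresponding to \(\alpha_{QCD}(M_{Pl})\) of roughly 0.015, in line with the "few \(\times 10^{-2}\)" quoted above.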
The inspiration and goal that follows from this example is to discover BSM QFT mechanisms so that modest hierarchies \({\cal O}(10^{-1}-10^{-2})\) in far-UV couplings and mass-parameters/\(M_{Pl}\) unfold in the IR in some roughly analogous manner, being exponentially stretched out to the very hierarchical structure we observe or anticipate in particle physics. I hope you agree that this should be one of our ends for particle theory. Let us proceed to uncover the means to this end.
## 2 Spinning Tales
Thus far, particle experiments operate at \(E\ll M_{Pl}\) and therefore can only detect particles with \(m\ll M_{Pl}\). That is, from a Planckian view of particle physics the particles we have seen crudely satisfy \(m\approx 0\). Symmetries (and approximate symmetries) provide plausible, economic mechanisms for understanding the robustness of \(m=0\) (and \(m\approx 0\)). These "protective" symmetries vary according to the spins of the (nearly) massless particles under consideration, and together provide a powerful grammar underlying the story of particle physics, what we have seen and have not seen, and what we can hope to see.
### Spin-0
The classic mechanism behind having massless spin-0 particles is their realization as Nambu-Goldstone (NG) bosons of the spontaneous symmetry breaking (SSB) of an internal global symmetry. The symmetry's Noether current can be approximated in terms of the NG field \(\phi\):
\[J_{\mu}\sim\partial_{\mu}\phi+\mbox{non-linear in fields}. \tag{4}\]
Conservation of this current,
\[0=\partial^{\mu}J_{\mu}\sim\Box\phi+\mbox{non-linear in fields}, \tag{5}\]
then implies \(\phi\) is massless. If there is a small explicit symmetry breaking as well, current conservation is imperfect and \(m_{\phi}\) is small but non-vanishing, making \(\phi\) a "pseudo-NG boson" (PNGB).
While (P)NGBs can readily be kinematically light enough to discover, there is a downside. Under the associated internal symmetry transformation, \(\phi\to\phi+\text{constant}+...\), so that symmetric couplings are necessarily derivatively-coupled, that is constructed from the invariant \(\sim\partial_{\mu}\phi\). This implies that \(\phi\) couplings rapidly drop (at least linearly) as we move into the IR, making \(\phi\) difficult to detect even if it is light enough to be produced kinematically. We can compare this with the massless photon, whose coupling \(\alpha_{em}\) also runs to weak coupling in the IR, but only logarithmically.
It is therefore not surprising that we easily detect photons, but are still struggling experimentally to discover very light axions, even if these generically motivated \(U(1)\) PNGBs exist. We have done better with _composite_ PNGBs, such as QCD pions. Their couplings also get weaker in the IR but only starting from their compositeness or strong-coupling scale \(\sim\text{GeV}\) down to their mass \(\sim\mathcal{O}(100)\) MeV. The spin-0 Higgs boson is an enigmatic case. Certainly \(m_{higgs}=125\text{GeV}\ll M_{Pl}\) but in the SM theory there is no protective mechanism for why \(``m_{higgs}\approx 0"\). But in the BSM compositeness paradigm, the Higgs boson may in fact be a kind of PNGB composite of some new strong force [6], analogous to the pion, although we have still not detected any corroborating compositeness effects.
We have not seen any spin-0 particle with sizeable couplings which we can confidently say is an elementary particle, the jury still being out on the Higgs boson. We have seen lots of interacting spin-\(1/2\) and spin-1 elementary particles, but nature does seem to be very stingy with interacting elementary spin-0. If the only protective mechanism for light spin-0 is based on PNGBs, then we can roughly understand why this is. Needless to say, it would be very interesting to uncover any other protective mechanism for light spin-0 particles which are not necessarily derivatively-coupled and therefore not very weakly coupled in the IR. And we will.
### Spin-\(1/2\)
For spin-\(1/2\) the protective symmetry of \(m=0\) is famously chiral symmetry, either gauged or global. For example, for a (single) standard Dirac fermion, \(m\bar{\psi}\psi=m\bar{\psi}_{L}\psi_{R}+m\bar{\psi}_{R}\psi_{L}\) is forbidden if we require chiral invariance under \(\psi_{L}\to e^{i\theta_{L}}\psi_{L},\ \psi_{R}\to e^{i\theta_{R}}\psi_{R}\), where \(\theta_{L}\neq\theta_{R}\). The same is true of any other Lorentz-invariant (Majorana) mass terms.
If chiral symmetry is broken at low scales or by small couplings, it explains \(m\approx 0\) on the big stage of particle physics. Indeed this is just what happens in the SM, where the chiral electroweak gauge symmetries are Higgsed at the weak scale \(\ll M_{Pl}\) and communicated to spin-\(1/2\) fermions by (mostly small) Yukawa couplings.
### Spin-\(1\)
We begin with the Proca equation for massive spin-1 fields:
\[\partial^{\mu}F_{\mu\nu}+m^{2}A_{\nu}=0\,, \tag{6}\]
where \(F_{\mu\nu}\equiv\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) as in standard Maxwell theory. It is best if you pretend you know nothing about electromagnetism and Maxwell's equations. Rather, you can check that the Proca equation is effectively the unique relativistically covariant field equation which is local (that is, a _differential_ equation) associated to a free massive spin-1 particle. It is particularly straightforward
to interpret in 4-momentum space in the rest frame \(\vec{p}=0\):
\[(E^{2}-m^{2})A_{i} = 0\] \[m^{2}A_{0} = 0\,. \tag{7}\]
We see that in this rest frame it matches our non-relativistic expectation that the three polarizations form a spatial vector \(\vec{A}\), whose (anti-)particles satisfy \(E=m(c^{2})\) at rest. The \(A_{0}\) "polarization" formally appearing by relativistic covariance of the equations of motion also vanishes by these same equations.
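For concreteness, this rest-frame reduction can be checked symbolically. The sketch below (a simple illustration, not part of the original derivation) builds the momentum-space Proca kernel \([(-p^{2}+m^{2})\delta_{\nu}^{\ \mu}+p_{\nu}p^{\mu}]\) at \(\vec{p}=0\) and confirms that it reproduces Eq. (7).

```python
import sympy as sp

# Momentum-space Proca operator acting on A_mu:
# [(-p^2 + m^2) delta_nu^mu + p_nu p^mu] A_mu = 0, evaluated in the rest frame
E, m = sp.symbols('E m', positive=True)
eta = sp.diag(1, -1, -1, -1)        # metric with signature (+,-,-,-)
p_up = sp.Matrix([E, 0, 0, 0])      # p^mu at rest
p_dn = eta * p_up                   # p_mu
p2 = (p_up.T * eta * p_up)[0]       # p^2 = E^2

kernel = (-p2 + m**2) * sp.eye(4) + p_dn * p_up.T
sp.pprint(kernel)
# diag(m**2, m**2 - E**2, m**2 - E**2, m**2 - E**2):
# the time row enforces m^2 A_0 = 0, the spatial rows (E^2 - m^2) A_i = 0
```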
Now we take the \(m\to 0\) limit of Proca. Furthermore, if the spin-1 field is not free, we can add a non-trivial right-hand side made out of the other fields that couple to it. In this way we arrive at Maxwell equations as effectively the only option for a massless spin-1 field:
\[\partial^{\mu}F_{\mu\nu}=J_{\nu}\,. \tag{8}\]
Of course, this is not the historical empirically-based path to Maxwell, but rather the only logical option for interacting massless spin-1 in a relativistic theory.
The massless limit has resulted in two remarkable properties. The first follows by noting that taking the 4-divergence (\(\partial^{\nu}\)) of Maxwell's equations implies that \(J_{\nu}\) made out of other fields is a conserved "current":
\[0=\partial^{\nu}\partial^{\mu}F_{\mu\nu}=\partial^{\nu}J_{\nu}=0\,. \tag{9}\]
Globally, the total charge \(Q=\int d^{3}xJ_{0}\) is therefore conserved in time. The other property is that Maxwell's equations are gauge-invariant under
\[A_{\mu}\to A_{\mu}+\partial_{\mu}\xi\,. \tag{10}\]
This is related to the fact that massless spin-1 has only two propagating polarizations compared to the three of massive spin-1.
When there are multiple massless spin-1 fields, \(A_{\mu}^{a}\), possibly interacting among themselves so that they appear non-linearly in each other's currents, the general self-consistent structure is precisely non-abelian gauge theory.3 In this way, (non-abelian or abelian) gauge-invariance can be thought of as the protective symmetry of \(m=0\) for spin-1 particles.
Footnote 3: It is interesting to check that even the standard non-abelian gauge theory field equations can be written in the “abelian-like” forms of Eq. (8) with abelian-like field strengths and conserved currents, \(F_{\mu\nu}^{a}\equiv\partial_{\mu}W_{\nu}^{a}-\partial_{\nu}W_{\mu}^{a},\ \partial^{\mu}J_{\mu}^{a}=0\) (that is, with just ordinary partial derivatives as opposed to covariant derivatives).
The next question is how to break gauge symmetry by a "small amount" so as to realize \(m\approx 0\). This is subtle because the \(m=0\) limit of spin-1 is not smooth, in that it has two polarization for spin-1 while all non-zero masses have three polarizations. But we know the nuanced answer, it is precisely the Higgs mechanism which maintains the _total_ number of physical polarizations in the massless spin-1 limit by including spin-0 fields. Clearly Nature gives us several examples of massless and Higgsed gauge particles/fields.
Let us jump over spin-\(3/2\) for the moment and survey even higher spins.
### Spin-\(2\)
We again begin with \(m\neq 0\), which for relativistic spin-\(2\) satisfies the Pauli-Fierz equation for a symmetric tensor field \(h_{\mu\nu}\),
\[\Box h_{\mu\nu}-\partial_{\sigma}\partial_{\mu}h_{\nu}^{\sigma}-\partial_{ \sigma}\partial_{\nu}h_{\mu}^{\sigma}+\partial_{\mu}\partial_{\nu}h+\eta_{\mu \nu}\left(\partial_{\alpha}\partial_{\beta}h^{\alpha\beta}-\Box h\right)-m^{2}( h_{\mu\nu}-h)=0 \tag{11}\]
where \(h\equiv\eta^{\mu\nu}h_{\mu\nu}\). In \(4\)-momentum space, its equations of motion in the rest frame \(\vec{p}=0\) reduce to \(h_{jj}=h_{00}=h_{0j}=0\), while the remaining traceless symmetric spatial tensor components satisfy \((E^{2}-m^{2})h_{ij}=0\). This again matches our non-relativistic expectation, in this case that the five spin-\(2\) polarizations form a symmetric traceless spatial tensor with rest energy \(m\).
Now we take the \(m\to 0\) limit. You can check that the Pauli-Fierz equation then reduces to the _linearized_ Einstein Equations
\[\Box h_{\mu\nu}-\partial_{\sigma}\partial_{\mu}h_{\nu}^{\sigma}-\partial_{ \sigma}\partial_{\nu}h_{\mu}^{\sigma}+\partial_{\mu}\partial_{\nu}h+\eta_{\mu \nu}\left(\partial_{\alpha}\partial_{\beta}h^{\alpha\beta}-\Box h\right) \propto T_{\mu\nu}. \tag{12}\]
These happen to be the usual Einstein Equations in terms of a dynamical spacetime metric \(g_{\mu\nu}\equiv\eta_{\mu\nu}+h_{\mu\nu}\) but keeping only first-order terms in \(h_{\mu\nu}\), that is the _linearized_ Einstein Equations. I have again put a non-vanishing right-hand side, \(T_{\mu\nu}\), to represent any non-linear terms in the spin-\(2\) or other fields that might arise beyond free field theory. Again, this is not the historical empirically-based path with Newton's Law and the Equivalence Principle as guides, but rather the only logical option for interacting massless spin-\(2\) in a relativistic theory.
And again, there are two remarkable properties in this limit. There is a gauge invariance,
\[h_{\mu\nu}\to h_{\mu\nu}+\partial_{\mu}\xi_{\nu}+\partial_{\nu}\xi_{\mu}, \tag{13}\]
with a \(4\)-vector gauge transformation \(\xi_{\mu}(x)\). Secondly, by taking the \(4\)-divergence \(\partial_{\mu}...\) of Eq. (12), we see that \(T_{\mu\nu}\) must be a _conserved_ tensor current, \(\partial^{\mu}T_{\mu\nu}=0\), with globally conserved charges \(P_{\mu}=\int d^{3}\vec{x}\ T_{\mu 0}(x)\). The only such \(4\)-vector charge consistent with relativistic interactions is the familiar \(4\)-momentum itself, one consequence of the Coleman-Mandula Theorem. Thus we know that \(T_{\mu\nu}\) can be nothing else but the energy-momentum or stress tensor of all the fields.
**Exercise**: Consider the example of \(2\to 2\) scattering, involving four different species of spin-\(0\) particles. Of course, \(4\)-momentum \(P_{\mu}\) is conserved by translation invariance, but let us ask if there could be another \(4\)-vector conserved charge \(Q_{\mu}=\int d^{3}\vec{x}\) (local charge density). If so, in the far past and far future when all particles are well-separated, total \(Q_{\mu}\) would be the sum of \(Q_{\mu}\) contributions of each individual particle in isolation. Since the only \(4\)-vector attached to each isolated spinless particle is its \(4\)-momentum, we must have \(Q_{\mu}^{particle}\propto p_{\mu}^{particle}\). But it is a priori possible that the proportionality constant is different for each particle species. Show that given standard \(4\)-momentum conservation for a generic nontrivial scattering angle (in center-of-momentum frame) that the proportionality constant must be universal for all the particles involved. Therefore \(Q_{\mu}\) must be nothing but \(P_{\mu}\) up to this overall constant.
Because the spin-\(2\) particles must exchange \(4\)-momentum among themselves and with other particles in an interacting theory, in order to be conserved \(T_{\mu\nu}\) must contain non-linear terms in
\(h_{\mu\nu}\) to represent their energy-momentum. That is, the spin-2 particles must be self-coupled. This situation is therefore analogous to the case of several self-interacting massless spin-1 particles, the self-consistent form resulting in non-abelian gauge theory with non-abelian gauge invariance, despite being expressable in abelian-like form. For spin-2, we also have an apparent abelian-like gauge transformation, but the self-coupling means that including the explicit form of \(T_{\mu\nu}\) requires finding the non-abelian extension of the gauge invariance. This is precisely the gauge symmetry of general coordinate invariance, and the explicit form of Eq. (12) is then the generally coordinate-invariant non-linear Einstein Equations.
**Exercise**: Consider a general coordinate transformation, \(y^{\mu}=x^{\mu}-\xi^{\mu}(x)\). Given a proper distance function on spacetime in terms of a general metric, \(ds^{2}=g_{\mu\nu}(x)dx^{\mu}dx^{\nu}\), re-express it in terms of \(y\), \(ds^{2}=g^{\prime}_{\mu\nu}(y)dy^{\mu}dy^{\nu}\). Writing \(g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}\), \(g^{\prime}_{\mu\nu}=\eta_{\mu\nu}+h^{\prime}_{\mu\nu}\), and restricting \(h,h^{\prime}\) and \(\xi\) to being infinitesimally small (that is, working strictly to first order in these) show that
\[h^{\prime}_{\mu\nu}(x)=h_{\mu\nu}(x)+\partial_{\mu}\xi_{\nu}(x)+\partial_{\nu }\xi_{\mu}(x), \tag{14}\]
the abelian-like gauge transformation we derived for spin-2 above. Removing the restriction of being infinitesimal gives a fully non-abelian generalization of the gauge symmetry, namely that of general coordinate invariance.
The uniqueness of energy-momentum conservation as a conserved 4-vector charge implies there could only be one massless spin-2 field, and it must incarnate as General Relativity. Indeed, Nature has given this to us.
### Spin \(>2\) (the other end of particle physics)
In brief, in analogy to spins 1 and 2, in order to be massless, higher spin fields would have to couple to conserved currents of the appropriate higher Lorentz representation. For example, a spin-3 field would require a conserved symmetric 3-tensor current. But this would imply a globally conserved 2-tensor charge \(Q_{\mu\nu}\). Such a conserved charge is inconsistent with interactions, again as a consequence of the Coleman-Mandula Theorem.
**Exercise** Again consider \(2\to 2\) scattering, for simplicity involving a single species of spin-0 particle. Suppose there were a conserved traceless symmetric tensor conserved charge,
\[Q_{\mu\nu}=\int\,d^{3}\vec{x}\ (\mbox{local charge density}), \tag{15}\]
so that in the far past and far future it is the sum of individual particle charges. Again for an isolated particle, its charge must be constructed out of its 4-momentum, in this case \(Q_{\mu\nu}\propto p_{\mu}p_{\nu}-\eta_{\mu\nu}p^{2}/4\). Show that conservation of total \(Q_{\mu\nu}\) is inconsistent with a generic scattering angle (in center-of-momentum frame).
Therefore for spin \(>2\) we cannot have \(m=0\) or even \(m\approx 0\), but interacting spin \(>2\) "particles" can exist if their masses are comparable to the scale at which the EFT describing them breaks down in the UV, \(m\sim\Lambda_{UV}\). Operationally, such particles are "composite", in that the energies high
enough to create them are close to the energies at which they cease to be described by point-particle EFT. Nature gives us many examples, such as the high-spin hadron composites of QCD, \(m\gtrsim\) GeV. String Theory as a perturbative theory of quantum gravity also contains many higher-spin excitations "composed" of the fundamental string, with masses \(\sim m_{string}\lesssim M_{Pl}\). Their stringy structure becomes apparent when probed in their relativistic regime, \(E\sim m_{string}\).
In this sense, we see that spin-2 (and hence, General Relativity) is the "end" of point-particle physics in the spin direction. Finally, we return to the exciting case of...
### Spin-\(3/2\)
For \(m\neq 0\), the Rarita-Schwinger equation for a vector-spinor field \(\psi_{\mu\,\alpha}\), where I am choosing 4-vector indices from the middle or later parts of the Greek alphabet (in this case \(\mu\)) and spinor indices from the early part of the Greek alphabet (in this case \(\alpha\)), reads
\[\left(\epsilon^{\mu\nu\rho\kappa}(\gamma_{5}\gamma_{\nu})_{\alpha\beta}\partial _{\rho}+\frac{m}{2}\left[\gamma^{\mu},\gamma^{\kappa}\right]_{\alpha\beta} \right)\psi_{\kappa\beta}=0. \tag{16}\]
The \(\gamma\)'s are the familiar Dirac matrices. As for Dirac spin-\(1/2\), in the rest-frame of 4-momentum space this equation describes a particle (\(E=m\)) and its antiparticle (\(E=-m\)), each with effectively two independent spinor components. Focusing on just the particle and its independent two-component spinor, the above is essentially the unique relativistically covariant equation which reduces in the rest frame to \(\psi_{\mu=0,\,\alpha}=0,\,\,(\sigma_{i})_{\alpha\beta}\psi_{\mu=i,\,\beta}=0, \,(E-m)\psi_{\mu=i,\,\alpha}=0\), where \(\sigma_{i}\) are the Pauli matrices. Non-relativistically, the desired spin-\(3/2\) representation is contained in the product representation \(\psi_{\mu=i,\,\beta}\) of spin-\(1\) and spin-\(1/2\), while the second of these equations projects out the unwanted spin-\(1/2\) representation in this product.
Taking the \(m\to 0\) limit of the Rarita-Schwinger equation and introducing a non-zero right-hand side to represent non-linearities/interactions as previously,
\[\epsilon^{\mu\nu\rho\kappa}\sigma_{\nu}^{\alpha\beta}\partial_{\rho}\psi_{ \kappa\beta}={\cal J}^{\mu\,\alpha}(x). \tag{17}\]
Here, again in analogy to the Dirac equation, we have taken advantage of the fact that in the massless limit the four-component spinor equations split into decoupled equations for two-component or Weyl spinors, of which we keep just the left-handed one. In this left-handed two-component spinor space, the \(\sigma_{\nu}\) are again the Pauli matrices \(\sigma_{i}\) along with \(\sigma_{0}\) being the identity.
One might wonder whether massless spin-\(3/2\) is more like massless spin-\(1\) which allows an arbitrary number of distinct but interacting fields with that spin, or whether it is like massless spin-\(2\) which allows only one field with that spin, namely the graviton field of GR. It turns out that the answer is intermediate, in that the maximum number of species of distinct \(\psi\) fields is \(8\), but we will not do the detective work here to show this. Instead, we will just consider a single species of \(\psi_{\mu\alpha}\) as above, this being the phenomenologically most interesting case.
Once again we get a new gauge symmetry, under \(\psi_{\mu\alpha}\to\psi_{\mu\alpha}+\partial_{\mu}\xi_{\alpha}\), where the gauge transformation \(\xi_{\alpha}(x)\) is a spinor field. And by taking the 4-divergence of Eq. (17) we see that \({\cal J}_{\mu\alpha}\) must be a conserved vector-spinor current, \(\partial^{\mu}{\cal J}_{\mu\alpha}=0\). This implies a globally conserved spinor charge, \(Q_{\,\alpha}\equiv\int d^{3}\vec{x}{\cal J}_{\mu=0,\,\alpha}\). Since 2-component spinors are necessarily complex, so are \(\psi\), \({\cal J}\), and therefore the \(Q_{\,\alpha}\) have distinct conjugate conserved charges \(\bar{Q}_{\,\alpha}\).
What are these unfamiliar spinor charges, "halfway" between standard scalar internal charges and the spacetime energy-momentum charges?
## 3 The Supersymmetry Charge Algebra
Before trying to answer this, let us see how we can relate charges in the most familiar case of some scalar conserved charges of internal symmetries, \(Q_{a}=\int d^{3}\vec{x}J_{0}^{a}\). Clearly if \(Q_{a}\) and \(Q_{b}\) are conserved in time, then so is their product \(Q_{a}Q_{b}=\int d^{3}\vec{x}J_{0}^{a}(x)\int d^{3}\vec{x}J_{0}^{b}(x^{\prime})\). But the great practical utility of charge conservation comes from the charge being the sum of _local_ contributions, whereas this product charge is clearly not, with the \(\vec{x}\) and \(\vec{x}^{\prime}\) contributions being arbitrarily far apart. Instead, let us consider the commutator, \([Q_{a},Q_{b}]\). This is indeed a new charge with local contributions, if it is non-vanishing. To see this, first note that \(J_{0}^{a}(x)\) will be made of some products of boson fields (or field momenta) and bilinears in fermion fields at \(x\), and their derivatives. (Since fermions are half-odd-integer spinors by the spin-statistics connection they must appear in even numbers in any 4-vector \(J_{\mu}\).) Let us pretend for just a moment that in the QFT all bosonic fields and their derivatives commute, and all fermions and their derivatives anticommute, and all fermions commute with bosons. (Only the last of these statements is true, but let us just pretend.) Therefore, given this fiction, all bosons and fermion bilinears and their derivatives would commute, in which case clearly \([Q_{a},Q_{b}]=0\). Therefore in truth, the only way to get a non-zero commutator is because of retaining at least one non-trivial commutator between boson fields (including bosonic field momenta or time-derivatives of boson fields) or fermion bilinears at \(x\) and bosons or fermion bilinears at \(x^{\prime}\). Such commutators always contain \(\delta^{3}(\vec{x}-\vec{x}^{\prime})\) factors, so \([Q_{a},Q_{b}]\) must be another charge with _local_ contributions! In this way, the "useful" charges form a standard Lie algebra.
We can apply this kind of detective work to the more enigmatic spinor charges \(Q_{\alpha},\bar{Q}_{\beta}\). To be precise, let us remind ourselves of the 2-component spinor representation of the Lorentz group and the relevant notation. It is based on the relativistic analog of the familiar isomorphism of the rotation group, \(SO(3)\equiv SU(2)\),4 namely \(SO(3,1)\equiv SL(2,C)\). That is, the Lorentz group can be represented by complex \(2\times 2\) matrices \(\Lambda\) with unit determinant. We can define a _left-handed_ representation \(\psi_{L}\) to be a 2-component spinor transforming as \(\psi_{L}\rightarrow\Lambda\psi_{L}\), where matrix-multiplication is implied on the right side.
We will use the "bar" notation to be the same as conjugation \(\bar{\psi}_{\alpha}\equiv\psi^{\dagger}_{\alpha}\), and without explicitly writing indices it will represent a row vector \(\bar{\psi}\equiv\psi^{\dagger}\) obtained from the hermitian conjugate of the column vector \(\psi\). Therefore under Lorentz transformation, \(\bar{\psi}\rightarrow\bar{\psi}\Lambda^{\dagger}\) with matrix multiplication implied. One can then form Lorentz 4-vectors from the spinor products \(\bar{\psi}\sigma^{\mu}\chi\), where again \(\sigma^{0}\) is the identity matrix and \(\vec{\sigma}\) are Pauli matrices. Together the \(\sigma^{\mu}\) are a Hermitian basis for \(2\times 2\) complex matrices.
As with internal charges we can start with the product charges \(Q_{\alpha}\bar{Q}_{\beta}\). Once again, they are conserved but not in a useful way since they are not the sum of local contributions. But now we can focus on the anticommutator \(\{Q_{\alpha},\bar{Q}_{\beta}\}\), which is a sum of local contributions. The reason that it is the anticommutator which has this property is because any local spinor-vector current such as \({\cal J}_{\mu\alpha}(x)\) must necessarily be the product of some bosons and an _odd_ number of fermionic fields and derivatives at \(x\), by the spin-statistics connection. By the analogous argument to the case of scalar charges, it is \(\{Q_{\alpha},\bar{Q}_{\beta}\}\) which necessarily contains a factor of \(\delta^{3}(\vec{x}-\vec{x}^{\prime})\).
What is interesting is that by the spinor algebra reviewed above, and the fact that the \(\sigma^{\mu}\) are a hermitian basis for all \(2\times 2\) matrices, it follows that \(\{Q_{\alpha},\bar{Q}_{\beta}\}\) necessarily transforms as a Lorentz 4-vector of conserved charges, and the only such conserved 4-vector that is the sum of local contributions allowed in local QFT is 4-momentum:
\[\{Q_{\alpha},\overline{Q}_{\beta}\}=2P_{\mu}\sigma^{\mu}_{\alpha\beta}. \tag{18}\]
The factor 2 is chosen by convention in the normalization of \(Q_{\alpha}\).
By analogous detective work, the anticommutators \(\{Q_{\alpha},Q_{\beta}\}\) and \(\{\bar{Q}_{\alpha},\bar{Q}_{\beta}\}\) can also be conserved charges with local contributions. But these transform as Lorentz anti-symmetric tensors, and no such conserved tensor charges are possible in interacting QFT.5 Therefore, we can only have
Footnote 5: The famous antisymmetric tensor charges of QFT are the angular momentum tensor \(J_{\mu\nu}\), but they are not all conserved in the usual sense of commuting with the Hamiltonian \(P_{0}\), in particular the boost generators \(J_{0i}\) do not.
\[\{Q_{\alpha},Q_{\beta}\}=0,\ \{\bar{Q}_{\alpha},\bar{Q}_{\beta}\}=0. \tag{19}\]
The fact that the spacetime \(P_{\mu}\) charges appear in our anticommutation relations means that the new symmetry charges correspond to an extension of spacetime symmetry beyond the usual algebra of Poincare generators \(P_{\mu},J_{\mu\nu}\) and their commutation relations. In addition to all of these, there are also commutation relations between the \(Q_{\alpha},\bar{Q}_{\beta}\) and the \(P_{\mu},J_{\mu\nu}\). The commutation relations with the angular momentum generators \(J_{\mu\nu}\) merely express the spinor properties of the \(Q_{\alpha},\bar{Q}_{\beta}\), so I will not bother to write these out. The commutation relations with \(P_{\mu}\) express charge conservation,
\[[Q_{\alpha},P_{\mu}]=0,\ [\bar{Q}_{\beta},P_{\mu}]=0. \tag{20}\]
(The \(P_{0}=H\) commutator is literally conservation of charge, while the \(\vec{P}\) commutator vanishes because the charge is the total across space, so a spatial translation does not change this total.)
If we put together the anticommutation algebra of conserved spinor charges eqs. (18) and (19), the usual commutator algebra of Poincare generators, and the conservation properties, Eq. (20), and (spinor) Lorentz transformation properties of \(Q_{\alpha},\bar{Q}_{\beta}\), we arrive at the minimal or \({\cal N}=1\) superalgebra. The \(Q_{\alpha},\bar{Q}_{\beta}\) are "supercharges".6
Footnote 6: \({\cal N}\) counts the number of different “flavors” of \(Q_{\alpha}\), but only \({\cal N}=1\) SUSY QFT is compatible with having chiral fermions, such as Nature exhibits.
Does supersymmetry (SUSY) exist in the space of QFTs? That is, are there QFTs that contain the superalgebra of charges, with charges realised as integrals over local charge densities? In particular, is there a supersymmetric extension of the SM? What is the structure of such theories? How can they be realistic? Let us continue the qualitative deductive reasoning a little further. See Ref. [7] for a canonical text on general SUSY field theory construction, and Ref. [8] for construction and phenomenology of SUSY extension of the Standard Model. See also Refs. [9] and [10].
## 4 Superpartners
Broadly, SUSY charges relate fermions and bosons, remarkable given their very different behavioral properties. To see this, start with the conservation of \(Q_{\alpha}\), expressed canonically as \([H,Q_{\alpha}]=0\). Consider the energy eigenstate of any bosonic particle, \(|B\rangle\), with energy \(E_{B}\). Then
\(|F\rangle\equiv Q_{\alpha}|B\rangle\) must be a fermion by the spin-statistics connection because \(Q_{\alpha}\) changes the angular momentum of the state it is acting on by a \(1/2\) unit. Thus, \([H,Q_{\alpha}]|B\rangle=0\) implies that \(|F\rangle\) must also have the same energy, \(E_{F}=E_{B}\). Working within each momentum subspace, this means that their masses are also the same, \(m_{F}=m_{B}\). In this way, we see that in SUSY particles come in boson-fermion degenerate pairs, or "superpartners", which differ in their spin by a half unit.
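Spelled out, the one-line check is

\[H\,|F\rangle=H\,Q_{\alpha}|B\rangle=Q_{\alpha}H|B\rangle=E_{B}\,Q_{\alpha}|B\rangle=E_{B}\,|F\rangle,\]

and the same argument runs equally well starting from a fermionic energy eigenstate.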
The superpartner of the massless spin-2 graviton \(h_{\mu\nu}\) must therefore be a massless spin-\(3/2\) particle/field \(\psi_{\mu\alpha}\), which we can call the "gravitino". Note, the fermion had to have spin \(1/2\) unit different from 2, but it could not have been spin-\(5/2\) because massless spins \(>2\) are forbidden as discussed earlier.
The superpartner of a massless spin-1 gauge boson \(A_{\mu}^{a}\) must be a massless spin-\(1/2\) fermion \(\lambda_{\alpha}^{a}\), a "gaugino". The 'a' index is the internal (non-spacetime) index labelling gauge generators, so that the gaugino is a fermion in the adjoint representation of the gauge group (the representation furnished by the gauge group generators themselves). Again, one might have thought the fermion could have been massless spin-\(3/2\), but that would make it a second spin-\(3/2\) field, the first being the gravitino. But this would correspondingly require two species of supercharges \(Q_{\alpha}\), incompatible with the minimal \({\cal N}=1\) SUSY we are studying.
Turning to standard (non-gaugino) spin-\(1/2\) fermions, it will be convenient from here on to exclusively use left-handed representation. We can convert conventional right-handed fields into left-handed fields by charge conjugation. That is, given a right-handed 2-component spinor field \(\psi_{R}\), its charge conjugate \((\psi_{R})^{c}\equiv\sigma_{2}\psi_{R}^{*}\) is a left-handed spinor field.7 Here \(\sigma_{2}\) is the second Pauli matrix, and matrix-multiplication is implied. The Lorentz-invariant one can then construct from two left-handed fields is \((\psi_{L}\chi_{L})\equiv\epsilon^{\alpha\beta}\psi_{L\alpha}\chi_{L\beta}=\psi_{L\alpha}\chi_{L}^{\alpha}\). Here, \(\epsilon\) is the completely antisymmetric tensor, and one can think of it as a "metric" on spinors allowing one to "raise" the indices of a standard left-handed spinor. In this way the SM spin-\(1/2\) chiral fermions can be listed as
Footnote 7: Often in the literature, one distinguishes "dotted" and "undotted" indices, where the dotted indices label the conjugates of the left-handed spinors. I will reduce "clutter" by not making this distinction in these lectures, hoping that context will give away what is intended.
\[\psi_{\alpha}=q_{\alpha},\ell_{\alpha},u_{\alpha}^{c},d_{\alpha}^{c},e_{ \alpha}^{c}. \tag{21}\]
The superpartners of these standard fermions must be spin-\(0\) "sfermion" scalars,
\[\phi=\tilde{q},\tilde{\ell},\tilde{u}^{c},\tilde{d}^{c},\tilde{e}^{c}, \tag{22}\]
where the tilde just labels the superpartner of the related SM field. (They could not be spin-\(1\) because then they would have to be gauge fields in the adjoint representation of gauge groups and the fermions would be gauginos, which does not fit SM quantum numbers.)
The only other field in the SM is the Higgs doublet scalar field \(H(x)\), and this must have a spin-\(1/2\) "Higgsino" superpartner \(\tilde{H}_{\alpha}\). There is an important subtlety regarding the SUSY Higgs content, but we will get to that later.
If we can realize SUSY in QFT it predicts a remarkable feature. Earlier, we could only identify Nambu-Goldstone bosons as the means for having robustly massless spin-\(0\) particles, but they have the limitation that their (derivative) couplings rapidly become weak in the IR. In particular, it was not clear if the Higgs boson could be considered as at least approximately a NG boson in some BSM
scenario given its substantial couplings. But in SUSY because of superpartner mass-degeneracy, a massless scalar is a robust possibility if it is the superpartner of a spin-\(1/2\) fermion which is robustly massless because of a chiral symmetry. If the fermion has standard gauge and Yukawa couplings, so will the scalar, such couplings falling at most logarithmically in the IR. SUSY then offers an attractive mechanism for robustly having light interacting scalars in QFT, such as the Higgs boson (seen from our Planckian perspective).
## 5 Supergravity, SUSY Breaking and the \(G_{n}\to 0\) Limit
As we have seen, the graviton \(h_{\mu\nu}\) is a "gauge field" whose associated charge is energy-momentum \(P_{\mu}\), while the gravitino \(\psi_{\mu\alpha}\) is a "gauge field" with associated charges \(Q_{\alpha},\bar{Q}_{\beta}\). But these charges obey a non-abelian type of superalgebra, Eq. (18), so together the graviton and gravitino superpartners are "gauge fields" of SUSY. This gauging of the global SUSY superalgebra is sometimes called "local SUSY". But since it also gives a supersymmetric theory including gravity, it is also called "supergravity", or SUGRA for short. Like standard simple non-abelian gauge theories, local SUSY has a single "gauge" coupling, \(G_{\rm Newton}\).
If SUSY were an exact symmetry then there would be a charged selectron with the electron's mass, and this is not seen in Nature. More generally, the absence of superpartners in experiments to date implies that SUSY must be an approximate symmetry at best, part of the robust QFT grammar of options for \(m\approx 0\) from a UV perspective.
We can guess a cartoon of a realistic particle spectrum with broken SUSY as given in figure 3. The superpartners are thereby heavy enough to have evaded LHC and other searches, but close enough to the weak scale that it might be playing a strong role in making a weak-scale interacting spin-0 Higgs a robust QFT feature. While the broken SUSY is clearly a big effect for human particle
Figure 3: A plausible cartoon spectrum of a SUSY extension of the standard model. SUSY (degeneracy) is sufficiently broken to be consistent with our not having discovered superpartners yet, while being approximately valid far above the weak scale.
experimentalists, it represents a "small" breaking compared to the fundamental scale \(\lesssim M_{Pl}\) at which QFT is born from quantum gravity.
Since fundamentally, SUSY is a gauged symmetry of SUGRA, its breaking must be due to a "super-Higgs" effect [11]. But in the global limit of this gauge theory, that is when we work in the approximation of vanishing gauge coupling \(G_{N}\approx 0\), this Higgs-like breaking must become _spontaneous_ SUSY breaking, analogous to what happens in the standard Higgs mechanism in the global limit.
I have been trying to make the case to you that Nature may well be playing every trick in the book as far as \(m\approx 0\) is concerned. But does it contain \(m\approx 0\) for spin-\(3/2\)? The reason we do not know yet may not be that such a particle lies above our puny energy reach (although maybe it does) but because, like PNGBs, it has a reason to be extremely weakly coupled. And unlike the graviton, which shares its \(G_{N}\) coupling, it cannot take advantage of Bose statistics to at least appear to us in observable classical fields. Nevertheless, its existence necessitates all of SUSY structure, and clearly searching for _that_ experimentally is strongly motivated.
How can we construct SUSY QFT and spontaneous SUSY breaking? To motivate the strategy, I want to digress into another important ingredient which is somewhat more intuitive (at least for theorists), namely higher-dimensional spacetime.
## 6 Higher Powers and Hierarchies from Higher Dimensions
Let us consider the simplest higher-dimensional extension of 4D Minkowski spacetime to 5D Minkowski spacetime:
\[4D\;\;x^{\mu} \longrightarrow 5D\;\;X^{M}=x^{\mu},x_{5}\] \[4D\;\;\mbox{Poincare symmetry} \longrightarrow 5D\;\;\mbox{Poincare symmetry}\] \[x^{\mu}\rightarrow\Lambda^{\mu}_{\nu}x^{\nu}+a^{\mu} \longrightarrow X^{M}\rightarrow\Lambda^{M}_{N}X^{N}+A^{M}\] \[\Lambda\in SO(3,1) \longrightarrow \Lambda\in SO(4,1)\supset SO(3,1)\] \[4D\;\;\mbox{Fields}\;\;\phi(x) \longrightarrow 5D\;\;\mbox{Fields}\;\;\phi(X) \tag{23}\]
As presented, we posited a higher-dimensional spacetime and then noted that it could enjoy a larger 5D Poincare symmetry. But to pave the way for our later SUSY development, it is useful to think of it the other way around: if I know I have a QFT with an extension of the usual 4D Poincare spacetime symmetry, the simplest way of incorporating that is by extending the spacetime so that it is symmetric under the extended symmetry. In this way, demanding 5D Poincare symmetry leads to fields living on 5D Minkowski spacetime.
### Compactification of the Extra Dimension
The Poincare-invariant field equation, say Klein-Gordon, also extends straightforwardly:
\[(\partial_{\mu}\partial^{\mu}+m_{4}^{2})\phi(x)=0\longrightarrow(\partial_{ \mu}\partial^{\mu}-\partial_{5}^{2}+m_{5}^{2})\phi(X)=0. \tag{24}\]
Now, of course we do not live in a perfect 5D Minkowski spacetime with perfect 5D Poincare symmetry, so at best such a symmetry is badly broken at the "low" energies we currently probe.
This allows me to introduce the notion of _soft_ symmetry breaking, that is breaking by a dimensionful energy/mass scale such that at much higher energies (short distances) there is a very good approximate symmetry and at lower energies (longer distances) the symmetry is not apparent. In the case of 5D, the simplest such soft breaking is depicted in figure 4. The 5th dimension is a finite interval of microscopic length \(L\), so that spacetime is a 5D slab-like "bulk", sandwiched between two 4D boundaries or "branes". For short distance scattering and short wavelengths \(\ll L\) in the bulk interior, as illustrated, 5D Poincare symmetry approximately holds. For long wavelengths \(\gg L\) only the 4D Poincare sub-symmetry is apparent. As depicted in figure 4, at these long wavelengths the extra dimension itself cannot be resolved. We can say that the 5D Poincare symmetry is softly broken at the "Kaluza-Klein" (KK) scale \(\mu_{KK}=1/L\) down to 4D Poincare symmetry.
Let us solve the 5D field equation on the right side of Eq. (24) by separation of variables:
\[\phi(X)=f(x_{5})\phi_{4}(x), \tag{25}\]
where the \(\phi_{4}(x)\) factor satisfies the 4D Klein-Gordon equation
\[(\partial_{\mu}\partial^{\mu}+m_{4}^{2})\phi_{4}(x)=0. \tag{26}\]
Furthermore, let us assume that the fifth dimension is hidden far in the UV and that any fields the effective 4D experimentalist can probe have \(m_{4}\approx 0\). Then clearly the general 5D solution is given by
\[\phi(X)=\left(Ae^{m_{5}x_{5}}+Be^{-m_{5}x_{5}}\right)\phi_{4}(x). \tag{27}\]
The boundary conditions at \(x_{5}=0,L\) then determine which such \(m_{4}\approx 0\) solutions exist, including their 5D profiles; these are called "zero modes" (or near-zero modes more accurately). In essence, at experimental "low" energies, the theory reduces to a 4D EFT of these zero-modes.
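For orientation, here is the simplest illustrative case (the choice of boundary conditions is mine, purely for illustration): a 5D scalar with \(m_{5}=0\) and Neumann conditions \(\partial_{5}\phi=0\) at \(x_{5}=0,L\). The separated solutions are then

\[f_{n}(x_{5})=\cos\Big(\frac{n\pi x_{5}}{L}\Big),\qquad m_{4,n}=\frac{n\pi}{L},\qquad n=0,1,2,\ldots,\]

so the \(n=0\) mode is an exactly massless zero mode with a flat profile, while the rest of the Kaluza-Klein tower sits at or above \(\mu_{KK}\) and drops out of the low-energy 4D EFT.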
See [12] for reviews of extra-dimensional field theory and phenomenology.
### Emergence of (Yukawa) Coupling Hierarchies
Now consider three species of 5D fields, \(\chi(X),\psi(X),\phi(X)\), where the first two have zero-modes leaning towards \(x_{5}=0\) and the last towards \(x_{5}=L\), as depicted in figure 5. We will consider a simple trilinear coupling between them,
\[S_{5D}\supset\lambda_{5}\int\,d^{4}x\int_{0}^{L}dx_{5}\ \psi(X)\chi(X)\phi(X). \tag{28}\]
For the zero modes, the \(x_{5}\) integral is controlled by the exponentially small values of the \(\psi\) and \(\chi\) profiles near \(x_{5}=L\), where \(\phi\) is supported, so the effective 4D coupling is suppressed by the small wavefunction overlap, parametrically
\[\lambda_{4,\rm eff}\sim\lambda_{5}\,e^{-(m_{5\psi}+m_{5\chi})L}. \tag{29}\]
Let us apply this result to a toy model of SM quark Yukawa couplings, where ignoring spin and chirality details, \(\chi\) represents the \(i\)th generation of a quark electroweak doublet, \(\psi\) represents the \(j\)th generation of a quark electroweak singlet, and \(\phi\) represents the Higgs doublet field. These SM fields start as fundamentally 5D fields, but it is only their 4D zero-modes that have been discovered. Then the trilinear coupling \(\lambda\) we just studied represents the Yukawa coupling \(Y_{ij}\). We therefore see that starting from a fundamental 5D Yukawa matrix of couplings \(Y_{5ij}\) which are more or less randomly distributed without large hierarchies, one predicts an exponentially hierarchical effective 4D Yukawa matrix,
\[Y_{4,\rm eff}^{ij}\sim Y_{5}^{ij}e^{-(m_{5i}+m_{5j})L}. \tag{30}\]
This is a very interesting structure qualitatively. If the 5D masses are not particularly degenerate, one can approximately diagonalize this Yukawa matrix to give quark mass eigenvalues and CKM mixing angles
\[m_{4,i}\sim e^{-2m_{5i}L}v_{\rm electroweak}\] \[V_{ij}^{\rm CKM}\mathop{\sim}_{m_{5i}>m_{5j}}\ \frac{e^{-m_{5i}L}}{e^{-m_{5j}L}}\sim\sqrt{\frac{m_{4i}}{m_{4j}}}. \tag{31}\]
This is not a bad qualitative fit to the hierarchical quark mass and CKM structure and the correlations between them that we observe!
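As a quick numerical sanity check of Eqs. (30)-(31) (a toy sketch only; the values of \(m_{5i}L\) below are my own illustrative assumptions, not a fit), anarchic \(O(1)\) 5D Yukawas combined with modestly spread bulk masses indeed produce exponentially hierarchical 4D eigenvalues:

```python
import numpy as np

# Toy check of Y_4^{ij} ~ Y_5^{ij} * exp(-(m_{5i} + m_{5j}) L), Eq. (30).
# The mL values per generation are illustrative assumptions, not fits.
rng = np.random.default_rng(0)
mL = np.array([7.0, 4.5, 1.0])            # assumed m_{5i} L, third generation least localized
Y5 = rng.uniform(0.5, 1.5, size=(3, 3))   # anarchic O(1) 5D Yukawa couplings
Y4 = Y5 * np.exp(-(mL[:, None] + mL[None, :]))  # effective 4D Yukawa matrix

# Singular values play the role of the (hierarchical) 4D Yukawa eigenvalues.
print("4D Yukawa eigenvalues:", np.linalg.svd(Y4, compute_uv=False))
```

The output spans roughly \(10^{-1}\) down to \(10^{-6}\): exponential hierarchies out of order-one inputs.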
To conclude, we have just found an attractive mechanism for understanding the hierarchical form of the Yukawa structure that would otherwise remain mysterious in the SM. See [3] for a rapid review of obtaining realistic flavor hierarchies from extra-dimensional wavefunction overlaps, plus original references.
### Boundary-localized fields and Sequestering
Figure 6: Two effectively boundary-localized fields, \(\psi_{4}\) and \(\phi_{4}\), interacting via exchange of a bulk \(\chi\) field. This exchange is Yukawa-suppressed if the bulk mass is large \(m_{5}L\gg 1\).
There are a couple of more concepts to introduce in 5D, which will be important later. Let us return to \(\chi,\psi,\phi\), keeping them general (without identifying them with quarks and Higgs fields). The first concept is the approximation of boundary-localization (or "brane-localization"), which applies when \(m_{5}L\gg 1\). In this limit clearly the zero-modes are stuck close to one or the other of the boundaries, and one can approximate them as propagating exclusively in 4D, either restricted to \(x_{5}=0\) or \(x_{5}=L\). Given the exponential profiles of zero modes, this approximation kicks in quickly, so \(m_{5}L\) need not be too big. Having seen how we can approach this boundary-localization limit, we can simply impose boundary-localization as fundamental. For example, we can take \(\psi_{4}\) and \(\phi_{4}\) to be perfectly localized at \(x_{5}=0\) and \(x_{5}=L\) respectively. Consequently, they can each self-interact without suppression, but they cannot directly interact with each other by locality, since they live on the two separated boundaries. But we take \(m_{5\chi}L>1\) while not too large. This allows \(\psi\) and \(\phi\) to interact by exchange of \(\chi\) if they have suitable trilinear couplings,
\[\lambda^{\prime}_{5}\int\,d^{4}x\int_{0}^{L}dx_{5}\,\psi^{2}(X) \chi(X) \sim\lambda^{\prime}_{5}\int\,d^{4}x\,\psi_{4}^{2}(x)\,\chi(x,x_{5}=0)\] \[\lambda^{\prime\prime}_{5}\int\,d^{4}x\int_{0}^{L}dx_{5}\,\phi^{ 2}(X)\chi(X) \sim\lambda^{\prime\prime}_{5}\int\,d^{4}x\,\phi_{4}^{2}(x)\,\chi(x,x_{5}= L). \tag{32}\]
Therefore the \(\psi-\phi\) interaction is given in terms of the 5D \(\chi\) exchange,
\[\propto\langle 0|T\{\chi(x,x_{5}=0)\ \chi(x^{\prime},x_{5}^{\prime}=L)\}|0 \rangle\propto e^{-m_{5\chi}L}. \tag{33}\]
Without needing the detailed form of the 5D propagator, this exponential is just the Yukawa-suppression one expects whenever a massive mediator field has to virtually traverse a distance \(L\) beyond its Compton wavelength at low energies (\(<m_{5\chi}\)). The full exchange is depicted in figure 6. Obviously, for \(m_{5\chi}L\gg 1\), the \(\psi-\phi\) interactions effectively shut off. Later in the SUSY context, we will use this natural mechanism for suppressing interactions between sets of boundary-localized 4D fields. It is known as "sequestering" [13].
### Non-renormalizability of Higher-dimensional EFT
Higher-dimensional field theories are non-renormalizable.8 Consider Yang-Mills theory, which in 4D is the classic renormalizable QFT. In 5D,
Footnote 8: Scalar field theory with only trilinear couplings is renormalizable but such a cubic potential is unbounded from below and therefore unphysical.
\[S_{5D}=-\frac{1}{4g_{5}^{2}}\int\,d^{4}x\int_{0}^{L}dx_{5}G^{a}_{MN}G^{MN}_{a}, \tag{34}\]
where we have chosen for later convenience the non-canonical normalization of the gauge fields \(A^{a}\) such that the coupling appears out in front of the action rather than in the interaction terms,
\[D_{M} =\partial_{M}+it^{a}A^{a}_{M}(X)\] \[G^{a}_{MN} =\partial_{M}A^{a}_{N}-\partial_{N}A^{a}_{M}+f^{abc}A^{b}_{M}A^{c}_{N}. \tag{35}\]
(The field redefinition \(A_{M}\to g_{5}A_{M}\) restores canonical normalization.) We see that the 5D gauge coupling \(g_{5}\) has mass dimension \(-1/2\) and therefore the theory is non-renormalizable in the 5D regime.
But this behavior changes below \(\mu_{KK}\) for the 4D EFT. Since \(m_{5}=0\) by 5D gauge invariance, the zero-modes have a \(x_{5}\)-independent profile, so that doing the \(x_{5}\) integral for these modes yields
\[S_{4,\rm eff}=-\frac{L}{4g_{5}^{2}}\int d^{4}x\,G_{\mu\nu}^{a}G_{a}^{\mu\nu}+A_ {5}\text{-terms}\,. \tag{36}\]
We see that the effective 4D gauge coupling is now dimensionless and given by
\[\frac{1}{g_{4,\rm eff}^{2}}=\frac{L}{g_{5}^{2}}\,. \tag{37}\]
This 4D Yang-Mills is asymptotically free, \(g_{4,\rm eff}\) running to become stronger in the IR.
Non-renormalizability of 5D field theories is not a disaster; it does, however, mean that they have to be treated by the methods of non-renormalizable EFT [14], with a finite energy range of validity above which some more UV-complete description must take over.
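To make the finite range of validity concrete, a standard dimensional-analysis estimate (not computed in the text): since \(g_{5}^{2}\) has dimension of length, the loop expansion parameter grows linearly with energy, and the 5D gauge EFT loses control around

\[\Lambda_{5D}\sim\frac{\ell}{g_{5}^{2}}=\frac{\ell}{g_{4,\rm eff}^{2}L},\]

where \(\ell\) is a loop factor (naive dimensional analysis suggests \(\ell\sim 24\pi^{3}\)). Above this scale some more UV-complete description must indeed take over.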
### The Ultimate \(m\to 0\) Limit
SUSY has been motivated from the top down, as a possible remnant of a superstring realization of quantum gravity, as well as from the bottom up if Nature gives us every spin of elementary particle possible with \(m\approx 0\), including spin \(3/2\). But so far, extra spacetime dimensions have only been motivated from the top down. Yet they too have a robust bottom-up motivation. I have surveyed the different possible particle spins _one at a time_ in terms of what mechanisms and symmetries robustly yield \(m=0\), and then the sense in which these structures might only be approximate, resulting in \(m\approx 0\). I could do this at weak coupling because then each particle approximately corresponds to a free field. But what about strong coupling and \(m\to 0\)? We have already seen that non-perturbatively, there can be emergent mass scales via dimensional transmutation. If we really wish to go all the way to a completely massless theory non-perturbatively, both explicit mass scales and emergent mass scales must be absent. In particular, this requires a theory where even the dimensionless couplings do not run, \(\beta(\alpha)=0\). Lacking characteristic mass scales such a theory also lacks characteristic length scales, and therefore has scale-invariance as a new symmetry. There is a strong conjecture that scale-symmetry combined with Poincare symmetry and the locality of QFT "accidentally" implies an even larger symmetry, namely conformal symmetry. See for example [15]. A 4D QFT with conformal symmetry is called a conformal field theory (CFT).
Remarkably, the famous AdS\({}_{5}\)/CFT\({}_{4}\) correspondence re-expresses 4D CFT "holographically" as an equivalent (or "dual") 5D quantum gravity theory [16]! The 5D theory realizes the conformal symmetry as the symmetry (isometry) of the background curved spacetime geometry, 5D Anti-de Sitter (AdS):
\[ds_{\rm AdS_{5}}^{2}=R^{2}\ \frac{\eta_{\mu\nu}dx^{\mu}dx^{\nu}- dx_{5}^{2}}{x_{5}^{2}}\,,\quad x_{5}>0\] \[\text{where }R=\text{constant AdS radius of curvature}\,. \tag{38}\]
This is analogous to how the 5D Poincare group is the symmetry of 5D Minkowski spacetime geometry. Of course, such a CFT can only be a subsector of the real world, since we know we have 4D GR with its characteristic mass scale \(M_{Pl}\), and the SM at least. If one includes these elements
of realism the holographically dual description is given by the Randall-Sundrum II (RS2) scenario [17].
It is also possible not to have an exact CFT (exactly massless strongly interacting QFT in the IR) but one in which conformal symmetry is approximate and broken spontaneously or softly in the IR (\(\ll M_{Pl}\)) as well as in the UV (\(\sim M_{Pl}\)). These have holographic duals called Randall-Sundrum I (RS1) models or "warped extra dimensions" [18]. They are similar to the extra-dimensional framework we have sketched in the earlier subsections, the major difference being that the bulk 5D spacetime is highly curved in a manner that however cannot be detected if one does not have the energy to resolve the extra dimension. This is the meaning of "warped" in this context. Much of the modeling of the Composite Higgs paradigm currently takes place in this 5D EFT framework [12]. The warped variant of generating Yukawa hierarchies from extra-dimensional wavefunction overlaps is CFT/AdS dual to the mechanism of "Partial Compositeness" in purely 4D [19][20], where the Yukawa hierarchies are created by strong-coupling non-perturbative RG effects, closely related to the physics of dimensional transmutation we have discussed.
The advantage of the 5D dual RS-level descriptions is that one does not need to completely specify the 4D CFT in detail, which would be equivalent to specifying the 5D quantum gravity in detail. Rather, one can describe the 5D theory with non-renormalizable EFT. It is easier (but still non-trivial) to check self-consistency of a 5D EFT than to check that one has a viable and UV-complete 5D quantum gravity! In this way, one can explore interesting physics that strongly coupled 4D theories might produce. For example, the possibility of traversable wormholes is explored within an RS2-like framework in [21]. The Composite Higgs scenario is conveniently explored within the RS1 framework.
## 7 Superspace and Superfields
With extended spacetime now being motivated as a convenient and simple way of representing theories with extended spacetime symmetries, we return to the extended spacetime symmetry of \(\mathcal{N}=1\) SUSY. We will try to write SUSY QFTs in terms of "superfields" \(\Phi(X)\), fields living on an extended spacetime, "superspace" [7, 8], which has SUSY as its geometric symmetry algebra:
\[X\equiv(x^{\mu},\theta_{\alpha},\overline{\theta}_{\beta}). \tag{39}\]
The extra coordinates allow us to represent the action of the supercharges in the simplest possible way, as "supertranslations", akin to the action of translations on Minkowski coordinates:
\[\theta_{\alpha}\rightarrow\theta_{\alpha}+\xi_{\alpha}\,,\ \ \overline{\theta}_{ \beta}\rightarrow\overline{\theta}_{\beta}+\overline{\xi}_{\beta}. \tag{40}\]
The \(\xi_{\alpha}\) are the transformation parameters we associate with the action of \(Q_{\alpha}\) (analogous to a translation vector \(a^{\mu}\) associated to the translation generator \(P_{\mu}\)), and therefore must be spinorial. They are independent of \(x\) because we are studying the global limit of SUSY here, not local SUSY. Furthermore, since the \(Q\) are anticommuting charges (ultimately coming from the fact that they come from a current that couples to the fermionic spin-\(3/2\) gravitino) the \(\xi_{\alpha}\) must be Grassmann numbers. For compatibility, the \(\theta_{\alpha}\) must also be spinor Grassmann coordinates, unlike the usual c-number \(x^{\mu}\) coordinates. This is what will separate SUSY from standard extra dimensions. Matching the conjugate relationship of \(Q\) and \(\bar{Q}\), \(\bar{\theta}_{\alpha}\) is just the conjugate of \(\theta_{\alpha}\).
In order for superspace to exhibit the "non-abelian" aspect, Eq. (18), of SUSY, it is crucial that the \(x^{\mu}\) also transform under the supercharges:
\[x^{\mu}\to x^{\mu}-i\theta_{\alpha}\overline{\xi}_{\beta}\sigma^{ \mu}_{\alpha\beta}+i\xi_{\alpha}\overline{\theta}_{\beta}\sigma^{\mu}_{\alpha\beta} \tag{41}\]
Note that the two transformation terms are hermitian conjugates of each other, keeping \(x\) "real", or, more correctly hermitian as an operator. To check these symmetry transformations, we first recall how ordinary (infinitesimal) translations work on fields on ordinary spacetime, \(\phi(x)\to\phi(x+a)=\phi(x)+a^{\mu}\partial_{\mu}\phi(x)\). We thereby identify the associated 4-momentum charge as the translation generator \(P_{\mu}=i\partial_{\mu}\) (the "\(i\)" for hermiticity). Similarly, supercharges encoding infinitesimal supertranslations on superfields on superspace are represented as differential operators on superspace:
\[Q_{\alpha} = \frac{\partial}{\partial\theta_{\alpha}}+i\overline{\theta}_{ \gamma}\sigma^{\mu}_{\alpha\gamma}\partial_{\mu}\] \[\overline{Q}_{\beta} = \frac{\partial}{\partial\overline{\theta}_{\beta}}+i\theta_{ \gamma}\sigma^{\mu}_{\gamma\beta}\partial_{\mu}\;. \tag{42}\]
The first of these encodes the transformation of superspace by \(\xi\), and the second by \(\bar{\xi}\). In each case, the first term on the right encodes infinitesimal translation of the \(\theta\) coordinates. The second term encodes the fact that \(x^{\mu}\) also receives a \(\bar{\theta}\) (or \(\theta\)) dependent infinitesimal translation. You can check that these supercharges indeed obey the superalgebra of eqs. (18), (19), and (20).
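For instance, spelling out the check invited above for the central relation, Eq. (18): using \(\{\partial/\partial\theta_{\alpha},\theta_{\gamma}\}=\delta_{\alpha\gamma}\) (and its barred analog), only the cross terms survive,

\[\{Q_{\alpha},\overline{Q}_{\beta}\}=\Big\{\frac{\partial}{\partial\theta_{\alpha}},\,i\theta_{\gamma}\sigma^{\mu}_{\gamma\beta}\partial_{\mu}\Big\}+\Big\{i\overline{\theta}_{\gamma}\sigma^{\mu}_{\alpha\gamma}\partial_{\mu},\,\frac{\partial}{\partial\overline{\theta}_{\beta}}\Big\}=2i\,\sigma^{\mu}_{\alpha\beta}\partial_{\mu}=2\,\sigma^{\mu}_{\alpha\beta}P_{\mu},\]

while the analogous terms in \(\{Q_{\alpha},Q_{\beta}\}\) and \(\{\overline{Q}_{\alpha},\overline{Q}_{\beta}\}\) cancel or vanish identically, reproducing Eq. (19).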
Fortunately, we will only have explicit need of the simplest type of superfield, namely a scalar field on superspace, \(\Phi(X)\), where by "scalar" I mean that only its spacetime argument transforms under SUSY as described above.
## 8 Chiral Superspace and Chiral Superfields
We have seen how SUSY acts on the three types of superspace coordinates \(x^{\mu},\theta_{\alpha},\bar{\theta}_{\beta}\). Remarkably, there is a kind of "projection" of full superspace down to "chiral superspace" which is closed under SUSY transformations. It has just two types of coordinates \(Y\equiv(y^{\mu},\theta_{\gamma})\), where
\[y^{\mu}\equiv x^{\mu}-i\theta_{\alpha}\sigma^{\mu}_{\alpha\beta }\bar{\theta}_{\beta}, \tag{43}\]
in terms of the original supercoordinates. Note that because of the "\(i\)", the \(y^{\mu}\) coordinate is not real (hermitian). Given the original superspace transformations, it is straightforward to check that under SUSY,
\[(y^{\mu},\theta_{\gamma})\to(y^{\mu}-2i\theta_{\alpha}\sigma^{ \mu}_{\alpha\beta}\bar{\xi}_{\beta},\theta_{\gamma}+\xi_{\gamma}), \tag{44}\]
independently of \(\bar{\theta}\).
We can therefore define a special kind of scalar superfield which only depends on chiral superspace, \(\Phi(y^{\mu},\theta_{\gamma})\). Clearly, under SUSY transformations such a "chiral superfield" retains its form, that is independent of \(\bar{\theta}\) except implicitly within \(y\).
In this simpler context, it is time to make a central point, that superfields are really just a finite collection of ordinary component fields which are in the same supermultiplet, that is they transform
into each other under SUSY. To see this we simply Taylor expand the \(\theta\) dependence of the chiral superfield,
\[\Phi(y,\theta)=\phi(y)+\sqrt{2}\psi_{\alpha}(y)\theta^{\alpha}+F(y) \theta^{2} \tag{45}\]
Because there are only two Grassmann coordinates here, \(\theta_{\alpha}\), and each of their squares vanishes because of their anticommuting with themselves, the Taylor expansion can not contain more than one power of each Grassmann coordinate. In particular the highest power of \(\theta\)s in the Taylor expansion must be \(\theta_{1}\theta_{2}=-\theta_{2}\theta_{1}\), or equivalently in the manifestly Lorentz invariant form \(\theta^{2}\equiv\epsilon^{\alpha\beta}\theta_{\alpha}\theta_{\beta}\). With a finite Taylor expansion, there are a finite number of Taylor coefficients which are fields of \(y^{\mu}\) alone. To make \(\Phi\) Lorentz-scalar, \(\phi\) and \(F\) must be complex Lorentz scalars, while \(\psi_{\alpha}\) must be a _chiral_ two-component spinor.
For some purposes, this \(y\) notation is compact and makes SUSY transformations look simpler, but in general, we will want to write the fields in terms of \(x\), corresponding to real spacetime. That means each component field has to be further Taylor expanded in the \(-i\theta\sigma^{\mu}\bar{\theta}\) deviation of \(y\) from \(x\):
\[\Phi(y^{\mu}=x^{\mu}-i\theta\sigma^{\mu}\bar{\theta},\theta) =\phi(x)-i\theta\sigma^{\mu}\bar{\theta}\partial_{\mu}\phi(x)-\frac{1}{4}\theta^{2}\overline{\theta}^{2}\Box\phi(x)\] \[\quad+\sqrt{2}\psi_{\alpha}(x)\theta^{\alpha}+\frac{i}{\sqrt{2}}\theta^{2}\overline{\theta}_{\alpha}\sigma^{\mu}_{\alpha\beta}\partial_{\mu}\psi_{\beta}(x)+F(x)\theta^{2}. \tag{46}\]
Obviously there is a totally analogous conjugate notion of anti-chiral superspace, \(\bar{Y}\equiv(\bar{y}^{\mu}=x^{\mu}+i\theta_{\alpha}\sigma^{\mu}_{\alpha\beta}\bar{\theta}_{\beta},\bar{\theta}_{\gamma})\), and the analogous anti-chiral superfield \(\bar{\Phi}(\bar{y},\bar{\theta})\). We need this because it houses the conjugates of the component fields of the chiral superfield.
## 9 The Wess-Zumino Model
Finally, we are in a position to write an actual simple renormalizable SUSY QFT, with remarkable properties. It is built from a single chiral superfield \(\Phi(Y)\) (and of course its anti-chiral conjugate \(\bar{\Phi}(\bar{Y})\)).
The question is how to build a SUSY invariant action. First recall how we build Poincare invariant actions,
\[S=\int\,d^{4}x\,\,\,\text{Lorentz}-\text{scalar}(x). \tag{47}\]
The integration measure \(d^{4}x\) is Poincare invariant and the Lagrangian integrand is a (composite) Lorentz-scalar field, which upon integration over all its translations (all values of \(x\)) becomes Poincare-invariant. We build SUSY-invariant actions the same way, find a SUSY-invariant integration measure and have the integrand be a (composite) scalar superfield:
\[S=\int\,d^{4}x\int\,d^{2}\theta\int\,d^{2}\overline{\theta}\, \,K(\Phi(Y),\overline{\Phi}(\bar{Y}))+\int\,d^{4}y\int\,d^{2}\theta\,\,W(\Phi (Y))+\int\,d^{4}\overline{y}\int\,d^{2}\overline{\theta}\,\,\overline{W}( \overline{\Phi}(\bar{Y})). \tag{48}\]
Let us begin with the first term. Here, the Grassmann integral measure, \(d^{2}\theta\equiv d\theta_{1}d\theta_{2}=\frac{1}{2}\epsilon^{\alpha\beta}d\theta_{\alpha}d\theta_{\beta}\) is Lorentz invariant just like \(d^{4}x\). This and the conjugate integral measure
\(\int d^{2}\theta\int d^{2}\bar{\theta}\) are usually abbreviated to "\(\int d^{4}\theta\)". You can check easily that the full measure \(\int d^{4}xd^{4}\theta\) is invariant under supertranslations, since these look like \(x\)-independent translations of \(x\) and simple translations of \(\theta,\bar{\theta}\). Since products of scalar fields are scalar fields, including scalar superfields on superspace, we can make the integrand by (sums of) products of the superfields \(\Phi,\bar{\Phi}\). This integrand is called the Kahler potential, \(K\). We require that \(K\) is hermitian so that the Lagrangian and Hamiltonian are.
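For readers less used to Grassmann integration, the rules behind these statements are the Berezin rules (stated here for completeness):

\[\int d\theta_{\alpha}\,1=0,\qquad\int d\theta_{\alpha}\,\theta_{\beta}=\delta_{\alpha\beta}\quad(\text{no sum}),\]

so a Grassmann integral simply picks out the coefficient of the top power of the corresponding coordinate. In particular \(\int d^{2}\theta\,\theta^{2}\) is a pure number (conventionally normalized to \(1\)), which is why \(\int d^{2}\theta\,W(\Phi)\) keeps only the \(\theta^{2}\) ("\(F\)-term") component of \(W\), and \(\int d^{4}\theta\,K\) keeps only the \(\theta^{2}\overline{\theta}^{2}\) ("\(D\)-term") component of \(K\).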
The action made from the Kahler potential would be SUSY invariant if \(\Phi\) were any scalar superfield. But the fact that \(\Phi\) is a chiral superfield offers another possibility in the second term of the action. Here the integrand is only made of sums and products of the chiral superfield, so that \(W\), the "superpotential", is also a composite chiral superfield on chiral superspace. Therefore the integration measure is only over chiral superspace coordinates. Again, it is straightforward to check the measure's SUSY invariance given Eq. (44). The third term is just the conjugate of the second to keep the sum hermitian.
The above action is the most general SUSY action with the fewest explicit derivatives, namely none, which is why both \(K\) and \(W\) are called "potentials". But derivatives will indeed arise from the Taylor expansion of the superfield into component fields, Eq. (46).
The fact that the superpotential is a function of \(y\) but not \(\bar{y}\) means we can shift the integration variable from \(y\) to just \(x\),
\[\int d^{4}y\int d^{2}\theta\ W(\Phi(y,\theta))=\int\ d^{4}x\int\ d^{2}\theta\ W(\Phi(x, \theta)). \tag{49}\]
As can be seen in Eq. (45), there are no derivatives hidden in the Taylor expansion of \(\Phi(x,\theta)\), so the superpotential part of the action does not give rise to derivatives, these only arise from the Kahler potential which depends on both \(y\) and \(\bar{y}\) which cannot both be shifted to \(x\) simultaneously.
It is notable that \(W\) is a complex analytic, or "holomorphic", function of the complex chiral superfield because it does not depend on the conjugate anti-chiral superfield. This yields deep insights peculiar to SUSY, which we only touch on later.
Let us do some quick dimensional analysis to write a renormalizable theory. Given the SUSY algebra we see that the Grassmann coordinates \(\theta,\bar{\theta}\) have mass dimension \(-1/2\). Given that the component spinor fermion will turn out to be a canonical fermion with mass dimension \(3/2\), and component scalar field \(\phi(x)\) will be a canonical scalar field, the superfield \(\Phi\) has mass dimension \(1\). The component scalar field \(F\) has the unusual dimension \(2\), and we will see why soon. The Grassmann measure \(d\theta\) has the _opposite_ dimension to the Grassmann coordinate, namely \(+1/2\), to satisfy the axiomatic \(\int d\theta\ \theta=1\). This means \(K\) has dimension \(2\), and \(W\) has dimension \(3\). In order to not use couplings with negative mass dimensions, the standard indicator of non-renormalizability, we are restricted to:
\[K=\overline{\Phi}\Phi,\quad W=\frac{\lambda}{3}\Phi^{3}+\frac{m}{2}\Phi^{2}. \tag{50}\]
Note, \(K\) needs both chiral and antichiral fields to be non-trivial, so dimension \(2\) restricts it to the above, and it has coefficient \(1\) by choice of the normalization of \(\Phi\). \(W\) could clearly be any cubic polynomial in \(\Phi\), but a constant term would not survive the \(\int d^{2}\theta\) integration, and a linear
term could be redefined away by a \(\Phi\to\Phi+\) constant shift.9 This renormalizable structure is the Wess-Zumino model.
Footnote 9: This is as long as there are \(\Phi^{2}\) or higher order terms. We will study a case where this is not true soon.
We can now write this out in terms of the component fields using Eq. (46), and then do the Grassmann integrations:
\[S= \int d^{4}x\{\partial_{\mu}\overline{\phi}(x)\partial^{\mu}\phi(x)+\overline{\psi}i\sigma^{\mu}\partial_{\mu}\psi(x)+\overline{F}F(x)\] \[-\lambda\phi(x)\psi^{2}(x)-\frac{m}{2}\psi^{2}(x)+(\lambda\phi^{2}+m\phi)F(x)+h.c.\}. \tag{51}\]
We have done an integration by parts to put it in this form. Here the first line comes from \(K\) and the second from \(W\). We see that the dimension 2 scalar fields \(F,\overline{F}\) appear quadratically and without derivatives. If you think of this action in a path integral, this means that you can easily do the Gaussian integrals over \(F\), \(\overline{F}\) by "completing the square". It is entirely equivalent to just solving the \(F,\overline{F}\) equations of motion for these "auxiliary" fields in terms of the other fields, and plugging the result back into the action. The result is
\[S=\int d^{4}x\{\partial_{\mu}\overline{\phi}\partial^{\mu}\phi+\overline{ \psi}i\sigma^{\mu}\partial_{\mu}\psi-|m\phi+\lambda\phi^{2}|^{2}-(\frac{m}{2}+ \lambda\phi)\psi^{2}(x)+h.c.\}. \tag{52}\]
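For completeness, here is the auxiliary-field step spelled out. From Eq. (51),

\[\frac{\partial{\cal L}}{\partial\overline{F}}=F+\overline{\lambda\phi^{2}+m\phi}=0\quad\Longrightarrow\quad\overline{F}F+\Big[(\lambda\phi^{2}+m\phi)F+h.c.\Big]\ \longrightarrow\ -\big|m\phi+\lambda\phi^{2}\big|^{2},\]

which is precisely the scalar potential appearing in Eq. (52).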
We have arrived at a QFT with renormalizable interactions consisting of Yukawa interactions between fermion and scalar, and scalar trilinear and quartic self-interactions. But the various couplings and masses are strongly correlated, as dictated by its manifestly SUSY construction. For example the quartic scalar coupling is the square of the Yukawa coupling. But in particular, we see the anticipated SUSY degeneracy of fermion and boson superpartners, \(m_{\phi}=m_{\psi}=m\). SUSY guarantees this structure is maintained radiatively, upon renormalization the counterterms must also have this special form. The Wess-Zumino model finally shows us that the abstract SUSY algebra, which we deduced from general considerations, is actually realizable within an interacting QFT.
### Robust interacting massless scalar
We can further consider the massless limit \(m\to 0\):
\[S\underset{m\to 0}{\rightarrow}\int d^{4}x\{\partial_{\mu}\overline{\phi} \partial^{\mu}\phi+\overline{\psi}i\sigma^{\mu}\partial_{\mu}\psi-\lambda| \phi|^{4}-\lambda\phi\psi^{2}(x)+h.c.\}. \tag{53}\]
Note that this limit is robust, because there is a new chiral symmetry when \(m=0\),
\[\psi\to e^{-i\alpha/3}\psi,\quad\phi\to e^{+2i\alpha/3}\phi. \tag{54}\]
Such chiral symmetries can make fermions robustly massless in Yukawa QFTs, but what is special is that the robust masslessness extends to the scalar because of SUSY degeneracy. In this way, we have an interacting massless (or light) scalar with order-one coupling (with only logarithmic running). SUSY is the only known robust mechanism for such scalars, at least in the perturbative regime.
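To make the selection rule explicit, the assignments of Eq. (54) act on the terms of Eq. (53) as

\[\lambda\,\phi\,\psi^{2}\ \to\ e^{i\alpha(\frac{2}{3}-\frac{1}{3}-\frac{1}{3})}\,\lambda\,\phi\,\psi^{2}=\lambda\,\phi\,\psi^{2},\qquad\lambda|\phi|^{4}\ \to\ \lambda|\phi|^{4},\qquad\frac{m}{2}\,\psi^{2}\ \to\ e^{-\frac{2i\alpha}{3}}\,\frac{m}{2}\,\psi^{2},\]

so the interactions are invariant while a fermion mass term is forbidden; by the SUSY degeneracy, the scalar mass is then forbidden too.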
### R-symmetries
The chiral symmetry is at first sight a bit strange in that it acts differently on the two superpartners, in particular it appears we cannot assign the entire superfield \(\Phi(X)\) a charge under this symmetry. Indeed, this cannot be done if we think of it as an "internal" symmetry which does not act on the superspace \(X\) argument of the superfield. But if we also rotate the complex Grassmann coordinates, \(\theta\to e^{i\alpha}\theta,\bar{\theta}\to e^{-i\alpha}\bar{\theta}\), then we see that by Eq. (45), we can assign \(\Phi\) intrinsic charge \(2/3\). Such symmetries which rephase \(\theta\) are known as R-symmetries. They are not required by SUSY, for example the massive Wess-Zumino model is supersymmetric but has no R-symmetries. (But even when absent the nature of their violation can be useful to track.)
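In terms of components the bookkeeping is straightforward (with \(\theta\) carrying R-charge \(+1\), as above): \(R(\phi)=\tfrac{2}{3}\), \(R(\psi)=\tfrac{2}{3}-1=-\tfrac{1}{3}\), \(R(F)=\tfrac{2}{3}-2=-\tfrac{4}{3}\), matching Eq. (54). And since \(d^{2}\theta\) carries R-charge \(-2\), the superpotential must carry \(R(W)=+2\):

\[R\Big(\tfrac{\lambda}{3}\Phi^{3}\Big)=3\times\tfrac{2}{3}=2,\qquad R\Big(\tfrac{m}{2}\Phi^{2}\Big)=2\times\tfrac{2}{3}=\tfrac{4}{3}\neq 2,\]

which is the superspace way of seeing that the cubic coupling is allowed while the mass term breaks this R-symmetry.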
### Non-renormalizable SUSY EFT
We can also take \(K\) and \(W\) to be more general, in which case they describe a non-renormalizable but supersymmetric EFT. We can again Taylor expand \(\Phi\) in Grassmann coordinates, Eq. (46), and use that to then Taylor expand \(K\) and \(W\). Doing the various straightforward Grassmann coordinate integrations (and some integrations by parts) yields the action in terms of the component fields,
\[{\cal L} =\frac{\partial^{2}K}{\partial\Phi\partial\bar{\Phi}}(\phi(x))\{|\partial_{\mu}\phi|^{2}+\overline{\psi}i\sigma\cdot\partial\psi+|F|^{2}\}+\frac{1}{4}\frac{\partial^{4}K}{\partial^{2}\Phi\partial^{2}\bar{\Phi}}\psi^{2}(x)\overline{\psi}^{2}(x)\] \[\quad+\left(\frac{\partial^{3}K}{\partial^{2}\Phi\partial\bar{\Phi}}(\frac{i}{2}\bar{\psi}\sigma^{\mu}\psi\partial_{\mu}\phi+\psi^{2}\overline{F})+\frac{\partial^{2}W}{\partial\Phi^{2}}(\phi(x))\psi^{2}+\frac{\partial W(\phi)}{\partial\Phi}F+h.c.\right)\] \[\xrightarrow[F,\overline{F}\ {\rm eqs.}]{}\ \frac{\partial^{2}K}{\partial\Phi\partial\bar{\Phi}}(\phi(x))\{|\partial_{\mu}\phi|^{2}+\overline{\psi}i\sigma\cdot\partial\psi\}+\left(\frac{\partial^{2}W}{\partial\Phi^{2}}(\phi)\psi^{2}(x)+h.c.\right)\] \[\quad+\left(\frac{\partial^{3}K}{\partial^{2}\Phi\partial\bar{\Phi}}\frac{i}{2}\bar{\psi}\sigma^{\mu}\psi\partial_{\mu}\phi+h.c.\right)\] \[\quad+\frac{1}{4}\frac{\partial^{4}K}{\partial^{2}\Phi\partial^{2}\bar{\Phi}}\psi^{2}(x)\overline{\psi}^{2}(x)\quad-\left(\left|\frac{\partial W}{\partial\Phi}(\phi)+\frac{\partial^{3}K}{\partial^{2}\bar{\Phi}\partial\Phi}\overline{\psi}^{2}\right|^{2}\right)\Bigg{/}\left(\frac{\partial^{2}K}{\partial\Phi\partial\bar{\Phi}}(\phi)\right). \tag{55}\]
In the second step we have again integrated out the auxiliary fields \(F\), \(\bar{F}\).
This is the most general SUSY EFT of the chiral superfield at 2-derivative order (1-derivative in fermions). We will give a simple application of this structure in the next section on SUSY breaking. Terms with more derivatives would require generalizing the SUSY Lagrangian beyond \(K,W\) to integrands with explicit superspace derivatives. But in EFT the higher derivatives would be less important in the IR and therefore can be consistently dropped.
## 10 An EFT of Spontaneous SUSY Breaking
We can now give the simplest example of spontaneous breaking of SUSY, with a non-renormalizable EFT. This is somewhat analogous to the well-known non-linear \(\sigma\)-models that describe spontaneous breaking of ordinary (non-abelian) internal symmetries in non-renormalizable EFT, which may then be further UV-completed at higher energies.
Before specifying the theory, it is worth seeing one general implication of spontaneous SUSY breaking. By taking the spinor trace of Eq. (18), we see that the Hamiltonian in SUSY is a sum of
squares of supercharges and therefore positive semi-definite10:
Footnote 10: It is important for this that we are restricting ourselves to the \(G_{N}\to 0\) limit in which SUGRA decouples and SUSY is a global symmetry.
\[H=|Q_{1}|^{2}+|Q_{2}|^{2}\geq 0. \tag{56}\]
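Explicitly, tracing Eq. (18) over the spinor index, with \({\rm tr}\,\sigma^{0}=2\) and \({\rm tr}\,\sigma^{i}=0\),

\[\sum_{\alpha=1,2}\{Q_{\alpha},\overline{Q}_{\alpha}\}=2P_{\mu}\,{\rm tr}\,\sigma^{\mu}=4P_{0}=4H\quad\Longrightarrow\quad H=\tfrac{1}{4}\sum_{\alpha}\big(Q_{\alpha}Q_{\alpha}^{\dagger}+Q_{\alpha}^{\dagger}Q_{\alpha}\big)\geq 0,\]

which is the content of Eq. (56), with the normalization absorbed into the shorthand \(|Q_{\alpha}|^{2}\).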
If the vacuum breaks SUSY, then at least one supercharge must not annihilate it,
\[Q_{1}|0\rangle\ {\rm or}\ Q_{2}|0\rangle\neq 0, \tag{57}\]
which then implies that
\[\langle 0|H|0\rangle\underset{\rm pert.}{=}V_{\rm effective}>0. \tag{58}\]
In a perturbative theory, this means that the minimum of the effective potential energy will be positive.
In honor of non-supersymmetric \(\sigma\) models, we will call our chiral superfield \(\Sigma(y,\theta)=\sigma(y)+\sqrt{2}\psi_{\Sigma}(y)\theta+F_{\Sigma}(y)\theta^{2}\), rather than the generic "\(\Phi\)". Specifically, we take
\[K =\overline{\Sigma}\Sigma-\frac{\overline{\Sigma}^{2}\Sigma^{2}}{ 4M^{2}},\quad M\sim\mathcal{O}(M_{Pl})\] \[W =\Lambda^{2}\Sigma. \tag{59}\]
We note that \(K\) has a non-renormalizable term, with a negative-dimension coupling. We will take the scale of nonrenormalizability to be Planckian, \(M\sim\mathcal{O}(M_{Pl})\), so that it only requires UV-completion in the ultimate quantum-gravity/string-theory regime. Also note that we have a linear superpotential, and it cannot be redefined away as we did in the Wess-Zumino model because that required quadratic or cubic terms to do.
From Eq. (55), we read off the scalar potential,
\[V(\sigma)=\left|\frac{\partial W}{\partial\Sigma}\right|^{2}( \sigma)\Bigg{/}\frac{\partial^{2}K}{\partial\Sigma\partial\overline{\Sigma}}( \sigma)=\frac{|\Lambda|^{4}}{1-\frac{|\sigma|^{2}}{M^{2}}}, \tag{60}\]
Figure 7: The scalar potential for our SUSY EFT has positive minimum, indicating spontaneous SUSY breaking.
illustrated in figure 7. Clearly, it has a local minimum at the origin of field space, \(\langle\sigma\rangle=0\), and indeed it is positive,
\[\langle\sigma\rangle=0,\quad V_{\rm vacuum}=|\Lambda|^{4}>0. \tag{61}\]
You may also notice that, taken literally, the potential can become negative for \(|\sigma|>M\), suggesting that \(\sigma=0\) is not the true ground state and that in fact the potential is unbounded from below, which is unphysical. However, we cannot trust such a conclusion because \(|\sigma|>M\) lies outside strict EFT control since the EFT is breaking down at scales of order \(M\). To know what really happens at large \(\sigma\) would require the UV completion of this EFT to all scales. Such renormalizable SUSY breaking models do indeed exist, but we will not need them here. In this non-renormalizable EFT, we can still conclude that \(\sigma=0\) is potentially a metastable (on cosmological timescales) or absolutely stable vacuum, depending on the details of a full UV complete extension of the EFT. Either way, \(\sigma=0\) has positive vacuum energy and therefore represents spontaneous SUSY breaking (at least for cosmological timescales).
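Expanding Eq. (60) about the origin makes the spectrum transparent (a small step left implicit here):

\[V(\sigma)=|\Lambda|^{4}\Big(1+\frac{|\sigma|^{2}}{M^{2}}+\frac{|\sigma|^{4}}{M^{4}}+\cdots\Big),\]

so the vacuum energy is \(|\Lambda|^{4}\) and the \(\sigma\) fluctuation acquires the mass \(m_{\sigma}^{2}=|\Lambda|^{4}/M^{2}\) quoted in Eq. (63) below, while nothing in the potential gives \(\psi_{\Sigma}\) a mass.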
We can go back and calculate \(\langle F_{\Sigma}\rangle\) from the auxiliary field equation of motion,
\[F_{\Sigma}=\frac{\Lambda^{*2}}{1-\frac{|\sigma|^{2}}{M^{2}}},\quad\langle F_{ \Sigma}\rangle=\Lambda^{*2}\neq 0. \tag{62}\]
Non-vanishing \(\langle F\rangle\) VEVs are order-parameters for spontaneous SUSY breaking. This is because when we solve for the auxiliary field \(F\), its square (or sum of squares when there are several chiral superfields) contributes to the effective potential energy.
Phenomenologically, the thing that stands out is that superpartners are no longer degenerate after spontaneous SUSY breaking,
\[m_{\sigma}^{2} =\frac{|\Lambda|^{4}}{M^{2}}\] \[m_{\psi_{\Sigma}} =0. \tag{63}\]
### The Goldstino and the Gravitino
The vanishing of the fermion mass is not specific to this particular model of spontaneous SUSY breaking. Rather it is a parallel of Goldstone's Theorem for ordinary internal symmetries. There, a massless scalar robustly arises; for spontaneous SUSY breaking it is a massless fermion that is robustly predicted [7][8][10]. In honor of the parallel this massless fermion is called a Goldstone fermion, or more often, "Goldstino".
Again paralleling internal symmetries, where when the symmetry is gauged the gauge field "eats" the Goldstone particle to become massive, here too when we do include SUGRA, the gravitino "eats" the Goldstino to become massive [11],
\[m_{\rm gravitino}=``m_{3/2}"\sim\frac{\Lambda^{2}}{M_{Pl}}. \tag{64}\]
As a result, the massless Goldstino is ultimately not part of the physical spectrum once SUGRA is included, but roughly describes the longitudinal polarizations of the massive gravitino. This is the "super-Higgs" mechanism.
We will refer to a sector of particle physics which spontaneously breaks SUSY as a "hidden sector" for phenomenological purposes, hidden in the sense that it is taken not to have SM-charged fields within it. The simplest hidden sector, which is mostly what we consider, is the non-renormalizable model we have just discussed with just the chiral gauge-singlet superfield \(\Sigma\).
## 11 The Renormalizable Minimal Supersymmetric SM (MSSM) [8]
Before we consider the gauge superfields, the SM matter fields can be elevated to chiral superfields,
\[Q_{i} =\tilde{q}_{i}(y)+\sqrt{2}q_{i}(y)\theta+F_{q_{i}}\theta^{2}\] \[L_{i} =\tilde{l}_{i}(y)+\sqrt{2}l_{i}(y)\theta+F_{l_{i}}\theta^{2}\] \[U_{i}^{c} =\tilde{u}_{i}^{c}(y)+\sqrt{2}u_{i}^{c}(y)\theta+F_{u_{i}^{c}}\theta^{2}\] \[D_{i}^{c} =\tilde{d}_{i}^{c}(y)+\sqrt{2}d_{i}^{c}(y)\theta+F_{d_{i}^{c}}\theta^{2}\] \[E_{i}^{c} =\tilde{e}_{i}^{c}(y)+\sqrt{2}e_{i}^{c}(y)\theta+F_{e_{i}^{c}}\theta^{2}\] \[\mathcal{H}_{u} =H_{u}(y)+\sqrt{2}\tilde{H}_{u}(y)\theta+F_{H_{u}}\theta^{2}\] \[\mathcal{H}_{d} =H_{d}(y)+\sqrt{2}\tilde{H}_{d}(y)\theta+F_{H_{d}}\theta^{2}, \tag{65}\]
(almost) as anticipated in section 4, and given a renormalizable SUSY action which is a straightforward generalization of the Wess-Zumino model. The \(i\) indices refer to SM generations. Anticipating gauging, we restrict the theory to respect global internal (non-R) symmetries \(SU(3)_{QCD}\times SU(2)_{EW}\times U(1)_{Y}\) prior to their gauging. The minimal superpotential with these properties and which contains all the SM Yukawa couplings in its component expansion is11
Footnote 11: In these lectures, for simplicity I will completely neglect neutrino masses.
\[W_{\text{Yukawa}} =Y_{ij}^{u}U_{i}^{c}\mathcal{H}_{u}Q_{j}+Y_{ij}^{d}D_{i}^{c} \mathcal{H}_{d}Q_{j}+Y_{ij}^{e}E_{i}^{c}\mathcal{H}_{d}L_{j}\supset Y_{ij}^{e }e_{i}^{c}H_{d}l_{j}+\cdots \tag{66}\]
The subtlety arises because of the holomorphy of \(W\) in chiral superfields. In the SM one uses the Higgs doublet to Yukawa couple to up-type fermions and its conjugate to couple to down-type fermions, but we cannot use the conjugate anti-chiral superfield in \(W\). We are therefore forced to introduce separate Higgs chiral superfields (with conjugate electroweak quantum numbers) for up-type and down-type Yukawa couplings, as indicated. Therefore there is another electroweak-invariant renormalizable superpotential term just involving these two Higgs doublet superfields, called the "\(\mu\)" term:
\[W_{\mu\text{-term}}=\mu\mathcal{H}_{u}\mathcal{H}_{d}. \tag{67}\]
There are other possible \(SU(3)_{QCD}\times SU(2)_{EW}\times U(1)_{Y}\)-symmetric renormalizable superpotential terms possible, such as \(D^{c}LQ\), but all of these can be forbidden if we impose another symmetry, "R-parity", in which every SM field is parity-even and every superpartner of a SM field is parity-odd.12 R-parity is not strictly necessary, but it is definitely simplifying (my main reason here)
and also makes the lightest superpartner stable and therefore potentially a dark matter candidate. The Kahler potential is so tightly constrained by renormalizablility and the SM symmetries, that (after familiar wavefunction diagonalization and normalization) it has the canonical form
\[K=\overline{Q}_{i}Q_{i}+\overline{L}_{i}L_{i}+\overline{U}_{i}^{c}U_{i}^{c}+ \overline{D}_{i}^{c}D_{i}^{c}+\overline{E}_{i}^{c}E_{i}^{c}+\overline{\mathcal{ H}_{u}}\mathcal{H}_{u}+\overline{\mathcal{H}_{d}}\mathcal{H}_{d}. \tag{68}\]
### Gauge Superfields
Given that ordinary gauge fields \(A_{\mu}\) are real-valued, their superfields including gauginos are found in what are called "real superfields", but more precisely they are scalar superfields which are hermitian (and not chiral):
\[V(x,\theta,\overline{\theta})=\overline{V}(x,\theta,\overline{\theta}). \tag{69}\]
They are written "\(V\)" rather than "\(\Phi\)" because they are also sometimes called "vector superfields", presumably because they contain the 4-vector gauge fields, but they are scalar fields on superspace in that only their superspace arguments transform under SUSY. They are also called "gauge superfields".
Now turn to charged matter fields. Ordinarily a matter field transforms under a gauge transformation as
\[\psi(x)\longrightarrow e^{i\alpha^{a}(x)t^{a}}\psi(x), \tag{70}\]
where the \(t^{a}\) are relevant matrix-valued generators of the gauge group under which \(\psi\) is a charged multiplet. But in SUSY these fields, including the gauge transformation itself, must be elevated to superfields. Since our matter fields are chiral superfields, this must also be true of the gauge transformation. We therefore have
\[\Phi(y,\theta)\longrightarrow e^{\Lambda^{a}(y,\theta)\;t^{a}}\; \Phi(y,\theta)\] \[\overline{\Phi}(\overline{y},\overline{\theta})\longrightarrow \overline{\Phi}(\overline{y},\overline{\theta})\;e^{\overline{\Lambda}^{a}( \overline{y},\overline{\theta})\;t^{a}}\;. \tag{71}\]
\(\bar{\Phi}\) is taken to be a conjugate row vector of the charged multiplet, while \(\Phi\) is a column. Note that the gauge transformation \(\Lambda^{a}\) can no longer be real (hermitian), so we can absorb the usual "i" in Eq. (70) into its definition.
The need for gauge superfields to ensure gauge invariance of the kinetic terms (\(K\)) of the matter fields is easy to see. For simplicity consider the renormalizable abelian case without gauge superfields:
\[K=\overline{\Phi}\Phi\longrightarrow\overline{\Phi}\;e^{\overline{\Lambda}}\; e^{\Lambda}\;\Phi(y,\theta). \tag{72}\]
It is not invariant because \(\Lambda\neq-\bar{\Lambda}\); they are not even fields of the same type. This is fixed by introducing the abelian gauge superfield \(V\), transforming as
\[V(x,\theta,\overline{\theta})\longrightarrow V(x,\theta,\overline{\theta})- \Lambda(y,\theta)-\overline{\Lambda}(\overline{y},\overline{\theta}). \tag{73}\]
Straightforwardly then,
\[S\supset\int\,d^{4}x\int\,d^{4}\theta\;\overline{\Phi}e^{V}\Phi \tag{74}\]
is both gauge-invariant and SUSY-invariant. Even though we will continue to write \(K=\bar{\Phi}\Phi\), think of it as short-hand for this gauge-invariant inclusion of \(V\) to make the Lagrangian gauge-invariant.
For non-abelian gauge theory, we still have
\[S\supset\int d^{4}x\int\,d^{4}\theta\ \overline{\Phi}e^{V^{a}\,t^{a}}\Phi, \tag{75}\]
with gauge-invariance following from
\[e^{V^{a}\,t^{a}}\longrightarrow e^{-\overline{\Lambda}^{a}\,t^{a}}\ e^{V^{b}\,t ^{b}}\ e^{-\Lambda^{c}\,t^{c}}. \tag{76}\]
### Wess-Zumino Gauge
Ordinary axial gauge \(A_{3}(x)=0\) is a Lorentz-violating but sometimes useful partial gauge-fixing condition, which effectively reduces the number of gauge-field components. A general gauge field can be put in this form by a suitable gauge transformation. In analogy, Wess-Zumino gauge is a SUSY-violating but (Lorentz-preserving) partial gauge-fixing which reduces the number of component fields of \(V\):
\[V^{a}(x,\theta,\overline{\theta})=A^{a}_{\mu}(x)\theta_{\alpha}\bar{\sigma}^{ \mu}_{\alpha\beta}\overline{\theta}_{\beta}+(\lambda^{a}(x)\theta)\overline{ \theta}^{2}+(\overline{\lambda}^{a}(x)\overline{\theta})\theta^{2}+\frac{1}{2} D^{a}(x)\theta^{2}\overline{\theta}^{2}, \tag{77}\]
where \(\bar{\sigma}^{i}=-\sigma^{i},\bar{\sigma}^{0}=\sigma^{0}\).
### Gauge field strength and gauge field action
It is possible to construct a gauge-invariant _chiral_ superfield as a composite of the gauge superfield \(V\),
\[{\cal W}^{a}_{\alpha}{\cal W}^{a\,\alpha}(y,\theta) = \frac{1}{2}\lambda^{a}_{\alpha}\lambda^{a\,\alpha}+\cdots \tag{78}\] \[+\left(\frac{1}{4}G^{a\,2}_{\mu\nu}+\frac{i}{4}\tilde{G}^{a}_{\mu\nu}G^{a\,\mu\nu}+\overline{\lambda}^{a}i\sigma.D\lambda^{a}+\frac{1}{2}D^{a}D^{a}\right)\theta^{2},\]
where \(\tilde{G}_{\mu\nu}\equiv 1/2\ \epsilon_{\mu\nu\rho\sigma}G^{\rho\sigma}\). We will not need the terms linear in \(\theta\). Each factor \({\cal W}^{a}_{\alpha}\) is a _spinor chiral_ superfield generalizing the covariant gauge field strength, which I have not defined, but we will only need its gauge-invariant "square" \({\cal W}_{\alpha}{\cal W}^{\alpha}\) which transforms under SUSY like the elementary chiral superfields we have discussed. Clearly it has mass dimension 3. Since it is chiral, we can write a renormalizable action,
\[S_{\rm gauge} = \frac{1}{4g^{2}}\int\,d^{2}\theta\,{\cal W}^{a}_{\alpha}{\cal W}^ {a\,\alpha}+h.c. \tag{79}\] \[= -\frac{1}{4g^{2}}G^{a\,2}_{\mu\nu}+\frac{1}{g^{2}}\overline{ \lambda}^{a}i\sigma.D\lambda^{a}+\frac{D^{a}D^{a}}{g^{2}}.\]
We see that there is a new set of auxiliary fields, \(D^{a}\). I am again using the normalization where the gauge coupling is out the front of the action and not in interaction terms, but one can return to the canonical normalization by field redefinition.
It will be useful later to also allow non-renormalizable interactions at the two-derivative level (one-derivative for fermions) of EFT, between chiral matter superfields and the gauge superfields,
\[\delta S=\int\,d^{2}\theta\,\,f(\Phi)\mathcal{W}^{a}_{\alpha}\mathcal{W}^{a\, \alpha}+h.c. \tag{80}\]
The function of chiral superfields \(\Phi\) is holomorphic for the same reason the superpotential is, and is known as the "gauge coupling function" because it is clearly a generalization of the renormalizable coupling \(1/g^{2}\). That is, it is a field-dependent gauge coupling. We will allow an independent gauge coupling function for each gauge group. We take it to be a gauge-invariant function of fields, as is \(\mathcal{W}^{a}_{\alpha}\mathcal{W}^{a\,\alpha}\).13
Footnote 13: There is a more general option \(f_{ab}(\Phi)\mathcal{W}^{a}_{\alpha}\mathcal{W}^{b\alpha}\) where \(f_{ab}\) is not gauge-invariant but we will not need this, and discard it for simplicity.
### Component form of charged field gauged kinetic terms
Unpacking the \(\int\,d^{4}\theta\,\bar{\Phi}e^{V}\Phi\), the couplings of \(V\) introduce the expected \(\partial_{\mu}\rightarrow\partial_{\mu}-iA^{a}_{\mu}t^{a}\) in all the derivatives of the charged fields. In addition, they give rise to a set of Yukawa-type interactions of the gaugino,
\[\mathcal{L}_{\lambda}=\frac{1}{g^{2}}\overline{\lambda}^{a}i\sigma.D\lambda^{ a}+\overline{\phi}\lambda^{a}_{\alpha}t^{a}\psi_{\beta}\epsilon^{\alpha \beta}+h.c. \tag{81}\]
where the first term comes from the gauge superfield action, Eq. (79). Finally, there are terms linear in the auxiliary fields \(D^{a}\). Taking into account the quadratic terms in these fields from the gauge action, Eq. (79), and eliminating these fields using their equations of motion, gives rise to a scalar potential called the "D-term potential",
\[V_{D}=\frac{1}{2}\sum_{a}g_{a}^{2}\left(\sum_{i}\overline{\phi}_{i}t^{a}_{(i)} \phi_{i}\right)^{2}. \tag{82}\]
The index \(a\) labels all gauge generators of all gauge groups, and \(g_{a}\) denotes the gauge coupling appropriate to that generator (so a simple gauge group has the same \(g_{a}\) for all its generators, \(a\)). The index \(i\) labels the different charged species and \(t^{a}_{(i)}\) are the gauge generator matrices in the representation \(i\). The full scalar potential is then given by the sum of D- and F-term potentials,
\[V_{\text{scalar}}(\phi,\overline{\phi})=V_{D}+V_{F}. \tag{83}\]
### Renormalizable Feynman rules
It is useful to picture the structure of the renormalizable MSSM in general terms. This is shown schematically in figure 8.
Note that I have reverted to canonical normalization for the gauge fields, where gauge couplings \(g\) appear only in interactions and not in propagators, as is useful in practical work, rather than the normalization with \(1/g^{2}\) multiplying the entire gauge kinetic terms, which is useful in some of our more theoretical and conceptual discussion. Detailed forms of these rules with all the relevant factors and representation matrices can be found in [8]. (One passes from the latter to canonical normalization by the field redefinition \(A\to gA\).)
The first interaction shown is the Yukawa-type interaction of the gaugino mentioned above. The \(g^{2}\) scalar quartic interaction comes from \(V_{D}\). All the other couplings of strength \(g\) or \(g^{2}\) just follow from gauge invariance without any special SUSY considerations, just the gauge charge of the ordinary component fields. The various Yukawa couplings of strength \(Y\) arise from expanding the superpotential action with two fermions and one scalar field. The last two interactions arise from \(V_{F}\). In particular, the last interaction arises from the cross-term in \(V_{F}\) depending on both the \(\mu\)-term and the Yukawa coupling.
## 12 Soft SUSY breaking from Spontaneous SUSY breaking
Consider the following non-renormalizable couplings between the MSSM matter and the simple hidden sector of SUSY breaking discussed earlier:
\[{\cal L}_{\rm matter-hid} = \int d^{4}\theta\,\frac{\overline{\Sigma}\Sigma}{M_{Pl}^{2}}\left(c_{ij}^{Q}\overline{Q}_{i}Q_{j}+c_{ij}^{U}\overline{U}_{i}^{c}U_{j}^{c}\right. \tag{84}\] \[+\left.c_{ij}^{D}\overline{D}_{i}^{c}D_{j}^{c}+c_{ij}^{L}\overline{L}_{i}L_{j}+c_{ij}^{E}\overline{E}_{i}^{c}E_{j}^{c}\right.\] \[\left.+c_{u}\overline{\cal H}_{u}{\cal H}_{u}+c_{d}\overline{\cal H}_{d}{\cal H}_{d}\right)\,,\]
where all the \(c\) coefficients are taken to be roughly \({\cal O}(1)\) in size and where \(i,j=1,2,3\) label the three SM generations.
Now, upon spontaneous SUSY breaking in the hidden sector, we have \(\langle\Sigma\rangle=\Lambda^{2}\theta^{2}\). If we plug this VEV into the coupling to the MSSM matter, the \(\int d^{4}\theta\) is forced to integrate the Grassmann
Figure 8: The schematic structure of the Feynman rules of the MSSM component fields, or indeed a generic renormalizable SUSY gauge-Higgs theory.
coordinates in \(\langle\bar{\Sigma}\Sigma\rangle\) and not the MSSM superfields. Therefore the result can only be a potential for the MSSM scalars,
\[\mathcal{L}_{\rm matter-hid}\underset{\Sigma\rightarrow\langle\Sigma\rangle}{\longrightarrow}\frac{\Lambda^{4}}{M_{Pl}^{2}}(c_{ij}^{Q}\overline{\tilde{q}}_{i}\tilde{q}_{j}+c_{ij}^{U}\overline{\tilde{u}}_{i}^{c}\tilde{u}_{j}^{c}+c_{ij}^{D}\overline{\tilde{d}}_{i}^{c}\tilde{d}_{j}^{c}+c_{ij}^{L}\overline{\tilde{l}}_{i}\tilde{l}_{j}+c_{ij}^{E}\overline{\tilde{e}}_{i}^{c}\tilde{e}_{j}^{c}+c_{u}\overline{H}_{u}H_{u}+c_{d}\overline{H}_{d}H_{d}). \tag{85}\]
That is, we get a set of mass terms for all the scalars, roughly
\[m_{\rm scalar}^{2}\sim\left(\frac{\Lambda^{2}}{M_{Pl}}\right)^{2}\gtrsim v_{ weak}^{2}, \tag{86}\]
but not for their superpartner fermions. From the MSSM viewpoint this is effectively soft SUSY breaking roughly at the scale \(\sim\Lambda^{2}/M_{Pl}\). As we anticipated earlier by qualitative reasoning, we will take this scale of soft SUSY breaking to be (very roughly) comparable to the weak scale \(v_{weak}\). Yes, order \(1-10\) factors matter hugely to experimentalists searching for such squarks and sleptons (and extra Higgs scalars) but we are not yet ready to confront experiment at this point, so let us go with this thumbnail sketch for now.
Solving for the scale of fundamental spontaneous SUSY breaking in the hidden sector,
\[\Lambda\gtrsim\sqrt{M_{Pl}v_{weak}}\sim 10^{11}\,\text{GeV}. \tag{87}\]
Because it is roughly the geometric mean of the Planck and weak scales it is known as the "intermediate scale". It is consistent to keep this big VEV of \(\Sigma\) while neglecting its fluctuations in their impact on the MSSM because the fluctuation fields \(\sigma,\psi_{\Sigma}\) have Planck-suppressed couplings which are tiny at accessible energies. By comparison, the intermediate scale VEV is so large that even after Planck-suppression its impact on the MSSM is comparable to the weak scale.
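As a quick numerical sketch of this bookkeeping (a few lines of Python with \(M_{Pl}\sim 10^{18}\) GeV and a handful of sample hidden-sector scales \(\Lambda\) as illustrative inputs, not fitted values), one can check how the soft scale \(\Lambda^{2}/M_{Pl}\) tracks the choice of intermediate scale:

```python
import math

M_Pl = 1e18      # GeV, rough 4D Planck scale used in the text
v_weak = 250.0   # GeV, rough weak scale

# Eq. (87): the intermediate scale needed for weak-scale soft masses
print(f"sqrt(M_Pl * v_weak) ~ {math.sqrt(M_Pl*v_weak):.1e} GeV")

# Eq. (86): soft SUSY-breaking scale for a few sample hidden-sector scales
for Lam in (5e10, 1e11, 3e11):
    print(f"Lambda = {Lam:.0e} GeV  ->  m_soft ~ Lambda^2/M_Pl ~ {Lam**2/M_Pl:.1e} GeV")
```

With \(\Lambda\sim 10^{11}\) GeV one lands on the \(\sim 10\) TeV soft scale that will be used in the RG discussion below.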
Similarly, we can also consider a non-renormalizable hidden sector coupling to the gauge superfields,
\[\mathcal{L}_{\rm gauge-hid} =\int d^{2}\theta\,\frac{\Sigma}{M_{Pl}}c\,\mathcal{W}^{a}_{\alpha}\mathcal{W}^{a\,\alpha} \tag{88}\] \[\underset{\Sigma\rightarrow\langle\Sigma\rangle}{\longrightarrow}\frac{c\Lambda^{2}}{M_{Pl}}\lambda^{a}_{\alpha}\lambda^{a\,\alpha},\]
where there is an (implicit) independent \(c\) for each MSSM gauge group. The MSSM sees this coupling as effectively a soft SUSY-breaking gaugino mass of comparable size to the scalar soft masses,
\[m_{\lambda}\sim\frac{\Lambda^{2}}{M_{Pl}}. \tag{89}\]
Putting it all together, the MSSM is effectively softly broken by the couplings to the hidden sector, at the rough scale
\[m_{\rm superpartners}\sim\frac{\Lambda^{2}}{M_{Pl}}\gtrsim v_{weak}. \tag{90}\]
With this simple form of soft SUSY breaking only affecting superpartner masses, the Feynman diagrammatics above will look schematically the same as the SUSY limit, but in detail the propagators for the different component particles will reflect the SUSY-breaking masses.
## 13 The "\(\mu\)-Problem" and the Giudice-Masiero Mechanism
There is only one explicit mass parameter (not arising from dimensional transmutation) in the MSSM prior to soft SUSY breaking, namely the \(\mu\) parameter:
\[\int\,d^{2}\theta\,\mu{\cal H}_{u}{\cal H}_{d}=\mu\tilde{H}_{u}\tilde{H}_{d}+\cdots \tag{91}\]
One can think of it as primarily giving rise to a Dirac mass, \(\mu\), for Higgsinos. In the SUSY limit it also gives the same mass to the Higgs scalars of course, but after SUSY breaking we have seen that there are other significant corrections to the latter. There are then two parametrically distinct mass scales, \(\mu\) and \(\Lambda^{2}/M_{Pl}\), the first supersymmetric in form and the second the result of SUSY breaking. If \(\mu\ll\Lambda^{2}/M_{Pl}\sim v_{weak}\) then the Higgsinos would be much lighter than the weak scale and would have already been discovered given their electric and weak charges. And if \(\mu\gg\Lambda^{2}/M_{Pl}\) then the entire \({\cal H}_{u,d}\) multiplets would have supersymmetric mass \(\mu\) far above the weak scale so that they could not execute EWSB. Therefore, we need \(\mu\sim\Lambda^{2}/M_{Pl}\). However, given the orders of magnitude hierarchy between the Planck and weak scales, this is a striking and unexplained coincidence, an unsatisfactory feature in our push to understand all large hierarchies where seen _and_ absences of large hierarchies where not seen. It is called the "\(\mu\) problem". We turn now to its simplest and ultimately quite satisfying solution.
First notice that \(\mu=0\) is robust in that it is protected by a new global symmetry in that limit, "Peccei-Quinn" (PQ) symmetry,
\[U(1)_{\rm PQ}: {\cal H}_{u}\longrightarrow e^{i\alpha}{\cal H}_{u} \tag{92}\] \[{\cal H}_{d}\longrightarrow e^{i\alpha}{\cal H}_{d}.\]
Instead, we introduce new couplings to the hidden sector that are peculiar to the two Higgs supermultiplets because they are in conjugate gauge representations:
\[{\cal L}_{\rm Higgs-hid} =\int\,d^{4}\theta\,\left(c^{\prime}\,\frac{\overline{\Sigma}}{M_{Pl}}{\cal H}_{u}{\cal H}_{d}+c^{\prime\prime}\,\frac{\overline{\Sigma}\Sigma}{M_{Pl}^{2}}{\cal H}_{u}{\cal H}_{d}+h.c.\right) \tag{93}\] \[\underset{\Sigma\rightarrow\langle\Sigma\rangle}{\longrightarrow}\left(\frac{c^{\prime}\,\Lambda^{2}}{M_{Pl}}\int\,d^{2}\theta\,{\cal H}_{u}{\cal H}_{d}+h.c.\right)+\left(\frac{c^{\prime\prime}\Lambda^{4}}{M_{Pl}^{2}}\,H_{u}H_{d}+h.c.\right).\]
We see that the \(c^{\prime}\sim{\cal O}(1)\) term gives us an effective \(\mu_{eff}=c^{\prime}\Lambda^{2}/M_{Pl}\) term after SUSY breaking in the hidden sector, and solves the \(\mu\) problem by realizing this apparent SUSY-preserving mass as fundamentally a SUSY breaking effect. This is the Giudice-Masiero mechanism [22]. It can be viewed as preserving Peccei-Quinn symmetry if we assign \(\Sigma\) charge 2 under it. The second term is called an effective soft-SUSY-breaking "\(B\mu\) term" \(c^{\prime\prime}\Lambda^{4}/M_{Pl}^{2}\), and can be useful in model-building EWSB as we will see later. But clearly this would violate Peccei-Quinn symmetry. The easiest way to allow both terms is to simply assume that the hidden sector and its couplings violate Peccei-Quinn explicitly, but the MSSM in isolation does not. We will make this more plausible, and radiatively stable, from an extra-dimensional perspective later.
## 14 SUSY Phenomenology
### Importance of the \(125\) GeV Higgs Sector
Here, I want to give a schematic sense of the importance of the Higgs mass and SUSY-breaking radiative corrections, ignoring some of the subtleties related to the 2-Higgs-doublet nature of the MSSM. We will do a more refined job once we have sharpened a model of SUSY-breaking \(c\) parameters.
In the non-supersymmetric SM, the physical Higgs boson mass is given at tree-level by
\[m_{h}^{2}=\lambda_{h}v_{weak}^{2}, \tag{94}\]
where \(\lambda_{h}\) is the Higgs-doublet quartic self-coupling. In the (softly-broken) MSSM, you can see from the scalar potential contributions that the only self-couplings of the (two) Higgs doublets arise in the D-term potential, and are therefore \(\sim g_{EW}^{2}\). In detail this implies a tree-level bound, \(m_{h}\leq m_{Z}\), regardless of how EWSB is shared between the two Higgs doublet VEVs. In terms of the mass-squareds (more relevant for bosons) we see that the observed Higgs strongly violates this bound:
\[m_{h}^{2}=(125\,{\rm GeV})^{2}\sim 2m_{Z}^{2}! \tag{95}\]
This implies that if SUSY is playing a role near the weak scale, Higgs radiative corrections are important. The largest such radiative corrections are expected from the largest Higgs coupling, the Yukawa coupling to the top quark, as illustrated in figure 9.
We see that the simplest fit to \(m_{h}=125\) GeV is to have a rather heavy stop, \(m_{\tilde{t}}\sim 10\) TeV, "just" out of LHC reach!
### Direct LHC Searches
As you know, all searches for superpartners to date have come up empty, so these translate into bounds on superpartner masses/couplings [23]. Very roughly, a variety of LHC searches have resulted in bounds,
\[m_{\rm gluino}\gtrsim 2\,{\rm TeV},\,\,\,m_{\rm Wino}\gtrsim{\rm TeV}\] \[m_{\rm squarks}\gtrsim{\rm TeV},\,\,m_{\rm sleptons}\gtrsim 100(s)\,{\rm GeV}. \tag{96}\]
Figure 9: Top/stop loops can have a significant impact on Higgs quartic self-coupling, and hence physical Higgs scalar mass, if the stop-top mass splitting is significantly larger than the weak scale.
A typical process that the LHC searches for is given in figure 10. Here, two gluonic partons from within the colliding protons are pair-creating a pair of gluinos. Each gluino then promptly decays into a (anti-)quark (jet) and squark, and the squark in turn decays into a quark (jet) and "neutralino". We are assuming that the lightest superpartner, stable by \(R\)-parity conservation, is electrically and color-neutral, hence "neutralino". I have drawn it as one of the neutral electroweak gauginos (though it may be a more complicated superposition of superpartner gauge-eigenstates). Such a stable neutralino is an example of a WIMP (Weakly Interacting Massive Particle, implicitly stable). Because of its neutrality (and color-neutrality), a WIMP behaves roughly like a heavy neutrino, escaping the LHC detectors, only "detected" as missing (or imbalance in the) energy-momentum of the event.
### WIMP Dark Matter Direct Detection
A stable WIMP is, broadly speaking, a good candidate for the particles making up the Dark Matter of the universe [24]. If that is the case, then beyond trying to pair-produce WIMPs at colliders as sketched above, we can also try to detect WIMPs in our galaxy sweeping through the Earth. For example, the WIMPs can scatter off nuclei in underground detectors, the details depending on the specifics of their weak interactions. The likelihood of detection is amplified by the potentially large flux of WIMPs and the large volume of the detector target, as compared to pair-production and detection at particle colliders. Again, as you know, despite heroic efforts WIMP dark matter particles have not been directly detected to date. But there still remains room for discovery.
## 15 Flavor and CP Problems of SUSY (and BSM more generally)
In the experimental search for new BSM physics there are two modes. There is the mode of simply producing and detecting the new particles "on-shell" (or having them be created by the early universe and hit our detectors). But in this section we consider the other mode of processes involving SM particles mediated by virtual "off-shell" exchanges of the new particles, typically
Figure 10: Gluino pair production at a hadron collider, initiated by gluons, decaying into quarks and neutralinos. If the neutralinos are stable they would be a good candidate for dark matter particles.
because the energy of the process is well below the BSM masses as the price for higher statistics and hence precision. Our greatest ability to discriminate BSM effects is within processes in which the competing SM effects are relatively small for SM-structural reasons. Flavor-violating and CP-violating processes at low energies (by high-energy collider standards) are two of the most powerful categories in this regard, because in the SM they are strongly constrained by the CKM flavor/weak-interaction structure and the resulting GIM mechanism.
### SM FCNCs and CP-Violation
The SM GIM mechanism strongly constrains Flavor-Changing Neutral Currents (FCNCs). The classic "neutral current" is virtual \(Z^{0}\)-exchange, illustrated in figure 11 between quarks, the neutral version of the "charged current" exchange of \(W^{\pm}\) in the standard Fermi approximation to weak interactions. For charged currents the \(W\) has coupling \(gV^{CKM}_{ij}\) to any pair of quark and antiquark flavors \(i\), \(j\) with net charge 1, where \(V^{CKM}\) is the CKM matrix emerging from the Yukawa couplings. By contrast, the \(Z^{0}\) coupling is necessarily diagonal in the species of the pair of quarks it couples to. This is a consequence of the minimal gauge coupling from covariant derivatives in the quark kinetic terms. Therefore flavor \(i\) is both entering and leaving the effective neutral current exchange, and the same is true of flavor \(j\). (Or equivalently, quark flavors \(j\) and antiquark flavor \(j\) are being produced.) So this is a flavor-conserving neutral current, not a flavor-violating one. A less trivial
Figure 12: Tree-level neutral current processes in the standard model are flavor-preserving, a striking consequence of its minimal structure.
"neutral current" arises from Higgs \(h^{0}\) exchange, illustrated between quarks in figure 12. Even though the Yukawa coupling matrix is not diagonal in the gauge basis, the quark mass matrix is proportional to it, \(m_{ij}=Y_{ij}v_{weak}\), so that the Yukawa couplings are flavor-diagonal in the mass eigenbasis. So once again, this neutral current is flavor-conserving.
We have to go to 1-loop to produce FCNCs in the SM, where we can create an effective neutral current by exchanging a \(W^{+}W^{-}\) pair, and taking advantage of the flavor-changing \(W\) couplings. This is illustrated in the box-diagram of figure 13 for (virtual) \(W\) exchanges between down-type quarks of various generations, at low energies (well below \(m_{W}\)). I have just given the dependence on the CKM matrix elements from the four vertices, while the loop integral itself is given by \(f\) at low energies. I will make my point by pretending that all the up-type quarks are light, \(m_{j},m_{r}\ll m_{W}\). Then the loop-level FCNC in the figure simplifies to \(\approx f(0,0)\,\delta_{ik}\delta_{\ell s}+\) small, by the unitarity of the CKM matrix in the SM. That is, even at loop level, the individual contributions to FCNCs cancel to a large extent!14 The small deviations from this limit predict small but observable FCNCs which closely match observation. Any contributions from BSM must be small enough to hide in the current experimental and calculational error bars!
Footnote 14: Obviously the top quark violates the assumption of small quark mass and requires special treatment, which I will not go into. But suffice it to say, it enters with rather small mixing angles \(V_{ts}^{CKM},V_{td}^{CKM}\), so that its contributions to FCNCs are also relatively small.
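A tiny numerical illustration of this GIM cancellation may be useful (a sketch only: a random unitary matrix stands in for the actual CKM matrix and the loop function \(f\) is a toy, so none of the numbers are physical):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 3x3 unitary stand-in for the CKM matrix (QR of a complex Gaussian)
V = np.linalg.qr(rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3)))[0]

m_up = np.array([0.002, 1.3, 173.0])   # toy up-type quark masses in GeV

def box_amplitude(i, k, f):
    """CKM structure of the W-W box for external down-type flavors i -> k."""
    return sum(np.conj(V[j, i])*V[j, k]*np.conj(V[r, i])*V[r, k]*f(m_up[j], m_up[r])
               for j in range(3) for r in range(3))

# Flavor-changing (i != k) amplitude with a mass-independent loop function:
# unitarity forces it to vanish (up to rounding) -- the GIM cancellation.
print(abs(box_amplitude(0, 1, lambda mj, mr: 1.0)))
# A mass-dependent loop function leaves only a small, mass-suppressed residual.
print(abs(box_amplitude(0, 1, lambda mj, mr: (mj**2 + mr**2)/173.0**2)))
```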
Now let us turn to CP violation, which can appear in conjunction with flavor-changing processes or within purely flavor-conserving processes. In QFT, CP-violation arises from complex phases in couplings that cannot be eliminated by field redefinitions. In the renormalizable SM, with its three generations (and neglecting the tiny neutrino masses for simplicity), at the perturbative level there is just one such phase in \(V^{CKM}\). This gives rise to a highly restrictive form for CP-violating effects in weak-interaction processes within hadronic flavor physics, and there is a beautiful body of experimental evidence corroborating this single-source of CP-violation. Non-perturbatively and subtly, there is another complex phase that is allowed, the "\(\Theta\) angle", which would give rise to CP-violation in strong-interaction hadron physics (without involving weak-interactions at all), most famously predicting a neutron electric dipole moment. But no such strong-interaction CP-violation has been observed, giving rise to the stringent limit \(\Theta<10^{-10}\). Other than \(\Theta\), Nature seems to have
Figure 13: FCNCs appear at one loop within the standard model by sewing together a pair of tree-level flavor-changing charged currents. Even here, there are significant suppressions due to the unitarity of the CKM matrix.
turned on every coupling allowed by the renormalizable structure of the SM at an observable level, so the nearly-vanishing \(\Theta\) poses a puzzle, the "Strong CP Problem". Famously, there is a plausible BSM resolution to this puzzle by adding an axion to the SM, subject to the brilliant Peccei-Quinn mechanism. For a review see [25]. We will assume that this addition to the SM has been made, so that only the CKM phase remains as the source of CP violation in the SM+axion theory.
The SM (+ axion) structure that so strongly suppresses FCNCs and CP-violation, in agreement with current precision low-energy flavor and CP tests, makes these very sensitive channels in which to search for BSM physics. Let us see how these channels are sensitive to SUSY.
### Superpartner-mediated FCNCs
To exemplify how SUSY can introduce new FCNC contributions we consider new box diagrams, analogous to those of the SM but with the \(W\)s replaced by gauginos. Gluino exchange comes with the strongest coupling, so we focus on this in figure 14.15 In the figure the quark flavors are in their mass basis, and I am labelling the squark flavors in this same basis, so that gluinos connect quarks and squarks of the same flavor. However, I have indicated the soft SUSY-breaking squark mass-squareds which are not necessarily diagonal in this flavor basis because the \(c_{ij}\) are a priori general matrices in flavor-space, as introduced in section 12. If we were to diagonalize these soft SUSY-breaking squark mass\({}^{2}\) matrices, we would find that the quark mass basis was not the same as the squark mass basis. This is in analogy to how the up-type quark mass basis is different from the down-type mass basis in the SM, so there is the nontrivial CKM generation-changing matrix for up-down couplings to \(W\) after EWSB. Similarly the gluino will have generation-changing couplings between quark and squark mass eigenstates after SUSY breaking. Unlike (most of) the up-type quarks in figure 13, all the squarks in figure 14 are comparable in mass to the gluino, and therefore we expect significant FCNCs for generic \(c_{ij}\sim{\cal O}(1)\).
Figure 14: The difference between the mass eigenbases of quarks and squarks after generic SUSY breaking leads to FCNCs mediated by gluinos.
The effective flavor-violating four-quark interaction arising from integrating out the superpartners is however suppressed by \(\sim 1/m_{superpartner}^{2}\), by dimensional analysis. Therefore for generic \(c_{ij}\) we can suppress BSM FCNCs by taking large enough superpartner masses. Given generic \(c_{ij}\) and current constraints, this leads to bounds of roughly \(m_{squarks}>1000\)(s) of TeV! [26]
### Superpartner-mediated CP-violation
We can illustrate non-flavor-violating CP-violation by the CP-violating electron electric dipole moment (EDM). We can orient ourselves by first considering the classic 1-loop QED contribution to the electron _magnetic_ dipole moment of figure 15. QED does not give rise to an electron EDM because it does not contain CP-violating phases in the renormalizable theory. But we can now consider a superpartner analog of this loop, figure 16. It will give a BSM contribution to the electron magnetic dipole moment. Because it is suppressed by one power of the superpartner mass, it can easily be much smaller than the QED contribution and therefore hard to detect. But if there are irreducible CP-violating phases in the gaugino or slepton masses/couplings then the diagram will also contribute to an electron EDM. For \(\mathcal{O}(1)\) phases, the non-observation of such an EDM then requires \(m_{superpartner}\gg\) TeV [27].
Figure 16: A supersymmetric-loop analog of the previous diagram which can generate a distinctive CP-violating electric-dipole moment if there are CP-violating phases in the superpartner masses/couplings.
Figure 15: The classic diagram generating the anomalous magnetic moment of the electron in QED.
### Moral for BSM from flavor and CP considerations
We have reviewed the key to how the SM (+ axion) strongly suppresses FCNCs and CP-violation, namely that the only sources of flavor and CP violation appear in the Yukawa coupling matrices, \(Y^{u}_{ij},Y^{d}_{ij},Y^{e}_{ij}\), and that these are automatically diagonal and real in the quark/lepton mass eigenbases (neglecting neutrino masses). We have also seen that SUSY with generic \(c_{ij}\)'s (and generic BSM more generally) at collider-accessible scales is typically ruled out by the powerful flavor and CP experimental constraints we already have. This strongly suggests that:
_A viable SUSY (or BSM) theory should retain the feature of having the Yukawa couplings as the sole source of flavor- and CP-violation, and that there should be some theoretical structure that plausibly enforces this._16
Footnote 16: This should at least be the case to good approximation.
We next describe an attractive structure of this type.17
Footnote 17: The ansatz of using copies of the Yukawa matrices in multiple places in a Lagrangian to violate flavor/CP in a manner that extends the GIM mechanism is called Minimal Flavor Violation (MFV), reviewed in [28], but here we are seeking a dynamical explanation of such an extension of GIM rather than an ansatz.
## 16 Combining "bosonic" (\(x_{5}\)) and "fermionic" (\(\theta,\bar{\theta}\)) extra dimensions
I want to first focus on a toy model, but it is an almost realistic one. It is a synthesis of supersymmetric and extra-dimensional dynamics, again of the type that might plausibly emerge from something like superstring quantum gravity below the Planck scale. It is depicted in figure 17. It is critical that the extra dimension discussed from here on is _not the same_ as the extra dimension
Figure 17: The higher-dimensional framework for a toy model of “gaugino-mediated SUSY breaking”, with boundary-localized MSSM chiral superfields sequestered from a boundary-localized hidden sector. The gauge superfields propagate in the 5D bulk and mediate SUSY breaking in flavor-independent manner that solves the SUSY Flavor Problem.
discussed in section 6 as being responsible for the generation of SM Yukawa (fermion mass and CKM) hierarchies. For modularity, we can think of these as two different extra dimensions. I will assume here that the Yukawa coupling hierarchies have been achieved by some mechanism, perhaps that of subsection 6.2, but make no further reference to this. I will continue to refer to the relevant extra dimension as the "5th" (and not the "6th"). In particular, in the current context and higher-dimensional spacetime, the MSSM chiral superfields, containing quarks, leptons, Higgses and their superpartners, are all 4D fields localized on the left-hand boundary of 5D. We call this the "visible boundary". The hidden sector responsible for spontaneous SUSY breaking, which minimally is just the \(\Sigma\) superfield model introduced earlier, is localized to 4D on the right-hand boundary. We call this the "hidden boundary". The MSSM gauge supermultiplets are however elevated to 5D superfields propagating in the "bulk" of the higher-dimensional spacetime. This model is called "gaugino-mediated SUSY breaking" (\(\tilde{\rm g}\)mSB) [2] for reasons we will see below.
The visible boundary dynamics in isolation is described by the 4D Kahler potential and superpotential for the MSSM chiral superfields detailed in section 11, while the hidden boundary dynamics in isolation is described by the 4D Kahler potential and superpotential described in section 10. But the boundaries are not isolated because they can both interact with the gauge superfields. The leading such couplings to the charged MSSM matter at \(x_{5}=0\) are just given by their supersymmetric gauge couplings as described in Eq. (75), and unpacked in subsection 11.4. Since the hidden field \(\Sigma\) is a gauge singlet, it can only have couplings at \(x_{5}=L\) to supersymmetric gauge field strengths, such as the \(c/M_{Pl}\) of Eq. (88). We can focus on the 4D EFT below the KK scale \(1/L\). 5D gauge invariance forces the gauge fields to have \(m_{5}=0\), so that their 0-modes are \(x_{5}-\)independent. Therefore there are no extra exponentials \(e^{\pm m_{5}L}\) incurred in deriving the 4D EFT. It must just have the form given by eqs. (79) and (37).
But by the locality of all couplings in relativistic theories (no instantaneous action at a distance) we cannot couple the visible boundary fields (MSSM matter chiral superfields) directly to the hidden boundary superfield \(\Sigma\). Therefore the couplings \(c^{Q}_{ij},c^{U}_{ij},c^{D}_{ij},c^{L}_{ij},c^{E}_{ij},c_{u},c_{d},c^{\prime},c^{\prime\prime}\) must all vanish! You can check that the only remaining couplings that violate flavor symmetries (have non-trivial flavor dependence) are the supersymmetric Yukawa couplings in the visible superpotential, exactly as the moral of subsection 15.4 prescribed. This 5D geographical separation of flavored superfields from the SUSY-breaking dynamics is "sequestering" [13] and is the key here to solving the "supersymmetric flavor problem" of excessive FCNCs arising via generic flavor-violating \(c^{Q,U,D,L,E}_{ij}\).
Let us now turn to CP. We will impose this as a symmetry of the bulk of 5D as well as the hidden boundary. That is, we assume that CP is only broken on the visible boundary.18 This means that CP-violating phases can appear in the MSSM superpotential, in particular in the MSSM Yukawa couplings of quark and lepton superfields. We will continue to impose Peccei-Quinn symmetry so that \(\mu=0\), therefore the only CP-violation must be restricted to the irreducible single CKM phase of the quark Yukawa couplings, exactly as in the non-SUSY SM, and in accordance with the moral drawn in subsection 15.4. This solves the "supersymmetric CP problem" of excessive CP violation.
But in sequestering have we "thrown out the baby with the bathwater"? After all, it appears that the price for having \(c_{ij}=0\) for the squarks and sleptons is that they will be degenerate with the quarks and leptons, and therefore we should already have seen them experimentally. Fortunately, this is only a tree-level conclusion which is strongly radiatively corrected. The point is that quantum loops are not local and can straddle the extra dimension and connect SUSY breaking from the hidden boundary to squarks and sleptons on the visible boundary. By contrast UV divergences are always local and therefore such loop effects should be finite and calculable. Furthermore, locality of UV divergences means that sequestering itself is robust, in that all divergences and counterterms will respect this "geographical" structure.
To work out the leading radiative corrections, we need to have a sense of the size of the extra dimension, \(L\). We turn to this next.
### SUSY Grand Unification and the Size of the Fifth Dimension
We can recompute the running of the SM gauge couplings in this MSSM 4D EFT. It differs from figure 2 in that we must include the superpartners (and two Higgs doublets versus the SM's single Higgs doublet) in the running from the \(\sim\mathcal{O}(1-10)\) TeV scales we expect (hope) to find them. This is shown at 1-loop approximation in figure 18. Famously, this is a dramatic improvement over the non-supersymmetric SM in providing circumstantial evidence in favor of grand unification, given by the much closer meeting of the couplings at \(10^{16}\) GeV. This is intriguingly close to the Planck scale, hinting at an even grander unification of some sort with quantum gravity. But to retain this attractive interpretation we must trust the 4D renormalization group all the way to this high unification scale. This implies that the KK scale, above which the 4D effective description breaks down, must be even larger:
\[1/L\geq M_{GUT}\sim 10^{16}\text{GeV}. \tag{97}\]
For concreteness we take \(1/L\sim 10^{16}\) GeV. This is a very small extra dimension indeed, and yet with big consequences.
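The one-loop check behind figure 18 is easy to reproduce. Here is a minimal sketch (crudely putting all superpartners at \(m_{Z}\) and using rough GUT-normalized input couplings of my choosing, so only the qualitative meeting of the couplings should be taken seriously):

```python
import math

# MSSM one-loop coefficients in d(1/alpha_i)/dln(mu) = -b_i/(2*pi)
b = {1: 33/5, 2: 1.0, 3: -3.0}
alpha_inv_mZ = {1: 59.0, 2: 29.6, 3: 8.5}   # rough values at m_Z (GUT-normalized U(1))
mZ = 91.2                                    # GeV

def alpha_inv(i, mu):
    return alpha_inv_mZ[i] - b[i]/(2*math.pi)*math.log(mu/mZ)

def meeting_scale(i, j):
    """Scale at which alpha_i = alpha_j at one loop."""
    t = 2*math.pi*(alpha_inv_mZ[i] - alpha_inv_mZ[j])/(b[i] - b[j])
    return mZ*math.exp(t)

for i, j in [(1, 2), (1, 3), (2, 3)]:
    mu = meeting_scale(i, j)
    print(f"alpha_{i} meets alpha_{j} at ~ {mu:.1e} GeV, 1/alpha ~ {alpha_inv(i, mu):.0f}")
```

All three pairwise meetings land within a factor of a few of \(10^{16}\) GeV, which is the sense in which figure 18 suggests unification near that scale.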
Figure 18: Running of (MS)SM gauge couplings at one loop, with superpartner masses \(\sim\mathcal{O}(10)\) TeV, strongly suggesting grand unification not far from the Planck scale.
\[\frac{d}{d\ln\mu}\frac{1}{\alpha}\approx\frac{-b}{2\pi}\]

\[\frac{d}{d\ln\mu}\,m_{\lambda}\approx\frac{bg^{2}}{8\pi^{2}}\,m_{\lambda}\]

\[\frac{d}{d\ln\mu}\,m^{2}_{\text{squarks, sleptons}}\approx\frac{-1}{2\pi^{2}}\sum_{\text{gauge groups}}C_{a}g^{2}m_{\lambda}^{2}\]

\[\frac{d}{d\ln\mu}\,m_{H_{u}}^{2}\approx\frac{-1}{2\pi^{2}}\sum_{\text{EW gauge groups}}C_{a}g^{2}m_{\lambda}^{2}+\frac{3}{4\pi^{2}}y_{t}^{2}m_{\tilde{t}}^{2}\]
### 4D Renormalization Group (RG) evolution below the KK scale
The leading approximation we will use is to do tree-level matching at the KK scale \(1/L\), and then do one-loop running of couplings and mass parameters below that scale. This is a consistent approach since the loop suppression is significantly compensated by the large logarithms of the very large hierarchy from \(1/L\) down to the MSSM soft SUSY breaking scale \(\Lambda^{2}/M_{Pl}\sim 10\) TeV. The good news is then that all the one-loop computation can be done within the 4D EFT, avoiding all 5D SUSY and 5D diagrammatic complexity. For readers interested in explicitly framing this model within 5D SUSY, the short-cut is to recycle our 4D superspace formalism to 5D by the methods of Ref. [29].
Now, we have already done the tree-level matching above in working out the tree 4D EFT from the 5D set-up. For the MSSM it is just given by having the \(c\)'s associated with visible chiral
Figure 19: The one-loop running of couplings and SUSY breaking masses in the 4D EFT, including the leading mediation of SUSY breaking to the MSSM scalars.
Figure 20: While gaugino-mediation of SUSY breaking to MSSM scalars is efficiently calculated in 4D EFT, fundamentally the gauginos are traversing the extra dimension to accomplish this, as depicted here.
superfields all vanish. There are no renormalizable interactions in the hidden sector to resum via the RG, so it functions in the same way we reviewed in section 10, and as far as the MSSM is concerned we just need the hidden VEV, \(\langle\Sigma\rangle=\Lambda^{2}\theta^{2}\). We thereby find the UV boundary conditions at \(1/L\) for the RG analysis of the MSSM:
\[m_{\rm scalar}^{2}\left(\frac{1}{L}\sim 10^{16}\,{\rm GeV}\right)=0,\ m_{ \lambda}\left(\frac{1}{L}\right)=\frac{c\Lambda^{2}}{M_{\rm Pl}}g^{2}. \tag{98}\]
That is, the gauginos have a tree-level mass via their direct coupling to the hidden sector in 5D (I have switched to canonically normalized gauginos, explaining the extra \(g^{2}\) in Eq. (98)), while all MSSM scalars have vanishing mass\({}^{2}\) parameters at this high scale. We have the coupled 1-loop RG equations shown in figure 19.
Footnote 19: There is a simple enough SUSY reason for why this had to happen, but I will not go into that.
The \(b\)'s are the one-loop gauge coupling \(\beta\)-function coefficients for the gauge groups, differing from the SM equivalents by adding the contributions of the scalar and gaugino loops. It turns out that the same coefficient appears in the gaugino mass running.19 The coefficients \(C_{a}\) are the quadratic Casimir invariants for the gauge representations of the different scalars and different gauge groups. I have neglected all Yukawa couplings except for the top's, \(y_{t}\), which is comparable in strength to the gauge couplings. It therefore appears in the \(H_{u}\) mass RG. In principle it should also appear in the stop \(\tilde{t}\) mass RG, but here I will neglect it relative to the QCD coupling. This is not quite fair since \(y_{t}\) and \(g_{QCD}\) are not that different, but it is not a gross mischaracterization.
Furthermore, since we only want the spirit of this RG analysis without trying to do a "professional job", I will make the further "approximation" of dropping all the running of the purely SM couplings, namely gauge and Yukawa coupling running. That is, I am dropping the gauge coupling running in figure 19, and also the gaugino mass running which shares the same RG coefficient \(b\). In other words, we are only keeping a crude version of the scalar mass\({}^{2}\) running because of its _qualitative_ importance, without which the scalar masses would be degenerate with their fermion partners. All the other running is just providing fine detail.
The fact that as far as the MSSM is concerned, the gaugino masses are the "seeds" of SUSY breaking that feed radiatively into other fields, gives this basic mechanism its name, "Gaugino-mediated SUSY Breaking" (\(\tilde{g}\)mSB) [2]. While we have cleverly avoided the need to do 5D Feynman diagrams by using 4D EFT RG equations, we can picture what these equations are fundamentally capturing at the 5D level. This is illustrated in figure 20. The gauginos propagate across the extra dimension to communicate SUSY breaking from the hidden boundary to the visible boundary.
Let me confess the central sense in which this is a toy model. We imposed Peccei-Quinn symmetry which forbids the \(\mu\) parameter, and there is no Giudice-Masiero mechanism where \(\mu\) effectively arises from SUSY breaking because locality forbids Higgs couplings to the hidden sector. The Peccei-Quinn symmetry then guarantees that \(\mu=0\) even at loop level. Since Peccei-Quinn symmetry is the Higgsino chiral symmetry, its preservation implies the Higgsino is massless. This is, of course, completely excluded experimentally. I will describe the more realistic set-up in section 19, but for now it is convenient to proceed with the toy model.
### Parameter Space
Since we are ignoring the running of the gauge couplings and Yukawa couplings we can just fix these to their observed values (say evaluated for measurements at \(\sim m_{Z}\)). So these are not parameters we will want to vary, and we can drop considering them consciously as input parameters of the theory. Similarly we have tied the KK scale to the unification scale, fixed by data, so this too is not a parameter we will vary. We see then that the only variable input parameters are the fundamental SUSY-breaking scale \(\Lambda\) and the SUSY-breaking \(c\) couplings to the gauge superfields. To focus on the visible sector of the MSSM, we can trade \(\Lambda\) and the \(c\)'s of the three gauge groups for the three gaugino masses \(cg^{2}\Lambda^{2}/M_{Pl}\) (which do not run in our "approximation"). Therefore all other quantities of physical interest must be derivable from these three inputs.
In particular, the structure of EWSB must be predictable and finite, rather than an input as in the non-supersymmetric SM.
### Radiative EWSB
At tree-level, the MSSM scalars do not feel SUSY breaking, and their potential is just that given by exact SUSY, \(V=V_{D}+V_{F}\). Since we have no \(\mu\) term in this toy model, \(V_{F}\) arises from Yukawa couplings and connects Higgs scalars to sleptons or squarks. Let us explore the non-zero Higgs directions in scalar field space, with slepton and squark scalars kept at zero, so that \(V_{F}=0\) along these directions. The Higgs scalar potential is then given entirely by \(V_{D}\). Since \(H_{u}\) and \(H_{d}\) have conjugate electroweak quantum numbers it is easy to see that \(V_{D}\) vanishes along the "flat directions"20\(H_{u}=H_{d}\). Therefore a predictive EWSB vacuum, with specific \(\langle H_{u}\rangle\), \(\langle H_{d}\rangle\) can only be determined by an effective potential "sculpted out" at loop level. This feature is known as "radiative EWSB", as our toy model will now illustrate.
Footnote 20: Such inequivalent degenerate vacua flat directions in complex scalar field space are common in SUSY QFT, typically as “real-imaginary” partners of Goldstone symmetry-breaking directions (that is, equivalent degenerate vacua related by internal symmetries).
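The D-flatness claim is easy to verify directly. The following sketch (using sympy, with the standard hypercharge assignments \(Y(\mathcal{H}_{u})=+1/2\), \(Y(\mathcal{H}_{d})=-1/2\) as my conventions) evaluates \(V_{D}\) of Eq. (82) along the neutral Higgs directions:

```python
import sympy as sp

g, gp, vu, vd = sp.symbols('g g_prime v_u v_d', real=True, positive=True)

# Neutral-direction Higgs doublets: H_u = (0, v_u), H_d = (v_d, 0)
Hu = sp.Matrix([0, vu])
Hd = sp.Matrix([vd, 0])

# SU(2) generators T^a = sigma^a/2 and hypercharges Y(H_u) = +1/2, Y(H_d) = -1/2
sigmas = [sp.Matrix([[0, 1], [1, 0]]),
          sp.Matrix([[0, -sp.I], [sp.I, 0]]),
          sp.Matrix([[1, 0], [0, -1]])]

D_SU2 = [g*((Hu.H*(s/2)*Hu)[0] + (Hd.H*(s/2)*Hd)[0]) for s in sigmas]
D_Y = gp*(sp.Rational(1, 2)*(Hu.H*Hu)[0] - sp.Rational(1, 2)*(Hd.H*Hd)[0])

V_D = sp.simplify(sp.Rational(1, 2)*(sum(d**2 for d in D_SU2) + D_Y**2))
print(V_D)                 # proportional to (g^2 + g'^2)*(v_u^2 - v_d^2)^2
print(V_D.subs(vu, vd))    # 0: the flat direction H_u = H_d
```

Along \(v_{u}=v_{d}\) the potential indeed vanishes, which is why the tree-level vacuum is undetermined and the loop corrections below decide the fate of EWSB.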
We can easily solve the scalar RG equations for all but \(H_{u}\) (which has the extra top Yukawa corrections):
\[m_{\phi}^{2}({\rm IR})\approx\frac{C}{2\pi^{2}}g^{2}m_{\lambda}^{2}\ln\left( \frac{M_{\rm KK}}{m_{\rm soft}}\right). \tag{99}\]
Here we are just keeping the strongest gauge coupling under which the particular scalar \(\phi\) is charged, and the associated gaugino mass. We are taking \(m_{soft}\sim 10\) TeV, \(M_{KK}=1/L\sim 10^{16}\) GeV, and we are not very sensitive to modest deviations since it is mostly the logarithm of the large hierarchy that matters.
For the \(H_{u}\) RG, we need the stop mass\({}^{2}\) parameter for general RG scale \(\mu\), which is simply given by
\[m_{\tilde{t}}^{2}(\mu)\approx\frac{C_{3}}{2\pi^{2}}\,g_{3}^{2}\,m_{\rm gluino }^{2}\ln\left(\frac{M_{\rm KK}}{\mu}\right). \tag{100}\]
Plugging this into the \(H_{u}\) RG, we can solve to find
\[m_{H_{u}}^{2}\,({\rm IR}) \approx\frac{C_{2}}{2\pi^{2}}\,g_{2}^{2}\,m_{\rm wino}^{2}\,\ln\frac {M_{\rm KK}}{m_{\rm soft}}-\frac{3}{16\pi^{4}}\,y_{t}^{2}\,C_{3}\,g_{3}^{2}\,m_{ \rm gluino}^{2}\left(\ln\frac{M_{\rm KK}}{m_{\rm soft}}\right)^{2}\] \[\approx\frac{C_{2}}{2\pi^{2}}\,g_{2}^{2}\,m_{\rm wino}^{2}\,\ln \frac{M_{\rm KK}}{m_{\rm soft}}-\frac{3}{8\pi^{2}}\,y_{t}^{2}\,m_{\tilde{t}}^{ 2}\,({\rm IR})\,\ln\frac{M_{\rm KK}}{m_{\rm soft}}. \tag{101}\]
Let us make one more switch of input parameters, trading the input \(m_{gluino}\) for \(m_{\tilde{t}}\) using Eq. (100). The \(H_{u}\) mass\({}^{2}\) is then given by the last line in terms of the inputs \(m_{wino}\) and \(m_{\tilde{t}}\). We see that other than \(H_{u}\) all other scalars are predicted to be non-tachyonic, that is they will not condense. In particular, in our toy model \(m_{H_{d}}^{2}\,(IR)>0\), so \(\langle H_{d}\rangle=0\), and therefore none of the down-type quarks and charged leptons will get EWSB masses! This is the other main unrealistic feature of the toy model. Again it will be corrected in subsection 19.1. But \(H_{u}\) has radiative mass\({}^{2}\) corrections of both signs, positive from EW gaugino loops and negative from the top Yukawa coupling. Therefore it will be tachyonic in a large part of parameter space, condensing and breaking EW symmetry, and giving masses to the \(W,Z\) and up-type quarks. Therefore in our toy model, \(H_{d}\) is an entirely massive EW multiplet of scalars, while \(H_{u}\) is the closest thing to the SM Higgs doublet in that it is responsible for EWSB. So, three of its components are eaten by the \(W\) and \(Z\) and the remaining one can be identified with the observed Higgs boson at 125 GeV. Given the usual wine-bottle potential for a single Higgs doublet, this means that
\[m_{H_{u}}^{2}\,({\rm IR})=-2(125{\rm GeV})^{2}. \tag{102}\]
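To get a feel for the numbers, here is a rough evaluation of Eqs. (99)-(101) (the gaugino masses and couplings below are illustrative choices, not a fit; I take \(C_{2}=3/4\), \(C_{3}=4/3\) and \(\ln(M_{\rm KK}/m_{\rm soft})\approx 28\)):

```python
import math

log = math.log(1e16/1e4)            # ln(M_KK/m_soft) with m_soft ~ 10 TeV
g2sq, g3sq, yt = 0.42, 1.5, 1.0     # rough SU(2), SU(3) couplings squared, top Yukawa
C2, C3 = 3/4, 4/3                   # quadratic Casimirs (doublet, triplet)
m_wino, m_gluino = 1.5e3, 3.0e3     # sample gaugino masses in GeV

# Eq. (100): stop mass^2 generated by gluino loops
m_stop_sq = C3/(2*math.pi**2)*g3sq*m_gluino**2*log
# Eq. (101): H_u mass^2 = positive wino piece minus top-Yukawa piece
gauge_piece = C2/(2*math.pi**2)*g2sq*m_wino**2*log
top_piece = 3/(8*math.pi**2)*yt**2*m_stop_sq*log
m_Hu_sq = gauge_piece - top_piece

print(f"m_stop ~ {math.sqrt(m_stop_sq):.0f} GeV")
print(f"m_Hu^2 ~ {m_Hu_sq:.2e} GeV^2  (negative -> EWSB)")
# Either piece is huge compared to the target |m_Hu^2| ~ 2*(125 GeV)^2 of Eq. (102),
# so hitting that target requires a cancellation at roughly the level printed here:
print(f"tuning ~ 1 part in {top_piece/(2*125**2):.0f}")
```

With multi-TeV gauginos either term is of order \((5\,\text{TeV})^{2}\), so landing on Eq. (102) means a cancellation at the part-in-\(10^{3}\) level; that is the quantitative content of the next section.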
## 17 The (Little) Hierarchy Problem
We are finally able to understand in this case study the technical face of the (in)famous Hierarchy Problem. Given the lower bounds on colored superpartners in the multi-TeV range and the large logarithms in Eq. (101), obtaining the modest \(m_{H_{u}}^{2}\,({\rm IR})\) of Eq. (102) requires relatively fine cancellation of the two larger terms in Eq. (101). The heavier the superpartners, the finer the cancellation needed to achieve a Higgs mass parameter as small as Eq. (102). We can picture the situation as in figure 21, where we are studying the theory in the (effective) input parameter space of \(m_{\tilde{W}}\) and \(m_{\tilde{t}}\,(IR)\), where we shade in the region in which EWSB takes place with \(|m_{H_{u}}^{2}\,({\rm IR})|\leq 2(125{\rm GeV})^{2}\). If the superpartner masses were \({\cal O}(100)\) GeV you can see that this would not require very fine cancellations. But if superpartner masses are more like (multi-)TeV, then one must live in a very narrow strip of this parameter space in order to have EWSB \(\sim{\cal O}(100)\) GeV. If we had not turned on the LHC (or Tevatron for that matter) yet, given Eq. (101) and figure 21, we would gamble that superpartners were \({\cal O}(100)\) GeV. But so far, LHC searches up to \(\sim\) TeV have found no superpartners, and furthermore the simplest way to understand the Higgs quartic coupling, discussed in subsection 14.1, is to have heavy stops, \(m_{\tilde{t}}\sim{\cal O}(10)\) TeV. These direct and indirect arguments then suggest that if SUSY(-breaking) is relevant to EWSB we must live in the sliver of parameter space with multi-TeV superpartners. It just seems puzzling why nature would choose this finely tuned option, with a cancellation in Eq. (101) at the level of one part in \(10^{3-4}\), over superpartners \(\sim 100\) GeV.
This is the SUSY version of the Little Hierarchy Problem which afflicts any BSM mechanisms in which EWSB becomes a calculable phenomenon rather than an input. Now, you might take an
even more extreme view that SUSY is some vestige of superstring theory or quantum gravity near the Planck scale, but that it was also broken not far below that scale. In this case, superpartners would be extremely heavy, perhaps not much lighter than the Planck scale. But extrapolating figure 21 to such massive superpartners means that the parameter space in which EWSB \(\sim 100\) GeV is extremely squeezed: the fine cancellation is at the level of one part in \(\sim 10^{30}\)! This situation in which the non-SUSY SM is the valid EFT all the way close to the Planck scale would be incredibly puzzling, and it poses the _Big_ Hierarchy Problem. By comparison multi-TeV SUSY would seem more plausible, although we would like to know if there is some other small mechanism missing that would help resolve the Little Hierarchy Problem, or if that is agonizing unnecessarily over a relatively modest, perhaps random, cancellation.
In the (toy) model under study we have the gaugino masses as ultimately setting the scales of SUSY breaking in general and thereby radiative EWSB. One might therefore worry that the Big and Little Hierarchy Problems are specific to this model. But in any fundamental theory (known) _some_ scale plays the role of the gaugino masses here. For example: in Gauge-Higgs Unification where the Higgs incarnates as an extra-dimensional component of a gauge field the fundamental scale is given by the KK scale \(m_{KK}\), in Composite Higgs theories the fundamental scale is given by the compositeness scale \(\Lambda_{comp}\), in string theories in which SUSY is maximally broken the fundamental scale is played by the string scale \(m_{string}\), and it is reasonable to expect that in a theory of quantum gravity from which EFT emerges in the IR the Planck scale \(M_{Pl}\) is a kind of maximal fundamental scale (corresponding to the minimal distance \(\ell_{Pl}\)). In any of these settings it appears that there would be a fine-tuned cancellation of the sort depicted in figure 21 unless one or more of these fundamental scales is relatively close to the weak scale. That is, as far as we understand so far, the hierarchy problems illustrated here are in fact quite general: either new physics appears not far
Figure 21: As illustrated here for gaugino-mediation, the larger the superpartner masses are the smaller is the sliver of parameter space in which the emergent weak scale is \(\mathcal{O}(100)\) GeV. This suggests that the superpartners should not lie too far above the weak scale. We are still waiting for them to show up.
above the weak scale or EWSB arises as a fine cancellation of huge competing and uncorrelated contributions to the Higgs effective potential.
## 18 The UnSequestered
In figure 20 I showed how gauge superfields in the bulk (in particular the gauginos) straddle the extra dimension in communicating SUSY-breaking to the visible MSSM sector (scalars, particularly) at loop level. But what about other possible bulk fields that might also mediate SUSY breaking across the extra dimension? Of course, the one other mandatory inhabitant of the bulk is 5D supergravity. This can and does mediate a particular type of SUSY breaking to the MSSM [13] but it can readily be subdominant to gaugino-mediation. There can be other fields needed to stabilize the size \(L\) of the extra dimension, given that spacetime is dynamical with 5D General Relativity. See for example, Ref. [30]. Such fields can also readily give only subdominant SUSY breaking contributions.
But as (5D) effective field theorists we are always at the mercy of very heavy fields that lie above the UV cutoff of our (5D) EFT control, minimally the KK scale \(1/L\). So let us consider the generic effect of such a massive bulk field, \(m_{5}>1/L\), as depicted in figure 22. I have estimated the size of the effect by the same arguments of subsection 6.3 for massive bulk exchange between boundary localized fields. In particular there is the Yukawa suppression \(e^{-m_{5}L}\) for a massive field to connect distant boundaries. But without knowing the details of this massive field, it might well couple to MSSM matter with flavor-violating couplings \(\propto c_{ij}\). So flavor violation and CP violation in \(c_{ij}\) is not completely eliminated by sequestering, but only exponentially suppressed. Indeed, given that the maximal scale \(M_{Pl}\) is not far above \(1/L\), the possible \(m_{5}L\) of heavy states associated to string theory or quantum gravity cannot be too large. While it is possible that the exponential suppressions are sufficient to make BSM FCNC or CP-violation smaller than current bounds, it is interesting that upcoming experimental improvements may detect the effects of the heavy bulk fields close to the Planck scale! It may be these precision low energy tests that are the first to go beyond
Figure 22: Flavor-violating contributions to the MSSM SUSY-breaking scalar masses can be mediated by heavy 5D \(\sim\) Planckian bulk fields, whose couplings are not as constrained as those of the gauge superfields. But these heavy exchanges across the extra dimension are Yukawa-suppressed.
the SM, but if the sequestering is very good then it will take high energy colliders to explicitly see the BSM physics. It is exciting to see how a BSM breakthrough may happen along very different experimental fronts, whether flavor, CP, collider resonances, or dark matter detection.
## 19 Realistic Gaugino(-Higgs) Mediated SUSY Breaking
### Higgs superfields in the Bulk
So far, I have reviewed a toy model of SUSY breaking in the MSSM in which the down-type fermions and Higgsinos remain massless, while the up-type fermions and the EWSB Higgs field are treated fairly realistically, albeit with rather crude approximations in the RG analysis. Here, I briefly want to leave you with the set-up that allows a fully realistic incarnation of the MSSM and SUSY-breaking, while still solving the SUSY flavor- and CP-problems, and now the \(\mu\)-problem as well. It is depicted in figure 23.
The central difference from before is that the MSSM Higgs superfields now propagate in the bulk rather than being boundary-localized. Therefore, like the MSSM gauge superfields, they can directly couple to the hidden sector on the hidden boundary and acquire SUSY-breaking masses. They can still have superpotential couplings with the flavored matter localized on the visible boundary. Just as we are taking CP-violation to be localized to the visible boundary, we can take Peccei-Quinn symmetry violation to be localized to the hidden boundary, allowing arbitrary Higgs superfield couplings to \(\Sigma\). This allows us to realize a \(\mu\) term naturally comparable to the other MSSM soft masses, via the Giudice-Masiero mechanism, as well as a \(B\mu\) term also naturally comparable to the other soft mass-squareds.
These two terms fix the lack of realism in our toy model. The \(\mu\) term straightforwardly incorporates a Dirac mass for the Higgsinos, so they are no longer massless. \(B\mu\) is the coefficient of
Figure 23: Improving on our toy model, the Higgs superfields now propagate in the 5D bulk, permitting the Giudice-Masiero mechanism and thereby a realistic theory.
the Higgs scalar mass-mixing, \(H_{u}H_{d}\). To see its effect, we treat it as modestly small with respect to the Higgs spectrum of the toy model, where the \(H_{u}\) was solely responsible for EWSB and the entire \(H_{d}\) doublet was massive with zero VEV. Perturbatively, \(B\mu H_{u}H_{d}\) will induce a tadpole for \(H_{d}\) after \(H_{u}\) EWSB, \(B\mu\langle H_{u}\rangle H_{d}\). Given that \(H_{d}\) is massive, such a tadpole will induce a small VEV for \(H_{d}\),
\[\langle H_{d}\rangle\approx\frac{B\mu}{m_{H_{d}}^{2}}\langle H_{u}\rangle. \tag{103}\]
This subdominant EWSB VEV is sufficient to give the small (compared to the weak scale) down-type fermion masses we observe without overly disturbing our discussion of the 125 GeV observed Higgs phenomenology. But it does mean that the down-type Yukawa couplings are different from their non-SUSY SM equivalents to get the same observed fermion masses:
\[Y_{d,ij}^{MSSM}\approx Y_{d,ij}^{SM}\tan\beta,\ Y_{e,ij}^{MSSM}\approx Y_{e,ij} ^{SM}\tan\beta, \tag{104}\]
where
\[\tan\beta\equiv\frac{\langle H_{u}\rangle}{\langle H_{d}\rangle}\gg 1. \tag{105}\]
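To get a feel for Eqs. (103)-(105), here is a quick numerical sketch in Python; the soft parameters below are purely illustrative assumptions, not values derived in these lectures.

```python
# Illustrative check of Eqs. (103)-(105) with assumed (not fitted) soft parameters.
import math

v_u   = 174.0          # <H_u> in GeV (assumed, close to the SM Higgs VEV / sqrt(2))
B_mu  = 500.0**2       # GeV^2, assumed soft Higgs mass-mixing
m_Hd2 = 2000.0**2      # GeV^2, assumed heavy H_d mass-squared

v_d = (B_mu / m_Hd2) * v_u          # Eq. (103): tadpole-induced <H_d>
tan_beta = v_u / v_d                # Eq. (105)

# Eq. (104): down-type Yukawas are enhanced by tan(beta) relative to the SM
y_b_SM = math.sqrt(2) * 2.9 / 246.0   # rough bottom Yukawa from m_b ~ 2.9 GeV
y_b_MSSM = y_b_SM * tan_beta

print(f"<H_d> ~ {v_d:.1f} GeV, tan(beta) ~ {tan_beta:.0f}")
print(f"y_b^SM ~ {y_b_SM:.3f}  ->  y_b^MSSM ~ {y_b_MSSM:.2f}")
```

With these assumed inputs one finds \(\tan\beta\sim 16\gg 1\), so the down-type VEV is indeed subdominant while the bottom Yukawa is correspondingly enhanced.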
With tree-level matching at the KK scale we get our softly broken 4D MSSM, but with just the flavor-dependent \(c_{ij}^{Q,L,U,D,E}=0\), while the other \(c\)'s are non-zero and roughly order one and contain no new CP-violating phases. It is just this feature that ensures that the SUSY flavor- and CP-problems are solved. The RG analysis is more complicated than our toy analysis because the Higgs soft masses are additional seeds of SUSY-breaking beyond the gaugino masses, but qualitatively it is similar, and can readily yield fully realistic super-spectra. This attractive BSM framework is still called "\(\tilde{g}\)MSB", though it is perhaps more accurate to call it "gaugino-Higgs" mediation.
### A note on the different UV scales
Just as for gauge theory, where the 4D and 5D couplings are different but related as in Eq. (37), the same is true for general relativity,
\[\frac{1}{G_{4D,eff}}=\frac{L}{G_{5D}}. \tag{106}\]
In terms of Planck scales,
\[M_{Pl,4D}\sim 10^{18}\mbox{GeV}=M_{Pl,5D}^{3/2}L^{1/2}. \tag{107}\]
Since we are taking \(1/L\sim 10^{16}\)GeV, it follows that the 5D Planck scale is
\[M_{Pl,5D}\sim 10^{17}\mbox{GeV}. \tag{108}\]
Since \(M_{Pl,5D}\) is the fundamental scale of 5D gravity, one might expect it to set the scale of non-renormalizable couplings in our theory rather than the \(M_{Pl,4D}\) we have used so far, in units of which our non-renormalizable couplings were expected to be dimensionless \(c\)'s of order one. More generally, if one carefully does the 5D to 4D matching there will be various (fractional) powers of \(LM_{Pl,5D}\sim 10\) that will arise here and there in 5D Planck units. But since our philosophy is that we are allowing modestly hierarchical input couplings while aiming to generate all other hierarchies via
physical exponentials, we can afford to be sloppy about such \(LM_{Pl,5D}\) factors. We simply absorb them into redefinitions of the \(c\)'s of the 4D EFT.
If we are being sloppy about these UV distinctions between \(1/L,M_{Pl,5D},M_{Pl,4D}\), then what is the point of having invoked them in the first place?! The central point is that extremely heavy 5D particles outside our 5D EFT are expected to have masses at most \(\sim M_{Pl,5D}\), so the non-sequestered effects they mediate are suppressed by \(e^{-M_{Pl,5D}L}\sim 10^{-4}\). This level of suppression is very important in understanding how the SUSY flavor- and CP-problems are adequately solved. So we have invoked the modest distinctions between \(L,M_{Pl,5D},M_{Pl,4D}\) because they are important when exponentiated, but not otherwise. Of course, one can more carefully track these distinctions even outside exponentials, if desired.
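As a quick order-of-magnitude sketch of Eqs. (107)-(108) and of the sequestering suppression quoted above (using the rounded inputs of the text):

```python
# Order-of-magnitude check of Eqs. (107)-(108) and the sequestering suppression.
import math

M_Pl_4D = 1e18     # GeV
inv_L   = 1e16     # GeV, the assumed KK scale 1/L

# Eq. (107): M_Pl,4D = M_Pl,5D^{3/2} L^{1/2}  =>  M_Pl,5D = (M_Pl,4D^2 / L)^{1/3}
M_Pl_5D = (M_Pl_4D**2 * inv_L) ** (1.0 / 3.0)
print(f"M_Pl,5D ~ {M_Pl_5D:.1e} GeV")      # ~ 2e17 GeV, i.e. ~10^17 GeV as in Eq. (108)

# Sequestering suppression e^{-m5 L} for the heaviest states, m5 ~ M_Pl,5D.
# Using the rounded value M_Pl,5D * L ~ 10 quoted in the text:
print(f"e^-10 ~ {math.exp(-10):.1e}")       # ~ 5e-5, i.e. ~10^-4 as quoted
```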
## 20 Dynamical SuperSymmetry Breaking (DSSB)
Far below the highest scales in our theory, \(1/L,M_{Pl,5D},M_{Pl,4D}\), lies the intermediate scale of SUSY-breaking, \(\Lambda\sim 10^{11}\) GeV. The soft SUSY-breaking scale \(\sim\Lambda^{2}/M_{Pl}\sim\) (several) TeV, and subsequent EWSB are derived from it. Until now, we have put \(\Lambda\) in by hand into the hidden sector Lagrangian, but ultimately we would like to have \(\Lambda\ll M_{Pl}\) be emergent from a Planckian theory by something like dimensional transmutation. Remarkably, there are indeed elegant QFT mechanisms that accomplish this task if we generalize the hidden sector to include a suitable hidden SUSY gauge theory. Spontaneous SUSY breaking via dimensional transmutation is called "dynamical supersymmetry breaking" (DSSB) [31].
Here I will give the simplest example of this, with the simplicity again coming at the cost of being a non-renormalizable EFT. The core mechanism in DSSB is "gaugino-condensation", not to be confused with the gaugino-mediation we have discussed above. In particular, gaugino-mediation involved gauginos of the MSSM, whereas we are now focusing on the hidden sector, and the gauginos that are condensing belong to a _hidden sector gauge theory_ under which no MSSM fields are charged. Consider a rather minimal hidden sector containing SUSY non-abelian Yang-Mills theory (SYM), consisting of just gauge bosons and gauginos, and our \(\Sigma\) chiral superfield which remains a complete gauge singlet. The hidden Lagrangian is
\[{\cal L}_{\rm hidden}=\int\,d^{4}\theta\left(|\Sigma|^{2}-\frac{|\Sigma|^{4}}{4M_{1}^{2}}\right)+\left\{\int\,d^{2}\theta\left(\frac{1}{4g^{2}}-\frac{\pi\Sigma}{M_{2}}\right){\cal W}_{\alpha}^{2}+h.c.\right\} \tag{109}\]
The two scales of non-renormalizable couplings, \(M_{1},M_{2}\), are taken to be roughly Planckian in size. We have not included any superpotential for \(\Sigma\), because unlike in section 10, where we put in a superpotential with \(\Lambda\) by hand, here we want it to be generated via dimensional transmutation.
In components, SYM is a non-abelian gauge theory with the gaugino being a single species of "quark" in the adjoint representation of the gauge group (to match that of the gauge boson). We can justify the vanishing superpotential by imposing \(U(1)_{R}\) symmetry, under which the bosons \(A_{\mu}\) and \(\sigma\) (and hence superfield \(\Sigma\)) have vanishing charge but their fermionic superpartners are charged. Since this means the Grassmann coordinates \(\theta\) are also charged, it follows that \(\int\,d^{2}\theta\ W(\Sigma)\) is forbidden. Now, this may seem like overkill, since we do want to generate a non-trivial \(\Sigma\) superpotential by dimensional transmutation. Fortunately, \(U(1)_{R}\) is non-perturbatively anomalous, its action on the gaugino "quark" being analogous to the famously anomalous \(U(1)_{\rm axial}\) chiral symmetry of
QCD. So, \(U(1)_{R}\) is at best a perturbative symmetry, and non-perturbatively we can expect that it is broken. This is the perfect situation, perturbatively \(W(\Sigma)\) vanishes but then it can be generated non-perturbatively.
In QCD with massless quarks, there are various axial chiral symmetries which are spontaneously broken and result in massless composite Nambu-Goldstone bosons, but the overall \(U(1)_{axial}\) is non-perturbatively anomalous and the anomalous breaking does not give rise to a NG boson. Similarly, in SYM the anomalous breaking of \(U(1)_{R}\) does not give rise to a NG composite. Instead the spectrum of composites of the confined gauge bosons and gauginos is completely massive, with masses given by dimensional transmutation of order
\[\Lambda_{SYM}\sim M_{\rm Pl}\,e^{-1/\left(b_{SYM}\,\alpha_{SYM}(M_{\rm Pl})\right)}. \tag{110}\]
Here, \(b_{SYM}=3N/(2\pi)\) is the one-loop \(\beta\)-function coefficient appropriate to SYM for gauge group \(SU(N)\). We can therefore imagine integrating out the entire massive SYM sector and asking what the EFT for \(\Sigma\) is below \(\Lambda_{SYM}\). In particular, we would like to know what effective non-perturbative superpotential \(W_{non-pert}(\Sigma)\) is generated. There are two big clues as to its form: (i) \(W_{non-pert}\) must be holomorphic by SUSY, depending on \(\Sigma\) but not \(\bar{\Sigma}\), (ii) as far as SYM is concerned \(\Sigma\) only appears in the combination \(1/\alpha-\Sigma/M_{2}\) (see Eq. (109)), so \(W_{non-pert}\) only depends on this combination (but not its conjugate).
Putting these clues together with general features of non-perturbative gauge theory will fix the form of \(W\). Given that the dimension-3 superpotential must be generated by dimensional transmutation, we must have
\[W_{\rm non-pert}\propto M_{\rm Pl}^{3}\,e^{-3/\left(b_{SYM}\,\alpha_{SYM}(M_{\rm Pl})\right)}. \tag{111}\]
Therefore, given (ii),
\[W_{\rm non-pert}=M_{\rm Pl}^{3}\,e^{-\frac{3}{b_{SYM}}\left(\frac{1}{\alpha_{SYM}(M_{\rm Pl})}-\frac{\Sigma}{M_{2}}\right)}. \tag{112}\]
Equivalently,
\[W_{\rm non-pert}=\mathcal{O}(1)\,\Lambda_{\rm SYM}^{3}\,e^{3\Sigma/(b_{SYM}M_{2})}. \tag{113}\]
Note, we are computing the non-perturbative correction to the superpotential even though it is very "small" (since we can readily have \(\Lambda_{SYM}\ll M_{Pl}\) for modestly small \(\alpha_{SYM}(M_{Pl})\)) because otherwise it would simply vanish. We do not need to compute corrections to the Kähler potential since that is already non-trivial. Putting together \(K(\Sigma,\bar{\Sigma})\) and \(W_{non-pert}(\Sigma)\) in the \(\Sigma\) EFT below \(\Lambda_{SYM}\), we find a scalar \(\sigma\) potential,
\[V \sim \frac{\Lambda_{SYM}^{6}}{M_{Pl}^{2}}\,\frac{e^{3(\sigma+\overline{\sigma})/(b_{SYM}M_{2})}}{1-\sigma\overline{\sigma}/M_{1}^{2}} \tag{114}\] \[= \frac{\Lambda_{SYM}^{6}}{M_{Pl}^{2}}\,\frac{e^{6\sigma_{1}/(b_{SYM}M_{2})}}{1-(\sigma_{1}^{2}+\sigma_{2}^{2})/M_{1}^{2}},\]
where we decompose into real and imaginary components, \(\sigma\equiv\sigma_{1}+i\sigma_{2}\). We easily see that for sub-Planckian VEVs, \(\langle\sigma_{2}\rangle=0\) is a minimum. Plugging this in, \(V(\sigma_{1})\) is sketched in figure 24. Clearly,
the non-renormalizable EFT is breaking down at \(\sigma\sim M_{1}\), so we should only trust it for \(\langle\sigma_{1}\rangle\ll M_{1}\). This happens for \(b_{SYM}M_{2}\gg M_{1}\), which is the case that is plotted. In fact, we can analytically solve for the VEV to leading order in \(M_{1}/(b_{SYM}M_{2})\ll 1\) straightforwardly (which does not have to be very small for this to be a reasonable approximation),
\[\langle\sigma_{1}\rangle\approx-\frac{M_{1}^{2}}{3b_{SYM}M_{2}},\ \langle\sigma_{2}\rangle=0. \tag{115}\]
Since the vacuum energy density is positive, this shows that SUSY has been spontaneously broken at the scale \(\sim\Lambda_{SYM}^{3/2}/M_{Pl}^{1/2}\).
We can think of this new hidden sector model as realizing our old model of section 10, in the approximation where \(M_{2}\) is the largest scale so that we can Taylor expand the non-perturbative superpotential to leading non-trivial order in \(1/M_{2}\),
\[\int\,d^{2}\theta\,W_{\rm non-pert}(\Sigma) \sim\int\,d^{2}\theta\,\Lambda_{\rm SYM}^{3}\left(1-\frac{3\Sigma }{b_{SYM}M_{2}}\right)\] \[\sim\int\,d^{2}\theta\,\frac{3\Lambda_{\rm SYM}^{3}}{b_{SYM}M_{ 2}}\,\Sigma. \tag{116}\]
That is, we are simply realizing our old SUSY-breaking scale via dimensional transmutation,
\[\Lambda\sim\frac{\Lambda_{\rm SYM}^{3/2}}{M_{Pl}^{1/2}}\sim\mathcal{O}(M_{\rm Pl})\ e^{-3/\left(2\,b_{SYM}\,\alpha_{SYM}(M_{\rm Pl})\right)}. \tag{117}\]
Given we want \(\Lambda\sim 10^{11}\) GeV, we see that \(\Lambda_{SYM}\sim 10^{14-15}\) GeV. In essence then, all the earlier sections utilizing the SUSY breaking module of section 10 continue unaffected after we integrate out the hidden SYM "hadrons" with masses \(\sim\Lambda_{SYM}\sim 10^{14-15}\) GeV, with just the understanding that \(\Lambda\ll M_{Pl}\) is now elegantly arising via dimensional transmutation.
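As a small numerical sketch of the dimensional transmutation in Eq. (110): the hidden gauge group and the value of \(\alpha_{SYM}(M_{\rm Pl})\) below are illustrative assumptions, chosen only to show how a modest input coupling exponentiates into the \(\Lambda_{SYM}\sim 10^{14-15}\) GeV range quoted above.

```python
# Illustrative evaluation of Eq. (110); N and alpha_SYM(M_Pl) are assumed inputs.
import math

M_Pl      = 1e18                    # GeV
N         = 5                       # assumed hidden gauge group SU(5)
b_SYM     = 3 * N / (2 * math.pi)   # one-loop SYM beta-function coefficient
alpha_MPl = 0.05                    # assumed hidden coupling at the Planck scale

Lambda_SYM = M_Pl * math.exp(-1.0 / (b_SYM * alpha_MPl))
print(f"b_SYM ~ {b_SYM:.2f},  Lambda_SYM ~ {Lambda_SYM:.1e} GeV")
# ~ 2e14 GeV, inside the 10^{14-15} GeV window quoted above.
```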
Figure 24: The scalar effective potential arising from coupling to the non-perturbative SYM dynamics. Its positive (local) minimum signals spontaneous SUSY breaking.
## 21 Conclusions
I have covered a variety of big-picture mechanisms and considerations in these lectures, culminating in a prime example of a _comprehensive_ BSM framework. Most research projects these days do not involve developing comprehensive BSM frameworks, but rather pushing on particular directions related to new experimental developments or anomalies, or interesting new theoretical mechanisms involving a modest set of particles. Nevertheless, I hope the comprehensive big picture provides a useful backdrop, context and guidance for your own particular research activities. It certainly does for me in my current research.
The focal point of the lectures was the grand ambition of generating _all_ significant particle hierarchies, in the spirit of the non-perturbative mechanism of dimensional transmutation arising in QCD. To make this practicable, modest hierarchies of an order of magnitude were accepted without question in the input parameters of the final model if the mechanisms were present to exponentiate these into the larger hierarchies we see. The actual Hierarchy Problem and Little Hierarchy Problem were (re-)introduced in the context of this ambitious goal.
I should qualify that we only attempted to understand the _non-gravitational_ particle hierarchies, the quantum gravity Planck scale itself only featuring as the UV "end" of particle physics, the ultimate cutoff of particle physics EFT. But we did not attempt to address the greatest hierarchy problem, the Cosmological Constant Problem, based on the experimental observation
\[\rho_{dark-energy}\sim\mbox{meV}^{4}\ll v_{weak}^{4}\ll M_{Pl}^{4}, \tag{118}\]
which brings in gravity in an essential way. There is no known (well-accepted) mechanism for realistically solving this problem other than an appeal to the Anthropic Principle operating within a vast multiverse of universes with differing laws and couplings [32]. And yet, it is intriguing that global SUSY gives a symmetry reason why the vacuum energy vanishes, and that when coupled to gravity to make SUGRA theories this at least gives a very natural means of having extremely small dark energy (cosmological constant). The problem is that this powerful mechanism for small dark energy does not survive realistic SUSY-breaking. Nevertheless, (to me) it is one of the great discoveries of theoretical physics that a (non-realistic) interacting theory of particles and gravity, with diverse mass scales, can have very small dark energy if the theory is supersymmetric.
There is a second qualification. Physics is based on the laws of nature that allow us to time-evolve an initial state _and_ a suitable initial state as input. In these lectures, I have not discussed any of the puzzles and hierarchical structure that relate to the initial conditions of the universe, or at least very early universe conditions. See Ref. [33] for a review. For example, the state of the universe is clearly asymmetric between ordinary matter and antimatter and the laws we discussed simply maintain this asymmetry in time. So the observed asymmetry must be due to some other mechanism operating in the early universe, broadly known as Baryogenesis. The very high degree of homogeneity of the universe on the largest length scales poses another puzzle, most often mitigated within the framework for initial conditions known as Cosmic Inflation. Questions of cosmic initial conditions and the place of our universe within a larger multiverse are highly ambitious and yet plausibly within our ability to make progress on, by a combination of cosmological experiments and BSM theory. I hope that these lectures and their focus on the current laws of nature will provide a useful base for the ongoing pursuit of these grand questions.
## Acknowledgements
I am grateful to the organizers, Jiji Fan, Stefania Gori and Liantao Wang, for their invitation to lecture at TASI 2022. I am grateful to Arushi Bodas for assistance in preparing the figures, equations and bibliography for this article. This work was supported by NSF grant PHY-2210361 and by the Maryland Center for Fundamental Physics.
|
2307.08169 | Discovering User Types: Mapping User Traits by Task-Specific Behaviors
in Reinforcement Learning | When assisting human users in reinforcement learning (RL), we can represent
users as RL agents and study key parameters, called \emph{user traits}, to
inform intervention design. We study the relationship between user behaviors
(policy classes) and user traits. Given an environment, we introduce an
intuitive tool for studying the breakdown of "user types": broad sets of traits
that result in the same behavior. We show that seemingly different real-world
environments admit the same set of user types and formalize this observation as
an equivalence relation defined on environments. By transferring intervention
design between environments within the same equivalence class, we can help
rapidly personalize interventions. | L. L. Ankile, B. S. Ham, K. Mao, E. Shin, S. Swaroop, F. Doshi-Velez, W. Pan | 2023-07-16T22:41:17Z | http://arxiv.org/abs/2307.08169v1 | # Discovering User Types: Mapping User Traits by Task-Specific Behaviors in Reinforcement Learning
###### Abstract
When assisting human users in reinforcement learning (RL), we can represent users as RL agents and study key parameters, called _user traits_, to inform intervention design. We study the relationship between _user behaviors_ (policy classes) and user traits. Given an environment, we introduce an intuitive tool for studying the breakdown of "user types": broad sets of traits that result in the same behavior. We show that seemingly different real-world environments admit the same set of user types and formalize this observation as an equivalence relation defined on environments. By transferring intervention design between environments within the same equivalence class, we can help rapidly personalize interventions.
Inferring user parameters from demonstration is a difficult and nonidentifiable problem (Shah et al., 2019). This paper shows that, while user parameters cannot be exactly recovered from behavior data in most settings, we can infer general rules about the relationship between user parameters and user behavior. These rules can help us design mHealth interventions.
Equivalence in Inverse RL (IRL).In IRL, when parameters of an MDP cannot be uniquely identified, we infer classes of these parameters, typically rewards (Ziebart, 2010) or transition functions (Reddy et al., 2018; Golub et al., 2013), that are equally likely under the behavior data provided by _one_ user. In this work, we study the behaviors of _multiple users_ and equate different environments (MDPs) in which the partitioning of the set of users by behavior is similar.
Equivalence of MDPs.Notions of equivalence between MDPs allow for knowledge transfer between different environments (Soni and Singh, 2006; Sorg and Singh, 2009). For example, bisimulation-based equivalence definitions are used in MDP minimization, where large state spaces are reduced to speed up planning (Givan et al., 2003). Relaxed versions of bisimulations, e.g., MDP homomorphism (Biza and Platt, 2018), stochastic homomorphism (van der Pol et al., 2020), and approximate homomorphisms (Ravindran and Barto, 2004) allow optimal policies in simple MDPs to be lifted to desirable policies in more complex and comparable MDPs. More general definitions of MDP equivalence can be defined through other methods of state aggregation (e.g., value equivalence) (Li et al., 2006). While these notions of equivalence are defined over the set of MDPs, we decompose an MDP into task-specific and user-specific components and consider equivalences between the task-specific components of MDPs while varying the user-specific ones.
## 3 Formalizing Users as RL Agents
We formalize an RL environment for an mHealth application as a Markov Decision Process (MDP). An MDP is a 5-tuple, \(\mathcal{M}=\langle\mathcal{S},\mathcal{A},T,R,\gamma\rangle\), consisting of a set of states \(\mathcal{S}\), a set of actions \(\mathcal{A}\), a reward function \(R:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}\), a transition function \(T:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) and a discount rate \(\gamma\in[0,1]\). For simplicity, in this paper, we only consider discrete state spaces.
An optimal RL agent acts in \(\mathcal{M}\) according to a policy \(\pi_{\mathcal{M}}:\mathcal{S}\rightarrow\mathcal{A}\), giving a cumulative reward (expected returns): \(J^{\pi}_{\mathcal{M}}=\mathbb{E}\left[\sum\limits_{t=0}^{T}\gamma^{t}r_{t}\right]\), where \(r_{t}\) is the random variable representing the reward received at time \(t\). The optimal policy for \(\mathcal{M}\) maximizes the expected returns: \(\pi^{*}_{\mathcal{M}}=\arg\max\limits_{\pi}J^{\pi}_{\mathcal{M}}\).
We want the user to adopt the optimal policy \(\pi^{*}_{\mathcal{M}}\) in \(\mathcal{M}\). However, the user plans in their perceived environment, \(\mathcal{M}^{\text{user}}=\langle\mathcal{S}^{\text{user}},\mathcal{A}^{ \text{user}},T^{\text{user}},R^{\text{user}},\gamma^{\text{user}}\rangle\) and adopts the policy, \(\pi^{*}_{\mathcal{M}^{\text{user}}}\), that is optimal for \(\mathcal{M}^{\text{user}}\). Discrepancies between the real environment and the user's perceived one can lead to drastic differences between the target policy, \(\pi^{*}_{\mathcal{M}}\), and the adopted one, \(\pi^{*}_{\mathcal{M}^{\text{user}}}\).
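For concreteness, the following minimal Python/NumPy sketch computes a greedy optimal policy for a finite MDP by value iteration; the tiny two-state example, the simplified reward signature \(R(s,a)\), and the function names are illustrative assumptions rather than the implementation used in this paper.

```python
# Minimal value-iteration sketch for a finite MDP, usable for both M and M^user.
# T has shape (|S|, |A|, |S|); R has shape (|S|, |A|).
import numpy as np

def value_iteration(T, R, gamma, n_iters=500):
    V = np.zeros(T.shape[0])
    for _ in range(n_iters):
        # Q(s,a) = R(s,a) + gamma * sum_s' T(s,a,s') V(s')
        Q = R + gamma * (T @ V)
        V = Q.max(axis=1)
    return Q.argmax(axis=1), V   # greedy (deterministic) policy and its values

# Tiny 2-state, 2-action example: action 1 moves toward the rewarding state 1.
T = np.zeros((2, 2, 2))
T[:, 0, 0] = 1.0                 # action 0: go to / stay in state 0
T[:, 1, 1] = 1.0                 # action 1: go to / stay in state 1
R = np.array([[0.0, 0.0], [0.0, 1.0]])
policy, V = value_iteration(T, R, gamma=0.9)
print(policy, V)                 # [1 1] [9. 10.]
```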
In this work, we shall assume that the perceived environment differs from the real only in the transition function (modeling the user trait confidence) and the discount rate (modeling the user trait myopia). Specifically, we define a _world_ as a tuple \(\mathcal{W}=\langle\mathcal{S},\mathcal{A},R\rangle\) of states \(\mathcal{S}\), actions \(\mathcal{A}\), and reward function \(R\). This captures the real environment and the task in an application of interest (see Fig. 2 for example-grid worlds). Since the user's perceived states, actions, and rewards match the real environment, we set \(\mathcal{S}^{\text{user}}=\mathcal{S}\), \(\mathcal{A}^{\text{user}}=\mathcal{A}\) and \(R^{\text{user}}=R\).
Furthermore, since we are interested in the set of optimal policies generated by varying the user's perceived environment \(\mathcal{M}^{\text{user}}\), we do not keep track of the real transition function \(T\) and the real discount rate \(\gamma\). Instead, the user's policy depends only on their (fixed) perception of the environment, \(T^{\text{user}}\), and their (fixed) discount rate \(\gamma^{\text{user}}\). The real \(T\) is useful only when the user is learning (updating \(T^{\text{user}}\)) based on data generated by \(T\).
We use \(\gamma^{\text{user}}\in[0,1]\) to represent the user's level of _myopia_. To represent the level of _confidence_, we parameterize the user's transition \(T^{\text{user}}_{p}\) function with \(p\in[0,1]\), which is the level of stochasticity in the environment transitions that the user perceives. Other parameterizations of \(T^{\text{user}}\) are possible, but this one aligns with the intuition that a user with low confidence is unsure whether their actions \(a\in\mathcal{A}^{\text{user}}\) will lead to desired outcomes \(s^{\prime}\in\mathcal{S}^{\text{user}}\).
In Section 4, we model how and why users with distinct traits behave differently (i.e., adopt different policies) in the same real-life setting. For example, two people with different levels of myopia would judge different PT behaviors to be optimal in their respective MDPs. However, we first connect our formalization of user traits (their level of myopia \(\gamma^{\text{user}}\), and their confidence level \(p\) parameterizing \(T^{\text{user}}_{p}\)) to well-studied constructs in psychology and behavioral science.
Mapping RL to Behavior Science._Myopia_ corresponds to the concept of temporal discounting in psychology. In user MDPs, we represent temporal discounting with \(\gamma^{\text{user}}\in[0,1)\). This captures people's tendency to undervalue future rewards, often leading to unhealthy behavior (Story et al., 2014). However, we note that in RL, discounting is exponential by default, which does not capture the phenomenon observed in humans called _preference reversal_(Ainslie and Haslam, 1992; Shah et al., 2019) (which _hyperbolic discounting_ is more suited for).
In behavioral science, _confidence_, also known as self-efficacy, measures an agent's belief in their capability to perform a task (Picha et al., 2021). Intuitively, this is the user's perceived probability that their intended outcome can be achieved through action. In user MDPs, we represent the user's confidence level with \(p\in[0,1]\), which is the level of stochasticity in the transitions. Concretely, \(T_{p}^{\text{user}}(s,a,s^{\prime})=p\) for a user's intended outcome \(s^{\prime}\) from performing action \(a\) in state \(s\). We divide the remaining \(1-p\) probability equally among the alternate outcomes: \(T^{\text{user}}(s,a,\hat{s}^{\prime})=\frac{1-p}{|\mathcal{S}|}\). Our current instantiation of confidence is simple, and it is equivalent to adding epsilon-noise to the real-world transition matrix. However, the transition \(T_{p}^{\text{user}}\) can be a function of \(p\) in more complex ways.
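The following sketch constructs such a confidence-parameterized perceived transition function; to keep each row a proper probability distribution, the remaining \(1-p\) mass is split here over the \(|\mathcal{S}|-1\) alternate outcomes, a small normalization choice on top of the description above.

```python
# Sketch of a confidence-parameterized perceived transition function T_p^user.
# intended[s, a] gives the outcome state the user intends when taking action a in s.
import numpy as np

def perceived_transitions(intended, n_states, p):
    n_s, n_a = intended.shape
    # spread 1-p over the alternate states, then place p on the intended outcome
    T = np.full((n_s, n_a, n_states), (1.0 - p) / (n_states - 1))
    for s in range(n_s):
        for a in range(n_a):
            T[s, a, intended[s, a]] = p
    return T

intended = np.array([[0, 1], [0, 1]])      # same toy world as above
print(perceived_transitions(intended, 2, p=0.7))
```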
## 4 Behavior Maps: A Tool for Understanding User Traits and User Behaviors
In the previous section, we formalized the user's MDP \(\mathcal{M}^{\text{user}}\) and their optimal policy \(\pi^{*}_{\text{user}}\). We now introduce behavior maps, a tool for studying the relationship between the user-specific parameters (\(T_{p}^{\text{user}}\), \(\gamma^{\text{user}}\)) and the corresponding optimal user policy \(\pi^{*}_{\text{user}}\).
Given a world \(\mathcal{W}\), we denote the set of possible (deterministic) policies, \(\pi:\mathcal{S}\rightarrow\mathcal{A}\), as \(\Pi_{\mathcal{W}}\). We note that in many real-life applications, distinct policies may functionally describe the same type of behavior (e.g., if we are interested in overall adherence, skipping PT exercises every Tuesday can be considered functionally equivalent to skipping every Monday). Thus, we work with a concept that generalizes the notion of policy; we define a "_user behavior_", denoted \(B\subset\Pi_{\mathcal{W}}\), as a set of policies considered _equivalent_ in the application domain. We study how differences in user traits lead to different user behaviors.
To do this, we introduce a _behavior map_ of the world \(\mathcal{W}\) as a mapping of user traits to the corresponding user behaviors in \(\mathcal{W}\). That is, the behavior map \(\mathcal{B}_{\mathcal{W}}\) maps \((\gamma^{\text{user}},p)\) to the user behavior \(B\) that contains the optimal policy for the user MDP \(\mathcal{M}_{\text{user}}=\langle\mathcal{S},\mathcal{A},\mathcal{R},T_{p}^{ \text{user}},\gamma^{\text{user}}\rangle\).
In Fig. 1, we show an example of a behavior map. We see that it classifies the user parameter space into regions where parameters map to the same user behavior. In this world, there are only two behaviors (indicated by color), and the user's behavior depends on the value of their user traits (the two axes).
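A behavior map can be approximated numerically by sweeping a grid of user traits and solving each user MDP. The following self-contained sketch does this for a toy Big-Small-like chain world; the world size, rewards, and grid below are illustrative assumptions, not the parameters behind Fig. 1.

```python
# Sketch of a discretized behavior map for a toy "Big-Small"-like chain world:
# a small reward sits one step left of the start, a big reward several steps right.
import numpy as np

N_STATES = 7                       # states 0..6 on a line
START, SMALL, BIG = 1, 0, 6
R_SMALL, R_BIG = 1.0, 10.0
ACTIONS = (-1, +1)                 # move left, move right

def solve_user_mdp(gamma, p, n_iters=1000):
    # perceived transitions: probability p on the intended outcome, rest spread uniformly
    T = np.full((N_STATES, len(ACTIONS), N_STATES), (1.0 - p) / (N_STATES - 1))
    for s in range(N_STATES):
        for a, step in enumerate(ACTIONS):
            T[s, a, np.clip(s + step, 0, N_STATES - 1)] = p
    for s in (SMALL, BIG):         # reward states are absorbing
        T[s, :, :] = 0.0
        T[s, :, s] = 1.0
    arrival = np.zeros(N_STATES)
    arrival[SMALL], arrival[BIG] = R_SMALL, R_BIG
    R = T @ arrival                # expected immediate reward R(s,a)
    R[SMALL, :] = 0.0              # each reward is collected only once
    R[BIG, :] = 0.0
    V = np.zeros(N_STATES)
    for _ in range(n_iters):
        Q = R + gamma * (T @ V)
        V = Q.max(axis=1)
    return Q.argmax(axis=1)        # greedy policy: 0 = head left, 1 = head right

def behavior_map(gammas, ps):
    # label at the start state: 0 = settle for the small reward, 1 = go for the big one
    return np.array([[solve_user_mdp(g, p)[START] for g in gammas] for p in ps])

gammas = np.linspace(0.05, 0.99, 15)
ps = np.linspace(0.05, 1.0, 15)
print(behavior_map(gammas, ps))    # rows indexed by confidence p, columns by gamma
```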
Applications of Behavior Maps.We demonstrate that behavior maps can inform the design and deployment of interventions on user traits (for example, interventions to increase \(\gamma^{\text{user}}\)). Specifically, they can help us (1) determine to what extent user traits are identifiable through behavioral observations; (2) warm-start an intervention strategy for interacting with new users.
Identifiability of User Traits.Since behavior maps tell us which set of parameters gives the same user behavior, they allow us to anticipate the limits of what we can infer about a user (using Inverse Reinforcement Learning (IRL) or related methods) by observing their behavior in a given world. For example, in worlds with the behavior map in Fig. 1, we can distinguish between users with low and high discount factors because users have different optimal policies (different colors). On the other hand, the difference in confidence does not generally correspond to a difference in behavior. Therefore, we cannot generally distinguish between users with different confidence levels. However, we find that behavior maps can inform intervention design, even when the parameters of individual users cannot be exactly inferred.
Warm-start Intervention Strategy.Given a world and a new user, behavior maps can help identify interventions that, a priori, are likely to be more impactful. In particular, the more variation there is in user behavior along a given axis, the more likely an intervention on the corresponding trait will change the user's behavior. For example, in Fig. 1, we know that an intervention on \(\gamma^{\text{user}}\) is more likely to change the user's behavior than an intervention on \(T_{p}^{\text{user}}\).
Although useful, directly computing the behavior map for a
Figure 1: Example behavior map (Big-Small world). The two colors indicate the two possible behaviors (see Fig. 2c for the world and the behaviors). Annotations describe the procedure for deriving the equivalence class. The x-axis varies over the discounting factor, \(\gamma\); the y-axis varies over the confidence level, \(p\). “Extreme” users, i.e., corners of the map, are labeled as circles. The number of “behavior switches” when tracing each edge between extreme users (from A to B, to C, to D, and back to A) are labeled as squares.
complex application such as PT requires solving user MDPs for a range of user parameters and can thus be computationally costly. Instead, to get the same insights, we reduce the PT world \(\mathcal{W}\) to a simpler toy world \(\mathcal{W}^{\prime}\), for which we can easily compute \(\mathcal{B}_{\mathcal{W}^{\prime}}\). We define an equivalence relation that allows us to make this reduction.
## 5 A Behavior-Based Equivalence Relation
This section uses behavior maps to draw analogies between seemingly different worlds.
Suppose that two different applications, such as PT and dieting, have the same behavior map, such as the one from Fig. 1. Then, in both applications, we know that confidence does not impact user behavior and that users with "low" gamma have one behavior, while users with "high" gamma have another. In this way, we consider PT and dieting equivalent worlds because intervention design principles can be transferred from one to the other. For example, in both cases, the initial intervention strategy should focus on \(\gamma^{\text{user}}\) instead of \(T_{p}^{\text{user}}\). Note that this transfer can work in cases where the state and action spaces differ between the two applications because the behavior maps depend on _high-level behaviors_ (not exact states and actions). For example, in PT, the behaviors may be a set of exercises. In dieting, the behaviors may be a set of food choices. In either case, there is a desired behavior (e.g., choosing nutritious foods or choosing the right exercises) and an undesired behavior. We are only concerned with what interventions will help the user go from undesired to desired behaviors, not that the actions defining those behaviors match exactly.
Moreover, we can transfer between worlds with similar but not necessarily identical behavior maps. For example, we might see that both PT and dieting have two possible behaviors, where users with lower \(\gamma^{\text{user}}\) act differently from users with higher \(\gamma^{\text{user}}\). However, what is considered to be "low" or "high" \(\gamma^{\text{user}}\) need not match exactly between the two applications: in PT, the range for "low" \(\gamma^{\text{user}}\) could be \([0,0.3]\) and in dieting the range could be \([0,0.2]\). If we knew both applications had similar behavior maps, we could still transfer the knowledge that the initial intervention strategy should focus on \(\gamma^{\text{user}}\) instead of \(T_{p}^{\text{user}}\). We could also transfer the knowledge that users with different \(\gamma^{\text{user}}\) are identifiable, while users with different \(T_{p}^{\text{user}}\) are not.
### Equivalence Between Behavior Maps
Thus motivated, we call two behavior maps equivalent if the _shapes_ of the decision boundaries between user behaviors in the behavior maps are the same and use an equivalence definition invariant to stretching or translation of these boundaries. We formalize this in Definition 5.1.
In the following, we assume, without loss of generality, that the axes of each behavior map \(\mathcal{B}_{\mathcal{W}}\) are scaled to the unit interval, that is, \(\mathcal{B}_{\mathcal{W}}\) is a map over \(I^{2}\), where \(I=[0,1]\). Thus, the decision boundary classifying different user behaviors in \(\mathcal{B}_{\mathcal{W}}\) is a 1-dimensional submanifold in \(I^{2}\) defined by the map \(g_{\mathcal{W}}:[0,1]\to I^{2}\) satisfying some additional constraints. Although we consider the case where the decision boundary is connected here, our definition extends straightforwardly to cases where it is not.
**Definition 5.1** (World Equivalence Induced by Behavior Map).: We define an equivalence relation, \(\equiv_{\mathrm{map}}\), on the set of discrete worlds \(\mathfrak{W}\) by
\[\mathcal{W}\equiv_{\mathrm{map}}\mathcal{W}^{\prime},\quad\mathcal{W}, \mathcal{W}^{\prime}\in\mathfrak{W}\]
when (1) the number of behaviors in \(\mathcal{B}_{\mathcal{W}}\) and \(\mathcal{B}_{\mathcal{W}^{\prime}}\) are equal, and (2) there is a continuous map \(h:I^{2}\times[0,1]\to I^{2}\), such that \(h_{t}:I^{2}\times\{t\}\to I^{2}\) is bijective, where \(h_{0}\) is the identity map, and where \(h_{1}\) satisfies \(h_{1}\circ g_{\mathcal{W}}=g_{\mathcal{W}^{\prime}}\).
Note that we can simply say that \(h\) is an _ambient isotopy_ between the decision boundaries in \(\mathcal{W}\) and \(\mathcal{W}^{\prime}\).
The idea behind Definition 5.1 can be made more intuitive. We consider each behavior map as a diagram in which (i) \(n_{i}\) number of vertices (each representing a switch between behaviors) is placed on the \(i\)-th edge, and where (ii) each pair of vertices is connected by a curve defined by a decision boundary that separates two user behaviors (see Fig. 1). We say that two maps are equivalent if they are labeled by the same number of distinct behaviors, and, as diagrams, they are topologically equivalent: the decision boundary in one behavior map can be continuously deformed, by using the map \(h\), to look like that in the other.
For the set of worlds studied in this work, we note that whether two worlds are equivalent boils down to counting the number of behavior switches along the edges of their behavior maps (counterclockwise, starting from the bottom edge). We can focus exclusively on the edges because our worlds do not induce behavior maps with decision boundaries that behave differently in the middle (e.g. Fig. 19). By counting the number of behavior switches along the edges, we can represent the set of worlds in the same equivalence class as a count vector (see Fig. 1).
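The following sketch computes this count vector for a discretized behavior map (a 2D array of behavior labels); the example array is an assumed toy map with two behaviors.

```python
# Count behavior switches along the four edges of a discretized behavior map,
# counterclockwise starting from the bottom edge (rows indexed by p, columns by gamma).
import numpy as np

def switch_count_vector(bmap):
    bottom = bmap[0, :]                 # traced left -> right
    right  = bmap[:, -1]                # bottom -> top
    top    = bmap[-1, ::-1]             # right -> left
    left   = bmap[::-1, 0]              # top -> bottom
    return [int(np.sum(edge[1:] != edge[:-1])) for edge in (bottom, right, top, left)]

bmap = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [0, 0, 1, 1]])
print(switch_count_vector(bmap))        # [1, 0, 1, 0] for this two-behavior map
```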
### Intervention Transfer Between Equivalent Worlds
Recall that our primary motivation for defining an equivalence relation on worlds is to develop intervention strategies in simple settings and transfer them to more complex analogous ones. This section provides the formalism for transferring interventions between equivalent worlds. In Section 6.1, we will introduce a set of simple worlds to which many commonly studied RL environments can be reduced through our equivalence.
Given a world \(\mathcal{W}\), we represent a single _intervention_ on
a user's myopia and confidence level as a real-valued pair \((\Delta_{\gamma},\Delta_{p})\in I^{2}\) that is added to the user's current parameters. Thus, a sequence of interventions defines a (piece-wise linear) path, which we call an _intervention strategy_ and denote by \(\tau_{\mathcal{W}}\), in the behavior map \(\mathcal{B}_{\mathcal{W}}\). Our goal is to map an intervention strategy \(\tau_{\mathcal{W}}\) in \(\mathcal{B}_{\mathcal{W}}\), that realizes a behavior change, to a strategy \(\tau_{\mathcal{W}^{\prime}}\) in an equivalent map \(\mathcal{B}_{\mathcal{W}^{\prime}}\) that realizes an analogous behavior change.
We first observe that the continuous map \(h\) in Definition 5.1 induces a mapping from the set of user parameters related to one world \(\mathcal{W}\) to the user parameters related to \(\mathcal{W}^{\prime}\), defined by \(h_{1}:I^{2}\to I^{2}\). Hence, every path \(\tau_{\mathcal{W}}\) defines a path \(\tau_{\mathcal{W}^{\prime}}=h_{1}\circ\tau_{\mathcal{W}}\). Since \(h\) continuously deforms the decision boundary of \(\mathcal{B}_{\mathcal{W}}\), it preserves the number of times \(\tau_{\mathcal{W}}\) intersects the decision boundary in \(\mathcal{B}_{\mathcal{W}}\). In particular, if \(\tau_{\mathcal{W}}\) represents an intervention strategy that achieves \(N\) number of behavior changes in \(\mathcal{B}_{\mathcal{W}}\), then \(\tau_{\mathcal{W}^{\prime}}\) is a strategy that achieves the same number of behavior changes in \(\mathcal{B}_{\mathcal{W}^{\prime}}\).
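As a sketch of this counting argument, the following code samples a piecewise-linear intervention strategy \(\tau\) and counts the behavior changes it realizes; the stand-in behavior map with a single vertical decision boundary is an assumption for illustration, and could be replaced by the MDP-based map sketched earlier.

```python
# Count how many behavior changes a piecewise-linear intervention strategy realizes.
import numpy as np

def behavior_at(gamma, p):
    return 0 if gamma < 0.6 else 1      # assumed toy map: behavior depends only on gamma

def behavior_changes(path, n_samples=200):
    # path is a list of (gamma, p) waypoints; sample densely along each segment
    labels = []
    for (g0, p0), (g1, p1) in zip(path[:-1], path[1:]):
        for t in np.linspace(0.0, 1.0, n_samples):
            labels.append(behavior_at(g0 + t * (g1 - g0), p0 + t * (p1 - p0)))
    labels = np.array(labels)
    return int(np.sum(labels[1:] != labels[:-1]))

# one intervention raising gamma, then one raising p
tau = [(0.3, 0.2), (0.7, 0.2), (0.7, 0.8)]
print(behavior_changes(tau))            # 1: only the gamma intervention crosses the boundary
```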
Note that, unlike knowledge generalization approaches in RL wherein one computes a mapping between all parameters of two MDPs, our approach to intervention transfer between two worlds by-passes explicit mappings between the state and action sets of \(\mathcal{W}\) and \(\mathcal{W}^{\prime}\). Instead, we rely on \(h\), the mapping between user parameter and policy spaces.
In practice, explicitly computing \(h\) can be difficult. In the next section, we show that we can derive a more general set of heuristics for intervention design in a complex world by reasoning about an equivalent simple world.
## 6 Atomic Worlds: Simple Representatives of Equivalence Classes
Under Definition 5.1, we seek the simplest representatives, called _atomic worlds_, of each equivalence class. User behaviors can be characterized in atomic worlds, and the insights then transferred to more complex equivalent worlds. We describe three atomic worlds and reduce commonly studied worlds in the RL literature to our atomic worlds.
### Atomic Worlds
We visualize an instance of each of the following worlds in Fig. 2 and their corresponding behavior maps in Fig. 3.
The _Big-Small world_ is an atomic world that captures a trade-off between choosing a smaller, more convenient reward and a bigger reward that is more difficult to reach. In mHealth, this world reflects scenarios in which smaller immediate rewards, such as the time saved by skipping PT for the day, preclude larger but delayed rewards, such as a fully rehabilitated ankle.
The _Cliff world_ captures settings in which a harmful absorbing state may be reached due to an action going awry. For example, deciding the intensity of the PT regimen can be modeled as a Cliff world. A high-intensity regimen could accelerate recovery but also risk re-injuring the patient.
The _Wall world_ captures the choice between a short, costly path to the goal and a longer, free path to the same goal. This can model the trade-off in choosing the type of physical therapy: virtual therapy may be more affordable, while in-person therapy is more costly and targeted.
In the above, we note that different aspects of user decision-making (e.g., choosing the intensity vs choosing the type
Figure 2: Each atomic world has two qualitatively distinct behaviors (shown with blue and orange arrows). Each diagram shows what the world looks like for one setting of the parameters, and other sizes are usually also valid.
of therapy) in the same mHealth application (PT), can map to different equivalence classes. We hypothesize that more complex worlds (e.g., larger portions of the user decision-making process in PT) can be captured by compositions of simpler atomic worlds. In future work, we are interested in characterizing the set of complex worlds that can be studied through decomposition into atomic worlds. Further discussion can be found in Section 7 and Appendix C.
### Atomic Worlds Capture Commonly Studied RL Environments
We compare the behavior maps corresponding to four types of RL environments commonly studied in the literature: Chain, RiverSwim, Gambler's Fallacy, and Cafe worlds (details on each world are in Appendix A), and illustrate that the set of worlds they define reduces to the three atomic worlds we identify in Section 6.1. We note that these RL environments are diverse in their state and action spaces; more interestingly, they are diverse in how they map to real-life tasks. Thus, we expect that many useful mHealth applications can be modeled by known atomic worlds or straightforward combinations of atomic worlds (see Section 7 and Appendix C for more details), allowing us to transfer intervention design from familiar, simpler settings onto unexplored and more complex ones.
Under our definition, Chain (Fig. 2(d)), RiverSwim (Fig. 2(e)), Gambler's Fallacy V1 (Fig. 2(f)), and the Cafe worlds (Fig. 2(h)) are equivalent to the Big-Small world (Fig. 2(c)); these are worlds in which the user chooses between a readily available but small reward (i.e., disengaging in Chain, swimming downstream in RiverSwim, and performing the _Finish_ action in the Gambler's Ruin world) and a greater but more time-consuming reward. Gambler's Fallacy V2 (Fig. 2(g)) is equivalent to Cliff World--both worlds have a "catastrophic absorbing state," i.e., a nonzero risk of ending up in a terminal state with a negative reward.
### The Equivalence Definition Is Robust to Parameter Perturbations in World Definitions
We want a world to remain within its equivalence class despite minor parameter adjustments (e.g., the world for a month-long PT program should be in the same class as that for a 2-month program). This is evidence that our equivalence definition captures essential rather than incidental qualities of applications.
In Fig. 4, we verify that the Big-Small world remains within its equivalence class despite parameter changes, such as the
Figure 3: Seemingly different worlds (bottom row) are equivalent to one of our atomic worlds (top row).
world's width or the ratio of the big to a small reward. In Appendix B, we provide additional evidence of how our equivalence classes withstand perturbations across more parameters for all \(8\) worlds investigated.
### Heuristics for Intervention Transfer
Many real-world applications may be roughly mapped to an atomic world through domain knowledge rather than computing an explicit map \(h\), as in Section 5.2. For example, behavior scientists can often describe the types of expected user behavior, e.g., "how many different behaviors are there for users with very low confidence?". Absent a map \(h\), we cannot transfer an intervention strategy in precise terms. However, the broader insights we obtain from studying the behavior maps of atomic worlds can be easily transferred. For example, conclusions we reach on the identifiability of user traits and the effectiveness of a particular warm-start intervention strategy (see Section 4) apply to all worlds within the same equivalence class.
## 7 Discussion & Future Work
Exhaustive World Search.We expect there to be many equivalence classes outside the three identified in this paper. The existence of such classes may be especially relevant when we try to capture multiple distinct aspects of an mHealth application in a single world. In future work, we intend to explore the space of possible equivalence classes more exhaustively.
World Compositions.Complex real-life scenarios are unlikely to neatly map to a singular atomic world; however, we conjecture that some worlds may fall into _compositions_ of atomic worlds. Some initial experiments with composite worlds indicate that the composition of the Big-Small and Cliff worlds leads to a behavior map that combines the atomic worlds' respective maps. See Appendix C for examples of these experiments. This finding further supports the generality of our equivalence classes as seemingly-complicated scenarios can be broken down into atomic worlds that each capture a unique aspect of the application.
Other User-Intrinsic Obstacles.While we focus on myopia (\(\gamma^{\text{user}}\)) and confidence (\(T_{p}^{\text{user}}\)) in this paper, we are interested in modeling a wider range of user-intrinsic obstacles, as differences between the real and user-perceived MDP. For motivation, works like Evans et al. (2016), under a different model of the user's decision-making process, capture behaviors that cannot be parameterized as combinations of \(\gamma^{\text{user}}\) and \(T_{p}^{\text{user}}\) in the Cafe world. This observation raises the question of whether our formal framework can capture behaviors observed under other paradigms of sequential decision-making (e.g. hyperbolic discounting, replanning, etc).
Real World Dynamics vs. User Perceived Dynamics.We note that the definition of behavior maps does not rely on the environment's true dynamics \(T\) since the user's policy is computed based on their perceived dynamics \(T_{p}^{\text{user}}\). In reality, if \(T\) and \(T_{p}^{\text{user}}\) are significantly different, it would be reasonable to assume that the user iteratively updates \(T_{p}^{\text{user}}\) as they interact with the real world.
The Topology of Behavior Maps.For the set of worlds in this work, verifying that any two are equivalent reduces to matching the number of behavior changes along the edges of their behavior maps. That is, the decision boundaries of their behavior maps have no interesting topology. See Appendix D for a discussion on intervention transfers between worlds whose behavior maps are topologically distinct in more nuanced ways. Future research could characterize the set of worlds for which the decision boundaries of the behavior maps are not as "well-behaved".
## 8 Conclusion
In this work, we propose a novel tool, the behavior map, to study the relationship between user traits and user behaviors
Figure 4: A _Big-Small world_ stays within its equivalence class for many different parameter combinations. The example behavior maps have different values for the world width and the ratio of the small reward to the big reward, while the rest of the parameters are fixed as height = 7 and Big far R = 300.
for worlds in which the user acts as an RL agent. We define an equivalence relation between worlds based on the shapes of their corresponding behavior maps. We show that intervention strategies can be transferred between equivalent worlds. In particular, we demonstrate that many seemingly different RL environments map to one of a few equivalence classes, each represented by a simple atomic world. We further argue that many real-world applications can be mapped to atomic worlds by leveraging domain knowledge in behavioral science and psychology. Finally, we show how broad insight into intervention design for simple worlds can be lifted to complex ones in the same equivalence class.
## Acknowledgements
This material is based upon work supported by the National Science Foundation under Grant No. IIS-2107391. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. |
2304.11168 | Learning Self-Supervised Representations for Label Efficient
Cross-Domain Knowledge Transfer on Diabetic Retinopathy Fundus Images | This work presents a novel label-efficient selfsupervised representation
learning-based approach for classifying diabetic retinopathy (DR) images in
cross-domain settings. Most of the existing DR image classification methods are
based on supervised learning which requires a lot of time-consuming and
expensive medical domain experts-annotated data for training. The proposed
approach uses the prior learning from the source DR image dataset to classify
images drawn from the target datasets. The image representations learned from
the unlabeled source domain dataset through contrastive learning are used to
classify DR images from the target domain dataset. Moreover, the proposed
approach requires a few labeled images to perform successfully on DR image
classification tasks in cross-domain settings. The proposed work experiments
with four publicly available datasets: EyePACS, APTOS 2019, MESSIDOR-I, and
Fundus Images for self-supervised representation learning-based DR image
classification in cross-domain settings. The proposed method achieves
state-of-the-art results on binary and multiclassification of DR images, even
in cross-domain settings. The proposed method outperforms the existing DR image
binary and multi-class classification methods proposed in the literature. The
proposed method is also validated qualitatively using class activation maps,
revealing that the method can learn explainable image representations. The
source code and trained models are published on GitHub. | Ekta Gupta, Varun Gupta, Muskaan Chopra, Prakash Chandra Chhipa, Marcus Liwicki | 2023-04-20T12:46:34Z | http://arxiv.org/abs/2304.11168v1 | Learning Self-Supervised Representations for Label Efficient Cross-Domain Knowledge Transfer on Diabetic Retinopathy Fundus Images
###### Abstract
This work presents a novel label-efficient self-supervised representation learning-based approach for classifying diabetic retinopathy (DR) images in cross-domain settings. Most of the existing DR image classification methods are based on supervised learning, which requires a lot of time-consuming and expensive medical domain expert-annotated data for training. The proposed approach uses the prior learning from the source DR image dataset to classify images drawn from the target datasets. The image representations learned from the unlabeled source domain dataset through contrastive learning are used to classify DR images from the target domain dataset. Moreover, the proposed approach requires only a few labeled images to perform successfully on DR image classification tasks in cross-domain settings. The proposed work experiments with four publicly available datasets: EyePACS, APTOS 2019, MESSIDOR-I, and Fundus Images for self-supervised representation learning-based DR image classification in cross-domain settings. The proposed method achieves state-of-the-art results on binary and multi-class classification of DR images, even in cross-domain settings. The proposed method outperforms the existing DR image binary and multi-class classification methods proposed in the literature. The proposed method is also validated qualitatively using class activation maps, revealing that the method can learn explainable image representations. The source code and trained models are published on GitHub1.
Footnote 1: [https://github.com/prakashchhipa/Learning-Self-Supervised-Representations-for-Label-Efficient-Cross-Domain-Knowledge-Transfer-on-DRF](https://github.com/prakashchhipa/Learning-Self-Supervised-Representations-for-Label-Efficient-Cross-Domain-Knowledge-Transfer-on-DRF)
Self-supervised representation learning, domain adaptation.
## I Introduction
In the medical imaging area, artificial intelligence (AI), a topic characterized broadly by the building of computerized systems capable of doing tasks [1] & [2] that ordinarily require human intelligence, has significant potential. Automated radiology workflows have significantly benefited from machine learning and deep learning techniques. Although AI models have the potential to revolutionize clinical practice, they have been hampered by significant implementation and regulatory obstacles [3]. Almost all constraints may be traced back to a major issue: a dearth of medical image data to train and test AI algorithms [4]. The generalizability and accuracy of developed solutions are hampered because most research institutions and enterprises only have limited access to annotated medical images. Large datasets, including high-quality images and annotations, are still necessary to train, validate, and test the AI systems [5]. In the absence of data that has been appropriately labeled, this procedure becomes prohibitively expensive, time-demanding, and inherently unstable.
Labeled biomedical images are incredibly scarce, and multiple experts are required to annotate each image [6] manually. Massive amounts of health data are being generated and collected. These data range from in-hospital monitoring to wearable devices. Coding and annotating this data is impractical [6][7]. In addition, the pretrained models obtained from natural images do not apply directly to medical images since their intensity distribution is different. Besides that, annotating natural images is simple; all that is required is basic human knowledge [8]. Nevertheless, in-depth knowledge is necessary for the annotation of medical images. The average medical image has over a billion pixels, significantly larger than typical natural images. The annotation process is highly error-prone and expensive, and experts cannot always identify a particular feature. A potential solution is to train models on unlabeled images using self-supervised learning [9][10].
Most supervised learning methods require labeled data to train a machine. Unfortunately, obtaining good-quality labeled
Fig. 1: Schematic presents the contrastive learning-based self-supervised cross-domain knowledge transfer. Pretraining is performed on the source dataset (EyePACS), and downstream tasks are performed on cross-domain targets (APTOS 2019, MESSIDOR, and Fundus Images).
data can be costly and time-consuming. Additionally, the data preparation lifecycle can be extremely long and complicated, including cleaning, filtering, annotating, reviewing, and restructuring according to a training framework [11]. Another approach has been used to deal with scarce biological data: domain adaptation-based self-supervised learning. Self-supervised learning (SSL), an alternative to supervised learning and transfer learning, has emerged as a viable possibility [12]. While self-supervised learning is distinct from transfer learning, both rely on acquiring representations from a secondary pretext task and the subsequent transfer of those representations to the main focus task [13][14]. The data utilized for the pretraining phase and the downstream task might be taken from one or more separate data sources in domain adaptation self-supervised learning, unlike transfer learning [15].
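As a concrete illustration of the contrastive pretext objective underlying such pretraining, the following is a minimal NumPy sketch of a SimCLR-style NT-Xent loss, not the exact training pipeline used in this work; the encoder producing the embeddings and the batch below are stand-in assumptions.

```python
# Minimal NumPy sketch of the NT-Xent contrastive loss used in SimCLR-style pretext
# training: two augmented views of each (unlabeled) fundus image are embedded, and
# each view is trained to identify its partner among the batch.
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    # z: (2N, d) array; rows 2k and 2k+1 are embeddings of two views of image k
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature                     # cosine similarities
    np.fill_diagonal(sim, -np.inf)                  # a view is not its own positive
    n = z.shape[0]
    positives = np.arange(n) ^ 1                    # partner index: 0<->1, 2<->3, ...
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(n), positives].mean())

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 128))                       # 4 images x 2 views, 128-dim embeddings
print(nt_xent_loss(z))
```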
This work aims to demonstrate that self-supervised learning can be used as a preliminary step in medical image classification. The contributions of this work are:
* This work proposes a domain adaptation-based self-supervised learning approach to learn image representations from diabetic retinopathy fundus images.
* Self-supervised learning of diabetic retinopathy image representations from unlabelled datasets has been validated in cross-domain settings, as shown in Figure 1.
* Results indicate that the proposed work outperforms the existing methods of DR classification.
## II Related Work
In recent years, self-supervised learning (SSL) has progressed significantly. Since it is useful for learning feature representations from image datasets without image labels, it has become a primary focus for academic investigation. Medical image classification tasks like detecting diabetic retinopathy, classifying brain age [35], recognizing cancer in histopathology [34], identifying pneumonia in X-rays [37], and many others have shown progress using self-supervised learning methods, demonstrating state-of-the-art performance. This article focuses on self-supervised learning techniques pertaining to images of diabetic retinopathy.
In an approach, Truong et al. proposed the fusion of embeddings from multiple SSL models. Then the fused embeddings are combined with self-attention for feature learning. However, it did not use domain adaptation, as the datasets used in the pretext and downstream tasks are the same.
Taleb et al. [20] developed a series of five proxy tasks: 3D contrastive predictive coding, 3D rotation prediction, 3D jigsaw puzzles, 3D patch location, and 3D exemplar networks, to learn feature representations in the pretext task. Lin et al. [21] proposed a multilabel classification method using rotation SSL with a graph CNN to learn fundus image representations. Another work by Srinivasan et al. [22] trained a ResNet50 model using the MoCo-V2 approach in the pretext task to classify diabetic retinopathy images in the downstream task. The authors used a similar dataset for both the pretext and downstream tasks. Yap and Ng [23] proposed a contrastive learning framework that creates a patch-level dataset for pretext tasks by extracting class activation maps from the labeled and unlabelled datasets.
### _SSL methods for DR segmentation_
Segmentation of diabetic retinopathy using self-supervised learning has been explored only partially. Tian et al. [18] proposed a multi-class Strong Augmentation via Contrastive Learning (MSACL) approach for detecting unsupervised anomalies. The authors also propose a contrastive loss that combines contrastive learning with a multi-centered loss to cluster samples of the same class. These unsupervised models need to be well trained; otherwise, they can learn ineffective image representations. Kukacka et al. [19] also proposed an approach for lesion segmentation by pretraining a U-net encoder in the pretext task.
### _Reconstruction-based SSL methods for DR classification_
Many efforts have been made for the diabetic retinopathy classification task using reconstruction-based self-supervised learning methods. Holmberg et al. [16] proposed a cross-domain U-net-based system to generate the retinal thickness used for classification during the downstream task. Other authors, Nguyen et al., learned the features of the target dataset by using a self-supervised contrastive learning method on reconstructed retinal images; that work reconstructs images and learns features on the target dataset. In the proposed work, representation learning is performed on a source dataset of diabetic retinopathy, and the learned representations are applied to a different dataset of diabetic retinopathy. In addition, a few authors also proposed multi-modal reconstruction of images. Hervella et al. [3] performed multimodal reconstruction using U-Net for the segmentation task of the optic disk and cup in retinography and angiography images, and Li et al. [17] trained a CycleGAN model on the source dataset to learn the mapping function between the images, also learning both modality-invariant and patient-similarity features in the pretext task. One more work, by Cai et al., proposed a transformer-based framework in combination with a multitask decoder to learn the representations of the reconstructed images. Most works discussed above used adversarial learning methods to reconstruct the images. However, these methods provide inferior performance or suffer from unstable training. Representation learning in the existing reconstruction-based methods is pixel-based, whereas the proposed work focuses on learning representations at the visual concept level.
As seen in the preceding review of the relevant literature, most existing SSL approaches employ the same dataset for both the pretext task in the source domain and the downstream task in the target domain. Progress has been made in the knowledge transfer field, but domain adaptation has not been extensively explored. The proposed work concretely focuses on cross-domain contrastive learning. To identify DR images from a different domain, this study presents an SSL strategy that reuses the representations learned during the pretext task on an unlabeled dataset from the source domain.
## III Diabetic Retinopathy Dataset Description
Diabetic retinopathy is a major cause of blindness among people of working age in developed countries. It is a prevalent eye disease that affects more than 93 million people globally. Diabetic retinopathy detection is currently a time-consuming and laborious technique that requires a skilled person to analyze and interpret digital color fundus images of the retina. The public datasets for diabetic retinopathy are:
### _Subset of EyePACS_
Diabetic Retinopathy (DR)2 is an eye disease linked to long-term diabetes. If DR is caught early enough, vision loss can be halted. A comprehensive collection of high-resolution retina images captured under various imaging conditions is accessible. Both a left and a right eye field are available for every subject. Images are labeled not only with a subject id but also as left or right. A medical professional has graded diabetic retinopathy severity on a scale of 0 to 4.
Footnote 2: [https://www.kaggle.com/c/diabetic-retinopathy-detection/data](https://www.kaggle.com/c/diabetic-retinopathy-detection/data)
### _Aptos 2019_
Diabetic retinopathy, the most common cause of vision loss among adults in their 40s and 50s, affects numerous people. Aravind Eye Hospital aims to detect and prevent this condition among people in rural areas of India who lack easy access to medical screening. The solutions will be made available to other ophthalmologists through the 4th Asia Pacific Tele-Ophthalmology Society (APTOS) Symposium3. A vast collection of retina images, collected using fundus photography under a variety of conditions, has been made available. Each image has been graded for severity by a clinical expert on a scale of 0 to 4.
Footnote 3: [https://www.kaggle.com/competitions/aptos2019-blindness-detection/data](https://www.kaggle.com/competitions/aptos2019-blindness-detection/data)
### _Messidor-I_
Diabetic retinopathy detection is currently a labor-intensive and time-consuming procedure that requires a qualified doctor to examine digital color fundus images of the retina. The dataset is known as MESSIDOR (Methods to Evaluate Segmentation and Indexing Techniques in the Field of Retinal Ophthalmology, in French)4 [24]. The retinopathy grades are determined on a scale of 0 to 3.
Footnote 4: [https://www.adcis.net/en/third-party/messidor](https://www.adcis.net/en/third-party/messidor)
### _Fundus Images_
The Department of Ophthalmology of the Hospital de Clinicas, Facultad de Ciencias Medicas, Universidad Nacional de Asuncion, Paraguay, provided the 757 color fundus images [37] included in this collection. A Zeiss Visucam 500 camera was used to acquire the retinographies. The fundus images have been classified into 7 distinct groups on a scale of 1 to 7.
Table I shows the dataset description of diabetic retinopathy.
## IV Self-supervised Cross Domain Knowledge Transfer Framework
The proposed framework consists of two main tasks: (i) the pretext task, i.e., representation learning of images from the source domain DR dataset (a subset of EyePACS), and (ii) the downstream task, i.e., classification of DR (diabetic retinopathy) images from the target domain datasets (APTOS 2019, Messidor-I, and Fundus Images). In the pretext task, the proposed approach applies various augmentations such as flipping, affine transformations, jitter, grayscale, etc., to create different views of the images. Views created from the same image act as positive pairs, and views from different images act as negative pairs. Image representations are then learned through contrastive learning from these positive and negative pairs, as in the sketch below. These learned representations act as input to the downstream task. This task does not require labeled images for representation learning, as shown in Figure 1.
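As an illustration of the two-view generation, a minimal torchvision sketch could look as follows; the flip, grayscale, and blur settings mirror Table II, while the function name `make_views` and the remaining choices are assumptions for illustration rather than the authors' exact pipeline.

```python
from torchvision import transforms

# Hypothetical pretext-task augmentation pipeline; probabilities and the
# 21x21 Gaussian kernel follow Table II, other details are assumptions.
pretext_augment = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomGrayscale(p=0.2),
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=21)], p=0.5),
    transforms.ToTensor(),
])

def make_views(pil_image):
    """Return two independently augmented views of one fundus image.

    Views of the same image form a positive pair; views of different
    images in the batch act as negative pairs."""
    return pretext_augment(pil_image), pretext_augment(pil_image)
```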
The downstream task involves binary as well as multi-class classification of DR images. The model pretrained for learning image representations during the pretext task acts as the initialization for the downstream task, i.e., classification of DR images. The downstream task therefore requires fewer labeled images for DR classification. Figure 1 provides the detailed architecture of the proposed approach. The objective of the proposed approach is to obtain representations that are robust to domain shift and generalizable to the downstream task. The proposed approach uses an unlabeled source dataset
\begin{table}
\begin{tabular}{|c|c|c|} \hline _Dataset_ & _Total Images_ & _No. of Classes_ \\ \hline _Subset of EyePACS_ & _31615_ & \(5\) \\ \hline _APTOS 2019_ & _3660_ & \(5\) \\ \hline _MESSIDOR-I_ & _1200_ & \(4\) \\ \hline _Fundus Images_ & _747_ & \(7\) \\ \hline \end{tabular}
\end{table} TABLE I: Diabetic retinopathy dataset description
Fig. 2: Sample images- (a)(b) subset of EyePACS Dataset, (c)(d) Messidor-I dataset, (e)(f) APTOS 2019 dataset, (g)(h) Fundus images.
to learn the representations and a labeled target dataset to solve the classification task by reusing these features learned from the source dataset. The representations have been learned using the SimCLR (simple framework for contrastive learning) method [25]. As discussed, positive and negative pairs of DR images are created from unlabelled DR images using different augmentations like a Gaussian blur, flipping, translation, rotation, jitter, etc. These positive and negative pairs of images are fed to the encoder network. The encoder network consists of a ResNet-50 backbone and a projection head containing two fully connected layers of 2048 and 1024 neurons, respectively. This network is trained on positive and negative views of images using Normalized Temperature-scaled Cross-Entropy (NT-Xent) as the loss function, which tries to pull positive pairs close and push away the negative pairs. This loss function is defined as:
\[\ell\left(z_{i},z_{j}\right)=-\log\frac{\exp\left(\mathrm{sim}\left(z_{i},z_{j}\right)/T\right)}{\sum_{k=1}^{2n}\mathbb{1}_{k\neq i}\exp\left(\mathrm{sim}\left(z_{i},z_{k}\right)/T\right)} \tag{1}\]
where \(z_{i}\) and \(z_{j}\) are the representations of a positive pair, T is the temperature parameter, n is the number of images in the batch, and sim() denotes the similarity function. The loss is the negative logarithm of the ratio of the exponentiated, temperature-scaled similarity of a positive pair to the sum of such similarities over all other pairs involving \(z_{i}\); in other words, it is a softmax normalized by a temperature parameter. In the downstream task, the proposed approach performs binary and multi-class classification of DR images separately on the target domain datasets (Messidor-I and APTOS 2019). During this phase, primary augmentations such as resizing, flipping, and cropping are applied to the target DR datasets to create different views. The weights of the ResNet-50 encoder backbone trained in the pretext task are used as the initialization for the network trained on the downstream task. The projection head of the network used in the pretext task is replaced with two fully connected layers, i.e., (2048, 512) and (512, number of classes). The proposed approach performs binary and multi-class classification on the APTOS 2019, Messidor-I, and Fundus Images datasets during this phase.
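A compact PyTorch sketch of the NT-Xent loss of Equation (1) is given below; it assumes the two views of each image are stacked consecutively in the batch and is an illustrative implementation, not the authors' code.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent loss for a batch of 2n projections, where z[2k] and z[2k+1]
    are the two augmented views of the same image (a positive pair).

    Cosine similarity is used for sim(); each anchor is contrasted against
    all other 2n - 1 samples in the batch, as in Equation (1)."""
    z = F.normalize(z, dim=1)                      # cosine similarity via dot product
    sim = z @ z.t() / temperature                  # (2n, 2n) similarity matrix
    n2 = z.size(0)
    mask = torch.eye(n2, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))     # exclude the k == i terms
    # index of the positive partner for each sample: 0<->1, 2<->3, ...
    pos_index = torch.arange(n2, device=z.device) ^ 1
    return F.cross_entropy(sim, pos_index)         # -log softmax at the positive pair
```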
## V Experiments and Results
To investigate the proposed domain adaptation framework, three datasets are explored - APTOS 2019, Messidor-I, and the Fundus Images dataset. The investigation is performed by self-supervised pretraining of the model on one source dataset, a subset of EyePACS, followed by finetuning on the three target datasets mentioned above. The classification task is performed on all the target datasets; binary and multi-class classification on all three target datasets are performed in the downstream task. This work also explores label efficiency by performing the experiments on 10, 30, 50, and 100 percent of the data from the datasets. Data augmentations are applied to generate two views of a single image, which can then be compared for similarity. During the pretext task, flipping, cropping, translation, scaling, grayscale, rotation, blurring, and resizing augmentation techniques are applied to the input medical images to obtain better and more generalizable results. Due to the small dataset sizes, it is necessary to investigate various finetuning scenarios. Only a few primary augmentations, like resizing and cropping, are used during the downstream task to make this approach more compelling. After performing numerous experiments with varied parameter settings, the hyperparameter values that gave promising results during the pretext task are given in Tables II & III. Table II shows the hyperparameters used for binary classification of diabetic retinopathy images for the datasets APTOS 2019 and Messidor-I. It shows the probability values used for the different augmentation techniques during the pretext task. The batch size used is 128, and the optimizer used for training is LARS (Layer-wise Adaptive Rate Scaling). The initial learning rate is 0.79, and the weight decay is \(10^{-6}\). The performance metrics are defined below.
Table III shows the hyperparameters used for multi-class classification of diabetic retinopathy images for the dataset APTOS 2019. The augmentations in the multi-class classification of DR images are jitter, affine, and normalization, in addition to the augmentations used in the binary classification, for better performance. The batch size used for multi-class classification is 256, and the optimizer used for training is LARS.
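To make the downstream setup concrete, the following sketch (an illustrative assumption, not the released implementation) rebuilds the classifier from a pretext-trained ResNet-50 backbone with the (2048, 512) and (512, number of classes) head described above; the ReLU between the two layers and the helper name `build_downstream_model` are assumptions.

```python
import torch.nn as nn
from torchvision.models import resnet50

def build_downstream_model(pretrained_backbone_state, num_classes):
    """Reuse the ResNet-50 backbone trained in the pretext task and replace
    the projection head with a classification head of (2048, 512) and
    (512, num_classes), as described in the text."""
    backbone = resnet50()
    backbone.fc = nn.Identity()                      # drop the ImageNet classifier
    backbone.load_state_dict(pretrained_backbone_state, strict=False)
    head = nn.Sequential(
        nn.Linear(2048, 512),
        nn.ReLU(inplace=True),                       # activation is an assumption
        nn.Linear(512, num_classes),
    )
    return nn.Sequential(backbone, head)
```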
The classification task is performed on three datasets: APTOS 2019, Messidor-I, and Fundus Images. Table IV shows the results obtained for binary classification on the APTOS 2019 dataset. For APTOS 2019, the proposed method obtains an accuracy of 99.59%, a precision of 100%, a recall of 99.54%, and an F1-score of 99.26% using only 10% of the images.
\begin{table}
\begin{tabular}{|c|c|} \hline _Augmentations_ & _Parameters_ \\ \hline _Resize_ & _224 X 224_ \\ \hline _Horizontal Flip_ & _P=0.5_ \\ \hline _Vertical Flip_ & _P=0.5_ \\ \hline _Grayscale_ & _P=0.2_ \\ \hline _Gaussian Blur_ & _P = 0.5,Kernel size = [21, 21]_ \\ \hline _Batch size_ & _128_ \\ \hline _Optimizer_ & _LARS_ \\ \hline _Learning Rate_ & _0.79_ \\ \hline _Weight-decay_ & \(10^{-6}\) \\ \hline \end{tabular}
\end{table} TABLE II: Hyperparameters for binary classification
\begin{table}
\begin{tabular}{|l|c|} \hline _Augmentations_ & _Parameters_ \\ \hline _Resize_ & _224 X 224_ \\ \hline _Horizontal Flip_ & _P= 0.5_ \\ \hline _Vertical Flip_ & _P=0.5_ \\ \hline _Grayscale_ & _P=0.2_ \\ \hline _Gaussian Blur_ & _P = 0.5,Kernel size = [21, 21]_ \\ \hline _Batch size_ & _128_ \\ \hline _Optimizer_ & _LARS_ \\ \hline _Learning Rate_ & _0.79_ \\ \hline _Weight-decay_ & \(10^{-6}\) \\ \hline \end{tabular}
\end{table} TABLE III: Hyperparameters for multi-classification
For 100% images, the accuracy improved by 2.71%, recall by 5%, and F1-Score by 3%.
For another dataset, Messidor-I, the highest accuracy obtained is 98.49%, precision is 98.65%, recall is 100%, and the F1 score is 99.99%, as shown in Table V.
For the third dataset, Fundus Images, the accuracy obtained on 100% of the images is 98.96%, precision is 96%, recall is 99.43%, and the F1 score is 99.67%, as shown in Table VI.
Table VII shows the results of the multi-class classification of DR images when the same dataset, the subset of EyePACS, is used for both the pretext and downstream tasks. Table VIII shows the outcomes of the multi-class classification of DR images on the three target datasets. The downstream datasets used for the multi-class classification of diabetic retinopathy images are APTOS 2019, Messidor-I, and Fundus Images, on which the proposed method obtained accuracies of 83.43%, 66.39%, and 91.67%, respectively. The proposed self-supervised learning method outperforms prior state-of-the-art techniques on two datasets, APTOS 2019 and Fundus Images.
Table X displays the results of a comparison between the proposed study and previous work on multi-class classification of DR images. Kassani et al. [32] reported the highest accuracy for multi-class classification, 83.09%. The remaining works reported accuracies below 75% for classifying diabetic retinopathy images.
The comparisons in Tables IX and X suggest that the performance achieved with the proposed work is improved over previous works for binary and multi-classification of diabetic retinopathy images.
### _Label efficiency in cross-domain knowledge transfer_
The proposed self-supervised cross-domain knowledge transfer method provides concrete evidence of label efficiency. Result comparisons on both downstream tasks show that the model achieves comparable performance when only \(50\%\) of the labels (half supervised) are used, compared with fully supervised models using \(100\%\) of the labels. The differences in downstream task performance between partially supervised and fully supervised training are small on at least two of the three target datasets for both classification tasks. Further, it is noticeable that the training portion of each target dataset consists of only a few labeled examples, in the range of \(500\) to \(2500\). Label efficiency is illustrated in Figures 4 and 5.
## VI Conclusion
This work proposes a label-efficient self-supervised representation learning-based method for diabetic retinopathy image classification in cross-domain settings. The proposed work has been evaluated qualitatively and quantitatively on the publicly available EyePACS, APTOS 2019, MESSIDOR-I, and Fundus Images datasets for binary and multi-classification of DR images. The qualitative evaluation shows that the proposed approach learns explainable image representations. Moreover, the proposed approach uses only a few training samples for training and outperforms the existing DR image classification methods, even in cross-domain settings. In future work, the proposed approach can be used to investigate other downstream tasks, such as segmentation and localization. Further, non-contrastive methods for representation learning can be examined to perform downstream tasks on DR images in cross-domain settings.
|
2310.13838 | CNN-based Prediction of Partition Path for VVC Fast Inter Partitioning Using Motion Fields | Yiqun Liu, Marc Riviere, Thomas Guionnet, Aline Roumy, Christine Guillemot | 2023-10-20T22:26:49Z | http://arxiv.org/abs/2310.13838v1 | # CNN-based Prediction of Partition Path for VVC Fast Inter Partitioning Using Motion Fields
###### Abstract
The Versatile Video Coding (VVC) standard has been recently finalized by the Joint Video Exploration Team (JVET). Compared to the High Efficiency Video Coding (HEVC) standard, VVC offers about 50% compression efficiency gain, in terms of Bjontegaard Delta-Rate (BD-rate), at the cost of a 10-fold increase in encoding complexity. In this paper, we propose a method based on Convolutional Neural Network (CNN) to speed up the inter partitioning process in VVC. Firstly, a novel representation for the quadtree with nested multi-type tree (QTMT) partition is introduced, derived from the partition path. Secondly, we develop a U-Net-based CNN taking a multi-scale motion vector field as input at the Coding Tree Unit (CTU) level. The purpose of CNN inference is to predict the optimal partition path during the Rate-Distortion Optimization (RDO) process. To achieve this, we divide CTU into grids and predict the Quaternary Tree (QT) depth and Multi-type Tree (MT) split decisions for each cell of the grid. Thirdly, an efficient partition pruning algorithm is introduced to employ the CNN predictions at each partitioning level to skip RDO evaluations of unnecessary partition paths. Finally, an adaptive threshold selection scheme is designed, making the trade-off between complexity and efficiency scalable. Experiments show that the proposed method can achieve acceleration ranging from 16.5% to 60.2% under the RandomAccess Group Of Picture 32 (RAGOP32) configuration with a reasonable efficiency drop ranging from 0.44% to 4.59% in terms of BD-rate, which surpasses other state-of-the-art solutions. Additionally, our method stands out as one of the lightest approaches in the field, which ensures its applicability to other encoders.
VVC, multi-scale motion vector field, VTM, QTMT, inter partitioning acceleration, U-Net, multi-branch CNN, multi-class classification.
## I Introduction
According to [1], global internet traffic has increased substantially, primarily due to the growing video usage, which now accounts for 65% of internet traffic. In addition, the rapid development of Ultra-High Definition (UHD) and Virtual Reality (VR) makes it critical to design more efficient video compression codecs. For this purpose, the latest video coding standard VVC has been finalized in 2020. In comparison to its predecessor, HEVC, its efficiency of inter coding is boosted by about 50% in terms of BD-rate at the cost of 10 times higher complexity [2]. The substantial complexity of VVC impedes its direct implementation in real-time applications such as TV broadcasting. Apart from multiple newly added inter coding tools [3, 4, 5], a novel partition structure introduced in VVC, called QTMT [6], is the main contributor to this complexity surge. In particular, it has been observed in [7] that the VVC Test Model (VTM) encoder, which is an implementation of the VVC codec, dedicates 97% of its encoding time to searching for the optimal partition. Consequently, fast partitioning methods emerge as the most promising approaches to speed up the whole VVC encoding process.
### _Partitioning Acceleration for VVC_
#### I-A1 Fast Intra Partitioning Methods
Numerous works achieve an important acceleration of intra-frame partitioning in the All-Intra (AI) encoding configuration. These approaches fall primarily into two categories: heuristic-based methods and machine learning-based methods.
Some heuristic-based methods are built upon pixel-wise statistics, such as gradients [8, 9, 10] and variances [8, 10]. To simplify the partitioning process, other heuristic methods reuse some data generated during the encoding process, such as Rate-Distortion cost (RD-cost) of Coding Unit (CU) encoding [11], coding tool decisions [12], best split type, and the intra mode of sub-CUs [13].
Machine learning-based methods utilize CNN or Decision Tree (DT) models to expedite intra partitioning. In [14, 15, 16], a CNN model is trained to predict the split boundaries inside CTU partitions. In [17], Feng _et al._ propose a fast partitioning method by predicting a QT depth map, multiple MT depth maps, and multiple MT direction maps with CNN. Regarding the DT-based approach, various Light Gradient Boosting Machine (LGBM) classifiers are separately trained for different CU sizes to predict the possible splits, as demonstrated in [18].
#### I-A2 Fast Inter Partitioning Methods
Fewer contributions on fast inter partitioning methods have been proposed for VVC. Since inter coding consists of predicting pixels of the current frame from previously encoded reference frames, encoding errors resulting from the use of fast coding methods are propagated and accumulated between frames. Therefore, the acceleration of inter partitioning is a more challenging task compared to that of intra partitioning. Nevertheless, the acceleration of inter-frame coding is key to speeding up the overall encoding process, especially in RandomAccess (RA) and Low-Delay
(LD) configurations. These configurations are employed more widely than the AI configuration in scenarios such as broadcasting and streaming.
Generally, fast partitioning approaches aim to reduce the search space of potential partitions. Therefore, accurately predicting the subset of partitions is of crucial importance. Heuristic methods proposed for fast intra partitioning of VVC [8, 13] heavily depend on handcrafted features to determine whether to check a partition. These methods are fast and simple to implement but lack accuracy for two reasons. Firstly, the features are computed locally on the CU and/or sub-CUs, which fails to provide a synthesized view of the entire CTU. Secondly, these features, including variances, gradients, and coding information, are low dimensional and do not adequately capture the complexity of CTU.
One approach to improve the accuracy of partition prediction involves increasing the dimension of the extracted features. This is the case with the methods based on Random Forest (RF) [19] or DT [20], which use over 20 features from a given CU and its sub-CUs. As a result, decisions made by these methods remain confined to the local context of CU, without considering the entirety of CTU. Rather than relying on local information, a more effective selection of subsets of partitions should be based on global features computed on the entire CTU. This can be accomplished through the utilization of CNN-based methods.
Several approaches [21, 22, 23] use CNN to partially accelerate the partition search process. For instance, in [21], Pan _et al._ propose a multi-branch CNN to perform a binary classification of "Partition" or "Non-partition" at the CU level. In [22], the split type at the CTU level is predicted, whereas the partitions of its sub-CUs are not determined. Liu _et al._ in [23] employ a CNN to estimate an 8x8 grid map of QT depth, which is used to discard a portion of the MT splits. These methods cover only a part of the partition search space, while the partition search is conducted exhaustively on the remainder. They could be referred to as partial partitioning acceleration methods based on CNN.
A complete partitioning acceleration of inter coding by CNN is proposed in [15]. A vector containing probabilities of the existence of split boundaries in the partition is predicted similarly to [24]. This method is fast in the sense that a single vector is computed for each CTU. Nevertheless, it is observed in [24], that the predictions are more accurate at higher levels of the partitioning tree. Hence, they propose improving the decisions by adding 16 trained DTs to process the CNN output, introducing additional complexity to the method.
### _Proposed Method_
In the MT partitioning, both binary and ternary (with sub-CUs of two different sizes) splits are available. Consequently, CUs at a specific depth in the tree do not correspond to the same size and shape, introducing dependence between the MT splits along the partition path. This dependence partly explains the decrease in partition prediction accuracy as the depth of the partitioning tree increases, as observed in recent studies [15, 24] presented in the previous section.
More precisely, since the size and shape of a CU depend not only on its depth in the tree but also on consecutive MT splits, depth alone is insufficient for defining a partition. Therefore, we propose making decisions on MT partitioning in a hierarchical manner, considering their dependence on the partition path. We also introduce a one-shot approach for QT partitioning which precedes MT partitioning, since there is a one-to-one correspondence between the QT depth and the CU size at that particular depth.
Hence, our overall proposition involves predicting the partition path, which includes a one-shot prediction for the QT partitioning, followed by a hierarchical prediction for the MT partitioning. Additionally, to further improve the accuracy of partition prediction, we suggest basing the partition decision not only on pixel values and residual values but also on motion vector fields, as these fields exhibit a strong correlation with partitioning [19].
Our two main contributions are as follows:
* We propose a novel partition-path-based representation of the QTMT partition at the CTU level as a map of QT depth plus three maps of MT split well adapted to the sophisticated partitioning scheme in VVC.
* We design a U-Net-based CNN model taking multi-scale fields of motion vectors as input to effectively predict QT depth map as well as split decisions at different MT levels.
We also have other contributions such as:
* We build MVF-Inter1, a large scale dataset for inter QTMT partition of VVC, which could facilitate the research in this field. Footnote 1: Our dataset MVF-Inter is available at [https://ldr.msf/fs1](https://ldr.msf/fs1) Aoi4abnnFw4f1fkg93fPjhdskXfgIvo7e=fxso
* We propose a fine tuned loss function for this complex multi-branch multi-class classification problem.
* We develop a fast partition scheme effectively exploiting the prediction of a CNN model in a way that the most possible splits are determined at each partition level.
* We design a specific threshold-based selection approach to match with the partition scheme, which allows us to realize a large range of trade-offs between complexity and compression efficiency.
The remainder of this paper is organized as follows. In Section II, we provide an overview of QTMT partitioning scheme in VVC, including the concept of the partition path. The motivation and detailed description for our proposed representation of the QTMT partition are presented in Section III. In Section IV, the structure of the proposed CNN model is illustrated. We give a detailed description of the partitioning acceleration algorithm in Section V. The loss function of CNN and the dataset generation process are described in Section VI. In section VII, the evaluation of the prediction accuracy of
our CNN model is carried out. Furthermore, we compare our results with state-of-the-art RF-based and CNN-based methods, respectively. The complexity analysis of our method is also included in this section. Finally, Section VIII concludes the paper. Our source code is available at [https://github.com/Simon123123/MSMVF_VVC_TIP2023.git](https://github.com/Simon123123/MSMVF_VVC_TIP2023.git).
## II Overview of Partitioning Structure in VVC
### _QTMT Partitioning_
The partitioning of VVC is done in a top-to-bottom manner. Starting from the CTU, the encoder applies possible split types recursively to the CU at each level of the partitioning tree, in order to find the partition which best exploits spatial and temporal redundancy. The QTMT partitioning scheme in VVC consists of splitting the CU using either a QT split or an MT split. For the MT split, four split types are introduced: Horizontal Binary Tree (HBT) split, Vertical Binary Tree (VBT) split, Horizontal Ternary Tree (HTT) split and Vertical Ternary Tree (VTT) split, as illustrated in Figure 1.
Based on the QTMT partitioning scheme, dimensions of the final encoded CU can range from 128x128 to 4x4 [6], including square and rectangular CUs of 32 different sizes in the RA configuration with a CTU size of 128x128. As shown in Figure 2, QTMT can achieve fine partitioning adapted to the local frame texture.
A noteworthy characteristic of QTMT partitioning is that the available split types depend on CU sizes. Figure 3 presents the number of available split types, including No Split (NS), per CU size for luma samples. The number of split options varies from 1 to 6 for different CU sizes, making partition acceleration at the CU level more complicated. Apart from the CU size restrictions on available split types, VVC implements various shortcuts [25], including speed-up rules based on content gradients and QT search restrictions estimated on neighboring CUs, etc.
The partitioning is executed at the CTU level with a dual tree, allowing separate partitioning trees for luma and chroma. Our algorithm only accelerates the luma partitioning search using luma samples, and the resulting luma partitioning tree is then applied to the chroma components.
### _Partition Path_
In this work, we introduce the concept of partition path to depict the partition of a CU. The partition path refers to the sequence of splits applied to obtain a CU during the partitioning. In the RDO process of partitioning, numerous partition paths included in the partition search space are checked and the optimal one leading to the final partition is selected. Figure 4 illustrates a simplified tree representation showing all possible partition paths checked for a CTU. The red arrows indicate the selected partition path with the lowest RD-cost [26], while the blue arrows represent other paths that have been tested by RDO, but were not selected.
Specifically, within the QTMT partition, it is important to note that QT splits are prohibited for the child nodes of a MT split. Consequently, the search for optimal partition path in VVC can be conceptualized as a sequential two-step decision-making process, comprising a sequence of QT splits followed by a series of MT splits.
## III Novel Representation of QTMT Partition
Based on the QTMT partitioning structure and the partition path of VVC presented in the previous section, we introduce
Fig. 1: Different split types in VVC.
Fig. 4: Example of partition paths.
Fig. 3: Number of split types per CU in VVC for Luma.
Fig. 2: Example of a partitioned frame in VVC. [8]
in this section a novel representation of QTMT partition by partition path. In III-A, we explain the motivation for this new representation. The partition path representation is illustrated in III-B.
### _Motivation_
Previous partition representations at CTU level have typically used binary vectors to depict split boundaries. In [14, 15, 24], the authors intend to predict the split boundary of each 4x4 sub-block in CTU. Lately, Wu _et al._ improve this representation in [16] by proposing hierarchical boundaries. This adaptation is designed to better align with the QTMT partition pattern. In this work, binary labels for split boundaries of varying lengths are predicted. Collectively, these methods provide a geometric representation of the partition.
The limitations of the geometric representation mainly lie in two aspects. Firstly, it is an implicit representation of the partitioning process, requiring conversions from boundary vectors to split decisions. In the case of [14], conversions are carried out by computing the average probability at the location of the specific split. [15] and [24] convert boundary vectors to split decisions by DT models separately trained for different CU sizes. Secondly, different partition paths could be deduced from a particular partition presented in a geometric way. For example, as demonstrated in Figure 5, the partition defined by the split boundaries can lead to three distinct partition paths. These partition paths correspond to different coding performances and are individually tested in the RDO process. This multiplicity of partition paths of the geometric representation limits the acceleration potential of the method.
To address the above limitations, we introduce a novel representation based on the partition path. Our representation comprises the QT depth map and the MT split maps. Firstly, the split decisions at each depth can be directly deduced from either the QT depth map or the split map. This eliminates the need for decision trees, reducing method overhead and simplifying implementation. Secondly, it corresponds to a unique partition path, maximizing the potential for complexity reduction.
### _QT Depth Map and MT Split Maps_
Considering that the maximum number of QT splits and MT splits is typically set to 4 and 3 in VTM, any partition can be effectively described by a QT depth map (_i.e._ QTdepthMap) along with three MT split maps (_i.e._ MTsplitMap) in sequence. Each element within QTdepthMap and MTsplitMap corresponds to an 8x8 and 4x4 area, which aligns with the dimensions of the smallest sub-CUs for the QT split and the MT split in VTM.
A detailed example of our partition representation is shown in Figure 6. To keep it simple and without loss of generality, we represent this example for a CTU size of 64x64. In this figure, (a) shows an instance of QTMT partition with its corresponding tree representation shown in (b). (c)-(f) illustrate the QTdepthMap and MTsplitMaps generated from this partition. Given that the CTU size in this example is 64x64, the sizes of QTdepthMap and MTsplitMap are 8x8 and 16x16, respectively. The QTdepthMap in (c) consists of QT depth values ranging from 0 to 4, while each element in MTsplitMap in (d)-(f) represents the split decision among five options: NS, HBT, VBT, HTT and VTT. This representation depicts a distinct partition path for every CU within the partition. To provide an example, consider the CU highlighted in the green circle in Figure 6. Its partition path can be expressed as three QT splits (QT depth 3), followed by a HBT split and two NS decisions.
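To make the mapping from a partition path to these maps concrete, the following sketch (a simplified illustration, not the actual dataset-generation code) fills a QTdepthMap and the first MTsplitMap for a 64x64 CTU in the spirit of Figure 6, using the label encoding given later in Section VI-B; the CU coordinates in the example call and the helper name `record_cu` are arbitrary assumptions.

```python
import numpy as np

CTU = 64                                  # CTU size used in the Figure 6 example
VTT, VBT, NS, HBT, HTT = 0, 1, 2, 3, 4    # MTsplitMap labels as listed in Section VI-B

qt_depth_map = np.zeros((CTU // 8, CTU // 8), dtype=np.int32)      # one entry per 8x8 block
mt_split_map0 = np.full((CTU // 4, CTU // 4), NS, dtype=np.int32)  # MT level 0, per 4x4 block

def record_cu(x, y, w, h, qt_depth, mt0_split=NS):
    """Write the start of one partition path into the maps.

    (x, y, w, h) describes the QT-leaf CU; qt_depth is the number of QT
    splits leading to it, and mt0_split the decision taken at MT level 0."""
    qt_depth_map[y // 8:(y + h) // 8, x // 8:(x + w) // 8] = qt_depth
    mt_split_map0[y // 4:(y + h) // 4, x // 4:(x + w) // 4] = mt0_split

# e.g. a 16x16 QT-leaf CU at (48, 0), reached by two QT splits, then split by HBT at MT level 0
record_cu(48, 0, 16, 16, qt_depth=2, mt0_split=HBT)
```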
## IV CNN-based Prediction of Partition Path
Predicting the optimal partition is equivalent to predicting the optimal partition path. In VTM, the size of CTU is set to 128x128 by default, consequently yielding QTdepthMap and MTsplitMap dimensions of 16x16 and 32x32, respectively. The representation of partition path can be predicted by a multi-branch CNN, where one branch infers the QTdepthMap of regression values with dimension 16x16x1, while the other three branches produce the MTsplitMap. Each element of MTsplitMap is classified into one of five classes, corresponding to five split types, resulting in three MT outputs with dimensions of 32x32x5. We have handled the classification of MT splits as an image segmentation problem based on 4x4 sub-blocks. Accordingly, we adopted the classical U-Net structure [27] to design our CNN model to address this segmentation-like task.
In this section, the U-Net structure is briefly introduced. Then we present the structure of the proposed CNN in Section IV-B. Afterwards, we list its input features and explain the reasons for choosing them in Section IV-C.
### _U-Net_
The U-Net structure is derived from Fully Convolutional Network (FCN) [28]. It consists of an encoder part which is composed of a sequence of convolutional layers plus max-pooling layers. Then this part is followed by a decoder part in which the maxpooling layers are replaced by upsampling layers. In addition, skip connections concatenate feature maps from the encoder and decoder with the same dimension. The U-Net and its variations have been widely applied to image segmentation tasks.
### _MS-MVF-CNN Structure_
The CNN structure proposed in this paper, named Multi-Scale Motion Vector Field CNN (MS-MVF-CNN), is depicted in Figure 7. The proposed CNN has 7 inputs and 4 outputs.
Fig. 5: Possible partition paths for a final partition given by split boundaries
After two convolutional layers with stride, the tensor of Input 1 is downsampled to dimension 32x32x8 and then concatenated with Input 2. The merged input is then fed to the U-Net feature extractor shown in Figure 8. For the design of this module, we refer to the classical structure of U-Net depicted in [27]. Specifically, we concatenate the upsampled feature map in the decoder part of the U-Net, the feature map copied from the encoder part, and the motion vector field of the same scale. In the decoder part, the feature map is gradually expanded and merged with normalized motion fields of dimensions 2x2x6, 4x4x6, 8x8x6, 16x16x6 and 32x32x6. As a result, the U-Net feature extractor outputs a feature map of dimension 32x32x8, combining pixel features with motion estimation features.
Since the split at each level depends on previous splits, we employ a hierarchical multi-branch prediction mechanism. QTdepthMap is predicted after shrinking the features extracted from the U-Net by four convolutional layers. For the MT branches, we designed the MT branch module presented in Figure 8. The two inputs of this module are the extracted features of the U-Net and the outputs from previous partition levels. We utilize the asymmetric kernel structure to process the extracted features. This structure was originally proposed in [29] for HEVC to pay attention to near-horizontal and near-vertical textures when predicting split decisions of intra coding by CNN. We adopt this structure to exploit the horizontal and vertical information contained in the Multi-Scale Motion Vector Field (MS-MVF). The MT branch module contains branches of kernel size MxN, LxL, and NxM. The values of (M, N, L) are set to (5, 7, 9) for branch MT0, (3, 5, 7) for branch MT1, and (1, 3, 3) for branch MT2. At deeper MT levels, splits are made on smaller CUs; thus, smaller kernel sizes are applied to extract finer features. After the asymmetric kernels, the feature map is concatenated with the outputs from previous levels. In the end, the merged feature maps are given to two residual blocks [30] before yielding the classification results of the MT branches. No activation is applied to the fully connected output layer of the QT depth branch. The output layer of the MT branch uses softmax activation.
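The asymmetric-kernel idea can be sketched in PyTorch as follows; the channel counts, class name, and the way the three views are fused are simplified assumptions, not the exact layer configuration of Figure 8.

```python
import torch
import torch.nn as nn

class AsymmetricKernelBranch(nn.Module):
    """Parallel MxN, LxL and NxM convolutions, as in the MT branch module,
    to emphasize near-horizontal and near-vertical structures.  The default
    (M, N, L) = (5, 7, 9) corresponds to branch MT0."""
    def __init__(self, in_ch=8, out_ch=8, m=5, n=7, l=9):
        super().__init__()
        self.conv_h = nn.Conv2d(in_ch, out_ch, (m, n), padding=(m // 2, n // 2))  # MxN kernel
        self.conv_s = nn.Conv2d(in_ch, out_ch, (l, l), padding=l // 2)            # LxL kernel
        self.conv_v = nn.Conv2d(in_ch, out_ch, (n, m), padding=(n // 2, m // 2))  # NxM kernel

    def forward(self, x):
        # The three views are concatenated; in the full model they would then be
        # merged with the outputs of previous partition levels before the residual blocks.
        return torch.cat([self.conv_h(x), self.conv_s(x), self.conv_v(x)], dim=1)

# e.g. a 32x32x8 feature map from the U-Net extractor
y = AsymmetricKernelBranch()(torch.randn(1, 8, 32, 32))   # -> shape (1, 24, 32, 32)
```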
### _Input Features_
This network structure takes three different types of input. The involved inputs are presented below:
#### Iv-C1 Original and Residual CTU
In Figure 7, Input 1, with dimensions of 128x128x2, is created by merging the original CTU with the residual CTU. The original luma pixels carry the texture details of the CTU, while the residual CTU is generated through motion compensation of the original CTU based on the nearest frame.
Several studies [21, 22, 31] have adopted a method in which both the original CTU values and the residual of CTU are fed to a CNN. Combining the original and residual values as input allows CNN to assess the similarity between current CTU and reference CTU. This combined input offers features that reflect the temporal correlation between frames which is a crucial factor in inter partition prediction.
#### Iv-C2 QP and Temporal ID
The Input 2, as illustrated in Figure 7, has dimensions of 32\(\times\)32\(\times\)2, consisting of two separate 32x32 matrices. These matrices are assigned specific values: one holds the Quantization Parameter (QP) value, while the other contains the temporal identifier. This temporal identifier in VVC, similar to its usage in HEVC, signifies a picture's position within a hierarchical temporal prediction structure, controlling temporal scalability [32].
Fig. 6: Example of QTMT partition, tree representation, QTdepthMap and MTsplitMaps of CTU size of 64x64
We specifically utilize the QP value and temporal identifier as input features since inter partitioning depends on them. In essence, a higher temporal layer identifier or a lower QP value tends to result in finer partitions, as outlined in [22]. Instead of developing separate models for each parameter instance, our approach focuses on training a model with adaptability to varying values of QP and temporal identifier.
#### Iv-C3 Multi-Scale Motion Vector Field
In this paper, we have introduced a CNN model based on a novel input feature called MS-MVF. Our MS-MVF at five scales is presented as Input 3-7 in Figure 8. To compute MS-MVF, we divide the 128x128 CTU into multiple scale sub-blocks ranging from 4x4 pixels to 64x64 pixels, and perform motion estimation on these sub-blocks. Each motion vector of sub-block comprises a vertical and horizontal motion value, along with the associated Sum of Absolute Differences (SAD) cost value as the third element. By concatenating elements pointing to reference frame of L0 with those of L1, each sub-block corresponds to 6 elements in the motion vector field. For example, the motion vector field input for 8x8-pixel scale has dimensions of 16x16x6.
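In practice the motion vectors come from the encoder-side motion estimation; purely as an illustration, the numpy sketch below computes one scale of such a field by exhaustive block matching against a single reference area. The search range, the single-reference handling, and the function name `motion_field` are assumptions, not the VTM implementation.

```python
import numpy as np

def motion_field(cur, ref, block=8, search=16):
    """Full-search block matching on 'block'-sized sub-blocks of a CTU.

    Returns an (H/block, W/block, 3) array holding, per sub-block, the
    vertical and horizontal motion components and the SAD cost, i.e. one
    reference-list half of the 6-channel field described in the text.
    'ref' is assumed to be a reference area at least as large as 'cur'."""
    h, w = cur.shape
    field = np.zeros((h // block, w // block, 3), dtype=np.float32)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            blk = cur[by:by + block, bx:bx + block].astype(np.int32)
            best = (0, 0, np.inf)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                        continue   # candidate block falls outside the reference area
                    sad = np.abs(blk - ref[y:y + block, x:x + block].astype(np.int32)).sum()
                    if sad < best[2]:
                        best = (dy, dx, sad)
            field[by // block, bx // block] = best
    return field
```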
A significant challenge in inter partition prediction is the
Fig. 8: U-Net feature extractor and MT branch module
Fig. 7: Multi-Scale Motion Vector Field CNN. The vector above Resblock and Conv2D represents (kernel size, number of filters, stride).
large motion search space, which spans up to 6 regions of 384x384 pixels across different reference frames in the RAGOP32 configuration. State-of-the-art methods typically employ motion fields or pixels from reference frames as input features for machine learning models. Notably, in [19] and [21], a crucial feature used is the motion field, which comprises motion vectors calculated for each 4x4 sub-block referring to the nearest frame. As mentioned in [19], this motion field is strongly correlated with the optimal partition. In a different approach, Tissier _et al._ in [15] opt to utilize two reference CTUs in the nearest frames.
The choice of using MS-MVF as the CNN input, instead of motion fields and reference pixels, is based on the following reasons. First, the MS-MVF contains crucial motion information for the current CTU, which is essential for both inter prediction and inter partitioning. This information can be interpreted more effectively by the CNN model compared to using reference pixels as CNN input. Second, the multi-scale nature of MS-MVF aligns well with the multi-level structure of U-Net and can leverage this structure effectively. Essentially, MS-MVF represents motion features at various resolutions, allowing for the combination with features extracted from CTU pixels at the same resolution scale.
To demonstrate the effectiveness of our MS-MVF input, we conducted an experiment involving the training of two CNN models. The only distinction between these models is their input: the first model, PIX-CNN, takes the pixels of two reference CTUs as input, while the second model, MVF-CNN, utilizes our proposed MS-MVF as input. Both models share the same architecture as in Figure 7. The training dataset comprises 250k samples randomly selected from the RAGOP32 encoding of 200 sequences with a resolution of 540p from [33]. Performance evaluations in Figure 9 are based on Class C sequences of Common Test Condition (CTC). The results consistently show that MVF-CNN outperforms PIX-CNN at all four data points, which justifies the advantages of using the MS-MVF input over pixel input.
Based on our evaluation conducted on the first 64 frames of all CTC sequences using the RAGOP32 configuration, the computation of MS-MVF for each CTU consumes, on average, a mere 0.52% of the encoding time in VTM10. Importantly, the generation of MS-MVF introduces only minimal encoding overhead, making it a task that can be readily preprocessed or parallelized.
## V Proposed CNN-based Acceleration Method
After the prediction by our trained CNN model, we obtain one QTdepthMap and three MTsplitMaps per CTU. The predicted QTdepthMap is composed of floating-point values. The predicted MTsplitMaps comprise probabilities of five split types for each 4x4 sub-block within the CTU. In this section, we elucidate the post-processing of the CNN prediction, with the aim of achieving a wide range of acceleration-loss trade-offs.
```
0: QTdepthMap; MTsplitMap; Thm; QTdepthcur, CU; \(\text{Size}_{\text{CU}}\); \(\text{Pos}_{\text{CU}}\)
0: SkipMT: Boolean to decide whether to skip MT split types or not.
0: CandSplit: Candidate list of splits for RDO check
1: Compute the average QTdepthpred based on \(\text{Size}_{\text{CU}}\), \(\text{Pos}_{\text{CU}}\) and QTdepthMap
2:ifround(QTdepthpred) \(>\) QTdepthcur and QT is possible for current CU then
3: SkipMT = True
4: CandSplit = {NS, QT}
5:else
6: SkipMT = False
7: CandSplit = {NS}
8:for sp in {BTH, BTV, TTH, TTV} do
9: Compute average \(\text{Proba}_{\text{sp}}\) based on \(\text{Size}_{\text{CU}}\), \(\text{Pos}_{\text{CU}}\) and MTsplitMap
10:if\(\text{Proba}_{\text{sp}}>\) Thm then
11: CandSplit append split sp
12:endif
13:endfor
14:endif
```
**Algorithm 1** MT splits early skipping
The acceleration algorithm is precisely described in Algorithm 1 and Figure 10. We introduce two parameters _Thm_ and _QTskip_ to regulate the acceleration-loss trade-off. Specifically,
Fig. 9: Comparison of performances between PIX-CNN and MVF-CNN.
_Thm_ is the threshold for the split probability. _QTskip_ represents whether we should accelerate RDO of QT splits or not. Increasing the _Thm_ value and setting _QTskip_ to true will lead to greater acceleration at the cost of increased coding loss.
Regarding the algorithm applied at the CU level, Algorithm 1 is first executed. This algorithm produces two outputs: the _SkipMT_ variable and the _CandSplit_ list, both of which are subsequently utilized in the flowchart in Figure 10. To start with, the mean _QTdepth\({}_{\textit{pred}}\)_ of the current CU is calculated based on the corresponding area in the QT depth map (QTdepthMap). If the rounded _QTdepth\({}_{\textit{pred}}\)_ is larger than the QT depth of the CU and a QT split is feasible, the current CU should be split by QT. Consequently, all MT splits are excluded from the _CandSplit_ list and _SkipMT_ is set to true. Otherwise, the mean probability of each available split is computed on the corresponding MTsplitMap in a similar way to that of the QT depth. Then _CandSplit_ is filled with splits whose _Proba\({}_{\textit{sp}}\)_ is larger than the threshold _Thm_. In this case, the value assigned to _SkipMT_ is False.
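A Python paraphrase of Algorithm 1 is given below as a sketch; the per-CU averages of the maps are assumed to have been computed beforehand and passed in as arguments, and split-type availability checks other than the QT feasibility flag are omitted.

```python
def mt_split_early_skip(qt_depth_pred, qt_depth_cur, qt_possible, mt_probas, th_m):
    """Candidate-split selection of Algorithm 1 for one CU.

    qt_depth_pred : mean of QTdepthMap over the CU area
    qt_possible   : whether a QT split is allowed for this CU
    mt_probas     : mean MTsplitMap probabilities over the CU area,
                    e.g. {'BTH': 0.1, 'BTV': 0.7, 'TTH': 0.05, 'TTV': 0.02}
    Returns (skip_mt, cand_split)."""
    if round(qt_depth_pred) > qt_depth_cur and qt_possible:
        return True, ['NS', 'QT']                 # only NS and QT are RDO-checked
    cand_split = ['NS']
    for sp in ('BTH', 'BTV', 'TTH', 'TTV'):
        if mt_probas[sp] > th_m:
            cand_split.append(sp)                 # keep splits above the threshold Thm
    return False, cand_split

# Example: a CU whose rounded predicted QT depth equals its current depth
print(mt_split_early_skip(1.3, 1, True,
                          {'BTH': 0.6, 'BTV': 0.1, 'TTH': 0.05, 'TTV': 0.02}, 0.3))
# -> (False, ['NS', 'BTH'])
```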
In the flowchart of Figure 10, if the _SkipMT_ is true after the execution of Algorithm 1, we directly check the _CandSplit_. In this scenario, the encoder conducts RDO of CU and splits CU with QT because _CandSplit_ contains only NS and QT. If _SkipMT_ is false, then we will verify if NS is the only choice in _CandList_. If this is the case, we will add the MT split with the highest probability to the list. Next, if QT split is not allowed for CU due to CU shape or shortcuts, we go directly to the check of _CandSplit_. If the QT split is feasible, we refer to _QTskip_ to determine whether to add QT to the _CandList_ or not. Setting the _QTskip_ to true signifies that we will always check QT if possible. This is for rectifying the potential error of predicting a _QTdepth\({}_{\textit{pred}}\)_ value smaller than the actual ground truth value. However, it comes at the expense of sacrificing some acceleration. Finally, we execute RDO on CU and partition it by split types in the _CandSplit_ list. The partition search then repeats for the next CU, and the algorithm described above is applied anew.
Our inter partitioning acceleration method is designed on top of the partitioning algorithm of VTM which performs a nearly exhaustive search on possible partition paths of a CTU, except that it incorporates a handful of handcrafted conditional shortcuts as mentioned in Section II. Therefore, this work can be considered as a CNN-based shortcut to reduce the search space of partition paths.
## VI Training of MS-MVF-CNN
To effectively train our CNN model, we have designed a hybrid loss function and created a large-scale dataset named MVF-Inter\({}^{1}\). First, we will explain how this loss function is determined in Section VI-A. Then Section VI-B describes training details and the generation of dataset.
### _Loss Function_
The outputs of MS-MVF-CNN contain one regression output as well as three classification outputs. Therefore, a hybrid loss function is developed in our case. We choose the category cross-entropy for classification loss and mean square error for regression loss as follows:
\[L=a\frac{1}{n_{q}}\sum_{i=1}^{n_{q}}(d_{i}-\hat{d}_{i})^{2}-(1-a)\left(\sum_{b=1}^{n_{b}}\sum_{i=1}^{n_{m}}\sum_{s=1}^{n_{s}}w_{b,s}\,y_{b,i,s}\log(\hat{y}_{b,i,s})\right) \tag{1}\]
Here, we have _n\({}_{q}\)_ = 256, _n\({}_{b}\)_ = 3, _n\({}_{m}\)_ = 1024 and _n\({}_{s}\)_ = 5, representing the number of elements in QTdepthMap, the number of MTbranches, the number of elements in MTsplitMap, and the number of split types, respectively. In this equation, \(d_{i}\) denotes the ground-truth QT depth value, while \(\hat{d}_{i}\) represents the predicted QT depth value. Additionally, \(\hat{y}_{b,i,s}\) is used to denote the predicted probability of split type \(s\) for the _i_-th element of the MT decision map at the _b_-th MT branch. Similarly, \(y_{b,i,s}\) signifies the ground-truth label for the same case. Notably, we introduce a parameter \(a\), which falls
Fig. 10: Flowchart of acceleration algorithm
within the range [0, 1], in Equation 1 to fine-tune the relative weights of the regression loss and classification loss.
The split types are unevenly distributed across the different MT depths, as illustrated in Figure 11. To counteract this imbalance, we introduce class weights for split type \(s\) on MT branch \(b\), denoted as \(w_{b,s}\). The definition of these weights is as follows:
\[w_{b,s}=\frac{\lambda_{s}p_{b,s=ns}}{p_{b,s}} \tag{2}\]
where \(p_{b,s}\) represents the percentage of split type \(s\) within MT branch \(b\). For each branch \(b\), \(\frac{p_{b,s=ns}}{p_{b,s}}\) can be interpreted as the inverse percentage of the split type \(s\) normalized by the inverse percentage of the NS split. In [6], a series of tests were performed to evaluate the coding gain and increase of complexity associated with the Binary Tree (BT) and Ternary-type Tree (TT) splits individually, as demonstrated in Table I.
When comparing Setting 1 and Setting 2 to the anchor configuration, it is observed that they exhibit similar BD-rate gains, but the encoding time in Setting 2 is twice that of Setting 1. These tests suggest that the BT split offers a better trade-off between complexity and coding gain than the TT split. Thus, placing greater importance on the prediction of the BT split can result in a better acceleration-loss trade-off. To achieve this, the ratio between the proportion of NS and the proportion of split \(s\) is computed for MT branch \(b\). The class weight \(w_{b,s}\) in Equation 2 is formulated as the product of this ratio and \(\lambda_{s}\), which is another weight added to prioritize the split type \(s\).
After fine-tuning the model, we find that the best performance is achieved with a value of 0.8 for \(a\) and \(\lambda_{s}\) set to 2 for BT splits and 1 for TT splits and NS.
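As a sketch, the hybrid loss of Equation (1) with the class weights of Equation (2) could be written as follows in PyTorch; averaging the per-sample losses over the batch and the tensor layouts are illustrative assumptions.

```python
import torch

def hybrid_loss(qt_pred, qt_true, mt_preds, mt_trues, class_weights, a=0.8, eps=1e-8):
    """Weighted sum of the QT regression loss and the MT classification losses.

    qt_pred/qt_true   : (B, 16, 16) predicted and ground-truth QT depth maps
    mt_preds/mt_trues : lists of three (B, 32, 32, 5) tensors per MT branch
                        (softmax probabilities / one-hot labels)
    class_weights     : (3, 5) tensor of w_{b,s} from Equation (2)."""
    reg = ((qt_pred - qt_true) ** 2).mean()                       # mean squared error term
    ce = 0.0
    for b, (p, y) in enumerate(zip(mt_preds, mt_trues)):
        w = class_weights[b].view(1, 1, 1, -1)                    # broadcast w_{b,s} over the map
        ce = ce - (w * y * torch.log(p + eps)).sum(dim=(1, 2, 3)).mean()
    return a * reg + (1 - a) * ce
```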
### _Dataset Generation and Training Details_
Constructing a large-scale inter partition dataset is more challenging than constructing an intra partition dataset, because the former requires encoding a substantial number of video sequences, while the latter can be done by encoding images. To the best of our knowledge, there exists no prior work focused on developing an inter partition dataset.
Our MVF-Inter\({}^{1}\) dataset involved the encoding of 800 sequences from [33] and an additional 28 sequences of 600 frames in 720p resolution extracted from [34]. Sequences of [33] cover resolutions of 240p, 540p, 1080p, and 4k, with 200 videos of 64 frames for each resolution. We have encoded all these videos with the VTM10 [35] encoder in the RAGOP32 configuration with QP 22, 27, 32, and 37. We randomly selected a total of 820k CTU partition samples, equally distributed per resolution and QP, with 120k samples reserved as a validation set.
Each sample of our dataset contains the following components of each CTU: pixel values, residual values, motion vector fields at five scales, QP value, temporal ID value, QTdepthMap with depths ranging from 0 to 4, and MTsplitMaps for MT0, MT1, and MT2. MTsplitMap labels are encoded as VTT (0), VBT (1), NS (2), HBT (3), and HTT (4).
In terms of training details, we employed the Adam optimizer [36] to train the model. The initial learning rate was set to \(10^{-3}\) and was exponentially decreased by 3% every 5 epochs. The batch size used for training is 400.
## VII Experimental Results and Analyses
In this section, we present the results of our experiments and provide an in-depth analysis. To begin, in Section VII-A, we assess the precision of the predictions of our CNN model. Subsequently, comparisons with RF- and CNN-based approaches are made in Section VII-B. Finally, the complexity analysis of our framework is carried out in Section VII-C.
### _Prediction Accuracy Evaluation_
At the CU level, our algorithm can be broken down into two decisions: the _SkipMT_ decision and the _CandSplit_ list decision. To evaluate the precision of the decisions based on our model's output, we performed encodings in which both the ground-truth partitioning and the CNN output were collected. The analysis is done on the first 64 frames of all CTC sequences excluding class D, with QP 22, 27, 32, 37. The accuracies of these decisions, presented in Table II and Figure 12, are calculated by averaging over the four QPs and the various test sequences.
There is no need to make a _SkipMT_ decision at QT depth 4, since the partitioning is forced to proceed to MT splits once the maximum QT depth is reached. The accuracy of the _SkipMT_ decision is measured independently at QT depths 0 to 3. If the current CU requires further QT splitting and _SkipMT_ is equal to False, then this _SkipMT_ decision is classified as a False Negative (FN). The proportions of True Positives (TP), FN, True Negatives (TN), False Positives (FP) and their corresponding Precision (Prec) and Recall (Rec) are shown in Table II. Precision, recall, and F1 score are calculated as follows:
\[Precision_{\text{QTdepth}}=\frac{TP_{\text{QTdepth}}}{TP_{\text{QTdepth}}+FP_ {\text{QTdepth}}} \tag{3}\]
| | BT split | TT split | BD-rate | Encoding Time |
|---|---|---|---|---|
| Anchor Setting | X | X | - | - |
| Setting 1 | ✓ | X | -8.26% | 337% |
| Setting 2 | X | ✓ | -10.22% | 732% |

TABLE I: Settings of split type in VTM9 under RA [6]
Fig. 11: Distribution of split types for MT0, MT1, MT2
\[Recall_{\text{QTdepth}}=\frac{TP_{\text{QTdepth}}}{TP_{\text{QTdepth}}+FN_{ \text{QTdepth}}} \tag{4}\]
\[F1score_{\text{QTdepth}}=2\frac{Precision_{\text{QTdepth}}Recall_{\text{QTdepth}} }{Precision_{\text{QTdepth}}+Recall_{\text{QTdepth}}} \tag{5}\]
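As a quick sanity check of Equations 3-5, the precision, recall, and F1 values in Table II can be recomputed from the raw TP/FN/TN/FP percentages; for instance, the QT depth 0 row yields approximately 90.3 / 84.3 / 87.2.

```python
def prf1(tp, fn, tn, fp):
    """Precision, recall, and F1 score (Equations 3-5), in percent.

    tn is not needed for these metrics but is kept for completeness.
    """
    precision = 100.0 * tp / (tp + fp)
    recall = 100.0 * tp / (tp + fn)
    f1 = 2.0 * precision * recall / (precision + recall)
    return precision, recall, f1

# QT depth 0 row of Table II (values are percentages of all decisions).
print(prf1(tp=41.84, fn=7.81, tn=45.83, fp=4.52))   # ~ (90.2, 84.3, 87.2)
```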
Generally, our model exhibits strong performance at QT depths ranging from 0 to 2, as depicted in Table II. Both precision and F1 score decrease as QT depth increases. At QT depth 3, the precision and F1 score drop to 25% and 40%, respectively, suggesting that the _SkipMT_ decision at this level is less reliable. These observations could be explained by two reasons:
First of all, the scale of decision-making diminishes as the QT depth increases. More explicitly, the _SkipMT_ decision at QT depth 0 is made at the CTU scale by computing the mean of 256 values from the QTdepthMap. In contrast, the decision at QT depth 3 relies only on 4 values from the QTdepthMap within the 16x16 CU. Consequently, decisions at smaller scales are less resilient to incorrectly predicted QTdepthMap values, resulting in lower overall accuracy at higher QT depths.

Secondly, decisions at higher QT depths are noticeably more imbalanced than those at lower QT depths. Positive cases of ground truth at QT depth 3 represent only 0.02%, while the proportion of positive cases is 49.65% at QT depth 0. As a result, the model is trained in such a way that it tends to make negative _SkipMT_ decisions at larger QT depths. This explains the decline in precision as the QT depth increases.
In Figure 12, the accuracy of the _CandSplit_ list decision is determined by whether the list contains the ground-truth split at the MT level. We compute separate accuracy curves for MT0, MT1 and MT2 by varying the threshold _Thm_. As _Thm_ increases, the size of the _CandSplit_ list decreases, leading to decreasing accuracy. Once _Thm_ reaches a certain value, the accuracy stabilizes because the _CandSplit_ list becomes constant, containing only the MT split type with the highest probability and NS. It is worth noting that the minimum accuracy increases with the MT depth. This is mainly due to the fact that NS is more frequent at larger MT depths, as illustrated in the pie chart in Figure 11. Since our _CandSplit_ list consistently includes NS, the accuracy tends to be relatively higher at larger MT depths.
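One plausible way to realize this behaviour, consistent with the description above (NS is always kept, the top-scoring MT split is always kept, and raising _Thm_ prunes the remaining candidates), is sketched below. This is only our reading of the decision rule, not the exact logic of the flowchart in Fig. 10.

```python
def cand_split_list(probs, thm):
    """Hedged sketch of building the CandSplit list from MT split probabilities.

    probs : dict mapping split type ('VTT', 'VBT', 'NS', 'HBT', 'HTT') to its
            predicted probability for the current CU.
    thm   : pruning threshold; larger values yield shorter candidate lists.
    """
    mt_probs = {s: p for s, p in probs.items() if s != "NS"}
    best = max(mt_probs, key=mt_probs.get)
    # NS and the most probable MT split are always kept; the remaining MT
    # splits are kept only if their probability exceeds Thm (assumed rule).
    cand = {"NS", best}
    cand.update(s for s, p in mt_probs.items() if p >= thm)
    return cand
```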
In general, our model achieves a satisfactory F1 score for QT depths 0, 1 and 2 regarding the _SkipMT_ decision. As for the _CandSplit_ list decision, our algorithm maintains an accuracy exceeding 65% while adjusting the value of _Thm_ at the various MT levels. These evaluations justify the high accuracy of the decisions made by our method during the partition search process in VVC.
### _Comparison with the State of the Art_
The proposed method has been implemented in the VTM10.0 encoder using the Frugally-deep library [37] for real-time CPU-based inference. To showcase the effectiveness of our method in the latest version of VTM, we also conducted experiments using VTM21, represented by the black curve in Figure 13. Encodings of CTC sequences are performed on a Linux machine with an Intel Xeon E5-2697 v4 in a single-threaded manner. These experiments were conducted on the first 64 frames of the CTC sequences with the RAGOP32 configuration at the four QP values 22, 27, 32, 37.
Two metrics were used to assess the performance: BD-rate [38] and Time Saving (TS). The formula for computing TS is provided in Equation 6. Here, T\({}_{\text{Test}}\) denotes the encoding time of the proposed method, while T\({}_{\text{VTM}}\) represents the encoding time of the original VTM10 under the same conditions. The average BD-rate loss and Time Saving (TS) are computed as the arithmetic mean and geometric mean, respectively, over the four QP values and the CTC sequences, as defined in [39]. In addition, sequences of class D are excluded when computing the overall average performance.
\[TS=\frac{1}{4}\sum_{q\in\{22,27,32,37\}}\frac{T_{VTM}(q)-T_{\text{Test}}(q)}{ T_{VTM}(q)} \tag{6}\]
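Equation 6 simply averages the per-QP relative time reduction; a direct transcription:

```python
def time_saving(t_vtm, t_test, qps=(22, 27, 32, 37)):
    """Time Saving (Equation 6): mean relative encoding-time reduction.

    t_vtm, t_test : dicts mapping QP -> encoding time (in the same units).
    """
    return sum((t_vtm[q] - t_test[q]) / t_vtm[q] for q in qps) / len(qps)
```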
The acceleration performances obtained from the state-of-the-art RF-based methods could not be directly compared with our performance. There are two main reasons for this. First of all, the results of [19] and [20] are based on VTM5.0 and VTM8.0, respectively. The differences of encoder complexity among various VTM versions are not negligible as highlighted in [40], which makes it less valid to directly compare our performances with theirs. Secondly, the training dataset was generated from a subset of CTC sequences, and the results were not obtained from the entire CTC. This approach results in possible overfitting and reduces the credibility of their results. As a result, comparing our results obtained on the entire CTC with their results is not fair.
[20] is an extended and specialized work for VVC based on [19]. We have reproduced the result of [20] in VTM10 to perform an unbiased comparison between our method and the RF-based method of [20].
| | TP | FN | TN | FP | Prec | Rec | F1 score |
|---|---|---|---|---|---|---|---|
| QT depth 0 | 41.84 | 7.81 | 45.83 | 4.52 | 90.3 | 84.3 | 87.2 |
| QT depth 1 | 19.53 | 0.58 | 72.57 | 7.32 | 72.7 | 97.1 | 83.1 |
| QT depth 2 | 2.69 | 0.08 | 94.67 | 2.57 | 51.1 | 97.1 | 67.0 |
| QT depth 3 | 0.02 | 0 | 99.92 | 0.06 | 25.0 | 100.0 | 40.0 |

TABLE II: Table of confusion for _SkipMT_ (Unit: %)
Fig. 12: Curves of accuracy and _Thm_ for MT0, MT1, MT2
First of all, we created a non-CTC dataset for training. Table III presents details on the composition of sequences for the dataset. For the 720p resolution, sequences are selected from [34], and sequences for the other resolutions are from [33]. In the end, we generated a large dataset with 3.7e7 samples for the training of 17 Hor/Ver classifiers, as well as 2.5e6 samples for the training of 4 QT/MTT classifiers. After generating the dataset, we trained, pruned and integrated the RF classifiers in VTM10.0. This was done in a manner consistent with the original article, including the implementation of the early termination rule for TT\({}^{2}\).
Footnote 2: The code and dataset of reproduction is available at [https://github.com/Simon123123/vtm10_fast_dt_inter_partition_pes2021.git](https://github.com/Simon123123/vtm10_fast_dt_inter_partition_pes2021.git)
We reproduce the results of the medium and fast speed presets of [20] in VTM10. It should be noted that the maximum MT depth is limited to 2 for the fast preset. We plot the curve of BD-rate loss and TS of our method by gradually adjusting the thresholds _Thm_ and _QTskip_ to build six settings. The curves obtained are shown in Figure 13. For example, the label (T, 0.125) signifies that in this particular setting, _QTskip_ and _Thm_ are assigned the values True and 0.125, respectively. Our method can achieve scalable acceleration varying from 16.5% to 60.2% with BD-rate loss ranging from 0.44% to 4.59%. Compared with the fast preset, the setting (T, 0.175) produces the same acceleration with a 0.84% lower BD-rate loss. Similarly, the setting (T, 0) reaches the same BD-rate loss while providing a 17% higher speed-up compared to the medium preset. In summary, our method generally outperforms the state-of-the-art RF-based method. It is worth mentioning that the results in VTM21 are obtained with our CNN model, which was originally trained on VTM10. Consequently, it is expected to exhibit reduced performance compared to the results in VTM10. Nonetheless, our method remains applicable and effective in the latest version of VTM.
Regarding CNN-based approaches, we compared our method with [21] and [15] in Table IV. The VTM version of [21] is VTM6. Thus we reimplement our method and integrate our model trained on VTM10 into VTM6 for a fair comparison within the same context. In Table IV, the reimplementation in VTM6, labeled as (T, 0, VTM6), reaches a slightly larger acceleration with only one-third of the BD-rate loss compared to [21]. For [15], their VTM version is the same as ours, allowing for direct comparisons. Encoding with _Thm_ = 0.125 yields a 40.6% reduction in the encoding time, which is similar to the acceleration achieved by the C2 configuration in [15], but with only half of its BD-rate loss. Furthermore, our method with _Thm_ set to 0.2 outperforms their C3 configuration, achieving a 0.52% lower BD-rate loss at the same level of acceleration. In conclusion, our method consistently outperforms all state-of-the-art methods.
It is important to note that the level of acceleration can vary across sequence classes (e.g. resolution), which is consistent with other CNN-based methods. As discussed in [6], CTUs that exceed the picture boundary are called partial CTUs. These partial CTUs require a different partition search scheme compared to regular CTUs. Consequently, the encoding of partial CTUs is not accelerated, since the CNN-based approaches are not applicable to them. Generally, the proportion of the frame region occupied by partial CTUs is larger for lower resolutions, resulting in less acceleration when fast partitioning approaches are used on smaller resolutions. This partially explains the limited acceleration observed in class D, which was excluded from the overall performance calculation. More specifically, our method tends to perform better on higher resolutions (e.g. class A and class B) while achieving less acceleration than state-of-the-art methods on lower resolutions (e.g. class C, class D and class E). Investigating and improving this aspect could be a focus of future work.
### _Complexity Analysis_
Machine learning-based fast partitioning methods may not be suitable for alternative implementations of the same codec. For example, VVenc [41] is a fast implementation of VVC. In the All Intra configuration, VTM10.0 is reported to be 27 times more complex than VVenc with the fast preset, as mentioned in [42]. The overall complexity of the CNN-based method presented in [17] accounts for only 2.34% of the encoding time of the VTM10 encoder. However, when this method is implemented in VVenc without any adjustments, its overhead increases to about 67% of the encoding time with the fast preset, which means that this method is not directly applicable to VVenc. Consequently, it is crucial to develop a lightweight method to ensure its applicability across different implementations. Furthermore, lightweight methods do not require parallel execution, enhancing the cost-effectiveness of such solutions.
Fig. 13: Comparison of performances between proposed method and reproduction of [20].
TABLE III: Breakdown of sequences used to train the RFs of [20] (number of videos per resolution: 240p, 480p, 720p, 1080p and 4k).
As a result, we conducted a complexity analysis of our method to compare it with the state of the art. The overhead of a machine learning-based method typically consists of three components: preprocessing time, inference time, and postprocessing time. The post-processing of our method is integrated into the VVC partitioning process and introduces minimal overhead to the encoding process. However, preprocessing is necessary to compute the MS-MVF as model input. Table V provides the complexity of the preprocessing and the inference of CNN related to the encoding of the anchor VTM10. The last column corresponds to the geometric average of complexity for sequences from class A to E (including class D). Based on experimental results, the CNN inference time on a CPU accounts for only 0.60% of the total encoding time. Our approach consumes only 1.21% of the total encoding time, underscoring its lightweight nature.
Another important metric for evaluating the complexity of the model is its number of floating point operations (FLOPs). Our model has a FLOPs count of \(1.12\times 10^{6}\). In comparison, the FLOPs of the model in [43] is approximately \(1.1\times 10^{9}\), [16] employs a pruned ResNet-18 as the backbone with \(9\times 10^{7}\) FLOPs, and [15] utilizes the pretrained MobileNetV2 with \(3.14\times 10^{8}\) FLOPs. Our model is hundreds of times lighter than these methods. The lightweight nature of our proposed approach facilitates its adaptation to faster encoders.
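The "hundreds of times lighter" claim follows directly from the ratios of the reported FLOPs counts:

```python
ours = 1.12e6
others = {"[43]": 1.1e9, "[16]": 9e7, "[15]": 3.14e8}
for ref, flops in others.items():
    print(f"{ref}: {flops / ours:.0f}x the FLOPs of our model")
# [43]: ~982x, [16]: ~80x, [15]: ~280x
```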
## VIII Conclusion
In this study, we propose a machine learning-based method to accelerate VVC inter partitioning. Our method leverages a novel representation of the QTMT partition structure based on partition paths, consisting of the QTdepthMap and the MTsplitMaps. Our work is structured as follows. Firstly, we built a large-scale inter partition dataset. Secondly, a novel Unet-based model that takes the MS-MVF as input is trained to predict the partition paths of the CTU. Thirdly, we develop a scalable, threshold-based acceleration algorithm to utilize the output of the model. Finally, we speed up the VTM10 encoder under the RAGOP32 configuration by 16.5%\(\sim\)60.2% with a BD-rate loss of 0.44%\(\sim\)4.59%. This performance surpasses state-of-the-art methods in terms of the coding efficiency and complexity trade-off. Notably, our method is among the most lightweight methods in the field, making it possible to adapt our approach to faster codecs.
For future work, we intend to investigate how video resolution influences partitioning acceleration, aiming to boost the speed-up of our method on lower resolutions. Furthermore, there is still acceleration potential lying in the selection of inter coding modes at the CU level, as discussed in [44]. An extension of our approach could be the incorporation of fast inter coding mode selection algorithm into our method to further accelerate the inter coding process.
|
2307.15886 | The modified scattering of 2 dimensional semi-relativistic Hartree
equations | In this paper, we consider the asymptotic behaviors of small solutions to the
semi-relativistic Hartree equations in two dimension. The nonlinear term is
convolved with the Coulomb potential 1/|x|, and it produces the long-range
interaction in the sense of scattering phenomenon. From this observation, one
anticipates that small solutions converge to a modified scattering states,
although they decay as linear solutions. We show the global well-posedness and
the modified scattering for small solutions in weighted Sobolev spaces. Our
proof follows a road map of exploiting the space-time resonance developed by
Germain, Masmoudi, and Shatah. Compared to the result in three dimensional case
by Pusateri, weaker time decay in two dimension is one of the main obstacles. | Soonsik Kwon, Kiyeon Lee, Changhun Yang | 2023-07-29T04:47:11Z | http://arxiv.org/abs/2307.15886v1 | # The modified scattering of 2 dimensional semi-relativistic Hartree equations
###### Abstract.
In this paper, we consider the asymptotic behaviors of small solutions to the semi-relativistic Hartree equations in two dimension. The nonlinear term is convolved with the Coulomb potential \(|x|^{-1}\), and it produces the _long-range interaction_ in the sense of scattering phenomenon. From this observation, one anticipates that small solutions converge to a modified scattering states, although they decay as linear solutions. We show the global well-posedness and the modified scattering for small solutions in weighted Sobolev spaces. Our proof follows a road map of exploiting the space-time resonance by [12, 30]. Compared to the result in three dimensional case [30], weaker time decay in two dimension is one of the main obstacles.
## 1. Introduction
### The equation and previous results
We consider the semi-relativistic Hartree equations with _Coulomb_ potential
\[\begin{cases}-i\partial_{t}u+\sqrt{m^{2}-\Delta}u&=\lambda\left(|x|^{-1}*|u|^{ 2}\right)u&\text{in}\ \ \mathbb{R}\times\mathbb{R}^{d},\\ u(0,\cdot)&=u_{0}&\text{in}\ \ \mathbb{R}^{d},\end{cases} \tag{1.1}\]
where the unknown \(u:\mathbb{R}^{1+d}\to\mathbb{C}\) and some fixed constant \(\lambda\in\mathbb{R}\). The nonlocal differential operator \(\sqrt{m^{2}-\Delta}\) is defined as a Fourier multiplier operator associated to the symbol \(\sqrt{m^{2}+|\xi|^{2}}\) and \(*\) denotes the convolution in \(\mathbb{R}^{d}\). Here we consider the mass parameter \(m>0\), so we normalize \(m=1\) throughout the paper. For three dimensional case \(d=3\), (1.1) is often referred to as _Boson star equation_ which describes the dynamics and collapse of relativistic Boson stars. It was rigorously derived as the mean-field limit of large systems of bosons. See [11, 24, 26] and references therein.
The mass and energy of solution to (1.1) are defined by
\[M(u)(t) =\|u(t)\|_{L^{2}(\mathbb{R}^{d})},\] \[E(u)(t) =\frac{1}{2}\int_{\mathbb{R}^{d}}\overline{u}\sqrt{1-\Delta}udx+ \frac{\lambda}{4}\int_{\mathbb{R}^{d}}\left(|x|^{-1}*|u|^{2}\right)|u|^{2}dx \tag{1.2}\]
respectively, and are conserved as time evolves. From the conservation laws, one sees that \(H^{\frac{1}{2}}\) is the energy space. Furthermore, in the massless case \(m=0\), we have a scaling symmetry: if \(u\) is a solution to (1.1), then \(u_{\alpha}\) defined by \(u_{\alpha}(t,x)=\alpha^{\frac{d}{2}}u(\alpha t,\alpha x)\) is also a solution, and the mass is invariant under the scaling, i.e., \(\|u(t)\|_{L^{2}(\mathbb{R}^{d})}=\|u_{\alpha}(t)\|_{L^{2}(\mathbb{R}^{d})}\); thus (1.1) is \(L^{2}\)-critical.
There are numerous local and global well-posedness results for the semi-relativistic Hartree equations (1.1). A first result was obtained in [23] where the local well-posedness in \(H^{s}(\mathbb{R}^{3})\) for \(s\geq\frac{1}{2}\) and global well-posedness in the energy space \(H^{\frac{1}{2}}(\mathbb{R}^{3})\) were established. This result was extended to other dimensions \(d\geq 2\) in [4]. Also, the authors in [4] established the low regularity well-posedness below the energy space, more precisely, the local well-posedness in \(H^{s}(\mathbb{R}^{d})\) for \(s>\frac{1}{2}-\frac{d-1}{4d}\). This result was later improved in [18] and [22] where the local well-posedness in \(H^{s}(\mathbb{R}^{3})\) for \(s\geq\frac{1}{4}\) and \(H^{s}(\mathbb{R}^{2})\) for \(s>\frac{3}{8}\) were proved, respectively.
The aim of this paper is to study the asymptotic behaviors of solutions to (1.1) when \(d=2\). By a _scattering_, we mean a solution to nonlinear PDEs converges to a solution of the linear equation as time goes to infinity. This phenomenon has been observed in various dispersive equations. Concerning our equation, let us consider the following generalized model
\[-i\partial_{t}u+\sqrt{1-\Delta}u=\lambda\left(|x|^{-\gamma}\ast|u|^{2}\right)u,\quad 0<\gamma<d,\quad\text{in}\ \,\,\mathbb{R}\times\mathbb{R}^{d}. \tag{1.3}\]
The asymptotic behavior of solutions to (1.3) varies depending on the potential, i.e., the range of \(\gamma\). To see this, by Duhamel's principle, we write (1.3) as the integral equation
\[u(t)=e^{it\sqrt{1-\Delta}}u_{0}+\lambda\int_{0}^{t}e^{i(t-s)\sqrt{1-\Delta}}\left(|x|^{-\gamma}\ast|u(s)|^{2}\right)u(s)ds.\]
Observe that if a solution to (1.3) scatters, the corresponding scattering profile would be given by
\[u_{0}+\lim_{t\to\infty}\lambda\int_{0}^{t}e^{-is\sqrt{1-\Delta}}\left(|x|^{- \gamma}\ast|u(s)|^{2}\right)u(s)ds.\]
By using the well-known time decay estimates of linear solution (see [25, Lemma 3])
\[\|e^{it\sqrt{1-\Delta}}u_{0}\|_{L^{\infty}(\mathbb{R}^{d})}\lesssim\langle t \rangle^{-\frac{d}{2}}\quad\text{for}\quad u_{0}\in C_{0}^{\infty}(\mathbb{R} ^{d}), \tag{1.4}\]
one verifies that the time decay of \(L^{2}\) norm of the nonlinearity, computed on a solution to the linear equation, is \(t^{-\gamma}\)1
Footnote 1: We refer to [4, Section 4] for the precise statement and detailed proof.
\[\left\|(|x|^{-\gamma}\ast|e^{it\sqrt{1-\Delta}}u_{0}|^{2})\,e^{it\sqrt{1- \Delta}}u_{0}\right\|_{L^{2}(\mathbb{R}^{d})}\sim|t|^{-\gamma}\quad\text{ for}\ |t|\gg 1.\]
Thus, one may infer that a linear profile cannot exist if \(0<\gamma\leq 1\), in which case the nonlinearity is called a _long-range interaction_. Indeed, the authors in [4] proved that (1.3) fails to scatter when \(0<\gamma\leq 1\) for \(d\geq 3\) and \(0<\gamma<\frac{d}{2}\) for \(d=1\) or \(2\). On the other hand, for the case \(1<\gamma<d\), which is called a _short-range interaction_, we may expect scattering. The first scattering result, for the case \(2<\gamma<d\) and \(d\geq 3\), was obtained in [4], and the gap corresponding to \(1<\gamma\leq 2\) was later closed in [30] for \(d=3\) and in [17] for \(d\geq 3\). Recently, a scattering result for the two dimensional case with \(1<\gamma<2\) was established in [32]. Lastly, we refer to [5, 6, 7] for related works.
Now, let us focus on the case \(\gamma=1\), which corresponds to our main equation. We refer to this as the "scattering-critical" case, because the time integration barely fails to be integrable, i.e., diverges logarithmically. We generally anticipate a _modified scattering_ result for solutions to equations with a scattering-critical nonlinearity. The modified scattering means that a global solution decays as linear solutions do, but converges to a linear solution _with a suitable correction_ (e.g. a phase modification). In the area of nonlinear dispersive equations, the first modified scattering result was established in [28] for one dimensional cubic nonlinear Schrodinger equations (NLS). This result was extended to higher dimensions in [15], where the authors also proved the modified scattering for NLS with Hartree nonlinear terms for \(d\geq 2\)
\[-i\partial_{t}u+\Delta u=\lambda\left(|x|^{-1}\ast|u|^{2}\right)u,\quad\text{ in}\ \,\mathbb{R}\times\mathbb{R}^{d}. \tag{1.5}\]
Later, in [21], the authors reproved the modified scattering for (1.5), the same equations addressed in [15], by a different technique, the _space-time resonance argument_, which was introduced in [12, 13, 14]. We should mention that the algebraic structure of the Schrodinger symbol plays a crucial role in their proof. Concerning our equation (1.1), where the linear operator is nonlocal, the structure of the resonance is more involved, so we have to derive a different asymptotic behavior of solutions. Also, a relatively higher regularity assumption on the initial data is required.

The modified scattering result for (1.1) in the three dimensional case was proved in [30]. We also refer to [16, 20, 31], where nonlinear equations with non-local differential operators were studied. Inspired by the work [30], we investigate the asymptotic behavior of solutions to (1.1) when \(d=2\).
### Main results and ideas
We now state our main theorem for the two dimensional semi-relativistic Hartree equations (1.1):
**Theorem 1.1**.: _Let \(n\geq 1000\) and \(k=\frac{n}{100}\). There exists \(\overline{\varepsilon_{0}}>0\) satisfying the following:_
_Suppose that the initial data \(u_{0}\) is sufficiently small in a weighted space. In other words, for any \(\varepsilon_{0}\leq\overline{\varepsilon_{0}}\), \(u_{0}\) satisfies_
\[\|u_{0}\|_{H^{n}(\mathbb{R}^{2})}+\|\langle x\rangle^{2}u_{0}\|_{H^{2}( \mathbb{R}^{2})}+\|\langle\xi\rangle^{k}\widehat{u_{0}}\|_{L^{\infty}(\mathbb{ R}^{2})}\leq\varepsilon_{0}. \tag{1.6}\]
_Then the Cauchy problem (1.1) with the initial data \(u_{0}\) has a unique global solution \(u\) to (1.1) decaying as_
\[\|u(t)\|_{L^{\infty}(\mathbb{R}^{2})}\lesssim\varepsilon_{0}\langle t\rangle^{ -1}. \tag{1.7}\]
_Moreover, there exists a scattering profile \(u_{\infty}\) such that_
\[\left\|\langle\xi\rangle^{k}\mathcal{F}\left[u(t)-e^{iB(t,D)}e^{-it(D)}u_{ \infty}\right]\right\|_{L^{\infty}_{\xi}(\mathbb{R}^{2})}\lesssim\varepsilon_ {0}\langle t\rangle^{-\delta}, \tag{1.8}\]
_for some \(0<\delta<\frac{1}{100}\). Here, the phase modification is defined by_
\[B(t,\xi)=\frac{\lambda}{(2\pi)^{2}}\int_{0}^{t}\left(\int_{\mathbb{R}^{2}} \left|\frac{\xi}{\langle\xi\rangle}-\frac{\sigma}{\langle\sigma\rangle} \right|^{-1}|\widehat{u}(\sigma)|^{2}d\sigma\right)\frac{\rho(s^{-\frac{2}{n }}\xi)}{\langle s\rangle}ds, \tag{1.9}\]
_where \(\rho\) is a smooth compactly supported function._
**Remark 1.2**.: _We make no attempt to optimize the regularity indices \(n\) and \(k\) and the time decay \(\delta>0\) in Theorem 1.1._
**Remark 1.3**.: _We prefer to express the formula for the phase modification (1.9) in the Fourier space because it can be seen not only from the heuristic consideration (see (1.15) below) but also in our rigorous proof._
_Furthermore, we observe that the convergence in the weighted \(L^{\infty}\) norm in (1.8) immediately implies the convergence in \(L^{2}\)._
**Remark 1.4**.: _The time decay rate of solutions in (1.7) is optimal in the sense that the nonlinear solutions decay at the same rate as the linear ones (1.4)._
Theorem 1.1 contains the global existence and asymptotic behavior of small solutions to (1.1). Our proof of the global existence of solutions is based on a bootstrap argument in a weighted Sobolev space, and the next crucial part is to perform a suitable phase correction and find a modified scattering state. Briefly, the proof of Theorem 1.1 consists of three steps. First, we find the time decay of solutions to (1.1), from which we construct a function space consisting of the weighted energy norm and the scattering norm. The second step is to show that small solutions stay small as long as they exist by performing weighted energy estimates. Our strategy is based on the method of space-time resonance, which was introduced in [12, 13, 14, 30]. The final step is to obtain the bound for the scattering norm in the function space. It is in this step that a suitable correction of the phase, based on the Taylor expansion, is required to close the bootstrap argument. Collecting the results of the three steps, we finally obtain the modified scattering result for (1.1).
Let us explain in detail the ideas of the proof in each step. In the first step, we use the standard stationary phase method on oscillatory integrals to derive the time decay of linear solutions, \(t^{-1}\). By a direct proof, without resorting to the well-known \(L^{p}-L^{q}\) estimates (e.g. [25, Lemma 3]), we manage to obtain the time decay of solutions up to derivatives of order \(k\), which is essential in the course of the weighted energy estimates to overcome the lack of time decay compared to higher dimensional cases. To fully utilize the time decay of solutions, we construct our solution space based on weighted \(L^{2}\)-norms.
In the second step, we show that small nonlinear solutions stay small on their interval of existence by performing weighted energy estimates. We introduce the interaction representation of the solution \(u(t)\) so as to track the scattering states
\[f(t,x):=e^{it\langle D\rangle}u(t,x). \tag{1.10}\]
Then we can express \(f\) via Duhamel's representation
\[\begin{split}\widehat{f}(t,\xi)&=\widehat{u_{0}}( \xi)+i\lambda\mathcal{I}(t,\xi),\\ \mathcal{I}(t,\xi)&=\frac{1}{2\pi}\int_{0}^{t}\int_ {\mathbb{R}^{2}\times\mathbb{R}^{2}}e^{is\phi(\xi,\eta)}|\eta|^{-1}\widehat{|u |^{2}}(\eta)\widehat{f}(s,\xi-\eta)d\eta ds\end{split} \tag{1.11}\]
with the resonance function
\[\phi(\xi,\eta)=\langle\xi\rangle-\langle\xi-\eta\rangle. \tag{1.12}\]
In the course of the weighted energy estimates, we have to bound \(xf\) and \(x^{2}f\) in \(L^{2}\), which are converted to \(\nabla\widehat{f}\) and \(\nabla^{2}\widehat{f}\) in the Fourier space, respectively. The main task is not only to bound the singularity \(|\eta|^{-1}\), but also to recover the time growth resulting from the derivative \(\nabla_{\xi}\) applied to the exponential factor. Indeed, the most delicate term arises from \(\nabla^{2}\widehat{f}\) when both derivatives fall on \(e^{is\phi(\xi,\eta)}\)
\[\frac{1}{2\pi}\int_{0}^{t}\int_{\mathbb{R}^{2}}s^{2}\big{(}\nabla_{\xi}\phi( \xi,\eta)\big{)}^{2}e^{is\phi(\xi,\eta)}|\eta|^{-1}\widehat{|u(s)|^{2}}(\eta) \widehat{f}(s,\xi-\eta)d\eta ds, \tag{1.13}\]
where we have to compensate for the time growth \(s^{2}\). Here we encounter the main difficulty of the two dimensional setting, namely the weaker time decay \(|s|^{-1}\) of solutions in contrast to the three or higher dimensional problem: the \(L^{2}\)-norm of the cubic nonlinearity in (1.13) enjoys at most \(s^{-2}\) decay, which is not sufficient to make (1.13) integrable in time. Nevertheless, since the singularity \(|\eta|^{-1}\) near the origin is weaker than in the three or higher dimensional case, where \(\mathcal{F}(|x|^{-1})(\eta)=C_{d}|\eta|^{-d+1}\), we anticipate that this advantage leads to an extra time decay. One of the key observations, as already made in [30], is the null structure of the phase function
\[\nabla_{\xi}\phi(\xi,\eta)\Big{|}_{\eta=0}=\nabla_{\xi}\Big{(}\langle\xi \rangle-\langle\xi-\eta\rangle\Big{)}\Big{|}_{\eta=0}=0. \tag{1.14}\]
The null structure removes the singularity near the origin, more precisely, the multiplier in (1.13) behaves as
\[|\nabla_{\xi}\phi(\xi,\eta)|^{2}|\eta|^{-1}\sim|\eta|,\quad\text{if}\ \ |\eta|,|\xi|\lesssim 1.\]
Using this, we can heuristically regard \(|\eta|\approx|s|^{-1}\) in the analysis. Indeed, for \(|\eta|\geq|s|^{-1}\), we can exploit the space resonance; in other words, we can apply an integration by parts to the following quadratic term
\[\widehat{|u(s)|^{2}}(\eta)=\frac{1}{(2\pi)^{2}}\int_{\mathbb{R}^{2}}e^{is( \langle\sigma+\eta\rangle-\langle\sigma\rangle)}\widehat{f}(s,\sigma)\widehat {\overline{f}(s,\eta+\sigma)}d\sigma,\]
by using \(|\nabla_{\sigma}(\langle\sigma+\eta\rangle-\langle\sigma\rangle)|\sim|\eta|\) for \(|\eta|,|\sigma|\lesssim 1\) to derive an additional time decay at the cost of \(|\eta|^{-1}\). In the rigorous proof below, we can then control \(xf\) and \(x^{2}f\) in \(L^{2}\), allowing a small growth in \(t\).
As mentioned above, since our main equation has a _long-range_ nonlinearity, the nonlinearity produces a logarithmic divergence in the time integration. To overcome this difficulty, we employ the phase modification (1.9) coming from the singular potential \(|x|^{-1}\) and obtain an extra logarithmic time decay by following the argument in [21, 30]. We begin with writing the nonlinear term as
\[\mathcal{I}(t,\xi)=\frac{1}{(2\pi)^{3}}\int_{0}^{t}\iint_{\mathbb{R}^{2}\times \mathbb{R}^{2}}e^{isp(\xi,\eta,\sigma)}|\eta|^{-1}\widehat{f}(s,\xi+\eta) \widehat{f}(s,\xi+\sigma)\overline{\widehat{f}(s,\xi+\eta+\sigma)}d\eta d \sigma ds,\]
where
\[p(\xi,\eta,\sigma)=\langle\xi\rangle-\langle\xi+\eta\rangle-\langle\xi+\sigma \rangle+\langle\xi+\eta+\sigma\rangle.\]
Let us assume that \(|\xi|\lesssim 1\). By Taylor expansion, the phase function is approximated by
\[p(\xi,\eta,\sigma)=\eta\cdot\left(\frac{\xi}{\langle\xi\rangle}-\frac{\xi+ \sigma}{\langle\xi+\sigma\rangle}\right)+O\left(|\eta|^{2}\right).\]
Then, neglecting all contributions that decay faster than \(|s|^{-1}\), we can approximate the above integration as
\[\begin{split}&\frac{1}{(2\pi)^{3}}\iint_{|\eta|\lesssim|s|^{-1 +}}e^{is\eta\cdot\left(\frac{\xi}{\langle\xi\rangle}-\frac{\xi+\sigma}{\langle \xi+\sigma\rangle}\right)}|\eta|^{-1}\widehat{f}(s,\xi)\widehat{f}(s,\xi+ \sigma)\overline{\widehat{f}(s,\xi+\sigma)}d\eta d\sigma\\ &=\frac{1}{(2\pi)^{3}}\widehat{f}(s,\xi)\iint_{\mathbb{R}^{2} \times\mathbb{R}^{2}}e^{is\eta\cdot\left(\frac{\xi}{\langle\xi\rangle}-\frac{ \xi+\sigma}{\langle\xi+\sigma\rangle}\right)}|\eta|^{-1}d\eta|\widehat{f}(s, \xi+\sigma)\big{|}^{2}d\sigma+O(s^{-1-})\\ &=\frac{1}{2\pi}\widehat{f}(s,\xi)\int_{\mathbb{R}^{2}}\mathcal{ F}^{-1}(|\eta|^{-1})\left(s\big{(}\frac{\xi}{\langle\xi\rangle}-\frac{\sigma}{ \langle\sigma\rangle}\big{)}\right)\big{|}\widehat{f}(s,\sigma)\big{|}^{2}d \sigma+O(s^{-1-}),\end{split} \tag{1.15}\]
under the suitable assumption on \(f\). Then, we obtain
\[\partial_{t}\widehat{f}(t,\xi)=it^{-1}\frac{1}{(2\pi)^{2}}\widehat{f}(t,\xi) \int_{\mathbb{R}^{2}}\left|\frac{\xi}{\langle\xi\rangle}-\frac{\sigma}{ \langle\sigma\rangle}\right|^{-1}\big{|}\widehat{f}(t,\sigma)\big{|}^{2}d \sigma+O(t^{-1-})\]
which implies the modified scattering property (1.8) with the phase (1.9). The rigorous analysis of the error terms will be achieved by identifying a suitable scale in \(\eta\) with respect to time, say \(s^{-1+}\), and then exploiting the space resonance for \(|\eta|\gtrsim s^{-1+}\).
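To see how this leading-order equation produces the statements (1.8) and (1.9), one can argue heuristically as follows (the auxiliary function \(g\) below is introduced only for this sketch, and the cutoff \(\rho\) in (1.9) is ignored): setting

\[g(t,\xi):=e^{-iB(t,\xi)}\widehat{f}(t,\xi),\qquad\partial_{t}g(t,\xi)=e^{-iB(t,\xi)}\Big{(}\partial_{t}\widehat{f}(t,\xi)-i\partial_{t}B(t,\xi)\,\widehat{f}(t,\xi)\Big{)}=O(t^{-1-}),\]

since \(\partial_{t}B(t,\xi)\) is precisely the coefficient of \(i\widehat{f}(t,\xi)\) above. Hence \(g(t,\xi)\) converges to a limit \(\widehat{u_{\infty}}(\xi)\) as \(t\to\infty\), and undoing the substitution gives \(\widehat{f}(t,\xi)\approx e^{iB(t,\xi)}\widehat{u_{\infty}}(\xi)\), which is the content of (1.8) after recalling \(f(t)=e^{it\langle D\rangle}u(t)\).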
### Motivation: 2d models
The two dimensional semi-relativistic equation (1.1) might be regarded as a simplified model of the Chern-Simons-Dirac system under the Coulomb gauge condition 2
Footnote 2: We refer to [2] for its derivation.
(CSD-C) \[(-i\partial_{t}+\alpha\cdot D+m\beta)\psi=N(\psi,\psi)\psi\qquad\text{in}\ \, \mathbb{R}\times\mathbb{R}^{2},\]
where the unknown \(\psi:\mathbb{R}^{1+2}\to\mathbb{C}^{2}\) and the nonlinear term is given as
\[N(\psi,\psi)=\frac{1}{\Delta}\left[\left(\partial_{1}(\psi^{\dagger}\alpha^{2} \psi)-\partial_{2}(\psi^{\dagger}\alpha^{1}\psi)\right)+\left(\partial_{2}(| \psi|^{2})\alpha^{1}-\partial_{1}(|\psi|^{2})\alpha^{2}\right)\right]\]
with Dirac matrices \(\alpha^{j},\beta\) defined as
\[\alpha^{1}=\begin{bmatrix}0&i\\ -i&0\end{bmatrix},\quad\alpha^{2}=\begin{bmatrix}0&1\\ 1&0\end{bmatrix},\quad\beta=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}.\]
One of strategy to deal with Dirac operator is, as introduced in [9], to diagonalize the system using the following identity
\[\alpha\cdot D+m\beta=\langle D\rangle\Pi_{+}(D)-\langle D\rangle\Pi_{-}(D),\]
where \(\Pi_{\pm}(D)=\frac{1}{2}\left(I_{2}\pm\frac{1}{\langle D\rangle}\big{[}\alpha \cdot D+\beta\big{]}\right)\) are the projection operators. Letting \(\psi_{\pm}=\Pi_{\pm}(D)\psi\), (CSD-C) is indeed diagonalized into
\[-i\partial_{t}\psi_{\pm}\pm\langle D\rangle\psi_{\pm}=\sum_{\theta_{1},\theta_ {2},\theta_{3}\in\{\pm\}}N(\psi_{\theta_{1}},\psi_{\theta_{2}})\psi_{\theta_{ 3}},\]
which consists of a nonlocal differential operator and cubic Hartree-type nonlinear terms as in our main equation (1.1). In particular, the potentials in the Hartree terms are given by \(\frac{\eta_{j}}{|\eta|^{2}}\) for \(j=1,2\) in the Fourier space, which have a singularity near the origin and a decay of order \(-1\) similar to the potential in (1.1), so (CSD-C) can also be regarded as a scattering-critical equation and modified scattering would be expected. However, not only the long time behavior but even the global existence of solutions to (CSD-C) is still unknown, and only local results have been intensively studied, including other choices of gauge [2, 19, 27, 29]. One of the main difficulties in studying global solutions, compared to our equation (1.1), arises from the analysis of the following resonance functions
\[p_{(\theta_{1},\theta_{2},\theta_{3})}(\xi,\eta,\sigma)=\langle\xi\rangle- \theta_{1}\langle\xi-\eta\rangle-\theta_{2}\langle\eta+\sigma\rangle+\theta_{ 3}\langle\sigma\rangle,\quad\theta_{i}\in\{\pm\}. \tag{1.16}\]
Indeed, one can see that the key null structure (1.14) removing the singularity is no longer valid when \(\theta_{1}=-\). Nevertheless, we believe that the methodology in this paper, with the help of a careful analysis of the resonance set together with the null structures of the Dirac operator, will play a crucial role in studying the global behavior of solutions to (CSD-C).
A similar structure can also be observed in the following Dirac equation
(DE) \[(-i\partial_{t}+\alpha\cdot D+m\beta)\psi=\lambda\left(|x|^{-1}*|\psi|^{2} \right)\psi\qquad\text{in}\ \ \mathbb{R}\times\mathbb{R}^{2},\]
where \(\psi:\mathbb{R}^{2}\to\mathbb{C}^{2}\) is the spinor. (DE) describes the relativistic dynamics of electrons in graphene and can be derived from the nonlinear Schrodinger equation with a potential which is periodic with respect to a honeycomb structure [1]. (DE) can also be regarded as a scattering-critical equation, but only local results were established in [10, 22]. The global well-posedness and modified scattering for (DE) will be treated in future work. Finally, we refer to [3, 8] for global results for (DE) in three dimensions.
**Notations.**
\(\bullet\) (Fourier transform) \(\mathcal{F}g(\xi)=\widehat{g}(\xi):=\int_{\mathbb{R}^{2}}e^{-ix\cdot\xi}g(x)dx\) and \(g(x)=\frac{1}{(2\pi)^{2}}\int_{\mathbb{R}^{2}}e^{ix\cdot\xi}\widehat{g}(\xi)d\xi\).
\(\bullet\) (Mixed-normed spaces) For a Banach space \(X\) and an interval \(I\), \(u\in L^{q}_{I}X\) iff \(u(t)\in X\) for a.e. \(t\in I\) and \(\|u\|_{L^{q}_{I}X}:=\|\|u(t)\|_{X}\|_{L^{q}_{I}}<\infty\). Especially, we denote \(L^{q}_{I}L^{r}_{x}=L^{q}_{t}(I;L^{r}_{x}(\mathbb{R}^{2}))\), \(L^{q}_{I,x}=L^{q}_{I}L^{q}_{x}\), \(L^{q}_{t}L^{r}_{x}=L^{q}_{\mathbb{R}}L^{r}_{x}\).
\(\bullet\) As usual, different positive constants are denoted by the same letter \(C\), if not specified. \(A\lesssim B\) and \(A\gtrsim B\) means that \(A\leq CB\) and \(A\geq C^{-1}B\), respectively for some \(C>0\). \(A\sim B\) means that \(A\lesssim B\) and \(A\gtrsim B\).
\(\bullet\) (Fourier multiplier) \(D=-i\nabla\). For \(m:\mathbb{R}^{2}\to\mathbb{R}\), \(m(D)f:=\mathcal{F}^{-1}\big{(}m(\xi)\widehat{f}(\xi)\big{)}\).
\(\bullet\) (Littlewood-Paley operators) Let \(\rho\) be a bump function such that \(\rho\in C^{\infty}_{0}(B(0,2))\) with \(\rho(\xi)=1\) for \(|\xi|\leq 1\) and define \(\rho_{N}(\xi):=\rho\left(\frac{\xi}{N}\right)-\rho\left(\frac{2\xi}{N}\right)\) for \(N\in 2^{\mathbb{Z}}\) and also \(\rho_{\leq N_{0}}:=1-\sum_{N>N_{0}}\rho_{N}\). We define the frequency projection \(P_{N}\) by \(\mathcal{F}(P_{N}f)(\xi)=\rho_{N}(\xi)\widehat{f}(\xi)\). In addition \(P_{N_{1}\leq\cdot\leq N_{2}}:=\sum_{N_{1}\leq N\leq N_{2}}P_{N}\) and \(P_{\sim N_{0}}:=\sum_{N\sim N_{0}}P_{N}\). For \(N\in 2^{\mathbb{Z}}\) we denote \(\widetilde{\rho_{N}}=\rho_{N/2}+\rho_{N}+\rho_{2N}\). In particular,
\(P_{N}\widetilde{P_{N}}=P_{N}\) where \(\widetilde{P_{N}}=\mathcal{F}^{-1}\widetilde{\rho_{N}}\mathcal{F}\). Especially, we denote \(P_{N}f\) by \(f_{N}\) for any measurable function \(f\).
\(\bullet\) Let \(\mathbf{v}=(v_{i}),\mathbf{w}=(w_{i})\in\mathbb{R}^{2}\). Then \(\mathbf{v}\otimes\mathbf{w}\) denotes the usual tensor product such that \((\mathbf{v}\otimes\mathbf{w})_{ij}=v_{i}w_{j}\) for \(i,j=1,2\). We also denote the tensor product of \(\mathbf{v}\in\mathbb{C}^{n}\) and \(\mathbf{w}\in\mathbb{C}^{m}\) by the matrix \(\mathbf{v}\otimes\mathbf{w}=(v_{i}w_{j})_{i=1,\cdots,n;\,j=1,\cdots,m}\). For simplicity, we use the simplified notation
\[\mathbf{v}^{k}=\overbrace{\mathbf{v}\otimes\cdots\otimes\mathbf{v}}^{k\text{ times}},\qquad\nabla^{k}=\overbrace{\nabla\otimes\cdots\otimes\nabla}^{k\text{ times}}.\]
The product of \(\mathbf{v}\) and \(f\in\mathbb{C}\) is given by \(\mathbf{v}f=\mathbf{v}\otimes f\).
\(\bullet\) For the distinction between a vector and a scalar, we use the bold letter for a vector-valued function and the normal letter for a scalar-valued function.
## 2. Time decay estimates
In this section, we find sharp time decay estimates for solutions to the linear equation. We then define an a priori assumption incorporating this time decay in order to get global solutions to (1.1). Define
\[f(t,x):=e^{it\langle D\rangle}u(t,x), \tag{2.1}\]
where \(u(t)\) is a solution to (1.1). Let us set
\[n\geq 1000,\,\,\,k=\frac{n}{100},\,\,\,\,\,\text{and}\,\,\,\,\,\delta_{0}=\frac{ 1}{100}. \tag{2.2}\]
For \(\varepsilon_{1}>0\) to be chosen later, we assume a priori smallness of solutions: for a large time \(T>0\),
\[\|u\|_{\Sigma_{T}}\lesssim\varepsilon_{1}, \tag{2.3}\]
with
\[\|u\|_{\Sigma_{T}}:=\sup_{t\in[0,T]}\Big{[}\langle t\rangle^{-\delta_{0}}\|u( t)\|_{E_{1}}+\langle t\rangle^{-2\delta_{0}}\|u(t)\|_{E_{2}}+\|u(t)\|_{S}\Big{]},\]
where
\[\|u(t)\|_{E_{1}} :=\|u(t)\|_{H^{n}(\mathbb{R}^{2})}+\|xe^{it\langle D\rangle}u(t)\|_{H^{2}(\mathbb{R}^{2})},\] \[\|u(t)\|_{E_{2}} :=\|x^{2}e^{it\langle D\rangle}u(t)\|_{H^{2}(\mathbb{R}^{2})},\] \[\|u(t)\|_{S} :=\left\|\langle\xi\rangle^{k}\widehat{u}(t)\right\|_{L^{\infty}_{\xi}(\mathbb{R}^{2})}.\]
We compute the pointwise time decay for the semi-relativistic equation under the a priori assumption (2.3). We refer to [30, Proposition 3.1] for the three dimensional case, where the sharp time decay \(|t|^{-\frac{3}{2}}\) was obtained. Here, we obtain sharp time decay estimates for the two dimensional case by following a similar strategy to [30, Proposition 3.1], but we have to bound the linear solution with derivatives up to order \(k\).
**Proposition 2.1** (Time decay).: _Assume that \(u\) satisfies the a priori assumption (2.3) for \(\varepsilon_{1}\) and \(T\) with the index conditions (2.2). Then for small \(\varepsilon_{1}\), there exists \(C\) such that for \(0\leq t\leq T\) and \(0\leq\ell\leq k\)_
\[\|u(t)\|_{W^{\ell,\infty}}\leq C\langle t\rangle^{-1}\varepsilon_{1}, \tag{2.4}\]
_where the index \(k\) is in the a priori assumption (2.3)._
Proof.: We prove that
\[\|\langle D\rangle^{\ell}e^{-it\langle D\rangle}f\|_{L^{\infty}}\lesssim 1,\]
whenever \(f\) satisfies
\[\langle t\rangle^{-1}\|\langle\xi\rangle^{k}\widehat{f}\|_{L^{\infty}_{\xi}}+ \langle t\rangle^{-1-\delta_{0}}\Big{[}\|xf\|_{L^{2}_{x}}+\|f\|_{H^{n}}\Big{]}+ \langle t\rangle^{-1-2\delta_{0}}\|x^{2}f\|_{L^{2}_{x}}\leq\varepsilon_{1}. \tag{2.5}\]
We begin with writing
\[\langle D\rangle^{\ell}e^{-it\langle D\rangle}f(t,x)=\sum_{N\in 2^{\mathbb{Z}}}I_{ N}(t,x)\]
where
\[I_{N}(t,x) :=\frac{1}{(2\pi)^{2}}\int_{\mathbb{R}^{2}}\langle\xi\rangle^{ \ell}e^{it\phi(\xi)}\widehat{f}(\xi)\rho_{N}(\xi)d\xi, \tag{2.6}\] \[\phi(\xi) :=-\langle\xi\rangle+\xi\cdot\frac{x}{t}.\]
We decompose
\[\sum_{N\in 2^{\mathbb{Z}}}I_{N}(t,x)=\left(\sum_{N\leq\langle t\rangle^{- \frac{1}{2}}}+\sum_{N\geq\langle t\rangle^{\frac{2}{n}}}+\sum_{\langle t \rangle^{-\frac{1}{2}}\leq N\leq\langle t\rangle^{\frac{2}{n}}}\right)I_{N}(t,x).\]
The low-frequency part can be estimated as
\[\sum_{N\leq\langle t\rangle^{-\frac{1}{2}}}I_{N}(t,x)\lesssim\sum_{N\leq \langle t\rangle^{-\frac{1}{2}}}\|\rho_{N}\|_{L^{1}}\left\|\langle\xi\rangle^{ \ell}\widehat{f}\right\|_{L^{\infty}_{\xi}}\lesssim\langle t\rangle^{-1}\| \langle\xi\rangle^{\ell}\widehat{f}\|_{L^{\infty}_{\xi}}\lesssim\varepsilon_{1}.\]
For the high-frequency, we exploit the high Sobolev norm bound as follows:
\[\sum_{N\geq\langle t\rangle^{\frac{2}{n}}}I_{N}(t,x) \lesssim\sum_{N\geq\langle t\rangle^{\frac{2}{n}}}\langle N \rangle^{\ell+1}\|\rho_{N}\widehat{f}\|_{L^{2}_{x}}\lesssim\sum_{N\geq\langle t \rangle^{\frac{2}{n}}}N^{\ell+1-n}\|f\|_{H^{n}}\] \[\lesssim\langle t\rangle^{-1-\delta_{0}}\|f\|_{H^{n}}\lesssim \varepsilon_{1}.\]
For the remaining mid-frequency part, we apply the non-stationary phase method. One verifies that when \(|x|>t\), the phase \(\phi\) is non-stationary, i.e.
\[|\nabla_{\xi}\phi(\xi)|\geq\left|\frac{|x|}{t}-\frac{|\xi|}{\langle\xi\rangle }\right|\geq 1-\frac{|\xi|}{\langle\xi\rangle}\gtrsim\langle\xi\rangle^{-2}. \tag{2.7}\]
On the other hand, when \(|x|<t\), the phase \(\phi\) could be stationary around \(\xi_{0}\):
\[\nabla_{\xi}\phi(\xi_{0})=0\;\;\text{where}\;\;\xi_{0}=-\frac{x}{\sqrt{t^{2}- |x|^{2}}}.\]
We now set \(N_{0}\sim|\xi_{0}|\). First, we consider the non-stationary case \(N\nsim N_{0}\). Then one can find that
\[\left|\nabla_{\xi}\phi(\xi)\right|\gtrsim\max\left(\frac{|\xi-\xi_{0}|}{ \langle N\rangle^{3}},\frac{|\xi-\xi_{0}|}{\langle N_{0}\rangle^{3}}\right), \;\text{for}\;|\xi|\sim N. \tag{2.8}\]
We perform an integration by parts twice to write \(I_{N}\) as
\[\int_{\mathbb{R}^{2}}\langle\xi\rangle^{\ell}e^{it\phi(\xi)}\widehat{f}(\xi) \rho_{N}(\xi)d\xi=I_{N}^{1}(t,x)+I_{N}^{2}(t,x)+I_{N}^{3}(t,x)\]
where
\[\begin{split} I_{N}^{1}(t,x)&=-t^{-2}\int_{\mathbb{R}^{ 2}}\left\langle\xi\right\rangle^{\ell}e^{it\phi(\xi)}\frac{\nabla_{\xi}\phi}{| \nabla_{\xi}\phi|^{4}}\cdot\nabla_{\xi}\phi\nabla_{\xi}^{2}\left(\widehat{f_{N} }(\xi)\right)d\xi,\\ I_{N}^{2}(t,x)&=-2t^{-2}\int_{\mathbb{R}^{2}}\left\langle \xi\right\rangle^{\ell}e^{it\phi(\xi)}\nabla_{\xi}\cdot\left(\frac{\nabla_{\xi }\phi}{|\nabla_{\xi}\phi|^{2}}\right)\frac{\nabla_{\xi}\phi}{|\nabla_{\xi}\phi| ^{2}}\cdot\nabla_{\xi}\widehat{f_{N}}(\xi)d\xi\\ I_{N}^{3}(t,x)&=-t^{-2}\int_{\mathbb{R}^{2}}\left\langle \xi\right\rangle^{\ell}e^{it\phi(\xi)}\nabla_{\xi}\cdot\left[\nabla_{\xi} \cdot\left(\frac{\nabla_{\xi}\phi}{|\nabla_{\xi}\phi|^{2}}\right)\frac{\nabla _{\xi}\phi}{|\nabla_{\xi}\phi|^{2}}\right]\widehat{f_{N}}(\xi)d\xi.\end{split} \tag{2.9}\]
By (2.7) and (2.8), we obtain the following bounds (independent of \(N_{0}\)): for \(|\xi|\sim N\)
\[\begin{split}\left|\frac{1}{\nabla_{\xi}\phi(\xi)}\right|& \lesssim N^{-1}\langle N\rangle^{3},\\ \left|\nabla_{\xi}\cdot\left(\frac{\nabla_{\xi}\phi}{|\nabla_{ \xi}\phi|^{2}}\right)\right|&\lesssim N^{-2}\langle N\rangle^{5},\\ \left|\nabla_{\xi}\cdot\left[\nabla_{\xi}\cdot\left(\frac{ \nabla_{\xi}\phi}{|\nabla_{\xi}\phi|^{2}}\right)\frac{\nabla_{\xi}\phi}{|\nabla _{\xi}\phi|^{2}}\right]\right|&\lesssim N^{-4}\langle N\rangle^{ 10}.\end{split} \tag{2.10}\]
Using (2.10) and the Sobolev embedding, we see that
\[\begin{split}\left|I_{N}^{1}(t,x)\right|&\lesssim t^{-2}N^{-2}\langle N\rangle^{\ell+6}\left\|\nabla_{\xi}^{2}\left(\widehat{f_{N}}\right)\right\|_{L_{\xi}^{1}}\\ &\lesssim t^{-2}\langle N\rangle^{\ell+6}\Big{(}N^{-1}\|x^{2}f\|_{L^{2}}+N^{-2}\left\|\nabla\rho_{N}\right\|_{L^{\frac{4}{3}}}\|\langle x\rangle^{2}f\|_{L^{2}}\\ &\qquad\qquad\qquad+N^{-2}\min\left(\langle N\rangle^{-n}N^{-1}\|f\|_{H^{n}},\langle N\rangle^{-k}\|\widehat{f}\|_{L_{\xi}^{\infty}}\right)\Big{)}\\ &\lesssim t^{-2}\langle N\rangle^{\ell+6}\left((N^{-1}+N^{-\frac{3}{2}})t^{1+2\delta_{0}}+N^{-2}\min\left(\langle N\rangle^{-n+1}t^{1+\delta_{0}},t\right)\right),\end{split} \tag{2.11}\]
which implies that
\[\sum_{\langle t\rangle^{-\frac{1}{2}}\leq N\leq\langle t\rangle^{\frac{2}{n}} }\left|I_{N}^{1}(t,x)\right|\lesssim\varepsilon_{1}.\]
\(I_{N}^{2}\) can be estimated similarly. Indeed, one has
\[\begin{split}&|I_{N}^{2}(t,x)|\lesssim t^{-2}N^{-3}\langle N \rangle^{\ell+8}\left\|\nabla_{\xi}\left(\rho_{N}\widehat{f}\right)\right\|_{ L_{\xi}^{1}}\\ &\lesssim t^{-2}\langle N\rangle^{\ell+8}\left(N^{-2}\left\| \nabla\rho_{N}\right\|_{L^{\frac{4}{3}}}\|\langle x\rangle^{2}f\|_{L^{2}}+N^{- 2}\min\left(\langle N\rangle^{-n}N^{-1}\|f\|_{H^{n}},\langle N\rangle^{-k}\| \widehat{f}\|_{L_{\xi}^{\infty}}\right)\right),\end{split}\]
which yields
\[\sum_{\langle t\rangle^{-\frac{1}{2}}\leq N\leq\langle t\rangle^{\frac{2}{n}} }|I_{N}^{2}(t,x)|\lesssim\varepsilon_{1}.\]
Lastly, we estimate
\[\begin{split}\sum_{\langle t\rangle^{-\frac{1}{2}}\leq N\leq \langle t\rangle^{\frac{2}{n}}}|I_{N}^{3}(t,x)|&\lesssim\sum_{ \langle t\rangle^{-\frac{1}{2}}\leq N\leq\langle t\rangle^{\frac{2}{n}}}t^{-2} N^{-4}\langle N\rangle^{\ell+10}\|\rho_{N}\widehat{f}\|_{L_{\xi}^{1}}\\ &\lesssim\sum_{\langle t\rangle^{-\frac{1}{2}}\leq N\leq\langle t \rangle^{\frac{2}{n}}}t^{-2}\langle N\rangle^{\ell+10}N^{-2}\min\left(\langle N \rangle^{-n}N^{-1}\|f\|_{H^{n}},\langle N\rangle^{-k}\|\widehat{f}\|_{L_{\xi}^ {\infty}}\right)\\ &\lesssim\varepsilon_{1}.\end{split}\]
It remains to consider the stationary phase case \(N\sim N_{0}\). We further decompose the frequency space dyadically around \(\xi_{0}\). Let \(L_{0}\in 2^{\mathbb{Z}}\) be such that \(\frac{L_{0}}{2}<t^{-\frac{1}{2}}\leq L_{0}\). We write
\[\left|\int_{\mathbb{R}^{2}}\left\langle\xi\right\rangle^{\ell}e^{it\phi(\xi)} \rho_{N}(\xi)\widehat{f}(\xi)d\xi\right|\leq\sum_{L=L_{0}}^{2^{10}N}|J_{L}|\]
where
\[J_{L}(t,x)=\left\{\begin{aligned} &\int_{\mathbb{R}^{2}}\left\langle\xi \right\rangle^{\ell}e^{it\phi(\xi)}\rho_{\leq L_{0}}(\xi-\xi_{0})\rho_{N}(\xi) \widehat{f}(\xi)d\xi&\text{when }L=L_{0},\\ &\int_{\mathbb{R}^{2}}\left\langle\xi\right\rangle^{\ell}e^{it \phi(\xi)}\rho_{L}(\xi-\xi_{0})\rho_{N}(\xi)\widehat{f}(\xi)d\xi& \text{when }L>L_{0}.\end{aligned}\right.\]
We bound \(J_{L_{0}}\), whose support contains the stationary point, simply by the measure of its support:
\[|J_{L_{0}}|\lesssim L_{0}^{2}\langle N\rangle^{l}\|\rho_{N}\widehat{f}\|_{L_{ \xi}^{\infty}}\lesssim t^{-1}\|\langle\xi\rangle^{k}\widehat{f}\|_{L_{\xi}^{ \infty}}\lesssim\varepsilon_{1}.\]
When \(L>L_{0}\), we return to the non-stationary phase cases. By integrating by parts twice, we decompose \(J_{L}(t,x)\) into
\[J_{L}(t,x)=J_{L}^{1}(t,x)+J_{L}^{2}(t,x)+J_{L}^{3}(t,x),\]
where
\[J_{L}^{1}(t,x) =-t^{-2}\int_{\mathbb{R}^{2}}\left\langle\xi\right\rangle^{l}e^{ it\phi(\xi)}\frac{\nabla_{\xi}\phi}{|\nabla_{\xi}\phi|^{4}}\cdot\nabla_{\xi} \phi\nabla_{\xi}^{2}\left(\widehat{f}(\xi)\rho_{N}(\xi)\rho_{L}(\xi-\xi_{0}) \right)d\xi,\] \[J_{L}^{2}(t,x) =-2t^{-2}\int_{\mathbb{R}^{2}}\left\langle\xi\right\rangle^{l}e^ {it\phi(\xi)}\nabla_{\xi}\cdot\left(\frac{\nabla_{\xi}\phi}{|\nabla_{\xi}\phi |^{2}}\right)\frac{\nabla_{\xi}\phi}{|\nabla_{\xi}\phi|^{2}}\cdot\nabla_{\xi} \left(\widehat{f}(\xi)\rho_{N}(\xi)\rho_{L}(\xi-\xi_{0})\right)d\xi,\] \[J_{L}^{3}(t,x) =-t^{-2}\int_{\mathbb{R}^{2}}\left\langle\xi\right\rangle^{l}e^ {it\phi(\xi)}\nabla_{\xi}\cdot\left[\nabla_{\xi}\cdot\left(\frac{\nabla_{\xi} \phi}{|\nabla_{\xi}\phi|^{2}}\right)\frac{\nabla_{\xi}\phi}{|\nabla_{\xi}\phi |^{2}}\right]\widehat{f}(\xi)\rho_{N}(\xi)\rho_{L}(\xi-\xi_{0})d\xi.\]
We estimate \(J_{L}^{i}\) for \(i=1,2,3\) similarly to the above, but the following bounds are employed instead of (2.10): for \(|\xi|\sim N\) and \(|\xi-\xi_{0}|\sim L\),
\[|\nabla_{\xi}\phi(\xi)|^{-1} \lesssim L^{-1}\langle N\rangle^{3},\] \[\left|\nabla_{\xi}\cdot\left(\frac{\nabla_{\xi}\phi}{|\nabla_{\xi }\phi|^{2}}\right)\right| \lesssim L^{-2}\langle N\rangle^{5},\] \[\left|\nabla_{\xi}\cdot\left[\nabla_{\xi}\cdot\left(\frac{ \nabla_{\xi}\phi}{|\nabla_{\xi}\phi|^{2}}\right)\frac{\nabla_{\xi}\phi}{| \nabla_{\xi}\phi|^{2}}\right]\right| \lesssim L^{-4}\langle N\rangle^{10}.\]
The computation as in (2.11) gives the desired results. We omit the details and complete the proof of (2.4).
Since we have to handle the multipliers to exploit the time decay (2.4) in our main proof, we introduce some useful estimates in the rest of this section.
**Lemma 2.2** (Coifman-Meyer operator estimates).: _Assume that a multiplier \(\textbf{m}(\xi,\eta)\) satisfies that_
\[C_{\textbf{m}}:=\left\|\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\textbf{m}( \xi,\eta)e^{ix\cdot\xi}e^{iy\cdot\eta}\,d\eta d\xi\right\|_{L^{1}_{x,y}(\mathbb{ R}^{2}\times\mathbb{R}^{2})}<\infty. \tag{2.12}\]
_Then, for \(\frac{1}{p}+\frac{1}{q}=\frac{1}{2}\),_
\[\left\|\int_{\mathbb{R}^{2}}\textbf{m}(\xi,\eta)\widehat{u}(\xi\pm\eta) \widehat{v}(\eta)\,d\eta\right\|_{L^{2}_{\xi}(\mathbb{R}^{2})}\lesssim C_{ \textbf{m}}\|u\|_{L^{p}}\|v\|_{L^{q}}, \tag{2.13}\]
_and for \(\frac{1}{p}+\frac{1}{q}+\frac{1}{r}=1\),_
\[\left|\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\mathbf{m}(\eta,\sigma)\widehat{u }(\eta\pm\sigma)\widehat{v}(\eta)\widehat{w}(\sigma)\,d\sigma d\eta\right| \lesssim C_{\mathbf{m}}\|u\|_{L^{p}}\|v\|_{L^{q}}\|w\|_{L^{r}}. \tag{2.14}\]
## 3. Weighted Energy estimate
In this section, we prove the energy estimate which plays an important role in the bootstrap argument. In the following Proposition 3.1, we bound the weighted norms \(\|u\|_{E_{1}},\|u\|_{E_{2}}\) in the a priori assumption (2.3).
**Proposition 3.1** (Weighted energy estimate).: _Assume that \(u\in C([0,T],H^{n})\) satisfies the a priori assumption (2.3) for some \(\varepsilon_{1}>0\) with initial data condition (1.6) for \(\varepsilon_{0}>0\). Suppose the index conditions (2.2). Then, we have the following estimates_
\[\sup_{t\in[0,T]}\langle t\rangle^{-\delta_{0}}\|u(t)\|_{E_{1}} \leq\varepsilon_{0}+C\varepsilon_{1}^{3}, \tag{3.1}\] \[\sup_{t\in[0,T]}\langle t\rangle^{-2\delta_{0}}\|u(t)\|_{E_{2}} \leq\varepsilon_{0}+C\varepsilon_{1}^{3}, \tag{3.2}\]
_with \(\delta_{0}=\frac{1}{100}\)._
### Useful inequalities
For the purpose of proving weighted energy estimates, we introduce some useful inequalities.
**Lemma 3.2** (Lemma 3.2 in [4]).: _For any \(\mathbb{C}\)-valued functions \(u\in L^{2}(\mathbb{R}^{2})\cap L^{\infty}(\mathbb{R}^{2})\), we get_
\[\left\|\left|x\right|^{-1}*(\left|u\right|^{2})\right\|_{L^{\infty}(\mathbb{R} ^{2})}\lesssim\|u\|_{L^{2}}\|u\|_{L^{\infty}}. \tag{3.3}\]
Under the a priori assumption, we find the bounds for the frequency localized terms.
**Lemma 3.3**.: _Let \(u\) satisfy the a priori assumption (2.3) for some \(\varepsilon_{1}>0\). Suppose the index conditions (2.2). Then, for \(0\leq\gamma\leq 1\) and a dyadic number \(N\in 2^{\mathbb{Z}}\),_
\[\|P_{N}u(t)\|_{L^{\infty}(\mathbb{R}^{2})}\lesssim N^{\gamma}\langle N\rangle ^{-k(1-\gamma)}\langle t\rangle^{-(1-\gamma)}\epsilon_{1}. \tag{3.4}\]
_We also have_
\[\|P_{N}f(t)\|_{L^{2}(\mathbb{R}^{2})} \lesssim\min(N^{\frac{1}{2}}\langle t\rangle^{\frac{1}{2}\delta_{0}},\langle N\rangle^{-n})\epsilon_{1}, \tag{3.5}\] \[\|P_{N}xf(t)\|_{L^{2}(\mathbb{R}^{2})} \lesssim\min(N^{\frac{1}{2}}\langle t\rangle^{\frac{3}{2}\delta_{0}},\langle N\rangle^{-2}\langle t\rangle^{\delta_{0}})\epsilon_{1}. \tag{3.6}\]
Proof.: By Young's inequality, we have from (2.4) that
\[\|P_{N}u(t)\|_{L^{\infty}(\mathbb{R}^{2})}\leq\left\|\mathcal{F}^{-1}\left( \rho_{N}\langle\xi\rangle^{-k}\right)\right\|_{L^{1}}\|u(t)\|_{W^{k,\infty}} \lesssim\langle N\rangle^{-k}\langle t\rangle^{-1}\epsilon_{1}.\]
On the other hand, interpolating time decay estimates (2.4) and the mass conservation law (1.2), we get for \(2\leq p\leq\infty\),
\[\|P_{N}u(t)\|_{L^{p}(\mathbb{R}^{2})}\lesssim\|P_{N}u(t)\|_{L^{\infty}}^{1- \frac{2}{p}}\|u_{0}\|_{L^{2}}^{\frac{2}{p}}\lesssim\langle N\rangle^{-k(1- \frac{2}{p})}\langle t\rangle^{-\left(1-\frac{2}{p}\right)}\varepsilon_{1}^{2}.\]
Then, by Bernstein's inequality, we obtain for \(2\leq p<\infty\)
\[\|P_{N}u(t)\|_{L^{\infty}(\mathbb{R}^{2})}\lesssim N^{\frac{2}{p}}\|P_{N}u(t) \|_{L^{p}(\mathbb{R}^{2})}\lesssim N^{\frac{2}{p}}\langle N\rangle^{-k(1- \frac{2}{p})}\langle t\rangle^{-(1-\frac{2}{p})}\epsilon_{1}.\]
Next, by using Bernstein's inequality and interpolating weighted norms, one can obtain
\[\|P_{N}f\|_{L^{2}(\mathbb{R}^{2})}\lesssim N^{\frac{1}{2}}\|f\|_{L^{\frac{4}{3} }(\mathbb{R}^{2})}\lesssim N^{\frac{1}{2}}\|u_{0}\|_{L^{2}}^{\frac{1}{2}}\|xf \|_{L^{2}}^{\frac{1}{2}}\lesssim N^{\frac{1}{2}}\langle t\rangle^{\frac{1}{2} \delta_{0}}\epsilon_{1}^{2},\]
or, one has
\[\|P_{N}f\|_{L^{2}(\mathbb{R}^{2})}\lesssim\langle N\rangle^{-n}\|u\|_{H^{n}( \mathbb{R}^{2})}.\]
Then, (3.5) follows by interpolating above two estimates. The last inequality (3.6) can be obtained similarly.
Next, we consider the quadratic terms.
**Lemma 3.4**.: _Let \(u\) satisfy the a priori assumption (2.3) for some \(\varepsilon_{1}>0\) with the index conditions (2.2). For a dyadic number \(N\in 2^{\mathbb{Z}}\), we have_
\[\Big{\|}P_{N}\Big{(}|u(t)|^{2}\Big{)}\Big{\|}_{L^{\infty}(\mathbb{R}^{2})} \lesssim\min(\langle N\rangle^{-k}\langle t\rangle^{-2},N^{2})\varepsilon_{1}^{2}, \tag{3.7}\] \[\Big{\|}P_{N}\Big{(}|u(t)|^{2}\Big{)}\Big{\|}_{L^{2}(\mathbb{R}^{2})} \lesssim N\langle N\rangle^{-\frac{k}{2}}\varepsilon_{1}^{2}. \tag{3.8}\]
Proof.: We refer to [3, Lemma 2.4] where the three dimensional case is proved.
We observe that no time decay was obtained in (3.8). In the following lemma, we recover time decay in (3.8) at the cost of derivatives. In particular, the space resonance is exploited to obtain the almost second-order time decay \(|t|^{-2+}\).
**Lemma 3.5**.: _Let \(u\) satisfy the a priori assumption (2.3) for some \(\varepsilon_{1}>0\) with the index conditions (2.2). Then, for a dyadic number \(N\in 2^{\mathbb{Z}}\), we have_
\[\|P_{N}|u(t)|^{2}\|_{L^{2}(\mathbb{R}^{2})} \lesssim\langle t\rangle^{-1+\delta_{0}}\langle N\rangle^{-k}\epsilon_{1}^{2},\] \[\|P_{N}|u(t)|^{2}\|_{L^{2}(\mathbb{R}^{2})} \lesssim\langle t\rangle^{-2+\frac{3}{2}\delta_{0}}N^{-1}\langle N\rangle^{-1}\epsilon_{1}^{2}. \tag{3.9}\]
Proof.: Plancherel's identity yields that
\[\|P_{N}|u(t)|^{2}\|_{L^{2}(\mathbb{R}^{2})}=\|\rho_{N}\mathcal{F} \big{(}|u(t)|^{2}\big{)}\|_{L^{2}(\mathbb{R}^{2})}.\]
We write
\[\rho_{N}(\eta)\mathcal{F}\big{(}|u(t)|^{2}\big{)}(\eta) =\frac{1}{(2\pi)^{2}}\int_{\mathbb{R}^{2}}\rho_{N}(\eta)\widehat{ u}(\sigma)\overline{\widehat{u}(\sigma+\eta)}d\sigma\] \[=\frac{1}{(2\pi)^{2}}\int_{\mathbb{R}^{2}}\frac{\rho_{N}(\eta)}{ \langle\sigma\rangle^{k}\langle\sigma+\eta\rangle^{k}}\widehat{\langle D \rangle^{k}u}(\sigma)\overline{\widehat{\langle D\rangle^{k}u}(\sigma+\eta)}d\sigma.\]
If we let
\[\mathbf{m}(\eta,\sigma):=\frac{\rho_{N}(\eta)}{\langle\sigma\rangle^{k} \langle\sigma+\eta\rangle^{k}},\]
one can verify that
\[C_{\mathbf{m}}\lesssim\langle N\rangle^{-k},\]
where the constant \(C_{\mathbf{m}}\) is defined in (2.12). Thus, by the Coifman-Meyer estimates (2.13), we have
\[\|\rho_{N}\mathcal{F}\big{(}|u(t)|^{2}\big{)}\|_{L^{2}(\mathbb{R}^{2})} \lesssim\langle N\rangle^{-k}\|u\|_{H^{k}}\|\langle D\rangle^{k}u\|_{L^{ \infty}}\lesssim\langle N\rangle^{-k}\langle t\rangle^{-1+\delta_{0}} \epsilon_{1}^{2}.\]
Next, we consider (3.9). We write
\[\mathcal{F}\big{(}|u(t)|^{2}\big{)}(\eta)=\frac{1}{(2\pi)^{2}} \int_{\mathbb{R}^{2}}e^{it(\langle\sigma+\eta\rangle-\langle\sigma\rangle)} \widehat{f}(t,\sigma)\overline{\widehat{f}(t,\eta+\sigma)}d\sigma, \tag{3.10}\]
where \(f(t,x)=e^{it\langle D\rangle}u(t,x)\). We perform an integration by parts to obtain
\[\mathcal{F}\big{(}|u(t)|^{2}\big{)}(\eta)=\frac{1}{(2\pi)^{2}} \Big{(}I_{1}(t,\eta)+I_{2}(t,\eta)+I_{3}(t,\eta)\Big{)},\]
where
\[I_{1}(t,\eta) =\frac{i}{t}\int_{\mathbb{R}^{2}}\frac{\nabla_{\sigma}(\langle\sigma +\eta\rangle-\langle\sigma\rangle)}{|\nabla_{\sigma}(\langle\sigma+\eta\rangle- \langle\sigma\rangle)|^{2}}e^{-it\langle\sigma\rangle}\cdot\widehat{x_{f}}(t, \sigma)\overline{\widehat{u}(t,\eta+\sigma)}d\sigma,\] \[I_{2}(t,\eta) =\frac{i}{t}\int_{\mathbb{R}^{2}}\frac{\nabla_{\sigma}(\langle \sigma+\eta\rangle-\langle\sigma\rangle)}{|\nabla_{\sigma}(\langle\sigma+\eta \rangle-\langle\sigma\rangle)|^{2}}e^{it\langle\sigma+\eta\rangle}\cdot \widehat{u}(t,\sigma)\overline{\widehat{x_{f}}(t,\eta+\sigma)}d\sigma,\] \[I_{3}(t,\eta) =\frac{i}{t}\int_{\mathbb{R}^{2}}\nabla_{\sigma}\cdot\left( \frac{\nabla_{\sigma}(\langle\sigma+\eta\rangle-\langle\sigma\rangle)}{| \nabla_{\sigma}(\langle\sigma+\eta\rangle-\langle\sigma\rangle)|^{2}}\right) \widehat{u}(t,\sigma)\overline{\widehat{u}(t,\eta+\sigma)}d\sigma.\]
First, we consider \(I_{1}\). We perform dyadic decomposition in the variables \(\sigma\) and \(\eta+\sigma\) to write
\[\rho_{N}(\eta)I_{1}(t,\eta) =\sum_{(N_{1},N_{2})\in(2^{\mathbb{Z}})^{2}}I_{1}^{(N,N_{1},N_{2})}(t,\eta),\] \[I_{1}^{(N,N_{1},N_{2})}(t,\eta) :=\frac{i}{t}\int_{\mathbb{R}^{2}}\mathbf{m}_{(N,N_{1},N_{2})}(\eta,\sigma)e^{-it\langle\sigma\rangle}\widehat{P_{N_{1}}xf}(t,\sigma)\overline{\widehat{P_{N_{2}}u}(t,\eta+\sigma)}d\sigma, \tag{3.11}\]
where
\[\mathbf{m}_{(N,N_{1},N_{2})}(\eta,\sigma)=\frac{\nabla_{\sigma}( \langle\sigma+\eta\rangle-\langle\sigma\rangle)}{|\nabla_{\sigma}(\langle \sigma+\eta\rangle-\langle\sigma\rangle)|^{2}}\rho_{N}(\eta)\rho_{N_{1}}(\eta +\sigma)\rho_{N_{2}}(\sigma).\]
Since
\[|\nabla_{\sigma}(\langle\sigma+\eta\rangle-\langle\sigma\rangle)|= \left|\frac{\eta+\sigma}{\langle\eta+\sigma\rangle}-\frac{\sigma}{\langle \sigma\rangle}\right|\gtrsim\frac{|\eta|}{\max(\langle\eta+\sigma\rangle, \langle\sigma\rangle)\min(\langle\eta+\sigma\rangle,\langle\sigma\rangle)^{2}},\]
a direct computation yields that 3
Footnote 3: For detailed computation, we refer to [3].
\[|\mathbf{m}_{(N,N_{1},N_{2})}(\eta,\sigma)|\lesssim N^{-1}\max( \langle N_{1}\rangle,\langle N_{2}\rangle)\min(\langle N_{1}\rangle,\langle N _{2}\rangle)^{2}\]
and
\[C_{\mathbf{m}_{(N,N_{1},N_{2})}}\lesssim N^{-1}\max(\langle N_{1 }\rangle,\langle N_{2}\rangle)\min(\langle N_{1}\rangle,\langle N_{2}\rangle )^{10}. \tag{3.12}\]
Applying the operator inequality (2.13) with (3.12), we obtain
\[\left\|I_{1}^{(N,N_{1},N_{2})}(t)\right\|_{L^{2}(\mathbb{R}^{2})}\] \[\qquad\lesssim|t|^{-1}N^{-1}\max(\langle N_{1}\rangle,\langle N _{2}\rangle)\min(\langle N_{1}\rangle,\langle N_{2}\rangle)^{10}\|P_{N_{1}}xf( t)\|_{L^{2}}\|P_{N_{2}}u(t)\|_{L^{\infty}}\]
We observe that the sums in (3.11) are actually taken over those indexes \((N_{1},N_{2})\) satisfying
\[N\lesssim N_{1}\sim N_{2}\ \ \text{or}\ \ N_{\min}\ll N_{\max}\sim N,\]
where \(N_{\max}=\max(N_{1},N_{2})\) and \(N_{\min}=\min(N_{1},N_{2})\). Thus, using Lemma 3.3, we estimate
\[\sum_{N_{1}\lesssim N_{2}}\left\|I_{1}^{(N,N_{1},N_{2})}(t)\right\|_ {L^{2}(\mathbb{R}^{2})}\] \[\lesssim|t|^{-1}N^{-1}\Big{(}\sum_{N_{1}\ll N_{2}\sim N}+\sum_{N \lesssim N_{1}\sim N_{2}}\Big{)}\langle N_{2}\rangle^{1-k}\langle N_{1} \rangle^{10}\min(N_{1}^{\frac{1}{2}}\langle t\rangle^{\frac{3}{2}\delta_{0}}, \langle N_{1}\rangle^{-2}\langle t\rangle^{\delta_{0}})\langle t\rangle^{-1} \epsilon_{1}^{2}\] \[\lesssim\langle t\rangle^{-2+\frac{3}{2}\delta_{0}}N^{-1} \langle N\rangle^{-1}\]
and
\[\sum_{N_{2}\ll N_{1}\sim N}\left\|I_{1}^{(N,N_{1},N_{2})}(t)\right\|_{L^{2}(\mathbb{R}^{2})} \lesssim|t|^{-1}N^{-1}\sum_{N_{2}\ll N_{1}\sim N}\langle N_{1}\rangle^{-1}\langle N_{2}\rangle^{10-k+k\frac{\delta_{0}}{2}}\langle t\rangle^{\delta_{0}}N_{2}^{\frac{\delta_{0}}{2}}\langle t\rangle^{-1+\frac{1}{2}\delta_{0}}\epsilon_{1}^{2}\] \[\lesssim\langle t\rangle^{-2+\frac{3}{2}\delta_{0}}N^{-1}\langle N\rangle^{-1}\sum_{N_{2}\ll N}\langle N_{2}\rangle^{10-k+\frac{\delta_{0}}{2}}N_{2}^{\frac{\delta_{0}}{2}}\epsilon_{1}^{2}\] \[\lesssim\langle t\rangle^{-2+\frac{3}{2}\delta_{0}}N^{-1}\langle N\rangle^{-1}\epsilon_{1}^{2}.\]
By symmetry, we may omit the proof of the estimates for \(I_{2}\).
For \(I_{3}\), we also perform dyadic decomposition and write
\[\rho_{N}(\eta)I_{3}(t,\eta) =\sum_{(N_{1},N_{2})\in(2^{\mathbb{Z}})^{2}}I_{3}^{(N,N_{1},N_{2})}(t,\eta),\] \[I_{3}^{(N,N_{1},N_{2})}(t,\eta) =\frac{i}{t}\int_{\mathbb{R}^{2}}\mathbf{m^{\prime}}_{(N,N_{1},N_{2})}(\eta,\sigma)\widehat{P_{N_{1}}u}(t,\sigma)\overline{\widehat{P_{N_{2}}u}(t,\eta+\sigma)}d\sigma,\]
where
\[\mathbf{m^{\prime}}_{(N,N_{1},N_{2})}(\eta,\sigma)=\nabla_{\sigma}\cdot\left( \frac{\nabla_{\sigma}(\langle\sigma+\eta\rangle-\langle\sigma\rangle)}{| \nabla_{\sigma}(\langle\sigma+\eta\rangle-\langle\sigma\rangle)|^{2}} \right)\rho_{N}(\eta)\rho_{N_{1}}(\eta+\sigma)\rho_{N_{2}}(\sigma).\]
Then, one easily verifies that
\[|\mathbf{m^{\prime}}_{(N,N_{1},N_{2})}(\eta,\sigma)|\lesssim N^{-1}\max( \langle N_{1}\rangle,\langle N_{2}\rangle)\min(\langle N_{1}\rangle,\langle N _{2}\rangle)^{3}\]
and
\[C_{\mathbf{m^{\prime}}_{(N,N_{1},N_{2})}}\lesssim N^{-1}\max(\langle N_{1} \rangle,\langle N_{2}\rangle)\min(\langle N_{1}\rangle,\langle N_{2}\rangle)^ {11}. \tag{3.13}\]
Applying the operator inequality (2.13) with (3.13), we obtain
\[\sum_{(N_{1},N_{2})\in(2^{\mathbb{Z}})^{2}}\left\|I_{3}^{(N,N_{1},N_{2})}(t)\right\|_{L^{2}(\mathbb{R}^{2})}\] \[\lesssim|t|^{-1}N^{-1}\sum_{(N_{1},N_{2})\in(2^{\mathbb{Z}})^{2}}\max(\langle N_{1}\rangle,\langle N_{2}\rangle)\min(\langle N_{1}\rangle,\langle N_{2}\rangle)^{11}\|P_{N_{1}}u\|_{L^{2}}\|P_{N_{2}}u(t)\|_{L^{\infty}}\] \[\lesssim|t|^{-2+\frac{3}{2}\delta_{0}}N^{-1}\sum_{(N_{1},N_{2})\in(2^{\mathbb{Z}})^{2}}\max(\langle N_{1}\rangle,\langle N_{2}\rangle)\min(\langle N_{1}\rangle,\langle N_{2}\rangle)^{11}N_{1}^{\frac{1}{2}}\langle N_{1}\rangle^{-n+\frac{1}{2}}N_{2}^{\delta_{0}}\langle N_{2}\rangle^{-k+k\delta_{0}}\epsilon_{1}^{2}\] \[\lesssim\langle t\rangle^{-2+\delta_{0}}N^{-1}\langle N\rangle^{-1}.\]
Here we used Lemma 3.3 in the second inequality. This finishes the proof.
### Proof of Proposition 3.1
Now, we begin with the proof of Proposition 3.1.
Proof of (3.1).: Let us first handle the high Sobolev norm in \(\|u(t)\|_{E_{1}}\). This can be bounded by Hardy-Littlewood-Sobolev inequality and (3.3). Indeed, we first observe that by interpolation of the time decay estimates (2.4) and the conservation law (1.2), we get for \(2\leq p\leq\infty\),
\[\|u(t)\|_{L^{p}_{x}(\mathbb{R}^{2})}\lesssim\|u(t)\|_{L^{\infty}_{x}}^{1-\frac{2}{p}}\|u(t)\|_{L^{2}_{x}}^{\frac{2}{p}}\lesssim\langle t\rangle^{-\left(1-\frac{2}{p}\right)}\varepsilon_{1}.\]
Then, we estimate
\[\left\|\left|x\right|^{-1}*|u(t)|^{2}\right\|_{L^{\infty}_{x}(\mathbb{R}^{2})} \lesssim\|u_{0}\|_{L^{2}_{x}}\|u(t)\|_{L^{\infty}_{x}}\lesssim\langle t \rangle^{-1}\varepsilon_{1}^{2}\]
and
\[\left\|\langle D\rangle^{n}\left(|x|^{-1}*|u(t)|^{2}\right)\right\|_{L^{4}_{x} (\mathbb{R}^{2})}\lesssim\|u(t)\|_{H^{n}}\|u(t)\|_{L^{4}_{x}}\lesssim\langle t \rangle^{-\frac{1}{2}+\delta_{0}}\varepsilon_{1}^{2},\]
which imply
\[\|u(t)\|_{H^{n}(\mathbb{R}^{2})}\lesssim\varepsilon_{0}+C\langle t\rangle^{\delta_ {0}}\varepsilon_{1}^{3}.\]
Let us consider \(\|xe^{it\langle D\rangle}u\|_{H^{2}(\mathbb{R}^{2})}\). Note that
\[\|xe^{it\langle D\rangle}u\|_{H^{2}}\sim\left\|\langle\xi\rangle^{2}\mathcal{F }\left(xe^{it\langle D\rangle}u\right)\right\|_{L_{\xi}^{2}}\sim\left\|\langle \xi\rangle^{2}\nabla_{\xi}\widehat{f}\right\|_{L_{\xi}^{2}}.\]
By Duhamel's formula (1.11), \(\nabla_{\xi}\widehat{f}\) can be represented by
\[\nabla_{\xi}\widehat{f}(t,\xi)=\nabla_{\xi}\widehat{u_{0}}(\xi)+\frac{i\lambda }{2\pi}\int_{0}^{t}\Big{[}\mathcal{I}^{1}(s,\xi)+\mathcal{I}^{2}(s,\xi)\Big{]}ds,\]
where
\[\mathcal{I}^{1}(s,\xi) =\int_{\mathbb{R}^{2}}e^{is\phi(\xi,\eta)}|\eta|^{-1}\nabla_{ \xi}\widehat{f}(\xi-\eta)\mathcal{F}(|u|^{2})(\eta)\,d\eta,\] \[\mathcal{I}^{2}(s,\xi) =is\int_{\mathbb{R}^{2}}\nabla_{\xi}\phi(\xi,\eta)e^{is\phi(\xi, \eta)}|\eta|^{-1}\widehat{f}(\xi-\eta)\mathcal{F}(|u|^{2})(\eta)\,d\eta.\]
Here we defined a resonant function \(\phi\):
\[\phi(\xi,\eta)=\langle\xi\rangle-\langle\xi-\eta\rangle,\quad\text{and}\quad \nabla_{\xi}\phi(\xi,\eta)=\frac{\xi}{\langle\xi\rangle}-\frac{\xi-\eta}{ \langle\xi-\eta\rangle}. \tag{3.14}\]
We estimate the contribution from \(\mathcal{I}^{1}\) and \(\mathcal{I}^{2}\) under the a priori assumption (2.3) as follows:
\[\left\|\langle\xi\rangle^{2}\mathcal{I}^{1}(s,\xi)\right\|_{L_{\xi}^{2}}+ \left\|\langle\xi\rangle^{2}\mathcal{I}^{2}(s,\xi)\right\|_{L_{\xi}^{2}} \lesssim\langle s\rangle^{-1+\delta_{0}}\varepsilon_{1}^{3}.\]
Estimates for \(\mathcal{I}^{1}\). By Lemma 3.3 and the a priori assumption (2.3), we get
\[\left\|\langle\xi\rangle^{2}\mathcal{I}^{1}(s,\xi)\right\|_{L_{\xi}^{2}} \lesssim\|u(s)\|_{L^{\infty}}\|u(s)\|_{H^{2}}\|xf(s)\|_{H^{2}}\lesssim\langle s \rangle^{-1+\delta_{0}}\varepsilon_{1}^{3}. \tag{3.15}\]
Estimates for \(\mathcal{I}^{2}\). We decompose the frequency variables \(|\xi|,|\xi-\eta|\) into dyadic pieces \(N_{0},N_{1}\in 2^{\mathbb{Z}}\), respectively. We also localize \(|\eta|\), the frequency associated to the potential, to a dyadic piece \(N_{2}\in 2^{\mathbb{Z}}\). Then, we write
\[\langle\xi\rangle^{2}\mathcal{I}^{2}(s,\xi) =\sum_{\mathbf{N}:=(N_{0},N_{1},N_{2})\in(2^{\mathbb{Z}})^{3}} \mathcal{I}^{2}_{\mathbf{N}}(s,\xi),\] \[\mathcal{I}^{2}_{\mathbf{N}}(s,\xi) =is\int_{\mathbb{R}^{2}}\mathbf{m}_{\mathbf{N}}(\xi,\eta)e^{is\phi (\xi,\eta)}\widehat{P_{N_{1}}f}(\xi-\eta)\widehat{P_{N_{2}}(|u|^{2})}(\eta)\,d\eta,\]
where
\[\mathbf{m}_{\mathbf{N}}(\xi,\eta)=\langle\xi\rangle^{2}|\eta|^{-1}\nabla_{\xi}\phi(\xi,\eta)\rho_{N_{0}}(\xi)\rho_{N_{2}}(\eta)\rho_{N_{1}}(\xi-\eta).\]
We observe that the sums are actually taken over those indices \((N_{0},N_{1},N_{2})\) in the following set
\[\mathcal{N}:=\left\{(N_{0},N_{1},N_{2})\in(2^{\mathbb{Z}})^{3}\mid N_{0} \lesssim N_{1}\sim N_{2}\text{ or }N_{0}\sim\max(N_{1},N_{2})\right\}.\]
Indeed, the integral \(\mathcal{I}^{2}_{\mathbf{N}}\) is zero for \((N_{0},N_{1},N_{2})\notin\mathcal{N}\).
We can estimate \(\mathcal{I}^{2}_{\mathbf{N}}(s)\) in two ways. First, we have by Holder inequality
\[\left\|\mathcal{I}^{2}_{\mathbf{N}}(s,\xi)\right\|_{L_{\xi}^{2}}\lesssim|s|\, \|\mathbf{m}_{\mathbf{N}}(\xi,\eta)\|_{L_{\xi,\eta}^{\infty}}\,\|\rho_{N_{0}}\| _{L^{2}}\left\|P_{N_{2}}|u(s)|^{2}\right\|_{L^{2}}\left\|P_{N_{1}}f(s)\,\right\| _{L^{2}}. \tag{3.16}\]
On the other hand, by using the operator inequality (2.13) with the constant \(C_{\mathbf{m}_{\mathbf{N}}}\) satisfying
\[C_{\mathbf{m}_{\mathbf{N}}}:=\left\|\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}} \mathbf{m}_{\mathbf{N}}(\xi,\eta)e^{ix\cdot\xi}e^{iy\cdot\eta}\,d\eta d\xi \right\|_{L_{x,y}^{1}(\mathbb{R}^{2}\times\mathbb{R}^{2})}<\infty,\]
we have
\[\left\|\mathcal{I}^{2}_{\mathbf{N}}(s,\xi)\right\|_{L_{\xi}^{2}}\lesssim|s|C_{ \mathbf{m}_{\mathbf{N}}}\left\|P_{N_{2}}|u(s)|^{2}\right\|_{L^{\infty}}\left\|P_ {N_{1}}f(s)\,\right\|_{L^{2}}. \tag{3.17}\]
From the following inequality
\[|\nabla_{\xi}\phi(\xi,\eta)|=\left|\frac{\xi}{\langle\xi\rangle}-\frac{\xi-\eta}{ \langle\xi-\eta\rangle}\right|\lesssim\frac{|\eta|}{\max(\langle\xi\rangle, \langle\xi-\eta\rangle)}, \tag{3.18}\]
one can readily verify that
\[\sup_{\xi,\eta\in\mathbb{R}^{2}}|\mathbf{m}_{\mathbf{N}}(\xi,\eta)|\lesssim\langle N_{0}\rangle^{2}\max(\langle N_{0}\rangle,\langle N_{1}\rangle)^{-1}\ \ \text{and}\ \ C_{\mathbf{m}_{\mathbf{N}}}\lesssim\langle N_{0}\rangle^{2}\max(\langle N_{0}\rangle,\langle N_{1}\rangle)^{-1}. \tag{3.19}\]
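For completeness, the bound (3.18) can also be verified directly (a short sketch added here, with constants not tracked): writing
\[\frac{\xi}{\langle\xi\rangle}-\frac{\xi-\eta}{\langle\xi-\eta\rangle}=\frac{\eta}{\langle\xi\rangle}+(\xi-\eta)\,\frac{\langle\xi-\eta\rangle-\langle\xi\rangle}{\langle\xi\rangle\langle\xi-\eta\rangle},\]
and using \(|\langle\xi-\eta\rangle-\langle\xi\rangle|\leq|\eta|\) together with \(|\xi-\eta|\leq\langle\xi-\eta\rangle\), both terms are bounded by \(|\eta|/\langle\xi\rangle\); exchanging the roles of \(\xi\) and \(\xi-\eta\) gives the bound with \(\langle\xi-\eta\rangle\) in the denominator, which yields (3.18).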
Now, we estimate the sum over those indexes \(N_{0}\) such that \(N_{0}\leq\langle s\rangle^{-2}\) by applying (3.16) together with (3.8) and (3.5)
\[\sum_{\begin{subarray}{c}(N_{0},N_{1},N_{2})\in\mathcal{N}\\ N_{0}\leq\langle s\rangle^{-2}\end{subarray}}\left\|\mathcal{I}_{\mathbf{N}}^{2}(s)\right\|_{L^{2}} \lesssim|s|\sum_{\begin{subarray}{c}(N_{0},N_{1},N_{2})\in\mathcal{N}\\ N_{0}\leq\langle s\rangle^{-2}\end{subarray}}\left\|\mathbf{m}_{\mathbf{N}}\right\|_{L^{\infty}}\left\|\rho_{N_{0}}\right\|_{L^{2}}\left\|P_{N_{2}}|u|^{2}(s)\right\|_{L^{2}}\left\|P_{N_{1}}f(s)\right\|_{L^{2}}\] \[\lesssim|s|\sum_{\begin{subarray}{c}(N_{0},N_{1},N_{2})\in\mathcal{N}\\ N_{0}\leq\langle s\rangle^{-2}\end{subarray}}\langle N_{0}\rangle^{2}N_{0}\max(\langle N_{0}\rangle,\langle N_{1}\rangle)^{-1}N_{2}\langle N_{2}\rangle^{-\frac{k}{2}}N_{1}\langle N_{1}\rangle^{-k}\epsilon_{1}^{3}\] \[\lesssim\langle s\rangle^{-1+\delta_{0}}\epsilon_{1}^{3},\]
where in the second inequality we used Lemma 3.4. For the remaining contribution, we utilize (3.17) together with (3.7) and (3.5) to obtain
\[\sum_{\begin{subarray}{c}(N_{0},N_{1},N_{2})\in\mathcal{N}\\ N_{0}\geq\langle s\rangle^{-2}\end{subarray}}\left\|\mathcal{I}_{\mathbf{N}}^{2}(s)\right\|_{L^{2}} \lesssim|s|\sum_{N_{0}\geq\langle s\rangle^{-2}}\langle N_{0}\rangle^{2}\max(\langle N_{0}\rangle,\langle N_{1}\rangle)^{-1}\left\|P_{N_{2}}|u|^{2}(s)\right\|_{L^{\infty}}\left\|P_{N_{1}}f(s)\right\|_{L^{2}}\] \[\lesssim|s|\sum_{\begin{subarray}{c}(N_{0},N_{1},N_{2})\in\mathcal{N}\\ N_{0}\geq\langle s\rangle^{-2},\ N_{2}\leq\langle s\rangle^{-1}\end{subarray}}\langle N_{0}\rangle^{2}\max(\langle N_{0}\rangle,\langle N_{1}\rangle)^{-1}N_{2}^{2}N_{1}\langle N_{1}\rangle^{-k}\epsilon_{1}^{3}\] \[\quad+\langle s\rangle^{-1}\sum_{\begin{subarray}{c}(N_{0},N_{1},N_{2})\in\mathcal{N}\\ N_{0}\geq\langle s\rangle^{-2},\ N_{2}\geq\langle s\rangle^{-1}\end{subarray}}\langle N_{0}\rangle^{2}\max(\langle N_{0}\rangle,\langle N_{1}\rangle)^{-1}\langle N_{2}\rangle^{-k}N_{1}\langle N_{1}\rangle^{-k}\epsilon_{1}^{3}\] \[\lesssim\langle s\rangle^{-1+\delta_{0}}\epsilon_{1}^{3}.\]
Let us move on to the proof of second weighted estimates (3.2).
Proof of (3.2).: By Plancherel's theorem, we have
\[\|x^{2}e^{it\langle D\rangle}u\|_{H^{2}}\sim\|\langle\xi\rangle^{2}\mathcal{ F}(x^{2}e^{it\langle D\rangle}u)\|_{L^{2}}\sim\left\|\langle\xi\rangle^{2} \nabla_{\xi}^{2}\widehat{f}\right\|_{L^{2}}.\]
The Duhamel's formula (1.11) implies that \(\nabla_{\xi}^{2}\widehat{f}\) can be represented by
\[\langle\xi\rangle^{2}\nabla_{\xi}^{2}\widehat{f}(t,\xi)=\langle\xi\rangle^{2 }\nabla^{2}\widehat{u_{0}}(\xi)+\frac{i\lambda}{2\pi}\sum_{j=1}^{4}\int_{0}^{t} \mathcal{J}^{j}(s,\xi)ds,\]
where, by abusing the notation,
\[\mathcal{J}^{1}(s,\xi) =\langle\xi\rangle^{2}\int_{\mathbb{R}^{2}}e^{is\phi(\xi,\eta)}|\eta|^{-1}\nabla^{2}\widehat{f}(\xi-\eta)\mathcal{F}(|u|^{2})(\eta)d\eta,\] \[\mathcal{J}^{2}(s,\xi) =2is\langle\xi\rangle^{2}\int_{\mathbb{R}^{2}}\nabla_{\xi}\phi(\xi,\eta)e^{is\phi(\xi,\eta)}|\eta|^{-1}\nabla\widehat{f}(\xi-\eta)\mathcal{F}(|u|^{2})(\eta)d\eta,\] \[\mathcal{J}^{3}(s,\xi) =is\langle\xi\rangle^{2}\int_{\mathbb{R}^{2}}\nabla_{\xi}^{2}\phi(\xi,\eta)e^{is\phi(\xi,\eta)}|\eta|^{-1}\widehat{f}(\xi-\eta)\mathcal{F}(|u|^{2})(\eta)d\eta, \tag{3.20}\] \[\mathcal{J}^{4}(s,\xi) =-s^{2}\langle\xi\rangle^{2}\int_{\mathbb{R}^{2}}\left(\nabla_{\xi}\phi(\xi,\eta)\right)^{2}e^{is\phi(\xi,\eta)}|\eta|^{-1}\widehat{f}(\xi-\eta)\mathcal{F}(|u|^{2})(\eta)d\eta.\]
Then we prove that
\[\sum_{j=1}^{4}\left\|\mathcal{J}^{j}(s,\xi)\right\|_{L^{2}_{\xi}}\lesssim \langle s\rangle^{-1+2\delta_{0}}\varepsilon_{1}^{3},\quad\text{ for }j=1,\cdots,4.\]
Estimates for \(\mathcal{J}^{1}\). By Lemma 3.3, \(\mathcal{J}^{1}\) can be bounded as in (3.15). Indeed,
\[\left\|\langle\xi\rangle^{2}\mathcal{J}^{1}(s,\xi)\right\|_{L^{2}_{\xi}( \mathbb{R}^{2})}\lesssim\|u(s)\|_{L^{\infty}}\|u(s)\|_{H^{2}}\|x^{2}f(s)\|_{H^ {2}}\lesssim\langle s\rangle^{-1+2\delta_{0}}\varepsilon_{1}^{3}.\]
Estimates for \(\mathcal{J}^{2}\). These can be carried out almost exactly as for \(\mathcal{I}^{2}\). If one follows the argument, the only difference is that the norm \(\|P_{N_{1}}f(s)\|_{L^{2}}\) in inequalities (3.16) and (3.17) is replaced by the weighted norm \(\|P_{N_{1}}xf(s)\|_{L^{2}}\), which can be easily dealt with by means of (3.6).
Estimates for \(\mathcal{J}^{3}\). \(\mathcal{J}^{3}\) can also be handled in a similar manner to \(\mathcal{I}^{2}\). Indeed, the multiplier \(\nabla_{\xi}^{2}\phi(\xi,\eta)\) in \(\mathcal{J}^{3}\) satisfies an even smaller bound than the bound (3.18) satisfied by the multiplier \(\nabla_{\xi}\phi(\xi,\eta)\) in \(\mathcal{I}^{2}\). More precisely, the following bound holds
\[\left|\nabla_{\xi}^{2}\phi(\xi,\eta)\right|\lesssim\frac{|\eta|}{\max(\langle \xi\rangle,\langle\xi-\eta\rangle)^{2}}.\]
Thus, if we let
\[\widetilde{\mathbf{m}}_{\mathbf{N}}(\xi,\eta)=\langle\xi\rangle^{2}|\eta|^{-1} \nabla_{\xi}^{2}\phi(\xi,\eta)\rho_{N_{0}}(\xi)\rho_{N_{2}}(\eta)\rho_{N_{1}}( \xi-\eta),\]
one can show that
\[\sup_{\xi,\eta\in\mathbb{R}^{2}}|\widetilde{\mathbf{m}}_{\mathbf{ N}}(\xi,\eta)| \lesssim\langle N_{0}\rangle^{2}\max(\langle N_{0}\rangle,\langle N _{1}\rangle)^{-2},\] \[C_{\widetilde{\mathbf{m}}_{\mathbf{N}}} \lesssim\langle N_{0}\rangle^{2}\max(\langle N_{0}\rangle,\langle N _{1}\rangle)^{-2}.\]
Applying these bounds into (3.16) and (3.17), one can obtain the desired bounds.
Estimates for \(\mathcal{J}^{4}\). It remains to estimate the main case \(\mathcal{J}^{4}\). As before, we decompose
\[\mathcal{J}^{4}(s,\xi) =\sum_{\mathbf{N}=(N_{0},N_{1},N_{2})\in(2^{\mathbb{Z}})^{3}}\mathcal{J}^{4}_{\mathbf{N}}(s,\xi),\] \[\mathcal{J}^{4}_{\mathbf{N}}(s,\xi) :=-s^{2}\int_{\mathbb{R}^{2}}\mathbf{m}_{\mathbf{N}}(\xi,\eta)e^{is\phi(\xi,\eta)}\widehat{P_{N_{1}}f}(\xi-\eta)\widehat{P_{N_{2}}(|u|^{2})}(\eta)\,d\eta,\]
where
\[\mathbf{m}_{\mathbf{N}}(\xi,\eta)=\langle\xi\rangle^{2}|\eta|^{-1}\left( \nabla_{\xi}\phi(\xi,\eta)\right)^{2}\rho_{N_{0}}(\xi)\rho_{N_{2}}(\eta)\rho_ {N_{1}}(\xi-\eta).\]
One can readily verify that
\[\sup_{\xi,\eta\in\mathbb{R}^{2}}|\mathbf{m}_{\mathbf{N}}(\xi,\eta)| \lesssim N_{2}\langle N_{0}\rangle^{2}\max(\langle N_{0}\rangle, \langle N_{1}\rangle)^{-2},\] \[C_{\mathbf{m}_{\mathbf{N}}} \lesssim N_{2}\langle N_{0}\rangle^{2}\max(\langle N_{0}\rangle, \langle N_{1}\rangle)^{-2}. \tag{3.21}\]
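Indeed, the pointwise bound follows by combining the prefactor \(\langle\xi\rangle^{2}|\eta|^{-1}\) with the square of (3.18) on the support of the cut-offs (a short check added for the reader; the bound on \(C_{\mathbf{m}_{\mathbf{N}}}\) additionally requires controlling derivatives of the symbol, as in (3.19), which we do not reproduce here):
\[|\mathbf{m}_{\mathbf{N}}(\xi,\eta)|\lesssim\langle N_{0}\rangle^{2}\,N_{2}^{-1}\left(\frac{N_{2}}{\max(\langle N_{0}\rangle,\langle N_{1}\rangle)}\right)^{2}=N_{2}\langle N_{0}\rangle^{2}\max(\langle N_{0}\rangle,\langle N_{1}\rangle)^{-2}.\]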
We see that the sum over those indexes \(N_{0}\) such that \(N_{0}\leq\langle s\rangle^{-3}\) can be dealt with:
\[\|\mathcal{J}_{\mathbf{N}}^{4}(s)\|_{L^{2}(\mathbb{R}^{2})} \lesssim|s|^{2}\sum_{\begin{subarray}{c}(N_{0},N_{1},N_{2})\in\mathcal{N}\\ N_{0}\leq\langle s\rangle^{-3}\end{subarray}}\|\rho_{N_{0}}\|_{L^{2}}\left\|\mathbf{m}_{\mathbf{N}}\right\|_{L^{\infty}}\left\|P_{N_{2}}|u|^{2}(s)\right\|_{L^{2}}\left\|P_{N_{1}}f(s)\right\|_{L^{2}}\] \[\lesssim|s|^{2}\sum_{N_{0}\leq\langle s\rangle^{-3}}N_{0}N_{2}\langle N_{0}\rangle^{2}\max(\langle N_{0}\rangle,\langle N_{1}\rangle)^{-2}N_{2}\langle N_{2}\rangle^{-\frac{k}{2}}N_{1}\langle N_{1}\rangle^{-k}\epsilon_{1}^{3}\] \[\lesssim\langle s\rangle^{-1}\epsilon_{1}^{3}.\]
On the other hand, by the multiplier inequalities with (3.21), one has
\[\|\mathcal{J}_{\mathbf{N}}^{4}(s)\|_{L^{2}(\mathbb{R}^{2})}\lesssim|s|^{2}\sum_{\begin{subarray}{c}(N_{0},N_{1},N_{2})\in\mathcal{N}\\ N_{0}\geq\langle s\rangle^{-3}\end{subarray}}N_{2}\langle N_{0}\rangle^{2}\max(\langle N_{0}\rangle,\langle N_{1}\rangle)^{-2}\left\|P_{N_{2}}|u(s)|^{2}\right\|_{L^{2}}\left\|P_{N_{1}}u(s)\right\|_{L^{\infty}}. \tag{3.22}\]
Using (3.8), we can bound the sum in (3.22) for \(N_{2}\leq\langle s\rangle^{-1+\frac{3}{4}\delta_{0}}\) by
\[|s|^{2}\sum_{\begin{subarray}{c}N_{0}\geq\langle s\rangle^{-3}, \\ N_{2}\leq\langle s\rangle^{-1+\frac{3}{4}\delta_{0}}\end{subarray}}N_{2} \langle N_{0}\rangle^{2}\max(\langle N_{0}\rangle,\langle N_{1}\rangle)^{-2}N _{2}N_{1}^{\frac{1}{4}\delta_{0}}\langle N_{1}\rangle^{-k-\frac{1}{4}\delta_{ 0}}\langle t\rangle^{-1+\frac{1}{4}\delta_{0}}\epsilon_{1}^{3}\] \[\lesssim\langle s\rangle^{-1+\frac{7}{4}\delta_{0}}\sum_{N_{0} \geq\langle s\rangle^{-3},\ N_{0}\lesssim N_{1}}\langle N_{0}\rangle^{2}\max( \langle N_{0}\rangle,\langle N_{1}\rangle)^{-2}N_{1}^{\frac{1}{4}\delta_{0}} \langle N_{1}\rangle^{-k-\frac{1}{4}\delta_{0}}\epsilon_{1}^{3}\] \[\qquad+\langle s\rangle^{-1+\frac{7}{4}\delta_{0}}\sum_{N_{1} \ll N_{0}\lesssim\langle s\rangle^{-1+\frac{3}{4}\delta_{0}}}N_{1}^{\frac{1}{4 }\delta_{0}}\epsilon_{1}^{3}\] \[\lesssim\langle s\rangle^{-1+2\delta_{0}}\epsilon_{1}^{3}.\]
To estimate the sum in (3.22) for \(N_{2}\geq\langle s\rangle^{-1+\frac{3}{4}\delta_{0}}\), we use (3.9) to obtain
\[\langle s\rangle^{-1+\frac{7}{4}\delta_{0}}\sum_{\begin{subarray}{ c}N_{0}\geq\langle s\rangle^{-3},\\ N_{2}\geq\langle s\rangle^{-1+\frac{3}{4}\delta_{0}}\end{subarray}}\langle N_{ 0}\rangle^{2}\max(\langle N_{0}\rangle,\langle N_{1}\rangle)^{-2}\langle N_{ 2}\rangle^{-1}N_{1}^{\frac{1}{4}\delta_{0}}\langle N_{1}\rangle^{-k-\frac{1}{4 }\delta_{0}}\epsilon_{1}^{3}\] \[\lesssim\langle s\rangle^{-1+2\delta_{0}}\epsilon_{1}^{3}.\]
## 4. Modified Scattering : Proof of Theorem 1.1
In Section 3, we have shown that small solutions stay small in the weighted Sobolev norms. In this section we prove (1.8), the asymptotic behavior of solutions, and complete the proof of Theorem 1.1. We assume that \(u\) satisfies the a priori assumption (2.3). The modified scattering profile is defined by
\[\mathsf{v}(t,\xi)=e^{-iB(t,\xi)}e^{it\langle\xi\rangle}\widehat{u}(t,\xi),\]
where the phase correction is given by
\[B(t,\xi)=\frac{\lambda}{(2\pi)^{2}}\int_{0}^{t}\int_{\mathbb{R}^{2}}\left|\frac{\xi}{\langle\xi\rangle}-\frac{\sigma}{\langle\sigma\rangle}\right|^{-1}\left|\widehat{u}(s,\sigma)\right|^{2}d\sigma\frac{\rho(s^{-\frac{2}{n}}\xi)}{\langle s\rangle}ds.\]
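For later reference we record that, simply by the fundamental theorem of calculus applied to this definition (and since \(|\widehat{u}(t,\sigma)|=|\widehat{f}(t,\sigma)|\)),
\[\partial_{t}B(t,\xi)=\frac{\lambda}{(2\pi)^{2}}\frac{\rho(t^{-\frac{2}{n}}\xi)}{\langle t\rangle}\int_{\mathbb{R}^{2}}\left|\frac{\xi}{\langle\xi\rangle}-\frac{\sigma}{\langle\sigma\rangle}\right|^{-1}\left|\widehat{f}(t,\sigma)\right|^{2}d\sigma,\]
which is the quantity that will be compared with \(\mathcal{K}_{L_{0}}\) in (4.7) below.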
**Proposition 4.1**.: _Assume that \(u\in C([0,T],H^{n})\) satisfies the a priori assumption (2.3) with the index conditions (2.2). Then we get_
\[\Big{\|}\langle\xi\rangle^{k}\Big{(}\mathsf{v}(t_{2},\xi)-\mathsf{v}(t_{1},\xi )\Big{)}\Big{\|}_{L^{\infty}_{\xi}}\lesssim\varepsilon_{1}^{3}\langle t_{1} \rangle^{-\delta}. \tag{4.1}\]
_for \(t_{1}\leq t_{2}\in[0,T]\) and some \(0<\delta\leq\frac{1}{100}\)._
By assuming Proposition 4.1, we first prove our main theorem.
Proof of Theorem 1.1.: For the proof of the global behavior of solutions, we first need to show the existence of a local solution to (1.1) in \(\Sigma_{T}\). However, since this is straightforward from the contraction mapping principle, we may omit the proof (see, for instance, [4, 18, 22]). Now, given \(T>0\), we assume that \(\psi\) is a solution to (1.1) on \([0,T]\) with initial data condition (1.6). Then, by the bootstrap argument, it suffices to prove that for sufficiently small \(\varepsilon_{1}>0\), there exists \(C>0\) such that if \(\|u\|_{\Sigma_{T}}\leq\epsilon_{1}\),
\[\|u\|_{\Sigma_{T}}\leq\varepsilon_{0}+C\varepsilon_{1}^{3}. \tag{4.2}\]
From (4.1), one sees that the scattering norm stays bounded
\[\|u(t)\|_{S}\leq\varepsilon_{0}+C\varepsilon_{1}^{3},\quad\text{for $t\in[0,T]$}.\]
Thus, together with the weighted energy estimates (3.1) and (3.2), we can close the bootstrapping argument and obtain the global existence of solution. Concerning (1.8), the asymptotic behavior of the solution, we define a scattering profile by
\[u_{\infty}:=\mathcal{F}^{-1}\left(\lim_{t\to\infty}\mathsf{v}(t,\cdot)\right).\]
Then Proposition 4.1 immediately yields that for \(t\in[0,T]\),
\[\Big{\|}\langle\xi\rangle^{k}\left(\widehat{u}(t,\xi)-e^{iB(t,\xi)}e^{-it \langle\xi\rangle}\widehat{u_{\infty}}(\xi)\right)\Big{\|}_{L^{\infty}_{\xi} }\lesssim\varepsilon_{1}^{3}\langle t\rangle^{-\delta}.\]
Proof of Proposition 4.1.: We will proceed as in [30]. We prove that if \(t_{1}\leq t_{2}\in[M-2,2M]\cap[0,T]\) for a dyadic number \(M\in 2^{\mathbb{N}}\)
\[\Big{\|}\langle\xi\rangle^{k}\Big{(}\mathsf{v}(t_{2},\xi)-\mathsf{v}(t_{1}, \xi)\Big{)}\Big{\|}_{L^{\infty}_{\xi}}\lesssim\varepsilon_{1}^{3}M^{-\delta}, \tag{4.3}\]
for some \(0<\delta\leq\frac{1}{100}\). We begin with writing \(\mathsf{v}\) as
\[\mathsf{v}(t,\xi)=e^{-iB(t,\xi)}\widehat{f}(t,\xi)=e^{-iB(t,\xi)}\widehat{u_{ 0}}(\xi)+i\lambda e^{-iB(t,\xi)}\mathcal{I}(t,\xi),\]
where, after change of variables, the nonlinear term \(\mathcal{I}\) is given by
\[\mathcal{I}(t,\xi)=\frac{1}{(2\pi)^{3}}\int_{0}^{t}\iint_{\mathbb{R}^{2}\times \mathbb{R}^{2}}e^{isq(\xi,\eta,\sigma)}|\eta|^{-1}\widehat{f}(s,\xi+\eta) \widehat{f}(s,\xi+\sigma)\overline{\widehat{f}(s,\xi+\eta+\sigma)}d\eta d \sigma ds,\]
where a resonant function
\[q(\xi,\eta,\sigma)=\langle\xi\rangle-\langle\xi+\eta\rangle-\langle\xi+ \sigma\rangle+\langle\xi+\eta+\sigma\rangle.\]
Let \(L_{0}\in 2^{\mathbb{Z}}\) be such that
\[L_{0}\sim M^{-\frac{9}{10}}. \tag{4.4}\]
We write
\[\mathcal{I}(t,\xi)=\int_{0}^{t}\bigg{(}\mathcal{K}_{L_{0}}(s,\xi)+\sum_{L\in 2^{ \mathbb{Z}},L>L_{0}}\mathcal{K}_{L}(s,\xi)\bigg{)}ds, \tag{4.5}\]
where
\[\mathcal{K}_{L_{0}}(s,\xi) :=\frac{1}{(2\pi)^{3}}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}} e^{isq(\xi,\eta,\sigma)}\rho_{\leq L_{0}}(\eta)|\eta|^{-1}\widehat{f}(s,\xi+\eta) \widehat{f}(s,\xi+\sigma)\overline{\widehat{f}(s,\xi+\eta+\sigma)}d\eta d\sigma,\] \[\mathcal{K}_{L}(s,\xi) :=\frac{1}{(2\pi)^{3}}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}} e^{isq(\xi,\eta,\sigma)}\rho_{L}(\eta)|\eta|^{-1}\widehat{f}(s,\xi+\eta) \widehat{f}(s,\xi+\sigma)\overline{\widehat{f}(s,\xi+\eta+\sigma)}d\eta d\sigma.\]
The first term \(\mathcal{K}_{L_{0}}\), the integral around the singular point, is the one responsible for the correction of scattering, whereas the second term \(\mathcal{K}_{L}\) is a remainder term. The profile \(\mathsf{v}\) satisfies
\[\partial_{t}\mathsf{v}(t,\xi) =\partial_{t}\left[e^{-iB(t,\xi)}\widehat{f}(t,\xi)\right]\] \[=i\lambda e^{-iB(t,\xi)}\left[\left(\mathcal{K}_{L_{0}}(t,\xi)+ \sum_{L>L_{0}}\mathcal{K}_{L}(t,\xi)\right)-\frac{1}{\lambda}\left[\partial_{t }B(t,\xi)\right]\widehat{f}(t,\xi)\right].\]
Thus,
\[\mathsf{v}(t_{2},\xi)-\mathsf{v}(t_{1},\xi) =\int_{t_{1}}^{t_{2}}\partial_{s}\mathsf{v}(s,\xi)ds\] \[=i\lambda\int_{t_{1}}^{t_{2}}e^{-iB(s,\xi)}\left[\left(\mathcal{ K}_{L_{0}}(s,\xi)+\sum_{L>L_{0}}\mathcal{K}_{L}(s,\xi)\right)-\frac{1}{\lambda} \left[\partial_{s}B(s,\xi)\right]\widehat{f}(s,\xi)\right]ds. \tag{4.6}\]
In order to prove (4.3), we use the cancellation effect between \(\mathcal{K}_{L_{0}}\) and \(\partial_{s}B(s,\xi)\), specifically, we show that for each \(\xi\) with \(|\xi|\sim N\in 2^{\mathbb{Z}}\),
\[\left|\int_{t_{1}}^{t_{2}}e^{-iB(s,\xi)}\left(\mathcal{K}_{L_{0}}(s,\xi)-\frac{ 1}{\lambda}\left[\partial_{s}B(s,\xi)\right]\widehat{f}(s,\xi)\right)ds\right| \lesssim\varepsilon_{1}^{3}M^{-\delta}\langle N\rangle^{-k} \tag{4.7}\]
and
\[\left|\int_{t_{1}}^{t_{2}}e^{-iB(s,\xi)}\sum_{L>L_{0}}\mathcal{K}_{L}(s,\xi) ds\right|\lesssim\varepsilon_{1}^{3}M^{-\delta}\langle N\rangle^{-k}, \tag{4.8}\]
for some \(0<\delta\leq\frac{1}{100}\).
_Proof of (4.7)_. We prove the following two bounds:
\[\left|\int_{t_{1}}^{t_{2}}e^{-iB(s,\xi)}\mathcal{K}_{L_{0}}(s,\xi)\left(1-\rho\left(s^{-\frac{2}{n}}\xi\right)\right)ds\right| \lesssim\varepsilon_{1}^{3}M^{-\delta}\langle N\rangle^{-k}, \tag{4.9}\] \[\left|\int_{t_{1}}^{t_{2}}e^{-iB(s,\xi)}\left[\mathcal{K}_{L_{0}}(s,\xi)\rho\left(s^{-\frac{2}{n}}\xi\right)-\frac{1}{\lambda}\left[\partial_{s}B(s,\xi)\right]\widehat{f}(s,\xi)\right]ds\right| \lesssim\varepsilon_{1}^{3}M^{-\delta}\langle N\rangle^{-k}, \tag{4.10}\]
where \(\rho\in C_{0}^{\infty}(B(0,2))\). We remark that the phase correction will be derived in the proof of (4.10).
We first consider (4.9). It suffices to show the integrand bound
\[\left|\mathcal{K}_{L_{0}}(s,\xi)\right|\lesssim\varepsilon_{1}^{3}M^{-(1+ \delta)}\langle N\rangle^{-k}.\]
We further split \(\mathcal{K}_{L_{0}}\) dyadically as follows:
\[\mathcal{K}_{L_{0}}(s,\xi)=\sum_{L_{1}\leq L_{0}+10}\mathcal{K}_{L_{0},L_{1}}( s,\xi),\]
where
\[\mathcal{K}_{L_{0},L_{1}}(s,\xi)=\frac{1}{(2\pi)^{3}}\iint e^{isq(\xi,\eta,\sigma)} \rho_{L_{1}}(\eta)\rho_{\leq L_{0}}(\eta)|\eta|^{-1}\widehat{f}(s,\xi+\eta) \widehat{f}(s,\xi+\sigma)\overline{\widehat{f}(s,\xi+\eta+\sigma)}d\eta d\sigma.\]
Then, by the a priori assumption (2.3), we estimate
\[|\mathcal{K}_{L_{0},L_{1}}(s,\xi)|\] \[\lesssim L_{1}^{-1}\langle\xi\rangle^{-k}\iint\left|\langle N \rangle^{k}\rho_{L_{1}}(\eta)\rho_{\leq L_{0}}(\eta)\widehat{f}(s,\xi+\eta) \widehat{f}(s,\xi+\sigma)\overline{\widehat{f}(s,\xi+\eta+\sigma)}\right|d\eta d\sigma\] \[\lesssim L_{1}^{-1}\langle N\rangle^{-k}L_{1}^{2}\left\|\langle \xi\rangle^{k}\widehat{f}\right\|_{L_{\xi}^{\infty}}\|u\|_{H^{n}}\|u\|_{H^{n}}\] \[\lesssim\varepsilon_{1}^{3}L_{1}M^{2\delta_{0}}\langle N\rangle^ {-k}.\]
On the other hand, by Holder inequality, we get
\[|\mathcal{K}_{L_{0},L_{1}}(s,\xi)|\ \lesssim L_{1}^{-1}\|\rho_{L_{1}}\|_{L^{2}}N^{-n}\|f\| _{H^{n}}^{3}\lesssim\varepsilon_{1}^{3}M^{-2}M^{3\delta_{0}}.\]
These two estimates induce that
\[\sum_{L_{1}\leq L_{0}+10}\left|\mathcal{K}_{L_{0},L_{1}}(s,\xi)\right|\] \[\lesssim\varepsilon_{1}^{3}\left(\sum_{L_{1}\leq M^{-2}}L_{1}M^{2 \delta_{0}}\langle N\rangle^{-k}+\sum_{M^{-2}<L_{1}<L_{0}+10}M^{-2+3\delta_{0 }}\right)\] \[\lesssim\varepsilon_{1}^{3}M^{-(1+\delta_{0})}\langle N\rangle^ {-k}.\]
Next, consider (4.10). Due to the cut-off \(\rho(s^{-\frac{2}{n}}\xi)\), we may assume \(N\leq M^{\frac{2}{n}}\). It suffices to show that the integrand satisfies
\[\left|\mathcal{K}_{L_{0}}(s,\xi)-\frac{1}{\lambda}\left[\partial_{s}B(s,\xi) \right]\widehat{f}(s,\xi)\right|\lesssim\varepsilon_{1}^{3}M^{-(1+\delta)} \langle N\rangle^{-k}. \tag{4.11}\]
The correction term will be obtained in three steps.
_Step 1: Phase approximation._ We approximate the phase function by a simpler one in the support of the integrand in (4.10). Let us observe that
\[q(\xi,\eta,\sigma) =\left(\langle\xi\rangle-\langle\xi+\eta\rangle\right)-\left( \langle\xi+\sigma\rangle-\langle\xi+\eta+\sigma\rangle\right)\] \[=\left(\frac{-|\eta|^{2}-2\eta\cdot\xi}{\langle\xi\rangle+\langle \xi+\eta\rangle}-\frac{-|\eta|^{2}-2\eta\cdot(\xi+\sigma)}{\langle\xi+\sigma \rangle+\langle\xi+\eta+\sigma\rangle}\right)\] \[=\eta\cdot\left(\frac{\xi}{\langle\xi\rangle}-\frac{\xi+\sigma} {\langle\xi+\sigma\rangle}\right)+O\left(|\eta|^{2}\right)\] \[=:r(\xi,\eta,\sigma)+O\left(|\eta|^{2}\right).\]
We now set
\[\mathcal{K}^{\prime}_{L_{0}}(s,\xi):=\frac{1}{(2\pi)^{3}}\iint_{\mathbb{R}^{2 }\times\mathbb{R}^{2}}e^{isr(\xi,\eta,\sigma)}\rho_{\leq L_{0}}(\eta)|\eta|^{- 1}\widehat{f}(s,\xi+\eta)\widehat{f}(s,\xi+\sigma)\overline{\widehat{f}(s, \xi+\eta+\sigma)}d\eta d\sigma.\]
Then we estimate
\[\left|\mathcal{K}_{L_{0}}(s,\xi)-\mathcal{K}_{L_{0}}^{\prime}(s,\xi)\right|\] \[\lesssim\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\left|s\right|\left|q(\xi,\eta,\sigma)-r(\xi,\eta,\sigma)\right|\left|\eta\right|^{-1}\times\left|\rho_{\leq L_{0}}(\eta)\widehat{f}(s,\xi+\eta)\widehat{f}(s,\xi+\sigma)\overline{\widehat{f}(s,\xi+\eta+\sigma)}\right|d\eta d\sigma\] \[\lesssim M\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\left|\eta\right|\left|\rho_{\leq L_{0}}(\eta)\widehat{f}(s,\xi+\eta)\widehat{f}(s,\xi+\sigma)\overline{\widehat{f}(s,\xi+\eta+\sigma)}\right|d\eta d\sigma\] \[\lesssim ML_{0}^{3}\left\|f\right\|_{L^{2}}^{2}\|\widehat{f}\|_{L^{\infty}_{\xi}}\] \[\lesssim\varepsilon_{1}^{3}M^{-\frac{17}{10}}\lesssim\varepsilon_{1}^{3}M^{-(1+\delta)}\langle N\rangle^{-k},\]
where we used (4.4) and \(\langle N\rangle^{k}\leq M^{\frac{2k}{n}}\leq M^{\frac{1}{100}}\) in the last inequality.
_Step 2: Profile approximation._ We now approximate \(\mathcal{K}_{L_{0}}^{\prime}\) by \(\widetilde{\mathcal{K}_{L_{0}}}\), which is defined by
\[\widetilde{\mathcal{K}_{L_{0}}}\left(s,\xi\right):=\frac{1}{(2\pi)^{3}}\iint_ {\mathbb{R}^{2}\times\mathbb{R}^{2}}e^{isr(\xi,\eta,\sigma)}\rho_{\leq L_{0}} (\eta)|\eta|^{-1}\widehat{f}(\xi)\left|\widehat{f}(\xi+\sigma)\right|^{2}d \eta d\sigma.\]
By setting \(R=L_{0}^{-\frac{1}{2}}\), we see that
\[\left|\widehat{f}(\zeta+\eta)-\widehat{f}(\zeta)\right| \lesssim\left|\widehat{\rho_{>R}f}(\zeta+\eta)-\widehat{\rho_{>R }f}(\zeta)\right|+\left|\widehat{\rho_{\leq R}f}(\zeta+\eta)-\widehat{\rho_{ \leq R}f}(\zeta)\right|\] \[\lesssim\left\|\widehat{\rho_{>R}f}\right\|_{L_{0}^{\infty}}+L_{0 }\left\|\nabla_{\xi}\widehat{\rho_{\leq R}f}\right\|_{L_{0}^{\infty}}\] \[\lesssim R^{-1}\left\|\langle x\rangle^{2}f\right\|_{L^{2}}+L_{0 }\|\rho_{\leq R}\|_{L_{x}^{2}}\left\|xf\right\|_{L^{2}}\] \[\lesssim L_{0}^{\frac{1}{2}}M^{2\delta_{0}}.\]
From this and (4.4), we estimate
\[\left|\mathcal{K}_{L_{0}}^{\prime}(s,\xi)-\widetilde{\mathcal{K} _{L_{0}}}\left(s,\xi\right)\right|\] \[\lesssim\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\rho_{\leq L_{ 0}}(\eta)|\eta|^{-1}\left|\widehat{f}(\xi+\eta)\widehat{f}(\xi+\sigma) \overline{\widehat{f}(\xi+\eta+\sigma)}-\widehat{f}(\xi)\left|\widehat{f}( \xi+\sigma)\right|^{2}\right|d\eta d\sigma\] \[\lesssim\varepsilon_{1}^{3}L_{0}^{\frac{3}{2}}M^{2\delta_{0}} \lesssim\varepsilon_{1}^{3}M^{-(1+\delta)}\langle N\rangle^{-k}.\]
_Step 3: Final approximation._ We conclude the proof of (4.10) by showing that
\[\left|\widetilde{\mathcal{K}_{L_{0}}}\left(s,\xi\right)-\frac{1}{\lambda} \left[\partial_{s}B(s,\xi)\right]\widehat{f}(s,\xi)\right|\lesssim\varepsilon_ {1}^{3}M^{-(1+\delta)}\langle N\rangle^{-k}. \tag{4.12}\]
Setting \(\mathbf{z}:=\left(\frac{\xi}{\langle\xi\rangle}-\frac{\sigma}{\langle\sigma \rangle}\right)\), by the change of variables, we get
\[\widetilde{\mathcal{K}_{L_{0}}}\left(s,\xi\right)=\frac{1}{(2\pi)^{3}}\iint_ {\mathbb{R}^{2}\times\mathbb{R}^{2}}e^{is\eta\cdot\mathbf{z}}\rho_{\leq L_{0}} (\eta)|\eta|^{-1}\widehat{f}(\xi)\left|\widehat{f}(\sigma)\right|^{2}d\eta d\sigma.\]
Then, (4.12) can be reduced to showing that
\[\left|\frac{1}{(2\pi)^{3}}\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}e^{is\eta\cdot\mathbf{z}}\rho_{\leq L_{0}}(\eta)|\eta|^{-1}\left|\widehat{f}(\sigma)\right|^{2}d\eta d\sigma-\frac{1}{(2\pi)^{2}|s|}\int_{\mathbb{R}^{2}}\left|\mathbf{z}\right|^{-1}\left|\widehat{f}(\sigma)\right|^{2}d\sigma\right|\] \[\qquad\lesssim\varepsilon_{1}^{2}M^{-(1+\delta)}. \tag{4.13}\]
We first claim that
\[\left|\int_{\mathbb{R}^{2}}e^{is\eta\cdot\mathbf{z}}|\eta|^{-1}\rho_{\leq L_{0}}( \eta)d\eta-(2\pi)^{2}|s\mathbf{z}|^{-1}\right|\lesssim M^{-\frac{16}{15}}| \mathbf{z}|^{-\frac{5}{3}}. \tag{4.14}\]
Observe that since \(\mathcal{F}(|x|^{-1})=2\pi|\eta|^{-1}\), the following formula holds
\[\frac{2\pi}{|s\mathbf{z}|}=\lim_{A\to\infty}\int_{\mathbb{R}^{2}}e^{is\mathbf{z }\cdot\eta}\rho_{\leq A}(\eta)\frac{1}{|\eta|}d\eta.\]
Then we get that for \(L_{0}\ll A\),
\[\begin{split}&\left|\int_{\mathbb{R}^{2}}e^{is\eta\cdot\mathbf{z}}| \eta|^{-1}\rho_{\leq L_{0}}(\eta)d\eta-2\pi|s\mathbf{z}|^{-1}\right|\\ &=\left|\int_{\mathbb{R}^{2}}e^{is\eta\cdot\mathbf{z}}|\eta|^{-1} \left(\rho_{\leq L_{0}}(\eta)-\rho_{\leq A}(\eta)\right)d\eta\right|\\ &=|s\mathbf{z}|^{-2}\left|\int_{\mathbb{R}^{2}}\left(\nabla_{ \eta}^{2}e^{is\eta\cdot\mathbf{z}}\right)|\eta|^{-1}\left(\rho_{\leq L_{0}}( \eta)-\rho_{\leq A}(\eta)\right)d\eta\right|\\ &\lesssim M^{-2}|\mathbf{z}|^{-2}L_{0}^{-1}\end{split} \tag{4.15}\]
and the trivial bounds
\[\left|\int_{\mathbb{R}^{2}}e^{is\eta\cdot\mathbf{z}}|\eta|^{-1}\rho_{\leq L_{ 0}}(\eta)d\eta-(2\pi)^{2}|s\mathbf{z}|^{-1}\right|\lesssim L_{0}+|s\mathbf{z}| ^{-1}. \tag{4.16}\]
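Let us also indicate, for clarity, where the factor \(L_{0}^{-1}\) in (4.15) comes from (this brief remark is ours): after the two integrations by parts in \(\eta\), the remaining integral is bounded, uniformly in \(A\), by
\[\int_{\mathbb{R}^{2}}\Big{|}\nabla_{\eta}^{2}\Big{(}|\eta|^{-1}\big{(}\rho_{\leq L_{0}}(\eta)-\rho_{\leq A}(\eta)\big{)}\Big{)}\Big{|}\,d\eta\lesssim\int_{|\eta|\gtrsim L_{0}}|\eta|^{-3}\,d\eta\lesssim L_{0}^{-1},\]
since every derivative falling on the cut-offs is compensated by the scale of their supports, while the two powers of \(|s\mathbf{z}|^{-1}\) account for the factor \(M^{-2}|\mathbf{z}|^{-2}\).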
Then, (4.14) follows by interpolating (4.15) and (4.16), since \(L_{0}\sim M^{-\frac{9}{10}}\). Now, the left-hand side of (4.13) is bounded by
\[M^{-\frac{16}{15}}\left|\int_{\mathbb{R}^{2}}|\mathbf{z}|^{-\frac{5}{3}}\left| \widehat{f}(\sigma)\right|^{2}d\sigma\right|.\]
Since \(|\mathbf{z}|\gtrsim\min\left\{1,|\sigma|,\frac{|\xi-\sigma|}{\langle\sigma \rangle^{3}}\right\}\), the a priori assumption (2.3) yields that
\[\left|\int_{\mathbb{R}^{2}}|\mathbf{z}|^{-\frac{5}{3}}\left|\widehat{f}( \sigma)\right|^{2}d\sigma\right|\lesssim\varepsilon_{1}^{2}.\]
This completes the proof of (4.13).
Proof of (4.8).: We further localize the frequencies as follows:
\[\mathcal{K}_{L}(s,\xi) =\frac{1}{(2\pi)^{3}}\sum_{\mathbf{N}=(N_{1},N_{2},N_{3})\in(2^{ 2})^{3}}\mathcal{K}_{L,\mathbf{N}}(s,\xi),\] \[\mathcal{K}_{L,\mathbf{N}}(s,\xi) :=\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}e^{isq(\xi,\eta, \sigma)}\rho_{L}(\eta)|\eta|^{-1}\widehat{f_{N_{1}}}(s,\xi+\eta)\widehat{f_{N_ {2}}}(s,\xi+\sigma)\widehat{\overline{f_{N_{3}}}(s,\xi+\eta+\sigma)}d\eta d\sigma.\]
We prove that
\[\sum_{L>L_{0},\mathbf{N}=(N_{1},N_{2},N_{3})}|\mathcal{K}_{L,\mathbf{N}}(s, \xi)|\lesssim\varepsilon_{1}^{3}M^{-(1+\delta)}\langle N\rangle^{-k}. \tag{4.17}\]
By Holder inequality, we readily have
\[|\mathcal{K}_{L,\mathbf{N}}(s,\xi)|\lesssim\prod_{j=1}^{3}\left\|\widehat{f_ {N_{j}}}(s)\right\|_{L^{2}}.\]
Using the a priori assumption (2.3), we know that
\[\left\|\widehat{f_{N_{j}}}(s)\right\|_{L^{2}}\lesssim\min\Big{(}N_{j}\langle N _{j}\rangle^{-k},\langle N_{j}\rangle^{-n}M^{\delta_{0}}\Big{)}\varepsilon_{1 }\quad\text{for}\quad j=1,2,3.\]
The last two estimates above imply that the summation in (4.17) over those indices \((N_{1},N_{2},N_{3})\) with \(\max(N_{1},N_{2},N_{3})\geq M^{\frac{2}{n}}\) or \(\min(N_{1},N_{2},N_{3})\leq M^{-(1+\delta)}\) satisfies the desired bound. Thus, it suffices to estimate the sum over those indices \((N_{1},N_{2},N_{3})\) with
\[M^{-(1+\delta)}\leq N_{1},N_{2},N_{3}\leq M^{\frac{2}{n}}. \tag{4.18}\]
Let us further localize \(\sigma\) variable with respect to \(L^{\prime}\in 2^{\mathbb{Z}}\) to write
\[\mathcal{K}_{\mathbf{L},\mathbf{N}}(s,\xi)=\] \[\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}e^{isq(\xi,\eta, \sigma)}\rho_{L}(\eta)\rho_{L^{\prime}}(\sigma)|\eta|^{-1}\widehat{f_{N_{1}}} (s,\xi+\eta)\widehat{f_{N_{2}}}(s,\xi+\sigma)\overline{\widehat{f_{N_{3}}}(s,\xi+\eta+\sigma)}d\eta d\sigma,\]
where we denoted \(\mathbf{L}=(L,L^{\prime})\in(2^{\mathbb{Z}})^{2}\). Then, (4.17) reduces to showing that
\[\sum_{(\mathbf{L},\mathbf{N})\in\mathcal{A}}|\mathcal{K}_{\mathbf{L},\mathbf{ N}}(s,\xi)|\lesssim\varepsilon_{1}^{3}M^{-(1+\delta)}{\langle N\rangle}^{-k}, \tag{4.19}\]
where the summation runs over
\[\mathcal{A}=\Big{\{}\left(\mathbf{L},\mathbf{N}\right)\in(2^{ \mathbb{Z}})^{5}:M^{-(1+\delta)}\leq N_{1},N_{2},N_{3}\leq M^{\frac{2}{n}},\; \max(N_{1},N_{2},N_{3})\sim N,\] \[L_{0}\leq L\leq\max(N_{2},N_{3}),\;L^{\prime}\leq\max(N_{1},N_{3 })\Big{\}}. \tag{4.20}\]
By Holder inequality, we also readily have
\[|\mathcal{K}_{L,\mathbf{N}}(s,\xi)| \lesssim L^{-1}\|\rho_{L}\|_{L^{1}}\|\rho_{L^{\prime}}\|_{L^{1}} \langle N_{1}\rangle^{-k}\langle N_{2}\rangle^{-k}\langle N_{3}\rangle^{-k}\left \|{\langle\xi\rangle}^{k}\widehat{f}(s,\xi)\right\|_{L_{\xi}^{\infty}}^{3}\] \[\lesssim L(L^{\prime})^{2}{\langle N\rangle}^{-k}\left\|{\langle \xi\rangle}^{k}\widehat{f}(s,\xi)\right\|_{L_{\xi}^{\infty}}^{3}.\]
Hence we may obtain the desired estimates (4.19) whenever the summation runs over those indexes in \(\mathcal{A}\) satisfying \(L(L^{\prime})^{2}\leq M^{-(1+2\delta)}\). Indeed, we have
\[\sum_{\mathcal{A}\cap\left\{L(L^{\prime})^{2}\leq M^{-(1+2\delta)}\right\}}| \mathcal{K}_{\mathbf{L},\mathbf{N}}(s,\xi)|\lesssim\varepsilon_{1}^{3}M^{-(1+ \delta)}{\langle N\rangle}^{-k}.\]
We are left with a summation over
\[\mathcal{B}=\mathcal{A}\cap\left\{L(L^{\prime})^{2}\geq M^{-(1+2\delta)} \right\}.\]
We perform an integration by parts in \(\eta\) twice, with a slight abuse of notation, to obtain
\[|\mathcal{K}_{\mathbf{L},\mathbf{N}}(s,\xi)|\leq|\mathcal{K}_{1} (s,\xi)|+|\mathcal{K}_{2}(s,\xi)|+|\mathcal{K}_{3}(s,\xi)|,\] \[\mathcal{K}_{1}(s,\xi):=\frac{1}{s^{2}}\iint e^{isq(\xi,\eta, \sigma)}\mathbf{m}_{1}(\xi,\eta,\sigma)\nabla_{\eta}^{2}\left(\widehat{f_{N_{ 1}}}(s,\xi+\eta)\overline{\widehat{f_{N_{3}}}(s,\xi+\eta+\sigma)}\right) \widehat{f_{N_{2}}}(s,\xi+\sigma)d\eta d\sigma,\] \[\mathcal{K}_{2}(s,\xi):=\frac{1}{s^{2}}\iint e^{isq(\xi,\eta, \sigma)}\mathbf{m}_{2}(\xi,\eta,\sigma)\nabla_{\eta}\left(\widehat{f_{N_{1}}} (s,\xi+\eta)\overline{\widehat{f_{N_{3}}}(s,\xi+\eta+\sigma)}\right)\widehat{ f_{N_{2}}}(s,\xi+\sigma)d\eta d\sigma,\] \[\mathcal{K}_{3}(s,\xi):=\frac{1}{s^{2}}\iint e^{isq(\xi,\eta, \sigma)}\mathbf{m}_{3}(\xi,\eta,\sigma)\widehat{f_{N_{1}}}(s,\xi+\eta) \overline{\widehat{f_{N_{3}}}(s,\xi+\eta+\sigma)}\widehat{f_{N_{2}}}(s,\xi+ \sigma)d\eta d\sigma,\]
where
\[\mathbf{m}_{1}(\xi,\eta,\sigma):=\left(\frac{\nabla_{\eta}q(\xi, \eta,\sigma)}{|\nabla_{\eta}q(\xi,\eta,\sigma)|^{2}}\right)^{2}|\eta|^{-1}\rho_ {L}(\eta)\rho_{L^{\prime}}(\sigma)\rho_{N_{1}}(\xi+\eta)\rho_{N_{3}}(\xi+\eta+ \sigma),\] \[\mathbf{m}_{2}(\xi,\eta,\sigma):=\frac{\nabla_{\eta}q(\xi,\eta, \sigma)}{|\nabla_{\eta}q(\xi,\eta,\sigma)|^{2}}\nabla_{\eta}\left(\frac{\nabla_ {\eta}q(\xi,\eta,\sigma)}{|\nabla_{\eta}q(\xi,\eta,\sigma)|^{2}}|\eta|^{-1} \rho_{L}(\eta)\rho_{N_{1}}(\xi+\eta)\rho_{N_{3}}(\xi+\eta+\sigma)\right)\rho_{L ^{\prime}}(\sigma),\] \[\mathbf{m}_{3}(\xi,\eta,\sigma):=\nabla_{\eta}\mathbf{m}_{2}(\xi, \eta,\sigma).\]
A direct computation yields that
\[\begin{split}|\nabla_{\eta}q(\xi,\eta,\sigma)|&=\left| \frac{\xi+\eta+\sigma}{\langle\xi+\eta+\sigma\rangle}-\frac{\xi+\eta}{\langle\xi +\eta\rangle}\right|\\ &\gtrsim\frac{|\sigma|}{\max(\langle\xi+\eta+\sigma\rangle, \langle\xi+\eta\rangle)\min(\langle\xi+\eta+\sigma\rangle,\langle\xi+\eta \rangle)^{2}}.\end{split} \tag{4.21}\]
With the help of (4.21), one can verify that the symbols satisfy the following bounds:
\[\left\|\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\mathbf{m}_{1}(\xi,\eta,\sigma)e^{iy\cdot\eta}e^{iz\cdot\sigma}d\eta d\sigma\right\|_{L^{1}_{y,z}(\mathbb{R}^{2}\times\mathbb{R}^{2})} \lesssim L^{-1}(L^{\prime})^{-2}\langle\max(N_{1},N_{3})\rangle\langle\min(N_{1},N_{3})\rangle^{10}, \tag{4.22}\] \[\left\|\iint_{\mathbb{R}^{2}\times\mathbb{R}^{2}}\mathbf{m}_{2}(\xi,\eta,\sigma)e^{iy\cdot\eta}e^{iz\cdot\sigma}\,d\sigma d\eta\right\|_{L^{1}_{y,z}(\mathbb{R}^{2}\times\mathbb{R}^{2})} \lesssim L^{-2}(L^{\prime})^{-2}\langle\max(N_{1},N_{3})\rangle^{2}\langle\min(N_{1},N_{3})\rangle^{12}, \tag{4.23}\]
and
\[|\mathbf{m}_{3}(\xi,\eta,\sigma)|\lesssim L^{-3}(L^{\prime})^{-2}\langle\max (N_{1},N_{3})\rangle^{2}\langle\min(N_{1},N_{3})\rangle^{4}. \tag{4.24}\]
By applying the operator inequality (2.14) with (4.22), we estimate
\[\begin{split}&\sum_{(\mathbf{L},\mathbf{N})\in\mathcal{B}}| \mathcal{K}_{1}(s,\xi)|\\ &\lesssim M^{-2}\sum_{(\mathbf{L},\mathbf{N})\in\mathcal{B}}L^{- 1}(L^{\prime})^{-2}\langle\max(N_{1},N_{3})\rangle\langle\min(N_{1},N_{3}) \rangle^{10}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times\|P_{N_{2} }u\|_{L^{\infty}_{x}}\left(\left\|x^{2}f\right\|_{L^{2}}\left\|f_{N_{3}}\right\| _{L^{2}}+\left\|xf\right\|_{L^{2}}^{2}+\left\|x^{2}f\right\|_{L^{2}}\left\|f_{ N_{1}}\right\|_{L^{2}}\right)\\ &\lesssim\varepsilon_{1}^{3}M^{-2+2\delta+2\delta_{0}}\sum_{ \begin{subarray}{c}M^{-(1+\delta)}\leq N_{1},N_{2},N_{3}\leq M^{\frac{2}{n}}\\ L_{0}\leq L\leq\max(N_{2},N_{3})\end{subarray}}\langle\max(N_{1},N_{3})\rangle \langle\min(N_{1},N_{3})\rangle^{10}\langle N_{2}\rangle^{-k}\\ &\lesssim\varepsilon_{1}^{3}M^{-2+3\delta+2\delta_{0}}\langle N \rangle^{12}\lesssim\varepsilon_{1}^{3}M^{-(1+\delta)}\langle N\rangle^{-k}. \end{split}\]
Similarly, (4.23) implies that
\[\begin{split}&\sum_{(\mathbf{L},\mathbf{N})\in\mathcal{B}}| \mathcal{K}_{2}(s,\xi)|\\ &\lesssim M^{-2}\sum_{(\mathbf{L},\mathbf{N})\in\mathcal{B}}L^{- 2}(L^{\prime})^{-2}\langle\max(N_{1},N_{3})\rangle^{2}\langle\min(N_{1},N_{3}) \rangle^{12}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\times\|P_{N_{2}}u\|_{L^{\infty}}\left\|xf\right\|_{L^{2}} \left(\|f_{N_{3}}\|_{L^{2}}+\|f_{N_{1}}\|_{L^{2}}\right)\\ &\lesssim\varepsilon_{1}^{3}M^{-2+\delta_{0}}\sum_{ \begin{subarray}{c}M^{-(1+\delta)}\leq N_{1},N_{2},N_{3}\leq M^{\frac{2}{n}}\\ L_{0}\leq L\leq\max(N_{2},N_{3})\end{subarray}}L^{-1}\langle\max(N_{1},N_{3}) \rangle^{2}\langle\min(N_{1},N_{3})\rangle^{12}\langle N_{2}\rangle^{-k}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\times\left(N_{1}^{\frac{1}{2}}\langle N_{1}\rangle^{-k}+N_{3}^{ \frac{1}{2}}\langle N_{3}\rangle^{-k}\right)\\ &\lesssim\varepsilon_{1}^{3}M^{-\frac{11}{10}+\delta_{0}}\sum_{ \begin{subarray}{c}M^{-(1+\delta)}\leq N_{1},N_{2},N_{3}\leq M^{\frac{2}{n}} \\ L_{0}\leq L\leq\max(N_{2},N_{3})\end{subarray}}\langle\max(N_{1},N_{3})\rangle^{ 2}\langle\min(N_{1},N_{3})\rangle^{12}\langle N_{2}\rangle^{-k}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\times\left(N_{1}^{\frac{1}{2}}\langle N_{1}\rangle^{-k}+N_{3}^{ \frac{1}{2}}\langle N_{3}\rangle^{-k}\right)\\ &\lesssim\varepsilon_{1}^{3}M^{-\frac{11}{10}+\delta_{0}}\langle N \rangle^{14}\lesssim\varepsilon_{1}^{3}M^{-(1+\delta)}\langle N\rangle^{-k}. \end{split}\]
Finally, using the pointwise bound in (4.24), we estimate
\[\sum_{(\mathbf{L},\mathbf{N})\in\mathcal{B}}|\mathcal{K}_{3}(s,\xi)|\] \[\lesssim M^{-2}\sum_{(\mathbf{L},\mathbf{N})\in\mathcal{B}}L^{-3}(L ^{\prime})^{-2}\langle\max(N_{1},N_{3})\rangle^{2}\langle\min(N_{1},N_{3}) \rangle^{4}\|\rho_{L}\|_{L^{1}}\|\rho_{L^{\prime}}\|_{L^{1}}\left\|\widehat{f_ {N_{1}}}\right\|_{L^{\infty}}\left\|\widehat{f_{N_{2}}}\right\|_{L^{\infty}} \left\|\widehat{f_{N_{3}}}\right\|_{L^{\infty}}\] \[\lesssim\varepsilon_{1}^{3}M^{-2}\sum_{(\mathbf{L},\mathbf{N})\in \mathcal{B}}L^{-1}\langle\max(N_{1},N_{3})\rangle^{2}\langle\min(N_{1},N_{3}) \rangle^{4}\langle N_{1}\rangle^{-k}\langle N_{2}\rangle^{-k}\langle N_{3} \rangle^{-k}\] \[\lesssim\varepsilon_{1}^{3}M^{-\frac{11}{10}}\sum_{\begin{subarray} {c}M^{-(1+\delta)}\leq N_{1},N_{2},N_{3}\leq M^{\frac{2}{n}}\\ L^{\prime}\sim\max(N_{1},N_{3})\end{subarray}}\langle\max(N_{1},N_{3})\rangle ^{2}\langle\min(N_{1},N_{3})\rangle^{4}\langle N_{1}\rangle^{-k}\langle N_{2} \rangle^{-k}\langle N_{3}\rangle^{-k}\] \[\lesssim\varepsilon_{1}^{3}M^{-(1+\delta)}\langle N\rangle^{-k}.\]
### Acknowledgement
The first and second authors were supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No. NRF-2019R1A5A1028324). The first author was supported in part by NRF-2022R1A2C1091499. The second author was supported in part by NRF-2022R1I1A1A0105640812. The third author was supported in part by NRF-2021R1C1C1005700.
|
2310.16798 | Reachability in Continuous Pushdown VASS | Pushdown Vector Addition Systems with States (PVASS) consist of finitely many
control states, a pushdown stack, and a set of counters that can be incremented
and decremented, but not tested for zero. Whether the reachability problem is
decidable for PVASS is a long-standing open problem.
We consider continuous PVASS, which are PVASS with a continuous semantics.
This means, the counter values are rational numbers and whenever a vector is
added to the current counter values, this vector is first scaled with an
arbitrarily chosen rational factor between zero and one. We show that
reachability in continuous PVASS is NEXPTIME-complete. Our result is unusually
robust: Reachability can be decided in NEXPTIME even if all numbers are
specified in binary. On the other hand, NEXPTIME-hardness already holds for
coverability, in fixed dimension, for bounded stack, and even if all numbers
are specified in unary. | A. R. Balasubramanian, Rupak Majumdar, Ramanathan S. Thinniyam, Georg Zetzsche | 2023-10-25T17:27:22Z | http://arxiv.org/abs/2310.16798v2 | # Reachability in Continuous Pushdown VASS
###### Abstract.
Pushdown Vector Addition Systems with States (PVASS) consist of finitely many control states, a pushdown stack, and a set of counters that can be incremented and decremented, but not tested for zero. Whether the reachability problem is decidable for PVASS is a long-standing open problem.
We consider _continuous PVASS_, which are PVASS with a continuous semantics. This means, the counter values are rational numbers and whenever a vector is added to the current counter values, this vector is first scaled with an arbitrarily chosen rational factor between zero and one.
We show that reachability in continuous PVASS is NEXPTIME-complete. Our result is unusually robust: Reachability can be decided in NEXPTIME even if all numbers are specified in binary. On the other hand, NEXPTIME-hardness already holds for coverability, in fixed dimension, for bounded stack, and even if all numbers are specified in unary.
Vector addition systems, Pushdown automata, Reachability, Decidability, Complexity
Footnote †: A part of the work was done when this author was at Technical University of Munich (TUM).
Footnote †: A part of the work was done when this author was at Max Planck Institute for Software Systems (MPI-SWS).
Many verification problems can then be reduced to the _reachability_ problem (Hack, 1976), where one asks, given a PVASS \(\mathcal{M}\) and two of its configurations \(c_{0}\) and \(c_{f}\), whether there is a run of \(\mathcal{M}\) that starts at \(c_{0}\) and ends at \(c_{f}\). Unfortunately, in spite of our understanding of the reachability problem in the context of PDA (Alur et al., 2005; Chaudhuri, 2008; Reps et al., 1995; Yannakakis, 1990) and VASS models (Czerwinski and Orlikowski, 2022; Leroux, 2022; Leroux and Schmitz, 2019), the decidability of PVASS reachability remains open (Englert et al., 2021; Ganardi et al., 2022; Schmitz and Zetzsche, 2019), even with just one stack and one VASS counter. Hence a natural approach to take is to consider _overapproximations_ of the system behaviour, both to build up theoretical understanding and to approximate verification questions in practice. The approach we take in this paper is to _approximate_ the model via continuous semantics. A continuous semantics gives an over-approximation of the possible behaviors, so if the relaxed program cannot reach a location, neither can the original one.
By _continuous_ semantics of VASS, we mean that a transition is allowed to be fired _fractionally_, allowing the addition or removal of a _rational fraction_ of tokens from a counter. This model was first introduced by David and Alla in the context of Petri nets (David, 1987). Continuous VASS (\(\mathbb{Q}_{+}\)-VASS) were studied by Blondin and Haase (2017), who showed reachability and coverability are \(\mathsf{NP}\)-complete. Approximation via continuous semantics has allowed the application of SMT solvers and the development of state-of-the-art solvers from an empirical perspective (Blondin et al., 2016) to the coverability problem for VASS. More generally, relaxing integer-valued programs to continuous-valued programs is a common approximation in invariant synthesis (Colon et al., 2003; Srivastava et al., 2010).
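To make the semantics concrete, the following is a small illustrative sketch (ours, not taken from any of the cited works; all names are invented for illustration) of a single continuous firing, where the update vector of a transition is scaled by a rational factor \(\alpha\in(0,1]\):

```python
from fractions import Fraction

def continuous_step(counters, update, alpha):
    """One continuous firing: add alpha * update to the counters.

    counters: tuple of non-negative rationals (current counter values)
    update:   tuple of integers (the transition's update vector)
    alpha:    rational scaling factor with 0 < alpha <= 1
    Returns the new counter values, or None if some counter would go negative.
    """
    assert Fraction(0) < alpha <= Fraction(1)
    new = tuple(c + alpha * d for c, d in zip(counters, update))
    return new if all(c >= 0 for c in new) else None

# Firing the update (-2, +1) with fraction 1/4 from counter values (1/2, 0):
print(continuous_step((Fraction(1, 2), Fraction(0)), (-2, 1), Fraction(1, 4)))
# -> (Fraction(0, 1), Fraction(1, 4))
```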
In addition to its use as an overapproximation, the continuous semantics captures the exact behavior in systems where update vectors represent change rates (to real variables such as temperature, energy consumption) per time unit. Here, fractional addition corresponds to executing steps that take at most one time unit. For example, \(\mathbb{Q}_{+}\)-VASS are constant-rate multi-mode systems (Alur et al., 2012) where each action takes at most one time unit. Continuous PVASS can then be seen as recursive programs with such constant-rate dynamics.
**Contribution** We study PVASS with continuous semantics (denoted by \(\mathbb{Q}_{+}\)-PVASS), where we allow fractional transitions on counters, but retain the discrete nature of the stack. Hence, a _configuration_ is a tuple \((q,\gamma,\mathbf{v})\) where \(q\) is the control-state, \(\gamma\) is the stack content and \(\mathbf{v}\) represents the counter values. We show that reachability is decidable, and we provide a comprehensive complexity landscape for reachability, coverability, and state reachability. The _reachability problem_ asks for given configurations \((q,\gamma,\mathbf{v})\) and \((q^{\prime},\gamma^{\prime},\mathbf{v}^{\prime})\), whether from \((q,\gamma,\mathbf{v})\), the system can reach \((q^{\prime},\gamma^{\prime},\mathbf{v}^{\prime})\). The _coverability problem_ asks for given configurations \((q,\gamma,\mathbf{v})\) and \((q^{\prime},\gamma^{\prime},\mathbf{v}^{\prime})\), whether from \((q,\gamma,\mathbf{v})\), the system can reach a configuration of the form \((q^{\prime},\gamma^{\prime},\mathbf{v}^{\prime\prime})\) where \(\mathbf{v}^{\prime\prime}\geq\mathbf{v}^{\prime}\). Moreover, _state reachability_ asks for a given configuration \((q,\gamma,\mathbf{v})\) and a state \(q^{\prime}\), whether from \((q,\gamma,\mathbf{v})\), the system can reach a configuration of the form \((q^{\prime},\gamma^{\prime},\mathbf{v}^{\prime})\) for some \(\gamma^{\prime}\) and \(\mathbf{v}^{\prime}\).
Our main result is the following:
**Theorem 1.1**.: _Reachability in \(\mathbb{Q}_{+}\)-PVASS is \(\mathsf{NEXPTIME}\)-complete._
The \(\mathsf{NEXPTIME}\)-completeness is unusually robust. Specifically, the complexity is not affected by (i) whether we consider reachability or coverability, or (ii) whether the number of counters is part of the input or fixed, or (iii) whether counter values (in updates and configurations) are encoded in unary or binary. This is summarized in the following stronger version:
**Theorem 1.2**.: _Reachability in \(\mathbb{Q}_{+}\)-PVASS, with binary encoded numbers, is in \(\mathsf{NEXPTIME}\). Moreover, \(\mathsf{NEXPTIME}\)-hardness already holds for coverability in \(85\)-dimensional \(\mathbb{Q}_{+}\)-PVASS, with unary encoded numbers, and bounded stack-height._
Further, if we allow the configurations to be encoded in binary, then hardness already holds for coverability in \(13\)-dimensional \(\mathbb{Q}_{+}\)-PVASS.
Our result is in stark contrast to reachability problems in classical VASS: It is well-known that there, coverability is EXPSPACE-complete [11], whereas general reachability is Ackermann-complete [12, 13]. Furthermore, fixing the dimension brings the complexity down to primitive-recursive [13] (or from EXPSPACE to PSPACE in the case of coverability [11]).
Another surprising aspect is that for continuous PVASS, the coverability problem and the state reachability problem do not have the same complexity. We also show:
**Theorem 1.3**.: _The state reachability problem for \(\mathbb{Q}_{+}\)-PVASS is \(\mathsf{NP}\)-complete._
This is also in contrast to the situation in PVASS: There is a simple reduction from the reachability problem to the coverability problem [13], and from there to state reachability. Thus, the three problems are polynomial-time inter-reducible for PVASS.
Our results are based on a number of novel technical ingredients. Especially for our lower bounds, we show a number of subtle constructions that enable us to encode discrete computations of bounded runs of counter machines in the continuous semantics.
**Ingredient I: Upper bound via rational arithmetic** We prove the \(\mathsf{NEXPTIME}\) upper bound by observing that a characterization of runs in a cyclic \(\mathbb{Q}_{+}\)-VASS (meaning: the initial state is also the only final one) by Blondin and Haase [14] still holds in the more general setting of cyclic \(\mathbb{Q}_{+}\)-PVASS. We apply this observation by combining several (known) techniques. As is standard in the analysis of PVASS [10, 13], we view runs as derivations in a suitable grammar. As usual, one can then decompose each derivation tree into an acyclic part and "pump derivations" of the form \(A\xRightarrow{*}uAv\) for some non-terminal \(A\). Such pumps, in turn, can be simulated by a cyclic \(\mathbb{Q}_{+}\)-PVASS. Here, to simulate \(A\xRightarrow{*}uAv\), one applies \(u\) as is and one applies \(v\) in reverse on a separate set of counters. This idea of simulating grammar derivations by applying "the left part forward" and "the right part backward" is a recurring theme in the literature on context-free grammars (see, e.g. [1, 13, 14, 15, 16]) and has been applied to PVASS by Leroux et al. [13, Section 5].
As a consequence, reachability can be decided by guessing an exponential-size formula of Existential Linear Rational Arithmetic (ELRA). Since satisfiability for ELRA is in \(\mathsf{NP}\), this yields an \(\mathsf{NEXPTIME}\) upper bound.
**Ingredient II: High precision and zero tests** For our lower bound, we reduce from reachability in machines with two counters, which can only be doubled and incremented. A run in these machines is accepted if it is of a particular length (given in binary) and has both counters equal in the end. We call such machines \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\), where \(\mathrm{RL}\) stands for "run length". This problem is \(\mathsf{NEXPTIME}\)-hard by a reduction from a variant of the Post Correspondence Problem where the word pair is restricted to have a specified length (given in binary) [10].
We give the desired reduction from \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\) through a series of intricate constructions that "control" the fractional firings. We go through an intermediate machine model called \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\). A \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\) is just like a \(\mathbb{Q}_{+}\)-VASS, except that the counter values are constrained to be in the interval \([0,1]\) and we allow zero tests. Further, we only consider runs up to a particular length (given in binary), as indicated by the \(\mathrm{RL}\) subscript. When reducing from \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\), we are confronted with two challenges: First, a \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\) cannot store numbers beyond \(1\) and second, a \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\) cannot natively double numbers. The key idea here is that, since we only consider runs of a \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\) up to a given length \(m\), the counter values of a \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\) are bounded by \(2^{m}\) along any run. Hence, instead of storing the counter values of a \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\) exactly, we use _exponential precision_. We encode a number \(n\in\mathbb{N}\) by \(\frac{n}{2^{m}}\in[0,1]\). Since then all the values are in \([0,1]\), we can double the counter values in a \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\) by forcing the firing fraction of the \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\) to be a particular value. The firing fraction is controlled, in turn, by means of the zero tests.
**Ingredient III: Constructing precise numbers** In order to simulate increments of the \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\) in our \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\), we need to be able to add \(\frac{1}{2^{m}}\) to a counter. To this end, we present a \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\) gadget of polynomial size that produces the (exponentially precise) number \(\frac{1}{2^{m}}\) in a given counter. The idea is to start with \(1\) in the counter and halve it \(m\) times. The trick is to use an additional counter that goes up by \(1/m\) for each halving step. Checking this counter to be \(1\) in the end ensures that exactly \(m\) halvings have been performed.
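As an illustration of the counting argument behind this gadget, the following Python sketch (ours, not part of the construction itself) models a halving phase with exact fractions: each halving step divides the main counter by two and adds \(1/m\) to the auxiliary counter, and the final test "auxiliary counter equals \(1\)" succeeds precisely when \(m\) halvings were performed.

```python
from fractions import Fraction

def halving_phase(m: int, steps: int):
    """Model of the counting argument: start with 1 in the main counter,
    perform `steps` halvings, and let an auxiliary counter grow by 1/m
    per halving; the final check demands that the auxiliary counter is 1."""
    main, aux = Fraction(1), Fraction(0)
    for _ in range(steps):
        main /= 2              # one halving step
        aux += Fraction(1, m)  # bookkeeping: 1/m per halving
    return main, aux == 1      # (value produced, does the final check pass?)

m = 7
print(halving_phase(m, m))      # (Fraction(1, 128), True)
print(halving_phase(m, m - 1))  # (Fraction(1, 64), False): aux is only 6/7
```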
**Ingredient IV: Zero tests via run length** We then reduce \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\) to \(\mathbb{Q}_{+}\text{-VASS}_{\text{RL}}\), which are \(\mathbb{Q}_{+}\)-VASS with a run-length constraint. Here, we need to (i) make sure that the counter values remain in \([0,1]\) and (ii) simulate zero tests. We achieve both by introducing a _complement counter_ \(\bar{x}\) for each counter \(x\), where it is guaranteed that \(x+\bar{x}=1\) at all times. This means that instead of checking \(x=0\), we can check \(\bar{x}=1\) by subtracting \(1\) from \(\bar{x}\). However, this does not suffice: we need to ensure that the firing fraction is exactly \(1\) in these steps. Here, the key idea is that whenever we check \(\bar{x}=1\), we also increment at the same time (and thus with the same firing fraction) a separate counter called the _controlling counter_, which in the end must equal \(Z\), the total number of zero tests. This exploits the fact that every run attempts the same pre-determined number of zero tests due to the run-length constraint. If the controlling counter reaches the value \(Z\) at the very end, then we are assured that every zero test along the run was indeed performed correctly.
Finally, we reduce from \(\mathbb{Q}_{+}\text{-VASS}_{\text{RL}}\) to \(\mathbb{Q}_{+}\text{-PVASS}\) by using the pushdown stack to count from zero up to a number specified in binary. This employs a standard trick for encoding a binary number on the stack, where the least significant bit is on top. We further show that the final \(\mathbb{Q}_{+}\text{-PVASS}\) that we construct has bounded stack-height, \(13\) counters, and also that the target configuration can be reached from the source configuration if and only if the target can be covered from the source. This proves that coverability is hard even for a constant number of counters.
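The following Python sketch illustrates only the encoding trick mentioned above, namely a binary number kept on a stack with the least significant bit on top and how an increment manipulates it; it is not the \(\mathbb{Q}_{+}\)-PVASS construction itself, and the helper names are ours.

```python
def increment(stack):
    """Increment a binary number stored on a stack (list, top at the end)
    with the least significant bit on top: pop the trailing 1s, turn the
    first 0 into a 1 (or push a new leading 1), then push the popped 1s
    back as 0s."""
    popped_ones = 0
    while stack and stack[-1] == 1:
        stack.pop()
        popped_ones += 1
    if stack:
        stack.pop()                 # this bit was a 0
    stack.append(1)                 # it becomes 1 (or a new most significant bit)
    stack.extend([0] * popped_ones)
    return stack

def to_int(stack):
    # the bottom of the stack holds the most significant bit
    return int("".join(map(str, stack)) or "0", 2)

s = []                              # stack encoding of 0
for n in range(1, 10):
    increment(s)
    assert to_int(s) == n
print(s)                            # [1, 0, 0, 1], i.e. 9 in binary
```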
**Ingredient V: Unary encodings** The above reduction produces instances of \(\mathbb{Q}_{+}\text{-PVASS}\) where the configurations are encoded in binary. Proving hardness for unary encodings requires more ideas. First, by using a trick akin to exponential precision from above, we show that hardness of coverability in \(\mathbb{Q}_{+}\text{-PVASS}\) holds already when all the values of the given configurations are less than \(1\). Next, by reusing the doubling and the halving gadgets from Ingredients II and III, we show that for any fraction \(p/2^{k}\) where \(p\) is given in binary, there exists an _amplifier_, i.e., there is a \(\mathbb{Q}_{+}\text{-VASS}\) of polynomial size in \(\log(p)\) and \(k\), which, starting from a unary-encoded configuration, is able to put the value \(p/2^{k}\) in a given counter. We then simply plug in a collection of these amplifiers before and after our original \(\mathbb{Q}_{+}\text{-PVASS}\) to get the desired result.
**Related work.** There have been several attempts to study restrictions or relaxations of the PVASS reachability problem. For example, reachability is decidable when one is allowed to simultaneously test the first \(i\) counters of a VASS for zero for any \(i\) [10]; this model can be seen as a special case of PVASS [11]. Furthermore, the coverability problem in one-dimensional PVASS is decidable [13] and PSPACE-hard [14]. Reachability is decidable for _bidirected_ PVASS [1], although the best known upper bound is Ackermann time (primitive recursive time in fixed dimension). Our work is in the same spirit. The continuous semantics reduces the complexity of reachability from Ackermann-complete for VASS to NP-complete [15] (and even to P for Petri nets [12]). Our results show that the presence of a stack retains decidability, but allows exponentially more computational power.
All missing proofs can be found in the appendix of this paper.
## 2. Preliminaries
We write \(\mathbb{Q}\) for the set of rationals and \(\mathbb{Q}_{+}\) for the set of nonnegative rationals. Vectors over \(\mathbb{Q}\) (or \(\mathbb{Q}_{+}\)) are written in bold (\(\mathbf{u},\mathbf{v}\) etc.) and are represented as a pair of natural numbers (numerator and denominator) for each rational. Throughout this paper, all numbers will be encoded in binary, unless stated otherwise. Note that this means that each rational number is given as a pair of natural numbers, both of them encoded in binary.
**Machine models.** A \(d\)-dimensional _Continuous Vector Addition System with States_ (\(d\)-\(\mathbb{Q}_{+}\)-VASS or simply \(\mathbb{Q}_{+}\)-VASS) \(\mathcal{M}=(Q,T,\Delta)\) consists of a finite set \(Q\) of states, a finite set \(T\subseteq\mathbb{Z}^{d}\) of transitions, and a finite set \(\Delta\subseteq Q\times T\times Q\) of rules. We will on occasion consider an infinite \(\mathbb{Q}_{+}\)-VASS where \(T\) continues to be finite, but \(Q\) and \(\Delta\) are infinite.
A _configuration_ of \(\mathcal{M}\) is a tuple \(C=(q,\mathbf{v})\) where \(q\) is a state and \(\mathbf{v}\in\mathbb{Q}_{+}^{d}\) is a valuation of the counters. We use the notations \(\mathrm{state}(C),\mathrm{val}(C),C(i)\) to denote \(q,\mathbf{v},\mathbf{v}(i)\) respectively. Let \(I=(q,t,q^{\prime})\in\Delta\) be a rule and let \(\alpha\in(0,1]\) be the _firing fraction_. A _step_ from a configuration \(C\) to another configuration \(C^{\prime}\) by means of the pair \((\alpha,I)\) (denoted by \(C\xrightarrow{\alpha I}C^{\prime}\)) is possible iff \(\mathrm{state}(C)=q,\mathrm{state}(C^{\prime})=q^{\prime}\) and \(\mathrm{val}(C^{\prime})=\mathrm{val}(C)+\alpha t\). A run of \(\mathcal{M}\) is a finite sequence of steps \(C_{0}\xrightarrow{\alpha_{1}I_{1}}C_{1}\xrightarrow{\alpha_{2}I_{2}}\ldots\xrightarrow{\alpha_{n}I_{n}}C_{n}\), where \(\alpha_{1}I_{1}\ldots\alpha_{n}I_{n}\) is called a _firing sequence_, and we say \(C_{n}\) is _reachable_ from \(C_{0}\) (written \(C_{0}\xrightarrow{\alpha_{1}I_{1},\alpha_{2}I_{2},\ldots,\alpha_{n}I_{n}}C_{n}\) or \(C_{0}\xrightarrow{*}C_{n}\)).
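To make the step semantics concrete, here is a minimal Python sketch (function names are ours; control states are omitted, only the counter part of a step is modelled) that applies a firing sequence of pairs \((\alpha,t)\) to a counter valuation and rejects it as soon as a counter would leave \(\mathbb{Q}_{+}\).

```python
from fractions import Fraction

def fire(valuation, firing_sequence):
    """Apply the counter part of a Q+-VASS firing sequence.
    Each element is a pair (alpha, t) with 0 < alpha <= 1 and t an integer
    update vector; returns the final valuation, or None as soon as some
    counter would become negative (i.e. the step is not enabled)."""
    v = list(valuation)
    for alpha, t in firing_sequence:
        assert 0 < alpha <= 1
        v = [x + alpha * d for x, d in zip(v, t)]
        if any(x < 0 for x in v):
            return None
    return v

start = [Fraction(1), Fraction(0)]
seq = [(Fraction(1, 2), (-1, 1)), (Fraction(1, 4), (-2, 0))]
print(fire(start, seq))  # [Fraction(0, 1), Fraction(1, 2)]
```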
We assume the reader is familiar with context-free grammars and give basic definitions and notation (see, e.g., (Sipser, 2012)). A _context-free grammar_ \(\mathcal{G}=(S,N,\Sigma,P)\) consists of a finite set of nonterminals \(N\), a starting nonterminal \(S\), a finite alphabet \(\Sigma\) and a finite set of production rules \(P\subseteq N\times(N\cup\Sigma)^{*}\). We will assume that \(\mathcal{G}\) is in Chomsky Normal Form. As usual, \(\Rightarrow\) denotes the one-step derivation relation and \(\xRightarrow{*}\) is its reflexive, transitive closure. A word \(w\in\Sigma^{*}\) belongs to the language \(L(\mathcal{G})\) of the grammar iff \(S\xRightarrow{*}w\).
A _Continuous Pushdown_ VASS (\(\mathbb{Q}_{+}\)-PVASS) is a \(\mathbb{Q}_{+}\)-VASS additionally equipped with a stack. Formally, it is a tuple \(\mathcal{M}=(Q,\Gamma,T,\Delta)\) where \(Q\) is a finite set of states, \(\Gamma\) is a finite stack alphabet, \(T\subseteq\mathbb{Z}^{d}\times(\Gamma\cup\bar{\Gamma}\cup\{\epsilon\})\) is a finite set of transitions, and \(\Delta\subseteq Q\times T\times Q\) is a finite set of rules. A configuration \(C=(q,w,\mathbf{v})\) of \(\mathcal{M}\) additionally contains the stack content \(w\in\Gamma^{*}\), and we write \(w=\mathrm{stack}(C)\). A _step_ \(C\xrightarrow{\alpha I}C^{\prime}\) using rule \(I=(q,a,t,q^{\prime})\) is possible iff \(\mathrm{state}(C)=q\), \(\mathrm{state}(C^{\prime})=q^{\prime}\), \(\mathrm{val}(C^{\prime})=\mathrm{val}(C)+\alpha t\), and one of the following holds: (1) \(a\in\Gamma\) and \(\mathrm{stack}(C^{\prime})=a\,\mathrm{stack}(C)\), (2) \(a\in\bar{\Gamma}\) and \(\mathrm{stack}(C)=a\,\mathrm{stack}(C^{\prime})\), or (3) \(a=\epsilon\) and \(\mathrm{stack}(C)=\mathrm{stack}(C^{\prime})\). The notions of run, firing sequence, and reachability are defined as for \(\mathbb{Q}_{+}\)-VASS. In some cases, we will need to extend the notion of step to allow vectors in \(\mathbb{Q}^{d}\) in a configuration rather than just \(\mathbb{Q}_{+}^{d}\). We then explicitly specify this by a subscript, writing \(\rightarrow_{\mathbb{Q}_{+}}\) or \(\rightarrow_{\mathbb{Q}}\) to make it clear.
**Decision Problems.** The reachability problem for \(\mathbb{Q}_{+}\)-PVASS is defined as follows:
Given a \(\mathbb{Q}_{+}\)-PVASS \(\mathcal{M}\) and two of its configurations \(C_{0},C_{1}\), is \(C_{1}\) reachable from \(C_{0}\)?
The coverability problem for \(\mathbb{Q}_{+}\)-PVASS is defined as follows:
Given a \(\mathbb{Q}_{+}\)-PVASS \(\mathcal{M}\) and two of its configurations \(C_{0},C_{1}=(q_{1},w_{1},\mathbf{v}_{1})\), does there exist a configuration \(C^{\prime}=(q_{1},w_{1},\mathbf{v}_{1}^{\prime})\) with \(\mathbf{v}_{1}^{\prime}\geq\mathbf{v}_{1}\) such that \(C^{\prime}\) is reachable from \(C_{0}\)?
The state reachability problem is defined as follows:
Given a \(\mathbb{Q}_{+}\)-PVASS \(\mathcal{M}\), a configuration \(C_{0}\) and a state \(q\), does there exist a configuration \(C_{1}\) with \(\mathrm{state}(C_{1})=q\) that is reachable from \(C_{0}\)?
**Example 2.1**.: Let us consider the \(\mathbb{Q}_{+}\)-PVASS from Figure 1, which we shall denote by \(\mathcal{M}\). It has \(2\) counters and stack symbols \(a\) and \(b\). Recall that a label \(a\) represents a push of \(a\) and \(\bar{a}\) represents a pop of \(a\). There are only two outgoing rules from the state \(q_{0}\): the first rule \(r_{1}\) decrements the first counter by \(1\), does not modify the second counter and pushes \(a\) onto the stack, and the second rule \(r_{2}\) decrements the second counter by \(1\), does not modify the first counter and pushes \(b\) onto the stack. Hence, starting from the configuration \((q_{0},\varepsilon,(0,0))\) it is not possible to reach a configuration whose state is \(q_{1}\). This implies that the input \((\mathcal{M},(q_{0},\varepsilon,(0,0)),q_{1})\) is a negative instance of the state reachability problem.
On the other hand, starting from \((q_{0},\varepsilon,(1,1))\), by firing \(r_{1}\) with fraction \(0.5\), we can reach \((q_{1},a,(0.5,1))\). This means that \((\mathcal{M},(q_{0},\varepsilon,(1,1)),q_{1})\) is a positive instance of the state reachability problem. Moreover, this also means that \((\mathcal{M},(q_{0},\varepsilon,(1,1)),(q_{1},a,(0.5,0.5)))\) is a positive instance of the coverability problem.
However, the input \((\mathcal{M},(q_{0},\varepsilon,(1,1)),(q_{1},a,(1,1)))\) is a negative instance of the coverability problem. To see this, suppose for the sake of contradiction, a run exists between \((q_{0},\varepsilon,(1,1))\) and \((q_{1},a,(n_{1},n_{2}))\) for some \(n_{1}\geq 1,n_{2}\geq 1\). The first step of this run has to fire either \(r_{1}\) or \(r_{2}\) by some non-zero fraction \(\alpha\). Suppose \(r_{1}\) is fired. (The argument is similar for the other case). Then \(a\) gets pushed onto the stack and the value of the first counter becomes \(1-\alpha\). From that point onwards, the only rules that can be fired are the ones going in and out of the state \(q_{2}\), both of which do not increment the first counter. Hence, the first counter will have \(1-\alpha\) as its value throughout the run, which leads to a contradiction.
Finally, note that starting from \((q_{0},\varepsilon,(1.1,0.6))\), it is possible to reach \((q_{1},a,(1,1))\): first, fire \(r_{1}\) with fraction \(0.1\), then fire the incoming rule to \(q_{2}\) (which pops \(a\)) with fraction \(1\) and then fire the outgoing rule from \(q_{2}\) (which pushes \(a\)) with fraction \(0.4\). Hence, \((\mathcal{M},(q_{0},\varepsilon,(1.1,0.6)),(q_{1},a,(1,1)))\) is a positive instance of the reachability problem.
## 3. Upper Bound for Reachability
Figure 1. An example \(\mathbb{Q}_{+}\)-PVASS with \(2\) counters. Here \(a\) and \(b\) are the stack symbols.
We first prove the NEXPTIME upper bound in Theorem 1.2. To this end, we use a standard language-theoretic translation to slightly rephrase the reachability problem in \(\mathbb{Q}_{+}\)-PVASS (this slight change in viewpoint is also taken in other work on PVASS (Englert et al., 2021; Leroux et al., 2015)). Observe that when we are given two configurations \(C_{0}\) and \(C_{1}\) of a \(\mathbb{Q}_{+}\)-PVASS, we want to know whether there exists a sequence \(w\in(\mathbb{Z}^{d})^{*}\) of update vectors such that (i) there exists a sequence \(\sigma\) of transitions that applies \(w\), such that \(\sigma\) is a valid run from \(C_{0}\) to \(C_{1}\) in the pushdown automaton underlying the \(\mathbb{Q}_{+}\)-PVASS (thus ignoring the counter updates) and (ii) there exist firing fractions for each vector in \(w\) such that adding the resulting update vectors leads from \(\mathbf{u}\) to \(\mathbf{v}\), where \(\mathbf{u},\mathbf{v}\) are the vectors in the configurations \(C_{0}\) and \(C_{1}\). Now observe that the set of words \(w\) as in condition (i) is a context-free language. Therefore, we can phrase the reachability problem in \(\mathbb{Q}_{+}\)-PVASS by asking for a word in a context-free language that satisfies condition (ii).
Let us make condition (ii) precise. Let \(\Sigma\subseteq\mathbb{Z}^{d}\) be the finite set of vectors that appear as transition labels in our \(\mathbb{Q}_{+}\)-PVASS. Given two configurations \(\mathbf{u},\mathbf{v}\in\mathbb{Q}_{+}^{d}\) and a word \(w=w_{1}w_{2}\ldots w_{n}\in\Sigma^{*}\) with each \(w_{i}\in\Sigma\), we say that \(\mathbf{u}\xrightarrow{w}_{\mathbb{Q}_{+}}\mathbf{v}\) iff there exist \(\alpha_{1},\ldots,\alpha_{n}\) such that \(\mathbf{u}\xrightarrow{\alpha_{1}w_{1},\alpha_{2}w_{2},\ldots,\alpha_{n}w_{n }}_{\mathbb{Q}_{+}}\mathbf{v}\). Similarly, given a language \(L\subseteq\Sigma^{*}\), we say that \(\mathbf{u}\xrightarrow{L}_{\mathbb{Q}_{+}}\mathbf{v}\) iff \(\mathbf{u}\xrightarrow{w}_{\mathbb{Q}_{+}}\mathbf{v}\) for some \(w\in L\). By our observation above, the reachability problem in \(\mathbb{Q}_{+}\)-PVASS is equivalent to the following problem:
**Given**: A set of vectors \(\Sigma\subseteq\mathbb{Z}^{d}\), a context-free language \(L\subseteq\Sigma^{*}\) and \(\mathbf{u},\mathbf{v}\in\mathbb{Q}_{+}^{d}\).
**Question**: Does \(\mathbf{u}\xrightarrow{L}_{\mathbb{Q}_{+}}\mathbf{v}\)?
We solve this problem using results about the existential fragment of the first-order theory of \((\mathbb{Q},+,<)\), which we call Existential Linear Rational Arithmetic (ELRA). Our algorithm constructs an ELRA formula for the following relation \(R_{L}\).
Definition 3.1: The _reachability relation_\(R_{L}\) corresponding to a context-free language \(L\subseteq(\mathbb{Z}^{d})^{*}\) is given by \(R_{L}=\{(\mathbf{u},\mathbf{v})\in\mathbb{Q}_{+}^{d}\times\mathbb{Q}_{+}^{d} \mid\mathbf{u}\xrightarrow{L}_{\mathbb{Q}_{+}}\mathbf{v}\}\).
The following definition of computing a formula using a non-deterministic algorithm is inspired by the definition of _leaf language_ from complexity theory [20]. We say that one can _construct an ELRA formula in_ NEXPTIME _(resp._ NP_) for a relation \(R\subseteq\mathbb{Q}_{+}^{n}\) if there is a non-deterministic exponential (resp. polynomial) time-bounded Turing machine such that every accepting path of the machine computes an ELRA formula such that if \(\varphi_{1},\ldots,\varphi_{m}\) are the produced formulae, then their disjunction \(\bigvee_{i=1}^{m}\varphi_{i}\) defines the relation \(R\). Here, a formula \(\phi\) is said to define a relation \(R\subseteq\mathbb{Q}_{+}^{n}\) if for every \(n\)-tuple \(\mathbf{u}\in\mathbb{Q}_{+}^{n}\), we have \(R(\mathbf{u})\) holds iff \(\phi(\mathbf{u})\) is true of the rational numbers.
Proposition 3.2: _Given a context-free language \(L\subseteq(\mathbb{Z}^{d})^{*}\) one can construct in_ NEXPTIME _an ELRA formula for the relation \(R_{L}\)._
Since the truth problem for ELRA formulae can be solved in NP[21], the NEXPTIME upper bound follows from Proposition 3.2: Our algorithm would first non-deterministically compute a disjunct \(\varphi\) of the ELRA formula for \(R_{L}\) and then check the truth of \(\varphi\) in NP in the size of \(\varphi\). This is a non-deterministic algorithm that runs in exponential time.
Therefore, the remainder of this section is devoted to proving Proposition 3.2. The key difficulty lies in understanding the reachability relation along _pumps_, which are derivations of the form \(A\xRightarrow{*}wAw^{\prime}\) for some non-terminal \(A\).
Definition 3.3: Let \(\mathcal{G}\) be a context-free grammar over \(\mathbb{Z}^{d}\) and \(A\) a non-terminal in \(\mathcal{G}\). The _pump reachability relation_ is defined as
\[P_{A}=\left\{(\mathbf{u},\mathbf{v},\mathbf{u}^{\prime},\mathbf{v}^{\prime})\in\mathbb{Q}_{+}^{d}\times\mathbb{Q}_{+}^{d}\times\mathbb{Q}_{+}^{d}\times\mathbb{Q}_{+}^{d}\mid\exists w,w^{\prime}:A\xRightarrow{*}wAw^{\prime},\ \mathbf{u}\xrightarrow{w}_{\mathbb{Q}_{+}}\mathbf{v},\ \mathbf{u}^{\prime}\xrightarrow{w^{\prime}}_{\mathbb{Q}_{+}}\mathbf{v}^{\prime}\right\}.\]
Theorem 3.4: _Given a context-free grammar \(\mathcal{G}\) over \(\mathbb{Z}^{d}\) and a non-terminal \(A\) in \(\mathcal{G}\), one can compute in_ NEXPTIME _an ELRA formula for the relation \(P_{A}\)._
Before proving Theorem 3.4, we first show how Proposition 3.2 follows from Theorem 3.4.
Let \(\mathcal{G}\) be a grammar for the language \(L\) in Proposition 3.2. Consider an arbitrary derivation tree \(\mathcal{T}\) of \(\mathcal{G}\). We say a derivation tree is _pumpfree_ if along every path of the tree, every nonterminal occurs at most once. Clearly, an arbitrary derivation tree \(\mathcal{T}\) can be obtained from a pumpfree
tree by inserting "pumping" derivations of the form \(A\xRightarrow{*}wAw^{\prime}\). Since every pumpfree tree is exponentially bounded in size (its depth is bounded by the number of nonterminals \(|N|\)), there can only be exponentially many such pumps that need to be inserted for any given nonterminal \(A\).
A pump \(A\xRightarrow{*}vAx\) on a nonterminal \(A\) in an arbitrary derivation tree \(\mathcal{T}\) can be replaced by additional terminal letters called _pump letters_ \((A,\mathbf{n})\) and \(\overline{(A,\mathbf{n})}\) as shown in Fig. 2. Let the two occurrences of \(A\) be the first and last occurrences of \(A\) along a path. Here \(\mathbf{n}\in\{0,1\}^{*}\) is a vector denoting the node which is labelled by the first \(A\). Note that we assume that the grammar is in Chomsky Normal Form, and hence nodes in the derivation tree can be identified in this manner since the trees are binary trees. The tree \(\mathcal{T}^{\prime}\) contains four children at \(\mathbf{n}\), with the first and fourth being pump letters and the second and third being labelled by the nonterminals \(B,C\) occurring in the production \(A\to BC\). It could also be the case that the production used is of the form \(A\to a\), in which case there are only three children: the middle one being the letter \(a\) and the other two pump letters. Repeating this replacement procedure along each path, we finally obtain a pumpfree tree \(\tilde{\mathcal{T}}\) which does not have a repeated nonterminal along any path. Since \(\tilde{\mathcal{T}}\) is exponentially bounded in size, the number of pump terminals introduced is also exponentially bounded. In particular, every such vector \(\mathbf{n}\) lies in \(\{0,1\}^{h}\) for some \(h\leq|N|\), where \(N\) is the set of nonterminals of \(\mathcal{G}\).
The algorithm guesses an exponential-sized tree \(\tilde{\mathcal{T}}\) and verifies the consistency of node labels between parent and children nodes in the tree using the rule set \(P\) of the grammar. It then constructs a formula \(\phi_{\tilde{\mathcal{T}}}\) as follows. The formula \(\phi_{\tilde{\mathcal{T}}}\) contains variables for a sequence of fractions and vectors \(\mathbf{x}_{0},\alpha_{1},\mathbf{x}_{1},\ldots,\alpha_{l},\mathbf{x}_{l}\), where \(l\) is the number of leaf nodes in \(\tilde{\mathcal{T}}\). Let \(\gamma_{i}\) be the label of the \(i^{th}\) leaf node. The constructed formula is the conjunction of the following formulae \(\phi_{i}\) for each leaf \(i\):
* if \(\gamma_{i}\) is a nonpump letter then \(\phi_{i}:=(\mathbf{x}_{i}+\alpha_{i+1}\gamma_{i}=\mathbf{x}_{i+1})\), else
* if \(\gamma_{i}\) is a pump letter \((A,\mathbf{n})\), then \(\mathbf{x}_{i},\mathbf{x}_{i+1}\) are plugged into an instantiation of the formula obtained from Theorem 3.4 for \(A\), along with the corresponding vectors \(\mathbf{x}_{j},\mathbf{x}_{j+1}\) for the dual letter \(\overline{(A,\mathbf{n})}=\gamma_{j}\), to give the formula \(\phi_{i,j}\). In this case, \(\phi_{i}=\phi_{j}=\phi_{i,j}\).
The formula \(\phi_{\tilde{\mathcal{T}}}\) existentially quantifies over the variables \(\mathbf{x}_{1},\ldots,\mathbf{x}_{l-1}\) as well as the firing fractions \(\alpha_{1},\ldots,\alpha_{l}\), while \(\mathbf{x}_{0}\) and \(\mathbf{x}_{l}\) are free variables corresponding to \(\mathbf{u}\) and \(\mathbf{v}\) respectively. The final formula we want is \(\bigvee_{\tilde{\mathcal{T}}}\phi_{\tilde{\mathcal{T}}}\).
Figure 2. Removal of a single cycle in an arbitrary derivation tree \(\mathcal{T}\) to get a tree \(\mathcal{T}^{\prime}\) with additional leaves of the form \((A,n)\) and \(\overline{(A,n)}\).
### Capturing pump reachability relations
It remains to prove Theorem 3.4. The key observation is that a characterization of Blondin and Haase (2017) of the existence of "cyclic runs" (i.e. ones that start and end in the same control state) in \(\mathbb{Q}_{+}\)-VASS actually also applies to \(\mathbb{Q}_{+}\)-VASS with infinitely many control states. Thus, the first step is to translate the setting of pumps into that of cyclic \(\mathbb{Q}_{+}\)-VASS with infinitely many control states. It is more convenient for us to use algebraic terminology, so we will phrase this as a translation to the case of semigroups. We say a language \(K\subseteq\Sigma^{*}\) is a _semigroup_ if it is closed under concatenation, i.e., for any \(u,v\in K\), we have \(uv\in K\). We will show that for our particular cyclic \(\mathbb{Q}_{+}\)-VASS, the characterization of Blondin and Haase allows us to build an exponential-sized ELRA formula.
**Reduction to semigroups** Let us first show that the pump reachability relations \(P_{A}\) can be captured using context-free semigroups. In this section, we often write letters \(a\) in normal font, even though they are vectors in \(\mathbb{Z}^{d}\). Vectors in \(\mathbb{Q}_{+}^{d}\) are represented by bold font e.g. \(\mathbf{u}\).
The following lemma uses the idea of simulating grammar derivations by applying "the left part forward" and "the right part backward", which is a recurring theme in the literature on context-free grammars (see, e.g. (Baumann et al., 2023; Berstel, 1979; Lohrey et al., 2022; Reps et al., 2016; Rosenberg, 1967)) and has been applied to PVASS by Leroux et al. (2015, Section 5).
**Lemma 3.5**.: _Given a grammar \(\mathcal{G}\) over \(\mathbb{Z}^{d}\) and a non-terminal \(A\) in \(\mathcal{G}\), one can compute, in polynomial time, a context-free language \(K\subseteq(\mathbb{Z}^{2d})^{*}\) such that (i) \(K\) is a semigroup and (ii) for any \(\mathbf{u},\mathbf{v},\mathbf{u}^{\prime},\mathbf{v}^{\prime}\in\mathbb{Q}^{d}\), we have \((\mathbf{u},\mathbf{v}^{\prime})\xrightarrow{K}_{\mathbb{Q}_{+}}(\mathbf{u}^{ \prime},\mathbf{v})\) if and only if \((\mathbf{u},\mathbf{v},\mathbf{u}^{\prime},\mathbf{v}^{\prime})\in P_{A}\)._
Suppose \((S,N,\Sigma,P)\) is a context-free grammar in Chomsky normal form, with \(\Sigma\subseteq\mathbb{Z}^{d}\) and \(A\in N\). The idea is to take a derivation tree for \(A\xrightarrow{*}wAw^{\prime}\) and consider the path from the root to the \(A\) in the derived word, see Fig. 3 on the left. We transform the tree as follows. Each subtree on the left of this path (\(\ell_{1}\) and \(\ell_{2}\) in the figure) is left unchanged, except that each produced vector \(a\in\mathbb{Z}^{d}\) is padded so as to obtain \((a,0,\ldots,0)\in\mathbb{Z}^{2d}\). In the figure, the resulting subtrees are \(\overrightarrow{\ell_{1}},\overrightarrow{\ell_{2}}\). Each subtree on the right (\(r_{1}\) and \(r_{2}\) in the figure), however, is moved to the left side of the path and it is _reversed_, meaning in particular that the word produced by it is reversed. Moreover, each vector \(b\in\mathbb{Z}^{d}\) occurring at a leaf is turned into \((0,\ldots,0,-b)\in\mathbb{Z}^{2d}\).
Then, every word generated by the new grammar is of the form \(\overrightarrow{x_{1}}\overleftarrow{y_{n}}\cdots\overrightarrow{x_{n}}\overleftarrow{y_{1}}\), where \(x_{1}\cdots x_{n}y_{1}\cdots y_{n}\) is the word produced by the original grammar. Here, for a word \(w\in(\mathbb{Z}^{d})^{*}\), \(\overrightarrow{w}\) is obtained from \(w\) by replacing each vector \(a\in\mathbb{Z}^{d}\) in \(w\) by \((a,0,\ldots,0)\in\mathbb{Z}^{2d}\), and \(\overleftarrow{w}\) is obtained from \(w\) by reversing the word and replacing each \(a\in\mathbb{Z}^{d}\) by \((0,\ldots,0,-a)\in\mathbb{Z}^{2d}\). Conversely, for every \(A\xRightarrow{*}wAw^{\prime}\), we can find a word \(\overrightarrow{x_{1}}\overleftarrow{y_{n}}\cdots\overrightarrow{x_{n}}\overleftarrow{y_{1}}\) in the new grammar such that \(x_{1}\cdots x_{n}=w\) and \(y_{1}\cdots y_{n}=w^{\prime}\). Thus, we clearly have \((\mathbf{u},\mathbf{v}^{\prime})\xrightarrow{K}_{\mathbb{Q}_{+}}(\mathbf{u}^{\prime},\mathbf{v})\) if and only if \((\mathbf{u},\mathbf{v},\mathbf{u}^{\prime},\mathbf{v}^{\prime})\in P_{A}\) for every \(\mathbf{u},\mathbf{v},\mathbf{u}^{\prime},\mathbf{v}^{\prime}\in\mathbb{Q}_{+}^{d}\).
Formally, in the new grammar for \(K\), we have three copies of the non-terminals in \(N\), hence we have \(N^{\prime}=\overleftarrow{N}\cup\overrightarrow{N}\cup\hat{N}\), where \(\overleftarrow{N}=\{\overleftarrow{B}\mid B\in N\}\), \(\overrightarrow{N}=\{\overrightarrow{B}\mid B\in N\}\) and \(\hat{N}=\{\hat{B}\mid B\in N\}\) are disjoint copies of \(N\). The productions in the new grammar are as follows: For every production \(B\to CD\) in \(P\), we include productions \(\overrightarrow{B}\rightarrow\overrightarrow{C}\,\overrightarrow{D}\), \(\hat{B}\rightarrow\overrightarrow{C}\hat{D}\) and \(\hat{B}\rightarrow\overleftarrow{D}\hat{C}\), \(\overleftarrow{B}\rightarrow\overleftarrow{D}\,\overleftarrow{C}\). Moreover, for every \(B\to b\) with \(b\in\mathbb{Z}^{d}\), we include the productions \(\overrightarrow{B}\rightarrow(b,0,\ldots,0)\in\mathbb{Z}^{2d}\) and \(\overleftarrow{B}\rightarrow(0,\ldots,0,-b)\in\mathbb{Z}^{2d}\). Finally, we add \(\hat{A}\rightarrow\epsilon\) and set the start symbol to \(\hat{A}\). Observe that the generated language is closed under concatenation. This is because at any point in any derivation, there is exactly one hatted nonterminal, which always occurs as the last symbol, and only \(\hat{A}\) can be replaced by \(\epsilon\). This means that whenever \(\hat{A}\xRightarrow{*}u\), it is also the case that \(\hat{A}\xRightarrow{*}u\hat{A}\); hence if \(\hat{A}\xRightarrow{*}v\) as well, then \(\hat{A}\xRightarrow{*}uv\). This grammar achieves the required transformation.
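A small Python sketch of this construction is given below (the data-structure encoding and names are ours): given the CNF productions \(B\to CD\) and \(B\to b\) with \(b\in\mathbb{Z}^{d}\), it produces the productions of the new grammar over \(\mathbb{Z}^{2d}\) as described above.

```python
def pad_forward(b, d):
    # a vector a in Z^d becomes (a, 0, ..., 0) in Z^2d
    return tuple(b) + (0,) * d

def pad_backward(b, d):
    # a vector a in Z^d becomes (0, ..., 0, -a) in Z^2d
    return (0,) * d + tuple(-x for x in b)

def pump_grammar(binary_rules, terminal_rules, A, d):
    """binary_rules: list of (B, C, D) for productions B -> C D;
    terminal_rules: list of (B, b) for productions B -> b with b in Z^d.
    Returns (start symbol, productions) of the grammar for K in Lemma 3.5."""
    fwd = lambda X: ("->", X)   # the right-arrow copy of a nonterminal
    bwd = lambda X: ("<-", X)   # the left-arrow copy
    hat = lambda X: ("^", X)    # the hatted copy (the "spine")
    prods = []
    for B, C, D in binary_rules:
        prods += [(fwd(B), [fwd(C), fwd(D)]),   # forward copy of B -> C D
                  (hat(B), [fwd(C), hat(D)]),   # spine continues into D, C read forward
                  (hat(B), [bwd(D), hat(C)]),   # spine continues into C, D read backward
                  (bwd(B), [bwd(D), bwd(C)])]   # backward copy: children swapped (yield reversed)
    for B, b in terminal_rules:
        prods += [(fwd(B), [pad_forward(b, d)]),
                  (bwd(B), [pad_backward(b, d)])]
    prods.append((hat(A), []))                  # the production  hat(A) -> epsilon
    return hat(A), prods

# tiny usage example: grammar with S -> S S and S -> (1, -1), pumping on A = S
start, prods = pump_grammar([("S", "S", "S")], [("S", (1, -1))], "S", d=2)
for lhs, rhs in prods:
    print(lhs, "->", rhs)
```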
**Reduction to letter-uniform semigroups** As a second step, we will further reduce the problem to the case where, in all runs, the letters (i.e. added vectors) appear uniformly (in some precise sense). The _support sequence_ of a word \(w\in\Sigma^{*}\) is the tuple \((\Gamma,<)\) where \(\Gamma\subseteq\Sigma\) is the subset of letters occurring in \(w\) and \(<\) is a total order on \(\Gamma\) which corresponds to the order of first occurrence of the letters in \(w\). For example, the support sequence of \(aacabbc\) consists of \(\Gamma=\{a,b,c\}\) and the linear ordering \(a<c<b\).
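For concreteness, a short Python sketch (ours) computing the support sequence of a word; letters are arbitrary hashable symbols here, while in our setting they are vectors in \(\mathbb{Z}^{d}\).

```python
def support_sequence(word):
    """Return the letters of `word` in the order of their first occurrence,
    i.e. the pair (Gamma, <) encoded as an ordered tuple."""
    seen = []
    for letter in word:
        if letter not in seen:
            seen.append(letter)
    return tuple(seen)

print(support_sequence("aacabbc"))                   # ('a', 'c', 'b')
print(support_sequence([(1, 0), (1, 0), (0, -1)]))   # ((1, 0), (0, -1))
```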
A context-free language \(K\subseteq(\mathbb{Z}^{d})^{*}\) is _letter-uniform_ if any two words in \(K\) have the same support sequence. Let \(\Sigma\subseteq\mathbb{Z}^{d}\) be the set of letters occurring in \(K\). Moreover, for every subset \(\Gamma\subseteq\Sigma\) and a total order \(<\) on \(\Gamma=\{\gamma_{1},\ldots,\gamma_{l}\}\) given as \(\gamma_{1}<\gamma_{2}\ldots<\gamma_{l}\), let \(K_{(\Gamma,<)}=\{w\in K\mid\exists u_{1},u_{2},\ldots,u_{l}\ w=\gamma_{1}u_{1} \gamma_{2}u_{2}\ldots\gamma_{l}u_{l}\) where \(u_{i}\in\{\gamma_{1},\ldots,\gamma_{i}\}^{*}\}\) denote the set of all words in \(K\) with support sequence \((\Gamma,<)\).
Then we can observe that each \(K_{(\Gamma,<)}\) is letter-uniform and also a semigroup: for any two words \(u,v\in K_{(\Gamma,<)}\), it is the case that the letters occurring in \(uv\) and \(vu\) are exactly \(\Gamma\) and furthermore, the order of first occurrence of the letters from \(\Gamma\) in the two words also corresponds to the total order \(<\). Furthermore, \(uv,vu\in K\) since \(K\) is a semigroup. Hence both of these words also belong to \(K_{(\Gamma,<)}\). Moreover, we have \(\mathbf{u}\xrightarrow{K}_{\mathbb{Q}_{+}}\mathbf{v}\) if and only if there exists some \(\Gamma\subseteq\Sigma\) and total order \(<\) on \(\Gamma\) with \(\mathbf{u}\xrightarrow{K_{(\Gamma,<)}}_{\mathbb{Q}_{+}}\mathbf{v}\). We shall prove the following:
Proposition 3.6: _Given a context-free letter-uniform semigroup \(K\subseteq(\mathbb{Z}^{d})^{*}\), we can in_ NEXPTIME _construct an ELRA formula for the relation \(R_{K}\)._
Let us see how Theorem 3.4 follows from Proposition 3.6. Given some nonterminal \(A\) of a CFG, we want a formula for \(P_{A}\). We first use Lemma 3.5 to compute a context-free language \(K\) such that \(R_{K}\) captures \(P_{A}\) (up to permuting some counters). Suppose \(K\subseteq\Sigma^{*}\) for some \(\Sigma\subseteq\mathbb{Z}^{d}\). For each subset \(\Gamma\subseteq\Sigma\) and total order \(<\) on \(\Gamma\), consider the set \(K_{(\Gamma,<)}\) as defined earlier. As we already observed, (i) each \(K_{(\Gamma,<)}\) is a semigroup, (ii) each \(K_{(\Gamma,<)}\) is letter-uniform, and (iii) \(K\) is the union of all \(K_{(\Gamma,<)}\). Therefore, our construction proceeds as follows. We guess \((\Gamma,<)\) and then apply Proposition 3.6 to compute in NEXPTIME an ELRA formula for \(R_{K_{(\Gamma,<)}}\). Then, the disjunction of all resulting formulas clearly defines \(R_{K}\). We note that given some \((\Gamma,<)\), a grammar for \(K_{(\Gamma,<)}\) can be constructed from the grammar for \(K\) in polynomial time. This is because we need to construct a grammar for the intersection of \(K\) with the language of the regular expression \(R:=\gamma_{1}\gamma_{1}^{*}\gamma_{2}(\gamma_{1}+\gamma_{2})^{*}\gamma_{3}\ldots\gamma_{l}(\gamma_{1}+\gamma_{2}+\cdots+\gamma_{l})^{*}\), which only incurs a polynomial blowup. In fact, without the linear order and only the subset \(\Gamma\), the same construction would lead to an exponential blowup, since we would then have to remember all possible subsets of \(\Gamma\) while reading a word.
Figure 3: Illustration of Lemma 3.5. A derivation of the original grammar (shown on the left) is transformed into a derivation of the new grammar (on the right).
**Characterizing reachability by three runs** It remains to show Proposition 3.6. The advantage of reducing our problem to the letter-uniform case is that we can employ a characterization of Blondin and Haase (2017) about the existence of runs. In the rest of this section, we will assume that the language \(K\) comes with a corresponding support sequence \((\Gamma,<)\).
The following lemma tells us that reachability along a letter-uniform semigroup can be characterized by the existence of three runs: One run that witnesses reachability under \(\mathbb{Q}\)-semantics, and two runs that witness "admissibility" in both directions. Here, the "only if" direction is trivial, because the run from \(\mathbf{u}\) to \(\mathbf{v}\) along \(K\) is a run of all three types. For the converse, we use the fact that \(K\) is a semigroup and letter-uniform to compose the three runs into a run under \(\mathbb{Q}_{+}\)-semantics.
**Lemma 3.7**.: _Let \(K\subseteq(\mathbb{Z}^{d})^{*}\) be a letter-uniform semigroup. Then we have \(\mathbf{u}\xrightarrow{K}_{\mathbb{Q}_{+}}\mathbf{v}\) if and only if there are \(\mathbf{u}^{\prime},\mathbf{v}^{\prime}\in\mathbb{Q}_{+}^{d}\) such that:_
\[\mathbf{u}\xrightarrow{K}_{\mathbb{Q}}\mathbf{v},\qquad\mathbf{u}\xrightarrow{K}_{\mathbb{Q}_{+}}\mathbf{v}^{\prime},\qquad\mathbf{u}^{\prime}\xrightarrow{K}_{\mathbb{Q}_{+}}\mathbf{v}.\]
Lemma 3.7 is an extension of (Blondin and Haase, 2017, Proposition 4.5). The only difference is that in (Blondin and Haase, 2017), \(K\) is given by a non-deterministic finite automaton where one state is both the initial and the final state. The "only if" direction is trivial. For the "if" direction, the proof in (Blondin and Haase, 2017) takes the three runs and shows that a suitable concatenation of these runs, together with an appropriate choice of multiplicities, yields the desired run \(\mathbf{u}\xrightarrow{K}_{\mathbb{Q}_{+}}\mathbf{v}\). Since \(K\) is a semigroup, the same argument yields Lemma 3.7. See Subsection A.1 of the appendix for details.
Lemma 3.7 allows us to express the reachability relation along \(K\): It tells us that we merely have to express existence of the three simpler types of runs. The first of the three runs is reachability under \(\mathbb{Q}\)-semantics while the second and third are examples of _admissible_ runs under \(\mathbb{Q}_{+}\)-semantics. Thus we need to characterize these two types of runs. We will do this in the following two subsections.
**Characterizing reachability under \(\mathbb{Q}\)-semantics** We first show how to construct an ELRA formula for the \(\mathbb{Q}\)-reachability relation along a letter-uniform context-free \(K\).
**Lemma 3.8**: _Given a letter-uniform context-free language \(K\subseteq(\mathbb{Z}^{d})^{*}\), we can construct in exponential time an ELRA formula for the relation_
\[R_{K}^{\mathbb{Q}}=\{(\mathbf{u},\mathbf{v})\in\mathbb{Q}_{+}^{d}\times \mathbb{Q}_{+}^{d}\mid\mathbf{u}\xrightarrow{K}_{\mathbb{Q}}\mathbf{v}\}.\]
Our proof relies on the following, which was shown in (Blondin and Haase, 2017, Proposition B.4):
**Lemma 3.9** (Blondin and Haase, 2017): _Given an NFA \(\mathcal{A}\) over some alphabet \(\Sigma\subseteq\mathbb{Z}^{d}\), one can in polynomial time construct an ELRA formula \(\varphi\) such that for \(\mathbf{u},\mathbf{v}\in\mathbb{Q}_{+}^{d}\), we have \(\varphi(\mathbf{u},\mathbf{v})\) if and only if \(\mathbf{u}\xrightarrow{L(\mathcal{A})}_{\mathbb{Q}}\mathbf{v}\)._
Proof of Lemma 3.8.: The key observation is that in the case of \(\mathbb{Q}\)-semantics, reachability along a word \(w\in(\mathbb{Z}^{d})^{*}\) does not depend on the exact order of the letters in \(w\). Let \(\Psi(w)\in\mathbb{N}^{|\Sigma|}\) be the _Parikh image_ of \(w\) i.e. \(\Psi(w)(a)\) for \(a\in\Sigma\) denotes the number of times \(a\) occurs in \(w\). Formally, if \(\Psi(w)=\Psi(w^{\prime})\), then \(\mathbf{u}\xrightarrow{w}_{\mathbb{Q}}\mathbf{v}\) if and only if \(\mathbf{u}\xrightarrow{w^{\prime}}_{\mathbb{Q}}\mathbf{v}\). In particular, for languages \(K,K^{\prime}\subseteq(\mathbb{Z}^{d})^{*}\), if \(\Psi(K)=\Psi(K^{\prime})\), then \(R_{K}^{\mathbb{Q}}=R_{K^{\prime}}^{\mathbb{Q}}\). We use this to reduce the case of context-free \(K\) to the case of regular languages \(K\).
It is well known that given a context-free grammar, one can construct an NFA of exponential size such that the NFA accepts a language of the same Parikh image as the grammar. For example, a
simple construction with a close-to-tight size bound can be found in [Esparza et al. 2011]. Therefore, given \(K\), we can construct an exponential-sized NFA \(\mathcal{A}\) such that \(\Psi(L(\mathcal{A}))=\Psi(K)\).
Observe that \(\Psi(L(\mathcal{A}))=\Psi(K)\) implies that for any \(\mathbf{u},\mathbf{v}\in\mathbb{Q}_{+}^{d}\), we have \(\mathbf{u}\xrightarrow{K}_{\mathbb{Q}}\mathbf{v}\) if and only if \(\mathbf{u}\xrightarrow{L(\mathcal{A})}_{\mathbb{Q}}\mathbf{v}\). Therefore, we apply Lemma 3.9 to compute a formula \(\varphi\) from \(\mathcal{A}\). Since \(\mathcal{A}\) is exponential in size, this computation takes exponential time and results in an exponential formula \(\varphi\). Then, for \(\mathbf{u},\mathbf{v}\in\mathbb{Q}_{+}^{d}\), we have \(\varphi(\mathbf{u},\mathbf{v})\) if and only if \(\mathbf{u}\xrightarrow{L(\mathcal{A})}_{\mathbb{Q}}\mathbf{v}\), which is equivalent to \(\mathbf{u}\xrightarrow{K}_{\mathbb{Q}}\mathbf{v}\).
We note that it is also possible to use a construction of [Verma et al. 2005] to construct a formula for \(R_{K}^{\mathbb{Q}}\) in polynomial time - the reason we chose the current presentation is that applying the result from [Verma et al. 2005] as a black box would result in a formula over _mixed_ integer-rational arithmetic (of polynomial size): One would use integer variables to implement the construction from [Verma et al. 2005] and then rational variables to account for continuous semantics. This would yield the same complexity bound in the end (existential mixed linear arithmetic is still in NP), but we preferred not to introduce another logic.
**Characterizing admissibility under \(\mathbb{Q}_{+}\)-semantics** Finally, we construct an ELRA formula for the set of vectors \(\mathbf{u}\) such that there exists a \(\mathbf{v}^{\prime}\in\mathbb{Q}_{+}^{d}\) with \(\mathbf{u}\xrightarrow{K}_{\mathbb{Q}_{+}}\mathbf{v}^{\prime}\). We call such vectors \(K\)_-admissible_ and denote the set of \(K\)-admissible vectors as \(A_{K}\). The key observation is that \(\mathbf{u}\) is \(K\)-admissible if and only if the total order \(<\) satisfies some simple properties. Intuitively, \(\mathbf{u}\) is \(<\)_-admissible_ if for each letter that decrements a counter, either (i) that counter is positive in \(\mathbf{u}\) or (ii) there is an earlier letter that increments this counter. For \(a\in\mathbb{Z}^{d}\), we denote by \(\llbracket a\rrbracket\) (resp. \(\llbracket a\rrbracket^{+}\) or \(\llbracket a\rrbracket^{-}\)) the subset of indices \(i\) where \(a(i)\neq 0\) (resp. \(a(i)>0\) or \(a(i)<0\)). Formally, \(\mathbf{u}\) is \(<\)-admissible if for each \(a\in\Gamma\) and each \(j\in\llbracket a\rrbracket^{-}\), we either have (i) \(\mathbf{u}(j)>0\) or (ii) there is a \(b\in\Gamma\) with \(b<a\) and \(j\in\llbracket b\rrbracket^{+}\). We show the following:
Lemma 3.10.: _Let \(K\) be letter-uniform and \(K\neq\emptyset\). Then \(\mathbf{u}\in A_{K}\) if and only if \(\mathbf{u}\) is \(<\)-admissible._
The "if" direction is easy. If \(\mathbf{u}\) is not \(<\)_-admissible_, this means that there is some index \(j\) and some letter \(\gamma_{i}\) such that \(\gamma_{i}\) decrements \(j\), \(\mathbf{u}(j)=0\) and \(\gamma_{k}(j)\leq 0\) for all \(k<i\). This means that starting from \(\mathbf{u}\), on any word \(w\in K\), we would go below \(0\) in index \(j\) when \(\gamma_{i}\) first occurs in \(w\). Hence \(\mathbf{u}\notin K\).
For the converse, suppose that \(\mathbf{u}\) is \(<\)-admissible and let \(w\in K\) be any word. Write \(w=w_{1}\cdots w_{n}\) with \(w_{1},\ldots,w_{n}\in\Gamma\). For each \(i\in\{0,\ldots,n\}\), we define
\[I_{i}=\llbracket\mathbf{u}\rrbracket^{+}\cup\bigcup_{\ell=1}^{i}\llbracket w_{\ell}\rrbracket^{+},\]
i.e. the set of components that are incremented at some point when firing the prefix \(w_{1}\cdots w_{i}\).
We will show the following for every \(i\): there exists \(\mathbf{v}_{i}\in\mathbb{Q}_{+}^{d}\) such that \(\mathbf{u}\xrightarrow{w_{1}\cdots w_{i}}_{\mathbb{Q}_{+}}\mathbf{v}_{i}\) and \(\mathbf{v}_{i}(j)>0\) for all \(j\in I_{i}\).
We proceed by induction on \(i\). For \(i=0\), the statement clearly holds, because \(\mathbf{u}\in\mathbb{Q}_{+}^{d}\) is positive on all coordinates in \(I_{0}=\llbracket\mathbf{u}\rrbracket^{+}\). Now suppose there is a run \(\mathbf{u}\xrightarrow{w_{1}\cdots w_{i}}_{\mathbb{Q}_{+}}\mathbf{v}_{i}\) as required. We know that \(\llbracket w_{i+1}\rrbracket^{-}\subseteq I_{i}\) and thus \(\mathbf{v}_{i}\) is positive on all indices where \(w_{i+1}\) is negative. Let \(a_{i+1}=\min\{-\frac{\mathbf{v}_{i}(j)}{2w_{i+1}(j)}\mid j\in\llbracket w_{i+1}\rrbracket^{-}\}\) (if \(\llbracket w_{i+1}\rrbracket^{-}=\emptyset\), or if this minimum exceeds \(1\), we take \(a_{i+1}=1\) instead; positivity is preserved in these cases as well). Clearly \(\mathbf{v}_{i+1}:=\mathbf{v}_{i}+a_{i+1}w_{i+1}\in\mathbb{Q}_{+}^{d}\) and also \(\mathbf{v}_{i+1}(j)>0\) for all \(j\in I_{i+1}\). In particular, this means that \(\mathbf{u}\xrightarrow{w}_{\mathbb{Q}_{+}}\mathbf{v}_{n}\) and thus \(\mathbf{u}\in A_{K}\).
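The argument above is constructive. The following Python sketch (our own illustration, with exact fractions) checks \(<\)-admissibility and, when it holds, builds a nonnegative run along a given word by choosing the firing fractions greedily as in the induction step; the fraction is capped at \(1\) since firing fractions must lie in \((0,1]\).

```python
from fractions import Fraction

def is_admissible(u, gamma_ordered):
    """u: starting valuation; gamma_ordered: the letters of Gamma listed
    according to <.  Checks the <-admissibility condition from the text."""
    for i, a in enumerate(gamma_ordered):
        for j, aj in enumerate(a):
            if aj < 0 and not (u[j] > 0 or any(b[j] > 0 for b in gamma_ordered[:i])):
                return False
    return True

def greedy_run(u, word):
    """Pick firing fractions as in the proof of Lemma 3.10 and return the
    final valuation; assumes u is <-admissible for the support sequence of word."""
    v = [Fraction(x) for x in u]
    for w in word:
        neg = [j for j, wj in enumerate(w) if wj < 0]
        alpha = min([v[j] / (-2 * w[j]) for j in neg], default=Fraction(1))
        alpha = min(alpha, Fraction(1))       # firing fractions lie in (0, 1]
        v = [x + alpha * wj for x, wj in zip(v, w)]
        assert all(x >= 0 for x in v)         # the run never leaves Q+
    return v

u = (Fraction(0), Fraction(1, 2))
word = [(1, -1), (-1, 2), (-1, -1)]
gamma = list(dict.fromkeys(word))             # support sequence of the word
print(is_admissible(u, gamma))                # True
print(greedy_run(u, word))                    # [Fraction(1, 16), Fraction(7, 16)]
```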
Lemma 3.11.: _Given a non-empty letter-uniform context-free language \(K\subseteq(\mathbb{Z}^{d})^{*}\), we can construct in polynomial time an ELRA formula for the relation \(A_{K}\)._
The formula \(\phi_{<}\), which for a vector \(\mathbf{u}\in\mathbb{Q}_{+}^{d}\) is true iff \(\mathbf{u}\) is \(<\)-admissible, can be written as:
\[\phi_{<}(\mathbf{x})=\bigwedge_{a\in\Gamma}\ \bigwedge_{i\in\llbracket a\rrbracket^{-}}\Big(\mathbf{x}(i)>0\ \vee\ \bigvee_{b<a}i\in\llbracket b\rrbracket^{+}\Big),\]
where each condition \(i\in\llbracket b\rrbracket^{+}\) does not depend on \(\mathbf{x}\) and is replaced by _true_ or _false_ when the formula is constructed.
Proof of Proposition 3.6.: By Lemma 3.7, it suffices to show that there are formulae for the relations \(R_{K}^{\mathbb{Q}}\) and \(A_{K}\). These formulae have been obtained in Lemma 3.8 and Lemma 3.11 respectively.
This concludes the proof that reachability in \(\mathbb{Q}_{+}\)-PVASS is in NEXPTIME.
**State reachability** The material in this section also allows us to derive Theorem 1.3. Since state reachability is NP-hard already for \(\mathbb{Q}_{+}\)-VASS (Blondin and Haase, 2017), we only have to show membership in NP. Using the language-theoretic translation from the beginning of this section, state reachability can be phrased as follows: given a set of vectors \(\Sigma\subseteq\mathbb{Z}^{d}\), a context-free language \(K\subseteq\Sigma^{*}\) and \(\mathbf{u}\in\mathbb{Q}_{+}^{d}\), decide whether \(\mathbf{u}\) is \(K\)-admissible. As before, we can view \(K\) as the disjoint union of (exponentially many) languages \(K_{(\Gamma,<)}\), each of which is letter-uniform and comes with an associated \((\Gamma,<)\). Furthermore, we can construct a grammar for each \(K_{(\Gamma,<)}\) in polynomial time and check whether it is non-empty. The NP upper bound then follows from Lemma 3.11: we guess \((\Gamma,<)\), construct \(K_{(\Gamma,<)}\), verify that it is non-empty, and use Lemma 3.11 to build an ELRA formula for \(A_{K_{(\Gamma,<)}}\). Since the truth problem for ELRA is in NP (Sontag, 1985), we can then check whether \(\mathbf{u}\) satisfies the constructed formula.
## 4. NEXPTIME-hardness of \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\)
We now move on to proving the NEXPTIME-hardness of reachability in \(\mathbb{Q}_{+}\)-PVASS. As outlined in the introduction, we do this by a chain of reductions. Our reduction chain starts with the machine model \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\). Informally, a \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\) has a finite-state control along with two counters, each of which can hold a non-negative integer. A rule of the machine allows us to move from one state to another whilst either incrementing the value of a counter by \(1\) or doubling the value of a counter. The set of final configurations of such a machine will be given by a final state and _an equality condition on the two counters._
Formally, a \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\) is a tuple \(\mathcal{M}=(Q,q_{in},q_{f},\Delta)\) where \(Q\) is a finite set of states, \(q_{in},q_{f}\in Q\) are the initial and the final states respectively, and \(\Delta\subseteq Q\times\{\mathbf{inc}_{0},\mathbf{inc}_{1},\mathbf{double}_{0},\mathbf{double}_{1},\mathbf{nop}\}\times Q\) is a finite set of rules. A configuration of \(\mathcal{M}\) is a triple \((q,v_{0},v_{1})\) where \(q\in Q\) is the current state of \(\mathcal{M}\) and \(v_{0},v_{1}\in\mathbb{N}\) are the current values of the two counters respectively. Let \(r=(q,t,q^{\prime})\in\Delta\) be a rule. A _step_ from a configuration \(C=(p,v_{0},v_{1})\) to another configuration \(C^{\prime}=(p^{\prime},v_{0}^{\prime},v_{1}^{\prime})\) by means of the rule \(r\) (denoted by \(C\xrightarrow{r}C^{\prime}\)) is possible if and only if \(p=q,p^{\prime}=q^{\prime}\), and
\[\text{If }t=\mathbf{inc}_{i}\text{ then }v_{i}^{\prime}=v_{i}+1,\ v_{1-i}^ {\prime}=v_{1-i}\qquad\text{If }t=\mathbf{double}_{i}\text{ then }v_{i}^{\prime}=2v_{i},\ v_{1-i}^{ \prime}=v_{1-i}\] \[\text{If }t=\mathbf{nop}\text{ then }v_{0}^{\prime}=v_{0},\ v_{1}^{ \prime}=v_{1}\]
We then say that a configuration \(C\) can reach another configuration \(C^{\prime}\) if \(C^{\prime}\) can be reached from \(C\) by a sequence of steps. The initial configuration of \(\mathcal{M}\) is \(C_{init}\coloneqq(q_{in},0,0)\). The set of _final configurations_ of \(\mathcal{M}\) is taken to be \(\{(q_{f},n,n):n\in\mathbb{N}\}\).
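A compact Python sketch of these semantics (generic; the states and rules below are hypothetical and not taken from Figure 4):

```python
def step(config, rule):
    """Apply a 2CM rule (q, t, q') to a configuration (p, v0, v1);
    returns the successor configuration, or None if the rule does not apply."""
    (p, v0, v1), (q, t, q_next) = config, rule
    if p != q:
        return None
    v = [v0, v1]
    if t.startswith("inc"):
        v[int(t[-1])] += 1
    elif t.startswith("double"):
        v[int(t[-1])] *= 2
    # t == "nop" leaves both counters unchanged
    return (q_next, v[0], v[1])

# two increments reach a configuration with equal counters in exactly 2 steps
c = ("p", 0, 0)
for r in [("p", "inc0", "r"), ("r", "inc1", "s")]:
    c = step(c, r)
print(c)   # ('s', 1, 1)
```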
The reachability problem for \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\) asks, given a \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\)\(\mathcal{M}\) and a number \(m\)_in binary_, if the initial configuration can reach _some final configuration_ in exactly \(m\) steps.
Example 4.1.: Let us consider the \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\) given in Figure 4, which we shall denote by \(\mathcal{M}\). The initial state is \(q_{0}\) and the final state is \(q_{2}\). Note that the initial configuration \((q_{0},0,0)\) can reach \((q_{2},1,1)\) in exactly \(2\) steps. Hence if we set the length of the run \(m\) to \(2\), then the instance \(\langle\mathcal{M},m\rangle\) is a positive instance of the reachability problem for \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\). On the other hand, for any other
value of \(m\), the instance \(\langle\mathcal{M},m\rangle\) is a negative instance of the reachability problem for \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\). Indeed, first note that in this \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\), each state has exactly one outgoing transition. Hence, there is exactly one run starting from \((q_{0},0,0)\) and that run is as follows: First it reaches the configuration \((q_{2},1,1)\) in exactly \(2\) steps. Then from there, it follows the following (cyclical) pattern.
\[(q_{2},x,y)\rightarrow(q_{3},2x,y)\rightarrow(q_{4},2x+1,y)\rightarrow(q_{5},2x+1,2y)\rightarrow(q_{2},2x+1,2y)\rightarrow(q_{3},4x+2,2y)\rightarrow(q_{4},4x+3,2y)\rightarrow(q_{5},4x+3,4y)\rightarrow(q_{2},4x+3,4y)\rightarrow\cdots\]
This pattern indicates that after the configuration \((q_{2},1,1)\), whenever the run reaches the state \(q_{2}\), the first counter has an odd value, whereas the second counter has an even value. Hence, the run will never reach a final configuration and so \(\langle\mathcal{M},m\rangle\) is a negative instance of the reachability problem whenever \(m\neq 2\).
We shall prove the following:
**Theorem 4.2**.: _The reachability problem for \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\) is \(\mathsf{NEXPTIME}\)-hard._
Theorem 4.2 is shown using a bounded version of the classical Post Correspondence Problem (PCP). Recall that, in the PCP problem, we are given a set of pairs of words \((u_{1},v_{1})\), \((u_{2},v_{2})\), \(\ldots\), \((u_{m},v_{m})\) over a common alphabet \(\Sigma\) and we are asked to decide if there is a sequence of indices \(i_{1},i_{2},\ldots,i_{k}\) for some \(k\) such that \(u_{i_{1}}\cdot u_{i_{2}}\cdot\cdots\cdot u_{i_{k}}=v_{i_{1}}\cdot v_{i_{2}}\cdot \cdots\cdot v_{i_{k}}\). It is well-known that this problem is undecidable [10]. For our purposes, we shall use a bounded version of PCP, called bounded PCP, defined as follows.
**Input:** A set of pairs of words \((u_{1},v_{1})\), \((u_{2},v_{2}),\ldots,(u_{m},v_{m})\) over an alphabet \(\Sigma\) such that none of the given words is the empty string, and a number \(\ell\) encoded in binary.
**Question:** Is there a sequence of indices \(i_{1},i_{2},\ldots,i_{k}\) such that \(u_{i_{1}}\cdot u_{i_{2}}\cdot\cdots\cdot u_{i_{k}}=v_{i_{1}}\cdot v_{i_{2}}\cdot\cdots\cdot v_{i_{k}}\), and the length of \(u_{i_{1}}\cdot\cdots\cdot u_{i_{k}}\) is exactly \(\ell\)?
Note that this problem is decidable - we simply have to guess a sequence of indices of length at most \(\ell\) and check that the resulting words from these indices satisfy the given property. In [11, Section 6.1], Bounded-PCP was shown to be \(\mathtt{NEXPTIME}\)-hard. We now prove Theorem 4.2 by giving a reduction from Bounded-PCP to the reachability problem for \(2\mathcal{CM}_{\mathrm{RL}}^{2,+1}\).
Let \((u_{1},v_{1}),\ldots,(u_{m},v_{m})\) be a set of pairs of words over a common alphabet \(\Sigma\) and let \(\ell\in\mathbb{N}\). Without loss of generality we assume that \(|\Sigma|=2^{k}\) for some \(k\geq 1\), for instance, by adding at most twice as many dummy letters as the size of the alphabet. With this assumption, there are two essential ideas behind this reduction, which we now briefly outline.
The first idea is as follows: Since the size of \(\Sigma\) is \(2^{k}\), we can identify \(\Sigma\) with the set \(\{0,1,\ldots,2^{k}-1\}\), by mapping each letter in \(\Sigma\) to some unique number in \(\{0,1,\ldots,2^{k}-1\}\). This identification means that any non-empty word \(w\) represents a number \(n\) in base \(|\Sigma|\), written with the most significant digit first. In this way, we can also map any number \(n\) back to a non-empty word \(w\).
The second idea is as follows: Assume that we have a word \(w\) and its corresponding number \(n\). Suppose we are given another word \(w^{\prime}\) and we are asked to compute the number corresponding to the concatenated word \(w\cdot w^{\prime}\). We can do that as follows: Let \(w^{\prime}=w^{\prime}_{1},\ldots,w^{\prime}_{j}\) with each \(w^{\prime}_{i}\) being
a letter. Construct the sequence of numbers \(n_{0},n_{1},\ldots,n_{j}\) given by \(n_{0}=n\) and \(n_{i}=|\Sigma|\cdot n_{i-1}+w^{\prime}_{i}\). Notice that each \(n_{i}\) is essentially the representation of the string \(w\cdot w^{\prime}_{1}\cdot w^{\prime}_{2}\cdot\cdots w^{\prime}_{i}\) and so \(n_{j}\) is the representation for the word \(w\cdot w^{\prime}\).
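A short Python illustration of this encoding and of the concatenation recurrence (the alphabet and helper names are ours):

```python
SIGMA = "abcd"                          # an alphabet with |Sigma| = 4 = 2^2
DIGIT = {c: i for i, c in enumerate(SIGMA)}

def encode(word):
    """Number represented by a non-empty word in base |Sigma|,
    most significant digit first."""
    n = 0
    for c in word:
        n = len(SIGMA) * n + DIGIT[c]
    return n

def append_word(n, suffix):
    """Given n = encode(w), compute encode(w . suffix) via the recurrence
    n_i = |Sigma| * n_{i-1} + digit(w'_i)."""
    for c in suffix:
        n = len(SIGMA) * n + DIGIT[c]
    return n

w, suffix = "bca", "db"
assert append_word(encode(w), suffix) == encode(w + suffix)
print(encode(w), append_word(encode(w), suffix))   # 24 397
```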
These two ideas essentially illustrate the reduction from the Bounded-PCP problem to the reachability problem for \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\). Given a Bounded-PCP instance \(\langle(u_{1},v_{1}),\ldots,(u_{m},v_{m}),\ell\rangle\), we construct a \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\) as follows: Initially it starts at an initial state \(q_{in}\) with both of its counters set to \(0\). From here it executes a loop in the following manner: Suppose at some point, the machine is at state \(q_{in}\) with counter values \(n_{1}\) and \(n_{2}\) corresponding to some strings \(w_{1}\) and \(w_{2}\) respectively. Then the machine picks some index \(i\) between \(1\) and \(m\) and then, by the idea given in the previous paragraph, it updates the values of its counters to \(n^{\prime}_{1}\) and \(n^{\prime}_{2}\) corresponding to the strings \(w_{1}\cdot u_{i}\) and \(w_{2}\cdot v_{i}\), respectively, and then comes back to the state \(q_{in}\).
We can hard-code the rules in this machine so that whenever it has the representation for two strings \(w,w^{\prime}\) in its counters and it wants to compute the representation for \(w\cdot u_{i}\) and \(w^{\prime}\cdot v_{i}\) for some \(1\leq i\leq m\), it takes _exactly_ \(t\cdot|u_{i}|\) steps, for some fixed \(t\) which is polynomial in the size of the given Bounded-PCP instance (this is possible by padding with \(\mathbf{nop}\) steps, using that none of the words \(u_{i}\) is empty). Then clearly, reaching a configuration \((q_{in},z,z)\) for some number \(z\) in the machine in exactly \(t\ell\) steps is equivalent to finding a sequence of indices \(i_{1},\ldots,i_{k}\) such that \(u_{i_{1}}\cdots u_{i_{k}}=v_{i_{1}}\cdots v_{i_{k}}\) and the length of \(u_{i_{1}}\cdots u_{i_{k}}\) is exactly \(\ell\). This completes the reduction.
## 5. From \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\) to \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\)
The next step in our reduction chain moves from \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\) to \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\). Intuitively, a \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\) has a finite-state control along with some number of _continuous counters_, each of which can only hold _a fractional number_ belonging to the interval \([0,1]\). A rule of such a machine allows us to move from one state to another whilst incrementing or decrementing some counters by _some fractional number_. Further a rule can also specify that _the effect of firing that rule_ makes some counters \(0\), thereby allowing us to perform zero-tests. Note that a \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\) is different from \(\mathbb{Q}_{+}\)-\(\mathsf{VASS}\) in two aspects: First, the counters of a \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\) can only hold numbers in \([0,1]\), whereas the counters of a \(\mathbb{Q}_{+}\)-\(\mathsf{VASS}\) can hold any rational number. Second, the counters of a \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\) can be tested for zero, which is not possible in a \(\mathbb{Q}_{+}\)-\(\mathsf{VASS}\). We now proceed to formally define the model of a \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\).
More formally, a \(d\)-dimensional \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\) (or \(d\)-\([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\) or simply \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\)) is a tuple \(\mathcal{C}=(Q,T,\Delta)\) where \(Q\) is a finite set of states, \(T\subseteq\mathbb{Z}^{d}\times 2^{[d]}\) is a finite set of _transitions_ and \(\Delta\subseteq Q\times T\times Q\) is a finite set of _rules_. A configuration of \(\mathcal{C}\) is a tuple \(C=(q,\mathbf{v})\) where \(q\in Q\) is the current state of \(\mathcal{C}\) and \(\mathbf{v}\in[0,1]^{d}\) is the vector representing the current values of the counters of \(\mathcal{C}\). We use the notations \(\mathrm{state}(C),\mathsf{val}(C),C(i)\) to denote \(q,\mathbf{v},\mathbf{v}(i)\), respectively. Let \(I=(q,t,q^{\prime})\in\Delta\) be a rule with \(t=(r,s)\) and let \(\alpha\in(0,1]\). A _step_ from a configuration \(C\) to another configuration \(C^{\prime}\) via the pair \((\alpha,I)\) (denoted by \(C\xrightarrow{\alpha I}C^{\prime}\)) is possible if and only if \(\mathrm{state}(C)=q,\mathrm{state}(C^{\prime})=q^{\prime}\) and
\[\mathsf{val}(C^{\prime})=\mathsf{val}(C)+\alpha r\quad\text{ and }\quad\mathsf{val}(C^{ \prime})(i)=0\text{ for all }i\in s\]
Note that we implicitly require that \(\mathsf{val}(C)+\alpha r\in[0,1]^{d}\) and also that the value obtained after firing \(\alpha I\) is \(0\) on all the counters in the set \(s\). We define the notions of firing sequences \(\alpha_{1}I_{1},\ldots,\alpha_{n}I_{n}\) and reachability between configurations \(C\stackrel{{\alpha_{1}I_{1},\ldots,\alpha_{n}I_{n}}}{{\longrightarrow}}C^{\prime}\) as for \(\mathbb{Q}_{+}\)-\(\mathsf{VASS}\). The reachability problem for \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\) asks, given a \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\), two configurations \(c_{init},c_{fin}\) and a number \(m\) encoded _in binary_, whether \(c_{init}\) can reach \(c_{fin}\) in exactly \(m\) steps. We show that
Theorem 5.1.: _The reachability problem for \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\) is \(\mathsf{NEXPTIME}\)-hard._
We prove this theorem by exhibiting a reduction from the reachability problem for \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\). Fix a \(2\mathsf{CM}_{\mathrm{RL}}^{2,+1}\)\(\mathcal{M}\) and a number \(m\) in binary. Since the initial values of both the counters are \(0\)
the largest value we can attain in any counter during a run of length \(m\) is at most \(2^{m}\) (in fact, the bound is \(2^{m-1}\)). Hence, we shall implicitly assume that the set of configurations of \(\mathcal{M}\) that are under consideration are those where the counter values are bounded by \(2^{m}\).
**Overview of the reduction.** We want to construct an \([0,1]\)-VASS\({}_{\text{RL}}^{0?}\)\(\mathcal{C}\) that simulates \(\mathcal{M}\). As already mentioned in the introduction, we use _exponential precision_ and represent a discrete counter value \(n\) in a configuration of \(\mathcal{M}\) as the value \(\frac{n}{2^{m}}\) in a continuous counter of \(\mathcal{C}\). Furthermore, we want to correctly simulate increment and doubling operations on \(\mathcal{M}\) which correspond to addition of \(\frac{1}{2^{m}}\) and doubling in \(\mathcal{C}\) respectively. Since we do not control the fraction \(\alpha\) in a rule, we have to overcome the following challenge:
1. **(C1)** How can we create gadgets which simulate addition of \(\frac{1}{2^{m}}\) and doubling?
Towards solving this challenge, we use the following idea: Suppose we are in some configuration \(C\) and suppose we want to make a step from \(C\) by adding \(\frac{1}{2^{m}}\) to a counter \(c\). Assume that there are two other counters \(st\) and \(te\) whose values in \(C\) are \(\frac{1}{2^{m}}\) and \(0\), respectively. Suppose \(I\) is a rule which decrements \(st\) by \(1\), increments \(c\) and \(te\) both by \(1\) and then checks that the value of \(st\) (after firing \(I\)) is \(0\). Then, if \(C\xrightarrow{\alpha I}C^{\prime}\) is a step, it must be that \(\alpha\) is _exactly_\(\frac{1}{2^{m}}\). This is because, by assumption, before firing this rule the value of \(st\) was \(\frac{1}{2^{m}}\) and after firing this rule, the zero-test ensures that the value of \(st\) is \(0\). Hence, the only possible value that \(\alpha\) can take is \(\frac{1}{2^{m}}\). Therefore, this rule allows us to add \(\frac{1}{2^{m}}\) to the counter \(c\).
However, note that after firing \(I\), the values of \(st\) and \(te\) are reversed, i.e., the values of \(st\) and \(te\) are \(0\) and \(\frac{1}{2^{m}}\), respectively. This is undesirable, as we might once again want to use \(st\) to simulate addition by \(\frac{1}{2^{m}}\). Therefore, we add another rule \(J\), which decrements \(te\) by \(1\), increments \(st\) by \(1\) and then checks that the value of \(te\) (after firing \(J\)) is \(0\). Then, a successful firing of the rule \(J\) by some fraction \(\beta\) means that \(\beta=\frac{1}{2^{m}}\) (due to the same reasons as above) and so this would mean that the values of \(st\) and \(te\) after firing \(J\) would again become \(\frac{1}{2^{m}}\) and \(0\), respectively. Hence, the counter \(te\) essentially acts as a temporary holder of the value of \(st\) and allows us to "refill" the value of \(st\).
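The following toy computation (a sketch of ours, with \(2^{m}\) scaled down to \(2^{4}\) so that the numbers stay readable) traces the two rules \(I\) and \(J\) described above and shows how the zero-tests pin down the firing fractions:

```python
# Toy run of the st/te "refill" trick: rule I moves the content of st into c and te
# and zero-tests st, which forces the firing fraction to equal the old value of st;
# rule J moves the value back from te to st.

m = 4
c, st, te = 0.0, 1 / 2**m, 0.0

# Rule I: c += alpha, st -= alpha, te += alpha, then require st == 0,
# so the only admissible fraction is alpha = st.
alpha = st
c, st, te = c + alpha, st - alpha, te + alpha
assert st == 0.0

# Rule J: st += beta, te -= beta, then require te == 0, so beta = te.
beta = te
st, te = st + beta, te - beta
assert te == 0.0

print(c, st, te)   # 0.0625 0.0625 0.0 -- c gained 1/2**m and st has been refilled
```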
Generalizing this technique allows us to _control the firing fraction_ to perform doubling as well. However, this technique has a single obstacle, which we now address.
For this technique to work, we need a counter \(st\) which initially stores the value \(\frac{1}{2^{m}}\). It might be tempting to simply declare that the value of \(st\) in the initial configuration is \(\frac{1}{2^{m}}\). However, this cannot be done, because the number \(m\) is given to us in binary and so the number of bits needed to write down the number \(\frac{1}{2^{m}}\) is exponential in the size of the given input \(\langle\mathcal{M},m\rangle\), which would not give us a polynomial-time reduction. This raises the following challenge as well:
1. **(C2)** How can we create a value of \(\frac{1}{2^{m}}\) in a continuous counter?
We show that challenge C2 can also be solved by our idea of controlling the firing fraction.
**Solving Challenge (C1).** From the \(2\mathsf{CM}_{\text{RL}}^{2,+1}\)\(\mathcal{M}=(Q,q_{in},q_{f},\Delta)\), we construct a \([0,1]\)-VASS\({}_{\text{RL}}^{0?}\)\(\mathcal{C}_{0}\) as follows. \(\mathcal{C}_{0}\) will have \(4\) counters \(c_{0}\), \(c_{1}\), \(st\), \(te\), i.e., it will be \(4\)-dimensional. Intuitively, each \(c_{i}\) will store the value of one of the counters of \(\mathcal{M}\), \(st\) will store the value \(1/2^{m}\) that will be needed for simulating the addition operation, and \(te\) will be used to temporarily store the values of \(c_{0}\), \(c_{1}\) and \(st\) at some points along a run. A rule in \(\mathcal{C}_{0}\) consists of a vector \(r\in\mathbb{Z}^{4}\) and a subset \(s\subseteq\{1,2,3,4\}\). For ease of reading, we write the vector \(r\) as a sequence of increment or decrement operations \(c+n\) (or \(c-n\)) whose intended meaning is that counter \(c\) is incremented (or decremented) by \(n\), followed by a sequence of zero-tests. For example, \(t=(r,s)\) where \(r=(1,0,0,-2)\) and \(s=\{1,3\}\) is represented by \(c_{0}+1,te-2;\ c_{0}=0?,st=0?\).
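For readability, the same shorthand can be mimicked by a few lines of code; the counter names and the indexing below are ours, and since Python counts from zero, the zero-test set \(\{1,3\}\) of the text becomes \(\{0,2\}\):

```python
# A tiny pretty-printer for rules over the counters (c0, c1, st, te): the update
# vector r and the zero-test set s are rendered in the "c0+1, te-2; c0=0?, st=0?" style.

NAMES = ["c0", "c1", "st", "te"]

def show(r, s):
    ups = [f"{NAMES[i]}{'+' if d > 0 else '-'}{abs(d)}" for i, d in enumerate(r) if d != 0]
    tests = [f"{NAMES[i]}=0?" for i in sorted(s)]
    return ", ".join(ups) + "; " + ", ".join(tests)

print(show((1, 0, 0, -2), {0, 2}))   # c0+1, te-2; c0=0?, st=0?
```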
\(\mathcal{C}_{0}\) will have all the states of \(\mathcal{M}\) and in addition, for every rule \(r\) of \(\mathcal{M}\), it will have a state \(r_{mid}\). The set of rules of \(\mathcal{C}_{0}\) will be given as follows.
* For the rule \(r:=(q,\mathbf{inc}_{i},q^{\prime})\) of \(\mathcal{M}\), \(\mathcal{C}_{0}\) will have the "increment(\(i\))" gadget given in Figure 4(a).
* For the rule \(r:=(q,\mathbf{double}_{i},q^{\prime})\), \(\mathcal{C}_{0}\) will have the "double(\(i\))" gadget given in Figure 4(b).
* For the rule \(r:=(q,\mathbf{nop},q^{\prime})\), \(\mathcal{C}_{0}\) will have the "nop" gadget given in Figure 4(c).
Note that for every rule \(r\) of \(\mathcal{M}\), the corresponding gadget in \(\mathcal{C}_{0}\) has exactly two rules, where the first rule (from \(q\) to \(r_{mid}\)) will be denoted by \(r^{b}\) and the second rule (from \(r_{mid}\) to \(q^{\prime}\)) by \(r^{e}\). We would now like to show that the rules of \(\mathcal{M}\) are simulated by their corresponding gadgets. To this end, we first define a mapping \(g\) from configurations of \(\mathcal{M}\) to configurations of \(\mathcal{C}_{0}\) as follows: If \(C=(q,v_{0},v_{1})\), then \(g(C)\) is the configuration of \(\mathcal{C}_{0}\) such that
\[\operatorname{state}(g(C))=q,\ g(C)(c_{0})=v_{0}/2^{m},\ g(C)(c_{1})=v_{1}/2^{m },\ g(C)(st)=1/2^{m},\ g(C)(te)=0\]
We now have the following "gadget simulation" lemma, which solves Challenge (C1).
Lemma 5.2 (Gadget Simulation).: _Suppose \(C\) is a configuration and \(r\) is a rule of \(\mathcal{M}\)._
* _Soundness: If_ \(C\xrightarrow{r}C^{\prime}\)_, then there exists_ \(\alpha,\beta\) _such that_ \(g(C)\xrightarrow{\alpha r^{b},\beta r^{e}}g(C^{\prime})\)_._
* _Completeness: If_ \(g(C)\xrightarrow{\alpha r^{b},\beta r^{e}}D\) _for some_ \(\alpha,\beta\) _and_ \(D\)_, then there exists_ \(C^{\prime}\) _such that_ \(D=g(C^{\prime})\) _and_ \(C\xrightarrow{r}C^{\prime}\)_._
Proof sketch.: We have already discussed the case of increments in some detail before and so we will concentrate on when \(r\) is a doubling rule of the form \((q,\mathbf{double}_{i},q^{\prime})\). The soundness part can be easily obtained by setting \(\alpha=g(C)(c_{i})\) and \(\beta=2\cdot g(C)(c_{i})\). For completeness, note that since \(r^{b}\) has a zero-test on \(c_{i}\), it must be that \(\alpha=g(C)(c_{i})\). Hence, after firing \(\alpha r^{b}\), the value of \(te\) must be \(2\cdot g(C)(c_{i})\). Now since \(r^{e}\) has a zero-test on \(te\), it must be that \(\beta=2\cdot g(C)(c_{i})\). So the net effect of firing \(\alpha r^{b},\beta r^{e}\) is to make the value of \(c_{i}\) to be \(2\cdot g(C)(c_{i})\). Hence, if we let \(C^{\prime}\) be such that \(C\xrightarrow{r}C^{\prime}\) in \(\mathcal{M}\), it can be verified that \(g(C^{\prime})=D\). For more details, see Subsection B.1 of the appendix.
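The same forced-fraction mechanism can be traced numerically for doubling; the rule shapes below are only inferred from the proof sketch above, not taken verbatim from Figure 4(b):

```python
# Numeric illustration (values chosen by us) of the doubling gadget: the zero-test on
# c_i pins alpha to the old value of c_i, the zero-test on te then pins beta to twice
# that value, so the net effect is exactly a doubling of c_i.

ci, te = 0.125, 0.0
alpha = ci                        # rule r^b: c_i -= alpha, te += 2*alpha, test c_i == 0
ci, te = ci - alpha, te + 2 * alpha
assert ci == 0.0
beta = te                         # rule r^e: c_i += beta, te -= beta, test te == 0
ci, te = ci + beta, te - beta
assert te == 0.0
print(ci)                         # 0.25, i.e. double the original value
```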
The "finish" gadget.: Before, we solve Challenge (C2), we make a small modification to \(\mathcal{C}_{0}\). Recall that in \(\mathcal{M}\), we have a _set of final configurations_ given by \(F\coloneqq\{(q_{f},n,n):n\leq 2^{m}\}\), whereas in a \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{\emptyset\gamma}\), we are allowed to specify only one final configuration. However, the \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{\emptyset\gamma}\)\(\mathcal{C}_{0}\) only promises us that the initial configuration \(c_{init}\) of \(\mathcal{M}\) can reach some configuration in \(F\) in \(m\) steps iff \(g(c_{init})\) can reach some configuration in the set \(\{g(D):D\in F\}\) in \(2m\) steps. Hence, we need to make a modification to \(\mathcal{C}_{0}\) which allows us to replace the set of configurations with a single final configuration. To this end, we modify \(\mathcal{C}_{0}\) by adding the "finish gadget" from Figure 4(d), where \(q^{\prime}_{f}\) and \(\overline{q_{f}}\) are two fresh states and the first and the second rule are respectively denoted by \(f^{b}\) and \(f^{e}\). Let us call the resulting \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{\emptyset\gamma}\) as \(\mathcal{C}_{1}\).
Note that the effect of firing \(f^{b}\) is to set the values of \(st\) and \(te\) to \(0\). Further, if \(f^{e}\) is fired, then \(c_{0}\) and \(c_{1}\) are decremented by the same amount and both of them are tested for zero. This means that
\(f^{e}\) could be fired successfully only if the counter values of \(c_{0}\) and \(c_{1}\) at state \(q^{\prime}_{f}\) are the same and the effect of firing \(f^{e}\) is to set the values of \(c_{0}\) and \(c_{1}\) to \(0\). This observation along with repeated applications of the Gadget Simulation lemma give us the following Simulation theorem.
Theorem 5.3 (Simulation Theorem).: _The initial configuration \(c_{init}\) of \(\mathcal{M}\) can reach a final configuration in \(m\) steps iff \(g(c_{init})\) can reach the configuration \((\overline{q_{f}},\mathbf{0})\) in \(2m+2\) steps in \(\mathcal{C}_{1}\)._
The full proof of this theorem can be found in Subsection B.2 of the appendix. We now move on to solving Challenge (C2).
**Solving Challenge (C2).** Thanks to the Simulation theorem, the required reduction is almost over. As we had discussed before, the only remaining part is that since \(g(c_{init})(st)=1/2^{m}\) and \(m\) is already given in binary, we cannot write down \(g(c_{init})\) in polynomial time. To handle this challenge (Challenge (C2)), we construct an "initialization" gadget which starts from a "small" initial configuration and then "sets up" the configuration \(g(c_{init})\).
The initialization gadget is shown in the Figure 6. The gadget shares the counters \(st\) and \(te\) with \(C_{1}\) and has two new counters \(x\) and \(count\). Initially, the gadget will start in \(in_{0}\) and will have the values \(1,0,1/m\) and \(0\) in \(st,te,x\) and \(count\) respectively. In each iteration of the gadget, the value of \(st\) will be halved. The function of \(x\) is to store the value \(1/m\) and the function of \(count\) is to count the number of executions of this gadget. Initially the value of \(count\) is \(0\) and in every iteration its value will increase by \(1/m\). Hence, if we finally require the value of \(count\) to be \(1\), then we would have executed this gadget precisely \(m\) times, thereby setting the value of \(st\) to \(1/2^{m}\).
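The bookkeeping behind the gadget can be sanity-checked with a few lines of code; the loop below only mimics the intended net effect of one iteration (halving \(st\) and adding \(1/m\) to \(count\)), not the internal zero-tested rules of Figure 6:

```python
# Back-of-the-envelope check of the initialization gadget's bookkeeping: requiring
# count == 1 at the end forces exactly m iterations, leaving st == 1/2**m.

m = 8                      # in the reduction m is given in binary
st, count, x = 1.0, 0.0, 1 / m

for _ in range(m):         # one pass through the gadget per loop iteration
    st = st / 2            # the gadget halves st (via a zero-tested detour through te)
    count = count + x      # and transfers the value 1/m from x into count

assert st == 1 / 2**m
print(st, count)           # 0.00390625 1.0 (for this m the floating-point arithmetic is exact)
```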
The following lemma, whose full proof can be found in Subsection B.3 of the appendix, follows from an analysis of the initialization gadget, similar to the one for the Gadget Simulation lemma.
Lemma 5.4 (The Initialization lemma).: _Suppose \(C\) is a configuration of the initialization gadget such that \(\operatorname{state}(C)=in_{0},C(te)=0\) and \(C(x)=1/m\). Then we can execute one iteration of the gadget from \(C\) to go to a configuration \(C^{\prime}\) if and only if \(C^{\prime}\) is the same as \(C\) except that \(C^{\prime}(st)=C(st)/2\) and \(C^{\prime}(count)=C(count)+1/m\)._
We now construct our final \([0,1]\text{-VASS}^{0?}_{\text{RL}}\)\(\mathcal{C}\) as follows: We take the initialization gadget and the \([0,1]\text{-VASS}^{0?}_{\text{RL}}\)\(\mathcal{C}_{1}\) and we add a rule from \(in_{0}\) to \(q_{in}\) which does not do anything to the counters. Intuitively, we first execute the initialization gadget for some steps and then pass the control flow to \(\mathcal{C}_{1}\). We let \(d_{init}\) be the configuration of \(\mathcal{C}\) whose state is \(in_{0}\) and whose counter values are all \(0\), except for \(d_{init}(x)=1/m\) and \(d_{init}(st)=1\). Then, we let \(d_{fin}\) be the configuration of \(\mathcal{C}\) whose state is \(\overline{q_{f}}\) and whose counter values are all \(0\), except for \(d_{fin}(x)=1/m\) and \(d_{fin}(count)=1\). If we encode \(d_{init}\) and \(d_{fin}\) in binary, then they can be written down in polynomial time. Since \(d_{fin}(count)=1\), when the control flow passes from the initialization gadget to \(\mathcal{C}_{1}\), the value of \(st\) must be \(1/2^{m}\), which is exactly what we want.
Theorem 5.5.: \(g(c_{init})\) _can reach the configuration \((\overline{q_{f}},\mathbf{0})\) in the \([0,1]\text{-VASS}^{0?}_{\text{RL}}\)\(\mathcal{C}_{1}\) in \(2(m+1)\) steps if and only if \(d_{init}\) can reach \(d_{fin}\) in the \([0,1]\text{-VASS}^{0?}_{\text{RL}}\)\(\mathcal{C}\) in \(4m+1+2(m+1)\) steps._
Figure 6: The “initialization” gadget.
The full details behind this theorem can be found in Subsection B.4 of the appendix. Combining this theorem with Theorem 5.3, proves the correctness of our reduction.
## 6. From \([0,1]\)-\(\mathsf{VASS}_{\mathrm{RL}}^{0?}\) to \(\mathbb{Q}_{+}\)-PVASS
We now move on to the next step in our reduction chain with the following problem called the reachability problem for \(\mathbb{Q}_{+}\)-VASS\({}_{\mathrm{RL}}\), defined as follows: Given a \(\mathbb{Q}_{+}\)-VASS \(\mathcal{M}\), two configurations \(c_{init}\), \(c_{fin}\), and a number \(m\)_in binary_, whether one can reach \(c_{fin}\) from \(c_{init}\) in exactly \(m\) steps.
Theorem 6.1.: _The reachability problem for \(\mathbb{Q}_{+}\)-VASS\({}_{\mathrm{RL}}\) is \(\mathsf{NEXPTIME}\)-hard._
We prove this theorem by giving a reduction from the reachability problem for \([0,1]\)-VASS\({}_{\mathrm{RL}}^{0?}\). Fix a \([0,1]\)-VASS\({}_{\mathrm{RL}}^{0?}\)\(\mathcal{C}\), two of its configurations \(c_{init}\), \(c_{fin}\), and a number \(m\). Without loss of generality, we assume that every rule in \(\mathcal{C}\) performs at least one zero-test.
**Overview of the reduction.** We want to construct a \(\mathbb{Q}_{+}\)-VASS \(\mathcal{M}\) that simulates \(\mathcal{C}\) for \(m\) steps. The primary challenge that prevents us from doing this is the following:
1. How can we create gadgets to simulate exactly \(m\) zero-tests of \(\mathcal{C}\) in \(\mathcal{M}\)?
We circumvent this challenge as follows: We know that in a \([0,1]\)-VASS\({}_{\mathrm{RL}}^{0?}\), the value of every counter will always be in the range \([0,1]\). Hence, for every counter \(x\), we introduce another counter \(\tilde{x}\), called the _complementary counter_ of \(x\) and maintain the invariant \(x+\tilde{x}=1\) throughout a run. Then testing if the value of \(x\) is \(0\), amounts to testing if the value of \(\tilde{x}\) is at least \(1\). This allows us to replace a zero-test with a greater than or equal to \(1\) (geq1) test.
The latter can be implemented as follows: If \(t\) and \(t^{\prime}\) are rules which decrement and increment \(\tilde{x}\) by \(1\) respectively and \(C\xrightarrow{1t}C^{\prime}\xrightarrow{1t^{\prime}}C^{\prime\prime}\) is a run, then we know that the value of \(\tilde{x}\) in \(C\) is at least \(1\), which lets us implement a geq1 test. Note that for this to succeed, we require that both \(t\) and \(t^{\prime}\) are fired completely, i.e., with fraction \(1\).
To sum this up, this means that if we were to simulate a rule \(r=(q,(w,s),q^{\prime})\) of the \([0,1]\)-VASS\({}_{\mathrm{RL}}^{0?}\)\(\mathcal{C}\) in our new machine with the complementary counters, we need one rule to take care of the updates corresponding to \(w\) and two rules to take care of geq1 tests corresponding to the zero tests in \(s\), both of which must be _fired completely_. Hence, simulating \(m\) steps of \(\mathcal{C}\) in our new machine requires \(3m\) steps, of which exactly \(2m\) steps must be _fired completely_. This leads us to
1. How can we force the rules corresponding to geq1 tests to be fired completely, exactly \(2m\) times?
To solve this challenge, we introduce another counter \(ctrl\), called the _controlling counter_. We modify every rule corresponding to a geq1 test to also increment the value of the counter \(ctrl\) by \(1\). This means that, if \(\rho\) is a run of \(3m\) steps such that the value of \(ctrl\) after \(\rho\) is exactly \(2m\), then every rule corresponding to a geq1 test must have been fired completely along the run \(\rho\).
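A compact way to picture the complementary-counter idea is the following sketch; the counter names and the bookkeeping of \(ctrl\) are ours, and the real construction distributes the increments of \(ctrl\) over the two rules of the gadget in Figure 7:

```python
# Every counter x gets a twin xbar with x + xbar == 1, so "x == 0?" becomes "xbar >= 1?",
# which is certified by fully firing a decrement of xbar by 1 followed by an increment.

def geq1_test(xbar, ctrl):
    """Simulate the two fully-fired rules of a geq1 test on xbar; ctrl counts full firings."""
    assert xbar - 1.0 >= 0.0      # the decrement by 1 is only possible if xbar >= 1
    xbar = xbar - 1.0             # first rule, fired with fraction 1
    xbar = xbar + 1.0             # second rule, fired with fraction 1
    return xbar, ctrl + 2         # each full firing bumps the controlling counter

x = 0.0
xbar = 1.0 - x                    # invariant maintained by the construction
xbar, ctrl = geq1_test(xbar, ctrl=0)
print(x, xbar, ctrl)              # 0.0 1.0 2 -- the zero-test on x succeeded
```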
**Formal construction.** Having given an informal overview of the reduction, we now proceed to the formal construction. We are given an \([0,1]\)-VASS\({}_{\mathrm{RL}}^{0?}\)\(\mathcal{C}\) and a number \(m\) in binary. From the \([0,1]\)-VASS\({}_{\mathrm{RL}}^{0?}\)\(\mathcal{C}\), we will construct a \(\mathbb{Q}_{+}\)-VASS \(\mathcal{M}\) as follows. For every counter \(x\) of \(\mathcal{C}\), \(\mathcal{M}\) will have two counters \(x\) and \(\tilde{x}\). Every transition that increments \(x\) will decrement \(\tilde{x}\) by the same amount and vice-versa, so that the sum of the values of \(x\) and \(\tilde{x}\) will be equal to \(1\) throughout. Further, \(\mathcal{M}\) will have another counter \(ctrl\), called the _controlling counter_.
Suppose \(r\coloneqq(q,t,q^{\prime})\) is a rule of \(\mathcal{C}\) such that \(t=(w,s)\). Denote by \(\tilde{w}\) the vector such that \(\tilde{w}(ctrl)=0\) and for every counter \(x\) of \(\mathcal{C}\), \(\tilde{w}(x)=w(x)\) and \(\tilde{w}(\tilde{x})=-w(x)\). Then corresponding to the rule \(r\) in \(\mathcal{C}\), \(\mathcal{M}\) will have the gadget in Figure 7, whose first, second and third rules will be denoted by \(r^{b},r^{m}\) and \(r^{e}\) respectively.
For any configuration \(C\) of \(\mathcal{C}\), let \(G(C)\) denote the _set of configurations_ of \(\mathcal{M}\) such that \(D\in G(C)\) iff \(\text{state}(D)=\text{state}(C)\), \(D(x)=C(x)\) and \(D(\tilde{x})=1-C(x)\) for every counter \(x\) of \(\mathcal{C}\). Note that any two configurations in \(G(C)\) differ only in their value of the counter \(ctrl\). For any number \(\alpha\), let \(G(C)_{\alpha}\) denote the unique configuration in \(G(C)\) whose \(ctrl\) value is \(\alpha\). The following lemma, whose full proof can be found in Subsection C.1 of the appendix, is a consequence of the discussion given in the overview section.
Lemma 6.2 (Control counter simulation).:
* _Soundness: If_ \(C\xrightarrow{\alpha r}C^{\prime}\) _in_ \(\mathcal{C}\)_, then for any_ \(\zeta\)_,_ \(G(C)_{\zeta}\xrightarrow{\alpha r^{b},1r^{m},1r^{e}}G(C^{\prime})_{\zeta+2}\)_._
* _Completeness: If_ \(G(C)_{\zeta}\xrightarrow{\alpha r^{b},\beta r^{m},\gamma r^{e}}D\) _for some_ \(\alpha,\beta,\gamma,\zeta\) _and_ \(D\) _such that_ \(D(ctrl)=\zeta+2\)_, then there exists_ \(C^{\prime}\) _such that_ \(D=G(C^{\prime})_{\zeta+2}\)_,_ \(\beta=\gamma=1\) _and_ \(C\xrightarrow{\alpha r}C^{\prime}\)_._
Repeated applications of the Control Counter Simulation lemma give us the following theorem, which completes our reduction.
Theorem 6.3.: \(c_{init}\) _can reach \(c_{fin}\) in \(\mathcal{C}\) in \(m\) steps iff \(G(c_{init})_{0}\) can reach \(G(c_{fin})_{2m}\) in \(3m\) steps._
The full proof of the above theorem can be found in Subsection C.2 of the appendix.
Example 6.4.: Let us see a concrete application of this reduction on an example. To this end, consider the \([0,1]\text{-VASS}_{\text{RL}}^{0?}\) given in Figure 8. Note that this is essentially a renamed version of the "increment(\(i\))" gadget described in Figure 4(a). We consider this version here since it makes it easier to describe the effect of our reduction. The result of the application of the reduction on this \([0,1]\text{-VASS}_{\text{RL}}^{0?}\) is given in Figure 9.
Suppose for some \(u,v\in[0,1]\) with \(u+v\leq 1\), we start in state \(q_{0}\) in the \([0,1]\text{-VASS}_{\text{RL}}^{0?}\) given in Figure 8 with counter values \(u,v\) and \(0\) for the counters \(c,st\) and \(te\), respectively. From the argument given in the previous section, we know that if we fire the \([0,1]\text{-VASS}_{\text{RL}}^{0?}\) in Figure 8 once, then we will reach the state \(q_{2}\) with counter values \(u+v,v\) and \(0\) for the counters \(c,st\) and \(te\), respectively.
Now, suppose we start in \(q_{0}\) in the \(\mathbb{Q}_{+}\text{-VASS}_{\text{RL}}\) given in Figure 9 with counter values \(u,1-u,v,1-v,0,1\) and \(0\) in \(c,\overline{c},st,\overline{st},te,\overline{te}\) and \(ctrl\), respectively. From the reduction, we know that if we fire the gadget in Figure 9 once, and reach the state \(q_{2}\) with counter value \(4\) for the controlling counter \(ctrl\), then the counter values for counters \(c,\overline{c},st,\overline{st},te,\overline{te}\) are \(u+v,1-u-v,0,1-v,0\) and \(1\) respectively.
### Wrapping up
We now provide the final steps to prove that reachability for \(\mathbb{Q}_{+}\text{-PVASS}\) is \(\text{NEXPTIME}\text{-hard}\). To do this, we recall a well-known folklore fact about pushdown automata. It essentially states that we can implement a binary counter in a PDA.
Figure 8. Renamed version of the “increment(0)” gadget from Figure 4(a). The rule from \(q_{0}\) to \(q_{1}\) shall be denoted by \(I\) and the rule from \(q_{1}\) to \(q_{2}\) shall be denoted by \(J\).
Figure 7. Gadget for the rule \(r:=(q,t,q^{\prime})\) with \(t=(w,s)\).
Lemma 6.5: _For any number \(m\), in polynomial time in \(\log(m)\), we can construct a PDA \(P_{m}\) of bounded stack-height and two configurations \(C\) and \(C^{\prime}\) such that there is exactly one run from \(C\) to \(C^{\prime}\). Moreover, this run is of length exactly \(m\)._
Proof: The essential idea is to use the stack to do a depth-first search of a binary tree of size \(O(m)\). At each point, the PDA will only store at most \(O(\log m)\) many entries in its stack, because the depth of the tree is \(O(\log m)\). We now give a more precise construction.
Note that when \(m=1\), \(P_{1}\) can simply be taken to be a PDA with two states and a single transition between the first state and the second state which does nothing to the stack. Now, let us consider the case when \(m>1\) is a power of \(2\), i.e., \(m=2^{k}\) for some \(k\). Consider the following PDA \(P_{m}\) with \(k\) stack symbols \(S_{1}^{k},S_{2}^{k},\ldots,S_{k}^{k}\). \(P_{m}\) starts in the state \(b_{m}\) with the empty stack. It then moves to state \(e_{m}\) while pushing \(S_{1}^{k}\) onto the stack. The state \(e_{m}\) has \(k\) self-loop transitions as follows: For each \(1\leq i<k\), the \(i^{th}\) self-loop pops \(S_{i}^{k}\) and pushes \(S_{i+1}^{k}\) twice. Further, the \(k^{th}\) self-loop simply pops \(S_{k}^{k}\). It can be easily verified that starting from state \(b_{m}\) with the empty stack, there is exactly one path to the configuration whose state is \(e_{m}\) and whose stack is empty. Moreover this path is of length exactly \(m\). This is because the desired path is essentially the depth-first search traversal of a binary tree of size \(m-1\), where the root is labelled by \(S_{1}^{k}\) and each node at height \(i\) is labelled by \(S_{i+1}^{k}\). Due to the depth-first search traversal, the number of elements stored in the stack at any point during the run is \(O(k)\).
Now for the general case, suppose \(m=\sum_{1\leq i\leq n}2^{k_{i}}\) for some \(k_{1}<k_{2}<\cdots<k_{n}\leq\log(m)\). The desired PDA \(P_{m}\) has \(\sum_{1\leq i\leq n}k_{i}\) stack symbols given by \(S_{1}^{k_{1}},\ldots,S_{k_{1}}^{k_{1}},S_{1}^{k_{2}},\ldots,S_{k_{2}}^{k_{2} },\ldots,S_{1}^{k_{n}},\ldots,S_{k_{n}}^{k_{n}}\). Further, \(P_{m}\) has \(n+1\) states \(b_{m}^{1},\ldots,b_{m}^{n},b_{m}^{n+1}\). Initially, it starts in the state \(b_{m}^{1}\) with the empty stack. Then for each \(1\leq i\leq n\), it has a transition from \(b_{m}^{i}\) to \(b_{m}^{i+1}\) which pushes \(S_{1}^{k_{i}}\) onto the stack. Then, at state \(b_{m}^{n+1}\), it has the following set of self-loops: For each \(1\leq i\leq n\) and each \(1\leq j<k_{i}\), it pops \(S_{j}^{k_{i}}\) from the stack and pushes \(S_{j+1}^{k_{i}}\) twice. Further for each \(1\leq i\leq n\), it pops \(S_{k_{i}}^{k_{i}}\). It can now be easily verified that starting from state \(b_{m}^{1}\) with the empty stack, there is exactly one path to the configuration whose state is \(b_{m}^{n+1}\) and whose stack is empty and also that this path is of length exactly \(m\).
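The step count of the construction can also be verified mechanically; the following sketch (ours) simulates the stack discipline described above for \(m=2^{k}\) and checks that the unique run indeed has length exactly \(m\):

```python
# Simulate the self-loops of the counting PDA for m = 2**k: the stack holds the
# levels i of the symbols S_i^k, each self-loop step pops one symbol and, if i < k,
# pushes two copies of the next level.

def run_length(k):
    steps = 1                     # the initial push of S_1^k
    stack = [1]
    while stack:
        i = stack.pop()           # one self-loop step per popped symbol
        steps += 1
        if i < k:
            stack.append(i + 1)   # push S_{i+1}^k twice
            stack.append(i + 1)
    return steps

for k in range(1, 8):
    assert run_length(k) == 2**k  # the run has length exactly m = 2**k
```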
We now give a reduction from reachability for \(\mathbb{Q}_{+}\)-VASS\({}_{\text{RL}}\) to reachability for \(\mathbb{Q}_{+}\)-PVASS. Let \(\mathcal{M}=(Q,T,\Delta)\) be a \(\mathbb{Q}_{+}\)-VASS such that \(c_{init}\) and \(c_{fin}\) are two of its configurations and let \(m\) be a number, encoded in binary. Construct the pair \((P_{m},C,C^{\prime})\) as given by the Folklore lemma 6.5. We now take the usual cross product, i.e., the Cartesian product between \(P_{m}\) and \(\mathcal{M}\), to obtain a \(\mathbb{Q}_{+}\)-PVASS \(\mathcal{C}\). (This operation is very similar to taking the cross product between a PDA and an NFA). Intuitively, the PDA part of \(\mathcal{C}\) corresponds to simulating a binary counter, counting till the value \(m\), and the \(\mathbb{Q}_{+}\)-VASS part of \(\mathcal{C}\) corresponds to simulating the \(\mathbb{Q}_{+}\)-VASS \(\mathcal{M}\).
Figure 9: Application of the reduction described in this section on the example \([0,1]\)-VASS\({}_{\text{RL}}^{0\gamma}\) given in Figure 8.
Let \(\mathbf{u}\) (resp. \(\mathbf{v}\)) be the configuration of \(\mathcal{C}\) such that \(\operatorname{state}(\mathbf{u})=(\operatorname{state}(C),\operatorname{state}( c_{init})),\operatorname{stack}(\mathbf{u})=\operatorname{stack}(C),\operatorname{val}( \mathbf{u})=\operatorname{val}(c_{init})\) (resp. \(\operatorname{state}(\mathbf{v})=(\operatorname{state}(C^{\prime}),\operatorname {state}(c_{fin})),\operatorname{stack}(\mathbf{v})=\operatorname{stack}(C^ {\prime}),\operatorname{val}(\mathbf{v})=\operatorname{val}(c_{fin})\)). By construction of \(\mathcal{C}\), \(c_{init}\) can reach \(c_{fin}\) in \(\mathcal{M}\) in \(m\) steps iff \(\mathbf{u}\) can reach \(\mathbf{v}\) in \(\mathcal{C}\).
Theorem 6.6.: _Reachability in \(\mathbb{Q}_{+}\)-PVASS is \(\mathsf{NEXPTIME}\)-hard._
## 7. Coverability, Number of Counters and Encodings
The chain of reductions from reachability in \(2\mathsf{CM}_{\mathsf{RL}}^{\,2,+1}\) to reachability in \(\mathbb{Q}_{+}\)-PVASS proves that the latter is \(\mathsf{NEXPTIME}\)-hard. The reduction from \(2\mathsf{CM}_{\mathsf{RL}}^{\,2,+1}\) to \([0,1]\)-\(\mathsf{VASS}_{\mathsf{RL}}^{0?}\) was accomplished by using 6 counters, and the reduction from \([0,1]\)-\(\mathsf{VASS}_{\mathsf{RL}}^{0?}\) to \(\mathbb{Q}_{+}\)-\(\mathsf{VASS}_{\mathsf{RL}}\) used \(2x+1\) counters where \(x\) is the number of counters of the \([0,1]\)-\(\mathsf{VASS}_{\mathsf{RL}}^{0?}\) instance. Finally, the reduction from \(\mathbb{Q}_{+}\)-\(\mathsf{VASS}_{\mathsf{RL}}\) to \(\mathbb{Q}_{+}\)-PVASS did not add any new counters. It follows that the lower bound already holds for \(\mathbb{Q}_{+}\)-PVASS of dimension 13.
We can go one step further. Similar to reachability in \(\mathbb{Q}_{+}\)-\(\mathsf{VASS}_{\mathsf{RL}}\) we can define coverability in \(\mathbb{Q}_{+}\)-\(\mathsf{VASS}_{\mathsf{RL}}\), where we want to cover some configuration in a given number of steps. Let us inspect the \(\mathbb{Q}_{+}\)-\(\mathsf{VASS}_{\mathsf{RL}}\)\(\mathcal{M}\) that we constructed in Section 6. We claim that \(G(c_{init})_{0}\) can reach \(G(c_{fin})_{2m}\) in \(3m\) steps iff \(G(c_{init})_{0}\) can cover \(G(c_{fin})_{2m}\) in \(3m\) steps.
The left-to-right implication is trivial. For the other direction, notice that in any run of \(3m\) steps in \(\mathcal{M}\) starting from \(G(c_{init})_{0}\), the value of \(ctrl\) can be increased by at most \(2m\). Further, for every counter \(x\neq\mathit{ctrl}\), we maintain the invariant \(x+\tilde{x}=1\) throughout. It then follows that the only way to cover \(G(c_{fin})_{2m}\) in \(3m\) steps is by actually reaching \(G(c_{fin})_{2m}\). Hence, coverability in \(\mathbb{Q}_{+}\)-\(\mathsf{VASS}_{\mathsf{RL}}\) is also \(\mathsf{NEXPTIME}\)-hard. Since the reduction in Section 6.1 preserves coverability, we obtain:
Theorem 7.1.: _The coverability problem for 13-dimensional \(\mathbb{Q}_{+}\)-PVASS is \(\mathsf{NEXPTIME}\)-hard._
Let us now consider the encoding of the numbers that we use. It can be easily verified that in the final \(\mathbb{Q}_{+}\)-PVASS instance that we construct using our chain of reductions from Sections 4 till 6, all the numbers are fixed constants, except for the numbers appearing in the initial and final configurations, which are encoded in binary. Hence, the above theorem holds for 13-dimensional \(\mathbb{Q}_{+}\)-PVASS where the numbers are encoded in binary. We show that it is possible to strengthen this result to unary-encoded numbers at the cost of increasing the number of counters by a constant. More specifically, in Section D of the appendix, we present an alternate reduction, which given an instance of reachability in \([0,1]\)-\(\mathsf{VASS}_{\mathsf{RL}}^{0?}\) over \(x\) counters produces an instance of coverability in \(\mathbb{Q}_{+}\)-\(\mathsf{VASS}_{\mathsf{RL}}\) over \(10x+25\) counters where all numbers are encoded in unary. (We have already discussed the idea of this reduction in the Introduction). Since the proof in Section 5 shows that reachability in \([0,1]\)-\(\mathsf{VASS}_{\mathsf{RL}}^{0?}\) is \(\mathsf{NEXPTIME}\)-hard already over 6 counters, this will prove that coverability in \(\mathbb{Q}_{+}\)-\(\mathsf{VASS}_{\mathsf{RL}}\) over 85 counters where all numbers are encoded in unary is also \(\mathsf{NEXPTIME}\)-hard. Since the reduction given in Section 6.1 from \(\mathbb{Q}_{+}\)-\(\mathsf{VASS}_{\mathsf{RL}}\) to \(\mathbb{Q}_{+}\)-PVASS produces a \(\mathbb{Q}_{+}\)-PVASS of bounded stack-height, does not add any new counters and does not change the encodings of the numbers, we can now conclude the following theorem.
Theorem 7.2.: _The coverability problem for \(\mathbb{Q}_{+}\)-PVASS is \(\mathsf{NEXPTIME}\)-hard, already over \(\mathbb{Q}_{+}\)-PVASSes of dimension 85, bounded stack-height, and when all numbers are encoded in unary._
This hardness result is very strong, as it simultaneously achieves coverability, bounded stack-height, constant dimension, and unary encodings. In contrast, our upper bound shows that we can decide reachability of \(\mathbb{Q}_{+}\)-PVASS over arbitrary dimension, even when all the numbers are encoded in binary.
Finally, the reduction from \(\mathbb{Q}_{+}\)-\(\mathsf{VASS}_{\mathsf{RL}}\) to \(\mathbb{Q}_{+}\)-PVASS in Section 6.1 only used the fact that for every \(m\), (1) there is a PDA of size \(O(\log(m))\) which can "count" exactly till \(m\) and (2) we can take product of a PDA with a \(\mathbb{Q}_{+}\)-VASS. For any model of computation that satisfies these two constraints,
the corresponding reachability problem over continuous counters should also be \(\mathtt{NEXPTIME}\)-hard. For instance, if we replace a stack in \(\mathbb{Q}_{+}\)-PVASS with _Boolean programs_ to define _Boolean programs with continuous counters_ then their reachability and coverability problems are also \(\mathtt{NEXPTIME}\)-hard. A similar result also holds when we replace the stack with a (discrete) one-counter machine which can only increment its counter and whose accepting condition is reaching a particular counter value given in binary. For both models, the reachability and coverability problems must also be in \(\mathtt{NEXPTIME}\), because the former can be converted into an exponentially bigger \(\mathbb{Q}_{+}\)-VASS, for which these problems are in \(\mathsf{NP}\) (Blondin and Haase, 2017, Theorem 4.14).
## 8. Conclusion
We have shown that the reachability problem for continuous pushdown VASS is \(\mathtt{NEXPTIME}\)-complete. While our upper bound works for an arbitrary number of counters, our lower bound already holds for the coverability problem for continuous pushdown VASS with a constant number of counters, bounded stack-height and when all numbers are encoded in unary.
As part of future work, it might be interesting to study the complexity of coverability and reachability for continuous pushdown VASS over low dimensions. It might also be interesting to study the coverability and reachability problems for extensions of continuous pushdown VASS. For instance, it is already known that reachability in continuous VASS in which the counters are allowed to be tested for zero is undecidable (Blondin and Haase, 2017, Theorem 4.17). It might be interesting to see if this is also the case when the continuous counters are endowed with operations such as resets or transfers. Finally, it would be nice to extend the decidability result here to other machine models, such as continuous VASS with higher-order stacks.
###### Acknowledgements.
The authors are grateful to the anonymous reviewers for their helpful comments and for pointing out a small (and easily fixable) mistake in an earlier version. This research was sponsored in part by the Deutsche Forschungsgemeinschaft project 389792660 TRR 248-CPEC. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement number 787367 (PaVeS). Funded by the European Union (ERC, FINABIS, 101077902). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
|
2305.12748 | Geometry effects in quantum dot families | We consider Schr\"odinger operators in $L^2(\mathrm{R}^\nu),\, \nu=2,3$, with
the interaction in the form of an array of potential wells, each of them having
rotational symmetry, arranged along a curve $\Gamma$. We prove that if $\Gamma$
is a bend or deformation of a line, being straight outside a compact, and the
wells have the same arcwise distances, such an operator has a nonempty discrete
spectrum. It is also shown that if $\Gamma$ is a circle, the principal
eigenvalue is maximized by the arrangement in which the wells have the same
angular distances. Some conjectures and open problems are also mentioned. | Pavel Exner | 2023-05-22T06:17:31Z | http://arxiv.org/abs/2305.12748v3 | # Geometry effects in quantum dot families
###### Abstract.
We consider Schrodinger operators in \(L^{2}(\mathbb{R}^{\nu}),\,\nu=2,3\), with the interaction in the form of an array of potential wells, each of them having rotational symmetry, arranged along a curve \(\Gamma\). We prove that if \(\Gamma\) is a bend or deformation of a line, being straight outside a compact, and the wells have the same arcwise distances, such an operator has a nonempty discrete spectrum. It is also shown that if \(\Gamma\) is a circle, the principal eigenvalue is maximized by the arrangement in which the wells have the same angular distances. Some conjectures and open problems are also mentioned.
Key words and phrases: Schrodinger operators, geometrically induced discrete spectrum, spectral optimisation. 2010 Mathematics Subject Classification: 81Q37, 35J10, 35P15.
## 1. Introduction
Spectral theory of Schrodinger operators is a topic which may never be exhausted. In this paper we focus on what one could call _guided quantum dynamics_, in other words, description of particle motion restricted in one direction but free in the other(s). Mathematically such systems are usually described either by Dirichlet Laplacians in tube- or layer-form regions, or alternatively by Schrodinger operators with a singular interaction supported by a manifold or complex of a lower dimension, see [1] for a survey.
Recently another model attracted attention where, in contrast to the above mentioned operator classes, the confinement is'soft' being realized by a regular potential well built over a fixed curve, cf. [1] and the subsequent work in [11, 2, 12, 13] as well as the results concerning the analogous problem about confinement in the vicinity of surfaces of a positive Gauss curvature [1, 12]. One has to add that Schrodinger operators of this type were studied before - see, e.g. [14, 15] - for a different purpose; the focus in those works was on the limit in which the'size' of the transverse confinement shrinks to zero.
The common feature of all the mentioned work is that the interaction is invariant with respect to shifts along the defining manifold; the potential depends on the distance from it only. This is, for instance, a natural model of semiconductor quantum wires. The present solid-state physics, however, makes it possible to fabricate many other objects, among which a prominent place belongs to _quantum dots_, also called semiconductor nanocrystals
[Qwiki]. They often appear in arrays, in which case a natural question concerns the electron transport, or the absence of it, in such systems. The simplest model one can use here features an array of potential wells, which is what we are going to investigate in this paper. Apart from the indicated physical motivation, an extension of the studies mentioned above to the situation where a soft quantum waveguide has a nontrivial longitudinal structure represents a mathematical problem of an independent interest.
We analyze Schrodinger operators in \(L^{2}(\mathbb{R}^{\nu}),\,\nu=2,3\), with the interaction term in the form of arrays of potentials wells, for simplicity assuming that each of those has a rotational symmetry. We derive two main results. The first concerns infinite arrays obtained by local perturbations of a straight family of equidistantly spaced wells, bends or deformations; using Birman-Schwinger analysis we show that they have a nonempty discrete spectrum (Theorem 3.6). Secondly, in analogy with [1, Prop. 3.2.1. and Thm. 10.6] and [1] we consider the situation where the wells are arranged on a circle; using the Birman-Schwinger principle again we prove that the principal eigenvalue of such a Schrodinger operator is sharply maximized in the arrangement where the wells have the same angular distances (Theorem 5.1). Before stating and proving these claims, we describe in the next section the setting of our task in proper terms. We will also outline relations between the present problem and spectral properties of Schrodinger operators with point interactions [1], and we conclude the paper by listing two conjectures about the ground-state optimisation together with some other open problems about operators of this type.
## 2. Preliminaries
The setting we consider is simple. Given a \(\rho>0\) and a real-valued function \(V\in L^{2}(0,\rho)\) we define a radial potential supported in an open ball \(B_{\rho}(y)\) centered at a point \(y\in\mathbb{R}^{\nu}\), \(\nu=2,3\), as the map \(x\mapsto V(\operatorname{dist}(x,y))\); with an abuse of notation we use for the latter the symbol \(V\) again. Furthermore, we consider a family of points, \(Y=\{y_{i}\}\subset\mathbb{R}^{\nu}\), finite or infinite, and such that the balls centered at them do not overlap, \(\operatorname{dist}(y_{i},y_{j})\geq 2\rho\) if \(i\neq j\), and denote by \(V_{i}\) the potential determined by the function \(x\mapsto V(x-y_{i})\) in the ball \(B_{\rho}(y_{i})\). The object of our interest is then the Schrodinger operator
\[H_{\lambda V,Y}=-\Delta-\lambda\sum_{i}V_{i}(x) \tag{2.1}\]
which is by our assumption about the function \(V\) self-adjoint on \(H^{2}(\mathbb{R}^{\nu})\); we will use the shorthand \(-\lambda V_{Y}\) for the potential term on the right-hand side of (2.1). Without repeating it further we will always restrict our attention to nontrivial situations when \(V\) _is nonzero_, and unless stated otherwise, we put \(\lambda=1\). We will also suppose that the potential supports _do not overlap_, \(B_{\rho}(y_{i})\cap B_{\rho}(y_{j})=\emptyset\) for \(i\neq j\). To visualise better the geometry of the set \(Y\) we suppose that its points are distributed in specific ways over a curve \(\Gamma\subset\mathbb{R}^{\nu}\), or alternatively over a surface \(\Sigma\subset\mathbb{R}^{3}\). We are interested in
relations between the form of \(Y\) and the spectrum of \(H_{V,Y}\), in particular about implications of variations of the geometry of the curve \(\Gamma\).
If \(Y\) consists of a single point, the position of the interaction plays no role and we use the abbreviated symbol \(H_{\lambda V}\) for the operator (2.1). It is straightforward to check that \(\sigma_{\mathrm{ess}}(H_{\lambda V})=[0,\infty)\) and the discrete spectrum, written as an ascending sequence \(\{\epsilon_{n}\}\) with the multiplicity taken into account, is at most finite. In two dimensions it is nonempty provided \(\int_{0}^{\rho}V(r)\,r\mathrm{d}r\geq 0\); for all small enough positive \(\lambda\) there is a unique negative eigenvalue if and only if the integral is non-negative [10]. In three dimensions, the existence of bound states requires a critical interaction strength.
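As an aside which we will not need in the following, recall the standard weak-coupling asymptotics in two dimensions: if \(\int_{0}^{\rho}V(r)\,r\,\mathrm{d}r>0\), the unique negative eigenvalue \(\epsilon(\lambda)\) of \(H_{\lambda V}\) behaves as
\[\epsilon(\lambda)=-\exp\Big(-\frac{4\pi}{\lambda\int_{\mathbb{R}^{2}}V(x)\,\mathrm{d}x}\,\big(1+o(1)\big)\Big)\quad\text{as}\quad\lambda\to 0+,\]
where \(\int_{\mathbb{R}^{2}}V(x)\,\mathrm{d}x=2\pi\int_{0}^{\rho}V(r)\,r\,\mathrm{d}r\); this illustrates how weakly a shallow two-dimensional well binds.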
## 3. Bound states in bent or locally deformed chains
In this section the set \(Y\) is infinite and its points lie on a curve regarded as a continuous, piecewise \(C^{1}\) map \(\Gamma:\,\mathbb{R}\to\mathbb{R}^{\nu}\); without loss of generality we may suppose that the curve is unit-speed, \(|\dot{\Gamma}|=1\), in other words, that it is parametrized by its arc length. The points of the array, which now may be denoted as \(Y_{\Gamma}\), will then be supposed to be distributed equidistantly with respect to this variable with a spacing \(a\geq 2\rho\). Note that the necessary, but in general not sufficient condition for the potential components not to overlap is \(|\Gamma(s+a)-\Gamma(s)|\geq 2\rho\) for any \(s\in\mathbb{R}\); recall that the radius of \(\operatorname{supp}V\) is smaller than \(a\) by assumption.
### The essential spectrum
Consider first the geometrically trivial case where the set \(Y=Y_{0}\) is invariant with respect to discrete translations, i.e. the generating curve is a straight line:
**Proposition 3.1**.: _Let the potentials be placed along a straight line, \(\Gamma=\Gamma_{0}\), then \(\sigma(H_{V,Y_{0}})\supset[0,\infty)\). If \(\int_{0}^{\rho}V(r)\,r^{\nu-1}\mathrm{d}r\geq 0\), we have \(\inf\sigma(H_{V,Y_{0}})<0\), and the spectrum may or may not have gaps. Their number is finite and does not exceed \(\#\sigma_{\mathrm{disc}}(H_{V})\). This bound is saturated for the spacing \(a\) large enough if \(\nu=2\); in the case \(\nu=3\) there may be one gap less, which happens if the potential is weak, i.e. for \(H_{\lambda V,Y_{0}}\) with \(\lambda\) sufficiently small._
Proof.: Without loss of generality we may identify \(\Gamma_{0}\) with the \(x\) axis and \(Y_{0}\) with the set \(\{(ia,0\in\mathbb{R}^{\nu-1}):i\in\mathbb{Z}\}\). The inclusion \(\sigma(H_{V,Y_{0}})\supset[0,\infty)\) is easy to check: one has to choose a disjoint family of increasing regions on which \(H_{V}\) acts as Laplacian and construct a suitable Weyl sequence the elements of which are products of plane waves with appropriate mollifiers. To establish the band-gap structure of the negative part of the spectrum, we use Floquet decomposition, \(H_{V,Y_{0}}=\int_{\mathcal{B}}^{\oplus}H_{V}(\theta)\,\mathrm{d}\theta\) with \(\mathcal{B}=\big{[}-\frac{\pi}{a},\frac{\pi}{a}\big{)}\), where the fiber \(H_{V}(\theta)\) is an operator in \(L^{2}(S_{a})\), where \(S_{a}:=J_{a}\times\mathbb{R}^{\nu-1}\) and \(J_{a}:=\big{(}-\frac{a}{2},\frac{a}{2}\big{)}\), acting as \(H_{V}=-\Delta-V\) on the domain
\[D(H_{V}(\theta))=\Big{\{}\psi\in H^{2}(S_{a}):\,\psi\big{(}\tfrac{a}{2},x_{ \perp}\big{)}=\mathrm{e}^{i\theta}\psi\big{(}-\tfrac{a}{2},x_{\perp}\big{)} \ \text{ and }\\ \partial_{x_{1}}\psi\big{(}\tfrac{a}{2},x_{\perp}\big{)}= \mathrm{e}^{i\theta}\partial_{x_{1}}\psi\big{(}-\tfrac{a}{2},x_{\perp}\big{)} \ \text{ for all }\ x_{\perp}\in\mathbb{R}^{\nu-1}\Big{\}}, \tag{3.1}\]
where we use the notation \(x=(x_{1},x_{\perp})\) for elements of \(S_{a}\) and \(\mathbb{R}^{\nu}\). The negative spectrum of \(H_{V,Y_{0}}\) is nonempty if this is the case for some \(H_{V}(\theta)\), and it is obvious that such spectral points can be only eigenvalues of \(H_{V}(\theta)\). Each \(H_{V}(\theta)\) is self-adjoint and associated with the quadratic form
\[Q_{V,\theta}[\psi]:=\int_{S_{a}}\big{(}|\nabla\psi(x)|^{2}-V(x)|\psi(x)|^{2} \big{)}\,\mathrm{d}x \tag{3.2}\]
with the domain consisting of all \(\psi\in H^{1}(S_{a})\) that satisfy the first quasi-periodic condition in (3.1). Using further the unitary transformation \(\phi(x)=\mathrm{e}^{i\theta x_{1}/a}\psi(x)\), we can rewrite the form (3.2) as the map
\[\phi\mapsto\int_{S_{a}}\Big{(}\big{|}\big{(}-i\partial_{x_{1}}-\tfrac{\theta} {a}\big{)}\phi(x)\big{|}^{2}+|\nabla_{x_{\perp}}\phi(x)|^{2}-V(x)|\phi(x)|^{2} \Big{)}\,\mathrm{d}x\]
defined on \(H^{1}(S_{a})\) with periodic boundary conditions, \(\phi\big{(}\tfrac{a}{2}\big{)}=\phi\big{(}-\tfrac{a}{2}\big{)}\). From here one can check that the eigenvalues of \(H_{V}(\theta)\), if they exist, are continuous functions of \(\theta\), and their ranges constitute the spectral bands. Moreover, the lower and upper band edges correspond respectively to the symmetric and antisymmetric solutions, \(\psi(x)=\pm\psi(-x)\), and at the same time, the bracketing argument [10, Sec. XIII.15] applied to (3.2) gives the bounds
\[H^{\mathrm{N}}_{V,a}\leq H_{V}(\theta)\leq H^{\mathrm{D}}_{V,a},\quad\theta \in\mathcal{B},\]
where \(H^{\mathrm{N/D}}_{V,a}\) are the operators acting as \(-\Delta-V\) on functions of \(H^{2}(S_{a})\) satisfying the Neumann and Dirichlet conditions, respectively, at the boundary of the slab \(S_{a}\). By the minimax principle, this means that the \(j\)th spectral band is squeezed between the \(j\)th eigenvalues of the two operators provided those exist; if such an eigenvalue exists for \(H^{\mathrm{N}}_{V,a}\) but not for \(H^{\mathrm{D}}_{V,a}\) the upper bound is replaced by zero. Another application of the bracketing argument shows that the estimating eigenvalues are monotonous with respect to \(a\) so that the bands shrink as \(a\) increases. Furthermore, we note that the discrete spectrum of the two operators is the same as that of \(\tilde{H}^{\mathrm{N/D}}_{V,a}\) obtained from \(H_{V}\) by adding the Neumann/Dirichlet condition at \(x_{1}=\pm\tfrac{a}{2}\) since the 'outer' parts of these operators are positive. Increasing the spacing of the added conditions we arrive eventually at the same eigenvalue equation, hence the bands shrink to the eigenvalues of \(H_{V}\) as \(a\to\infty\); this yields the last claim.
To prove the sufficient condition for the existence of negative spectrum it is enough to find a trial function which makes the form \(Q_{V,0}\) of (3.2) negative. We can use, for instance, the functions
\[\chi_{b,c}(x)=\left\{\begin{array}{ccc}1&\cdots&|x_{\perp}|\leq b\\ \frac{c-|x_{\perp}|}{c-b}&\cdots&b\leq|x_{\perp}|\leq c\\ 0&\cdots&|x_{\perp}|\geq c\end{array}\right. \tag{3.3}\]
independent of \(x_{1}\) if \(\nu=2\), and
\[\chi_{b,c}(x)=\left\{\begin{array}{ccc}1&\cdots&|x_{\perp}|\leq b\\ -\ln\frac{|x_{\perp}|}{c}\big{(}\ln\frac{c}{b}\big{)}^{-1}&\cdots&b\leq|x_{ \perp}|\leq c\\ 0&\cdots&|x_{\perp}|\geq c\end{array}\right. \tag{3.4}\]
if \(\nu=3\). Choosing \(b>\rho\) we ensure that the supports of \(V\) and \(\nabla\chi_{b,c}\) are disjoint. The potential term in \(Q_{V,0}[\chi_{b,c}]\) equals \(\inf\sigma(H_{V,Y_{0}})\), being negative by assumption, and it is easy to check that the kinetic term can be made arbitrarily small by choosing \(c\) sufficiently large; this concludes the proof.
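For completeness, a direct computation with the functions (3.3) and (3.4) gives
\[\int_{S_{a}}|\nabla\chi_{b,c}(x)|^{2}\,\mathrm{d}x=\frac{2a}{c-b}\quad\text{for}\;\;\nu=2,\qquad\int_{S_{a}}|\nabla\chi_{b,c}(x)|^{2}\,\mathrm{d}x=\frac{2\pi a}{\ln\frac{c}{b}}\quad\text{for}\;\;\nu=3,\]
so the kinetic contribution indeed vanishes as \(c\to\infty\) with \(b\) fixed.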
**Remark 3.2**.: The fact that \(\inf\sigma(H_{V,Y_{0}})<0\) holds whenever the potential \(V\) is attractive in the mean is not in contradiction with the need of a critical strength to achieve \(\inf\sigma(H_{V})<0\) in the three-dimensional case; note that the lower edge of the spectrum indicated in the proof converges then to zero as \(a\to\infty\). We also note that the spectrum is absolutely continuous, however, we will not need this property in the following.
Our aim is to find what happens with the spectrum, if \(\Gamma\) is bent or locally deformed; to make things simpler we assume that the curved part is finite and the halfline asymptotes of \(\Gamma\) are either not parallel, or if they are, they point in the opposite directions.
**Proposition 3.3**.: _Suppose that the curve \(\Gamma\) is straight outside a compact set and \(|\Gamma(s)-\Gamma(-s)|\to\infty\) holds as \(|s|\to\infty\), then \(\sigma_{\operatorname{ess}}(H_{V,Y})\) is the same as in the case of a straight line._
Proof.: The inclusion \(\sigma(H_{V,Y})\supset[0,\infty)\) is checked as in the previous case. To prove that also the negative part of the essential spectrum is preserved, we use again the Weyl criterion by which \(\mu\) belongs to a spectral band of \(H_{V,Y}\) if and only if there is a sequence \(\{\psi_{n}\}\subset D(H_{V,Y})\) such that
\[\lim_{n\to\infty}\|(H_{V,Y}-\mu)\psi_{n}\|=0. \tag{3.5}\]
For \(H_{V,Y_{0}}\) such a sequence can be constructed explicitly, its elements being products of the generalized eigenfunction of \(H_{V,Y_{0}}\) corresponding to the eigenvalue \(\mu\) of \(H_{V}(\theta)\) for an appropriate \(\theta\in\mathcal{B}\) with suitable mollifiers in the \(x_{1}\) variable; without loss of generality the latter can be chosen to have disjoint supports. This can be used to construct a Weyl sequence for \(H_{V,Y}\) in the form \(\{\psi_{n}\chi_{n}\}\) where \(\chi_{n}\) are transverse mollifiers of the type (3.3) or (3.4) for \(\nu=2,3\), respectively. By assumption the radius of \(\operatorname{supp}\chi_{n}\) can be made arbitrarily large, and consequently, the influence of this mollifier on \(\|(H_{V,Y}-\mu)\psi_{n}\|\) arbitrarily small if the longitudinal mollifier is supported far enough from the curved part of \(\Gamma\); this yields the inclusion \(\sigma(H_{V,Y_{0}})\subset\sigma(H_{V,Y})\).
The argument can be reverted. Assume that there is a \(\mu\in\sigma_{\operatorname{ess}}(H_{V,Y})\setminus\sigma_{\operatorname{ ess}}(H_{V,Y_{0}})\). By Weyl criterion one can then find a sequence \(\{\psi_{n}\}\) such that \(\|(H_{V,Y}-\mu)\psi_{n}\|\to 0\), and since \(\mu\) belongs to the essential spectrum, the sequence is weakly convergent, \(\psi_{n}\rightharpoonup 0\) as \(n\to\infty\). The functions \(\psi_{n}\) have to be supported dominantly in the vicinity of the support of the potential \(V_{Y}\) because \(\mu\) is negative by assumption and this is the only part of \(\mathbb{R}^{\nu}\) which can make a negative contribution to the quadratic form of the operator (2.1),
\[Q_{V,Y}[\psi]:=\int_{\mathbb{R}^{\nu}}|\nabla\psi(x)|^{2}\,\mathrm{d}x-\sum_{i }\int_{\mathbb{R}^{\nu}}V(x-y_{i})|\psi(x)|^{2}\,\mathrm{d}x,\]
defined on \(H^{1}(\mathbb{R}^{\nu})\). Furthermore, the weak convergence means, in particular, that \(\|\psi_{n}|_{\Omega}\|\to 0\) holds for any bounded \(\Omega\subset\mathbb{R}^{\nu}\) as \(n\to\infty\), hence the regions where the functions \(\psi_{n}\) are concentrated must move away from the non-straight part of \(\Gamma\). It may happen that a \(\psi_{n}\) is supported in the vicinity of both the asymptotic arms of \(\Gamma\), but since the asymptotes are not parallel in the same direction, we may use the weak convergence and replace such a \(\psi_{n}\) by a sum of two \(H^{1}\) functions with the intersection of such a split becoming negligible as \(n\) increases. Considering now a subsequence of \(\{\psi_{n}\}\) concentrated around a single asymptotic arm with their centers-of-mass moving to infinity, we get a Weyl sequence applicable to a straight \(\Gamma\). This would mean that \(\mu\in\sigma_{\mathrm{ess}}(H_{V,Y_{0}})\) which, however, is a contradiction.
### The discrete spectrum
Now we are going to suppose that the potential is _purely attractive_, \(V\geq 0\), and show that geometric perturbations do then give rise to a nonempty spectrum below \(\epsilon_{0}:=\inf\sigma(H_{V,Y_{0}})=\inf\sigma_{\mathrm{ess}}(H_{V,Y})\). We will employ the Birman-Schwinger principle; for a rich bibliography concerning this remarkable tool we refer to [1]. To this aim we define for any \(z\in\mathbb{C}\setminus\mathbb{R}_{+}\) the operator in \(L^{2}(\mathbb{R}^{\nu})\),
\[K_{V,Y}(z):=V_{Y}^{1/2}(-\Delta-z)^{-1}V_{Y}^{1/2}\,; \tag{3.6}\]
we are particularly interested in the negative spectral parameter value, \(z=-\kappa^{2}\) with \(\kappa>0\). In view of our assumptions about the potential the non-trivial part of \(K_{V,Y}(-\kappa^{2})\) is positive and maps \(L^{2}(\operatorname{supp}V_{Y})\to L^{2}(\operatorname{supp}V_{Y})\). Since the supports of the potentials \(V_{i}\) are disjoint by assumption, we have
\[L^{2}(\operatorname{supp}V_{Y})=\sum_{i}^{\oplus}L^{2}(B_{\rho}(y_{i}))\]
and using this orthogonal sum decomposition we can write the Birman-Schwinger operator (3.6) in the 'matrix' form with the 'entries'
\[K_{V,Y}^{(i,j)}(-\kappa^{2}):=V_{i}^{1/2}(-\Delta+\kappa^{2})^{-1}V_{j}^{1/2} \tag{3.7}\]
mapping \(L^{2}(B_{\rho}(y_{j}))\) to \(L^{2}(B_{\rho}(y_{i}))\). The Birman-Schwinger principle allows us to determine eigenvalues of \(H_{V,Y}\) by inspection of those of \(K_{V,Y}(-\kappa^{2})\):
**Proposition 3.4**.: \(z\in\sigma_{\mathrm{disc}}(H_{V,Y})\) _holds if and only if \(\,1\in\sigma_{\mathrm{disc}}(K_{V,Y}(z))\) and the dimensions of the corresponding eigenspaces coincide. The operator \(K_{V,Y}(-\kappa^{2})\) is bounded for any \(\kappa>0\) and the function \(\kappa\mapsto K_{V,Y}(-\kappa^{2})\) is continuously decreasing in \((0,\infty)\) with \(\lim_{\kappa\to\infty}\|K_{V,Y}(-\kappa^{2})\|=0\)._
Proof.: The first claim is a particular case of a more general and commonly known result, see, e.g., [1]. Using the explicit form of \((-\Delta-z)^{-1}\) as the integral operator with the kernel \((x,x^{\prime})\mapsto\frac{1}{2\pi}K_{0}(\kappa|x-x^{\prime}|)\) and \(\frac{\mathrm{e}^{-\kappa|x-x^{\prime}|}}{4\pi|x-x^{\prime}|}\) for \(\nu=2,3\), respectively, we can check that \(K_{V,Y}(-\kappa^{2})\) is bounded if \(V\in L^{2}\). Using Sobolev inequality [11, Sec. IX.4] we infer that each \(K_{V,Y}^{(i,j)}(-\kappa^{2})\) has a finite Hilbert-Schmidt norm, uniformly in \(i,j\), if \(\nu=3\). To make the same conclusion for \(\nu=2\) one has to use in addition the fact that
\(|K_{0}(\kappa r)|\leq cr^{-1}\) holds on \([0,2\rho]\) for a fixed \(\kappa>0\) and some \(c>0\). The operator \(K_{V,Y}(-\kappa^{2})=\sum_{i,j}K_{V,Y}^{(i,j)}(-\kappa^{2})\) is no longer compact, of course, but due to the uniformity the boundedness persists.
The continuity in \(\kappa\) follows from the functional calculus and we have
\[\frac{\mathrm{d}}{\mathrm{d}\kappa}(\psi,V_{Y}^{1/2}(-\Delta+\kappa^{2})^{-1} \,V_{Y}^{1/2}\psi)=-2\kappa(\psi,V_{Y}^{1/2}(-\Delta+\kappa^{2})^{-2}\,V_{Y}^{ 1/2}\psi)<0\]
for any nonzero \(\psi\in L^{2}(\operatorname{supp}V_{Y})\) which implies, in particular, the norm monotonicity. It follows from the dominated convergence theorem that \(\lim_{\kappa\to\infty}\|K_{V,Y}^{(i,i)}(-\kappa^{2})\|_{2}=0\) holds for the 'diagonal' operators, uniformly in \(i\). Using further the fact that \(\operatorname{dist}(y_{i},y_{j})\geq\delta:=a-2\rho>0\), \(i\neq j\), we get
\[\|K_{V,Y}^{(i,j)}(-\kappa^{2})\|\leq\|K_{V,Y}^{(i,j)}(-\kappa^{2})\|_{2}\leq \frac{\mathrm{e}^{-\kappa\delta}}{4\pi\delta}\,\|K_{V,Y}^{(i,i)}(-\kappa^{2}) \|_{2}\]
for the 'non-diagonal' operators if \(\nu=3\), and a similar estimate with the right-hand side factor replaced by \(\frac{1}{2\pi}K_{0}(\kappa\delta)\) if \(\nu=2\).
**Remark 3.5**.: Applying the Birman-Schwinger principle to the fiber operators in the decomposition \(H_{V,Y_{0}}=\int_{\mathcal{B}}^{\oplus}H_{V}(\theta)\,\mathrm{d}\theta\) one can check that the spectrum of \(K_{V,Y_{0}}(-\kappa^{2})\) has the band-gap structure and the function \(\kappa\mapsto\sup\sigma(K_{V,Y_{0}}(-\kappa^{2}))\) is decreasing, being equal to one at \(\kappa_{0}=\sqrt{-\epsilon_{0}}\). By Proposition 3.3 the essential spectrum is preserved by the considered geometric perturbations, hence the function \(\kappa\mapsto\sup\sigma_{\mathrm{ess}}(K_{V,Y}(-\kappa^{2}))\) has the same properties; note that one can apply the BS principle to the essential spectrum directly using the spectral shift function [10].
Now we are in a position to state the main result of this section:
**Theorem 3.6**.: _Assume that \(\Gamma\) satisfying the assumptions of Proposition 3.3 is not a straight line and \(V\geq 0\), then \(\inf\sigma(H_{V,Y})<\epsilon_{0}\), and consequently, we have \(\sigma_{\mathrm{disc}}(H_{V,Y})\neq\emptyset\)._
Proof.: In view of Proposition 3.4 we have to show that there is a \(\kappa>\sqrt{-\epsilon_{0}}\) such that \(K_{V,Y}(-\kappa^{2})\) has eigenvalue one. By Remark 3.5 such a spectral point can be an eigenvalue of finite multiplicity only, and Proposition 3.4 tells us that any such eigenvalue is a decreasing function of \(\kappa\) which tends to zero as \(\kappa\to\infty\). To prove the theorem, it is thus sufficient to check that \(\sup\sigma(K_{V,Y}(-\kappa_{0}^{2}))>1=\sup\sigma_{\mathrm{ess}}(K_{V,Y}(- \kappa_{0}^{2}))\).
We are going to use the fact that our geometric perturbations are sign-definite - in the mean sense - and construct a trial function \(\phi\) such that
\[(\phi,K_{V,Y}(-\kappa_{0}^{2})\phi)-\|\phi\|^{2}>0. \tag{3.8}\]
The first term on the left-hand side of (3.8) can be rewritten as
\[\int_{\mathbb{R}^{\nu}\times\mathbb{R}^{\nu}}\overline{\phi}(x)V_{Y}^{1/2}(x)( -\Delta+\kappa_{0}^{2})^{-1}(x,x^{\prime})V_{Y}^{1/2}(x^{\prime})\phi(x^{ \prime})\,\mathrm{d}x\,\mathrm{d}x^{\prime},\]
or more explicitly using the operators (3.7) as
\[\sum_{i,j\in\mathbb{Z}}\int_{B_{\rho}(y_{i})\times B_{\rho}(y_{j})}\overline{\phi} (x)V_{i}^{1/2}(x)(-\Delta+\kappa_{0}^{2})^{-1}(x,x^{\prime})V_{j}^{1/2}(x^{ \prime})\phi(x^{\prime})\,\mathrm{d}x\,\mathrm{d}x^{\prime}.\]
Denote now by \(\phi_{0}\) the generalized eigenfunction of \(K_{V,Y_{0}}(-\kappa_{0}^{2})\) corresponding to the spectral threshold of the straight chain \(Y_{0}\); as this function is the product of the corresponding generalized eigenfunction of \(H_{V,Y_{0}}\) and \(V_{Y}^{1/2}\), it is periodic and without loss of generality we may suppose that it is real-valued and positive. What matters are the restrictions of \(\phi_{0}\) to the balls supporting the potential, \(\phi_{0,i}=\phi_{0}\upharpoonright B_{\rho}(y_{i})\), which are shifted copies of the same function, \(\phi_{0,i}(\xi)=\phi_{0}(\xi+y_{i})\) for \(\xi\in B_{\rho}(0)\). Recall that we identified \(Y_{0}\) with the set \(\{(ia,0):i\in\mathbb{Z}\}\subset\mathbb{R}\times\mathbb{R}^{\nu-1}\); then the functions \(\phi_{0,i}\) are even with respect to the ball centers in the direction of the chain axis, and have rotational symmetry with respect to it (for \(\nu=2\) this means being even also transversally), in other words, \(\phi_{0,i}(-\xi)=\phi_{0,i}(\xi)\) holds for \(\xi\in B_{\rho}(0)\).
As it is common in such situations [1], we use the function \(\phi_{0}\) as the starting point for construction of the sought trial function. Using it we construct for a given \(Y\) the function \(\phi_{0}^{Y}\) as an 'array of beads': its values in \(B_{\rho}(y_{i})\) would coincide with \(\phi_{0,i}\) rotated in such a way that the axis of \(\phi_{0,i}\) agrees with the tangent to \(\Gamma\) at the point \(y_{i}\); for \(Y=Y_{0}\) we drop the superscript \(Y\). To make such a function an \(L^{2}\) element, we need a suitable family of mollifiers; we choose it in the form
\[h_{n}(x)=\frac{1}{2n+1}\,\chi_{M_{n}}(x),\quad n\in\mathbb{N}. \tag{3.9}\]
where \(M_{n}:=\{x:\ \mathrm{dist}(x,\Gamma\upharpoonright[-(2n+1)a/2,(2n+1)a/2]) \leq\rho\}\) is a \(2\rho\)-wide closed tubular neighborhood of the \((2n+1)a\)-long arc of \(\Gamma\). We have to ensure that the positive contribution from such a cut-off can be made arbitrarily small. This is indeed the case:
**Lemma 3.7**.: \((h_{n}\phi_{0}^{Y},K_{V,Y}(-\kappa_{0}^{2})h_{n}\phi_{0}^{Y})-\|h_{n}\phi_{0} ^{Y}\|^{2}=\mathcal{O}(n^{-1})\) _as \(n\to\infty\)._
Proof.: Since \(\phi_{0}\) is periodic along the chain, one obtains for the second term the following expression,
\[\|h_{n}\phi_{0}\|^{2}=\frac{1}{(2n+1)^{2}}\int_{B_{\rho}(0)}|\phi_{0}(x)|^{2} \,\mathrm{d}x.\]
Using the fact that the function (3.9) is constant on its support, we get
\[(h_{n}\phi_{0}^{Y},K_{V,Y}(-\kappa_{0}^{2})h_{n}\phi_{0}^{Y})=\frac{1}{(2n+1)^{2} }\sum_{|i|\leq n}\sum_{|j|\leq n}\int_{B_{\rho}(y_{i})}\mathrm{d}x\,\phi_{0,i}^{ Y}(x)\]
\[\times\int_{B_{\rho}(y_{j})}V_{Y}^{1/2}(x)\,(-\Delta+\kappa_{0}^{2})^{-1}(x,x^{\prime})\,V_{Y}^{1/2}(x^{\prime})\phi_{0,j}^{Y}(x^{\prime})\,\mathrm{d}x^{\prime}\]
\[=\frac{1}{(2n+1)^{2}}\sum_{|i|\leq n}\sum_{|j|\leq n}\int_{B_{\rho}(0)}\mathrm{ d}\xi\,\phi_{0}(\xi)\]
\[\times\int_{B_{\rho}(0)}V^{1/2}(\xi)\,(-\Delta+\kappa_{0}^{2})^{-1}(\xi,\xi^{\prime}+y_{j}-y_{i})\,V^{1/2}(\xi^{\prime})\phi_{0}(\xi^{\prime})\,\mathrm{d}\xi^{\prime}\]
By assumption, \(\phi_{0}\) is the generalized eigenfunction of \(K_{V,Y_{0}}(-\kappa_{0}^{2})\) corresponding to the spectral threshold which makes it possible to rewrite the right-hand side of the last relation as
\[\frac{1}{(2n+1)^{2}}\int_{B_{\rho}(0)}|\phi_{0}(x)|^{2}\,\mathrm{d}x-\frac{1} {(2n+1)^{2}}\sum_{|i|\leq n}\int_{B_{\rho}(0)}\mathrm{d}\xi\,\phi_{0}(\xi)\]
\[\times\sum_{|j|>n}\int_{B_{\rho}(0)}V^{1/2}(\xi)\,(-\Delta+\kappa_{0}^{2})^{-1}(\xi,\xi^{\prime}+y_{j}-y_{i})\,V^{1/2}(\xi^{\prime})\phi_{0}(\xi^{\prime})\,\mathrm{d}\xi^{\prime}\]
For a straight chain we have \(|y_{j}-y_{i}|=a|j-i|\), for a curved one the distance under the assumptions of Proposition 3.3 also increases linearly as \(|j-i|\to\infty\). Given the fact that the resolvent kernel is asymptotically exponentially decreasing, we see that the second sum converges for a fixed \(Y\) and has a bound independent of \(i\) which yields the sought result.
In view of the lemma and relation (3.8) one has therefore to check that
\[(h_{n}\phi_{0}^{Y},K_{V,Y}(-\kappa_{0}^{2})h_{n}\phi_{0}^{Y})-(h_{n}\phi_{0},K_{V,Y_{0}}(-\kappa_{0}^{2})h_{n}\phi_{0})>0\]
holds for all \(n\) large enough, or equivalently, that
\[\lim_{n\to\infty}\Big{[}(h_{n}\phi_{0}^{Y},K_{V,Y}(-\kappa_{0}^{2})h_{n}\phi_{0}^{Y})-(h_{n}\phi_{0},K_{V,Y_{0}}(-\kappa_{0}^{2})h_{n}\phi_{0})\Big{]}>0.\]
In view of (3.7), in turn, this will be true if we prove that
\[(\phi_{0},[K_{V,Y}^{(i,j)}(-\kappa^{2})-K_{V,Y_{0}}^{(i,j)}(-\kappa^{2})]\phi_ {0})\geq 0 \tag{3.10}\]
holds for any \(\kappa>0\) and all \(i,j\in\mathbb{Z}\), being _positive_ for some of them. In the last relation we allow for an abuse of notation, writing the first part as the matrix element between the functions \(\phi_{0}\), keeping in mind, of course, that the axes of its components in \(B_{\rho}(y_{i})\) and \(B_{\rho}(y_{j})\) are in general not parallel. Naturally, the left-hand side of (3.10) is zero for \(i=j\) or for \(i,j\) such that the segment of \(\Gamma\) between \(y_{i}\) and \(y_{j}\) is straight. If \(Y\neq Y_{0}\), however, there is a pair of indices for which this is not the case, \(|y_{i}-y_{j}|<|i-j|a\), in fact, infinitely many such pairs. Were the potential a point interaction as in [10], the result would follow immediately from the monotonicity of the
resolvent kernel, but the problem is more subtle here because bending of the chain, even a weak one, may cause some distances between points of potential supports outside the ball centers to _increase_.
Denoting the resolvent kernel by \(G_{i\kappa}\) for the sake of brevity, we can write the left-hand side of (3.10) explicitly as
\[\int_{B_{\rho}(0)}\int_{B_{\rho}(0)}\phi_{0}(\xi)V^{1/2}(\xi)\big{[}G_{i \kappa}(y_{i}-y_{j}+\xi-\xi^{\prime})-G_{i\kappa}(y_{i}^{(0)}\!-y_{j}^{(0)}\!+ \xi-\xi^{\prime})\big{]}\]
\[\times V^{1/2}(\xi^{\prime})\phi_{0}(\xi^{\prime})\mathrm{d}\xi\,\mathrm{d}\xi^ {\prime}\]
\[=\frac{1}{2}\int_{B_{\rho}(0)}\int_{B_{\rho}(0)}\phi_{0}(\xi)V^{1/2}(\xi) \big{[}G_{i\kappa}(y_{i}-y_{j}+\xi-\xi^{\prime})-G_{i\kappa}(y_{i}^{(0)}\!-y_ {j}^{(0)}\!+\xi-\xi^{\prime})\]
\[+G_{i\kappa}(y_{i}-y_{j}-\xi+\xi^{\prime})-G_{i\kappa}(y_{i}^{(0)}\!-y_{j}^{(0)}\!-\xi+\xi^{\prime})\big{]}V^{1/2}(\xi^{\prime})\phi_{0}(\xi^{\prime})\,\mathrm{d}\xi\,\mathrm{d}\xi^{\prime},\]
where we have used the fact that \(\phi_{0}(\xi)V^{1/2}(\xi)=\phi_{0}(-\xi)V^{1/2}(-\xi)\). The integration over \(\xi\) can be split into the transversal and longitudinal part with respect to the vector \(y_{i}-y_{j}\), namely \(\int_{B_{\rho}(0)}\mathrm{d}\xi=\int_{-\rho}^{\rho}\mathrm{d}\xi_{\perp}\int_{-\sqrt{\rho^{2}-\xi_{\perp}^{2}}}^{\sqrt{\rho^{2}-\xi_{\perp}^{2}}}\mathrm{d}\xi_{||}\), and similarly for \(\xi^{\prime}\). What is important is the behavior of the square bracket under the longitudinal integration. We observe that not only is the function \(G_{i\kappa}(\cdot)\) convex, but the same is true for \(G_{i\kappa}(|y_{i}-y_{j}|+\cdot)-G_{i\kappa}(|y_{i}^{(0)}\!-y_{j}^{(0)}|+\cdot)\) as long as \(|y_{i}-y_{j}|<|y_{i}^{(0)}\!-y_{j}^{(0)}|\), and in that case the square bracket can be estimated by virtue of Jensen's inequality from below by
\[G_{i\kappa}(|y_{i}-y_{j}|)-G_{i\kappa}(|y_{i}^{(0)}\!-y_{j}^{(0)}|)>0.\]
In combination with the positivity of \(\phi_{0}V^{1/2}\) this proves that the left-hand side of (3.10) is positive whenever \(|y_{i}-y_{j}|<|i-j|a\); this concludes the proof of the theorem.
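The convexity-and-Jensen step used above is elementary but easy to misread. The following numerical illustration - not part of the proof - takes \(\nu=2\), where \(G_{i\kappa}(r)=\frac{1}{2\pi}K_{0}(\kappa r)\), and checks that the symmetrized square bracket is indeed bounded from below by \(G_{i\kappa}(|y_{i}-y_{j}|)-G_{i\kappa}(|y_{i}^{(0)}\!-y_{j}^{(0)}|)>0\) once the centre distance shrinks; the values of \(\kappa\), \(\rho\) and the two distances are assumptions chosen only for the sketch.

```python
# Numerical illustration (not part of the proof): in two dimensions the free
# resolvent kernel is G(r) = K_0(kappa*r)/(2*pi).  We check that the
# symmetrized square bracket stays above G(d_bent) - G(d_straight) > 0
# whenever the centre distance shrinks; all numbers are illustrative.
import numpy as np
from scipy.special import k0

kappa, rho = 1.0, 0.4            # assumed spectral parameter and ball radius
d_straight = 6.0                 # |y_i^(0) - y_j^(0)| for the straight chain
d_bent = 5.4                     # the bend brings the centres closer

def G(r):
    return k0(kappa * r) / (2 * np.pi)

s = np.linspace(-2 * rho, 2 * rho, 1001)   # longitudinal offsets xi_|| - xi_||'
bracket = 0.5 * (G(d_bent + s) - G(d_straight + s)
                 + G(d_bent - s) - G(d_straight - s))

print(bool((bracket >= G(d_bent) - G(d_straight) - 1e-12).all()))  # Jensen bound
print(bool((bracket > 0).all()))                                   # positivity
```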
**Remark 3.8**.: The symmetry of the potential \(V\) played an important role in the proof. In its absence the above argument would work only if the deformation of \(\Gamma\) is strong enough to diminish _all_ the distances between the points of the considered pairs of balls, for instance, if \(|y_{i}-y_{i+1}|<a-2\rho\) holds for neighboring balls. Such a condition is clearly not optimal and the harder to fulfill the larger the ratio \(\frac{\rho}{a}\) is; we postpone the discussion of this question to a later publication.
## 4. Shrinking the potential
If the potential \(V\) is strongly localized one may think about replacing \(H_{V,Y}\) by a singular Schrodinger operator. Properties of point-interaction Hamiltonians are well known and nicely summarized in the classical monograph [1]. These operators can be introduced by several equivalent ways; one of them is based on self-adjoint extensions, starting from restriction of the Laplacian to \(C_{0}^{\infty}(\mathbb{R}^{\nu}\setminus Y)\). In dimensions \(\nu=2,3\) the resulting operator
has deficiency indices \((\#Y,\#Y)\)[1]; among its numerous self-adjoint extensions one focuses on the _local_ ones characterized - in the present situation when all the interactions are the same - by the boundary conditions
\[L_{1}(\psi,y_{j})-\alpha L_{0}(\psi,y_{j})=0,\quad\alpha\in\mathbb{R}, \tag{4.1}\]
coupling the generalized boundary values
\[L_{0}(\psi,y) :=\lim_{|x-y|\to 0}\,\frac{\psi(x)}{\phi_{\nu}(x\!-\!y)},\] \[L_{1}(\psi,y) :=\lim_{|x-y|\to 0}\bigl{[}\psi(x)-L_{0}(\psi,y)\,\phi_{\nu}(x-y) \bigr{]},\]
where \(\phi_{\nu}\) are the appropriate fundamental solutions,
\[\phi_{2}(x)=-\frac{1}{2\pi}\,\ln|x|,\quad\phi_{3}(x)=\frac{1}{4\pi|x|},\]
for \(\nu=2,3\), respectively. These point interactions are non-additive perturbations of the free Hamiltonian; the latter obviously corresponds to \(\alpha=\infty\). Following [1] we employ the symbol \(-\Delta_{\alpha,Y}\) for the singular operators defined by boundary conditions (4.1).
Approximation of point interactions in dimensions \(\nu=2,3\) is not an easy matter; it is well known that such a limit is generically trivial. There are nevertheless situations when one can make sense of such a limit:
**Proposition 4.1**.: _Let \(\nu=3\) and assume that \(H_{V}\) has a zero-energy resonance with which one can associate a solution \(f\in L^{2}_{\rm loc}(\mathbb{R}^{\nu})\) of the equation \(V^{1/2}(-\Delta)^{-1}V^{1/2}f=f\), then the family of operators_
\[H_{V_{\varepsilon},Y}:=-\Delta+\frac{\mu(\varepsilon)}{\varepsilon^{2}}\,\sum _{i}V\bigl{(}\tfrac{\cdot-y_{i}}{\varepsilon}\bigr{)},\]
_where \(\mu\) is real analytic in the vicinity of zero and such that \(\mu(0)=1\), converges in the norm resolvent sense to \(-\Delta_{\alpha,Y}\) with \(\alpha:=-\mu^{\prime}(0)|(V_{Y}^{1/2},f)|^{-2}\)._
Proof.: Since the points of \(Y\) do not accumulate, \(\inf_{i\neq j}|y_{i}-y_{j}|\geq 2a>0\), the claim follows from the analysis presented in [1], in particular, from Theorems II.1.2.1 and III.1.2.1 there.
In the two-dimensional case zero-energy resonances of \(H_{V}\) are again crucial. The scaled-potential approximation is worked out in [1] for single point interaction but it can be extended to more complicated sets \(Y\) similarly as for \(\nu=3\); the resulting parameter \(\alpha\) again depends on how exactly the coupling constant is scaled in the vicinity of the resonance.
For point-interaction Hamiltonians \(-\Delta_{\alpha,Y}\) one can also ask about the implications that a nontrivial geometry would have for the spectrum. What one finds in this case is consistent with the results of the previous section: a bend or a local deformation of a straight periodic array, which shortens the Euclidean distances, lowers the spectral threshold, and if \(Y\) is asymptotically
straight in a suitable sense so that the essential spectrum is preserved, isolated eigenvalues emerge again [10]. At the same time, the approximation of \(-\Delta_{\alpha,Y}\) by Schrodinger operators with scaled potential does not require spherical symmetry of the potential \(V\) which, similarly to Remark 3.8, gives a hint that the assumptions of Theorem 3.6 might be weakened.
## 5. Ground state optimization
Let us return to arrays of regular potentials, this time finite ones, and change slightly the setting. We consider the two-dimensional situation and fix the curve \(\Gamma\) which will be now a _circle_ of radius \(R\) on which we place the centers of the disks \(B_{\rho}(y_{i})\); without loss of generality we may identify the circle center with the origin of the coordinates. The only restriction imposed is that they must not overlap, that is, \(\rho\leq R\sin\frac{\pi}{N}\), where \(N:=\#Y\).
We are interested in the configuration which makes the principal eigenvalue of \(H_{V,Y}\) maximal. It appears that this happens if \(Y\) has the full symmetry with respect to the discrete rotations:
**Theorem 5.1**.: _Up to rotations, \(\epsilon_{1}(H_{V,Y})=\inf\sigma(H_{V,Y})\) is uniquely maximized by the configurations in which all the neighboring points of \(Y\) have the same angular distance \(\frac{2\pi}{N}\)._
Proof.: The potential is compactly supported, so the negative spectrum of \(H_{V,Y}\) is now discrete and finite, and the ground state \(\epsilon_{1}(H_{V,Y})\) is a simple eigenvalue. We denote by \(Y_{\rm sym}\) the symmetric array in which all the neighboring points have the same angular distances. The real-valued eigenfunction \(\psi_{\rm sym}\) associated with \(\epsilon_{1}(H_{V,Y_{\rm sym}})\) has the appropriate symmetry: in polar coordinates we have \(\psi_{\rm sym}(r,\varphi)=\psi_{\rm sym}(r,\varphi+\frac{2\pi n}{N})\) for any \(n\in\mathbb{Z}\).
We use the Birman-Schwinger principle again and denote by \(\phi_{\rm sym}\) the eigenfunction corresponding to the largest eigenvalue of \(K_{V,Y_{\rm sym}}(\epsilon_{\rm sym})\), where \(\epsilon_{\rm sym}=\inf\sigma(H_{V,Y_{\rm sym}})\). It also has the symmetry with respect to rotations on multiples of the angle \(\frac{2\pi}{N}\), and as in the proof of Theorem 3.6 we may suppose that it is real-valued and positive. Referring to the monotonicity of \(K_{V,Y}(\cdot)\) stated in Proposition 3.4, in order to show that \(\epsilon_{1}(H_{V,Y})<\epsilon_{1}(H_{V,Y_{\rm sym}})\) holds whenever \(Y\neq Y_{\rm sym}\), modulo discrete rotations, one has to check the inequality \(\max\sigma(K_{V,Y}(\epsilon_{\rm sym}))>\max\sigma(K_{V,Y_{\rm sym}}(\epsilon_ {\rm sym}))\), and to this goal it is sufficient to find a trial function \(\phi\) such that
\[(\phi,K_{V,Y}(-\kappa_{0}^{2})\phi)-\|\phi\|^{2}>0,\quad\kappa_{0}=\sqrt{- \epsilon_{\rm sym}}. \tag{5.1}\]
A general configuration \(Y\) of points on the circle is characterized by the family of angles \(\theta_{i},\,i=1,\ldots,N\), satisfying \(\sum_{i=1}^{N}\theta_{i}=2\pi\), as sketched in Fig. 1 for \(N=5\). As before we construct the trial function \(\phi_{Y}\) as an 'array of beads'; we start from the restriction of \(\phi_{\rm sym}\) to the ball \(B_{\rho}(y_{1})\), calling it \(\phi_{\rm sym,1}\), and use it to create \(\phi_{\rm sym,j}\), \(j=2,\ldots,N\), by rotating this function by the angle \(\sum_{i=1}^{j-1}\theta_{i}\) around the origin. For \(Y=Y_{0}\) the left-hand side of (5.1)
vanishes by construction, hence it is sufficient to prove that
\[(\phi_{Y},K_{V,Y}(-\kappa^{2})\phi_{Y})-(\phi_{\mathrm{sym}},K_{V,Y_{0}}(-\kappa^ {2})\phi_{\mathrm{sym}})>0\]
holds for any \(\kappa>0\), in particular, for \(\kappa=\kappa_{0}\), or explicitly
\[\frac{1}{2\pi}\sum_{i,j=1}^{N}\Big{\{} \int_{B_{\rho}(0)}\int_{B_{\rho}(0)}\phi_{\mathrm{sym}}(\xi)V^{1/ 2}(\xi)\,K_{0}(\kappa|y_{i}+\xi-y_{j}-\xi^{\prime}|)\] \[\times V^{1/2}(\xi^{\prime})\phi_{\mathrm{sym}}(\xi^{\prime})\, \mathrm{d}\xi\,\mathrm{d}\xi^{\prime}\] \[-\int_{B_{\rho}(0)}\int_{B_{\rho}(0)}\phi_{\mathrm{sym}}(\xi)V^{ 1/2}(\xi)\,K_{0}(\kappa|y_{i}^{(0)}+\xi-y_{j}^{(0)}-\xi^{\prime}|)\] \[\times V^{1/2}(\xi^{\prime})\phi_{\mathrm{sym}}(\xi^{\prime})\, \mathrm{d}\xi\,\mathrm{d}\xi^{\prime}\Big{\}}>0\]
We denote \(d_{ij}:=|y_{i}-y_{j}|\) and \(d_{ij}^{(0)}:=|y_{i}^{(0)}-y_{j}^{(0)}|\) and write the first part of the above expression as \(\sum_{i,j=1}^{N}\tilde{G}_{i\kappa}(d_{ij})\), in the second one \(d_{ij}\) is replaced by \(d_{ij}^{(0)}\); the sought inequality then takes the form
\[\sum_{i,j=1}^{N}\tilde{G}_{i\kappa}(d_{ij})>\sum_{i,j=1}^{N}\tilde{G}_{i\kappa} (d_{ij}^{(0)}),\]
and rearranging the summation order, we have to check that
\[F(d_{ij}):=\sum_{m=1}^{[N/2]}\sum_{|i-j|=m}\big{[}\tilde{G}_{i\kappa}(d_{ij})- \tilde{G}_{i\kappa}(d_{ij}^{(0)})\big{]}>0\]
holds for every family \(\{d_{ij}\}\) which is not congruent with \(\{d_{ij}^{(0)}\}\).
Figure 1. To the proof of Theorem 5.1
The resolvent kernel contained in the expression is a convex function of its argument, and since \(|\xi-\xi^{\prime}|<2\rho<d_{ij}\), the function \(d_{ij}\mapsto|y_{i}+\xi-y_{j}-\xi^{\prime}|\) is increasing and concave. Consequently, \(d_{ij}\mapsto K_{0}(\kappa|y_{i}+\xi-y_{j}-\xi^{\prime}|)\) is convex again for any \(\xi,\xi^{\prime}\in B_{\rho}(0)\), and being integrated with the positive weight the result will be again convex. This makes it possible to apply Jensen's inequality which yields
\[F(d_{ij})\geq\sum_{m=1}^{[N/2]}\nu_{m}\Big{[}\tilde{G}_{i\kappa}\Big{(}\frac{1}{\nu_{m}}\sum_{|i-j|=m}d_{ij}\Big{)}-\tilde{G}_{i\kappa}\big{(}d_{i,i+m}^{(0)}\big{)}\Big{]}, \tag{5.2}\]
where \(\nu_{m}\) is the number of distinct line segments connecting the points \(y_{i}\) and \(y_{i+m}\) for \(m=1,\dots,[N/2]\), that is, \(\nu_{m}=N\) except in the case when \(N\) is even and \(m=\frac{1}{2}N\), where \(\nu_{m}=\frac{1}{2}N\).
To prove that the right-hand side of (5.2) is positive we use the fact the convexity is not the only property which \(\tilde{G}_{i\kappa}(\cdot)\) inherited from the resolvent kernel; since \(d_{ij}\mapsto|y_{i}+\xi-y_{j}-\xi^{\prime}|\) is increasing, the integrated function is (strictly) decreasing which means that it is only necessary to check that
\[\frac{1}{\nu_{m}}\sum_{|i-j|=m}d_{ij}<d_{i,i+m}^{(0)} \tag{5.3}\]
for any fixed \(i\). Denoting \(\beta_{ij}=\sum_{k=i}^{j-1}\theta_{k}\), we have \(d_{ij}=2R\sin\frac{1}{2}\beta_{ij}\) and \(d_{i,i+m}^{(0)}=2R\sin\frac{\pi m}{N}\), and since the sine is concave in \((0,\pi)\), we can use Jensen's inequality for concave functions which gives
\[\frac{1}{\nu_{m}}\sum_{|i-j|=m}2R\sin\frac{1}{2}\beta_{ij}<2R\sin\Big{(}\frac{1}{\nu_{m}}\sum_{|i-j|=m}\frac{1}{2}\beta_{ij}\Big{)}=2R\sin\frac{\pi m}{N}=d_{i,i+m}^{(0)}\]
for those families \(\{d_{ij}\}\) of circle chords which are not congruent with \(\{d_{ij}^{(0)}\}\); this concludes the proof.
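As a quick sanity check of the chord estimate (5.3), the following sketch samples random angular gaps on the circle and verifies Jensen's inequality for the chord lengths; the values of \(N\), \(R\) and the number of trials are arbitrary assumptions made only for the illustration.

```python
# Sanity check of (5.3): for random angular gaps theta_i on a circle of radius
# R, the mean chord length at index separation m never exceeds the chord of the
# regular polygon.  N, R and the number of trials are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, R = 7, 1.0
for _ in range(1000):
    theta = rng.random(N)
    theta *= 2 * np.pi / theta.sum()            # gaps sum to 2*pi
    pos = np.cumsum(theta)                      # angular positions of y_1..y_N
    for m in range(1, N // 2 + 1):
        beta = np.array([(pos[(i + m) % N] - pos[i]) % (2 * np.pi)
                         for i in range(N)])    # arcs beta_{i,i+m}
        d = 2 * R * np.sin(beta / 2)            # chord lengths d_{i,i+m}
        d0 = 2 * R * np.sin(np.pi * m / N)      # regular-polygon chord
        # averaging over i coincides with averaging over the nu_m distinct segments
        assert d.mean() <= d0 + 1e-12
print("inequality (5.3) holds for all sampled configurations")
```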
**Remark 5.2**.: For simplicity, we have formulated the claim and its proof in the two-dimensional setting but the argument extends easily to arrays \(Y\subset\mathbb{R}^{3}\) situated on a planar circle. Note also that the symmetry of the potential \(V\) can be abandoned as long as all the potential wells involved can be obtained one from another by rotations.
Beyond this simple extension there are more complicated questions. To begin with, the maximizing configuration in Theorem 5.1 places the disk centers at the vertices of a regular polygon with perimeter \(2NR\sin\frac{\pi}{N}\). It is then natural to ask about the maximization within a wider class of sets \(Y\), in a setting analogous to that used in Sec. 3.
**Conjecture 5.3**.: _Suppose that the points of \(Y\) are on a loop \(\Gamma\) of a fixed length in \(\mathbb{R}^{\nu},\,\nu=2,3\), equidistantly in the arc length variable, and the balls \(B_{\rho}(y_{i})\) do not overlap. Then \(\epsilon_{1}(H_{V,Y})=\inf\sigma(H_{V,Y})\) is maximized, uniquely up to Euclidean transformations, by a planar regular polygon of \(\#Y\) vertices._
The next question is much harder. We again fix the manifold on which points of \(Y\) are allowed to be, but this time not as a curve, but as a sphere in \(\mathbb{R}^{3}\), and ask about the configurations optimizing the ground state energy of \(H_{V,Y}\). This problem has the flavor of the celebrated Thomson problem [11, 12], not fully solved after more than a century of efforts, except that we seek a maximizing, not minimizing configuration. What one can realistically hope for is the solution in particular cases of low \(N=\#Y\):
**Conjecture 5.4**.: _Let the points of \(Y\) be arranged on a sphere in such a way that the balls \(B_{\rho}(y_{i})\) do not overlap. Then \(\epsilon_{1}(H_{V,Y})=\inf\sigma(H_{V,Y})\) is maximized, uniquely up to Euclidean transformations, by the following five 'equilateral' configurations:_
* three _simplices_, with \(N=2\) (a pair of antipodal points), \(N=3\) (equilateral triangle), and \(N=4\) (tetrahedron),
* _octahedron_ with \(N=6\),
* _icosahedron_ with \(N=12\).
Both the conjectures we have stated are motivated by the fact that in the singular limit discussed in Section 4 the corresponding optimisation results are known to be valid as demonstrated in [10].
## 6. Conclusions
The question about spectral properties of Schrodinger operators with potentials mixing a local order with a nontrivial geometry is rich and the current discussion just scratched the surface while suggesting various open problems. One may ask, for instance, about finer properties of the curvature-induced spectrum in relation to the geometry of the array and the shape of the single cell potential, such as the spectral counting function, the weak-deformation asymptotic behavior of the ground state, etc. At the same time, there are numerous generalizations one can think of. In addition to the asymmetry of \(V\) mentioned in Remarks 3.8 and 5.2, they include sign-changing potentials, replacing the chain \(Y\) by more complicated lattices, or a quasi-periodic arrangement of the building blocks.
Another question of interest concerns the influence of a magnetic field. The two-dimensional Landau Hamiltonian perturbed by a straight periodic array of point interactions is known to have the spectrum containing the unperturbed Landau levels and absolutely continuous bands between them [1]. The point part is not likely to persist if the singular interactions are replaced by regular potentials, but the absolute continuity is expected to be preserved. One can again ask whether some geometric perturbations will give rise to a discrete spectrum. From the point of view of applications, it is also important to find out whether a part of the absolutely continuous spectrum can survive random perturbations.
Conjectures 5.3 and 5.4 are not the only optimisation problems one can address in systems with finite numbers of potential wells. As long as we
suppose that the balls supporting the individual potentials do not overlap, it is also natural to ask about the configurations that _minimize_ the ground state. Under our assumptions about the potential \(V\) the answer can be easily found for a few smallest values of \(N\), for larger ones - or with additional geometric constraints - the task may be considerably more difficult.
This list is no doubt incomplete and could go on, but we prefer to stop here and leave the continuation open.
### Acknowledgements
The work was supported by the Czech Science Foundation within the project 21-07129S. The author is grateful to Vladimir Lotoreichik for useful comments and pointing out a flaw in the first version of the proof of Theorem 5.1.
|
2303.07975 | Software-based security approach for networked embedded devices | As the Internet of Things (IoT) continues to expand, data security has become
increasingly important for ensuring privacy and safety, especially given the
sensitive and, sometimes, critical nature of the data handled by IoT devices.
There exist hardware-based trusted execution environments used to protect data,
but they are not compatible with low-cost devices that lack hardware-assisted
security features. The research in this paper presents software-based
protection and encryption mechanisms explicitly designed for embedded devices.
The proposed architecture is designed to work with low-cost, low-end devices
without requiring the usual changes on the underlying hardware. It protects
against hardware attacks and supports runtime updates, enabling devices to
write data in protected memory. The proposed solution is an alternative data
security approach for low-cost IoT devices without compromising performance or
functionality. Our work underscores the importance of developing secure and
cost-effective solutions for protecting data in the context of IoT. | José Ferreira, Alan Oliveira, André Souto, José Cecílio | 2023-03-14T15:30:37Z | http://arxiv.org/abs/2303.07975v1 | # Software-based security approach for networked embedded devices
###### Abstract
As the Internet of Things (IoT) continues to expand, data security has become increasingly important for ensuring privacy and safety, especially given the sensitive and, sometimes, critical nature of the data handled by IoT devices. There exist hardware-based trusted execution environments used to protect data, but they are not compatible with low-cost devices that lack hardware-assisted security features. The research in this paper presents software-based protection and encryption mechanisms explicitly designed for embedded devices. The proposed architecture is designed to work with low-cost, low-end devices without requiring the usual changes on the underlying hardware. It protects against hardware attacks and supports runtime updates, enabling devices to write data in protected memory. The proposed solution is an alternative data security approach for low-cost IoT devices without compromising performance or functionality. Our work underscores the importance of developing secure and cost-effective solutions for protecting data in the context of IoT.
## 2 Threat Model and Assumptions
Defining the assumptions about the attacker's capabilities and goals, the system's components and interactions, and the security goals that need to be achieved is essential. In this work, we consider adversaries with the following capabilities:
* The adversaries have access to the device. They may modify the application code running on the device to read or change the data the application handles.
* The adversaries can sniff the network, modify messages exchanged between devices, and perform man-in-the-middle attacks.
* Software-based adversaries may be present on the device where the architecture will be deployed. Their goal may be to change the data available in memory and consequently control the entities that rely on data accuracy.
We assume that the SbS4NED is correctly installed on the devices by a trusted party. We also assume it is bug-free, encrypted, and working as expected. Therefore, the adversary cannot subvert the code or the verification process carried out by its components. The final assumption is that each device has mechanisms to compute the encryption key used to protect the local files where SbS4NED keys are stored.
## 3 Software-based security Architecture
As mentioned in the Introduction, this work aims to build a software-based tamper-resistant solution that protects the software and data in networked embedded devices. Driven by this goal, we design the proposed SbS4NED architecture.
Figure 1 shows a high-level description of the proposed architecture, where the SbS4NED Computing Module (SbS4NED_CM) runs inside the Gateway. It is responsible for monitoring applications running on the nodes connected to the Gateway. Each node will have an agent (SbS4NED_Agent) generating the signature of the application code running on the node and sending it to the SbS4NED_CM for code integrity checks. The SbS4NED_CM and the node's application code are encrypted to increase security and to offer more protection to the SbS4NED_CM internal processes. Moreover, the messages exchanged between SbS4NED_CM and its agent are also encrypted. Next, we describe the SbS4NED components:
* **App Manager** - It interacts with the applications deployed in the node and aims to perform application updates and send and receive data from the nodes.
* **Key Manager** - This component is responsible for managing (_i.e._, generating and renewing) the keys used internally by SbS4NED_CM and for external communication (with a SbS4NED_Agent running on the node). It uses the Diffie-Hellman (DH) key-exchange protocol for external communication to generate or renew the key.
* **Crypto** - Provides cryptography services inside the SbS4NED_CM and the SbS4NED_Agent. It can encrypt, decrypt, and compute the message authentication code (MAC). On the SbS4NED_CM side, the App Manager can also use this component to encrypt the app-compiled code before sending it to the node. This way, secure code update is ensured.
* **Integrity Checker** - Designed for memory integrity checking. It writes the data from the App Manager in the memory and holds the (randomized) position where it is written. The Integrity Checker is also responsible for remotely checking the integrity of the nodes' code.
* **Logger** - It is responsible for keeping the log files updated regarding the memory integrity state, which app the data came from, which nodes are connected, and any network activity that must be logged to easily detect if an attacker is trying to join the network or injecting any data on the network.
* It is used for executing the application code developed by the user. It offers an API to interact with the node's underlying software and hardware layers. All interaction must be done through this API to ensure, for instance, that the exchanged data is encrypted.
Figure 1: SbS4NED architecture.

Figure 2: SbS4NED modules.

The system architecture of SbS4NED_CM and its agents is illustrated in Figure 2. The figure provides an overview of the individual components and their interconnections within the system.
The Key Manager and Crypto components are the essential modules used in the SbS4NED_CM and also in its agents. These components are deployed on both sides of the system, providing the necessary encryption services and ensuring that data transmitted between the agents and SbS4NED_CM remain secure and confidential.
The App Manager, Integrity Checker, and Logger are part of SbS4NED_CM. These must be deployed in the Gateway. The App Manager plays a crucial role in dealing with the application code deployed in the agents, providing the necessary services to manage, update, and configure them. On the other hand, the Integrity Checker ensures that the code and messages transmitted by the agent are authentic and have not been tampered with.
### Data Protection
Memory integrity is nowadays a crucial security concern. The integrity of the data stored in memory is essential to ensure the system's proper functioning and to prevent unauthorized access or manipulation of sensitive information. When data is written in the memory, two pieces of information are stored: the value (\(v\)) that can be accessed by any other external entity, such as an actuator, and the data integrity (\(I\)) needed to check the integrity of the data. Data integrity \(I\) is computed using the MAC, and its purpose is to ensure that the data in memory has not been tampered with or modified. The formula to compute it is \(I=\)MAC(\(v\oplus t\)), where \(t\) can be a timestamp or a random number known only by the integrity checker. The position of the value \(I\) in memory is arbitrary, and only the Integrity Checker knows where it is placed. The choice of MAC over hash functions is because MAC uses a secret key to generate the authentication code. Assuming that SbS4NED_CM has exclusive access to this private key, only authorized accesses to SbS4NED_CM can generate the correct MAC result. In contrast, anyone can compute the hash value by identifying the function used. A secret key prevents anyone from computing the hash value and faking the integrity data. This way, SbS4NED_CM provides a strong level of security against malicious attacks to the memory and an easy and efficient verification of the memory's integrity.
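A minimal sketch of the scheme just described is given below, with HMAC-SHA256 standing in for the MAC and a Python dictionary standing in for the protected memory; the key handling, the zero-padding used to form \(v\oplus t\) for unequal lengths, and the randomized slot naming are all simplifying assumptions.

```python
# Sketch of the MAC-based memory-integrity scheme: I = MAC(v XOR t), stored at
# a position known only to the Integrity Checker.  HMAC-SHA256 and the
# dictionary "memory" are stand-ins used only for illustration.
import hmac, hashlib, os, secrets

KEY = secrets.token_bytes(16)              # secret key of the Integrity Checker

def integrity_tag(value: bytes, t: bytes) -> bytes:
    xored = bytes(a ^ b for a, b in zip(value.ljust(len(t), b"\0"), t))
    return hmac.new(KEY, xored, hashlib.sha256).digest()   # I = MAC(v XOR t)

memory = {}                                # stand-in for the protected memory
t = os.urandom(16)                         # timestamp or random number
memory["value_slot"] = b"sensor=42.7"      # v, readable by external entities
tag_slot = "slot_" + secrets.token_hex(4)  # randomized position of I
memory[tag_slot] = integrity_tag(memory["value_slot"], t)

def check(mem, value_slot, tag_slot, t):
    return hmac.compare_digest(mem[tag_slot], integrity_tag(mem[value_slot], t))

print(check(memory, "value_slot", tag_slot, t))   # True
memory["value_slot"] = b"sensor=99.9"             # tampering with v...
print(check(memory, "value_slot", tag_slot, t))   # ...is detected: False
```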
### Application Protection
Besides confidentiality protection provided by encryption, the application code is locally stored in the node and decrypted only during execution. Application code also has integrity protection to ensure it remains unmodified even when other parties can access the node. To check the code's integrity, a challenge-response protocol is used, in which the SbS4NED_CM will send \(enc(t)\), an encrypted challenge to the node, where \(t\) can be a timestamp or a value randomly generated by the SbS4NED_CM. The node has to reply with the hash of the whole application code (\(ac\)) XOR-ed with \(t\), i.e., \(enc(hash(ac\oplus t))\). Then, the SbS4NED_CM checks the validity of the result. If the node fails the validation, SbS4NED_CM could force an update to restore the node application. If the node gives no response, SbS4NED_CM assumes that the node is lost or compromised and its data is dropped.
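The following sketch illustrates the challenge-response exchange described above. The toy XOR keystream behind enc()/dec() is only a placeholder for the actual lightweight cipher, and repeating \(t\) cyclically to form \(ac\oplus t\) is an assumption made solely to keep the example self-contained.

```python
# Sketch of the code-integrity challenge-response: CM sends enc(t), the node
# replies with enc(hash(ac XOR t)), and the CM verifies the answer.
import hashlib, os

SHARED_KEY = os.urandom(16)                    # established via the Key Manager
APP_CODE = b"\x90\x90 illustrative node application image"

def keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def enc(key: bytes, msg: bytes) -> bytes:      # toy XOR stream cipher, NOT secure
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

dec = enc                                      # XOR keystream is its own inverse

def xor_with(code: bytes, t: bytes) -> bytes:  # 'ac XOR t', t repeated cyclically
    return bytes(c ^ t[i % len(t)] for i, c in enumerate(code))

# --- SbS4NED_CM: issue the encrypted challenge enc(t) ---
t = os.urandom(32)
challenge = enc(SHARED_KEY, t)

# --- node: recover t and reply with enc(hash(ac XOR t)) ---
t_node = dec(SHARED_KEY, challenge)
response = enc(SHARED_KEY, hashlib.sha256(xor_with(APP_CODE, t_node)).digest())

# --- SbS4NED_CM: verify against its reference copy of the deployed code ---
expected = hashlib.sha256(xor_with(APP_CODE, t)).digest()
print("node code verified:", dec(SHARED_KEY, response) == expected)
```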
### Keys Renewal
A renewal cycle mechanism can extend the system's lifetime. Therefore, before any application update, the key used to encrypt the application code and for communication between the SbS4NED_CM and the node is renewed using a DH protocol. However, the application could run for a long time without needing an update. Therefore, the node is provided with an encrypted application code, and the SbS4NED_CM can initiate the DH key exchange with the node to generate a new key even when the update functionality is not invoked. The generated key will be used to renew the encryption of the application code and in further communication with the SbS4NED_CM.
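A toy version of such a DH-based renewal is sketched below; the group parameters are deliberately small illustrative choices (far too weak for deployment), and hashing the shared secret into a 256-bit key is an assumption about how the Crypto component would derive the renewed key.

```python
# Toy Diffie-Hellman key renewal between SbS4NED_CM and a node agent.
import hashlib, secrets

p = 2 ** 127 - 1      # toy Mersenne-prime modulus, NOT suitable for production
g = 3

a = secrets.randbelow(p - 2) + 1      # SbS4NED_CM secret exponent
b = secrets.randbelow(p - 2) + 1      # node (SbS4NED_Agent) secret exponent
A = pow(g, a, p)                      # sent CM -> node
B = pow(g, b, p)                      # sent node -> CM

k_cm = hashlib.sha256(pow(B, a, p).to_bytes(16, "big")).digest()
k_node = hashlib.sha256(pow(A, b, p).to_bytes(16, "big")).digest()
assert k_cm == k_node                 # shared key for re-encrypting the app code
```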
### Encryption Algorithms
Since the proposed architecture uses an encryption algorithm and targets low-end devices, the algorithms that can be used must be lightweight, including the cryptographic ones. Thus, this architecture will use NIST lightweight encryption algorithms to protect code and data during message exchange [13]. Although by the time of writing this manuscript, the final stage of the NIST competition comprises ten finalists, we are only interested in algorithms that can deal with stream encryption. The main reason is that we want the encrypted code and messages to have the same size as the original ones. Among the ten finalists, few support stream encryption natively. For the SbS4NED architecture, Xoodyak and ISAP schemes with keyed mode association are considered. Since the SbS4NED architecture is modular, other algorithms may also be used.
## 4 Proof of concept
To verify and characterize the proposed architecture, we are currently implementing it. We plan to deploy the architecture in a prototype, enabling system testing in a distributed environment. The prototype will consist of a Gateway on which the SbS4NED_CM will be running, with connections to multiple nodes where the SbS4NED agents and applications will be deployed. By conducting tests in a distributed environment, we can ensure that the architecture can effectively handle the communication and data exchange requirements between the Gateway and the nodes. Additionally, we will be able to identify potential issues or limitations during deployment, which will help us refine the architecture further.
During the early stages of designing the proposed architecture, we conducted experiments using two NIST lightweight encryption algorithms, ISAP and Xoodyak, to determine which would be more suitable for our research. Our experiment used a Raspberry Pi Model 3+ platform with a Quad-core @1.4GHz and 1GB LPDDR2 SRAM. We tested both algorithms for their execution time and memory usage using the same key length of 16 bytes, a nonce of 16 bytes, and file sizes from 1 to 65 kilobytes (kB) (Figure 3). The tests also included both the encryption (Figure 3a) and decryption (Figure 3b) processes.
Our experiment shows that Xoodyak was more suitable for our research than ISAP in terms of both average execution time and memory usage. The average execution time for Xoodyak was consistently lower, especially for larger file sizes (generally, Xoodyak is 30 to 60 times faster than ISAP with a maximum standard deviation of 0.004 ms). Moreover, both algorithms required about 370 kB of RAM to perform encryption and decryption tasks. It was noticed that file size does not affect the algorithm's memory usage. In summary, our experiments with ISAP and Xoodyak algorithms, conducted on a Raspberry Pi Model 3+ platform, helped us to determine which algorithm is more suitable for our research.
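A rough outline of the timing harness used in such a comparison might look as follows; the cipher call is only a placeholder, since invoking the actual Xoodyak or ISAP reference implementations (with the 16-byte key and nonce mentioned above) depends on the bindings available on the target platform.

```python
# Sketch of the benchmarking loop: encrypt buffers of 1-65 kB and record the
# average wall-clock time per call; placeholder_encrypt stands in for the
# lightweight AEAD under test.
import os, time

def placeholder_encrypt(key, nonce, data):     # stand-in for the cipher under test
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key, nonce = os.urandom(16), os.urandom(16)    # 16-byte key and nonce, as above
for size_kb in (1, 4, 16, 64):
    data = os.urandom(size_kb * 1024)
    t0 = time.perf_counter()
    for _ in range(100):
        placeholder_encrypt(key, nonce, data)
    avg_ms = (time.perf_counter() - t0) / 100 * 1e3
    print(f"{size_kb:2d} kB: {avg_ms:.3f} ms per call")
```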
## 5 Conclusion
This work proposes a software-based secure execution environment architecture that is lightweight, requires no hardware modifications and is resistant to the most common hardware attacks. Currently, this architecture is being implemented as a proof of concept. After the implementation, tests will be conducted to analyze its performance and characterize its efficiency.
## Acknowledgments
This work was supported by the LASIGE Research Unit (ref. UIDB/00408/2020 and ref. UIDP/00408/2020), and by the European Union's Horizon 2020 research and innovation programme under grant agreement No 871259 (ADMORPH project).
|
2306.13106 | Worst-case analysis of array beampatterns using interval arithmetic | Over the past decade, interval arithmetic (IA) has been utilized to determine
tolerance bounds of phased array beampatterns. IA only requires that the errors
of the array elements are bounded, and can provide reliable beampattern bounds
even when a statistical model is missing. However, previous research has not
explored the use of IA to find the error realizations responsible for achieving
specific bounds. In this study, the capabilities of IA are extended by
introducing the concept of ``backtracking'', which provides a direct way of
addressing how specific bounds can be attained. Backtracking allows for the
recovery of both the specific error realization and the corresponding
beampattern, enabling the study and verification of which errors result in the
worst-case array performance in terms of the peak sidelobe level. Moreover, IA
is made applicable to a wider range of arrays by adding support for arbitrary
array geometries with directive elements and mutual coupling, in addition to
element amplitude, phase, and positioning errors. Lastly, a simple formula for
approximate bounds of uniformly bounded errors is derived and numerically
verified. This formula gives insights into how array size and apodization
cannot reduce the worst-case peak sidelobe level beyond a certain limit. | Håvard Kjellmo Arnestad, Gábor Geréb, Tor Inge Birkenes Lønmo, Jan Egil Kirkebø, Andreas Austeng, Sven Peter Näsholm | 2023-06-19T12:57:17Z | http://arxiv.org/abs/2306.13106v1 | # Worst-case analysis of array beampatterns using interval arithmetic
###### Abstract
Over the past decade, interval arithmetic (IA) has been utilized to determine tolerance bounds of phased array beampatterns. IA only requires that the errors of the array elements are bounded, and can provide reliable beampattern bounds even when a statistical model is missing. However, previous research has not explored the use of IA to find the error realizations responsible for achieving specific bounds. In this study, the capabilities of IA are extended by introducing the concept of "backtracking", which provides a direct way of addressing how specific bounds can be attained. Backtracking allows for the recovery of both the specific error realization and the corresponding beampattern, enabling the study and verification of which errors result in the worst-case array performance in terms of the peak sidelobe level. Moreover, IA is made applicable to a wider range of arrays by adding support for arbitrary array geometries with directive elements and mutual coupling, in addition to element amplitude, phase, and positioning errors. Lastly, a simple formula for approximate bounds of uniformly bounded errors is derived and numerically verified. This formula gives insights into how array size and apodization cannot reduce the worst-case peak sidelobe level beyond a certain limit.
This article may be downloaded for personal use only. Any other use requires author and AIP Publishing prior permission. This article appears in The Journal of the Acoustical Society of America and may be found at [https://doi.org/10.1121/10.0019715](https://doi.org/10.1121/10.0019715). The current e-print was typeset by the authors and can differ in, e.g., pagination, reference numbering, and typographic detail. Copyright 2023 The Authors. This article is distributed under a Creative Commons Attribution (CC BY) License.
## I Introduction
The push for reliable, high-performance sonar array systems raises a need for analysis methods that can account for various tolerances in manufacturing and data processing. These tolerances relate to deviations from the sonar specifications such as: manufacturing imperfections, calibration tolerances, electronic processing limitations, varying environmental factors, and component wear and tear. Typically, such deviations manifest themselves as errors in the transducer element amplitude, phase response, or element mutual coupling (also referred to as cross-talk).
The beampattern of an ideally calibrated array is a function of the array geometry and electronic processing. The beampattern relates to the array's lateral resolution given by its mainlobe width. The contrast depends on the sidelobe levels, and a low contrast (i.e., high sidelobe levels) may impede target detection. This article deals with arrays subject to bounded errors and their associated beampattern bounds. These bounds determine the limits within which all possible beampattern realizations exist, and they must be constrained if one assumes that the errors are bounded. This problem is tackled using the mathematical technique of interval arithmetic (IA). The theory mainly applies to systems of small relative bandwidth, with sonars in mind.
An analysis of array errors may be carried out statistically, as has been done since the early phased-array systems [1], and in acoustic arrays [2; 3]. The common assumption is that the relevant errors are independent and identically distributed across elements, typically Gaussian, from which one can derive that the beampattern magnitude follows a Rician distribution, or more generally a Beckmann distribution [4]. A key finding from the statistical analysis is the expression for the _expected beampattern_[5], where a constant term due to errors or failed elements [5] may swamp the desired features of the nominal beampattern, unless the array is exceptionally well calibrated. It should be noted that the expected beampattern is not a proper beampattern, but rather the statistical average of all possible realizations.
Although Gaussian error distributions may be a reasonable assumption in many situations, it can also give misleading results. When statistical assumptions, such as error independence, does not hold, sidelobe levels that should be statistically impossible may occur frequently. Moreover, a comprehensive statistical description may not be available or lead to an intractable formulation. Also, with an unbounded distribution, the beampattern will in principle be unbounded too. For these reasons, the
statistical methods typically do not provide rigorous and finite upper and lower bounds on the beampattern.
Interval methods are generally suitable in various contexts that involve quantities that are bounded [7, 8]. This is a weak restriction, as the quantities need not be precisely known or representable to be enclosed by an interval and give reliable results. In contrast to statistical methods, IA provides finite beampattern bounds given finite error bounds and weaker assumptions.
The upper beampattern bound may be interpreted as a _worst-case_ beampattern performance in terms of the sidelobe level. Notably, the upper beampattern bound is also not a proper beampattern since it cannot be attained simultaneously in all directions. Because controlling the sidelobe level is a fundamental objective in array design, it is important to understand how tolerance errors in multiple variables can affect the beampattern; for instance, to mitigate the worst-case scenarios related to a high peak sidelobe level (PSLL). An earlier example of worst-case analysis is for phase quantization sidelobes [9]. A comprehensive overview of calibration errors and analysis methods for phased array antennas can be found in the works by He et al. [10]
The first application of IA in beampattern analysis was made in the antennae community by Anselmi et al. [11], where they studied the effects of bounded amplitude errors. Subsequent works expanded the scope further. Poli et al. [12] studied the effects of phase errors, whereas Zhang et al. studied joint amplitude and phase errors [13]. Interval errors in the positions have also been investigated in various ways, such as in the case of bump-like features in reflector antennas [14]. In beampattern analysis, the intervals reside in the complex plane. In the aforementioned works, the complex intervals are represented as rectangles in the complex plane (rIA, rectangular interval arithmetic).
Bounds resulting from mutual coupling errors have been analyzed for phased antenna arrays using the circular interval representation (cIA, circular interval arithmetic) [15]. A similar analysis is based on the Cauchy-Schwarz inequality [16]. However, in the mathematical models used therein, all error types are treated as special cases of mutual coupling errors, which blurs any clear separation between the different error types.
The rectangular and circular descriptions tend to overestimate the interval bounds by decoupling the inherent dependencies between the real and imaginary components. In order to produce tighter and more correct bounds, Tenuti et al. [17] proposed a polygonal representation (pIA, polygonal interval arithmetic) by using Minkowski summation. To date, this is the most accurate method in the literature for this specific application. Other techniques, such as the Taylor-based interval method [18], also exist.
Recently IA has been introduced to sonar beamforming, starting as a cross-pollination from the antenna field. In the previous works on IA for phased-array antennas, a uniform linear array geometry was used. To make the theory more applicable to a wider range of sonars, Kirkebo and Austeng [19] derived interval bounds for arrays of arbitrary shape and directive elements subject to amplitude errors by employing rIA. We have recently extended this framework and released a toolbox for beampattern interval analysis [20], which takes into account errors in amplitude, phase, position, and directivity by employing the tighter and more accurate pIA.
To the best of the authors' knowledge, there have been relatively few previous works published on IA in the context of acoustics. One example is the calculation of room acoustic reverberation times \(T_{60}\) from bounded quantities such as volumes and sound absorption coefficients [21]. Interval analysis has also been used to find all system configurations consistent with a set of measurements, as applied to underwater acoustic source localization [22].
The current work builds upon Ref. [20], aiming to provide a more thorough analysis of worst-case situations using IA. To this end, the framework is extended to also include coupling, and to describe it separately from the other forms of calibration errors. The key result of this study follows with "backtracking", which directly recovers the errors that result in a specific upper or lower bound beampattern magnitude. This provides insight into particularly unwanted error patterns that may result in exceptionally high PSLLs. The non-uniqueness of the bounds due to ambiguities in the error distribution (i.e., phase and position errors) is presented, along with a solution for resolving these ambiguities. Finally, an expression for the approximate beampattern bounds is derived. The expression provides insight into the limitations of array length and apodization for reducing the worst-case PSLL. It also sheds light on the similarities between the worst-case and expected beampatterns. The code used for this article is available online [23].
The article is structured as follows: Sec. II covers the theoretical background, which primarily concerns beampatterns and real and complex interval arithmetic. Sec. III presents the mathematical model to obtain bounded beampatterns with IA, while Sec. IV introduces backtracking. In Sec. V an approximate bound is derived. All proposed methods are showcased as numerical experiments in Sec. VI. The results are discussed in Sec. VII and the article is concluded in Sec. VIII.
## II Theoretical background
### Beampatterns
Beamforming is the process of spatially filtering the wavefield using a sensor array. The array consists of \(M\) elements at positions \(\mathbf{r}_{m}\). The beamformer output is obtained by summing the appropriately delayed and weighted element inputs. The complex beampattern \(B(\theta)\) describes the array's characteristics. In the far-field narrowband situation it is
\[B(\theta)=\sum_{m=1}^{M}w_{m}\cdot d(\alpha_{m})\cdot e^{j(\mathbf{k}(\theta)- \mathbf{k}_{s})\cdot\mathbf{r}_{m}}, \tag{1}\]
for a wavefield arriving from the direction \(\mathbf{k}(\theta)\) and steering direction \(\mathbf{k}_{s}\). The array apodization is given by the element weights \(w_{m}\) and allows for a trade-off between a narrower mainlobe and a decreased sidelobe level. In this work, the weights are always normalized such that \(\sum_{m}w_{m}=1\). The angular dependence is due to the relation
\[\mathbf{k}(\theta)=[k_{x},k_{y}]^{\mathsf{T}}=\frac{2\pi}{\lambda}\cdot[\sin\theta,\cos\theta]^{\mathsf{T}}, \tag{2}\]
where \(\lambda\) is the wavelength. Two-dimensional beamforming is considered, with arrays in the \(x\)-\(y\) plane and broadside in \(\mathbf{\hat{y}}\) the direction. The element position is a function of the coordinates \(\mathbf{r}_{m}=[x_{m},y_{m}]^{\mathsf{T}}\).
The element-to-wavefield angle \(\alpha_{m}=\theta-\psi_{m}\) determines the influence of the element directivity through the directivity function \(d(\alpha_{m})\). Here, \(\psi_{m}\) is the angle orthogonal to the surface of the \(m^{\text{th}}\) element. For circular elements of diameter \(D\), the directivity takes on the form of the first-order Bessel aperture smoothing function [24]:
\[d(\alpha)=2\cdot\text{jinc}\left(\frac{2\pi\sin(\alpha)}{\lambda}D\right). \tag{3}\]
The directivity function may be tapered to zero beyond \(90^{\circ}\) so that the element is not sensitive to the rear, see Fig. 1.
It is customary to mainly consider the output power, defined as \(P(\theta)=|B(\theta)|^{2}\). In Fig. 2, the nominal error-free beampattern for an \(M=5\) element curved [19] sonar is illustrated, as obtained using Eq. (1). The array parameters are specified in Table 1. This example is meant to be illustrative and is referred to as array example A. The speed of sound throughout the text is assumed to be \(1500\,\mathrm{m/s}\), and the wave frequency is set to \(20\,\mathrm{kHz}\).
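For reference, the nominal beampattern of Eqs. (1)-(3) can be evaluated directly. The sketch below uses the stated sound speed and frequency, but the element layout, weights, and the jinc convention \(\mathrm{jinc}(x)=J_{1}(x)/x\) are assumptions made for illustration and do not reproduce the Table 1 design.

```python
# Sketch of the nominal (error-free) beampattern, Eqs. (1)-(3); the geometry is
# an assumed gently curved 5-element array, not the Table 1 parameters.
import numpy as np
from scipy.special import j1

c, f = 1500.0, 20e3                 # sound speed [m/s] and frequency [Hz]
lam = c / f
M, D = 5, lam / 2                   # number of elements, element diameter
phi = np.linspace(-10, 10, M) * np.pi / 180      # element normals psi_m
Rarc = 10 * lam                                  # assumed arc radius
r = np.stack([Rarc * np.sin(phi), Rarc * (np.cos(phi) - 1)], axis=1)  # (M, 2)
w = np.ones(M) / M                  # uniform weights, normalized to sum to 1

def jinc(x):
    x = np.asarray(x, float)
    out = np.full_like(x, 0.5)      # limit of J1(x)/x as x -> 0
    nz = np.abs(x) > 1e-12
    out[nz] = j1(x[nz]) / x[nz]
    return out

def beampattern(theta, theta_s=0.0):
    k = 2 * np.pi / lam * np.stack([np.sin(theta), np.cos(theta)], axis=-1)
    ks = 2 * np.pi / lam * np.array([np.sin(theta_s), np.cos(theta_s)])
    alpha = theta[:, None] - phi[None, :]            # element-to-field angles
    d = 2 * jinc(2 * np.pi * np.sin(alpha) / lam * D)  # Eq. (3), no rear taper
    phase = np.exp(1j * (k - ks) @ r.T)              # (n_theta, M)
    return (w * d * phase).sum(axis=1)               # Eq. (1)

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
P_dB = 10 * np.log10(np.abs(beampattern(theta)) ** 2)
print(f"peak level: {P_dB.max():.2f} dB")
```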
The beampattern bounds, using the error intervals given in Table 1, are introduced briefly in this section. The bounds are seen in Fig. 2, and the method of calculation is outlined in Sec. III. Between the two bounds, \(1000\) random realizations are plotted, illustrating the inclusive property of IA. The errors are drawn uniformly and independently from the intervals. Non-uniform phase bounds are chosen to highlight that edge elements may have different neighboring conditions, and that the IA framework can handle element-dependent error sizes. For instance, this could be relevant for thermal expansion where element positional deviation is proportional to the distance from the attachment point.
Taking a statistical approach, it is found that the expected power for Gaussian amplitude and small phase errors [5] is approximately
\[\mathbb{E}\left\{P(\theta)\right\}\approx|B(\theta)_{\text{nom.}}|^{2}\cdot e^ {-\sigma_{\phi}^{2}}+T_{\text{se}}\cdot\left(\sigma_{g}^{2}+\sigma_{\phi}^{2} \right), \tag{4}\]
where \(\sigma_{g}^{2}\) and \(\sigma_{\phi}^{2}\) are the variances for element amplitude and phase. \(T_{\text{se}}\) is the sensitivity function, defined as
\[T_{\text{se}}=\sum_{m=1}^{M}|w_{m}|^{2}. \tag{5}\]
In deriving this expression, we make the assumption that no variations depend on \(\theta\). The second term in Eq. (4) raises the power uniformly, affecting the ability to specify beampattern nulls in particular. The expected beampattern is also shown in Fig. 2. The variances for the non-uniform phase errors are calculated by averaging the variance across the elements, assuming uniform distributions. In Sec. V, we derive an expression for the worst-case beampattern, showing some resemblance in its formulation with the expected beampattern.
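The error floor in Eq. (4) is easy to evaluate; the tolerance values below are assumptions chosen only to indicate the orders of magnitude involved.

```python
# Quick evaluation of the sensitivity function (Eq. (5)) and the error floor of
# Eq. (4); sigma values are illustrative assumptions, not the Table 1 tolerances.
import numpy as np

M = 5
w = np.ones(M) / M                        # normalized uniform weights
sigma_g, sigma_phi = 0.05, np.deg2rad(5)  # assumed amplitude/phase std. dev.
T_se = np.sum(np.abs(w) ** 2)             # Eq. (5): equals 1/M for uniform weights
floor = T_se * (sigma_g ** 2 + sigma_phi ** 2)
print(f"T_se = {T_se:.3f}, error floor = {10 * np.log10(floor):.1f} dB")
```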
Figure 1: (Color online) The directivity function in Eq. (3) is evaluated within an orientation interval (\(\alpha\)-axis) of \(\pm 10^{\circ}\). \(D=\lambda/2\), with tapered response between \(80^{\circ}\) and \(100^{\circ}\).
Figure 2: (Color online) Beampatterns of example array A. The white region signifies the area between the bounds.
### Interval arithmetic of real numbers
Interval variables are indicated with the superscript \(I\) and represent a connected set of numbers:
\[x^{I}=[\underline{x},\overline{x}]=\{x\in\mathbb{R}:\underline{x}\leq x\leq \overline{x}\}, \tag{6}\]
where \(\underline{x}\) and \(\overline{x}\) are the lower and upper interval bounds, respectively. As with ordinary variables, operations can be performed on intervals. For the addition of two intervals
\[x^{I}+y^{I}=[\underline{x}+\underline{y},\overline{x}+\overline{y}], \tag{7}\]
and for multiplication
\[x^{I}\cdot y^{I}=[\min\left\{\underline{x}\underline{y}, \underline{x}\overline{y},\overline{x}\underline{y},\overline{x}\overline{y} \right\},\max\left\{\underline{x}\underline{y},\underline{x}\overline{y}, \overline{x}\underline{y},\overline{x}\overline{y}\right\}]. \tag{8}\]
An important feature of interval arithmetic is that subtraction and division are not the inverses of addition and multiplication, except in the special case of degenerate intervals. In other words, \(x^{I}-x^{I}\neq[0,0]\) unless the upper and lower bounds are equal.
The interval output of functions can also be defined. For example, consider the quadratic function \(f(x)=x^{2}\), which has a minimum at \(x=0\). This highlights the importance of checking if an interval contains any extrema of the function. For the interval \(x^{I}\), the function \(f(x^{I})=\{x^{2}:x\in x^{I}\}\) can be expressed as
\[f(x^{I})=\begin{cases}[\overline{x}^{2},\underline{x}^{2}]&\text{if }\overline{x}\leq 0,\\ [0,\max\left\{\underline{x}^{2},\overline{x}^{2}\right\}]&\text{if }0\in x^{I},\\ [\underline{x}^{2},\overline{x}^{2}]&\text{if }\underline{x}\geq 0.\end{cases} \tag{9}\]
The final issue to address in interval arithmetic is _the dependence problem_. This occurs when a variable is represented more than once in an expression. For example, if the interval \(x^{I}=[\underline{x},\overline{x}]=[-1,1]\) is naively multiplied with itself, the result would be \(x^{I}\cdot x^{I}=[-1,1]\) instead of \((x^{I})^{2}=[0,1]\), because the \(x^{I}\) in the first factor is treated independently from the second factor. This is due to the lack of distributivity, but it can also result from the lack of additive and multiplicative inverses in interval arithmetic [7].
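To make the interval operations of Eqs. (6)-(9) concrete, a minimal Python sketch is given below; the class and method names are ours and not part of any established IA library. The last two lines reproduce the dependence problem discussed above.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):                     # Eq. (7)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):                     # Eq. (8)
        prods = (self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi)
        return Interval(min(prods), max(prods))

    def square(self):                             # Eq. (9): respects the extremum at x = 0
        if self.hi <= 0:
            return Interval(self.hi ** 2, self.lo ** 2)
        if self.lo >= 0:
            return Interval(self.lo ** 2, self.hi ** 2)
        return Interval(0.0, max(self.lo ** 2, self.hi ** 2))

x = Interval(-1.0, 1.0)
print(x * x)        # Interval(lo=-1.0, hi=1.0): naive product, dependence problem
print(x.square())   # Interval(lo=0.0, hi=1.0): correct range of x^2 over the interval
```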
### Array errors and complex interval arithmetic
If the array elements are subject to bounded errors in phase and amplitude, the phasor values in Eq. (1) are bounded within a two-dimensional shape in the complex plane known as an annular sector, as seen in Fig. 3. This annular sector is considered to be a two-dimensional complex interval. In the beamforming process, the complex intervals are summed, resulting in a complex interval \(B^{I}\) as the output. Mathematically, for two intervals \(A^{I}_{1}\) and \(A^{I}_{2}\), the Minkowski sum is defined as the set of all possible sum combinations
\[A^{I}_{\text{sum}}=A^{I}_{1}+A^{I}_{2}=\{A_{1}+A_{2}:A_{1}\in A^{I}_{1},A_{2} \in A^{I}_{2}\}. \tag{10}\]
How this sum is performed in practice depends on the chosen interval representation, of which some examples are shown in Fig. 3. Rectangular and circular intervals enclose the annular sectors and were the first used for beampattern analysis [11; 15]. While these representations are convenient for summation, they are evidently not tight and can introduce "pessimism" to the bounds, also known as _the wrapping problem_ in IA.
A later development in this field involved integrating IA with Minkowski summation by wrapping the annular sectors with convex polygons [17], as seen in Fig. 3. The inner, concave part of the annular sector is included by forming the convex hull. Although this might seem problematic, the Shapley-Folkman lemma shows that the Minkowski summation is a convexifying operation. In other words, the sums of many concave sets are approximately convex [25]. This deviation can be quantified with the Hausdorff distance, but this topic is not a major concern because the bounds are inclusive in any case.
Considering only the convex boundary is sufficient because Minkowski summation and forming the convex hull are commuting operations [25]. The sampling resolution of the curve is determined by the error tolerance in the representation, as the polygon must enclose the arcs of the annular sectors. A summation algorithm can be implemented that runs in linear time with respect to the number of vertices on the two boundaries [26], based on comparing only vertex pairs that are extreme in the same direction.
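As an illustration of the polygon summation (though not of the linear-time algorithm of Ref. [26]), the following sketch computes the convex Minkowski sum of two polygons by brute force, summing all vertex pairs and taking the convex hull; for modest vertex counts this is adequate.

```python
import numpy as np
from scipy.spatial import ConvexHull

def minkowski_sum(p, q):
    """Convex Minkowski sum of two convex polygons.

    p, q : 1-D arrays of complex vertices (any order).
    Returns the hull vertices of {a + b : a in p, b in q}, counter-clockwise.
    """
    sums = (p[:, None] + q[None, :]).ravel()
    pts = np.column_stack((sums.real, sums.imag))
    hull = ConvexHull(pts)
    return sums[hull.vertices]

# Example: sum of two unit squares centred at 0 and 1 + 1j.
sq = np.array([0, 1, 1 + 1j, 1j], dtype=complex)
print(minkowski_sum(sq - 0.5 - 0.5j, sq + 0.5 + 0.5j))
```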
Throughout this article, other complex interval operations are needed. For example, the absolute value of a complex interval is needed to plot the power bounds \(\underline{P}(\theta)\) and \(\overline{P}(\theta)\). This real-valued interval gives the distance from the origin to the closest and furthest point on the boundary of \(B^{I}\). If the origin is contained within the interval, then the minimum distance is zero. Additionally,
Figure 3: (Color online) The nominal element response is indicated by the arrow tip. Bounded errors in amplitude and phase give interval bounds that are shaped like annular sectors. Alternative representations, like the polygonal wrapping, are used to enable arithmetic operations.
multiplication of complex intervals is only needed for a special case, which is discussed in Sec. III.2.
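A sketch of the absolute-value operation described above is given below, assuming the complex interval is available as a convex polygon with counter-clockwise vertices; the function name and representation are our choices.

```python
import numpy as np

def abs_interval(verts):
    """Distance interval [d_min, d_max] from the origin to a convex polygon.

    verts : 1-D complex array of polygon vertices in counter-clockwise order.
    """
    d_max = np.abs(verts).max()

    # Origin inside the polygon (all edge cross products non-negative) -> d_min = 0.
    edges = np.roll(verts, -1) - verts
    cross = edges.real * (-verts.imag) - edges.imag * (-verts.real)
    if np.all(cross >= 0):
        return 0.0, d_max

    # Otherwise take the minimum distance from the origin to each edge segment.
    a, b = verts, np.roll(verts, -1)
    ab = b - a
    t = np.clip(-(a.real * ab.real + a.imag * ab.imag) / np.abs(ab) ** 2, 0.0, 1.0)
    d_min = np.abs(a + t * ab).min()
    return d_min, d_max
```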
## III Beampattern Bound Formulation
### Element error model
The model for element errors and the connection to the beampattern bounds are derived with reference to Eq. (11) and Fig. 4, where a plane wave is incident to the array. Note that no assumptions are made about the array geometry, and that coupling is treated separately in Sec. III.2.
Errors are allowed in the element amplitude \(g_{m}\), the element directivity \(d(\alpha_{m})\), the element positions \(\mathbf{r}_{m}\), and the phase \(\Phi_{m}\), so that they lie within certain intervals, such as \(g_{m}^{I}=\left[\underline{g}_{m},\,\overline{g}_{m}\right]\). Typically, the intervals are specified to be symmetric around some nominal value, referred to as the interval mid-point. Let \(\delta\) denote the maximum error (or half of the interval width), so for example, \(g_{m}^{I}=[1-\delta g_{m},\,1+\delta g_{m}]\). Note that \(\mathbf{r}_{m}^{I}\) is taken to be a vector with independent interval components, resulting in rectangular areas around the nominal element positions. These intervals can be directly inserted into Eq. (1) to give
\[B^{I}(\theta)=\sum_{m=1}^{M}\underbrace{w_{m}\cdot g_{m}^{I}\cdot d_{m}(\alpha_{m}^{I})}_{\text{Amplitude interval: }a_{m}^{I}}\cdot\underbrace{e^{j(\mathbf{k}(\theta)\cdot\mathbf{r}_{m}^{I}+\Phi_{m}^{I})}}_{\text{Phase interval: }\varphi_{m}^{I}}. \tag{11}\]
This formulation provides tighter bounds, and the terms are also conveniently grouped by initially summing over circular intervals, allowing for the use of the following instrumental notation
\[E_{c}^{I} =g_{c}^{I}\cdot d_{c}(\alpha_{c}^{I})\cdot e^{j(\mathbf{k}(\theta) \cdot\mathbf{r}_{c}^{I}+\Phi_{c}^{I})}=a_{c}^{I}e^{j\varphi_{c}^{I}}, \tag{15a}\] \[A_{c}^{I} =\sum_{m=1}^{M}|C|_{mc}^{I}\cdot w_{m}\cdot e^{j(\angle C_{mc}^{I }-\mathbf{k}_{c}\cdot\mathbf{r}_{m})}, \tag{15b}\]
so that the beampattern intervals with coupling can be written as
\[B^{I}(\theta)=\sum_{c=1}^{M}E_{c}^{I}\cdot A_{c}^{I}. \tag{16}\]
Here \(E_{c}^{I}=a_{c}^{I}\cdot e^{j\varphi_{c}^{I}}\) represents a complex annular sector interval that _only_ describes an element. On the other hand, \(A_{c}^{I}\) is a complex circular interval that determines the element's interaction with the array structure, including effects such as coupling, apodization, and steering. The circle is centered at \(w_{c}\cdot e^{-j\mathbf{k}_{c}\cdot\mathbf{r}_{c}}\) and has a radius of \(R_{Ac}=\sum_{m\neq c}^{M}\gamma^{|m-c|}\cdot w_{m}\).
The product of \(E_{c}^{I}\cdot A_{c}^{I}\) results in "rounded" annular sectors, as shown in Fig. 5. These shapes are obtained through the complex Minkowski product [27], which can be understood as the union of many scaled and rotated circles. Upon closer examination, one can show that the product boundary consists of six circular arcs connected by linear segments; one for each of the four corners, and two for the inner and outer arcs of the annular sector. As discussed in Sec. II.3, the convex boundary is used for the polygonal representation. Since the inner arc is concave, the convex rounded annular sector can be described using five arcs. Thus, the sum with coupling is performed over polygons that represent rounded annular sectors.
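For completeness, one possible construction of the enclosing convex polygon of an element interval \(E_{c}^{I}\) is sketched below; the outer-arc samples are inflated by \(1/\cos(\Delta/2)\), with \(\Delta\) the angular sampling step, so that the connecting chords lie outside the arc and the representation stays inclusive. The sampling count and names are our choices.

```python
import numpy as np

def annular_sector_polygon(a_lo, a_hi, phi_lo, phi_hi, n_arc=16):
    """Convex polygon (complex vertices, counter-clockwise) enclosing the annular
    sector {a * exp(j*phi) : a_lo <= a <= a_hi, phi_lo <= phi <= phi_hi}."""
    phis = np.linspace(phi_lo, phi_hi, n_arc)
    step = (phi_hi - phi_lo) / (n_arc - 1)
    # Outer arc: radius inflated so the chords between samples stay outside the arc.
    outer = (a_hi / np.cos(step / 2)) * np.exp(1j * phis)
    # The two inner corners close the polygon; the concave inner arc is replaced
    # by the chord between them, i.e. the convex hull of the sector.
    inner = a_lo * np.exp(1j * np.array([phi_hi, phi_lo]))
    return np.concatenate((outer, inner))

# Element interval for +/-5 % amplitude and +/-5 deg phase around a unit phasor.
poly = annular_sector_polygon(0.95, 1.05, np.deg2rad(-5), np.deg2rad(5))
```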
## IV Backtracking: Direct Bound Verification
The calculated beampattern bounds have in most previous works been verified using Monte Carlo simulations [17, 19]. However, due to the statistical nature of such simulations, there is no guarantee of achieving the exact bounds. Alternatively, one could employ optimization methods to search for these bounds in a high-dimensional space.
In this section, we develop a novel technique for directly verifying the bounds. It works by recovering the errors corresponding to the beampattern that reaches the bound. This is illustrated in Fig. 6(a), and we refer to this technique as "backtracking", as the intention is to backtrack the contributing points in the complex intervals of the elements from the summed interval \(B^{I}\).
### Simple phase and amplitude intervals
We first consider a situation with simple phase and amplitude intervals, and no directivity effects or positional errors (both involve dependence on \(\theta\)), or coupling. In that situation
\[E_{c}^{I} =g_{c}^{I}\cdot e^{j(\mathbf{k}(\theta)\cdot\mathbf{r}_{c}+\Phi_{c}^{I})}, \tag{17a}\] \[A_{c} =w_{c}\cdot e^{-j\mathbf{k}_{c}\cdot\mathbf{r}_{c}}. \tag{17b}\]
Next, we choose a reference angle \(\theta_{\text{ref.}}\), for example \(50^{\circ}\) as shown in Fig. 6(a), to backtrack the specific errors \(g_{c}\in g_{c}^{I}\) and \(\Phi_{c}\in\Phi_{c}^{I}\) associated with either \(\overline{P}(\theta_{\text{ref.}})\) or \(\underline{P}(\theta_{\text{ref.}})\). For the sake of this argument, we choose \(\overline{P}\), which is the sum of \(M\) complex numbers \(z_{c}\) (with \(c=1,\dots,M\)). These numbers \(z_{c}\) are found on the boundaries of the respective intervals \(z_{c}\in\partial(E_{c}^{I}\cdot A_{c})\), and
\[\overline{P}(\theta_{\text{ref.}})=\left|\sum_{c=1}^{M}z_{c}\right|^{2}. \tag{18}\]
Figure 5: (Color online) The product and backtracking of various intervals, scaled for illustrative purposes and subscript \(c\) omitted. \(E_{c}^{I}\cdot A_{c}^{I}\) is a rounded annular sector. Points \(z\) on this boundary can be uniquely backtracked into the factor intervals, so that \(E_{c}\cdot A_{c}=z_{c}\).
This can be interpreted as the squared maximum distance from the beampattern response to the origin, as illustrated in Fig. 6(b). In order to perform the backtracking, the terms that go into the sum must be recovered, that is, solving
\[\{z_{1},\ldots,z_{M}\}=\operatorname*{argmax}_{z_{c}\in E_{c}^{I}\cdot A_{c}} \left|\sum_{c=1}^{M}z_{c}\right|^{2}. \tag{19}\]
A method for efficiently recovering \(\{z_{1},\ldots,z_{M}\}\) can be implemented using the same principle as the linear-time Minkowski sum algorithm. A detailed description of the algorithm can be found in Ch. 13 of Ref. [26], although it is not a prerequisite for the following discussion. Following Fig. 7, the first step is to find the vertex in the summed polygon that is farthest from the origin. The extreme direction will lie between the outer vertex normals. The key concept is that only pairs of vertices that are extreme in the same direction contribute to the Minkowski sum boundary, and therefore this must also apply to the backtracked vertices. For each polygon in the sum, one needs to look for the corresponding vertex \(z_{c}\) whose outer normals contain this extreme direction. The normals can be found from the edges connecting each vertex to its neighbor. The same method and argument hold for the least extreme vertex by reversing the extreme vector direction. Since backtracking involves a linear search for the vertex satisfying the outer normal condition, the complexity of backtracking all polygons is \(\mathcal{O}(M\cdot N_{\text{vert.}})\), where \(N_{\text{vert.}}\) is the number of vertices used to sample each polygon.
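The vertex-matching implementation is not reproduced here; instead, the sketch below exploits the same property, namely that the extreme point of a Minkowski sum in a given direction is the sum of the per-polygon points extreme in that direction, and simply scans a fine grid of candidate directions to backtrack the points \(z_{c}\) that realize \(\overline{P}\). The direction resolution and function names are our choices.

```python
import numpy as np

def backtrack_upper(polys, n_dir=3600):
    """Backtrack the points z_c (one per element polygon) whose sum is farthest
    from the origin, by scanning candidate extreme directions.

    polys : list of 1-D complex arrays, each the convex-polygon vertices of an
            element interval E_c^I * A_c (or E_c^I * A_c^I with coupling).
    Returns (P_upper, z) with P_upper = |sum z_c|^2 and z the backtracked points.
    """
    dirs = np.exp(1j * np.linspace(0, 2 * np.pi, n_dir, endpoint=False))
    best_val, best_z = -np.inf, None
    for d in dirs:
        # For each polygon, the vertex with the largest projection onto direction d.
        z = np.array([p[np.argmax((p * np.conj(d)).real)] for p in polys])
        s = np.abs(z.sum())
        if s > best_val:
            best_val, best_z = s, z
    return best_val ** 2, best_z
```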
With the points \(z_{c}\) known, the element errors and the corresponding beampattern can be calculated. The element errors \(\varepsilon_{c}=g_{c}\cdot e^{j\Phi_{c}}\) can be unambiguously obtained by undoing the phases from both steering and the wavefield, together with the apodization
\[\varepsilon_{c}=z_{c}/(A_{c}\cdot e^{-j\mathbf{k}(\theta_{\text{ref}})\cdot\mathbf{r}_ {c}}). \tag{20}\]
In Fig. 8, the phase and amplitude realizations for the upper and lower beampattern bounds in Fig. 6(a) are shown, along with their respective error bounds. It should be noted that while the amplitude errors will always be extreme, in the sense that either the upper or lower error bounds are reached, this is not necessary for the phase. The plot also serves to verify how well the bounds imposed on the errors are respected in the construction of the bounded beampattern.
Figure 6: (Color online) Panel (a) shows the nominal power pattern, with the backtracked upper and lower bounds for example array A. Panel (b) shows how the array response is a sum of the complex-valued element responses. The bounds (triangles) are the most extreme values possible in the array response.
Figure 7: (Color online) The points \(z_{c}\) that contribute to the most extreme vertex on the polygon sum can be found directly by matching the extreme direction with the outer vertex normals.
The beampattern is obtained by considering a wave-field impinging from another direction. Since the specific errors are known, one can simply apply them as weights when computing the beampattern
\[B(\theta|\theta_{\text{ref.}})=\sum_{c=1}^{M}w_{c}\cdot\varepsilon_{c}\cdot e^{j (\mathbf{k}(\theta)-\mathbf{k}_{z})\cdot\mathbf{r}_{c}}. \tag{21}\]
In Fig. 6(a), the unique beampatterns that reach the upper and lower bounds at the reference angle are depicted. It should be noted that certain features, such as the maximal mainlobe width, cannot be backtracked, since the bounds cannot be realized simultaneously in all directions.
### Non-unique bounds
Backtracking becomes ambiguous under two circumstances, including cases when coupling is considered. Firstly, in rare instances, there may be two or more equally extreme vertices to choose between. Secondly, and more relevant, when positional errors and phase response errors contribute to the combined phase error, and a non-extreme combined phase error is required to reach the beampattern bounds. This occurs when the extreme direction falls within the opening angle of an annular sector. In the following it is shown that backtracking can still be performed with phase and position errors jointly, as long as ambiguities are resolved when they occur, by deciding how the error is distributed over the different variables. To simplify the argument without loss of generality, assume that the position error is only in dimension \(x\).
The interval of _possible_ phase values \(z_{c}\) can assume is \(\angle z_{c}^{I}=k_{x}x^{I}+\Phi^{I}\). If the phase angle of \(z_{c}\) is not at the interval bounds, one must decide how the phase error is distributed among \(x\) and \(\Phi\). In this case, the choice is made to decide the error in \(\Phi\) first. Denote the selected value as \(\Phi^{*}\). The maximum value that can possibly be selected is indicated as \(\overline{\Phi^{*}}\), and cannot be greater than \(\overline{\Phi}\) under any circumstances. At the same time, it is limited by \(\angle z_{c}\) and the lower bound of \(k_{x}x^{I}\). Thus, any values of \(\Phi^{*}\) falling within the following bounds are valid
\[\overline{\Phi^{*}} =\min\left[\overline{\Phi},\angle z_{c}-\underline{k_{x}x}\right], \tag{22a}\] \[\underline{\Phi^{*}} =\max\left[\underline{\Phi},\angle z_{c}-\overline{k_{x}x}\right]. \tag{22b}\]
After choosing the value of \(\Phi^{*}\), the required value of \(x^{*}\) will be given as
\[x^{*}=\frac{\angle z_{c}-\Phi^{*}}{k_{x}}. \tag{23}\]
This way of resolving the ambiguities can be repeated when more errors contribute to the phase. As a consequence of these ambiguities, there may be multiple ways to realize the upper beampattern bound.
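A small sketch of this resolution strategy, following Eqs. (22)-(23) under the assumption of a positive wavenumber component \(k_{x}\) (so that the lower bound of \(k_{x}x^{I}\) is \(k_{x}\underline{x}\)); preferring the largest admissible \(\Phi^{*}\) is one arbitrary but valid choice.

```python
import numpy as np

def split_phase_error(angle_z, phi_lo, phi_hi, x_lo, x_hi, kx):
    """Distribute a required combined phase angle_z (relative to the nominal phase)
    between the element phase error Phi* and the position error x*.  Assumes kx > 0."""
    phi_star_hi = min(phi_hi, angle_z - kx * x_lo)   # Eq. (22a)
    phi_star_lo = max(phi_lo, angle_z - kx * x_hi)   # Eq. (22b)
    if phi_star_lo > phi_star_hi:
        raise ValueError("angle_z is not reachable with the given error bounds")
    phi_star = phi_star_hi                           # any value in the interval is valid
    x_star = (angle_z - phi_star) / kx               # Eq. (23)
    return phi_star, x_star

# Example: +/-3 deg phase error, +/-1 mm position error, 20 kHz in 1500 m/s water.
kx = 2 * np.pi * 20e3 / 1500.0
print(split_phase_error(np.deg2rad(2.5), np.deg2rad(-3), np.deg2rad(3), -1e-3, 1e-3, kx))
```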
### With coupling
If coupling (Sec. III.2) is included, the backtracking is further complicated because the complex values can be taken from anywhere on the boundary of the product of two intervals, \(z_{c}\in\partial(E_{c}^{I}\cdot A_{c}^{I})\). The algorithm described in Sec. IV.1 can be used to find these values, but with the additional requirement of determining which value in \(E_{c}^{I}\) and \(A_{c}^{I}\) could result in that particular \(z_{c}\), and whether that value is unique or not.
Fig. 5 displays the product of \(E_{c}^{I}\cdot A_{c}^{I}\) along with a value \(z_{c}\) that contributes to \(\overline{P}\) for the purpose of illustration. By considering how the Minkowski product \(E_{c}^{I}\cdot A_{c}^{I}\) is formed, it is possible to demonstrate that any point on the boundary is generated by a unique pair of points on the boundaries of the two factor intervals.
To backtrack \(z_{c}\), the first step is to "invert" \(E_{c}^{I}\) to produce a candidate interval \(E_{c\text{inv.}}^{I}\) of points that can give \(z_{c}\) by multiplication with \(A_{c}^{I}\)
\[E_{c\text{inv.}}^{I}=\left[\frac{|z_{c}|}{\overline{a}_{c}},\frac{|z_{c}|}{ \underline{a}_{c}}\right]\cdot e^{j(\angle z_{c}+[-\overline{\Phi}_{c},- \underline{\Phi}_{c}])}. \tag{24}\]
The intersection \(A_{c}^{I}\cap E_{c\text{inv.}}^{I}\) yields the point \(A_{c}\), which corresponds to the maximum coupling strength \(|C_{mc}|\), as shown in Fig. 5. Finally, \(E_{c}\) is directly obtained as \(E_{c}=z_{c}/A_{c}\). From \(E_{c}\), the amplitude and phase errors may be found as before. For \(A_{c}\) the phase must necessarily be
\[\angle C_{mc}=\angle\left\{A_{c}-w_{c}\cdot e^{-j\mathbf{k}_{z}\cdot\mathbf{r}_{c}}\right\}+\mathbf{k}_{z}\cdot\mathbf{r}_{m} \tag{25}\]
for all \(m\neq c\). For \(m=c\) the phase is zero by definition.
## V Approximate Beampattern Bounds
In order to gain a deeper insight into the factors that influence the beampattern bounds and PSLL, an expression for the approximate bounds is derived. The derivation begins with Eq. (16), assuming uniform and symmetric error bounds in amplitude and phase, without directional dependence. To reiterate, this means the maximum phase and amplitude errors are \(\pm\delta\Phi\) and \(\pm\delta g\), respectively, across all elements. Coupling is also allowed.
Figure 8: (Color online) Amplitude (left panel) and phase (right panel) of the backtracked errors \(\varepsilon\) for each element, corresponding to the upper bound in Fig. 6(a). It is worth noting that the indices \(c\) and \(m\) can be used interchangeably here.
The bounds are derived using the circular interval representation (cIA). First consider the intervals \(E_{c}^{I}\), which are enclosed with a circular interval, as illustrated in Fig. 3. The radius of the circular interval \(R_{E}\) is the same for all elements \(c\), and can be approximated by:
\[R_{E}=|(1+\delta g)\cdot e^{j\delta\Phi}-1|\approx\sqrt{\delta\Phi^{2}+\delta g ^{2}}. \tag{26}\]
The circular approximation overestimates the true bound, but it is most accurate when \(\delta\Phi\) and \(\delta g\) produce an annular sector that is well enclosed by a circle (e.g., \(\delta\Phi=\pm 3^{\circ}\) and \(\delta g=\pm 6\%\)). Note that \(\delta\Phi\) must be expressed in radians in the formula, whereas \(\delta g\) is unitless. The geometry being approximated is very similar to the one shown in Fig. 6(b). The deviation from the nominal array response is sought and this deviation is expressed as the sum of the respective radii when using cIA. The particular phase of the array sum is not significant and can be neglected. Therefore, the circle around \(E_{c}^{I}\), which represents the element sensitivity, can be assumed to have a nominal value of 1.
The circle that represents \(E_{c}^{I}\) is multiplied with another circle \(A_{c}^{I}\), which has a radius \(R_{Ac}\) (note the dependence on \(c\)), with the nominal value of the element weighting \(w_{c}\). The product will also have its nominal value on \(w_{c}\). The enclosing circle of the product interval, which forms a Cartesian oval [27], has a radius \(\rho_{c}\)
\[\rho_{c}=(1+R_{E})\cdot(w_{c}+R_{Ac})-w_{c}. \tag{27}\]
The maximum deviation from the nominal array response is obtained by summing the radii:
\[\begin{split}\rho_{B}&=\sum_{c=1}^{M}\rho_{c}=\sum _{c=1}^{M}(1+R_{E})\cdot(w_{c}+R_{Ac})-w_{c}\\ &\gtrapprox R_{E}+\sum_{c=1}^{M}R_{Ac},\end{split} \tag{28}\]
recalling that \(\sum w=1\). Here the second-order cross-term \(R_{E}\cdot R_{Ac}\) was neglected. To evaluate the sum over \(R_{Ac}\), the definitions in Eq. (15b) and in the subsequent text can be used, where it can be shown
\[\begin{split}\sum_{c=1}^{M}R_{Ac}&=\sum_{c=1}^{M} \sum_{\begin{subarray}{c}m=1\\ m\neq c\end{subarray}}^{M}\gamma^{|m-c|}\cdot w_{m}\\ &=-1+\sum_{m=1}^{M}w_{m}\sum_{c=1}^{M}\gamma^{|m-c|}\\ &\gtrapprox-1+\sum_{m=1}^{M}w_{m}\cdot\left(1+\frac{2\gamma}{1- \gamma}\right)\gtrapprox 2\gamma.\end{split} \tag{29}\]
The following approximations were used; first that the geometric sum runs from \(c=-\infty\) to \(\infty\), and second the Taylor expansion \(\gamma/(1-\gamma)\approx\gamma\).
Finally, the approximate upper bound is obtained as the nominal beam amplitude plus \(\rho_{B}\):
\[\overline{P}(\theta)\approx\left(|B_{\text{nom.}}(\theta)|+\sqrt{\delta\Phi^ {2}+\delta g^{2}}+2\gamma\right)^{2}. \tag{30}\]
The effect of this is that a constant term is added to the nominal amplitude to yield the upper bound. Interestingly, to the circular IA approximation, this term does not depend on the apodization \(\mathbf{w}\). As a result, the worst-case sidelobe level has an asymptotic limit even for infinitely long arrays, which is obtained simply by setting \(|B_{\text{nom.}}(\theta)|=0\) in Eq. (30). This is in contrast to the expected beampattern in Eq. (4), which _does_ depend on \(\mathbf{w}\) through \(T_{\text{se}}\). This result is consistent with the following intuition: In statistical analysis, with independent amplitude and phase errors, the errors add incoherently, so that the average sidelobe level depends on the sum of the variances of the amplitude and phase errors divided by the number of elements. However, for the worst-case analysis, the errors all achieve their maximum allowed values and add coherently, resulting in a bound that does not depend on the number of elements.
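Equation (30) is a one-line computation; the sketch below also evaluates the asymptotic worst-case sidelobe limit obtained by setting the nominal beampattern to zero, using an error budget of \(\pm 5^{\circ}\), \(\pm 5\,\%\), and \(\gamma=5\,\%\) as input.

```python
import numpy as np

def approx_upper_bound(B_nom_abs, dphi_deg, dg, gamma):
    """Approximate upper power bound, Eq. (30).

    B_nom_abs : |B_nom(theta)|, magnitude of the nominal beampattern
    dphi_deg  : maximum phase error in degrees (converted to radians internally)
    dg        : maximum fractional amplitude error (e.g. 0.05 for +/-5 %)
    gamma     : nearest-neighbour coupling strength
    """
    r = np.sqrt(np.deg2rad(dphi_deg) ** 2 + dg ** 2) + 2 * gamma
    return (np.asarray(B_nom_abs) + r) ** 2

# Asymptotic worst-case sidelobe level: nominal beampattern set to zero.
psll_limit = approx_upper_bound(0.0, 5.0, 0.05, 0.05)
print(10 * np.log10(psll_limit), "dB")
```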
## VI Numerical experiments
In this section, we perform numerical experiments in order to illustrate the type of analysis made possible using the theories developed in Secs. III, IV, and V. Consider example array B, tabulated in Table 2. For the polygonal method (pIA), the intervals need to be sampled. The sampling is sufficient to make the cumulative representation error well below \(-60\,\mathrm{dB}\). This is calculated from the maximum error in representing the analytic boundaries of the rounded annular sectors (the shapes \(E_{c}^{I}\cdot A_{c}^{I}\)), multiplied by the \(M\) times these errors are summed.
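For orientation, the nominal (error-free) beampattern of example array B can be generated with a few lines of Python; the Chebyshev weights come from scipy, and the phasor and steering conventions follow Eq. (21), while the angular sampling is our choice.

```python
import numpy as np
from scipy.signal.windows import chebwin

c, f = 1500.0, 20e3                     # speed of sound [m/s], frequency [Hz]
lam = c / f
M, pitch = 31, lam / 2                  # Table 2: 31 omnidirectional elements at lambda/2
x = (np.arange(M) - (M - 1) / 2) * pitch
w = chebwin(M, at=30)                   # -30 dB Chebyshev apodization
w = w / w.sum()                         # normalized so that sum(w) = 1

theta_s = np.deg2rad(-10.0)             # steering angle
theta = np.deg2rad(np.linspace(-90, 90, 1801))
k = 2 * np.pi / lam
# Nominal beampattern: steered, weighted sum of the element phasors.
B_nom = np.exp(1j * k * (np.sin(theta)[:, None] - np.sin(theta_s)) * x) @ w
P_dB = 20 * np.log10(np.abs(B_nom))
```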
The bounds for this problem are shown in Fig. 9. The lower bound can only be seen close to the mainlobe, and is therefore not mentioned in the legend. The upper bound is relatively uniform, with only minor dips where the beampattern nulls are expected. Additionally, the approximate worst case, calculated with Eq. (30), is also illustrated.
Fig. 9 also shows the worst-case beampattern corresponding to the upper bound at \(13.6^{\circ}\). The backtracking includes both the element errors and the coupling matrix.
| Parameter | Value |
| --- | --- |
| Number of elements, \(M\) | 31 |
| Element pitch | \(\lambda/2\) |
| Element diameter, \(D\) | \(\approx 0\lambda\) (omnidirectional) |
| Array geometry | Uniform linear array |
| Apodization, \(\mathbf{w}\) | \(-30\,\mathrm{dB}\) Chebyshev |
| Steering angle, \(\theta_{s}\) | \(-10^{\circ}\) |
| Maximum amplitude error, \(\delta g\) | \(\pm 5\%\) |
| Maximum phase error, \(\delta\Phi\) | \(\pm 5^{\circ}\) |
| Coupling strength, \(\gamma\) | \(5\%\) |

Table 2: Example array B.
The backtracked element errors resulting in the particular worst-case sidelobe are shown in Fig. 10**(a)** and **(b)**. The backtracked coupling matrix is shown in Fig. 10**(c)** and **(d)**. The coupling phase is shown in **(c)**, and lines are apparent on the anti-diagonals. These lines depend largely on the backtracked angle, and for a certain angle, this matrix will be symmetric. However, in this example, the matrix is only approximately symmetric. Panel **(d)** shows that only the neighboring elements make a significant magnitude contribution. The probability density function (pdf) of the array power response is obtained by uniformly sampling the phase and amplitude bounds of element error and coupling, as shown in panel **(e)**.
To visualize the beam profile, the continuous wave excitation is calculated using the k-Wave function acousticFieldPropagator[28]. The nominal beam is shown in panel **(f)**, while the worst-case performance is shown in **(g)**. Plotting the difference beam in panel **(h)** clearly reveals that the errors align to form a separate beam approximately \(-13\,\mathrm{dB}\) below the mainlobe.
Finally, it is explored how the PSLL is affected by the nominal sidelobe level specified when using a Chebyshev window. Four different cases are shown in Fig. 11. Case 1 is the same as discussed earlier (example B). In Case 2, the amplitude and phase error bounds are made more uneven (so the cIA approximation is worse). Case 3 looks at small, but even amplitude and phase error bounds. The analytic value for the asymptotic PSLL as the nominal sidelobes vanish is also plotted, using Eq. (30). Case 4 is an outlier; the elements are directive, and a tilt error interval of \(2^{\circ}\) is specified. No approximate bounds are available, but it is evident here that the PSLL decreases until it suddenly flattens out.
## VII Discussion
The backtracking technique showed it could recover the worst-case error realization for a PSLL in a given direction. By reapplying the errors it was made evident that the corresponding beampattern reaches the bounds. It is timely to address the relevance of the bounds. The backtracked errors plotted in Fig. 10**(a-b)** indicate that the errors are usually extreme, which was also seen in Sec. IV.1. The number of binary (maximum or minimum) error configurations for amplitude and phase, in panels **(a)** and **(b)**, is \(2^{31\cdot 2}\approx 4.6\cdot 10^{18}\). Even if we assume that only one of these configurations represents _the_ worst-case error for a particular direction, it should still be considered unlikely to encounter a configuration that is the worst-case in _any_ direction. Panel **(e)** illustrates that with independent errors from uniform continuous distributions between the bounds, the sidelobe level clusters around the nominal value, making the bound practically impossible to achieve.
However, while IA may be seen as pessimistic, the statistical method assuming independence represents an optimistic approach. This is because without knowledge of the true bounded error distribution or covariance/dependence between the errors, it becomes challenging to assess the relative probability of values close to the bounds. IA avoids this issue entirely. Additionally, some arrays are composed of a limited number of blocks/modules with common errors within the blocks. Achieving a worst-case block positioning is much more likely than achieving a worst-case element position, and it naturally imposes a periodic structure in the errors. This can be connected with Chapter 3.1.3 in Ref. [24], where a sinusoidal disturbance essentially introduces a dominating sidelobe. It is also important to note that any error pattern resembling, in some sense, the worst-case scenario is still undesirable. Therefore, a practical usage of backtracking can be to identify configurations (such as periodic errors in linear arrays) that are plausible in manufacturing. We argue that for array designers, the worst-case scenario is relevant as it specifies a guaranteed performance all arrays will meet, and that this information is complementary to the expected behaviour.
In order to obtain the most accurate beampattern bounds, the polygonal representation (pIA) was used. This method requires sampling the boundaries of the complex intervals. For the chosen examples, this did not raise any practical issues. However, if the cumulative error has to stay fixed with an increase in the number of elements, denser sampling may be needed. This requirement can potentially result in a significant computational cost unless proper methods are in place to address it. A natural technique to mitigate this issue is to remove vertices that are very close to each other (within some tolerance) between each polygon summation. This approach limits a potential exponential growth in the number of vertices, and preliminary tests have shown it to be a promising technique. However, addressing this issue in detail is beyond the scope of this article.
In the bound formulation with coupling, as shown in Eq. (14), coupling coefficient reciprocity \(C_{ij}=C_{ji}\) would be expected. However, despite concentrated efforts, no solution has been found to enforce reciprocity due to the dependence problem in IA. On the other hand, as demonstrated by the backtracking results in Fig. 10**(c)**, there
Figure 9: (Color online) The beampatterns and bounds of example array B, tabulated in Table 2.
are situations where the coupling matrix \(\mathbf{C}\) is _nearly_ symmetric, resulting in small overestimation of the bounds, at least for linear arrays when only nearest-neighbor coupling matters. This means that even if coupling reciprocity could be enforced, the worst-case PSLL when using the Chebyshev window would not be significantly different.
In Fig. 11, the PSLL for certain error bounds was plotted against the specified nominal PSLL. For cases 1 to 3, the worst-case PSLL eventually deviates significantly from the nominal PSLL. The approximate bound closely follows the accurate bound, but less so in case 2 where the circular approximation assumed is worse. The general agreement indicates reasonable assumptions in the derivation, which may be helped by intermediate approximations that over- and underestimate the true quantities. In any case, choice of apodization and the number of elements can only to a limited extent reduce the worst-case PSLL.
The lack of directional errors in the approximate formula is not a major concern; positional errors can be included for linear arrays (at the cost of a more complicated expression), but eventually directional errors are translated into amplitude and phase anyhow. Case 4 highlights one special feature of directional errors, as directive elements are included in this case. The sudden flattening of the PSLL is unusual, but closer inspection reveals a peculiar effect: when the incidence angle is \(90^{\circ}\), the worst-case PSLL is obtained by alternate tilting of the elements such that a grating lobe is produced, effectively undersampling the wavefield. This comes from the directivity function in Fig. 1, but it can be argued that the strong effect shown here is a result of the sharp tapering used.
## VIII Conclusions
This article presents a comprehensive framework for calculating inclusive beampattern bounds. In addition to tackling arbitrary geometries, a multitude of error intervals can be specified, such as amplitude, phase, position, directivity, and coupling. This flexible technique does not rely on any assumptions about error distributions or correlations.
Figure 10: (Color online) Various plots related to the backtracked angle in Fig. 9. Panel **(a)** shows the corresponding element amplitude error, **(b)** the element phase error, **(c)** the coupling phase error, **(d)** the coupling magnitude error, **(e)** the pdf obtained from Monte Carlo simulation of independent errors within the bounds, **(f)** the nominal beam, **(g)** the worst-case beam, and **(h)** the difference between nominal and worst-case.
The most notable contribution of this study is "backtracking", which allows for the direct recovery of the specific configuration of errors and beampatterns that result in the upper or lower bounds. To the best of our knowledge, this is the first time such a direct method has been proposed to verify the beampattern bounds provided by interval arithmetic. Furthermore, backtracking shows which error patterns across the array are detrimental to the peak sidelobe level, allowing array designers to take measures to prevent them in practice.
In addition, this study presents an approximate formula for the bounds of uniformly bounded errors, assuming no directional dependencies. The derived formula specifies and quantifies the factors that influence the worst-case performance of the array. Notably, as opposed to the expected beampattern (a statistical concept), the worst-case beampattern cannot be improved by increasing the number of elements in the array. Additionally, the effect of the apodization window is limited to reducing the nominal beampattern.
The results obtained in this study pertain to the array farfield, but can easily be generalized to the nearfield. In future work, our aim is to analyze modular arrays. The worst-case performance is highly relevant in such systems since fewer degrees of freedom and imposed periodic errors make the worst-case significantly more likely. Additionally, we plan to quantify interval errors in the context of adaptive beamformers. Another open direction for further investigation is to extend the framework to broadband systems, such as medical ultrasound or synthetic aperture sonar. In the latter, various types of periodic array errors are prominent due to the repetition of the same platform in order to synthesize a large array.
###### Acknowledgements.
The authors thank Roy Edgar Hansen for interesting discussions on the topic and his comments to this work. G. G., T. I. L, and A. A acknowledge funding from the Research Council of Norway project _Element calibration of sonars and echosounders_, project number 317874. J. E. K acknowledges internal research funding from InPhase Solutions AS. H. K. A. and G. G. share the first authorship for this article.
|
2306.14669 | Quantum phases and spectrum of collective modes in a spin-1 BEC with
spin-orbital-angular-momentum coupling | Motivated by the recent experiments [Chen et al., Phys. Rev. Lett 121, 113204
(2018), Chen et al., Phys. Rev. Lett. 121, 250401 (2018)], we investigate the
low-lying excitation spectrum of the ground-state phases of
spin-orbital-angular-momentum-coupled (SOAM-coupled) spin-1 condensates.At
vanishing detuning, a ferromagnetic SOAM-coupled spin-1 BEC can have two
ground-state phases, namely coreless and polar-core vortex states, whereas an
antiferromagnetic BEC supports only polar-core vortex solution. The angular
momentum per particle, longitudinal magnetization, and excitation frequencies
display discontinuities across the phase boundary between the coreless vortex
and polar-core vortex phases. The low-lying excitation spectrum evaluated by
solving the Bogoliubov-de-Gennes equations is marked by avoided crossings and
hence the hybridization of the spin and density channels. The spectrum is
further confirmed by the dynamical evolution of the ground state subjected to a
perturbation suitable to excite a density or a spin mode and a variational
analysis for the density-breathing mode. | Paramjeet Banger, Rajat, Arko Roy, Sandeep Gautam | 2023-06-26T13:06:18Z | http://arxiv.org/abs/2306.14669v2 | Quantum phases and spectrum of collective modes in a spin-1 BEC with spin-orbital-angular-momentum coupling
###### Abstract
Motivated by the recent experiments [Chen _et al._, Phys. Rev. Lett **121**, 113204 (2018), Chen _et al._, Phys. Rev. Lett. **121**, 250401 (2018)], we investigate the low-lying excitation spectrum of the ground-state phases of spin-orbital-angular-momentum-coupled (SOAM-coupled) spin-1 condensates. At vanishing detuning, a ferromagnetic SOAM-coupled spin-1 BEC can have two ground-state phases, namely coreless and polar-core vortex states, whereas an antiferromagnetic BEC supports only polar-core vortex solution. The angular momentum per particle, longitudinal magnetization, and excitation frequencies display discontinuities across the phase boundary between the two phases. The low-lying excitation spectrum evaluated by solving the Bogoliubov-de-Gennes equations is marked by avoided crossings and hence the hybridization of the spin and density channels. The spectrum is further confirmed by the dynamical evolution of the ground state subjected to a perturbation suitable to excite a density or a spin mode and a variational analysis for the density-breathing mode.
+
Footnote †: preprint: APS/123-QED
## I Introduction
The experimental realization of spin-orbit (SO) coupling marked an important milestone in the field of quantum degenerate Bose gases [1; 2; 3]. The SO coupling in these experiments, coupling the spin and linear momentum of electrically neutral bosons, is created by controlling the interaction between atoms and light [1; 2; 3; 4]. The rich ground-state phase diagram of SO-coupled spin-1 BECs, besides plane-wave, standing-wave, and zero-momentum phases [5; 6], also admits half-quantum-vortex [7] and vortex-lattice states [8], among others. Apart from the ground-state phase diagram, collective excitations in trapped SO-coupled BECs, another aspect of fundamental interest [9; 10], have been studied experimentally [11; 12] as well as theoretically [13; 14] in harmonically-trapped SO-coupled pseudospinor BECs. Recently, we studied the collective excitations in a quasi-one-dimensional SO-coupled spin-1 BEC with antiferromagnetic interactions at zero and finite temperatures [15].
For the last few years, there has been a growing interest in coupling the orbital angular momentum of the atoms' center of mass with their internal spin states using a pair of copropagating Laguerre-Gaussian laser beams with opposite winding numbers. Commonly known as the spin-orbital-angular-momentum (SOAM) coupling, this feature has been independently demonstrated by two experimental groups by coupling two [16] or three magnetic sub-levels of the \(F=1\) manifold of \({}^{87}\)Rb atoms [17; 18], thus affirming the validity of earlier theoretical proposals [19; 20; 21; 22; 23; 24]. In the context of SOAM-coupled pseudospin-1/2 BECs, polarized and zero-momentum phases have been observed experimentally. Besides these, stripe, annular-stripe, two-vortex-molecule, and vortex-antivortex-molecule phases have also been studied theoretically [25; 26; 27; 22]. Theoretical studies on the effects of the ring-trapping potential on the annular-stripe phase in an SOAM-coupled pseudospin-1/2 condensate have also been carried out [28; 21].
Along with studies on equilibrium ground-state phase diagrams, spectroscopic studies have been carried out on SOAM-coupled pseudospin-1/2 BECs [26; 29; 30]. In particular, the low-lying excitation spectrum, including breathing and dipole modes, has been studied for the half-skyrmion and vortex-antivortex phases [26; 29]. Additionally, the ground-state phases and excitation spectrum have been studied for a pseudospin-1/2 BEC with higher-order SOAM coupling [30].
However, the detailed phase diagrams and excitation spectra of SOAM-coupled spin-1 BECs with polar and ferromagnetic spin-exchange interactions have not yet been studied theoretically. This sets the stage for the present work. With inspiration drawn from the experimental research reported in [18] and an aim to bridge the research gap, our objective is to study the excitation spectrum of the ground-state phases observed in SOAM-coupled spin-1 BECs. The excitation spectrum, calculated by solving the Bogoliubov-de Gennes (BdG) equations, is supported by the time evolution of the expectation values of the physical observables with an aptly chosen perturbation being added to the Hamiltonian at time \(t=0\). For the sake of comprehensiveness, we additionally employ the variational method to analytically calculate the frequency of the density-breathing mode.
The paper is organized as follows. In Sec. II, we present the Hamiltonian describing an SOAM-coupled spin-1 BEC in cylindrical coordinates and the reduction to a quasi-two-dimensional (quasi-2D) formulation through a set of coupled Gross-Pitaevskii equations (GPEs). In Sec. III, we discuss the ground-state phases of SOAM-coupled ferromagnetic and polar BECs in the limit of
vanishing detuning. In Sec. IVA, we discuss the spectrum of the noninteracting SOAM-coupled spin-1 BEC, and follow it with the collective excitations of the interacting SOAM-coupled spin-1 BECs in IVB. In Sec. IVC, we study real-time dynamics of the perturbed ground state to illustrate the ensuing dynamics in the density and spin channels. In Sec. IVD, the variational method to study a few low-lying modes is discussed, which is followed by the summary of key results in Sec. V.
## II Model
In this work, we consider SOAM-coupled spin-1 BECs in which the orbital angular momentum of the center of the mass of the atoms is synthetically coupled to their internal spin states [16; 17]. In the cylindrical coordinate system, the non-interacting (single-particle) part of the Hamiltonian for the spinor BEC is [17; 18]
\[H_{\mathrm{s}}= \Bigg{[}-\frac{\hbar^{2}}{2M}\frac{\partial}{r\partial r}\left(r \frac{\partial}{\partial r}\right)+\frac{L_{z}^{2}}{2Mr^{2}}-\frac{\hbar^{2}}{2M }\frac{\partial^{2}}{\partial z^{2}} \tag{1}\] \[+V(\mathbf{r})\Bigg{]}\mathbf{I}+\Omega(r)\cos(\phi)S_{x}-\Omega (r)\sin(\phi)S_{y}+\delta S_{z},\]
where \(\mathbf{I}\) is a 3\(\times\)3 identity matrix, \(V(\mathbf{r})=M\omega_{0}^{2}r^{2}/2+M\omega_{z}^{2}z^{2}/2\) constitutes the external harmonic potential to trap the atoms of mass \(M\), \(L_{z}=-i\hbar\partial/\partial\phi\) is the angular momentum operator, \(\Omega(r)=\Omega_{0}\sqrt{e}(r/r_{0})e^{-r^{2}/2r_{0}^{2}}\) is the Raman-coupling strength with \(\Omega_{0}\) and \(r_{0}\) as the Rabi frequency and the radius of the maximum-intensity (cylindrical) surface, respectively, \(\delta\) is the Raman detuning, and \(S_{x},S_{y}\) and \(S_{z}\) are irreducible representations of the spin-1 angular momentum operators [17; 18]. Under mean-field approximation, the interacting part of the Hamiltonian \(H_{\mathrm{int}}\) is given by [31]
\[H_{\mathrm{int}}=\frac{c_{0}}{2}\rho+\frac{c_{1}}{2}\mathbf{F}.\mathbf{S} \tag{2}\]
with \(c_{0}\) and \(c_{1}\) as the mean-field interaction parameters. The total density of the system is given by \(\rho\), \(\mathbf{F}=(F_{x},F_{y},F_{z})\) is the spin-density vector, and \(\mathbf{S}=(S_{x},S_{y},S_{z})\). Since the SOAM coupling is restricted to the radial plane, and we consider \(\omega_{z}\gg\omega_{0}\), the dominant dynamics is constrained to the same plane with frozen axial degrees of freedom. We can then integrate out the \(z\) degree of freedom from the condensate wave function and describe the system as quasi-2D on the radial \(r\)-\(\phi\) plane. Starting from the Hamiltonian \(H=H_{\mathrm{s}}+H_{\mathrm{int}}\), in polar coordinates, we obtain the following coupled quasi-2D GPEs in dimensionless form
\[i\frac{\partial\psi_{\pm 1}}{\partial t} = \mathcal{H}\psi_{\pm 1}+c_{1}(\rho_{0}\pm\rho_{-})\psi_{\pm 1 }+c_{1}\psi_{\mp 1}^{*}\psi_{0}^{2} \tag{3a}\] \[\pm\delta\psi_{\pm 1}+\frac{\Omega(r)}{\sqrt{2}}e^{\pm i\phi} \psi_{0},\] \[i\frac{\partial\psi_{0}}{\partial t} = \mathcal{H}\psi_{0}+c_{1}\rho_{+}\psi_{0}+2c_{1}\psi_{+1}\psi_{- 1}\psi_{0}^{*}\] (3b) \[+\frac{\Omega(r)}{\sqrt{2}}(e^{-i\phi}\psi_{+1}+e^{i\phi}\psi_{- 1}),\]
where
\[\mathcal{H} =-\frac{1}{2}\frac{\partial}{r\partial r}\left(r\frac{\partial}{ \partial r}\right)+\frac{L_{z}^{2}}{2r^{2}}+\frac{r^{2}}{2}+c_{0}\rho,\] \[\rho =\sum_{j=\pm 1,0}\rho_{j},\ \rho_{j}=\left|\psi_{j}\right|^{2},\ \rho_{\pm}=\rho_{+1}\pm\rho_{-1}.\]
Under geometric renormalization, in terms of \(s\)-wave scattering lengths \(a_{0}\) and \(a_{2}\) in the total spin 0 and 2 channels, respectively, \(c_{0}\) and \(c_{1}\) take the form
\[c_{0}=\sqrt{8\pi\alpha}\frac{N(a_{0}+2a_{2})}{3a_{\mathrm{osc}}},\quad c_{1}= \sqrt{8\pi\alpha}\frac{N(a_{2}-a_{0})}{3a_{\mathrm{osc}}} \tag{5}\]
denoting the spin-independent and spin-dependent interactions, respectively. The anisotropy parameter \(\alpha=\omega_{z}/\omega_{0}\) is defined to be the trapping frequency ratio along the axial to the radial direction, and \(N\) is the total number of atoms. The units of length, time, energy, and energy eigenfunctions are considered to be \(a_{\mathrm{osc}}=\sqrt{\hbar/(M\omega_{0})}\), \(\omega_{0}^{-1}\), \(\hbar\omega_{0}\), and \(a_{\mathrm{osc}}^{-1}\), respectively, and \(\int r\rho(r)drd\phi=1\).
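For orientation, Eq. (5) can be evaluated directly; the sketch below uses the \({}^{87}\)Rb scattering lengths and trap frequencies quoted later in Sec. III, while the atom number \(N\) is left as an input, since the quoted interaction strengths correspond to a particular \(N\).

```python
import numpy as np

hbar = 1.054571817e-34                     # J s
a_B = 5.29177210903e-11                    # Bohr radius [m]
m_Rb = 86.909 * 1.66053906660e-27          # 87Rb atomic mass [kg]

def interaction_strengths(N, a0, a2, omega_r, omega_z, mass):
    """Dimensionless c0 and c1 of Eq. (5) for a quasi-2D spin-1 condensate."""
    a_osc = np.sqrt(hbar / (mass * omega_r))
    alpha = omega_z / omega_r
    pref = np.sqrt(8 * np.pi * alpha) * N / (3 * a_osc)
    return pref * (a0 + 2 * a2), pref * (a2 - a0)

# Trap and scattering parameters as quoted in the text; N = 1000 is an assumption here.
c0, c1 = interaction_strengths(N=1000, a0=101.8 * a_B, a2=101.4 * a_B,
                               omega_r=2 * np.pi * 140, omega_z=2 * np.pi * 2400,
                               mass=m_Rb)
```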
## III Ground state quantum phases of SOAM coupled spinor BEC
To understand the intercomponent phase relationship imposed by various competing terms in the Hamiltonian, we consider a generic circularly symmetric _ansatz_, \(\psi_{j}=f_{j}(r)e^{i(w_{j}\phi+\beta_{j})}\), for the component wavefunctions, where \(w_{j}\) and \(\beta_{j}\) are, respectively, the phase-winding number and constant phase associated with the radially-symmetric real function \(f_{j}\). The phase-dependent part of the interaction energy is minimized, provided [32]
\[w_{+1}-2w_{0}+w_{-1} =0, \tag{6a}\] \[\beta_{+1}-2\beta_{0}+\beta_{-1} =\begin{cases}2n\pi\quad\text{for $c_{1}<0$},\\ (2n^{\prime}+1)\pi\quad\text{for $c_{1}>0$},\end{cases} \tag{6b}\]
where \(n\) and \(n^{\prime}\) are integers. Similarly, SOAM-part of the energy is minimized if
\[w_{+1}-w_{0} =1,\quad w_{0}-w_{-1}=1, \tag{7a}\] \[\beta_{+1}-\beta_{0} =(2p+1)\pi,\quad\beta_{0}-\beta_{-1}=(2p^{\prime}+1)\pi, \tag{7b}\]
where \(p\) and \(p^{\prime}\) are again integers. If the conditions on the winding numbers in Eq. (7a) are satisfied, the condition in Eq. (6a) is satisfied too. On the other hand, conditions between the constant phase factors in Eqs. (6b) and (7b) can be simultaneously satisfied for \(c_{1}<0\) only.
To further substantiate the intercomponent phase relationships imposed by SOAM-coupling, we extract \(\mathcal{S}=S_{x}\cos\phi-S_{y}\sin\phi\)[25] from the single-particle Hamiltonian \(H_{s}\). In the limit when \(\Omega_{0}\) is large, the \(c_{1}\)-dependent part of the Hamiltonian can be neglected, and the phase structure of the emergent ground-state solution is mainly determined by \(\mathcal{S}\) via its minimum-energy eigenspinor. The normalized eigenspinor of \(\mathcal{S}\) with minimum eigenenergy \(-1\) can be written as \((e^{i(m+1)\phi},\ -\sqrt{2}e^{im\phi},\ e^{i(m-1)\phi})^{T}/2\) with \(m\) being any integer. The phase structure of this eigenspinor is consistent with the phase relations in Eqs. (7a)-(7b). With an increase in \(m\), there is an energy cost from the phase-dependent part of the kinetic energy, suggesting that only small values of the phase-winding numbers may emerge. Numerical results confirm this, where we obtain a solution corresponding to \(m=0\) in the large \(\Omega_{0}\) limit irrespective of the nature of the spin-exchange interactions. The spinor part of the ground state in this limit tends to approach the aforementioned eigenstate of \(\mathcal{S}\) with \(m=0\).
Various numerical techniques have been employed in the literature to study spinor BECs in quasi-one-dimensional, quasi-two-dimensional, and three-dimensional settings [33; 34; 35; 36; 37]. In practice, we use the finite-difference method and choose different initial guess solutions as inputs to Eqs. (3a)-(3b) to arrive at ground-state solutions. As an example, we take initial states \(\Psi\sim e^{-r^{2}/2}\times(e^{i(m+1)\phi},\ -\sqrt{2}e^{im\phi},\ e^{i(m-1)\phi})^{T}/2\), with different values of \(m\). Besides these initial states, we consider a random initial guess where \(\psi_{j}(r)\) are complex Gaussian random numbers.
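A sketch of one such initial guess, evaluated on a polar grid, is given below; the grid sizes and the discrete normalization are our choices.

```python
import numpy as np

def initial_guess(m, r, phi):
    """Spinor initial state psi ~ exp(-r^2/2) * (e^{i(m+1)phi}, -sqrt(2) e^{im phi},
    e^{i(m-1)phi})^T / 2 on a polar grid, normalized so that int rho r dr dphi = 1."""
    R, PHI = np.meshgrid(r, phi, indexing="ij")
    envelope = np.exp(-R ** 2 / 2)
    psi = np.stack([envelope * np.exp(1j * (m + 1) * PHI) / 2,
                    -np.sqrt(2) * envelope * np.exp(1j * m * PHI) / 2,
                    envelope * np.exp(1j * (m - 1) * PHI) / 2])
    rho = np.sum(np.abs(psi) ** 2, axis=0)
    dr, dphi = r[1] - r[0], phi[1] - phi[0]
    norm = np.sum(rho * R) * dr * dphi
    return psi / np.sqrt(norm)

r = np.linspace(1e-6, 10, 256)
phi = np.linspace(0, 2 * np.pi, 128, endpoint=False)
psi = initial_guess(m=0, r=r, phi=phi)
```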
At the outset, motivated by the experimental realization of the SOAM-coupled BECs [17; 18] using spin-1 \({}^{87}\)Rb atoms, we validate our numerical simulations to study and emulate the observed ground-state quantum phases of the ferromagnetic system in the absence of detuning \(\delta=0\). Similar to the experiment, we consider the \({}^{87}\)Rb atoms confined in an anisotropic harmonic trap with \(\omega_{0}=2\pi\times 140\) Hz and \(r_{0}=15\ \mu\)m [18]. However, we take \(\omega_{z}=2\pi\times 2400\) Hz enabling us to perform quasi-2D simulations. Here \(a_{0}=101.8a_{B}\) and \(a_{2}=101.4a_{B}\) with \(a_{B}\) as the Bohr radius [38]. The ground-state densities and phase distributions, obtained numerically by solving the coupled GPEs (3a)-(3b) with imaginary-time propagation, for given \(\Omega_{0}\) and \(N\), are in qualitative agreement with the experimental results. The ground-state densities calculated for a pair of \(\Omega_{0}\) values with \(N=5000\) are shown in Figs. 1(a)-(b).
For \(\Omega_{0}=0.25\), the solutions with \((+2,+1,0)\) and \((0,-1,-2)\) phase-winding numbers are two degenerate ground states, and with \(\Omega_{0}=1\), \((+1,0,-1)\) state is obtained as the ground state solution. As we vary \(\Omega_{0}\) from \(0\) to \(20\), at small \(\Omega_{0}\), due to the co-action of spin-dependent interaction term and SOAM coupling, (+2,+1,0)-type solution appears as the ground state. After a critical value of coupling strength (say \(\Omega_{0}^{c}\)), \(\Omega\mathcal{S}\) primarily dictates the nature of the solution to result in \((+1,0,-1)\)-type phase. The condition \(\langle\mathcal{S}\rangle\approx-1\) is satisfied in this latter phase for sufficiently large \(\Omega_{0}\) as shown in Fig. 2(a), which indicates that no further phase can be expected with higher \(\Omega_{0}\). We term these two phases I and II. In contrast to \({}^{87}\)Rb, \((+1,0,-1)\)-type is the single ground state phase for \({}^{23}\)Na with \(c_{1}>0\). In this case too, \(\langle\mathcal{S}\rangle\approx-1\) at large \(\Omega_{0}\) as shown in Fig. 2(a).
Longitudinal magnetization per particle \(f_{z}=\int F_{z}d\mathbf{r}\), spin expectation per particle \(f=\int|\mathbf{F}|d\mathbf{r}\) where \(|\mathbf{F}|=\sqrt{F_{x}^{2}+F_{y}^{2}+F_{z}^{2}}\), and angular momentum per particle \(\langle L_{z}\rangle\) can be used to characterize these ground-state phases. In the ferromagnetic domain with \(c_{0}=121.28\) and \(c_{1}=-0.56\), for \(\Omega_{0}\leq\Omega_{0}^{c}=0.3\), i.e., in phase I, \(\langle L_{z}\rangle\neq 0\) and increases continuously as shown in the inset of Fig. 2(a), whereas \(|f_{z}|\approx 1\) and \(f=1\) as shown in Fig. 2(b). For \(\Omega_{0}>\Omega_{0}^{c}\), the transition to phase II is
Figure 2: (Color online) (a) \(\langle\mathcal{S}\rangle\) as a function of SOAM-coupling strength \(\Omega_{0}\) for \({}^{87}\)Rb with \(c_{0}=121.28\) and \(c_{1}=-0.56\) and \({}^{23}\)Na with \(c_{0}=121.35\) and \(c_{1}=3.80\). Inset in (a): \(\langle L_{z}\rangle\) for \({}^{87}\)Rb as a function of \(\Omega_{0}\). (b) \(|f_{z}|\) and \(f\) for \({}^{87}\)Rb and \({}^{23}\)Na as a function of SO coupling strength \(\Omega_{0}\). The \(c_{0}\) and \(c_{1}\) for \({}^{87}\)Rb and \({}^{23}\)Na are the same as those in (a).
accompanied by discontinuities in \(\langle L_{z}\rangle\), \(|f_{z}|\), and \(f\): the former two drop to zero, whereas the latter becomes less than one. In the antiferromagnetic domain, e.g., with \(c_{0}=121.35\) and \(c_{1}=3.8\), there is no phase transition with an increase in \(\Omega_{0}\), resulting in smooth behavior of the same quantities. Here \(f\) asymptotically approaches one, whereas \(|f_{z}|\) and \(\langle L_{z}\rangle\), as expected, remain zero.
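The characterization quantities defined above follow directly from the spinor wavefunction. The sketch below assumes \(\psi\) is sampled on the same polar grid as in the previous snippet (an array of shape \((3,N_{r},N_{\phi})\) ordered as \(j=+1,0,-1\)), with \(F_{x}\) and \(F_{y}\) obtained from the spin-1 ladder combination.

```python
import numpy as np

def observables(psi, r, phi):
    """Per-particle longitudinal magnetization f_z, spin expectation f, and <L_z>
    for a normalized spin-1 wavefunction psi[j] with j = (+1, 0, -1)."""
    dr, dphi = r[1] - r[0], phi[1] - phi[0]
    weight = r[:, None] * dr * dphi                      # polar area element

    # F_+ = F_x + i F_y = sqrt(2) (psi_{+1}^* psi_0 + psi_0^* psi_{-1})
    f_plus = np.sqrt(2) * (np.conj(psi[0]) * psi[1] + np.conj(psi[1]) * psi[2])
    Fx, Fy = f_plus.real, f_plus.imag
    Fz = np.abs(psi[0]) ** 2 - np.abs(psi[2]) ** 2

    f_z = np.sum(Fz * weight)
    f = np.sum(np.sqrt(Fx ** 2 + Fy ** 2 + Fz ** 2) * weight)

    # <L_z> = sum_j int psi_j^* (-i d/dphi) psi_j r dr dphi, periodic in phi.
    dpsi_dphi = (np.roll(psi, -1, axis=2) - np.roll(psi, 1, axis=2)) / (2 * dphi)
    Lz = np.sum((np.conj(psi) * (-1j) * dpsi_dphi).real * weight)
    return f_z, f, Lz
```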
Furthermore, we calculate the ground-state phase diagrams in the \(c_{1}/c_{0}\)-\(\Omega_{0}\) plane, where we fix \(c_{0}=121.28\) and vary \(c_{1}\), and in the \(N\)-\(\Omega_{0}\) plane for fixed \(c_{1}/c_{0}=-0.0046\), which corresponds to \({}^{87}\)Rb. The ratio \(c_{1}/c_{0}\) may be manipulated experimentally by tuning one of the scattering lengths by optical Feshbach resonance [39]. These two are respectively shown in Figs. 3(a)-(b), thus again illustrating that an antiferromagnetic BEC has one ground-state phase in contrast to the ferromagnetic one. It can be seen that with a decrease in \(c_{1}\) (keeping \(c_{0}\) fixed) in the ferromagnetic phase, the domain of phase I increases, whereas with an increase in the number of atoms (keeping \(c_{1}/c_{0}\) fixed), it decreases. Phases I and II also have distinctive topological spin textures \(\mathbf{F}=(F_{x},F_{y},F_{z})\). For the solutions in Figs. 1(a) and (b), the spin textures are shown in Figs. 4(a) and (b), respectively. This allows the identification of phases I and II with the coreless vortex and polar-core vortex states, respectively [18].
## IV Collective excitation spectrum
To study the excitation spectrum, we exploit the innate circular symmetry of the Hamiltonian. To this end, we perform a local spin rotation about \(\hat{z}\) by the azimuthal angle -\(\phi\) to remove the \(\phi\)-dependence from the Hamiltonian. As a result, the order parameter \(\Psi=(\psi_{+1},\psi_{0},\psi_{-1})^{T}\) is transformed to \(e^{-iS_{x}\phi}\Psi=(e^{-i\phi}\psi_{+1},\psi_{0},e^{i\phi}\psi_{-1})^{T}\), and the transformed Hamiltonian takes form
\[H= \left[-\frac{1}{2}\frac{\partial}{r\partial r}\left(r\frac{ \partial}{\partial r}\right)+\frac{(L_{z}+S_{z})^{2}}{2r^{2}}+V(r)\right] \mathds{I}\] \[+\Omega(r)S_{x}+H_{\text{int}}, \tag{8}\]
where \(H_{\text{int}}=c_{0}\rho/2+c_{1}\mathbf{F.S}/2\). The Hamiltonian in Eq. (8) is circularly symmetric, and one can seek the simultaneous eigenfunctions of \(H\) and \(L_{z}\) with fixed angular momentum \(l_{z}=0,1,\ldots\). For example, the solutions presented in Figs. 1(a) and (b) can now be seen as corresponding to \(l_{z}=1\) and \(0\), respectively. We use the Bogoliubov approach to study the excitation spectrum, in which we consider fluctuations about the ground state by writing the perturbed order parameter as
\[\Psi(r,\phi,t)=e^{-i\mu t+i(l_{z}+S_{z})\phi}[\Psi_{\text{eq}}(r)+\delta\Psi(r,t)e^{il_{q}\phi}], \tag{9}\]
where \(\Psi_{\text{eq}}(r)=(R_{+1}(r),R_{0}(r),R_{-1}(r))^{T}\) is the radial part of the order parameter with \(R_{j}\) as the radial wavefunction corresponding to the \(j^{\text{th}}\) spin component, \(\mu\) is the chemical potential, and \(l_{q}=0,\pm 1,\pm 2,\ldots\) is the magnetic quantum number associated with the angular momentum of the quasiparticle excitations. The details of the Bogoliubov-de Gennes (BdG) analysis are given in the Appendix.
### Non-interacting system
To understand the effect of coupling strength, we first study the single-particle excitation spectrum. The ground-state solution has phase-winding numbers \((\pm 1,0)\) in \(j=\pm 1,0\) spin states, respectively. The excitation spectrum is shown in Fig. 5. For \(\Omega_{0}=0\), the \(n^{\text{th}}\) energy level is \(3(n+1)\)-fold degenerate, as the single-particle Hamiltonian is identical with a system of three decoupled isotropic two-dimensional harmonic oscillators. For example, excitations with energies \(0\) and \(1\) are three- and six-fold degenerate, respectively. The SOAM-coupling lifts the degeneracies partially. For example, for \(\Omega_{0}\neq 0\), there is only one zero energy excitation; similarly, the red lines in the spectrum in Fig. 5 correspond to non-degenerate excitations, whereas the black ones to two-fold degenerate modes. The non-degenerate modes have
Figure 3: (Color online) The ground-state phase diagrams in (a) \(c_{1}/c_{0}\)-\(\Omega_{0}\) and (b) \(N\)-\(\Omega_{0}\) planes. In (a) \(c_{0}\) was kept fixed at \(121.28\) while varying \(c_{1}\). In (b) \(c_{1}/c_{0}=-0.0046\) corresponding to \({}^{87}\)Rb.
the magnetic quantum number of the excitation \(l_{q}=0\), whereas modes with two-fold degeneracy have \(l_{q}\neq 0\).
### Interacting spin-1 BEC
Here we study the excitation spectrum (a) as a function of \(\Omega_{0}\) for fixed \(c_{0}\) and \(c_{1}\) and (b) as a function of \(N\) for fixed \(\Omega_{0}\) and \(c_{0}/c_{1}\) ratio. Both \(\Omega_{0}\) and \(N\) can be varied in an experiment [17; 18]. As was discussed in Sec. III, for \(c_{1}<0\), both phase I and II can appear as the ground-state phases with a variation of either \(\Omega_{0}\) or \(N\). We primarily consider \({}^{87}\)Rb BEC in the following discussion.
_Phase I:_ Here we consider \(c_{0}=121.18\) and \(c_{1}=-0.56\) and vary \(\Omega_{0}\). The excitation spectrum for phase I is shown in Fig. 6(a) for \(l_{q}=0,\pm 1\) and (b) for \(|l_{q}|\geq 2\). The modes with frequencies \(1\) and \(2\) are, respectively, dipole and density-breathing modes in Fig. 6(a). This identification of a mode is based on the real-time evolution of the expectation of a suitably chosen observable, as will be discussed in the next subsection. The presence of ferromagnetic interactions further aids the lifting of the degeneracy, in this case between the modes with magnetic quantum numbers \(\pm l_{q}\), which are degenerate at the single-particle level. We have confirmed this, for example, by examining the excitation spectrum of a system with \(c_{0}=121.18\) and \(c_{1}=-0.6c_{0}\ll-0.56\) (not shown here), where the non-degenerate nature of the spectrum is clearly seen. In phase I, there are two zero-energy Goldstone modes corresponding to two broken continuous symmetries, namely gauge and rotational symmetry. The latter corresponds to the symmetry transformation generated by \(L_{z}\).
_Phase II:_ As already mentioned in Sec. III, the transition from phase I to II occurs at \(\Omega_{0}>0.3\) for \(c_{0}=121.18\) and \(c_{1}=-0.56\). The transition is accompanied by the discontinuities in the excitation spectrum. The excitation spectrum for phase II is shown in Fig. 6(c). Here among the low-lying modes are dipole and breathing modes corresponding to both density and spin channels. Both density- and spin-dipole modes are doubly degenerate corresponding to magnetic quantum number \(l_{q}=\pm 1\). On the other hand, both density- and spin-breathing modes are non-degenerate with \(l_{q}=0\). At small values
Figure 5: (Color online) Single particle excitation spectrum for spin-1 BEC as a function of SOAM coupling strength \(\Omega_{0}\).
Figure 6: (Color online) Low-lying excitation spectrum of \({}^{87}\)Rb SOAM-coupled spin-1 BEC with \(c_{0}=121.18\) and \(c_{1}=-0.56\) as a function of coupling strength \(\Omega_{0}\) of phase I with \(l_{q}=0,\pm 1\) in (a), \(l_{q}=\pm 2,\pm 3,\pm 4,\ldots\) in (b), and of phase II in (c). Among the named modes, \(l_{q}=0\) for density- and spin-breathing, \(l_{q}=+1\) for density-dipole, \(l_{q}=-1\) for spin-dipole, \(l_{q}=+2\) for density-quadrupole, and \(l_{q}=-2\) for spin-quadrupole modes. In (a) and (c), the dashed (magenta-colored) line is the variational estimate for the density-breathing mode.
of \(\Omega_{0}\), the energies of the spin modes are less than their density-mode analogues. There is a single zero-energy mode due to the broken gauge symmetry in this phase. Besides these modes, the density- and spin-quadrupole modes are also marked in the excitation spectrum in Figs. 6(a)-(c).
Additionally, the variation in SOAM-coupling strength leads to avoided crossings between the pairs of excitations, a few of which are identified by the blue circles in Fig. 6(c). We observe that the avoided crossings occur between the density and spin oscillations associated with the same magnetic quantum number \(l_{q}\). In the vicinity of an avoided crossing, the roles of the density and spin modes are interchanged as shown in Fig. 6(c). We study this mode-mixing by examining the density (\(\delta\rho\)) and spin fluctuations (\(\delta F_{x},\delta F_{y},\delta F_{z}\)) yielded by the perturbed order parameter and defined as
\[\delta\rho= 2\text{Re}\sum_{j}\psi_{j}\delta\psi_{j}^{*}, \tag{10a}\] \[\delta F_{x}= \sqrt{2}\text{Re}(\psi_{+1}\delta\psi_{0}^{*}+\psi_{0}\delta\psi_ {+1}^{*}+\psi_{-1}\delta\psi_{0}^{*}+\] \[\psi_{0}\delta\psi_{-1}^{*}),\] (10b) \[\delta F_{y}= -\sqrt{2}\text{Im}(-\psi_{+1}\delta\psi_{0}^{*}+\psi_{0}\delta \psi_{+1}^{*}+\psi_{-1}\delta\psi_{0}^{*}-\] \[\psi_{0}\delta\psi_{-1}^{*}),\] (10c) \[\delta F_{z}= 2\text{Re}(\psi_{+1}\delta\psi_{+1}^{*}-\psi_{-1}\delta\psi_{-1 }^{*}). \tag{10d}\]
The order-parameter fluctuation \(\delta\Psi(r,\phi,t)\), and hence the density and spin fluctuations, can be constructed with the Bogoliubov quasiparticle amplitudes \(u\) and \(v\) corresponding to the frequency \(\omega\) of the mode as \(\delta\psi_{j}(r,\phi,t)\propto e^{i(l_{z}+S_{z}+l_{q})\phi}\left[u_{j}(r)e^{-i\omega t}-v_{j}^{*}(r)e^{i\omega t}\right]\). In the excitation spectrum in Fig. 6(c) at \(\Omega_{0}=1\), the density- and spin-dipole modes' frequencies are \(\omega_{\text{D}}=1\) and \(\omega_{\text{SD}}=0.08\), respectively, and the density- and spin-breathing modes' frequencies are \(\omega_{\text{B}}=1.97\) and \(\omega_{\text{SB}}=0.37\), respectively. One can see that the density-dipole, density-breathing, and spin-dipole modes encounter avoided crossings, whereas the spin-breathing mode does not. This observation agrees with the density and spin-density fluctuations evaluated along the \(\phi=0\) line and shown in Figs. 7(A)-(D). For the density-dipole mode with \(\omega_{\text{D}}=1\), both density and spin channels are excited, as is seen from \(\delta\rho(r,\phi=0,t)\) and \(\delta F_{\nu}(r,\phi=0,t)\) in Fig. 7(A), where \(\nu=x,y,z\). Similarly, the number density and the longitudinal and transverse magnetization densities oscillate in time for the spin-dipole mode in Fig. 7(B), and the density-breathing mode excites both the number and transverse magnetization densities in Fig. 7(C). On the other hand, the spin-breathing mode excites the spin channel alone in Fig. 7(D). The density- and spin-quadrupole modes also excite both the density and spin fluctuations, which are not shown. The nomenclature of the modes in Figs. 6(a)-(c) is consistent with the density, \(\delta\rho(x,y,t)\), and longitudinal magnetization density, \(\delta F_{z}(x,y,t)\), fluctuations corresponding to the dipole, breathing, and quadrupole modes in Fig. 8, shown at the instants \(t=0,T/4,T/2,3T/4\), and \(T\), where \(T\) is the period of the collective excitation.
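As an illustration of how profiles such as those in Fig. 7 can be assembled from the BdG output, the following sketch builds the density and longitudinal-spin fluctuations of Eqs. (10a) and (10d) from the radial Bogoliubov amplitudes along the \(\phi=0\) cut. The function and variable names (`fluctuation_profiles`, the dictionaries `R`, `u`, `v`) are illustrative assumptions, not part of the numerical code used in this work.

```python
import numpy as np

def fluctuation_profiles(R, u, v, omega, t, eps=1e-3):
    """Density and longitudinal-spin fluctuation profiles along the phi = 0 cut
    (Eqs. (10a) and (10d)); all azimuthal phase factors equal 1 on this cut.

    R, u, v : dicts keyed by j = +1, 0, -1 holding the radial arrays
              R_j(r), u_j(r), v_j(r) of the equilibrium state and of the mode.
    """
    # delta_psi_j ~ u_j e^{-i w t} - v_j^* e^{+i w t}, scaled by a small amplitude
    delta_psi = {j: eps * (u[j] * np.exp(-1j * omega * t)
                           - np.conj(v[j]) * np.exp(1j * omega * t))
                 for j in (+1, 0, -1)}
    # Eq. (10a): delta_rho = 2 Re sum_j psi_j delta_psi_j^*
    delta_rho = 2 * np.real(sum(R[j] * np.conj(delta_psi[j]) for j in (+1, 0, -1)))
    # Eq. (10d): delta_Fz = 2 Re (psi_+1 delta_psi_+1^* - psi_-1 delta_psi_-1^*)
    delta_Fz = 2 * np.real(R[+1] * np.conj(delta_psi[+1]) - R[-1] * np.conj(delta_psi[-1]))
    return delta_rho, delta_Fz
```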
Next, we study the excitation spectrum as a function of \(N\) for \(c_{1}/c_{0}=-0.0046\). Here, first, we fix \(\Omega_{0}\) to \(0.3\), where a phase transition from phase I to II occurs at \(N=5700\). The excitation spectra for phases I and II in this case are shown in Figs. 9(a) and (b). The same for \(\Omega_{0}=3\) is shown in Fig. 9(c), where phase II is the ground-state phase with no phase transition. The modes in phase II are, again, either non-degenerate or two-fold degenerate. For the SOAM-coupled \({}^{23}\)Na BEC with \(c_{0}=121.35\) and \(c_{1}=3.8\), the excitation spectrum, which is not shown here, is similar to the spectrum in Fig. 9(c) with some quantitative differences attributable to the different \(c_{1}\) values.
### Dynamics
We examine the nature of the low-lying collective excitations through the time evolution of the expectation values of physical observables, which also serves to validate our calculation of the excitation spectrum from the BdG equations. Here, we consider the Hamiltonian with an appropriately chosen time-independent perturbation, say \(H_{\text{s}}^{\prime}\), added to its single-particle part \(H_{\text{s}}\). This modifies the coupled GP Eqs. (3a)-(3b) with an added term corresponding to \(H_{\text{s}}^{\prime}\Psi(r,\phi,t)\) in each equation. We then solve these resultant GPEs over a finite period of time by considering previously obtained ground-state solutions as the initial
Figure 7: (Color online) (A) shows the density fluctuations, \(\delta\rho(r,\phi=0,t)\), and spin-density fluctuations, \(\delta F_{\nu}(r,\phi=0,t)\), with \(\nu=x,y,z\), corresponding to \(\omega_{\text{D}}=1\). (B)-(D) present the same for \(\omega_{\text{SD}}=0.08\), \(\omega_{\text{B}}=1.97\), and \(\omega_{\text{SB}}=0.37\), respectively. The radial and time extents in each subfigure are \(4a_{\text{osc}}\) and \(5T\), respectively, where \(T=2\pi/\omega\) is the time period of the corresponding mode with frequency \(\omega\).
solutions at \(t=0\). Numerically, this requires a two-dimensional spatial grid, for which we choose a Cartesian \(x\)-\(y\) grid.
We consider \(c_{0}=121.28\), \(c_{1}=-0.56\), and \(\Omega_{0}=1\), which yielded the ground-state phase in Fig. 1(b), as an example set of parameters to study the dynamics. To excite the density-dipole mode, we take the perturbation \(H_{s}^{\prime}=\lambda x\), where \(\lambda\ll 1\). We then examine the dynamics of the center of mass of the BEC via \(x_{\rm cm}(t)=\langle x\rangle=\sum_{j=\pm 1,0}\int x\rho_{j}(x,y,t)dxdy\), which is plotted in Fig. 10(a). We also compute its Fourier transform \(\widehat{x}_{\rm cm}(\omega)\) to demonstrate that the dominant frequency resonates at \(\omega=1\), as can be seen in Fig. 10(b), and matches \(\omega_{\rm D}=1\) in the BdG spectrum in Fig. 6(c). We could have chosen \(H_{s}^{\prime}=\lambda y\) and then calculated \(y_{\rm cm}(t)\), giving us the same excitation frequency. This is a consequence of the two-fold degeneracy of the density-dipole mode. We have checked that this mode can also be excited by shifting the minima of the external trapping potential. Similarly, to examine the excitation of the density-breathing mode with \(H_{s}^{\prime}=\lambda(x^{2}+y^{2})\), where the relevant observable is \(r^{2}=x^{2}+y^{2}\), we calculate the mean-square radius \(r_{\rm ms}^{2}(t)=\langle r^{2}\rangle\) as a function of time, which is plotted in Fig. 10(c). The Fourier transform \(\widehat{r_{\rm ms}^{2}}(\omega)\) of \(r_{\rm ms}^{2}(t)\) reveals a dominant peak at \(\omega=1.99\), which is close to the BdG result of \(\omega_{\rm B}=1.97\). This mode, again, can be excited by perturbing the trap strength. Similarly, the spin-dipole mode can be excited by adding a perturbation \(H_{s}^{\prime}=\lambda xS_{z}\) or \(\lambda yS_{z}\), with \(xS_{z}\) or \(yS_{z}\) as the pertinent observable corresponding to the spin-dipole mode. The two possible observables again reflect the two-fold degeneracy of the spin-dipole modes. The time variation of \(d_{x}(t)=\langle xS_{z}\rangle=\sum_{j=\pm 1}j\int x\rho_{j}(x,y,t)dxdy\) is shown in Fig. 11(a), and its Fourier transform in Fig. 11(b) has a dominant peak at \(\omega=0.1\), which corresponds to the spin-dipole mode labeled in Fig. 6(c) with \(\omega_{\rm SD}=0.08\). Similarly, the spin-breathing mode corresponds to the observable \(r^{2}S_{z}\). In Figs. 11(c) and (d), we show the dynamics of \(d_{r}^{2}(t)=\langle r^{2}S_{z}\rangle\), i.e. the relative difference in the mean-square radii of the \(j=\pm 1\) components, and the associated Fourier transform, respectively, with a dominant peak at \(\omega=0.37\), in agreement with \(\omega_{\rm SB}\) in Fig. 6(c). Finally, the density- and spin-quadrupole modes' frequencies calculated from the time evolution of \(\langle xy\rangle\) and \(\langle xyS_{z}\rangle\) are in agreement with the numbers in Fig. 6(c).
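A minimal sketch of how a mode frequency can be read off from such a time series, e.g. \(x_{\rm cm}(t)\) or \(r^{2}_{\rm ms}(t)\), is given below; the name `dominant_frequency` and the assumption of a uniformly sampled real signal are ours.

```python
import numpy as np

def dominant_frequency(signal, dt):
    """Return the dominant angular frequency of a real, uniformly sampled
    time series such as x_cm(t) or r_ms^2(t)."""
    sig = np.asarray(signal) - np.mean(signal)      # remove the static offset
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=dt)         # ordinary frequencies f
    f_peak = freqs[np.argmax(spectrum[1:]) + 1]     # skip the zero-frequency bin
    return 2 * np.pi * f_peak                       # angular frequency omega
```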
### Variational analysis
For an SOAM-coupled spin-1 system, a few low-lying modes can be studied using a time-dependent variational
Figure 9: (Color online) Low-lying excitation spectrum for \({}^{87}\)Rb spin-1 BEC with \(c_{1}/c_{0}=-0.0046\) as a function of the number of atoms \(N\): (a)-(b) for \(\Omega_{0}=0.3\) with a phase transition from phase I to II at \(N=5700\) and (c) \(\Omega_{0}=3\). (a) corresponds to the spectrum of phase I, whereas (b) and (c) correspond to the spectrum of phase II. The different colors in (a) signify non-degenerate modes with different \(l_{q}\), while in (b) and (c), red, black, green, blue, and brown colors correspond, respectively, to the modes with \(l_{q}=0,\pm 1,\pm 2\), \(\pm 3\) and \(\pm 4\).
method [40]. For example, to calculate the density-breathing mode, we consider the following variational ansatz
\[\Psi=\frac{r}{2\sqrt{\pi}\sigma(t)^{2}}\exp\left[-\frac{r^{2}}{2\sigma(t)^{2}}+i \alpha(t)r^{2}\right]\times\begin{pmatrix}e^{i(m+1)\phi}\\ -\sqrt{2}e^{im\phi}\\ e^{i(m-1)\phi}\end{pmatrix} \tag{11}\]
where \(\sigma(t)\) and \(\alpha(t)\) are time-dependent variational parameters denoting the width of the condensate and the chirp of the Gaussian profile, respectively, and \(m=\pm 1\) for phase I or \(m=0\) for phase II. The Lagrangian of the system is given by
\[L=\sum_{j}\int drd\phi\frac{i}{2}\left(\psi_{j}^{*}\frac{\partial\psi_{j}}{ \partial t}-\psi_{j}\frac{\partial\psi_{j}^{*}}{\partial t}\right)-E, \tag{12}\]
where energy \(E\) is defined as
\[E= \int_{0}^{\infty}\int_{0}^{2\pi}\left[\sum_{j}\psi_{j}^{*}\left(- \frac{1}{2}\frac{\partial}{r\partial r}\left(r\frac{\partial}{\partial r} \right)+\frac{L_{x}^{2}}{2r^{2}}+\frac{r^{2}}{2}\right)\psi_{j}\right.\] \[\left.+\frac{c_{0}}{2}\rho^{2}+\frac{c_{1}}{2}(\rho_{1}+\rho_{0} -\rho_{-1})\rho_{-1}+\frac{c_{1}}{2}(\rho_{1}+\rho_{-1})\rho_{0}+\right.\] \[\left.c_{1}(\psi_{-1}^{*}\psi_{0}^{2}\psi_{1}^{*}+\psi_{-1}\psi_ {0}^{2*}\psi_{1})+\frac{\Omega(r)}{\sqrt{2}}(\psi_{1}^{*}e^{i\phi}\psi_{0}+\right.\] \[\left.\psi_{1}e^{-i\phi}\psi_{0}^{*}+\psi_{0}^{*}e^{i\phi}\psi_{- 1}+\psi_{-1}^{*}e^{i\phi}\psi_{0})\right]drd\phi. \tag{13}\]
For \(m=\pm 1\), the (coupled) Euler-Lagrange equations are
\[\ddot{\sigma}(t)= \frac{\sigma}{2}\left(\frac{6\sqrt{2\pi}\sqrt{\epsilon}\Omega_{0}\sqrt{\frac{1}{r_{0}^{2}}+\frac{2}{\sigma^{2}}}\left(r_{0}^{7}-2r_{0}^{5}\sigma^{2}\right)}{\left(2r_{0}^{2}+\sigma^{2}\right)^{4}}-2\right)\] \[+\frac{c_{0}+c_{1}+10\pi}{8\pi\sigma^{3}}, \tag{14a}\] \[\alpha= \frac{\dot{\sigma}}{2\sigma}, \tag{14b}\]
where \(\dot{}\) denotes the time derivative. The equilibrium width \(\sigma_{0}\) of the condensate satisfies
\[\frac{c_{0}+c_{1}+10\pi}{4\pi\sigma_{0}^{4}}+\frac{6\sqrt{2\pi}\sqrt{ \epsilon}\Omega_{0}\sqrt{\frac{1}{r_{0}^{2}}+\frac{2}{\sigma_{0}^{2}}}\left( r_{0}^{7}-2r_{0}^{5}\sigma_{0}^{2}\right)}{\left(2r_{0}^{2}+\sigma_{0}^{2} \right)^{4}}=2.\]
The frequency of the oscillation in width calculated by linearizing Eq. (14a) about equilibrium width \(\sigma_{0}\) is
\[\omega_{\rm B}^{\rm I}=\left[\frac{15\sqrt{2\pi}r_{0}^{4}\sqrt{ \epsilon}\sigma_{0}\Omega_{0}(3r_{0}^{2}-2\sigma_{0}^{2})\sqrt{2r_{0}^{2}+ \sigma_{0}^{2}}+1}{(2r_{0}^{2}+\sigma_{0}^{2})^{5}}+\right.\] \[\left.\frac{3(c_{0}+c_{1}+10\pi)}{8\pi\sigma_{0}^{4}}\right]^{1/2}. \tag{15}\]
Similarly, for \(m=0\) in Eq. (11), the density breathing mode is
\[\omega_{\rm B}^{\rm II}= \left[\frac{15\sqrt{2\pi}r_{0}^{4}\sqrt{\epsilon}\sigma_{0} \Omega_{0}(3r_{0}^{2}-2\sigma_{0}^{2})\sqrt{2r_{0}^{2}+\sigma_{0}^{2}}+1}{(2r _{0}^{2}+\sigma_{0}^{2})^{5}}+\right.\] \[\left.\frac{3(c_{0}+c_{1}+6\pi)}{8\pi\sigma_{0}^{4}}\right]^{1/2}. \tag{16}\]
The variationally calculated density-breathing mode's frequency agrees with the values in the BdG spectrum as demonstrated in Figs. 6(a) and (c) for phases I and II, respectively.
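The variational estimates quoted above can also be reproduced numerically without repeating the algebra: given the right-hand side of the width equation (14a) for a chosen set of \(c_{0}\), \(c_{1}\), \(\Omega_{0}\), and \(r_{0}\), the equilibrium width follows from a root search and the breathing frequency from linearization about it. The sketch below assumes the root is bracketed in the supplied interval; `breathing_frequency` is an illustrative name.

```python
import numpy as np
from scipy.optimize import brentq

def breathing_frequency(f, sigma_min=0.1, sigma_max=10.0, h=1e-5):
    """Variational estimate of the breathing mode.

    f : callable, right-hand side of the width equation sigma_ddot = f(sigma)
        (Eq. (14a) evaluated with the chosen c0, c1, Omega_0, r_0).
    Returns (sigma_0, omega_B): the equilibrium width (f(sigma_0) = 0) and the
    oscillation frequency from linearization, omega_B = sqrt(-f'(sigma_0)).
    """
    sigma_0 = brentq(f, sigma_min, sigma_max)              # equilibrium width
    f_prime = (f(sigma_0 + h) - f(sigma_0 - h)) / (2 * h)  # numerical derivative
    return sigma_0, np.sqrt(-f_prime)
```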
Figure 11: (Color online) (a) shows \(d_{x}(t)\) as a function of time and (b) the corresponding Fourier transform with a dominant peak at \(\omega=0.1\) for \({}^{87}\)Rb spin-1 BEC with \(c_{0}=121.28\), \(c_{1}=-0.56\), and \(\Omega_{0}=1\). Similarly, (c) and (d) show \(d_{r}^{2}(t)\) and its Fourier transform with a dominant peak at \(\omega=0.37\) for the same interaction and coupling strengths.
Figure 10: (Color online) (a) shows the center of mass oscillations, i.e. \(x_{\rm cm}(t)\) as a function of time and (b) corresponding Fourier transform with a dominant peak at \(\omega=1\) for \({}^{87}\)Rb spin-1 BEC with \(c_{0}=121.28\), \(c_{1}=-0.56\), and \(\Omega_{0}=1\). (c) shows the oscillations in the mean square size of the system \(r_{\rm ms}^{2}(t)\) and (d) the corresponding Fourier transform with a dominant peak at \(\omega=1.99\) for the same interaction and coupling strengths.
Summary and Conclusions
We have investigated the low-lying collective excitations of the coreless vortex and the polar-core vortex phases supported by the spin-1 BECs with SOAM coupling. The existence of the two phases is seen in the full phase diagrams in the _ratio of interaction strengths_ versus _coupling strength_ and also the _number of atoms_ versus _coupling strength_ planes. We have studied the excitation spectrum as a function of two experimentally controllable parameters, namely the coupling strength and the number of atoms. The excitation spectra are characterized by discontinuities across the phase boundary between the two phases and, within a phase, by avoided crossings between the modes with the same magnetic quantum number of excitations. The avoided crossings signal the hybridization of the density and spin channels; the nature of spin and density fluctuations has indeed confirmed this. Among the low-lying modes, we identify dipole, breathing, and quadrupole modes for density and spin channels. The frequencies of these named modes are further validated from the time evolution of the expectation values of the physical observables when an appropriate time-independent perturbation is added to the system's Hamiltonian. An analytic estimate for the density-breathing modes has also been obtained using the variational analysis.
###### Acknowledgements.
AR acknowledges the support of the Science and Engineering Research Board (SERB), Department of Science and Technology, Government of India under the project SRG/2022/000057 and IIT Mandi seed-grant funds under the project IITM/SG/AR/87. AR acknowledges National Supercomputing Mission (NSM) for providing computing resources of PARAM Himalaya at IIT Mandi, which is implemented by C-DAC and supported by the Ministry of Electronics and Information Technology (MeitY) and Department of Science and Technology (DST), Government of India. S.G. acknowledges support from the Science and Engineering Research Board, Department of Science and Technology, Government of India through Project No. CRG/2021/002597.
**Bogoliubov-de Gennes (BdG) analysis:** The fluctuation \(\delta\Psi(r,t)\) to the equilibrium order parameter in Eq. (9) is \(\delta\Psi(r,t)=u(r)e^{-i\omega t}-v^{*}(r)e^{i\omega t}\), where \(u(r)\) and \(v(r)\) are Bogoliubov amplitudes and \(\omega\) is the excitation frequency. Linearization of the three coupled Gross-Pitaevskii Eqs. (3a)-(3b) and the conjugate set of equations using the perturbed
order parameter in Eq. (9) yields the following six coupled BdG equations:
\[\omega u_{+1} = \left[-\frac{\nabla_{r}^{2}}{2}+\frac{r^{2}}{2}+\delta+\frac{(l_{q}+ l_{z}+1)^{2}}{2}-\mu+c_{0}R_{+1}^{2}+c_{1}(2R_{+1}^{2}+R_{0}^{2}-R_{-1}^{2}) \right]u_{+1} \tag{17a}\] \[+\left[\frac{\Omega(r)}{\sqrt{2}}+R_{+1}R_{0}(c_{0}+c_{1})+2c_{1} R_{0}R_{-1}\right]u_{0}+R_{+1}^{2}(c_{0}+c_{1})v_{+1}+R_{+1}R_{0}(c_{0}+c_{1})v_{0}\] \[+R_{+1}R_{-1}(c_{0}-c_{1})u_{-1}+(R_{+1}R_{-1}(c_{0}-c_{1})+2c_{1} R_{0}^{2})v_{-1},\] \[-\omega v_{+1} = \left[-\frac{\nabla_{r}^{2}}{2}+\frac{r^{2}}{2}+\delta+\frac{(l_{ q}+l_{z}+1)^{2}}{2}-\mu+c_{0}R_{+1}^{2}-c_{1}(2R_{+1}^{2}+R_{0}^{2}-R_{-1}^{2}) \right]v_{+1}\] (17b) \[+R_{+1}R_{0}(c_{0}+c_{1})v_{+1}+(c_{0}R_{0}^{2}+2c_{1}R_{+1}R_{-1} )v_{0}+\left[\frac{\Omega(r)}{\sqrt{2}}+R_{0}R_{-1}(c_{0}+c_{1})-2c_{2}R_{+1}R _{0}\right]u_{-1}\] \[+R_{+1}R_{-1}(c_{0}+c_{1})v_{-1}\] \[-\omega v_{0} = \left[-\frac{\nabla_{r}^{2}}{2}+\frac{r^{2}}{2}+\frac{(l_{q}+l_{z })^{2}}{2}-\mu+c_{0}R_{0}^{2}-c_{1}(R_{+1}^{2}+R_{-1}^{2})\right]v_{0}+\left[ \frac{\Omega(r)}{\sqrt{2}}-R_{+1}^{2}(c_{0}+c_{1})\right]v_{+1}\] (17d) \[+R_{+1}R_{-1}(c_{0}+c_{1})u_{-1}\] \[\omega u_{-1} = \left[-\frac{\nabla_{r}^{2}}{2}+\frac{r^{2}}{2}-\delta+\frac{(l_{ q}+l_{z}-1)^{2}}{2}-\mu+c_{0}R_{-1}^{2}+c_{1}(2R_{-1}^{2}+R_{0}^{2}-R_{+1}^{2}) \right]u_{-1}\] (17e) \[+\left[\frac{\Omega(r)}{\sqrt{2}}+R_{0}R_{-1}(c_{0}+c_{1})+2c_{1} R_{+1}R_{0}\right]u_{0}+(c_{0}-c_{1})R_{+1}R_{-1}u_{+1}\] \[+(R_{+1}R_{-1}(c_{0}-c_{1})+2c_{1}R_{0}^{2})v_{+1}+R_{-1}^{2}(c_{ 0}+c_{1})v_{-1}+R_{+1}R_{0}(c_{0}+c_{1})v_{0}\] \[-\omega v_{-1} = \big{[}-\frac{\nabla_{r}^{2}}{2}+\frac{r^{2}}{2}-\delta+\frac{(l_ {q}+l_{z}-1)^{2}}{2}-\mu+c_{0}R_{-1}^{2}+c_{1}(2R_{-1}^{2}+R_{0}^{2}-R_{+1}^{2}) \big{]}v_{-1}\] (17f) \[+\big{[}\frac{\Omega(r)}{\sqrt{2}}+R_{0}R_{-1}(c_{0}+c_{1})+2c_{1 }R_{+1}R_{0}\big{]}v_{0}+(c_{0}-c_{1})R_{+1}R_{-1}u_{+1}\] \[+(R_{+1}R_{-1}(c_{0}-c_{1})+2c_{1}R_{0}^{2})v_{+1}+R_{-1}^{2}(c_{ 0}+c_{1})v_{-1}+R_{+1}R_{0}(c_{0}+c_{1})v_{0}\] \[-\omega v_{-1} = \big{[}-\frac{\nabla_{r}^{2}}{2}+\frac{r^{2}}{2}-\delta+\frac{(l_ {q}+l_{z}-1)^{2}}{2}-\mu+c_{0}R_{-1}^{2}+c_{1}(2R_{-1}^{2}+R_{0}^{2}-R_{+1}^{2}) \big{]}v_{-1}\] (17g) \[+\big{[}\frac{\Omega(r)}{\sqrt{2}}+R_{0}R_{-1}(c_{0}+c_{1})+2c_{1 }R_{+1}R_{0}\big{]}v_{0}+(c_{0}-c_{1})R_{+1}R_{-1}v_{+1}\] \[+(R_{+1}R_{-1}(c_{0}-c_{1})+2c_{1}R_{0}^{2})u_{+1}+R_{-1}^{2}(c_{ 0}+c_{1})u_{-1}+R_{+1}R_{0}(c_{0}+c_{1})u_{0}\]
where \(-\nabla_{r}^{2}/2=-\partial^{2}/(2\partial r^{2})-\partial/(2r\partial r)\), and \(l_{z}=1\) for phase I and \(0\) for phase II. To solve the coupled Eqs. (17a)-(17g), we use the finite-difference method to discretize these equations over the spatial radial grid [41], thus transforming the BdG equations into a matrix eigenvalue problem which can be solved using standard matrix diagonalization subroutines.
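The finite-difference step described above can be illustrated schematically, reduced for brevity to a single-component (scalar) BdG problem: the radial derivative operators become matrices, the \(u\) and \(v\) amplitudes are stacked into one vector, and the resulting non-Hermitian block matrix is diagonalized; the spin-1 case is handled analogously with a \(6N\times 6N\) matrix. The function names and the simple uniform grid below are assumptions of this sketch, not the actual code used here.

```python
import numpy as np
from scipy.linalg import eig

def radial_laplacian(r):
    """Second-order finite-difference matrix for d^2/dr^2 + (1/r) d/dr on a uniform grid."""
    n, dr = len(r), r[1] - r[0]
    D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / dr**2
    D1 = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dr)
    return D2 + np.diag(1.0 / r) @ D1

def bdg_matrix(r, R, mu, c, l_eff):
    """Scalar analogue of the BdG block matrix:
    +omega u = (h0 + 2 c R^2) u - c R^2 v,   -omega v = (h0 + 2 c R^2) v - c R^2 u."""
    h0 = -0.5 * radial_laplacian(r) + np.diag(0.5 * r**2 + 0.5 * l_eff**2 / r**2 - mu)
    A = h0 + np.diag(2 * c * R**2)
    B = np.diag(c * R**2)
    return np.block([[A, -B], [B, -A]])   # eigenvalues give the mode frequencies

# omegas, _ = eig(bdg_matrix(r, R, mu, c, l_eff))
```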
|
2306.03315 | Few Shot Rationale Generation using Self-Training with Dual Teachers | Self-rationalizing models that also generate a free-text explanation for
their predicted labels are an important tool to build trustworthy AI
applications. Since generating explanations for annotated labels is a laborious
and costly pro cess, recent models rely on large pretrained language models
(PLMs) as their backbone and few-shot learning. In this work we explore a
self-training approach leveraging both labeled and unlabeled data to further
improve few-shot models, under the assumption that neither human written
rationales nor annotated task labels are available at scale. We introduce a
novel dual-teacher learning framework, which learns two specialized teacher
models for task prediction and rationalization using self-training and distills
their knowledge into a multi-tasking student model that can jointly generate
the task label and rationale. Furthermore, we formulate a new loss function,
Masked Label Regularization (MLR) which promotes explanations to be strongly
conditioned on predicted labels. Evaluation on three public datasets
demonstrate that the proposed methods are effective in modeling task labels and
generating faithful rationales. | Aditya Srikanth Veerubhotla, Lahari Poddar, Jun Yin, György Szarvas, Sharanya Eswaran | 2023-06-05T23:57:52Z | http://arxiv.org/abs/2306.03315v1 | # Few Shot Rationale Generation using Self-Training with Dual Teachers
###### Abstract
Self-rationalizing models that also generate a free-text explanation for their predicted labels are an important tool to build trustworthy AI applications. Since generating explanations for annotated labels is a laborious and costly process, recent models rely on large pretrained language models (PLMs) as their backbone and few-shot learning. In this work we explore a self-training approach leveraging both labeled and unlabeled data to further improve few-shot models, under the assumption that neither human written rationales nor annotated task labels are available at scale. We introduce a novel dual-teacher learning framework, which learns two specialized teacher models for task prediction and rationalization using self-training and distills their knowledge into a multi-tasking student model that can jointly generate the task label and rationale. Furthermore, we formulate a new loss function, Masked Label Regularization (MLR) which promotes explanations to be strongly conditioned on predicted labels. Evaluation on three public datasets demonstrate that the proposed methods are effective in modeling task labels and generating faithful rationales.
\({}^{1}\)Language Technologies Institute, Carnegie Mellon University
[email protected]
\({}^{2}\)Amazon
{poddarl, jnyin, szarvass, sharanye}@amazon.com
## 1 Introduction
Interpretable NLP has emerged to learn models which explain their predictions through either extractive DeYoung et al. (2020) or natural language explanations Camburu et al. (2018); Narang et al. (2020); Wiegreffe et al. (2020). Due to higher expressivity of free text, generative self-rationalizing models have gained much research interest. However, the early works assume a fully supervised setup and require a large annotated dataset Narang et al. (2020). Collecting large scale, manual annotations for task labels and corresponding explanations is challenging and expensive. On the other hand, a much larger unlabeled corpora is often available, making semi-supervised approaches like few-shot learning Brown et al. (2020) and self-training He et al. (2019) attractive solutions. In the context of self-rationalizing models, Marasovic et al. (2022) explore few-shot learning, while Zelikman et al. (2022) seek to improve a supervised labeler by augmenting it with rationale generation. In this work we start from a few-shot setup, assuming only a handful of examples available with their labels and hand-written rationale. We leverage a large unlabeled dataset and self-training techniques to improve over the simple few-shot model.
We hypothesize that, using only a few examples, learning to generate meaningful explanations _jointly_ with predicting the labels themselves is a particularly challenging objective, and self-training can suffer from a weak initial model. To address this, we propose a novel Dual Teacher learning approach to learn a self-rationalizing model from the two teacher models in a cascading manner. At first, a Predictor model is learned for predicting task labels, and then a Rationalizer model is learned to generate an explanation conditioned on an input and the task labels predicted by the Predictor model. We iteratively improve both models via self-training. In contrast to learning the Joint model directly, the Rationalizer model allows for much richer representation learning by moving the label information from the decoder to the encoder, and utilizing the encoder's self-attention mechanism to extract input-label correlations. A stronger few-shot model for rationale generation provides higher quality pseudo labels, consequently making self-training more effective.
Although the two conditional models (Predictor and Rationalizer) might be better performing, a single self-rationalizing model is still desirable for practical applications, due to its ease-of-maintenance and parameter efficiency for faster inference. We apply principles from knowledge distillation Hinton et al. (2015); Kim and Rush (2016) on the two conditional models to learn a
joint model that generates task label and explanation as a single sequence. The teacher models are used for generating pseudo labels on the entire unlabeled dataset. The initial few-shot labeled data and the pseudo labeled dataset are finally combined to train the joint model.
Faithfulness of explanations is an imperative property for practical applications of interpretability analysis. A model-generated explanation is considered faithful if it accurately explains the decision making of the model Alvarez Melis and Jaakkola (2018); Wiegreffe et al. (2020). Similar to prior study Jacovi and Goldberg (2020), we also observe that a free text explanation generated by models might sound _plausible_, without satisfying the _faithfulness_ criterion of explaining the predicted task label. This motivates us to design a masking-based regularization function, Masked Label Regularizer (MLR), to encourage the model to condition on the task label while generating an explanation. MLR is an entropy-based constraint that forces the Rationalizer model to be maximally uncertain in generating an explanation in the absence of label tokens, and is used to ensure that the Rationalizer model preserves faithfulness through the self-training iterations. To summarize, our contributions are:
* Proposing to utilize self-training for learning self-rationalizing models with free-text explanations, demonstrating that it provides significant performance boost compared to few-shot learning.
* Proposing a novel Dual Teacher framework, where two teacher models are trained with self-training in a cascading manner for learning two tasks, and a multi-task joint student model is learned through distillation from the teachers.
* Extensively studying the faithfulness property of free-text explanations, and designing an entropy based regularization to encourage label-explanation conditioning.
* Experiments on three public benchmark datasets and demonstrating the effectiveness of our proposed model in improving both task accuracy and explanation quality.
## 2 Related Work
Prior works on generating free text rationales have explored joint models Narang et al. (2020); Marasovic et al. (2022) as well as several variants of pipeline models Wiegreffe et al. (2020); Jang and Lukasiewicz (2021). We also use sequence to sequence models Raffel et al. (2019) as our backbone models. While most of the self-rationalizing literature assumes fully supervised setups, STaR Zelikman et al. (2022) explores an alternate bootstrapping setup where limited rationales are available, but the task labels are present for the whole dataset. We consider the generic and more restrictive setting where only limited annotations are available for both task label and rationale.
For limited labeled data scenario, many NLP applications have started reporting success with self-training Mehta et al. (2022); Yu et al. (2022); He et al. (2019); Bhat et al. (2021). Inspired from these works, we employ self-training to the self-rationalization problem. We introduce a new training framework with two conditional models and using them as teachers in a further distillation step to train the joint model. Besides the popular use for model compression, Knowledge Distillation has also shown superior performance when using the same model architecture and size for both the student and teacher models Furlanello et al. (2018), and distilling from multiple teachers Yuan et al. (2021); Liu et al. (2020). Recently, a work Ghiasi et al. (2021) in computer vision domain has explored using pseudo-labels from multiple teachers to train a joint student model. However, they have multiple specialized teachers trained independently through full supervision, in contrast to the cascading nature of our dual teacher self-training setup.
Evaluating the quality of free-text rationales is significantly challenging and several works have proposed metrics to evaluate the explanations around fluency and their faithfulness properties Hase and Bansal (2020); Hase et al. (2020); Marasovic et al. (2022). A recent work Wang et al. (2022) also tries to imbue faithfulness through a regularizing coefficient. However, they apply the regularizer to perturb the rationale while generating task label. In contrast we use a label masking regularizer to enforce the Rationalizer model to generate an explanation which is faithful to the label.
## 3 Background
We first provide some necessary background on Self-Rationalizing models and a theoretical outline of Self Training based learning.
**Self-Rationalization**: A Self-Rationalization model tries to learn the joint distribution of output(\(O\)) and explanation(\(E\)), given an input(\(I\)),
i.e. \(P(O,E|I)\). A common approach is modeling it as a sequence-to-sequence problem and generating the task prediction and the rationale jointly (Narang et al., 2020). Input-output format for a self rationalizing joint model is illustrated in Figure 1. The input consists of a task prompt, (e.g. explain nli), and in output sequence the task label is generated first (e.g. contradiction), followed by a separator token (explanation:), and then the free text explanation. During inference, greedy decoding is used to generate the sequence until an EOS token is produced.
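For concreteness, a small helper showing how one e-SNLI example could be serialized into this text-to-text format; the exact field markers (`premise:`, `hypothesis:`) are assumptions made for illustration and should be taken from the actual preprocessing used.

```python
def format_joint_example(premise, hypothesis, label=None, explanation=None):
    """Serialize one e-SNLI example into the joint model's text-to-text format:
    task prompt in the input, '<label> explanation: <free text>' as the target."""
    source = f"explain nli premise: {premise} hypothesis: {hypothesis}"
    target = None
    if label is not None:
        target = f"{label} explanation: {explanation}" if explanation else label
    return source, target
```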
**Self-Training** is a type of Semi-Supervised Learning based method, which assumes access to a small labeled dataset (\(D_{l}\)) and a large, unlabeled in-domain dataset (\(D_{u}\)). The algorithm progresses iteratively in four steps. First, a teacher model is trained on the labeled dataset (\(D_{l}\)), to obtain \(\theta^{T}\). The trained teacher is then used to infer _pseudo-labels_ on \(D_{u}\), generating the _pseudo-labeled_ dataset \(D_{pl}\). A student model is then trained on \(D_{pl}\) to obtain the \(\theta^{S}\). In the next iteration the teacher model is updated with the learned parameters from the student and the process repeats until a convergence criterion is met.
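The four steps can be written as a short generic loop. This is only a sketch: `train_fn`, `pseudo_label_fn`, and `score_fn` are placeholders for fine-tuning the seq2seq model, generating pseudo-labels, and evaluating on the validation set, and the stopping criterion shown is one possible choice.

```python
def self_training(labeled, unlabeled, dev, train_fn, pseudo_label_fn, score_fn, n_iters=5):
    """Generic self-training loop; labeled/unlabeled/dev are lists of examples."""
    teacher = train_fn(labeled)                       # step 1: teacher on D_l
    best = score_fn(teacher, dev)
    for _ in range(n_iters):
        pseudo = pseudo_label_fn(teacher, unlabeled)  # step 2: pseudo-labels on D_u
        student = train_fn(labeled + pseudo)          # step 3: student on labeled + pseudo data
        score = score_fn(student, dev)
        if score <= best:                             # convergence criterion on the dev set
            break
        teacher, best = student, score                # step 4: student becomes the new teacher
    return teacher
```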
## 4 Dual Teacher for Self-Rationalization
We combine the strengths of self-training and knowledge distillation to train a self-rationalizing joint model from dual teachers. Following sections describe the components, their losses and the learning procedures in more detail. Input-output formats of the models are shown in Figure 1, and the overall framework is illustrated in Figure 2.
### Problem Setup
We tackle the self-rationalization problem with few-shot labels. We consider access to a small labeled set, \(D_{l}=\{(i_{j},o_{j},e_{j})\}_{j=1}^{N}\), where \(i_{j}\) is the input, \(o_{j}\) is the task output, and \(e_{j}\) is the natural language explanation. We also leverage a much larger unlabeled dataset denoted by \(D_{u}=\{i_{j}\}_{j=1}^{M}\), where \(M\gg N\). In the unlabeled dataset only the input text is available and no annotation is provided for either task label or rationale.
To keep all models identical, we model all distributions in a sequence to sequence manner using T5 (Raffel et al., 2019). The teacher model in self-training is trained on few shot ground truth output sequences and the trained teacher is then used for generating output sequences for the unlabeled dataset. These sequences are considered as pseudo labels to train the student model. We re-weight the loss of each example with confidence of the teacher model. This limits error propagation through self-training iterations due to the noisy nature of pseudo labels. We use likelihood of the generated sequence as confidence estimates. Following (Bhat et al., 2021) we normalize the weights in a batch.
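A possible implementation of the confidence re-weighting described above, assuming the per-example sequence NLLs and the teacher's sequence log-likelihoods are already available as tensors; the function name is ours.

```python
import torch

def confidence_weighted_loss(example_nll, teacher_logprob):
    """Re-weight per-example pseudo-label losses by the teacher's confidence.

    example_nll     : (B,) negative log-likelihood of each pseudo-labeled sequence
    teacher_logprob : (B,) teacher log-likelihood of the generated pseudo-label sequence
    Weights are normalized within the batch, as described above.
    """
    conf = torch.exp(teacher_logprob)     # sequence likelihood as confidence estimate
    weights = conf / conf.sum()           # batch-level normalization of the weights
    return (weights * example_nll).sum()
```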
### Splitting the Joint into Conditionals
In order to make the learning task easier, we break down the joint probability of modeling task and rationale, into its conditionals:
\[\underbrace{P(O,E|I)}_{\text{Joint}}=\underbrace{P(O|I)}_{\text{Predictor}} \times\underbrace{P(E|I,O)}_{\text{Rationalizer}} \tag{1}\]
This allows us to build two separate models in a cascading manner: (1) Predictor Model for predicting task label, i.e. \(P(O|I)\), and (2) Rationalizer Model for rationalizing the task label for an input, i.e. \(P(E|I,O)\). Prior works (Jang and Lukasiewicz, 2021) have shown that factorization of this distribution to predicting the output first (Prediction) and generating an explanation for the prediction (Rationalization) has obtained better performance than alternate factorizations.
We hypothesize that with limited labeled examples, learning a joint distribution for <task label+rationale> sequence would be much harder than focusing on learning to predict only the task label. More importantly, for rationale generation we move the task label from output sequence (in the joint model) to input sequence (in Rationalizer model). This allows the encoder to capture much richer interactions between task label and the input through its self-attention network, compared to only the decoder in joint model. The stronger initial few-shot models for predictor and rationalizer would be further boosted through self-training in generating higher quality pseudo labels.
Figure 1: Input and output formats for Predictor, Rationalizer and Joint models.
### Predictor Teacher
In the first step of our framework, we train a Predictor model with self-training. The Predictor is trained to model the probability of the task output given the input, i.e. \(P(O|I)\). The task output is decomposed into subwords, and the model is trained to minimize the negative log likelihood of the output token sequence:
\[\mathcal{L}_{pred}(\theta)=\mathbb{E}_{(i,o)\sim\mathcal{D}}\left[-\log P_{ \theta}(o|i)\right] \tag{2}\]
The predictor model is trained within its own self-training loop, utilizing the few-shot ground-truth task labels and unlabeled inputs. After self-training has converged, we store the predictor and use it for generating pseudo task labels on all unlabeled data.
\[D_{pl}=\{(i,\hat{o})\ :\ \hat{o}\sim p_{\theta_{pred}}(\cdot|i),\ i\in D_{u}\} \tag{3}\]
These pseudo labels are then used for training the Rationalizer model and the Joint model.
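As an illustration, pseudo task labels and their confidence scores could be obtained from a fine-tuned T5 predictor roughly as follows. This is a sketch using the HuggingFace interface; batching, device placement, and special-token handling are omitted, and the leading decoder-start token is stripped before re-scoring the generated sequence.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

def generate_pseudo_labels(model, tokenizer, texts, max_len=16):
    """Greedy-decode task pseudo-labels and score each with the teacher's own
    sequence log-likelihood, later used as the confidence weight."""
    pseudo = []
    model.eval()
    with torch.no_grad():
        for text in texts:
            enc = tokenizer(text, return_tensors="pt")
            out_ids = model.generate(**enc, max_length=max_len)
            labels = out_ids[:, 1:]                          # drop the decoder-start token
            mean_nll = model(input_ids=enc.input_ids,
                             attention_mask=enc.attention_mask,
                             labels=labels).loss              # mean NLL per target token
            seq_logprob = -mean_nll.item() * labels.shape[-1]
            pseudo.append((text,
                           tokenizer.decode(out_ids[0], skip_special_tokens=True),
                           seq_logprob))
    return pseudo

# e.g. model = T5ForConditionalGeneration.from_pretrained("t5-base")
#      tokenizer = T5Tokenizer.from_pretrained("t5-base")
```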
### Rationalizer Teacher
In the second stage we train a Rationalizer model that can generate natural language explanations given an input and the predicted task output, modeling the conditional distribution \(P(E|I,O)\).
\[\mathcal{L}_{rat\_gen}(\theta)=\mathbb{E}_{(i,o,e)\sim\mathcal{D}}\left[- \log P_{\theta}(e|i,o)\right] \tag{4}\]
For training the teacher model we use the few-shot ground truth labeled dataset for task label and rationale. For generating rationale pseudo-labels on the unlabeled set, we use the task pseudo labels generated by the predictor model as input. The generated rationale pseudo labels are then used to train a student rationalizer model in self-training loop until convergence.
### Faithfulness of Explanations
For a Rationalizer model to generate a _faithful_ explanation, we want the explanation to be strongly conditioned on the label. The rationalizer should not be able to generate an explanation solely based on the input, but must take into consideration the label for which it is rationalizing. We introduce a regularizing constraint in our rationalizer model to explicitly encode this property.
### Masked Label Regularization
We design an entropy based regularization which tells the model to be maximally uncertain in generating the explanation in absence of a task label. We achieve this by replacing the task output with mask tokens and maximizing the per-token entropy of the explanation sequence.
\[\mathcal{L}_{MLR}(\theta)=\mathbb{E}_{(i,e)\sim\mathcal{D}}\left[-H_{\theta}[e |i]\right] \tag{5}\]
where \(H_{\theta}[e|i]\) refers to the entropy of producing an explanation from input directly.
There could be alternate ways of encoding the constraint of label-explanation association. We experimented with one such variant, where the ground truth explanation would be generated with a high entropy in case of a wrong label. We observed similar empirical results in our experiments for this alternative. However, it is strictly less general, since it is limited to categorical problems, and it is also computationally more expensive due to the necessity of computing the entropy for multiple wrong labels. Therefore, we use the simpler and more generic form of masking the label tokens.
The overall loss of the Rationalizer is a weighted summation of the sequence generation loss and the regularization loss:
\[\mathcal{L}_{rat}=\mathcal{L}_{rat\_gen}(\theta)+\lambda_{MLR}\mathcal{L}_{ MLR}(\theta) \tag{6}\]
Figure 2: Dual Teacher Training Framework. Predictor and Rationale models are trained in their own Self-training loop. Pseudo labels generated from the trained predictor and rationale model are used for training the Joint model.
\(\lambda_{MLR}\) is empirically set to \(1e^{-4}\) in our experiments for all datasets.
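A sketch of how the combined objective in Eq. (6) could be computed for one batch: the same target explanation is scored twice, once with the label present in the encoder input and once with the label tokens replaced by mask tokens, and the per-token entropy of the masked pass enters with weight \(\lambda_{MLR}\). Construction of the two batches and masking of the label span are assumed to happen upstream, and padding handling is omitted.

```python
import torch
import torch.nn.functional as F

def rationalizer_loss(model, full_batch, masked_batch, lambda_mlr=1e-4):
    """Generation loss on (input + label -> explanation) plus the MLR term
    (Eqs. (4)-(6)): entropy is maximized when the label tokens are masked."""
    # teacher-forced NLL with the task label present in the encoder input
    gen_loss = model(input_ids=full_batch["input_ids"],
                     attention_mask=full_batch["attention_mask"],
                     labels=full_batch["labels"]).loss

    # same target explanation, but label tokens in the input replaced by mask tokens
    logits = model(input_ids=masked_batch["input_ids"],
                   attention_mask=masked_batch["attention_mask"],
                   labels=masked_batch["labels"]).logits
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(-1).mean()  # mean per-token entropy

    return gen_loss + lambda_mlr * (-entropy)                     # Eq. (6)
```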
### Learning from Multiple Teachers: Distilling a Joint from the Conditionals
Knowledge Distillation is an effective learning paradigm to train a lighter student model with rich supervision signals from better performing teacher model(s). To alleviate the limitations of limited labeled data for learning a good self-rationalization model, we leverage the unlabeled data and collect task and rationale pseudo-labels sequentially from trained Predictor and Rationalizer teacher models. The final pseudo-labeled dataset is then combined with the few-shot labeled data and a joint model is trained on this set. This allows the knowledge from both the Predictor and Rationalizer models to be distilled into the student Joint model through pseudo labels and the teachers' confidence weights.
The joint model is trained to maximize the likelihood of a concatenated sequence of task output and explanation, as illustrated in Figure 1. The detailed training algorithm is described in Algorithm 1.
```
Require: \(D_{l}=\{(i_{j},o_{j},e_{j})\}_{j=1}^{N}\)
Require: \(D_{u}=\{i_{j}\}_{j=1}^{M}\)
Require: \(D_{val}=\{(i_{j},o_{j},e_{j})\}_{j=1}^{K}\)
Initialize \(\theta_{pred}\), \(\theta_{rat}\), \(\theta_{joint}\) randomly
1: /* Train Predictor model */
   \(\theta_{pred}^{*}\leftarrow SelfTraining(\mathcal{D}_{l},\mathcal{D}_{u},D_{val},\theta_{pred})\)
2: \(\mathcal{D}_{pred}\leftarrow\{(i_{j},\hat{o}_{j})\}_{j=1}^{M},\ \hat{o}_{j}\sim p_{\theta_{pred}^{*}}(\cdot|I)\)
3: /* Train Rationalizer model */
   \(\theta_{rat}^{*}\leftarrow SelfTraining(\mathcal{D}_{l},\mathcal{D}_{pred},D_{val},\theta_{rat})\)
4: \(\mathcal{D}_{pl}\leftarrow\{(i_{j},\hat{o}_{j},\hat{e}_{j})\}_{j=1}^{M}\), \(\hat{o}_{j}\sim p_{\theta_{pred}^{*}}(\cdot|I),\ \hat{e}_{j}\sim p_{\theta_{rat}^{*}}(\cdot|I,O)\)
5: /* Train Joint model */
   \(D_{final}\leftarrow D_{pl}\cup D_{l}\)
6: \(\theta_{joint}^{*}\leftarrow Train(D_{final},D_{val},\theta_{joint})\)
```
**Algorithm 1** Dual Teacher Training Algorithm
**Loss Re-weighting**: Similar to most sequence-to-sequence models, in WT5 (Narang et al., 2020), all output tokens in the generated sequence have uniform weights in the loss. However, in the joint task setup, the number of tokens from task label is substantially smaller than those in the explanation. To balance this, we re-weight the token-level losses between the output and the explanation. For a tuple \((i_{j},o_{j},e_{j})\), the loss is computed as:
\[\mathcal{L} =\lambda\sum_{y_{m}\in o_{j}}-\log p_{\theta}(y_{m}|i_{j},y_{1}, \cdots y_{m-1})\] \[+(1-\lambda)\sum_{y_{n}\in e_{j}}-\log p_{\theta}(y_{n}|i_{j},y_ {1},\cdots y_{n-1})\]
where \(\lambda\in[0.5,1)\) is a weight coefficient.
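One way to implement the re-weighted token loss above is sketched below; `is_label_token` marks which target positions belong to the task-label part of the sequence and is an input that the data collator would have to supply (an assumption of this sketch).

```python
import torch
import torch.nn.functional as F

def reweighted_joint_loss(logits, labels, is_label_token, lam=0.8):
    """Token-level NLL with weight lam on task-label tokens and (1 - lam)
    on explanation tokens.

    logits         : (B, T, V) decoder logits
    labels         : (B, T) target token ids, -100 on padded positions
    is_label_token : (B, T) bool mask, True for tokens of the task label
    """
    nll = F.cross_entropy(logits.transpose(1, 2), labels,
                          reduction="none", ignore_index=-100)        # (B, T)
    weights = is_label_token.float() * lam + (~is_label_token).float() * (1.0 - lam)
    mask = (labels != -100).float()
    return (weights * nll * mask).sum() / mask.sum()
```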
## 5 Results and Discussion
We evaluate on public datasets for three different tasks. Table 1 shows statistics of the datasets.
**e-SNLI**(Camburu et al., 2018) extends the popular SNLI dataset (Bowman et al., 2015) by adding human-annotated explanations to the NLI labels. The task requires generation of a task label which describes the relationship between a premise and a hypothesis as entailment/contradiction/neutral, and a free text explanation for the prediction.
**ComVE**(Wang et al., 2020) aims to evaluate if a model can distinguish between sensible and non-sensical statements based on common knowledge. We combine the data from SubTask A (Validation) and SubTask C (Generation) for our experiments.
**ECQA**(Aggarwal et al., 2021) augments the Commonsense QA dataset (Talmor et al., 2019) with free-text explanations that support the correct answer choice and refute the incorrect ones. We utilize the explanations for the correct output (Positive Property) as the explanation.
For few-shot settings we sample \(100\) examples per class for each dataset. The self-training setup leverages the few-shot labeled dataset(\(D_{l}\)) and the rest of the training set as unlabeled dataset(\(D_{u}\)).
### Implementation Details
We use the base variant of T5 (Raffel et al., 2019) as backbone model for the Predictor, Rationalizer and Joint models. Following (Narang et al., 2020), we also measure task performance using accuracy, and rationalization using SacreBLEU (Post, 2018). Label smoothing was set to 0.1 and early stopping
\begin{table}
\begin{tabular}{l|c c c} & e-SNLI & ComVE & ECQA \\ \hline \# classes & 3 & 2 & 5 \\ total train size & 549,367 & 10,000 & 7,598 \\ few shot dataset size & 300 & 200 & 500 \\ validation size & 9,842 & 1,000 & 1,090 \\ test size & 9,824 & 1,000 & 2,194 \\ Avg. tokens in output & 2.0 & 2.0 & 1.9 \\ Avg. tokens in explanation & 16.8 & 26.0 & 14.5 \\ \hline \end{tabular}
\end{table}
Table 1: Dataset Statistics. Token-level statistics were generated using the T5-base tokenizer.
with a patience of 5 was used for model selection. The few-shot examples were sampled randomly by stratifying across classes. We trained on 4 NVIDIA v100-16GB GPUs with a batch size of \(8\) and \(16\) for \(D_{l}\) and \(D_{pl}\), respectively. The token re-weighting coefficient \(\lambda\) is set to \(0.8\) for eSNLI and ComVE, and \(0.9\) for ECQA via grid search based on validation scores and average length of the explanations in the dataset. All results are reported after averaging \(3\) runs.
### Main Results
In Table 2 we compare the various training paradigms, namely, fully supervised, few-shot training, and self-training on all three datasets. For self-training we explore two setups - one without pseudo label re-weighting on a Joint model, which we call Vanilla Joint. Confidence-weighted Joint performs self-training on a Joint model where the pseudo labels are weighted by the confidence of the teacher model. The Dual Teacher refers to the proposed Joint model in Section 4.5 that is trained with distillation from two teachers.
**Few-shot vs Fully Supervised results:** Metrics from the fully-supervised setup provide an upper bound on the scores achievable when trained on complete dataset of labels and rationales. Aggregated across datasets, the few-shot performance of the model is around \(13\%\) behind the fully supervised model, and around \(5\) BLEU lower in rationalization performance.
**Self-Training helps boosting few-shot results.** Our experiments show that self-training is a promising direction in bridging the performance gap, improving accuracy and BLEU across all the tasks over the few-shot counterparts. We observe that re-weighting the pseudo-labels with the confidence of the teacher models, provides small improvements in the overall performance and is in alignment with previous findings (Bhat et al., 2021).
**Stronger Results with Dual Teacher Self-Training Framework.** Finally, we observe a further improvement by our proposed method of performing self-training on the Predictor and Rationalizer models, and subsequently distilling the knowledge to a joint student model through pseudo labels. The improvement in aggregate scores shows that the accuracy is within \(8\%\) of a fully supervised model, and \(5\%\) higher than the few-shot baseline. The improvements from the proposed model are most prominent for the Rationale generation task - the BLEU scores are improved by a large margin compared to learning both tasks jointly in a self-training setup. Impressively, the dual-teacher approach achieves an aggregated result of \(20.71\) BLEU which is close to the aggregate performance of the _Fully Supervised_ model (\(21.5\) BLEU). We even obtained higher performance (BLEU score) than the supervised model on the two smaller datasets, ComVE and ECQA.
### Discussion
Next we conduct several deeper analysis of the models and provide detailed insight to the overall results presented in Section 5.2.
**RQ1: Does breaking the joint into conditionals improve performance for task label prediction and explanation quality?**
We first want to analyze the effectiveness of breaking the joint model into conditionals and learning two separate models for task prediction and rationalization. From the results in Table 3, it is evident that by breaking the joint distribution into conditionals, we obtain significantly higher performance across all datasets, especially for explanation generation. This validates our hypothesis that with limited labels, it is much harder for the model to learn the joint distribution of output and explanation, compared to learning the conditionals separately. With self-training, the gap in performance
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Model & e-SNLI & \multicolumn{2}{c}{ComVE} & ECQA & \multicolumn{2}{c}{Average} \\ /Metric & Acc & BLEU & Acc & BLEU & Acc & BLEU & Acc & BLEU \\ \hline _Fully Supervised_(Narang et al., 2020) & 90.44 & 33.76 & 86.2 & 14.53 & 53.6 & 16.25 & 76.75 & 21.5 \\ _Few-Shot_ & 82.57 & 24.21 & 73.77 & 12.74 & 34.29 & 9.77 & 63.54 & 15.57 \\ \hline \multicolumn{10}{l}{**Self-Training techniques**} \\ _Vanilla_ & 83.35 & 25.18 & 78.83 & 10.44 & 41.8 & 9.7 & 67.99 & 15.11 \\ _Confidence Weighted_ & 83.41 & 24.54 & 79.23 & 11.04 & 41.75 & 9.85 & 68.13 & 15.14 \\ _Dual Teacher_ & **83.95** & **30.17** & **79.61** & **14.83** & **44.26** & **17.12** & **69.27** & **20.71** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on different baselines and other self-training techniques on three datasets, measured using Accuracy for label prediction, and BLEU for explanation
between the joint and the conditionals decreases, but the individual models still outperform the joint model.
These results align with the improvement observed from the Dual Teacher framework over Joint model in Table 2. Training the Predictor and Rationalizer models in their own self-training loops creates two strong teacher models and provides better pseudo labels. This allows us to train a strong self-rationalizing model through distillation than training a joint model directly through self-training.
**RQ2: Does the Masked Label Regularization help to generate more faithful explanations?**
While our method achieves better BLEU scores compared to different baselines, it is also important to evaluate whether the generated explanations are _faithful_ to the predictions, i.e. provide reasoning that support the predicted label. During creation of the datasets, the annotators were instructed to assign a label and then explain the assignments with a natural language explanation. Therefore, it is desirable for the models to preserve the faithfulness properties in generated explanations.
We perform two tests to analyze whether (1) the explanations are dependent on the output and (2) if they reflect the intended label. Through these experiments we also conduct an ablation study to estimate the effect of the proposed Masked Label Regularization (MLR) constraint in improving the faithfulness of explanations.
**Label-Explanation Association.** We first conduct a simple analysis to check if the explanations are dependent on the model predictions. As a necessary condition for generating faithful explanations, different predicted labels have to produce different explanations. We measure this association as the number of test instances for which the model generates a distinct explanation for all labels.
We vary the task label and ask the model to generate an explanation. For joint models, we replace the generated label with other possible labels and ask the decoder to continue generating an explanation. For Rationalizer model, we simply generate predictions with providing different labels in the input. We study the effect of MLR by removing the entropy regularization loss while training the Rationalizer. We denote this variant as Rationalizer \(-\) MLR. Dual teacher \(-\) MLR refers to the Joint model trained using Rationalizer \(-\) MLR.
Results in Table 4 show that for the Joint model, only \(72\%\) of the examples have unique explanations per output on average across datasets. This implies that the label-explanation association is not inherently captured in the decoder, and that for \(28\%\) of instances the generated explanation is constant and has no association with the labels. Adding the MLR loss encourages the model to condition on labels, and thereby provides a substantial improvement of over \(10\%\) for the Dual Teacher model. This indicates a strong association between the generated label and explanation, where the explanations are unique to the label in over \(88\%\) of cases. As can be seen from the Table, the Rationalizer teacher achieves significantly better label-explanation association compared to the Joint counterparts. The MLR constraint further improves the results, especially in the ComVE dataset where explanations
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline Model/Dataset & e-SNLI & ComVE & ECQA & Avg \\ \hline Joint & \(76.8\) & \(52.0\) & \(89.0\) & \(72.6\) \\ Dual Teacher \(-\) MLR & \(86.5\) & \(54.8\) & \(93.9\) & \(78.4\) \\ Dual Teacher & \(95.7\) & \(74.7\) & \(95.8\) & \(88.7\) \\ \hline Rationalizer \(-\) MLR & \(97.8\) & \(69.9\) & \(95.3\) & \(87.7\) \\ Rationalizer & \(\mathbf{99.4}\) & \(\mathbf{84.8}\) & \(\mathbf{96.8}\) & \(\mathbf{93.7}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Label-Explanation association measured as % of inputs with distinct explanations for each task label.
\begin{table}
\begin{tabular}{l l c c|c c|c c|c c} \hline \hline & Model & \multicolumn{2}{c|}{e-SNLI} & \multicolumn{2}{c|}{ComVE} & \multicolumn{2}{c|}{ECQA} & \multicolumn{2}{c}{Avg} \\ & /Metric & Acc & BLEU & Acc & BLEU & Acc & BLEU & Acc & BLEU \\ \hline \multirow{3}{*}{Fully Supervised} & Predictor & \(89.7\) & \(-\) & \(\mathbf{90.2}\) & \(-\) & \(\mathbf{55.9}\) & \(-\) & \(\mathbf{78.6}\) & \(-\) \\ & Rationalizer & \(-\) & \(\mathbf{34.9}\) & \(-\) & \(\mathbf{16.8}\) & \(-\) & \(\mathbf{18.8}\) & \(-\) & \(\mathbf{23.5}\) \\ & Joint & \(\mathbf{90.4}\) & \(33.7\) & \(86.2\) & \(14.5\) & \(53.6\) & \(16.2\) & \(76.7\) & \(21.5\) \\ \hline \multirow{3}{*}{Few-Shot} & Predictor & \(\mathbf{82.9}\) & \(-\) & \(\mathbf{75.7}\) & \(-\) & \(\mathbf{39.7}\) & \(-\) & \(\mathbf{66.1}\) & \(-\) \\ & Rationalizer & \(-\) & \(\mathbf{27.1}\) & \(-\) & \(\mathbf{14.8}\) & \(-\) & \(\mathbf{16.3}\) & \(-\) & \(\mathbf{19.4}\) \\ & Joint & \(82.6\) & \(24.2\) & \(73.8\) & \(12.7\) & \(34.3\) & \(9.8\) & \(63.5\) & \(15.6\) \\ \hline \multirow{3}{*}{Self-Training} & Predictor & \(\mathbf{83.8}\) & \(-\) & \(\mathbf{78.8}\) & \(-\) & \(\mathbf{44.4}\) & \(-\) & \(\mathbf{69}\) & \(-\) \\ & Rationalizer & \(-\) & \(\mathbf{31.2}\) & \(-\) & \(\mathbf{17.0}\) & \(-\) & \(\mathbf{19.2}\) & \(-\) & \(\mathbf{22.5}\) \\ \cline{1-1} & Joint & \(83.4\) & \(24.5\) & \(79.2\) & \(11.0\) & \(41.8\) & \(9.8\) & \(68.1\) & \(15.1\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance of the Joint model compared to Predictor and Rationalizer models in Fully supervised, Few-Shot and Self-Training setup.
are much longer on average.
**Simulatability of Explanations.** We utilize the Simulatability metric as defined in prior work (Chan et al., 2022; Hase and Bansal, 2020) to evaluate how well an external system, human or AI, is able to simulate the prediction made by a black-box, self-rationalizing model using the explanation it generates. As simulators, two models are trained to predict the task label: (1) a control model \(P(O|I)\), which predicts the output given the input, and (2) a treatment model \(P(O|I,E)\), which predicts the output given the input and an explanation. The simulators are used to measure how much the explanations generated by the self-rationalizing model help in 'guessing' its predicted label. The simulatability score is defined as
\[\Phi=\mathbbm{1}(y^{T}=\hat{y})-\mathbbm{1}(y^{C}=\hat{y}) \tag{7}\]
where \(\hat{y}\) refers to predicted label from the self-rationalizing model, \(y^{C}\) and \(y^{T}\) refers to predictions from the control and treatment simulators, respectively. The higher the faithfulness of a model, the better aligned its explanations are with its predicted labels, relative to the control simulator which does not consider explanations.
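Given the predictions of the two simulators and of the self-rationalizing model on the test set, the mean simulatability of Eq. (7) is straightforward to compute; the function name below is ours.

```python
import numpy as np

def simulatability(y_hat, y_treatment, y_control):
    """Mean simulatability (Eq. (7)): how much more often the treatment simulator
    P(O|I,E) recovers the self-rationalizing model's prediction than the
    control simulator P(O|I)."""
    y_hat, y_t, y_c = map(np.asarray, (y_hat, y_treatment, y_control))
    return np.mean((y_t == y_hat).astype(float) - (y_c == y_hat).astype(float))
```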
Table 5 shows the simulatability scores of the various self-rationalizing models under consideration. We observe a similar trend as in Table 4 when comparing the different models, with the exception of e-SNLI. For e-SNLI the control simulator was notably stronger than the treatment simulator, potentially due to the overlap with the pre-training tasks of T5. We note that overall there is a significant gap in the simulatability between our models and the Fully-Supervised model, indicating large room for improvement in the faithfulness of explanations for weakly supervised models.
**RQ3: How does the performance change as self-training progresses?**
Figure 3 shows the performance of different models over self-training iterations. We observe that the two teacher models consistently outperform the joint model over iterations in both datasets. In ECQA dataset there is a large jump in accuracy in the first iteration and the algorithm converges soon. A similar trend is observed for BLEU scores, with a slight improvement in the Rationalizer in first iteration and the score plateauing or even declining in case of the Joint model. For e-SNLI dataset, accuracy continues to improve till five iterations for the Predictor, and three for the Joint model. The rationalization performance also converges after nearly five iterations for both the models. Convergence of the algorithm could be explained by the poor separability of the class labels in the datasets, causing more erroneous pseudolabels and plateauing of performance as time progresses.
**RQ4: How does the performance change with increase in labelled dataset size?**
We study the performance of our model by conducting experiments with different dataset sizes. We only vary the labeled dataset size and keep the remaining training set as unlabeled data. For example, for ECQA the total size (\(D\)) is \(7.5K\), and we conduct experiments with labeled data (\(D_{l}\)) in the range \([50,2.5K]\) and the remaining data size (\(D-D_{l}\)) as unlabeled data. Table 6 reports the accuracy and BLEU score of our proposed model for dataset sizes ranging from \(50\) to \(2500\) samples. We see that there is an improvement in the test accuracy and BLEU score as the labeled data size increases. With as few as 500 examples per label, the model is able to achieve accuracy within 6% of
Figure 3: Performance across self-training iterations on ECQA and e-SNLI datasets of the Confidence Weighted Joint, Predictor and Rationalizer models. Dashed lines show the performance of the Few-Shot Joint model.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Model/Dataset & e-SNLI & ECQA & ComVE & Avg \\ \hline Fully-Supervised & 8.64 & 30.45 & 2.4 & 13.83 \\ Few-Shot & 2.61 & 16.91 & 0.2 & 6.57 \\ \hline Joint & 1.87 & 14.22 & 0.7 & 5.6 \\ Dual Teacher - MLR & **6.01** & 17.96 & 0.73 & 7.62 \\ Dual Teacher & 4.54 & **18.19** & **0.9** & **7.88** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Simulatability score of the explanations from different methods. The higher the score the more aligned the explanation is with the predicted label
the fully supervised model across all datasets. Interestingly, with limited supervision the self-training setup is able to outperform the Fully-Supervised model in BLEU scores, demonstrating the data efficiency of the Rationalizer teacher.
## 6 Conclusion
We study the self-rationalization problem with few-shot labels and demonstrate that self-training is an effective learning paradigm that can significantly reduce the gap between few-shot and fully supervised model performance. We present a novel dual-teacher learning framework that learns two models for task label prediction and rationale generation through self-training and efficiently distills their knowledge into a single self-rationalizing joint model. With a masking-based loss formulation we enforce label-explanation association in the rationalizer, leading to the generation of more faithful explanations. We conduct experiments on three public benchmark datasets for free-text explanations and show that the proposed methods are effective in improving task performance while generating accurate and faithful explanations.
## 7 Limitations
Despite strong performance relative to the few-shot baselines, our self-training methods still leave significant room for improvement compared to the fully supervised benchmarks. It would be interesting to try larger language models to see whether this gap can be closed with more knowledge embedded in the pre-trained models. Our evaluation of free-text rationales is limited by automatic metrics, which are necessary but not sufficient for analyzing the quality of an explanation of the model's decision making. From example explanations (a few of which are shown in the Appendix), it is evident that we still lack understanding along several dimensions; for instance, when an explanation is factually wrong, is it because the model holds incorrect knowledge or because it fails to retrieve the correct knowledge? Work that probes language models with various prompts could be useful for investigating these questions.
|
2309.02856 | Getting too personal(ized): The importance of feature choice in online
adaptive algorithms | Digital educational technologies offer the potential to customize students'
experiences and learn what works for which students, enhancing the technology
as more students interact with it. We consider whether and when attempting to
discover how to personalize has a cost, such as if the adaptation to personal
information can delay the adoption of policies that benefit all students. We
explore these issues in the context of using multi-armed bandit (MAB)
algorithms to learn a policy for what version of an educational technology to
present to each student, varying the relation between student characteristics
and outcomes and also whether the algorithm is aware of these characteristics.
Through simulations, we demonstrate that the inclusion of student
characteristics for personalization can be beneficial when those
characteristics are needed to learn the optimal action. In other scenarios,
this inclusion decreases performance of the bandit algorithm. Moreover,
including unneeded student characteristics can systematically disadvantage
students with less common values for these characteristics. Our simulations do
however suggest that real-time personalization will be helpful in particular
real-world scenarios, and we illustrate this through case studies using
existing experimental results in ASSISTments. Overall, our simulations show
that adaptive personalization in educational technologies can be a double-edged
sword: real-time adaptation improves student experiences in some contexts, but
the slower adaptation and potentially discriminatory results mean that a more
personalized model is not always beneficial. | ZhaoBin Li, Luna Yee, Nathaniel Sauerberg, Irene Sakson, Joseph Jay Williams, Anna N. Rafferty | 2023-09-06T09:34:54Z | http://arxiv.org/abs/2309.02856v1 | # Getting too personal(ized): The importance of feature choice in online adaptive algorithms
###### Abstract
Digital educational technologies offer the potential to customize students' experiences and learn what works for which students, enhancing the technology as more students interact with it. We consider whether and when attempting to discover how to personalize has a cost, such as if the adaptation to personal information can delay the adoption of policies that benefit all students. We explore these issues in the context of using multi-armed bandit (MAB) algorithms to learn a policy for what version of an educational technology to present to each student, varying the relation between student characteristics and outcomes and also whether the algorithm is aware of these characteristics. Through simulations, we demonstrate that the inclusion of student characteristics for personalization can be beneficial when those characteristics are needed to learn the optimal action. In other scenarios, this inclusion decreases performance of the bandit algorithm. Moreover, including unneeded student characteristics can systematically disadvantage students with less common values for these characteristics. Our simulations do however suggest that real-time personalization will be helpful in particular real-world scenarios, and we illustrate this through case studies using existing experimental results in ASSISTments [22]. Overall, our simulations show that adaptive personalization in educational technologies can be a double-edged sword: real-time adaptation improves student experiences in some contexts, but the slower adaptation and potentially discriminatory results mean that a more personalized model is not always beneficial.
multi-armed bandits, personalization, educational technologies, online adaptive algorithms, simulation
## 1 Introduction
Within educational technologies, there are a myriad of ways to design instructional components such as hints or explanations. Research in education and the learning sciences provides some insight into how to design these resources (e.g., [24, 3]). However, there is often uncertainty about which version of a resource will be most effective in a particular context, and effectiveness may vary based on students' characteristics, such as prior knowledge or motivation.
Randomized experiments are one way to compare multiple versions of a technology, but such experiments impose a delay between collecting required evidence and using that evidence to improve student experiences. Recently, multi-armed bandit (MAB) algorithms have been proposed to improve technologies in real time: each student is assigned to one version of the technology, and the algorithm observes the student's learning outcome [17, 27]. Each subsequent student is more likely to be assigned to a version of the technology that has been more effective for previous students, as the algorithm discovers what is effective. Such algorithms maintain uncertainty as they learn, balancing exploring to learn more about what works with exploiting the observed results from previous students. Typical MAB algorithms do not take into account student characteristics and thus can only identify which version of a technology is better for students on average, but contextual MAB algorithms can personalize which version to assign to each student, potentially increasing the number of students who are directed to versions that are most helpful for them individually [23].
While deploying contextual MAB algorithms could improve student experiences, it raises two potential issues. First, instructional designers must decide which student characteristics will be considered for personalization. For instance, more concrete examples might be more helpful for students with lower prior knowledge, while more abstract examples could be more helpful for students with higher prior knowledge. This relationship could only be learned if the algorithm has 'prior knowledge' as a feature of each student. Should the algorithm also consider which prerequisite course was taken when selecting an example, or is prior knowledge sufficient? Designers are unlikely to be certain which characteristics influence effectiveness, but the choice of characteristics will influence the performance of the algorithm. Excluding characteristics that do impact effectiveness could decrease the positive impact on students, but including extraneous characteristics that do _not_ impact effectiveness could
also decrease this impact. In the latter case, the system might have to do more exploration to learn how the effectiveness of instruction differs along each extraneous characteristic, and so direct a greater number of students to less effective versions.
The second issue raised by online adaptive algorithms is whether the constantly adapting system will benefit certain groups of students more than others. Since contextual MAB algorithms learn by observing how the consequences of their choices are related to feature values, students whose characteristics are less common may be more likely to interact with the algorithm when it has limited information about what is most effective for that type of student. This could exacerbate differences in outcomes between subgroups of students. Yet, such algorithms could also have an equalizing effect for students with less common characteristics: students have the potential to experience a version of the technology that is most appropriate for them, even when this version is not the most appropriate for a typical student.
In this paper, we use simulations to explore these issues and their consequences for student experiences in adaptive educational technologies which use MAB algorithms. We focus on three common types of models for how student characteristics are related to outcomes: a _baseline_ model in which student characteristics do not impact the effectiveness of different versions of the technology; a _universal optimal action_ model, in which student characteristics impact effectiveness but the same version is most effective for all students; and a _personalized optimal action_ model, in which student characteristics impact which version leads to the best outcomes for a given student.
We show that including the potential for personalization can negatively influence student outcomes except in the _personalized optimal action_ model, where this information is necessary to encode the best policy. The more characteristics that are included for personalization, the greater the negative impact on outcomes. Additionally, including these characteristics may lead the algorithm to systematically treat students differently based on characteristics that do not influence their outcomes. When student characteristics are not uniformly distributed, including extraneous characteristics means that students in a minority group are more negatively affected than students in the majority group. We use experimental data to show the potential benefits of personalization and add nuance to the prior simulation results by demonstrating how personalization can benefit not only students in a minority group but also all groups of students when information about the student characteristics magnifies differences between conditions. We end by discussing the consequences of these results for integrating adaptive components into existing educational technologies.
## 2 Related Work
A wide array of work has focused on using MAB and contextual MAB algorithms for optimization, including applications in advertising and recommendations (e.g., [16]), crowdsourcing (e.g., [13]), and designing experiments and clinical trials (e.g., [25]). Within educational technologies, MAB algorithms have been primarily used in two ways. Some work has used these algorithms to select problems that are of an appropriate difficulty level for a particular student [8, 15, 21]; unlike our work, these applications typically combine learned profiles about students with a second source of knowledge, such as prerequisite structure. We focus on a second proposed usage of MAB algorithms in education: assigning students to a particular version of a technology. For example, non-contextual MAB algorithms have been used to choose among crowdsourced explanations [26] and to explore an extremely large range of interface designs [18]. Some of this work has also considered the implications of collecting experimental data via MAB algorithms on measurement and inference [17, 19], showing systematic biases that can impair the drawing of conclusions about the conditions. Only a limited amount of work has applied contextual MAB algorithms to personalize which versions of a technology a student experiences (e.g., [23], but focused primarily on measurement). We build on this body of work by considering the performance implications of several common scenarios for how student characteristics, versions of an educational technology, and outcomes are related. Additionally, by specifically examining some scenarios in which student characteristics are unevenly distributed, we raise issues about personalization for minority groups of students.
There is a great deal of theory-based literature on both standard and contextual MAB algorithms related to quantifying performance, especially in terms of asymptotically bounding growth in cumulative regret (the amount that the expected reward from choosing an optimal action outpaces reward from the actually chosen actions). The optimal worst-case bound on regret growth is logarithmic [4]. Furthermore, the inclusion of contextual variables increases cumulative regret at least linearly; for Thompson sampling, which we use in our simulations, the regret bounds grow quadratically in the number of contextual variables [2]. We use simulations to consider non-asymptotic settings and focus on areas less explored theoretically, like impacts on individual groups of students and variability in performance.
In this paper, we are particularly concerned with how outcomes differ among different groups of students. One of the promises of educational technologies is to boost all students' outcomes to the level that can be achieved by individualized tutoring [9], and online adaptive algorithms may make it easier to develop such systems. Yet, the broader machine learning community has recently highlighted how automated systems can learn or exacerbate existing inequalities (see, e.g., [11] for an overview). Within educational data mining, there have been mixed results when the fairness of different models has been explored, and this variation has often been correlated with the diversity of the training data: [12] demonstrated that a model trained on a large and diverse dataset performed similarly well for predicting on-time graduation for students in different demographic groups, while [10] found disparities across genders in predicting course dropout, often associated with gender imbalances in the training data. This raises the issue of how to best use educational data mining in ways that promote equity across students. Within the MAB literature specifically, there has been limited discussion of fairness (e.g., [14]), although [20] show that a particular technical definition of data diversity can lead to fairer outcomes. Like in our work, [20] shows cases where the presence or absence of a majority group
can help or harm minority group outcomes. Our work considers scenarios specific to education, demonstrating that the particular scenario in [20] can be generalized considerably, and more precisely characterizes the circumstances in which including personal characteristics increases equity versus where doing so may lead to systematically poorer experiences for students in a minority group.
## 3 Contextual MAB Algorithms
We treat the problem of determining what version of an educational technology will be most effective for a student as a MAB problem. In such problems, a system must repeatedly choose among several actions, \(a_{1},\ldots,a_{K}\). The system initially does not know which action is likely to be the most effective, but after each action choice, the system receives feedback in the form of a stochastic reward \(r^{(t)}\).
There are a variety of MAB algorithms for choosing actions. We focus on Thompson sampling [1], which is a regret-minimizing algorithm that exhibits logarithmic regret growth. Thompson sampling maintains for each action a distribution over reward values. This distribution is updated after each action choice and represents the posterior distribution over reward values given the observed data. At each timestep, the algorithm samples from the posterior distribution over rewards for each action, and then chooses the action with the highest sampled value. While Thompson sampling is also applicable to real-valued rewards, many educational outcomes are binary, such as whether a student completes a homework assignment or answers a question correctly. Thus, we focus on these binary rewards in this paper, using a Beta prior distribution to enable simple conjugate updates after each choice.
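To make the non-contextual variant described above concrete, the following sketch shows how such a Beta-Bernoulli Thompson sampler could be implemented; the class and variable names are ours, and the toy reward probabilities in the usage example are only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

class BernoulliThompson:
    """Non-contextual Thompson sampling for binary rewards with Beta(1, 1) priors."""

    def __init__(self, n_actions):
        self.successes = np.ones(n_actions)  # Beta alpha parameters
        self.failures = np.ones(n_actions)   # Beta beta parameters

    def choose(self):
        # Draw one plausible reward probability per action from its posterior
        # and pick the action with the largest draw.
        draws = rng.beta(self.successes, self.failures)
        return int(np.argmax(draws))

    def update(self, action, reward):
        # Conjugate Beta-Bernoulli posterior update for a 0/1 outcome.
        self.successes[action] += reward
        self.failures[action] += 1 - reward

# Toy usage: version 1 of the technology is truly better (0.6 vs. 0.4).
bandit = BernoulliThompson(n_actions=2)
true_p = [0.4, 0.6]
for _ in range(250):
    a = bandit.choose()
    r = int(rng.random() < true_p[a])
    bandit.update(a, r)
```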
In our setting in which we choose versions of an educational technology for each student, the actions are the different versions of the technology, and the reward is the student outcome. For example, imagine a student interacting with a system to do her math homework. The system might choose between two actions when the student asks for a hint: (a) show a fully worked example, versus (b) provide the first step of the problem as a hint and ask the student to identify an appropriate second step. The student outcome could be whether or not she completes the homework assignment.
In a traditional MAB problem, the reward distribution is fixed given the action choice. However, in the situation above, the reward may be dependent on the characteristics of the student. For instance, a student who has stronger proficiency in the prerequisite skills may be more prepared to identify what to do next in the problem, while a student with weaker proficiency may not be able to identify what to do next. A contextual MAB algorithm incorporates such student characteristics as features into its action choices.
For parametric contextual MAB algorithms, the features must be predetermined, including whether interactions between features is permitted. We adopt a contextual Thompson sampling approach that uses regularized Bayesian logistic regression to approximate the distribution of rewards given the features [2, 7]. The algorithm learns a distribution over the feature weights as coefficients using a Gaussian posterior approximation. To make each new action choice, the algorithm computes a reward value for each action by sampling each weight independently. The chosen action is the action with the highest sampled reward value. Updates may occur after each action or in batches to decrease computational costs; because the feature vectors that we consider are relatively small, we update after each action.
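A minimal sketch of one way this contextual variant could be implemented is shown below, assuming a diagonal Gaussian posterior over the weights of each action's reward model and a few gradient steps to approximate the MAP weights after each observation; this simplifies the regularized Bayesian logistic regression of [2, 7] and is not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ContextualThompson:
    """Contextual Thompson sampling with one Bayesian logistic-regression reward
    model per action, using an independent Gaussian posterior per weight
    (an online Laplace-style approximation)."""

    def __init__(self, n_actions, n_features, reg=1.0):
        self.means = np.zeros((n_actions, n_features))            # posterior means
        self.precisions = reg * np.ones((n_actions, n_features))  # posterior precisions

    def choose(self, x):
        # Sample each weight independently from its Gaussian posterior and pick
        # the action whose sampled model predicts the highest reward.
        w = rng.normal(self.means, 1.0 / np.sqrt(self.precisions))
        return int(np.argmax(w @ x))

    def update(self, action, x, reward, steps=50, lr=0.1):
        m, q = self.means[action], self.precisions[action]
        w = m.copy()
        # Approximate MAP estimate of the weights given the new observation
        # (gradient descent on the Gaussian-regularized logistic loss).
        for _ in range(steps):
            grad = q * (w - m) + (sigmoid(w @ x) - reward) * x
            w -= lr * grad
        p = sigmoid(w @ x)
        self.means[action] = w
        self.precisions[action] = q + (x ** 2) * p * (1 - p)  # Laplace-style precision update
```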
## 4 Importance of Feature Choice
When using a contextual MAB algorithm to personalize student experiences in an educational technology, the system designer must choose which student characteristics to include as variables for personalization. The designer is very unlikely to know with certainty which student features are truly relevant and will actually impact student outcomes. One could include every possible relevant feature, knowing that while the algorithm can learn that an included feature is not relevant, it cannot learn that a non-included feature is in fact relevant. However, asymptotic growth rates for regret are quadratic in the number of features [2], meaning that as more features are included, the algorithm will tend to take longer to learn. Designers thus must balance the desire to include all features that influence outcomes with the knowledge that extraneous features could hurt performance.
To better understand how student outcomes are impacted by the choice of features for personalization, we systematically explore the inclusion of both relevant and non-relevant features in a contextual MAB algorithm and examine the impact on student outcomes and on the rate of assigning students to their personally optimal version of the technology. For these simulations, we assume that features are uncorrelated and that their values are chosen uniformly at random for each student, i.e., the probability of observing any particular combination of features is the same as observing any other combination of features.
### Methods
#### 4.1.1 Representing student features
We focus on binary student features and thus feature values implicitly group students. For example, some CS classes may have two different prerequisites, such as a discrete math course taught by the CS department or a similar one taught by the math department. Students who have taken the CS version will all have the same value for the prerequisite feature, while those who take the math one will have the other.1
Footnote 1: In both the MAB algorithms and the outcome-generating models, feature values are represented using dummy variables.
#### 4.1.2 Outcome-generating models
The outcome-generating model describes the _true_ relationship between student characteristics (feature values), the actions of assigning students to different versions of a technology, and the outcomes of student learning. We focus on scenarios in which two actions, such as choosing between concrete versus abstract explanations, affect the outcomes for two groups of students, such as those with math versus CS prerequisite as aforementioned.
In each of the models, we generate the true reward probability for a student with particular features using logistic regression, with a separate logistic regression equation for
each action. Given a feature vector \(x^{(j)}\) for student \(j\), the reward probabilities are generated according to:
\[P_{action=k}(\text{reward}=1\mid x^{(j)})=\text{sigmoid}\Big(b_{0,k}+\sum_{i=1}^{n}b_{i,k}\,x^{(j)}_{i}\Big),\]
where \(b_{k}\) is the coefficient vector for action \(k\), with intercept \(b_{0,k}\). For our simulations, the coefficients for the feature values were zero for any feature past the first feature, meaning that a maximum of one student feature impacts the outcomes but more features may still be observed. By varying the coefficients for the intercept (\(b_{0,k}\)) and the first feature, we produced three models for the relationship among student characteristics (i.e., features or feature values), action choices, and outcomes (see Table 1):
* _Baseline_: Student features have no impact on outcomes.
* _Universal optimal action_: Student features have an impact on outcomes, but not the optimal action--the best version of the technology is the same regardless of features.
* _Personalized optimal action_: Student features impact outcomes, meaning that the optimal action differs based on features--some students are better off experiencing Version A of the technology while for others Version B.
For the baseline model, the coefficients of the actions vary only for the intercept in order to control the effect of each action when student features are ignored. For the universal optimal action model, we included four variations to capture different educationally meaningful scenarios. For instance, universal optimal action (1) reflects a case in which differences in prior knowledge minimally interact with the impact of different versions of a technological intervention, while (2) reflects a student characteristic magnifying the effectiveness of an intervention.
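To make the generating process concrete, the following sketch reproduces the personalized optimal action row of Table 1 (with zero coefficients for any extraneous features); the variable names are ours, and the other scenarios follow by changing the intercepts and the single nonzero feature coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)

def logit(p):
    return np.log(p / (1 - p))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Coefficients chosen to match Table 1, personalized optimal action row:
# A1 has reward probability 0.4 when the relevant feature is 0 and 0.6 when
# it is 1; A2 is the reverse. Extraneous features have zero coefficients.
n_features = 3                                   # 1 relevant + 2 extraneous features
intercepts = np.array([logit(0.4), logit(0.6)])  # b_{0,k} per action
coefs = np.zeros((2, n_features))
coefs[0, 0] = logit(0.6) - logit(0.4)            # effect of the relevant feature on A1
coefs[1, 0] = logit(0.4) - logit(0.6)            # effect of the relevant feature on A2

def reward_probability(action, x):
    """True P(reward = 1 | action, student features x) used by the simulator."""
    return sigmoid(intercepts[action] + coefs[action] @ x)

def sample_outcome(action, x):
    return int(rng.random() < reward_probability(action, x))
```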
#### 4.1.3 Simulation parameters
We varied three factors across the simulations: the outcome-generating model; the MAB algorithm (contextual or non-contextual); and the number of student features. For all simulations, we considered three horizons: classrooms of 50, 250, and 1000 students. Multiple horizons illustrate the behavior of the algorithm at different time points and can guide decisions for incorporating adaptive algorithms based on the number of students who are expected to interact with the system. Each simulation was repeated 1000 times.
For non-contextual Thompson sampling, the parameters of a Beta distribution per action are learned independently of student features. For the contextual algorithm, we specify the weights of the student features as model coefficients. All simulations included at least one student feature, regardless of the outcome-generating model.
To model the fact that curriculum designers may not know which student characteristics really matter, we included simulations where the observed features were a superset of those that actually impacted outcomes. Specifically, we considered models with a total of 1, 2, 3, 5, 7, 8, and 10 features. Therefore, for the non-baseline scenarios, the proportion of included features that impacted outcomes varied from 100% to only 10%. Since our contextual features are binary, we include indicator variables for each of the two values, and learn a separate weight for each indicator variable.2
Footnote 2: In pilot simulations, this encoding led to better performance than if only a single coefficient was learned for each feature, and corrected asymmetries in performance for students who had different values of the feature.
### Results
First, we analyzed the performance of contextual and non-contextual MAB algorithms for the three outcome-generating models across 1 to 10 student features (i.e., contextual variables). Using an analysis of covariance (ANCOVA), we compared the two MAB algorithms' performance with respect to the proportion of optimal actions for 250 students across 1000 trials, treating the number of contextual variables as a covariate.
**Baseline:** When student features do not influence outcomes, we see that as expected, the non-contextual bandit outperforms the contextual bandit (Table 2): average performance per student for the final 50 out of 1000 students using the contextual algorithm is similar to that of all 1000 students using the non-contextual algorithm (Figure 2). As the number of student features increases, the contextual MAB chooses a lower proportion of optimal actions for the first 250 students (Figure 1a), but the effect is relatively small especially when considering the impact on actual reward (\(t(13996)=-37.654\), \(p<0.001\), \(b=-0.014\), 95% CI = \([-0.014,-0.013]\)). At longer horizons, the number of student features has less of an impact on overall average reward (Figure 2), which we discuss more below.
**Universal optimal action:** When outcomes are dependent on student features, the contextual MAB algorithm can learn a more accurate model than the non-contextual algorithm. However, when this more accurate model is not _needed_ for optimal action choices, learning the more accurate model does not improve action choices: the non-contextual bandit outperforms the contextual bandits in all four scenarios (Table 2; see Figure 1b for scenario 1). While each scenario might arise due to different educational conditions, they are all very similar in how they appear to the non-contextual bandit algorithm. The non-contextual bandit sees the two groups of students as identical, leading the overall performance to be the average for each group.
These changes in the average effectiveness of each intervention impact the algorithm's performance but do not
\begin{table}
\begin{tabular}{l l l l l} \hline \hline \multicolumn{1}{c}{_Relevant Feature:_} & \multicolumn{2}{c}{F=0} & \multicolumn{2}{c}{F=1} \\ \multicolumn{1}{c}{_Action Number:_} & A1 & A2 & A1 & A2 \\ \hline Baseline & 0.4 & **0.6** & 0.4 & **0.6** \\ Universal optimal action (1) & 0.4 & **0.6** & 0.6 & **0.8** \\ Universal optimal action (2) & 0.4 & **0.6** & 0.4 & **0.8** \\ Universal optimal action (3) & 0.4 & **0.6** & 0.5 & **0.7** \\ Universal optimal action (4) & 0.4 & **0.6** & 0.8 & **0.9** \\ Personalized optimal action & 0.4 & **0.6** & **0.6** & 0.4 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Reward probabilities for each combination of actions (A1 and A2) and values of the relevant feature (F=0 and F=1) in the simulations. The optimal action, shown in bold, is the same (A2) for both feature values, except for the personalized optimal action model.
necessarily degrade that performance; instead, the impact is dependent on how similar the two interventions are in their expected outcomes and how close those expected outcomes are to 0.5, where there is the most variance.
**Personalized optimal action:** When the best policy for individual students depends on their features, the contextual bandit significantly outperforms the non-contextual bandit (Table 2). When only one student feature is included, the contextual MAB algorithm chooses the optimal action 61% of the time for the first 50 students; this increases to 88% for the final 50 of the total 250. Including extra student features decreases performance - if ten features are included and only one impacts the policy, the overall proportion of optimal actions falls to only 53% for the first 50 students and 68% for the final 50 students. Yet, this is still an improvement over the non-contextual algorithm (Figure 1c), which cannot exceed 50% optimal actions in this scenario. These results suggest that even if the number of students who will interact with the system is not large and one is uncertain about which of a limited set of features will impact the result, including those features will on average have a positive impact on student outcomes if one is confident that the best version of the system for an individual student varies based on one of those features. However, the size of the student population matters in considering this tradeoff: with the "small" population of 50 students, the most expressive contextual model was barely above chance performance. Each additional student characteristic brings a cost, and if one is using an adaptive algorithm in a real educational technology, one must carefully consider the chance that any characteristic will actually influence which version of the technology is best, rather than including all characteristics that might possibly influence outcomes.
**Variability in policies across students**: As noted above, the extra parameters learned by the contextual MAB algorithm lead to the potential for greater variability in action choices within a single simulation. This can systematically affect groups of students when the algorithm attaches spurious relevance to a feature that does not actually impact outcomes. We can see this pattern by examining differences in action probabilities for students who differ only by characteristics that do not impact outcomes: that is, considering all students who have the same value for the first feature, how does
\begin{table}
\begin{tabular}{l l r r r r} \hline \hline & Superior bandit & \(|b|\) & 95\% CI & \(F(1,13996)\) & \(p\) & Cohen’s \(d\) \\ \hline Baseline & Non Contextual & 0.058 & [0.052, 0.064] & 6308.000 & \(<.001\) & 1.279 \\ Universal Optimal Action (1) & Non Contextual & 0.054 & [0.048, 0.06] & 6333.000 & \(<.001\) & 1.278 \\ Universal Optimal Action (2) & Non Contextual & 0.051 & [0.048, 0.054] & 15762.000 & \(<.001\) & 1.892 \\ Universal Optimal Action (3) & Non Contextual & 0.053 & [0.047, 0.06] & 5918.000 & \(<.001\) & 1.240 \\ Universal Optimal Action (4) & Non Contextual & 0.057 & [0.049, 0.065] & 3180.000 & \(<.001\) & 0.927 \\ Personalized Optimal Action & Contextual & 0.272 & [0.269, 0.276] & 34180.000 & \(<.001\) & 2.610 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Inferential statistics for proportion of optimal actions for the two bandit types across all outcome-generating models, simulated for 1000 trials of 250 students each. \(b\) represents the coefficient of improvement of results for the superior bandit after controlling for the number of contextual variables.
Figure 1: Swarm plots for the proportion of optimal actions for the two bandit types. Each point represents results from one trial with 250 students. For the universal optimal action, all scenarios show similar results; hence only scenario (1) is shown. The decreased performance of the contextual bandits in the baseline and universal optimal action scenarios, especially for large number of contextual variables, highlights the potential risks of personalization.
Figure 2: Average reward per student across 1–10 contextual variables for the two bandit types in the baseline model. In this model, the maximum possible expected reward is 0.6, and the expected reward for uniform random assignment is 0.5. Error bars represent 1 standard error.
the probability of choosing a particular action change based on their different values for the other features? As the number of contextual variables increases, the average maximum difference in action choice probability between such students also increases, from 9.8-15.6% when two student features are included in the model to 63.7-86.6% when ten features are included, after running through 250 students. This occurs both because of the greater expressivity of the model with more student features and because such a model is likely still learning about the impact of each of these features. This raises potential concerns about inequity: students who should be treated identically by the system may instead be treated systematically differently, based on features that do not impact how they learn. With both two and ten student characteristics, the largest variability was in the personalized optimal action scenario, suggesting that the benefits of the extra expressivity were not likely to be achieved for all students.
## 5 Impact of uneven distribution of student characteristics
The results of the previous simulations demonstrate that in situations where student characteristics (features) impact the outcome of different educational interventions, a contextual MAB algorithm only provides an improvement over a non-contextual algorithm when knowledge about the characteristic is necessary for choosing the best action. These simulations provided insight into how performance is impacted by different patterns of relationships between student characteristics and outcomes, with the assumption that those characteristics were uniformly distributed. However, in reality, some characteristics are likely to be more common than others. For example, when optimizing which hint to give to students who answer a question incorrectly, the algorithm is more likely to encounter a student with lower prior knowledge than one with higher prior knowledge. Thus we now relax this assumption and explore how changing the distribution of student characteristics impacts student outcomes for both types of MAB algorithms. In these simulations, we examined not only overall outcomes, but also outcomes for different groups of students. Attention to group-specific outcomes is vital for identifying inequitable impacts of adaptive algorithms.
### Methods
Similar to the first set of simulations, we compared non-contextual and contextual MAB implementations that used Thompson sampling across the same three horizons of 50, 250, and 1000 students, with a focus on 250; we repeated each simulation 1000 times. These simulations include a new independent variable: the proportion of students in each group. Specifically, for each simulated student, we varied the probability of the student being in the minority group (i.e., having a value of one for the first student characteristic) from 10% to 50%, using 10% increments. In addition to analyzing performance across all students, we examined performance for both the minority and majority groups separately. We also examined the _balanced success rate_, defined as the simple average of the group-specific performances [5]. Balanced success rate provides a way of examining performance that treats each group as equally important, even though one group may have more students than another.
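For concreteness, the balanced success rate can be computed as in the small sketch below; the function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def balanced_success_rate(optimal_chosen, group):
    """Simple average of per-group success rates, so that a small minority
    group counts as much as the majority group."""
    optimal_chosen, group = np.asarray(optimal_chosen), np.asarray(group)
    per_group = [optimal_chosen[group == g].mean() for g in np.unique(group)]
    return float(np.mean(per_group))

# Toy usage: a 90% majority (group 0) served well, a 10% minority (group 1) served poorly.
flags = np.array([1] * 85 + [0] * 5 + [0] * 8 + [1] * 2)
groups = np.array([0] * 90 + [1] * 10)
print(balanced_success_rate(flags, groups))  # 0.5 * (85/90) + 0.5 * (2/10) ~= 0.57
```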
### Results
As in the previous analysis, we used an ANCOVA to compare the performance for the two bandit types in terms of the proportion of optimal actions, but this time treating the percentage of the minority group as a covariate.
**One student characteristic:** With one student characteristic, the contextual MAB algorithm's performance for the minority group decreases as the size of the minority group becomes smaller, across all outcome-generating models (Figure 2(b) and Figure 4; \(t(29988)=-18.894\), \(p<0.001\), \(b=-0.271\), 95% \(\text{CI}=[-0.300,-0.243]\)). This leads the contextual MAB algorithm to have a lower balanced success rate for smaller minority groups. However, overall performance across all students is slightly better since so many more students are in the majority group (Figure 4; \(t(29988)=5.733\), \(p<0.001\), \(b=0.053\), 95% \(\text{CI}=[0.035,0.071]\)). In other words, decreasing the minority group size hurts the minority group more than it helps the majority group on a per-student basis; but replacing students from the minority group, who are assigned worse conditions, with students from the majority group, who are assigned better conditions, increases overall reward.
This pattern of results occurs because the contextual MAB has more uncertainty about the impact of the particular value of the student characteristic that appeared fewer times: in the least balanced case, we expect the minority group to be seen only 25 times on average given a horizon of 250 students. Hence, providing a model with the potential to personalize for a minority group is a calculated risk - although the extra expressivity is likely intended to improve experiences for all groups of students, it can negatively impact minority groups, with a larger negative impact for smaller minority groups.
In contrast, the non-contextual MAB algorithm is relatively unaffected by the changing distribution of student characteristics in both the baseline (\(t(9996)=-1.117\), \(p=0.264\)) and universal optimal action scenarios (\(t(39996)=-0.358\), \(p=0.721\)), as shown by Figure 2(a).
Figure 3: Proportion of optimal actions for minority groups with sizes of 10%–50% for the two bandit types across the three outcome-generating models, limited to one contextual variable. Standard errors, represented by the translucent bands, are negligible.
The changing distribution of student characteristics changes the expected rate of obtained reward from each action, but the changes are small enough that they have little impact on the algorithm's ability to choose optimal actions.
However, for the personalized optimal action model, the size of the minority group _does_ have a large impact on individual student outcomes for the non-contextual MAB algorithm: when the minority group is small, the algorithm learns to choose the action that is best for the majority and worst for the minority, resulting in the optimal action being chosen only 15% of the time for the minority group, within a horizon of 250 students (Figure 3a). When the two groups are of equal size, the algorithm has no systematic information that shows one action as consistently better or worse than the other; thus on average, it chooses the optimal action about 50% of the time for both groups.
**Additional student characteristics for the contextual MAB algorithm:** When the number of student characteristics increases, the impact in the baseline and universal optimal action models is relatively similar regardless of the size of the minority group. Balanced success rate decreases by a relatively small amount with the addition of more student characteristics, with average decreases ranging from 5.5 to 8.3 percentage points.
For the personalized optimal action scenario, increasing the number of student characteristics from one to five also decreases performance, but does so more acutely: the average decrease in balanced success rate is 11 percentage points for this scenario. The impact for small minority groups is particularly great: with five student characteristics and a minority group size of 10%, the optimal action is chosen just under half the time for the minority group, while with one student characteristic, the rate improved by 8.5 percentage points to 57%. Differences in optimal action proportions were about half of this size for the other scenarios. This illustrates the differences in what must be learned: for the personalized optimal action scenario, increasing the number of characteristics makes identifying which characteristic(s) matter even harder, especially with limited data.
## 6 Real-World Experiments
The first two sets of simulations can guide system designers when making decisions about personalizing based on student features. However, they have some limitations: while they considered a relatively large space of possibilities for how outcomes relate to student features, they focused on showing a general variety of cases rather than on specific cases that might be most common or of particular interest in education. To address this, we conducted several case studies of how MAB algorithms would have impacted actual experiments. We consider existing experimental data in which the optimal action would be personalized to see if the contextual MAB algorithms benefits students (as would be expected from our previous simulations) and also to demonstrate how factors from the previous simulations manifest in real-world scenarios.
The experiments were previously conducted within _ASSISTments_, an online learning system, and focused primarily on middle school math. We selected several experiments from [22] based on how student outcomes were related to their prior successes in the system as well as their assignment of either the control or experimental condition. Prior success in the system is a strong candidate to be a student feature for personalization: it is typically easily available and can serve as a proxy for prior knowledge, which has been shown to influence the success of different instructional strategies [24].
### Methods
To model previously collected ASSISTments data in our MAB framework, we (1) transformed both the student characteristics and the student outcomes into discrete variables,3 and (2) resampled from the data to generate outcomes when the MAB algorithm assigned a condition.
Footnote 3: MAB algorithms can handle non-categorical data, but we focus on the categorical case to mirror our prior simulations.
For step (1), we first discretized students' prior percent correct on problems within ASSISTments, the sole student
Figure 4: Comparing the proportion of optimal actions of the contextual bandit between 1 and 5 student features (i.e., contextual variables) for the majority and minority groups, as well as their balanced and overall averages, across minority group sizes of 10%–50%. Standard errors, represented by the translucent bands, are negligible.
feature that we included for personalization, into four quartiles: the 25% of students who began the homework assignment with the lowest prior percent correct (Q1), then those in the 26-50% range (Q2), and so on. The dataset contains some students who began the homework but were not assigned to a condition. Since the experiments in [22] mainly manipulated students' experiences (e.g. type of hint) when they answered a question incorrectly, students who have never answered incorrectly are not included in the experiment results (nor will the MAB algorithm make choices for them). However, they are included in the quartile cutoffs, which means that in the population of students with whom the MAB algorithm interacts, the number of students in each quartile may not be uniform.
We also chose and discretized the student outcome measures. These experiments included two different measures of student outcomes: whether each student completed the homework and the number of problems that each student answered in the homework. All experiments took place in the _SkillBuilder_ interface, where students must answer three consecutive problems correctly to complete the homework. Completion of homework (denoted _Completed HW_) is already discrete and could easily be collected in real time; two of our simulations use this measure. However, it is relatively coarse, as the vast majority of students completed the homework. Thus, we also used a discretized version of the number of problems to completion (denoted _Completed Quickly_). If a student completes the homework, doing so in fewer problems is a better outcome than doing so in more problems. Outcomes were based on the median problem count for students who completed the homework. Students who completed the homework in the median number of problems or fewer had positive reward, while those who did not complete the homework or completed it more slowly had no reward. Though in practical use prior data would be needed to select an appropriate cut point, basing the cut point on the collected data allows our simulations to measure student performance more closely.
For step (2), we simulate a MAB algorithm's performance by repeatedly sampling students from the experiment. Within each trial, we fix the number of timesteps to the total number of students in the original experiment. At each timestep, a random student is sampled, and the algorithm then selects a condition for that student. To compute the outcome, we sample from all outcomes for students in the experiment who were in the same quartile for prior percent correct and who experienced the same chosen condition. Each trial thus represents an experiment of the same size as the original, with the students drawn with replacement from the experimental data. We randomized each of the 1000 trials, though for each trial, we use the same student ordering for both types of MAB algorithms.
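A sketch of one such trial is given below; the array names and data layout are illustrative rather than the released ASSISTments format, and the bandit object is assumed to expose `choose(x)` / `update(action, x, reward)` methods as in the contextual sketch in Section 3.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_trial(bandit, quartile, condition, outcome):
    """Replay one simulated deployment against previously collected data.

    quartile, condition, outcome are parallel arrays with one entry per student
    in the original experiment; every (quartile, condition) cell is assumed to
    be non-empty so that an outcome can always be resampled.
    """
    quartile, condition, outcome = map(np.asarray, (quartile, condition, outcome))
    horizon = len(quartile)               # same size as the original experiment
    rewards = []
    for _ in range(horizon):
        i = rng.integers(horizon)         # draw a student with replacement
        x = np.eye(4)[quartile[i]]        # one-hot prior-percent-correct quartile
        a = bandit.choose(x)              # algorithm assigns a condition
        # Outcome resampled from students in the same quartile who actually
        # experienced the chosen condition in the original experiment.
        pool = outcome[(quartile == quartile[i]) & (condition == a)]
        r = int(rng.choice(pool))
        bandit.update(a, x, r)
        rewards.append(r)
    return float(np.mean(rewards))
```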
In our case studies, we focus on one problem set (#293151) where students are unevenly distributed across quartiles, with more lower-performing students (Q1), and one problem set (#263057) in which students are more evenly distributed across quartiles (see Table 3). With the two different outcome measures, this resulted in four simulation scenarios. We chose these problem sets because they had student outcomes that varied based on both condition and student quartile (see Figure 5), representing a setting where there is the most potential benefit from a contextual MAB algorithm.
### Results
In all four settings, at least one quartile of students (out of Q1-Q4) was helped by the contextual MAB algorithm, and in three of the four settings, average outcomes across all students were improved by personalization.
**Uneven Student Distribution, Completed HW:** As shown in Figure 5(a), in this scenario, students in Q4 were much more likely to experience their optimal condition with a contextual MAB algorithm. This occurs because the condition that is best for the average student is the one that is worse for Q4: the non-contextual MAB thus optimizes in a way
Figure 5: Original average reward per student in the ASSISTments data [22], across the four quartiles (Q1–Q4) of prior percent correct and their averages, for the two conditions in the experiments (control and experimental) illustrates our model parameters of real-world scenarios.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & Problem Set & Students & Q1 Size & Q2 Size & Q3 Size & Q4 Size \\ \hline Uneven Student Distribution & 293151 & 320 & 113 (35.3\%) & 100 (31.3\%) & 69 (21.6\%) & 38 (11.9\%) \\ Even Student Distribution & 263057 & 129 & 33 (25.6\%) & 28 (21.7\%) & 34 (26.4\%) & 34 (26.4\%) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Student totals and group distributions in the original ASSISTments data [22] for the two problem sets of interest. Prior percent correct is discretized before removing students who have never answered incorrectly to experience the assigned condition, biasing group size towards the lower quartiles in the Uneven Student Distribution.
that has a systematic, negative outcome for Q4 students. Conversely, the contextual MAB algorithm does not do as well as the non-contextual algorithm for students in Q1-Q3 because of the extra exploration needed to learn about more variables that are not necessary to help these students. Overall, this means that the contextual MAB algorithm had a slightly lower rate of choosing the optimal action than the non-contextual MAB. However, the difference is only 1% of students, and this difference is even smaller in terms of average reward: average reward is reduced by less than 0.005 overall, while it is increased for Q4 students by about 0.044. In this experiment, reward rates are high in general (greater than 70% for all conditions and quartiles). Thus with 320 students, small differences in condition assignment often are not reflected in large differences in outcomes. Q1-Q3 students have very similar outcomes across the two methods of condition assignment; Q4 has the greatest difference in success for one condition versus another, and thus the large increase in optimal condition assignment for these students does boost their average outcomes.
**Uneven Student Distribution, Completed Quickly:** Using the Completed Quickly outcome measure with the same students, students in all quartiles were more likely to be assigned to the optimal condition when the contextual MAB algorithm was used (Figure (b)b). This pattern occurs because the overall probability of a positive outcome is very similar across the two conditions when student quartiles are ignored (shown by _All_ in Figure (b)b), making it difficult for the non-contextual bandit to learn that the experimental condition is better on average. In contrast, the differences between conditions are large for all quartiles except Q2. Thus, the information from the student quartiles makes the problem easier for the contextual MAB algorithm, though the relatively small difference between conditions for Q2 results in the lowest overall proportion of optimal action choices. This simulation thus importantly shows a scenario that was not explored in the prior simulations, in which knowing about extra information increases the number of parameters to learn but makes learning about each of those individual parameters easier.
**Even Student Distribution, Completed HW:** For this scenario, there were again very high reward probabilities across all conditions, and a relatively small overall difference between conditions but larger differences between conditions for three of the four quartiles. Results here had some of the characteristics of each of the previous two simulations: Q1 students experienced the largest gain in optimal condition assignment from the contextual MAB algorithm, as the best condition for them was not the same as the best condition overall. Q2 and Q3 students had similar rates of optimal condition performance regardless of algorithm. With the non-contextual MAB algorithm, there were fewer parameters to learn about, but the difference between arms was smaller, making the problem harder; with the contextual MAB algorithm, the increase in parameters did increase the time to learn, but this negative impact was balanced by the fact that the difference between conditions was larger for Q2 and Q3 than for students overall. All Q4 students completed their homework, so no differences between algorithms were possible. Overall, the contextual MAB algorithm has slightly better performance than the non-contextual MAB algorithm due to the improvement for Q1 students.
**Even Student Distribution, Completed Quickly:** Finally, using the Completed Quickly outcome measure for this second set of students, the results were still largely in favor of the contextual MAB algorithm. As the experimental condition is better on average, students in Q1 experience a large positive impact through personalization because the control condition is uniquely better for them. Q2 and Q3 also experience positive impacts, with the impact on Q2 students being larger because the difference between the two conditions is larger, which speeds learning for the contextual bandit. Conversely, Q4 students experience slightly less positive outcomes under the contextual MAB algorithm because the small difference between conditions slows learning; in comparison, the non-contextual MAB algorithm is more beneficial for Q4 students since the overall difference between conditions across all students is larger than the difference for Q4 students only.
## 7 Discussion
Real-time adaptive algorithms can respond quickly to optimize experiences for individual students, and their expressivity for personalizing experiences increases with each additional type of student information they are given. In this paper, we have shown that this expressivity may be worthwhile
Figure 6: Swarm plots for the proportion of optimal actions for the two bandit types for each quartile of prior percent correct and their averages. Each point represents results from one of the 1000 trials per experiment, and the solid black lines indicate the mean of each swarm plot. Points for Q4 of Even Student Distribution, Completed HW are clustered at 1.0 because both actions are optimal. The extra information learned by the contextual bandit improves performance in most cases, but the bimodality for some quartiles demonstrates the associated systematic risks.
under two circumstances: when it is _necessary_ for expressing the best policy to improve student outcomes, as shown by the simulation results in Section 4.2, and in certain cases when including those characteristics magnifies differences between conditions for subgroups of students, as shown by the results in Section 6.2. For the scenarios we considered, in which there are only two interventions and binary rewards, this magnification in differences can only occur on average if the optimal policy varies across students (or if there is a ceiling or floor effect for rewards for some subgroup of students), but it is possible this situation could occur in other circumstances with more complex environments.
Our results also show the potential ethical concerns inherent in choosing whether to include student characteristics when these characteristics are not uniformly distributed. If these characteristics matter, then failing to include them may lead to a learned policy that systematically optimizes for the majority but not for a minority group. However, when this expressivity is not necessary, it increases variability across students and also increases the time for identifying the correct policy, thus significantly decreasing the number of students assigned the best version of the technology and slightly decreasing their average outcomes. In these cases, a minority group may disproportionately bear the cost of the need for the algorithm to learn additional parameters.
There are several limitations to our results. First, we have focused only on discrete student features and discrete outcomes but continuous parameters are also common. For example, we might measure student scores rather than homework completion or model prior knowledge as an estimated ability parameter. If one wanted to extend these analyses to real-valued student features, one could easily incorporate them into the current modeling framework with versions of Thompson sampling for real-valued outcomes [2], and there exist metrics from a large literature for assessing whether students are treated fairly (e.g., [6]). Using real-valued parameters is unlikely to significantly impact trends in results, except that defining student groups for analyzing equitable outcomes is more difficult. Our results from our universal optimal action scenarios show that, with binary rewards, knowledge of the student features is not beneficial if it is unnecessary for expressing the best policy. However, these results may not translate to the real-valued rewards case, where the latent student features will add to the variability in the distributions observed by the non-contextual bandit, and exploring these scenarios is an important step for future work. A second limitation is that our simulations comprise only a single student feature that influences the outcome, though in actual deployments multiple features may influence the best policy. Still, we believe that our results can guide system designers when thinking about such scenarios, especially in weighing the costs and benefits of including each possible variable.
The results from the real-world scenarios highlight the potential value of MAB algorithms for educational technologies. For almost all scenarios and groups, both types of MAB algorithms chose the optimal condition more often than if students had been assigned uniformly at random, and average rewards were in many cases very close to the optimal expected reward (i.e., the reward if the optimal action had been chosen for all students). The absolute difference in rewards between the two bandit types was relatively small (at most 0.075), and the contextual bandit achieved at worst 12% less than the optimal expected reward for any student group. Yet the earlier simulations urge caution in incorporating student characteristics, due to (1) decreases in achieved outcomes when these characteristics are unnecessary and (2) the systematically different treatment of students based on irrelevant characteristics, as illustrated by the large differences in condition assignment probability between vectors of student characteristics with identical outcome probabilities. Thus, system designers should weigh the risk of not personalizing when the best policy for the minority differs from that for the majority against these side effects of personalization, and ultimately strive to include only variables that past evidence suggests differentially impact outcomes.
One could make a number of extensions of this work for using MAB algorithms to improve and personalize educational technologies. First, contextual MAB algorithms might mitigate issues of bias when different types of students interact with an educational technology and, while all are most helped by the same version of the technology, their outcomes have different distributions. For example, struggling students may complete homework later, leading the MAB algorithm's early estimates to be non-representative of the broader population. Prior work has shown that this bias significantly worsens inference about the effectiveness of the technology as well as expected student outcomes [19]: the use of a contextual MAB algorithm could allow the system to adapt to such differences across students. Second, if the technology is used by a large number of students, the set of variables used by the contextual algorithm could be increased as more data are collected. Such a system might improve consistency across student outcomes, while still personalizing based on truly relevant features that are justified by the sufficient information collected. The work in this paper provides both a starting point for considering what scenarios, algorithms, and metrics should be explored in future work and guidance for system designers who would like to deploy MAB algorithms within their own technologies but are uncertain about which student characteristics, if any, to include for personalization. While including all possible characteristics might capture the desire for maximum personalization, this work points to the potential costs of such personalization and suggests that considering how likely it is that such characteristics will matter is important both for performance and for equitability across students.
|
2306.15577 | Retrospective: A Scalable Processing-in-Memory Accelerator for Parallel
Graph Processing | Our ISCA 2015 paper provides a new programmable processing-in-memory (PIM)
architecture and system design that can accelerate key data-intensive
applications, with a focus on graph processing workloads. Our major idea was to
completely rethink the system, including the programming model, data
partitioning mechanisms, system support, instruction set architecture, along
with near-memory execution units and their communication architecture, such
that an important workload can be accelerated at a maximum level using a
distributed system of well-connected near-memory accelerators. We built our
accelerator system, Tesseract, using 3D-stacked memories with logic layers,
where each logic layer contains general-purpose processing cores and cores
communicate with each other using a message-passing programming model. Cores
could be specialized for graph processing (or any other application to be
accelerated).
To our knowledge, our paper was the first to completely design a near-memory
accelerator system from scratch such that it is both generally programmable and
specifically customizable to accelerate important applications, with a case
study on major graph processing workloads. Ensuing work in academia and
industry showed that similar approaches to system design can greatly benefit
both graph processing workloads and other applications, such as machine
learning, for which ideas from Tesseract seem to have been influential.
This short retrospective provides a brief analysis of our ISCA 2015 paper and
its impact. We briefly describe the major ideas and contributions of the work,
discuss later works that built on it or were influenced by it, and make some
educated guesses on what the future may bring on PIM and accelerator systems. | Junwhan Ahn, Sungpack Hong, Sungjoo Yoo, Onur Mutlu, Kiyoung Choi | 2023-06-27T15:56:19Z | http://arxiv.org/abs/2306.15577v1 | # _Retrospective:_ A Scalable Processing-in-Memory Accelerator for Parallel Graph Processing
###### Abstract
Our ISCA 2015 paper [1] provides a new programmable processing-in-memory (PIM) architecture and system design that can accelerate key data-intensive applications, with a focus on graph processing workloads. Our major idea was to completely rethink the system, including the programming model, data partitioning mechanisms, system support, instruction set architecture, along with near-memory execution units and their communication architecture, such that an important workload can be accelerated at a maximum level using a distributed system of well-connected near-memory accelerators. We built our accelerator system, Tesseract, using 3D-stacked memories with logic layers, where each logic layer contains general-purpose processing cores and cores communicate with each other using a message-passing programming model. Cores could be specialized for graph processing (or any other application to be accelerated).
To our knowledge, our paper was the first to completely design a near-memory accelerator system from scratch such that it is both generally programmable and specifically customizable to accelerate important applications, with a case study on major graph processing workloads. Ensuing work in academia and industry showed that similar approaches to system design can greatly benefit both graph processing workloads and other applications, such as machine learning, for which ideas from Tesseract seem to have been influential.
This short retrospective provides a brief analysis of our ISCA 2015 paper and its impact. We briefly describe the major ideas and contributions of the work, discuss later works that built on it or were influenced by it, and make some educated guesses on what the future may bring on PIM and accelerator systems.
## I Background, Approach & Mindset
We started our research when 3D-stacked memories (e.g., [2, 3, 4]) were viable and seemed to have promise for building effective and practical processing-near-memory systems. Such near-memory systems could lead to improvements, but there was little to no research that examined how an accelerator could be completely (re-)designed using such near-memory technology, from its hardware architecture to its programming model and software system, and what the performance and energy benefits could be of such a re-design. We set out to answer these questions in our ISCA 2015 paper [1].
We followed several major principles to design our accelerator from the ground up. We believe these principles are still important: a major contribution and influence of our work was in putting all of these together in a cohesive full-system design and demonstrating the large performance and energy benefits that can be obtained from such a design. We see a similar approach in many modern large-scale accelerator systems in machine learning today (e.g., [5, 6, 7, 8, 9]). Our principles are:
1. _Near-memory execution_ to enable/exploit the high data access bandwidth modern workloads (e.g., graph processing) need and to reduce data movement and access latency.
2. _General programmability_ so that the system can be easily adopted, extended, and customized for many workloads.
3. _Maximal acceleration capability_ to maximize the performance and energy benefits. We set ourselves free from backward compatibility and cost constraints. We aimed to completely re-design the system stack. Our goal was to explore the maximal performance and energy efficiency benefits we can gain from a near-memory accelerator if we had complete freedom to change things as much as we needed. We contrast this approach to the _minimal intrusion_ approach we also explored in a separate ISCA 2015 paper [10].
4. _Customizable to specific workloads_, such that we can maximize acceleration benefits. Our focus workload was graph analytics/processing, a key workload at the time and today. However, our design principles are not limited to graph processing and the system we built is customizable to other workloads as well, e.g., machine learning, genome analysis.
5. _Memory-capacity-proportional performance_, i.e., processing capability should proportionally grow (i.e., scale) as memory capacity increases and vice versa. This enables scaling of data-intensive workloads that need both memory and compute.
6. _Exploit new technology (3D stacking)_ that enables tight integration of memory and logic and helps multiple above principles (e.g., enables customizable near-memory acceleration capability in the logic layer of a 3D-stacked memory chip).
7. _Good communication and scaling capability_ to support scalability to large dataset sizes and to enable memory-capacity-proportional performance. To this end, we provided scalable communication mechanisms between execution cores and carefully interconnected small accelerator chips to form a large distributed system of accelerator chips.
8. _Maximal and efficient use of memory bandwidth_ to supply the high-bandwidth data access that modern workloads need. To this end, we introduced new, specialized mechanisms for prefetching and a programming model that helps leverage application semantics for hardware optimization.
## II Contributions and Influence
We believe the major contributions of our work were 1) complete rethinking of how an accelerator system should be designed to enable maximal acceleration capability, and 2) the design and analysis of such an accelerator with this mindset and using the aforementioned principles to demonstrate its effectiveness in an important class of workloads.
One can find examples of our approach in modern large-scale machine learning (ML) accelerators, which are perhaps the most successful incarnation of scalable near-memory execution architectures. ML infrastructure today (e.g., [5, 6, 7, 8, 9]) consists of accelerator chips, each containing compute units and high-bandwidth memory tightly packaged together, and features scale-up capability enabled by connecting thousands of such chips with high-bandwidth interconnection links. The system-wide rethinking that was done to enable such accelerators and many of the principles used in such accelerators resemble our ISCA 2015 paper's approach.
The "memory-capacity-proportional performance" principle we explored in the paper shares similarities with how ML workloads are scaled up today. Similar to how we carefully sharded graphs across our accelerator chips to greatly improve effective memory bandwidth in our paper, today's ML workloads are sharded across a large number of accelerators by leveraging data/model parallelism and optimizing the placement to balance communication overheads and compute scalability [11, 12]. With the advent of large generative models requiring high memory bandwidth for fast training and inference, the scaling behavior where capacity and bandwidth are scaled together has become an essential architectural property to support modern data-intensive workloads.
The "maximal acceleration capability" principle we used in Tesseract provides much larger performance and energy improvements and better customization than the "minimalist" approach that our other ISCA 2015 paper on _PIM-Enabled Instructions_[10] explored: "minimally change" an existing
system to incorporate (near-memory) acceleration capability to ease programming and keep costs low. So far, the industry has more widely adopted the maximal approach to overcome the pressing scaling bottlenecks of major workloads. The key enabler that bridges the programmability gap between the maximal approach favoring large performance & energy benefits and the minimal approach favoring ease of programming is compilation techniques. These techniques lower well-defined high-level constructs into lower-level primitives [12, 13]; our ISCA 2015 papers [1, 10] and a follow-up work [14] explore them lightly. We believe that a good programming model that enables large benefits coupled with support for it across the entire system stack (including compilers & hardware) will continue to be important for effective near-memory system and accelerator designs [14]. We also believe that contrasting the maximal and minimal approaches initially explored in our two ISCA 2015 papers is a useful way of exploring emerging technologies (e.g., near-memory accelerators) to better understand the tradeoffs of system designs that exploit such technologies.
## III Influence on Later Works
Our paper was at the beginning of a proliferation of scalable near-memory processing systems designed to accelerate key applications (see [15] for many works on the topic). Tesseract has inspired many near-memory system ideas (e.g., [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]) and served as the de facto comparison point for such systems, including near-memory graph processing accelerators that built on Tesseract and improved various aspects of Tesseract. Since machine learning accelerators that use high-bandwidth memory (e.g., [29, 5]) and industrial PIM prototypes (e.g., [30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41]) are now in the market, near-memory processing is no longer the "eccentric" architecture it used to be when Tesseract was originally published.
Graph processing & analytics workloads remain an important and growing class of applications in various forms, ranging from large-scale industrial graph analysis engines (e.g., [42]) to graph neural networks [43]. Our focus on large-scale graph processing in our ISCA 2015 paper increased attention to this domain in the computer architecture community, resulting in subsequent research on efficient hardware architectures for graph processing (e.g., [44, 45, 46]).
## IV Summary and Future Outlook
We believe that our ISCA 2015 paper's principled thinking of system design to accelerate an important class of data-intensive workloads provided significant value and enabled/influenced a large body of follow-on works and ideas. We expect that such rethinking of system design for key workloads, especially with a focus on "maximal acceleration capability," will continue to be critical as pressing technology and application scaling challenges increasingly require us to think differently to substantially improve performance and energy (as well as other metrics). We believe the principles exploited in Tesseract are fundamental and they will remain useful and likely become even more important as systems become more constrained due to the continuously-increasing memory access and computation demands of future workloads. We also project that as hardware substrates for near-memory acceleration (e.g., 3D stacking, in-DRAM computation, NVM-based PIM, processing using memory [15]) evolve and mature, systems will take advantage of them even more, likely using principles similar to those used in the design of Tesseract.
|
2310.07695 | Strong lensing by galaxies: past highlights, current status, and future
prospects | Galaxy-scale strong lensing is a powerful tool in Astrophysics and Cosmology,
enabling studies of massive galaxies' internal structure, their formation and
evolution, stellar initial mass function, and cosmological parameters. In this
conference proceeding, we highlight key findings from the past decade in
astrophysical applications of strong lensing at the galaxy scale. We then
briefly summarize the present status of discovery and analyses of new samples
from recent or ongoing surveys. Finally, we offer insights into anticipated
developments in the upcoming era of big data shaping the future of this field,
thanks to the Rubin, Euclid, and Roman observatories. | Anowar J. Shajib | 2023-10-11T17:45:58Z | http://arxiv.org/abs/2310.07695v1 | # Strong lensing by galaxies: past highlights, current status, and future prospects
###### Abstract
Galaxy-scale strong lensing is a powerful tool in Astrophysics and Cosmology, enabling studies of massive galaxies' internal structure, their formation and evolution, stellar initial mass function, and cosmological parameters. In this conference proceeding, we highlight key findings from the past decade in astrophysical applications of strong lensing at the galaxy scale. We then briefly summarize the present status of discovery and analyses of new samples from recent or ongoing surveys. Finally, we offer insights into anticipated developments in the upcoming era of big data shaping the future of this field, thanks to the Rubin, _Euclid_, and _Roman_ observatories.
Strong lensing, galaxies
## 1 Introduction
Strong lensing by galaxies provides a valuable probe of the internal structure of the lensing galaxies (Shajib et al., 2022). Since elliptical galaxies are the most common type of lensing galaxies, studies of strong lensing galaxies have mainly focused on massive ellipticals. The internal structure of elliptical galaxies is shaped through cosmic time by the initial baryonic infall, star formation, subsequent outflow induced by baryonic feedback mechanisms, and hierarchical formation through mergers. All of these baryonic processes are also accompanied by adiabatic contraction and expansion of the dark matter. Thus, constraining the internal structure of galaxies, that is, the distribution of baryonic and dark matter, at different redshifts can shed light on their formation and evolutionary history. Stellar dynamics, especially the spatially resolved kind, of local elliptical galaxies have provided great insight into their formation and evolution (Cappellari, 2016). However, strong lensing offers unique complementarities to stellar dynamics in providing more accessible and informative data for galaxies at redshift \(z\gtrsim 0.5\) and by providing an independent tracer of the mass to break degeneracies inherent to both probes, when combined.
Since the first discovery of strong lensing systems at the galaxy scale in the eighties, the field has come a long way in efficiently harnessing the rich information in large samples of strong lenses. Thanks to these large samples, for example, the Sloan Lens ACS Survey (SLACS; Bolton et al., 2006) and the Cosmic Lens All Sky Survey (CLASS; Myers et al., 1995), the internal structure of elliptical galaxies at \(z\sim 0.3\) were unveiled with significant precision in the 2000s. However, at the beginning of the past decade (2010s), Treu (2010) posed the following two main outstanding questions to be solved by strong lensing studies of galaxies:
1. How do luminous and dark matter density profiles evolve over cosmic time?
2. Does the dark matter density profile universally follow the Navarro-Frenk-White (NFW; Navarro et al., 1996, 1997) profile predicted by cosmological simulations?
In this proceeding, we briefly introduce the progress made on these two questions over the past decade (Section 2). We then introduce the current status of the discovery of larger lens samples and their analyses (Section 3), before concluding the proceeding with a discussion on the prospects for the forthcoming era of big data (Section 4). The readers are invited to see Shajib et al. (2022) for a more comprehensive review of the topic.
## 2 Highlights of past results
In this section, we present some major results from the literature on the internal structure of elliptical galaxies (Section 2.1) and the stellar initial mass function (IMF; Section 2.2).
### Internal structure of elliptical galaxies
Numerous strong lensing studies found that the total density profile in elliptical galaxies is well approximated by a power-law profile, that is, \(\rho(r)\propto r^{-\gamma}\). These studies also showed that the power-law profile is nearly isothermal, that is, \(\gamma\sim 2\). This result holds for the sample mean of the logarithmic slope \(\gamma\) constrained by both joint lensing-dynamics analyses [e.g., \(2.08\pm 0.03\) from the SLACS (Auger et al., 2009), \(2.11\pm 0.02\) from the BOSS Emission-Line Lens Survey (BELLS; Bolton et al., 2012), \(2.05\pm 0.06\) from the Strong Lensing Legacy Survey (SL2S; Sonnenfeld et al., 2013)] and strong-lensing-only analyses [e.g., \(2.08\pm 0.03\) from SLACS (Shajib et al., 2020), and \(2.09\pm 0.03\) from SLACS and BELLS (Etherington et al., 2022)]. These analyses also found intrinsic scatters in \(\gamma\) between \(0.13\) and \(0.19\).
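For context, the label "isothermal" for \(\gamma\simeq 2\) can be traced to a standard textbook fact (stated here for the reader; it is not a result of the surveys just cited): a spherical \(\rho\propto r^{-2}\) profile yields a flat circular-velocity curve.

```latex
% Why \gamma = 2 is called "isothermal" (singular isothermal sphere):
\[
  \rho(r)=\rho_{0}\left(\frac{r}{r_{0}}\right)^{-2}
  \;\Longrightarrow\;
  M(<r)=\int_{0}^{r}4\pi r'^{2}\,\rho(r')\,\mathrm{d}r'=4\pi\rho_{0}r_{0}^{2}\,r,
\]
\[
  v_{c}^{2}(r)=\frac{G\,M(<r)}{r}=4\pi G\rho_{0}r_{0}^{2}=\mathrm{const},
\]
% i.e., a flat rotation curve, the defining property of the singular isothermal sphere.
```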
The logarithmic slope \(\gamma\) constrained by the aforementioned studies can be compared to those from cosmological hydrodynamical simulations with various prescriptions of baryonic physics to learn about the particular baryonic physics that has been at play in shaping the elliptical galaxies. For example, Mukherjee et al. (2021) rule out scenarios such as no AGN feedback, lower-viscosity AGN accretion, or environment-dependent stellar feedback (Fig. 1).
The evolution of the slope of the mass density profile over redshift can additionally provide further insights into the baryonic feedback processes and merger types that played crucial roles during the lifespan of elliptical galaxies at \(z<1\). Hydrodynamic simulations, such as IllustrisTNG and Magneticum, show that gas-poor mergers leave the logarithmic slope in elliptical galaxies unchanged, or make it slightly shallower with decreasing redshift (Fig. 2; Remus et al., 2017; Wang et al., 2020). Conversely, gas-rich mergers would have made the average logarithmic slope steeper. In apparent contrast with the simulations, joint lensing-dynamics studies found that the logarithmic slope \(\gamma\) gets steeper with decreasing redshift, thus favoring the gas-rich merger scenario (Ruff et al., 2011; Bolton et al., 2012). However, this discrepancy between simulations and observations can also be explained by systematic effects, such as selection effect (Sonnenfeld et al., 2015), projection effect (Remus et al., 2017), and parametrization of the stellar anisotropy (Xu et al., 2017).
Furthermore, Etherington et al. (2022) found no correlation in the logarithmic slopes obtained from lensing-only analysis and joint lensing-dynamic analysis. These authors, therefore, argue that the true mass distribution in elliptical galaxies must deviate from a pure power-law form. To allow more radial flexibility than a power law, several studies modeled the mass distribution in elliptical galaxies using a two-component model with one mass profile describing the dark matter halo (typically with an NFW profile) and the other describing the stellar mass distribution. However, some disagreement exists in the literature about the departure from the "vanilla" (i.e., not contracted nor expanded) NFW profile and the presence of a mass-to-light ratio gradient in the stellar mass distribution. Dutton and Treu (2014) found the vanilla NFW profile fits the lensing and dynamics data, whereas scenarios with adiabatic contraction or expansion do not. However, Oldham and Auger (2018) found a bimodality in their sample with the inner slope shallower (expanded) than the vanilla NFW in one mode and
steeper (contracted) in the other. In contrast, Sonnenfeld et al. (2018) found that a mass-to-light ratio gradient in the stellar mass profile, not a departure from the vanilla NFW profile, better fits the combined dataset from strong lensing, dynamics, and weak lensing. Finally, Shajib et al. (2021) ruled out contraction or expansion in the dark matter halos and found no strong evidence in favor of a mass-to-light ratio gradient in the stellar mass from the combination of strong lensing, dynamics, and weak lensing data. However, these discrepancies in the literature could potentially arise from different treatments of stellar anisotropy parameterizations and light profile assumptions in the dynamical modeling.
Figure 1: Comparison of the constrained logarithmic slope \(\gamma\) and Einstein radius \(R_{\rm Ein}\) between observations (shaded ellipses) and predictions from the EAGLE simulation with various baryonic physics prescriptions (points with errorbars). Some prescriptions, such as the no-AGN feedback model (grey point), can be ruled out. Figure re-illustrated from Mukherjee et al. (2021).
Figure 2: Evolution of the average logarithmic slope \(\gamma\) with redshift \(z\). The green and pink arrows indicate the direction of evolution in \(\gamma\) for gas-rich and gas-poor mergers, respectively. Simulations (orange and blue shaded stripes) indicate gas-poor mergers make the slope slightly shallower with decreasing redshift. However, joint lensing–dynamics observations indicate the opposite trend favoring the gas-rich merger scenario.
### Stellar IMF
Most strong lensing studies found the stellar IMF to be heavier (i.e., Salpeter IMF; e.g., Treu et al., 2010; Spiniello et al., 2011; Sonnenfeld et al., 2012) than that found in the Milky Way (i.e., Chabrier IMF). These results also agree very well with several non-lensing constraints on the IMF, such as those based on stellar dynamics and the stellar population synthesis method (Cappellari et al., 2012; La Barbera et al., 2013; Spiniello et al., 2014). Fig. 3 illustrates this agreement between lensing-based and dynamics-based studies and the apparent dependency of the IMF on the velocity dispersion (Posacki et al., 2015).
However, there are also reports of a lighter IMF, such as the Chabrier IMF, from lensing-based studies (e.g., Ferreras et al., 2010; Smith et al., 2015; Sonnenfeld et al., 2019). That said, systematics stemming from parameterizations of the dark matter profile, the mass-to-light-ratio gradient, or the stellar anisotropy cannot be ruled out as the source of this disagreement.
Despite much progress with multiple large samples of lenses over the last decade, the two questions posed in Section 1 are yet to be answered definitively. To achieve that goal in the future, larger statistical samples, combining strong lensing, weak lensing, and dynamics, and adopting models with more radial flexibility than the power law would be essential. In the next section, we describe the current efforts in these aspects.
## 3 Presently emerging samples and analyses
One way to achieve a larger statistical sample than the previously available ones is to create a "super-sample" assembled from archival samples. Following this approach, Project Dinos (PI: Shajib) was created to study the evolution of elliptical galaxies' internal structure at \(z<1\) using as many suitable archival samples as possible. From this project, Tan et al. (in preparation) will present an analysis of the "super-sample" (SLACS\(+\)SL2S\(+\)BELLS) with available archival _Hubble Space Telescope_ (_HST_) data and ground-based velocity dispersion measurements.
Recently, many new samples have also been discovered, primarily using machine-learning techniques applied to several ground-based surveys (e.g., Agnello et al., 2018; Delchambre et al., 2019; Wong et al., 2022; Lemon et al., 2023). An emerging sample with high-resolution _HST_ imaging and high-quality ground-based velocity dispersion measurements is the Astro3D Galaxy Evolution with Lenses (AGEL) sample (Tran et al., 2022). This sample primarily consists of newly discovered strong lenses from the Dark Energy Survey data (Jacobs et al., 2019, 2019), and includes mostly galaxy-scale lenses with a subset of group-scale lenses.
Figure 3: Compilation of results on the stellar IMF from lensing and dynamics. The IMF mismatch parameter \(\alpha=1\) represents the Salpeter IMF. The majority of constraints from lensing and dynamics indicate a trend in the IMF with the velocity dispersion (purple line, Posacki et al., 2015). However, some measurements, for example, the blue points from Smith et al. (2015), disagree with this trend. Figure re-illustrated from Smith et al. (2015).
Recently, two _HST_ Schedule Gap Programs have been approved to take imaging data of the newly discovered lens systems from different surveys: one for galaxy-galaxy lenses (HST-SNAP-17307, PIs: Tran & Shajib), and the other for lensed quasar systems (HST-SNAP-17308, PI: Lemon). These two programs, combined, will obtain high-resolution _HST_ imaging of \(\sim\)450-600 galaxy-scale lenses over the next 3-5 years.
## 4 Future prospects in the era of big data
With the imminent launch of the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST), and from current and future space-based surveys such as the _Euclid_ and the _Roman Space Telescope_, even larger samples with size \(\mathcal{O}(10^{4})-\mathcal{O}(10^{5})\) will be discovered (Oguri and Marshall, 2010; Collett, 2015). Discovering these lenses would primarily utilize machine learning techniques with more novel algorithms currently being developed (e.g., Akhazhanov et al., 2022). However, analyzing such unprecedentedly large samples will also pose a computational challenge. One way to tackle this challenge is to use machine learning for lensing parameter extraction from the data, which is currently an active area of research (e.g., Hezaveh et al., 2017; Morningstar et al., 2019; Schuldt et al., 2021; Adam et al., 2022; Poh et al., 2022; Mishra-Sharma and Yang, 2022; Biggio et al., 2023). However, depending on specific scientific requirements, the conventional forward modeling approach would still be favorable for a subset of all the new lenses. This subset of lenses will still be too large (e.g., \(\mathcal{O}(10^{3})\)) for feasible analysis with the traditional method that requires a considerable amount of fine-tuning by a human modeler on a case-by-case basis. Several efforts are underway to develop automated lens modeling pipelines, e.g., PyAutoLens(Nightingale et al., 2021), using lenstronomy(Birrer and Amara, 2018; Birrer et al., 2021; Shajib et al., 2019, 2021; Schmidt et al., 2022), and using Glee(Ertl et al., 2023).
As the statistical precision grows tighter with these future large samples, systematic effects such as the selection function will become increasingly important, if not already. Using simulation, Sonnenfeld et al. (2023) show that strong lensing samples can be biased in the IMF mismatch parameter by \(\sim\)10% and in the dark matter inner slope by \(\sim\)5% (Fig. 4). However, the selection effect from more factors, such as the halo concentration, environment, viewing angles, etc., could still be investigated by future studies. Accurately estimating the selection bias from these factors would allow future analyses of strong lens samples to correct the potential selection biases.
Although the last decade saw a large amount of progress in discovering new lens samples and in novel modeling and analysis techniques, several key questions about the internal structure and evolution of massive elliptical galaxies still lack definitive answers. This decade is expected to revolutionize all applications of strong lensing at the galaxy scale due to the two or more orders of magnitude increase in the lens sample size. Thus, Rubin, _Euclid_, and _Roman_ observatories will play key roles in providing those answers in the forthcoming era of big data.
###### Acknowledgements.
AJS was supported by NASA through the NASA Hubble Fellowship grant HST-HF2-51492 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555.
|
2305.03645 | Algorithmic Decision Processes | We develop a full-fledged analysis of an algorithmic decision process that,
in a multialternative choice problem, produces computable choice probabilities
and expected decision times. | Carlo Baldassi, Fabio Maccheroni, Massimo Marinacci, Marco Pirazzini | 2023-05-05T16:05:36Z | http://arxiv.org/abs/2305.03645v1 | # Algorithmic Decision Processes
###### Abstract
We develop a full-fledged analysis of an algorithmic decision process that, in a multialternative choice problem, produces computable choice probabilities and expected decision times.
## 1 Introduction
**An algorithmic decision procedure.** In a multialternative choice problem, decision units aim to find the best alternatives within a finite choice set \(A\) of available ones. Had they unlimited resources allowing them to make an unconstrained number of exact judgments between alternatives, they could proceed by standard revision. This brute force comparison-and-elimination algorithm sequentially analyzes pairs of alternatives and permanently discards the inferior ones. If the unit preferences are complete and transitive, after \(|A|-1\) comparisons the incumbent solution of this algorithm is an optimal choice. Implicit in traditional choice theory is an underlying algorithm of this kind.
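As a concrete reference point (a sketch of ours, not taken from the paper), standard revision can be written as the following routine, assuming an exact, error-free comparison oracle `prefers`:

```python
def standard_revision(alternatives, prefers):
    """Brute-force comparison-and-elimination: keep an incumbent and replace it
    whenever the current proposal is strictly preferred.  With complete and
    transitive preferences and exact comparisons, the incumbent after
    len(alternatives) - 1 comparisons is an optimal choice."""
    incumbent = alternatives[0]
    for proposal in alternatives[1:]:
        if prefers(proposal, incumbent):    # exact binary judgment, no error, no cost
            incumbent = proposal
    return incumbent

# Illustrative usage with a utility-based preference (values are arbitrary):
utility = {"a": 1.0, "b": 3.0, "c": 2.0}
best = standard_revision(list(utility), lambda i, j: utility[i] > utility[j])  # "b"
```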
Yet, decision units' resources are typically too limited to implement a standard revision procedure. Indeed, binary comparisons are typically costly, time-consuming and subject to error (so inexact). This happens because the decision unit may only imperfectly know the relative desirability of the competing alternatives. As a result, deliberation over them consumes resources (economic or physiological), requires time and is subject to error.1 In choice episodes involving the same pair of alternatives, different decision times and choices may then be observed. Binary choice behavior can be described only in probabilistic terms, with stochastic decision times and choice probabilities.
Footnote 1: Here we are abstracting from inescapable non-decision times.
The well-known limits of working memory suggest a sequential structure for the multialternative choice problem. Costly, time-consuming and inexact binary comparisons unfold one after the other through a stochastic exploration process, with alternatives playing the roles of proposals and incumbents in each binary comparison. An iterative choice process then operates. The decision unit's limited resources constrain this process by limiting the number of executable binary comparisons, with a (possibly random) cap on the number of affordable iterations. When the process is terminated, an alternative is selected. The inexact nature of comparisons and the stochasticity of the exploration process make this selection random.
We formalize this schema through a decision procedure, the Neural Metropolis Algorithm, that parsimoniously adapts standard revision by building on sequential binary comparisons between proposals and incumbents, explicitly modelled as time-consuming and subject to errors (so stochastic), that unfold through a Markovian exploration of the choice set \(A\). A stopping number, determined by the decision unit's resources, terminates the iterations of the algorithm and thus makes the algorithm select an alternative. Different iterations of the algorithm may result in different selections of alternatives because of the stochastic nature of binary comparisons and of Markovian exploration.
To the best of our knowledge, this is the first full-fledged analysis of an algorithmic decision process over a choice set. We are able to derive both the choice probabilities and expected decision times that the Neural Metropolis Algorithm generates, with closed forms in some noteworthy cases (Section 4.2). We are also able to provide a value representation for the algorithm, which proceeds as if governed by some underlying utility judgements over alternatives (Section 4.4). In so doing, we generalize and extend to pairs of choice probabilities and decision times some basic results of traditional stochastic choice.
This value foundation makes it possible to interpret our algorithmic decision unit as the neural system of a decision maker that confronts a decision problem. In particular, traditional choice analysis can be implemented by the Neural Metropolis Algorithm when the resource constraint is relaxed, as is the case in the traditional choice analysis sketched at the outset. On the other hand, our algorithm may incorporate neuroscience binary choice models, like the Drift Diffusion Model (DDM), which thus get embedded in a sequential multialternative decision process.
**Outline of the analysis.** We begin the analysis by generalizing traditional stochastic choice by introducing binary choice probabilities \(\rho\left(i\mid j\right)\) to model binary comparisons that, in a sequential setting, involve alternatives \(i\) and \(j\) playing the distinct roles of proposals and incumbents, respectively (a distinction that stochastic choice does not make). In our first main result, Theorem 4, we show that binary choice probabilities have a value representation through a Fechnerian scaling \(v:A\rightarrow\mathbb{R}\) when they satisfy a basic transitivity property, thus extending to our setting classic results of traditional deterministic choice theory and of traditional stochastic choice theory. Indeed, our analysis includes as special cases both deterministic traditional choice, where \(\rho\) is 0-1 valued, and traditional stochastic choice, where \(\rho\) is strictly positive.
Besides choice probabilities, the other key elements of the analysis are the expected decision times \(\tau\left(i\mid j\right)\) that account for the average duration of comparisons between proposal \(i\) and incumbent \(j\). We introduce them formally and consider pairs \(\left(\rho,\tau\right)\) to study their interplay with binary choice probabilities. We propose a value representation also for these pairs. Such a representation is behaviorally characterized in our Theorem 5. Theorem 6 captures the special case of symmetric expected decision times which result from a classical speed/accuracy relation: faster decisions corresponding to smaller error rates.
With this, we move to the analysis of the Neural Metropolis Algorithm. It sequentially compares pairs of alternatives, playing the roles of incumbents and proposals. These comparisons use a binary choice model \(\left(\mathrm{C},\mathrm{RT}\right)\) consisting of choice variables and response times that determine the frequency \(\rho_{\mathrm{C}}\left(i\mid j\right)\) with which proposal \(i\) is accepted over incumbent \(j\) and the
mean response time \(\tau_{\text{RT}}\left(i\mid j\right)\) required by the comparison. To the pair \(\left(\rho_{\text{C}},\tau_{\text{RT}}\right)\) we can apply the value analysis previously developed, with a Fechnerian scaling \(v_{\text{C}}:A\to\mathbb{R}\) for the stochastic binary comparisons (featuring a positive \(\rho_{\text{C}}\)) and a Paretian utility function \(w_{\text{C}}:A\to\mathbb{R}\) for the deterministic ones (featuring, instead, a 0-1 valued \(\rho_{\text{C}}\)). An initial random condition \(\mu\) and an exploration matrix \(Q\) complete the description of the constituent elements of the Neural Metropolis Algorithm. A stopping time \(N\) terminates the algorithm, which thus selects an alternative.
The algorithm thus generates a choice probability \(p_{N}\) over alternatives, where \(p_{N}\left(i,A\right)\) is the probability that the algorithm selects alternative \(i\) from the choice set \(A\), as well as a mean response time \(\tau_{N}\geq 0\), the average time that the algorithm takes to select an alternative from \(A\). We obtain closed forms for both \(p_{N}\) and \(\tau_{N}\) in the important case of negative binomial stopping times, which includes the geometric ones.
Our value analysis shows that, as the stopping number allows more and more iterations, the Neural Metropolis Algorithm has noteworthy optimality properties. It selects optimal alternatives when all binary comparisons are deterministic, thus implementing traditional choice behavior. When, instead, deterministic and stochastic binary comparisons coexist, the algorithm first selects alternatives \(i\) that are best across the deterministic comparisons, so belong to \(\arg\max_{A}w_{\text{C}}\), and then chooses over them according to the logit rule
\[\frac{e^{v_{\text{C}}\left(i\right)}}{\sum_{j\in\arg\max_{A}w_{\text{C}}}e^{v_ {\text{C}}\left(j\right)}}\]
where \(v_{\text{C}}\) and \(w_{\text{C}}\) are the Fechnerian scaling and Paretian utility previously mentioned.
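A minimal simulation sketch of our reading of this outline (the paper's formal construction comes later; here the utilities, the uniform exploration matrix, and the geometric stopping number are all illustrative assumptions, and binary comparisons use the unbiased Fechnerian acceptance probability):

```python
import numpy as np

rng = np.random.default_rng(1)

def neural_metropolis_run(v, Q, mu, stop_mean, rng):
    """One run: draw an initial incumbent from mu, then for a geometric number of
    iterations draw a proposal from column Q[:, incumbent] and accept it with
    probability sigma(v[proposal] - v[incumbent]); return the final incumbent."""
    n = rng.geometric(1.0 / stop_mean)               # geometric stopping number, mean stop_mean
    incumbent = rng.choice(len(v), p=mu)
    for _ in range(n):
        proposal = rng.choice(len(v), p=Q[:, incumbent])
        accept = 1.0 / (1.0 + np.exp(-(v[proposal] - v[incumbent])))
        if rng.random() < accept:
            incumbent = proposal
    return incumbent

v = np.array([0.0, 0.5, 1.0])                        # illustrative Fechnerian utilities
A = len(v)
Q = np.full((A, A), 1.0 / A)                         # uniform exploration matrix (columns sum to 1)
mu = np.full(A, 1.0 / A)                             # uniform initial condition

draws = [neural_metropolis_run(v, Q, mu, 100, rng) for _ in range(2000)]
empirical = np.bincount(draws, minlength=A) / len(draws)
logit = np.exp(v) / np.exp(v).sum()                  # limiting logit rule displayed above
```

With a symmetric exploration matrix and all comparisons stochastic, `empirical` approaches `logit` as the mean of the stopping number grows, in line with the limiting behavior described above.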
**Limitations of the analysis.** In our analysis the stopping number, which accounts for the decision unit's limited resources, is exogenous. It is a convenient assumption in a first analysis of an algorithmic decision process, but a topic for future research is the study of a decision problem that would endogenously deliver it.
Relatedly, we take the Markovian stochastic exploration as exogenous, though the decision unit may want to adjust it as exploration progresses. The study of more sophisticated exploration strategies is another topic for future research.
The sequential structure of the Neural Metropolis Algorithm appears to be natural in view of the limitations of working memory. It would be interesting, however, to understand its optimality status by making explicit the working memory constraints that impede the parallel, rather than sequential, consideration of all competing alternatives in the choice set.
**Related literature.** The Neural Metropolis Algorithm has the Metropolis DDM Algorithm of Baldassi et al. (2020) and Cerreia-Vioglio et al. (2022) as special cases in which binary comparisons are performed according to the DDM of Ratcliff (1978), as adapted by Krajbich et al. (2010) and Milosavljevic et al. (2010) to value-based binary choices. The generalization is significant, moving from a specific binary comparison model to virtually all of them. Moreover, our results are novel even when binary comparisons are DDM based.
The Neural Metropolis Algorithm differs from most neuro-computational models of neuroscience that typically consider simultaneous evidence accumulation for all the alternatives in
the choice set \(A\). See, e.g., Roe et al. (2001), Anderson et al. (2004), McMillen and Holmes (2006), Bogacz et al. (2007), Ditterich (2010), and Krajbich and Rangel (2011). This simultaneity assumption, although efficient per se, is at odds with the known limits of attention and working memory, as previously mentioned.
An important exception is Reutskaja et al. (2011), who present three two-stage models in which subjects randomly search through the feasible set during an initial search phase and, when this phase is concluded, select the best item that was encountered during the search (up to some noise). This approach can be called quasi-exhaustive search in that time pressure may terminate the search phase before all alternatives have been evaluated and introduces an error probability.
Although different from the models considered by Krajbich and Rangel (2011) and Reutskaja et al. (2011), our model is consistent with some of their experimental findings about the exploration process of choice sets and with the conclusions of the seminal eye fixation study of Russo and Rosen (1975).
## 2 Preliminaries
### Mathematics
**Stochastic matrices.** A square matrix \(B\) is (_left_) _stochastic_ if the sum of the entries of each column is 1. Its powers are the stochastic matrices \(B^{0}=I\) and \(B^{n}=BB^{n-1}\), with entry \(b_{ij}^{(n)}\) for each \(i,j\) in the index set of \(B\). A stochastic matrix \(B\) is:
1. _positive_ if its entries are strictly positive;
2. _quasi-positive_ if its off diagonal entries are strictly positive;
3. _primitive_ if there exists \(n\geq 1\) such that \(B^{n}\) is positive;
4. _irreducible_ if, for each \(i,j\) in its index set, there exists \(n\geq 1\) such that \(b_{ij}^{(n)}>0\);
5. _non-traceless_ if it has at least one strictly positive element on its diagonal;
6. _nice_ if it is symmetric and quasi-positive;
7. _reversible_ if there exists a probability vector \(p\gg\mathbf{0}\) such that \[b_{ij}p_{j}=b_{ji}p_{i}\] (1) for each off diagonal entry \(b_{ij}\).
This terminology is standard, except (vi). Clearly, a positive matrix is quasi-positive, and a quasi-positive matrix is primitive if it is at least of order 3. An irreducible and non-traceless matrix is primitive. Given a stochastic matrix \(B\), the matrix \(I-\zeta B\) is invertible when \(\zeta\in(-1,1)\) because \(\left\|\zeta B\right\|_{1}=\left|\zeta\right|<1\). Instead, the matrix \(I-B\) is not invertible because, by Markov's Theorem, there exists a probability vector \(p\) such that
\[Bp=p\]
Such a vector is called a _stationary distribution_ of \(B\).
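A quick numerical illustration of these facts (ours; the matrix entries are arbitrary) using NumPy:

```python
import numpy as np

# Illustrative 3x3 left-stochastic matrix: each column sums to 1.
B = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.3],
              [0.2, 0.2, 0.4]])

# Stationary distribution: eigenvector of B for eigenvalue 1, normalized to sum to 1.
w, V = np.linalg.eig(B)
p = np.real(V[:, np.argmin(np.abs(w - 1.0))])
p = p / p.sum()
assert np.allclose(B @ p, p)                         # Bp = p

# Reversibility check, condition (1): b_ij * p_j == b_ji * p_i off the diagonal,
# i.e., the matrix with entries b_ij * p_j is symmetric.
is_reversible = np.allclose(B * p, (B * p).T)

# I - zeta*B is invertible for |zeta| < 1, while I - B is singular.
np.linalg.inv(np.eye(3) - 0.9 * B)
# np.linalg.inv(np.eye(3) - B)                       # would fail: singular matrix
```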
**Stopping number.** A _stopping number_ (or _rule_) is an \(\mathbb{N}\)-valued random variable with finite mean \(\mathbb{E}\left[N\right]\) defined on an underlying probability space featuring a probability measure \(\mathbb{P}\). Two important functions are associated with a stopping number \(N\). The _probability generating function_ \(f_{N}:\left[0,1\right]\rightarrow\mathbb{R}\) is defined by
\[f_{N}\left(z\right)=\underset{n=0}{\overset{\infty}{\sum}}\mathbb{P}\left[N=n \right]z^{n}\]
while the _survival generating function_\(g_{N}:\left[0,1\right]\rightarrow\mathbb{R}\) is defined by
\[g_{N}\left(z\right)=\underset{n=0}{\overset{\infty}{\sum}}\mathbb{P}\left[N>n \right]z^{n}\]
These two functions are related as follows (Feller, 1968, p. 265):
\[g_{N}\left(z\right)=\frac{1-f_{N}\left(z\right)}{1-z}\qquad\forall z\in\left[ 0,1\right]\]
under the limit convention
\[g_{N}\left(1\right)=\lim_{z\to 1^{-}}\frac{1-f_{N}\left(z\right)}{1-z}= \mathbb{E}\left[N\right] \tag{2}\]
Probability generating functions are widely used and several formulas are available for them (see, e.g., Johnson et al. 2005).
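As a small numerical sanity check (ours, with an arbitrary finite-support stopping number), the two generating functions and the identities above can be verified directly:

```python
import numpy as np

pmf = np.array([0.0, 0.4, 0.3, 0.15, 0.1, 0.05])     # illustrative P[N = n], n = 0..5
n = np.arange(len(pmf))
surv = np.array([pmf[k + 1:].sum() for k in n])      # P[N > n]

def f_N(z):  # probability generating function
    return np.sum(pmf * z ** n)

def g_N(z):  # survival generating function
    return np.sum(surv * z ** n)

z = 0.7
assert np.isclose(g_N(z), (1 - f_N(z)) / (1 - z))    # Feller's identity
assert np.isclose(g_N(1.0), np.sum(n * pmf))         # limit convention (2): g_N(1) = E[N]
```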
Let \(\mathcal{B}\) be the collection of all stochastic matrices. Given a stochastic matrix \(B\in\mathcal{B}\), we denote by \(f_{N}\left(B\right)\) and \(g_{N}\left(B\right)\) the square matrices of the same order as \(B\) defined by
\[f_{N}\left(B\right)=\underset{n=0}{\overset{\infty}{\sum}}\mathbb{P}\left[N=n \right]B^{n}\quad\text{and}\quad g_{N}\left(B\right)=\underset{n=0}{\overset{ \infty}{\sum}}\mathbb{P}\left[N>n\right]B^{n} \tag{3}\]
It is easy to check that the matrix power series on the r.h.s. converges entry by entry, and so the matrix \(f_{N}\left(B\right)\) is well defined.2 As \(B\in\mathcal{B}\) varies, via (3) one defines the matrix generating function on \(\mathcal{B}\), still denoted by \(f_{N}\), induced by a probability generating function \(f_{N}\).3
Footnote 2: See Section B.2.2 for more details.
Footnote 3: On matrix functions see, e.g., Rinehart (1955) and Higham (2008).
There is a natural partial order on stopping numbers: we say that stopping number \(N\)_stochastically dominates_ stopping number \(N^{\prime}\), written \(N\geq N^{\prime}\), if
\[\mathbb{P}\left[N>n\right]\geq\mathbb{P}\left[N^{\prime}>n\right]\qquad\forall n\geq 0\]
Intuitively, \(N\) is a less tight stopping number than \(N^{\prime}\). At the limit, we say that a sequence \(N_{k}\) of stopping numbers _diverges_, written \(N_{k}\rightarrow\infty\), if
\[\lim_{k\rightarrow\infty}\mathbb{P}\left(N_{k}>n\right)=1\qquad\forall n\geq 0\]
This means, as easily checked, that the probability of stopping at any finite \(n\) vanishes as \(k\rightarrow\infty\), that is, \(\lim_{k\rightarrow\infty}\mathbb{P}\left(N_{k}=n\right)=0\) for each \(n\geq 0\).
### Stochastic choice
Let \(A\) be a finite choice set, with at least three alternatives,4 called _menu_. Its typical elements are \(i\), \(j\), \(h\) and \(k\). We denote by \(\Delta\left(A\right)\) the set of all probability distributions on \(A\), viewed as \(\left|A\right|\)-dimensional vectors. In other words, \(\Delta\left(A\right)\) is the standard simplex in the Euclidean space \(\mathbb{R}^{\left|A\right|}\).
Footnote 4: In this paper we do not carry out comparative statics exercises across menus. For this reason, we develop the analysis in terms of an arbitrarily fixed menu \(A\).
A _choice probability_\(p\left(\cdot,A\right)\in\Delta\left(A\right)\) assigns to each alternative \(i\) the probability \(p\left(i,A\right)\) that the decision unit chooses \(i\) within \(A\). Formally, \(p\left(i,A\right)\) is the component \(i\) of the \(\left|A\right|\)-dimensional vector \(p\left(\cdot,A\right)\).
## 3 Binary choice
As discussed in the Introduction, our algorithmic decision process considers a sequence of binary choices between an incumbent and a proposal. To model these binary choices, in this section we generalize traditional stochastic choice to account for the distinction between the roles of incumbents and proposals that alternatives may play in binary choices. In the next section we will apply this generalized framework to observed binary choice behavior.
### Binary choice probabilities
A neural system, our decision unit,5 compares two alternatives \(i\) and \(j\) in a menu \(A\) of alternatives through a _probability kernel_, a function \(\rho:A^{2}\rightarrow[0,1]\). For distinct \(i\) and \(j\),
Footnote 5: Throughout we use the terms “decision unit” and “neural system” interchangeably.
\[\rho\left(i\mid j\right)\]
denotes the probability with which _proposal_\(i\) is accepted when \(j\) is the _incumbent_ (or _status quo_). So, \(1-\rho\left(i\mid j\right)\) is the probability with which the proposal is rejected and the incumbent maintained. Next we introduce the class of kernels that we will study.
**Definition 1**: _A probability kernel \(\rho:A^{2}\rightarrow[0,1]\) is a binary choice probability if_
\[\rho\left(i\mid j\right)=1\Longleftrightarrow\rho\left(j\mid i\right)=0 \tag{4}\]
_with the convention \(\rho\left(i\mid i\right)=\varepsilon>0\)._
We thus assume throughout that when an alternative is chosen for sure over another alternative, this happens regardless of the roles that they play. With this, we now introduce some basic properties.
**Definition 2**: _A binary choice probability \(\rho\) is:_
* (status-quo) unbiased _if_ \[\underbrace{\rho\left(i\mid j\right)}_{\text{prob. $i$ if proposal}}=\underbrace{1-\rho\left(j\mid i\right)}_{\text{prob. $i$ if incumbent}}\] _for all distinct alternatives_ \(i\) _and_ \(j\)_;_
* positive _if_ \(\rho\left(i\mid j\right)>0\) _for all distinct alternatives_ \(i\) _and_ \(j\)_;_
* Dirac _if_ \(\rho\left(i\mid j\right)\in\{0,1\}\) _for all distinct alternatives_ \(i\) _and_ \(j\)_._
These properties have a simple interpretation: a binary choice probability is unbiased when it gives the incumbent alternative no special status, it is positive when it selects either alternative with strictly positive probability, and it is Dirac when it selects either alternative deterministically.6
Footnote 6: If a binary choice probability is positive, then for all distinct \(i\) and \(j\) we have \(\rho\left(i\mid j\right)>0\) and \(\rho\left(j\mid i\right)>0\), so neither of them can be \(1\) (otherwise the other would be \(0\)). Thus we have \(0<\rho\left(i\mid j\right)<1\) for all \(i\) and \(j\).
Traditional stochastic choice usually considers unbiased (and often positive) binary choice probabilities, where \(\rho\left(i\mid j\right)=p\left(i,\{i,j\}\right)\) describes the probability of choosing \(i\) from the doubleton \(\{i,j\}\). General, possibly biased, binary choice probabilities account for the distinct roles of incumbent and proposal that are peculiar to a sequential analysis.
**Definition 3**: _A binary choice probability \(\rho\) is transitive if_
\[\rho\left(j\mid i\right)\rho\left(k\mid j\right)\rho\left(i\mid k\right)=\rho \left(k\mid i\right)\rho\left(j\mid k\right)\rho\left(i\mid j\right) \tag{5}\]
_for all distinct alternatives \(i\), \(j\) and \(k\)._
In words, a binary choice probability is transitive when violations of transitivity in the choices that it determines are due only to the presence of noise.7 Indeed, condition (5) amounts to requiring the intransitive cycles
Footnote 7: Cf. Luce and Suppes (1965) p. 341. Also note that if two alternatives are not distinct, then condition (5) is automatically satisfied.
\[i\to j\to k\to i\quad\text{and}\quad i\to k\to j\to i\]
to be equally likely (over independent choices). Transitivity ensures that systematic intransitivities, a violation of a basic rationality tenet, cannot occur. We expect that a viable neural system satisfies this property.
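For a concrete instance (ours, with illustrative utilities): the strict-utility kernel \(\rho\left(i\mid j\right)=e^{v(i)}/(e^{v(i)}+e^{v(j)})\), which reappears below, satisfies condition (5) identically, as a quick numerical check confirms:

```python
import numpy as np

v = {"i": 0.0, "j": 0.4, "k": 1.1}                   # illustrative Fechnerian utilities

def rho(a, b):
    """Unbiased, positive kernel of the strict-utility (logistic) form."""
    return np.exp(v[a]) / (np.exp(v[a]) + np.exp(v[b]))

# Condition (5): the cycles i -> j -> k -> i and i -> k -> j -> i are equally likely.
lhs = rho("j", "i") * rho("k", "j") * rho("i", "k")
rhs = rho("k", "i") * rho("j", "k") * rho("i", "j")
assert np.isclose(lhs, rhs)
```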
### Binary value analysis
The binary choice probability \(\rho\) induces two binary relations \(\succ^{*}\) and \(\succsim\) defined by
\[i\succ^{*}j\Longleftrightarrow\rho\left(i\mid j\right)=1\quad\text{and}\quad i \succsim j\Longleftrightarrow\rho\left(i\mid j\right)\geq\rho\left(j\mid i\right)\]
for all alternatives \(i\) and \(j\). We interpret \(\succ^{*}\) as a clear-cut, deterministic, strict preference over the alternatives that the decision unit is able to perfectly discriminate in value. It is a standard notion in traditional (non-stochastic) utility theory.8 Since
Footnote 8: See Fishburn (1970) and Kreps (1988).
\[i\succ^{*}j\Longleftrightarrow\rho\left(i\mid j\right)=1\Longleftrightarrow\rho \left(j\mid i\right)=0 \tag{6}\]
a strict preference holds irrespective of the alternatives' roles as incumbents or proposals.9 We write \(i\parallel^{*}j\) when there is no strict preference over the two alternatives, i.e.,
Footnote 9: To further elaborate, \(\rho\left(i\mid j\right)=1\) means that \(i\) is accepted for sure when proposed, while \(\rho\left(j\mid i\right)=0\), that is, \(1-\rho\left(j\mid i\right)=1\), means that \(i\) is maintained for sure when it is the incumbent.
\[i\parallel^{*}j\Longleftrightarrow i\not\succ^{*}j\text{ and }j\not\succ^{*}i\]
by (6) this is equivalent to \(\rho\left(i\mid j\right)\in(0,1)\), and also to \(\rho\left(j\mid i\right)\in(0,1)\). That is when choice is truly stochastic (again irrespective of the alternatives' roles).
In contrast, we interpret \(\succsim\) as a weak notion of preference that extends the strict preference \(\succ^{*}\) by allowing for the stochastic rankings that occur over alternatives that the decision unit only imperfectly discriminates in value. The consistency property
\[i\succ^{*}j\Longrightarrow i\succ j \tag{7}\]
shows that, as is natural, a strict preference is preserved by \(\succ\) (the asymmetric part of \(\succsim\)). Finally, the binary relation \(\succsim^{\circ}\) defined by
\[i\succsim^{\circ}j\Longleftrightarrow i\succsim j\text{ and }i\parallel^{*}j\]
describes the rankings that are stochastic. Indeed,
\[i\succsim^{\circ}j\Longleftrightarrow 1>\rho\left(i\mid j\right)\geq\rho\left(j \mid i\right)>0\]
The next lemma substantiates our interpretations.
**Lemma 1**: _If the binary choice probability \(\rho\) is transitive, then_
1. \(\succ^{*}\) _is asymmetric and negatively transitive;_
2. \(\parallel^{*}\) _is an equivalence relation;_
3. \(\succsim\) _is complete and transitive;_
4. \(\succsim^{\circ}\) _is reflexive and transitive as well as complete on each equivalence class of_ \(\parallel^{*}\)_._
The two preferences \(\succsim^{\circ}\) and \(\succ^{*}\) complement each other by accounting for the stochastic and deterministic comparisons that occur, respectively, with imperfect and perfect discrimination in value. Jointly, they rank all pairs of alternatives.10 In view of this, we focus our analysis on them.
**Definition 4**: _A binary choice probability \(\rho:A^{2}\rightarrow[0,1]\) has a binary value representation if there exist \(v,w:A\rightarrow\mathbb{R}\) and a symmetric \(s:A^{2}\rightarrow(0,\infty)\) such that_
\[\rho\left(i\mid j\right)=\left\{\begin{array}{ll}1&\mbox{if }w\left(i \right)>w\left(j\right)\\ &\\ s\left(i,j\right)\frac{e^{v\left(i\right)}}{e^{v\left(i\right)}+e^{v\left(j \right)}}&\mbox{if }w\left(i\right)=w\left(j\right)\\ 0&\mbox{if }w\left(i\right)<w\left(j\right)\end{array}\right. \tag{8}\]
_for all \(i\) and \(j\)._
It is readily seen that, in this case
\[i\succ^{*}j\Longleftrightarrow w\left(i\right)>w\left(j\right)\quad;\;\;i \parallel^{*}j\Longleftrightarrow w\left(i\right)=w\left(j\right) \tag{9}\]
and, when \(w\left(i\right)=w\left(j\right)\),
\[i\succ^{\circ}j\Longleftrightarrow v\left(i\right)\geq v\left(j\right) \tag{10}\]
Thus, we interpret \(w\) as a utility function for \(\succ^{*}\) and \(v\) as a utility function for \(\succ^{\circ}\). Moreover, we interpret \(s\) as a status quo bias index. These interpretations are corroborated by the next result. In reading it, keep in mind that \(v\) and \(s\) are relevant only for stochastic rankings (as identified by the equivalence classes of \(\parallel^{*}\), so by the level sets of \(w\)).
**Lemma 2**: _If a binary choice probability admits a binary value representation, then,_
* _the utility function_ \(w\) _is unique up to a strictly increasing transformation;_
* _the utility function_ \(v\) _is, on each level set of_ \(w\)_, unique up to an additive constant;_
* _the status quo bias index_ \(s\) _is, on each level set of_ \(w\)_, unique with_ \[\rho\left(i\mid j\right)<1-\rho\left(j\mid i\right) \Longleftrightarrow s\left(i,j\right)<1\] \[\rho\left(i\mid j\right)=1-\rho\left(j\mid i\right) \Longleftrightarrow s\left(i,j\right)=1\] (11) \[\rho\left(i\mid j\right)>1-\rho\left(j\mid i\right) \Longleftrightarrow s\left(i,j\right)>1\]
The relations in the last point of the lemma clarify the interpretation of \(s\) as a status quo bias index for the comparison of proposal \(i\) and incumbent \(j\). In particular, the bias favors the incumbent when \(s\left(i,j\right)<1\), the proposal when \(s\left(i,j\right)>1\), and it is absent otherwise. Thus, the binary choice probability \(\rho\) is unbiased if and only if \(s\) is identically equal to \(1\).
The utility \(w\) is a traditional Paretian utility function that, by ranking alternatives in an ordinal manner, represents the strict preference \(\succ^{*}\). This Paretian utility is constant, so irrelevant in (8), if and only if \(\rho\) is positive, i.e., when all rankings are stochastic. When \(\rho\) is both positive and unbiased, the binary value representation (8) reduces to
\[\rho\left(i\mid j\right)=\frac{e^{v\left(i\right)}}{e^{v\left(i\right)}+e^{v \left(j\right)}}\]
This is the strict utility representation of Marschak (1960) and Luce and Suppes (1965),11 which our binary value representation thus extends to general, possibly biased and partly deterministic, binary choice probabilities. As is well known, using the logistic function \(\xi\) we can write:
Footnote 11: Luce (1959) and Block and Marschak (1960) study a stronger non-binary version of strict utility.
\[\frac{e^{v(i)}}{e^{v(i)}+e^{v(j)}}=\xi\left(v\left(i\right)-v\left(j\right)\right) \tag{12}\]
The binary choice probability \(\rho\left(i\mid j\right)\) thus depends, in a Fechnerian way, on the utility difference \(v\left(i\right)-v\left(j\right)\).12
Footnote 12: Cf. Luce and Suppes (1965) p. 334. The logistic function \(\xi:\mathbb{R}\rightarrow\mathbb{R}\) is given by \(\xi\left(x\right)=1/\left(1+e^{-x}\right)\).
In our extension, \(v\) continues to be a _bona fide_ utility function on the level sets of \(w\), as (10) shows. We call it a _Fechnerian utility function_. When \(w\left(i\right)=w\left(j\right)\), it holds
\[\rho\left(i\mid j\right)\geq\rho\left(j\mid i\right)\Longleftrightarrow v \left(i\right)\geq v\left(j\right)\iff 1-\rho\left(j\mid i\right)\geq 1-\rho\left(i\mid j\right) \tag{13}\]
Alternatives with a higher Fechnerian utility thus have a higher probability to be selected, regardless of their roles as proposals or incumbents. When \(\rho\) is both positive and unbiased, (13) takes the form
\[v\left(i\right)\geq v\left(j\right)\Longleftrightarrow\rho\left(i\mid j \right)\geq\frac{1}{2}\]
familiar from traditional stochastic choice. The Fechnerian utility function is immaterial when \(\rho\) is Dirac, i.e., when all rankings of distinct alternatives are deterministic.
**Lemma 3**: _A binary choice probability \(\rho\) is Dirac and transitive if and only if \(\succ^{*}\) is weakly complete and transitive.13_
Footnote 13: The strict preference \(\succ^{*}\) is _weakly complete_ if, for each \(i\neq j\), either \(i\succ^{*}j\) or \(j\succ^{*}i\) (cf. Fishburn, 1970). Under weak completeness, we do not have to worry about indifferences, a notoriously delicate issue.
The transitivity of a binary choice probability thus generalizes the transitivity of a strict preference of traditional utility theory. In this case, the binary value representation (8) reduces to
\[\rho\left(i\mid j\right)=\left\{\begin{array}{ll}1&\quad\text{if }w\left(i \right)>w\left(j\right)\\ \frac{1}{2}&\quad\text{if }w\left(i\right)=w\left(j\right)\\ 0&\quad\text{if }w\left(i\right)<w\left(j\right)\end{array}\right.\]
The next representation theorem, our first main result, shows that transitivity characterizes the binary choice probabilities having a binary value representation.
**Theorem 4**: _A binary choice probability has a binary value representation if and only if it is transitive._
This theorem generalizes standard utility representations in stochastic choice (e.g., Luce and Suppes, 1965, p. 350) as well as, in view of Lemma 3, in traditional utility theory.
We conclude by observing that the preference \(\succsim\) has, in terms of the binary value representation (8), a lexicographic representation via the Fechnerian utility \(v\) and the Paretian utility \(w\). Indeed, it is easy to see that, for each \(i\) and \(j\),
\[i\succsim j\Longleftrightarrow\left(w\left(i\right),v\left(i\right)\right) \geq_{lex}\left(w\left(j\right),v\left(j\right)\right)\]
where \(\geq_{lex}\) is the lexicographic order on the plane.
### Expected response times
Besides the choice probability \(\rho\left(i\mid j\right)\), the other quantity featured in sequential binary choice is the expected time
\[\tau\left(i\mid j\right)\]
that the decision unit takes to choose between distinct proposal \(i\) and incumbent \(j\). We represent it with a function \(\tau:A^{2}\rightarrow[0,\infty)\).
**Definition 5**: _A pair \(\left(\rho,\tau\right)\) of a binary choice probability and an expected response time forms a tandem if, for each \(i\neq j\),_
\[\rho\left(i\mid j\right)\in\left\{0,1\right\}\Longleftrightarrow\tau\left( i\mid j\right)=0 \tag{14}\]
_and_
\[\tau\left(i\mid j\right)=\tau\left(j\mid i\right)\Longrightarrow\rho\left(i \mid j\right)=1-\rho\left(j\mid i\right) \tag{15}\]
A tandem provides a thorough description of the binary choices of our decision unit in the menu \(A\). In such a description, the consistency condition (14) ensures that deterministic choices are the ones that take no time (so we abstract from non-decision times). In particular, since \(\rho\) is a binary choice probability,
\[\tau\left(i\mid j\right)=0\iff\tau\left(j\mid i\right)=0\]
The consistency condition (15), instead, requires that the absence of a status quo bias manifests itself primarily in the symmetry of response times.
**Definition 6**: _A tandem \(\left(\rho,\tau\right)\) has a binary value representation if there exist \(v,w:A\rightarrow\mathbb{R}\), a symmetric \(s:A^{2}\rightarrow\left(0,\infty\right)\), and a strictly quasiconcave and unimodal \(\varphi:\mathbb{R}\rightarrow\left(0,\infty\right)\) such that (8) holds and_
\[\tau\left(i\mid j\right)=\left\{\begin{array}{ll}0&\text{if }w\left(i\right) \neq w\left(j\right)\\ \varphi\left(v\left(i\right)-v\left(j\right)\right)&\text{if }w\left(i\right) =w\left(j\right)\end{array}\right. \tag{16}\]
_for all \(i\) and \(j\)._
A strictly quasiconcave and unimodal \(\varphi:\mathbb{R}\rightarrow\left(0,\infty\right)\) is a function that first strictly increases to a strong maximum and then strictly decreases. This pattern is motivated by the standard psychophysical assumption that stimulus strength determines response times, with stronger stimuli inducing faster responses. Since here the stimulus strength corresponds to the preference intensity represented by the utility differences \(v\left(i\right)-v\left(j\right)\), this standard assumption requires that utility differences that are large in absolute value, be they strongly positive or strongly negative, command short response times. This is exactly what the shape of \(\varphi\) captures.
To give an observable counterpart of this standard assumption we need to introduce the following observables:
\[\ell_{ij}=\ln\frac{\rho\left(i\mid j\right)}{\rho\left(j\mid i\right)}\]
Using these log-odds we can introduce a class of tandems:
**Definition 7**: _A tandem \(\left(\rho,\tau\right)\) is chronometric if \(\rho\) is transitive and there exists a threshold \(l\) such that,_
\[\ell_{ij}=\ell_{hk} \implies\tau\left(i\mid j\right)=\tau\left(h\mid k\right) \tag{17}\] \[l\leq\ell_{ij}<\ell_{hk} \implies\tau\left(i\mid j\right)>\tau\left(h\mid k\right)\] (18) \[\ell_{ij}<\ell_{hk}\leq l \implies\tau\left(i\mid j\right)<\tau\left(h\mid k\right) \tag{19}\]
_for all pairs of alternatives \(i,j\) and \(h,k\) with nonzero response times._
We can now state our second representation theorem that characterizes tandems having a binary value representation.
**Theorem 5**: _A tandem has a binary value representation if and only if it is chronometric._
Another standard psychophysical assumption is that stimulus strength determines error rates, with stronger stimuli inducing lower error rates. Consistency of this assumption with the previous one requires shorter response times to correspond to lower error rates. Surprisingly, this leads to unbiased tandems.
Specifically, observe that when comparing a proposal \(i\) and an incumbent \(j\) (or _vice versa_) we may make errors of two types: we may reject a superior proposal or accept an inferior one. In analogy with standard terminology, we call them _first-type_ and _second-type errors_, respectively. Their probabilities are
\[\mathrm{ER}^{\mathrm{I}}_{i,j}=\min\left\{1-\rho\left(i\mid j\right),1-\rho \left(j\mid i\right)\right\}\quad;\quad\mathrm{ER}^{\mathrm{II}}_{i,j}=\min \left\{\rho\left(i\mid j\right),\rho\left(j\mid i\right)\right\} \tag{20}\]
Next we introduce a basic error-monotonicity property.
**Definition 8**: _A tandem \(\left(\rho,\tau\right)\) is psychometric if \(\rho\) is transitive and_
\[\tau\left(i\mid j\right)<\tau\left(h\mid k\right)\Longrightarrow\mathrm{ER}^ {\mathrm{I}}_{i,j}<\mathrm{ER}^{\mathrm{I}}_{h,k}\quad\text{and}\quad\mathrm{ ER}^{\mathrm{II}}_{i,j}<\mathrm{ER}^{\mathrm{II}}_{h,k}\]
_and_
\[\tau\left(i\mid j\right)\leq\tau\left(h\mid k\right)\Longrightarrow\mathrm{ ER}^{\mathrm{I}}_{i,j}\leq\mathrm{ER}^{\mathrm{I}}_{h,k}\quad\text{and}\quad \mathrm{ER}^{\mathrm{II}}_{i,j}\leq\mathrm{ER}^{\mathrm{II}}_{h,k}\]
_for all pairs of alternatives \(i,j\) and \(h,k\) with nonzero response times._
In words, shorter binary expected response times correspond to lower error rates of both types, a property that regards choices between alternatives with larger utility differences as easier to make. The next representation theorem shows that psychometricity characterizes chronometric tandems with symmetric expected response times, thus featuring no status quo biases.
**Theorem 6**: _A tandem has a binary value representation with an even \(\varphi\) if and only if it is psychometric._
It is noteworthy that psychometricity implies chronometricity since the two definitions are not obviously related. With this theorem, our third main result, we conclude the general analysis of binary choices.
## 4 An algorithmic decision process
### Binary choice behavior
It is time to turn to observed binary choice behavior and apply to it the general binary choice framework just introduced.
**Definition 9**: _A binary choice model (BCM) is a pair of random matrices \(\mathrm{(C,RT)}\) where:_
1. \(\mathrm{C}=\left[\mathrm{C}_{i,j}\right]\) _consists of the random_ choice variables__\(\mathrm{C}_{i,j}\) _that describe the random outcome of the comparison between proposal_ \(i\) _and status quo_ \(j\)_, with_ \[\mathrm{C}_{i,j}=\left\{\begin{array}{ll}i&\mbox{if $i$ accepted}\\ j&\mbox{if $i$ rejected}\end{array}\right.\]
2. \(\mathrm{RT}=\left[\mathrm{RT}_{i,j}\right]\) _consists of random_ response times__\(\mathrm{RT}_{i,j}\) _required by the comparison._14__ Footnote 14: Throughout we assume that random response times (say measured in seconds) have finite mean and variance.
The distributions of C and RT are, in principle, both observable in choice behavior. By equating probabilities and frequencies, they induce a pair \(\left(\rho_{\mathrm{C}},\tau_{\mathrm{RT}}\right)\) where
\[\rho_{\mathrm{C}}\left(i\mid j\right)=\mathbb{P}\left[\mathrm{C}_{i,j}=i\right]\]
is the frequency with which proposal \(i\) is accepted over incumbent \(j\), and
\[\tau_{\mathrm{RT}}\left(i\mid j\right)=\mathbb{E}\left[\mathrm{RT}_{i,j}\right]\]
is the mean response time required by the comparison.
When \(\left(\rho_{\mathrm{C}},\tau_{\mathrm{RT}}\right)\) has a binary value representation, we denote by
\[\left(s_{\mathrm{C}},v_{\mathrm{C}},w_{\mathrm{C}},\varphi_{\mathrm{RT}}\right)\]
its elements. The most basic example of a BCM \(\left(\mathrm{C},\mathrm{RT}\right)\) occurs in traditional utility theory when the choices of the decision unit are deterministic. In this case, the pair \(\left(\rho_{\mathrm{C}},\tau_{\mathrm{RT}}\right)\) has the binary value representation of the Dirac form:
\[\rho_{\mathrm{C}}\left(i\mid j\right)=\left\{\begin{array}{ll}1&\quad\text{if }w_{ \mathrm{C}}\left(i\right)>w_{\mathrm{C}}\left(j\right)\\ \dfrac{1}{2}&\quad\text{if }w_{\mathrm{C}}\left(i\right)=w_{\mathrm{C}} \left(j\right)\\ 0&\quad\text{if }w_{\mathrm{C}}\left(i\right)<w_{\mathrm{C}}\left(j\right) \end{array}\right.\]
and \(\tau_{\mathrm{RT}}\) is typically undefined.
A popular stochastic binary choice model is the _Drift Diffusion Model_ (_DDM_) introduced by Ratcliff (1978). In its value version, developed by Krajbich et al. (2010) and Milosavljevic et al. (2010), the comparison of two alternatives \(i\) and \(j\) is governed by their _neural utilities_\(\nu\left(i\right)\) and \(\nu\left(j\right)\) about which the decision unit learns, for instance via memory retrieval, during the deliberation that precedes the choice between the two alternatives.15 Evidence accumulation in favor of either alternative is represented by the two Brownian motions with drift \(\mathrm{V}_{i}\left(t\right)=\nu\left(i\right)t+\mathrm{W}_{i}\left(t\right)\) and \(\mathrm{V}_{j}\left(t\right)=\nu\left(j\right)t+\mathrm{W}_{j}\left(t\right)\). Each accumulation experiences independent white noise fluctuations modeled by the uncorrelated Wiener processes \(\mathrm{W}_{i}\) and \(\mathrm{W}_{j}\). With this,
Footnote 15: To ease the analysis, we assume that the neural utility \(\nu:A\rightarrow\mathbb{R}\) is injective.
* the net evidence in favor of \(i\) over \(j\) is given, at each \(t>0\), by the difference \[\mathrm{Z}_{i,j}\left(t\right)=\mathrm{V}_{i}\left(t\right)-\mathrm{V}_{j} \left(t\right)=\left[\nu\left(i\right)-\nu\left(j\right)\right]t+\sqrt{2}\ \mathrm{W}\left(t\right)\] (21) where \(\mathrm{W}\) is the Wiener difference process \(\left(\mathrm{W}_{i}-\mathrm{W}_{j}\right)/\sqrt{2}\);
* comparison ends when \(\mathrm{Z}_{i,j}\left(t\right)\) reaches either the barrier \(\lambda>0\) or \(-\beta<0\); so the response time is \[\mathrm{RT}_{i,j}=\min\left\{t:\mathrm{Z}_{i,j}\left(t\right)=\lambda\text{ or }\mathrm{Z}_{i,j}\left(t\right)=-\beta\right\}\]
* proposal \(i\) is accepted when the upper barrier \(\lambda\) is reached, while incumbent \(j\) is maintained (so proposal \(i\) is rejected) when the lower barrier \(-\beta\) is reached; so the choice variable is \[\mathrm{C}_{i,j}=\left\{\begin{array}{ll}i&\quad\text{if }\mathrm{Z}_{i,j} \left(\mathrm{RT}_{i,j}\right)=\lambda\\ j&\quad\text{if }\mathrm{Z}_{i,j}\left(\mathrm{RT}_{i,j}\right)=-\beta\end{array}\right.\]
The different amounts of net evidence required at the two barriers, \(\lambda\) and \(\beta\), account for the different roles of the alternatives as proposal and status quo. A DDM is pinned down by its elements \(\nu\), \(\lambda\) and \(\beta\). We thus write it as DDM \(\left(\nu,\lambda,\beta\right)\). When \(\lambda=\beta\) we say that the DDM is _symmetric_.
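As an illustration, the comparison just described can be simulated by Euler discretization of the net-evidence process (21). In the sketch below the step size, the neural utilities and the barriers are illustrative choices, and the Monte Carlo frequencies only approximate \(\rho_{\rm C}\left(i\mid j\right)\) and \(\tau_{\rm RT}\left(i\mid j\right)\).

```python
import math
import random

rng = random.Random(0)

def ddm_compare(nu_i, nu_j, lam, beta, dt=1e-3):
    """One DDM comparison of proposal i against incumbent j.

    Returns ("proposal", rt) if the upper barrier lam is reached and
    ("incumbent", rt) if the lower barrier -beta is reached.
    """
    drift = nu_i - nu_j
    z, t = 0.0, 0.0
    while -beta < z < lam:
        # Euler step of the net evidence (21): drift plus sqrt(2) * Wiener increment
        z += drift * dt + math.sqrt(2.0 * dt) * rng.gauss(0.0, 1.0)
        t += dt
    return ("proposal" if z >= lam else "incumbent"), t

# Monte Carlo frequencies approximating rho_C(i|j) and tau_RT(i|j)
accepted, total_time, n = 0, 0.0, 2000
for _ in range(n):
    winner, rt = ddm_compare(nu_i=0.5, nu_j=0.0, lam=1.0, beta=1.0)
    accepted += (winner == "proposal")
    total_time += rt
print(accepted / n, total_time / n)
```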
**Proposition 7**: _The pair \(\left(\rho_{\mathrm{C}},\tau_{\mathrm{RT}}\right)\) generated by a DDM \(\left(\nu,\lambda,\beta\right)\) is a chronometric tandem, with \(\rho_{\mathrm{C}}\) positive (and transitive). It has a binary value representation with_
\[v_{\mathrm{C}}=\lambda\nu\quad\text{;}\quad s_{\mathrm{C}}\left(i,j\right)=1+ \frac{e^{\lambda\left|\nu\left(i\right)-\nu\left(j\right)\right|}-e^{\beta \left|\nu\left(i\right)-\nu\left(j\right)\right|}}{1-e^{\left(\lambda+\beta \right)\left|\nu\left(i\right)-\nu\left(j\right)\right|}}\quad\text{;}\quad w_{ \mathrm{C}}\text{ constant} \tag{22}\]
\[\varphi_{\rm RT}\left(x\right)=\frac{\lambda^{2}}{x}\left[\frac{1-e^{\frac{\beta}{\lambda}x}}{e^{-x}-e^{\frac{\beta}{\lambda}x}}\left(1+\frac{\beta}{\lambda}\right)-\frac{\beta}{\lambda}\right] \tag{23}\]
_for all \(x\in\mathbb{R}\)._
Thus, in the DDM case the pair \(\left(\rho_{\rm C},\tau_{\rm RT}\right)\) is a tandem with binary value representation
\[\rho_{\rm C}\left(i\mid j\right)=s_{\rm C}\left(i,j\right)\xi\left(v_{\rm C} \left(i\right)-v_{\rm C}\left(j\right)\right)=s_{\rm C}\left(i,j\right)\frac{e ^{v_{\rm C}\left(i\right)}}{e^{v_{\rm C}\left(i\right)}+e^{v_{\rm C}\left(j \right)}}\]
and
\[\tau_{\rm RT}\left(i\mid j\right)=\varphi_{\rm RT}\left(v_{\rm C}\left(i \right)-v_{\rm C}\left(j\right)\right)=\lambda\frac{\lambda\rho_{\rm C}\left( i\mid j\right)-\beta\left(1-\rho_{\rm C}\left(i\mid j\right)\right)}{v_{\rm C} \left(i\right)-v_{\rm C}\left(j\right)}\]
In particular, we have a decomposition of the Fechnerian utility \(v_{\rm C}=\lambda\nu\) in terms of neural utility function \(\nu\) and acceptance threshold \(\lambda\). Accordingly,
\[v_{\rm C}\left(i\right)-v_{\rm C}\left(j\right)=\lambda\left(\nu\left(i\right) -\nu\left(j\right)\right)\]
The Fechnerian utility difference thus decomposes into the neural utility difference \(\nu\left(i\right)-\nu\left(j\right)\) weighted by the coefficient \(\lambda\). The higher the neural utility difference, the higher the intensity of the neural value of \(i\) over \(j\). The higher \(\lambda\), the greater the decision unit's ability to perceive this value difference, and so to discriminate the alternatives' subjective values. In other words, \(\lambda\) acts as a magnifying lens for neural utility differences.
The next result gives a sharp empirical content to the DDM case. It is convenient to state it using the log-odds
\[\ell_{ij}=\log\frac{\rho_{\rm C}\left(i\mid j\right)}{\rho_{\rm C}\left(j \mid i\right)}\quad\mbox{and}\quad\bar{\ell}_{ij}=\log\frac{1-\rho_{\rm C} \left(j\mid i\right)}{1-\rho_{\rm C}\left(i\mid j\right)}\]
**Proposition 8**: _The elements of a DDM \(\left(\nu,\lambda,\beta\right)\) are uniquely identified by the tandem \(\left(\rho_{\rm C},\tau_{\rm RT}\right)\) that it generates. In particular, if \(\ell_{ij}\neq 0\),_
\[\lambda=\left|\ell_{ij}\right|\sqrt{\frac{\tau_{ij}}{\ell_{ij}\rho_{ij}+\bar {\ell}_{ij}\left(\rho_{ij}-1\right)}}\]
_and_
\[\beta=\lambda\frac{\bar{\ell}_{ij}}{\ell_{ij}}\quad;\quad\nu\left(i\right)= \frac{1}{\lambda}\log r_{\rm C}\left(i,j^{*}\right)\]
_under the normalization \(\nu\left(j^{*}\right)=0\) for some alternative \(j^{*}\)._
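The identification formulas of Proposition 8 are directly computable from observed choice frequencies and mean response times. The sketch below transcribes them for a single pair; the observables \(\rho_{\rm C}\left(i\mid j\right)\), \(\rho_{\rm C}\left(j\mid i\right)\) and \(\tau_{\rm RT}\left(i\mid j\right)\) are made-up numbers that serve only to exercise the computation.

```python
import math

# Made-up observables for one pair (i, j): acceptance frequencies and mean response time.
rho_ij, rho_ji, tau_ij = 0.62, 0.30, 0.45

ell     = math.log(rho_ij / rho_ji)                  # log-odds  l_ij
ell_bar = math.log((1.0 - rho_ji) / (1.0 - rho_ij))  # log-odds  l-bar_ij

# Proposition 8: recover the barriers from a single pair with l_ij != 0.
lam  = abs(ell) * math.sqrt(tau_ij / (ell * rho_ij + ell_bar * (rho_ij - 1.0)))
beta = lam * ell_bar / ell

print(lam, beta)
```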
Finally, we characterize symmetric DDMs by showing that symmetry is equivalent to an unbiased \(\rho_{\rm C}\) as well as to a symmetric \(\tau_{\rm RT}\).
**Proposition 9**: _For a tandem \(\left(\rho_{\rm C},\tau_{\rm RT}\right)\) generated by a DDM \(\left(\nu,\lambda,\beta\right)\), the following conditions are equivalent:_
* _the tandem is psychometric;_
2. \(\beta=\lambda\)_;_
3. \(\tau_{\rm RT}\left(i\mid j\right)=\tau_{\rm RT}\left(j\mid i\right)\) _for some (all)_ \(i\neq j\)_;_
4. \(\rho_{\rm C}\left(i\mid j\right)=1-\rho_{\rm C}\left(j\mid i\right)\) _for some (all)_ \(i\neq j\)_._
_In this case,_
\[\varphi_{\rm RT}\left(x\right)=\frac{\lambda^{2}}{x}\tanh\frac{x}{2}\]
_for all \(x\in\mathbb{R}\)._
We conclude this section by observing that a broad family of BCMs is given by _evidence threshold models_. They encompass integration models, like the DDM just studied, as well as the extrema detection models discussed by Stine et al. (2020). In Appendix A we discuss this family of BCMs in some detail.
### Neural Metropolis Algorithm
The protagonist of our analysis is an algorithmic decision process that a neural system might implement when facing a multialternative menu \(A\). This process consists of a sequence of pairwise comparisons conducted via a BCM, whose contestants are selected by a Markovian mechanism in the sense of Metropolis et al. (1953). This sequential structure is motivated, as discussed in the Introduction, by the well-known limits of working memory and is supported by classic and recent eye-tracking studies.16
Footnote 16: See Russo and Rosen (1975), Krajbich and Rangel (2011) and Reutskaja et al. (2011) as well as the discussion in Cerreia-Vioglio et al. (2022).
In broad strokes, this algorithmic decision process:
1. starts from an arbitrary element \(j\) of the menu, the _incumbent_;
2. selects a candidate alternative \(i\) in the menu, the _proposal_;
3. compares them via a BCM and makes the winner the new incumbent;
4. repeats steps 2-3 until deliberation time comes, with the last incumbent being the chosen alternative in the menu.
More in detail, the algorithm starts by selecting a first incumbent \(j\) according to an initial distribution \(\mu\in\Delta\left(A\right)\) that, for example, may describe the "first fixation" of the decision unit. It proceeds through an _exploration_ (_stochastic_) _matrix_
\[Q=\left[Q\left(i\mid j\right):i,j\in A\right]\]
of order \(\left|A\right|\) that describes how the algorithm navigates through alternatives. In particular, given the incumbent \(j\), a proposal \(i\) is selected with probability \(Q\left(i\mid j\right)\). Incumbent and proposal are then compared via a BCM \(\rm(C,RT)\). After \({\rm RT}_{i,j}\) seconds, the new incumbent is \(j^{\prime}=C_{i,j}\); a new proposal \(i^{\prime}\) is then selected with probability \(Q\left(i^{\prime}\mid j^{\prime}\right)\), and so on and so forth.
The algorithm terminates according to a posited random _stopping number_\(N\) that limits the number of allowed iterations because of exogenously constrained computational resources (for instance, this number may have a cost, say economic or physiological, for the decision unit). The last incumbent is the algorithm output, so what the algorithm chooses from menu \(A\).
After this preliminary discussion, next we formalize the _Neural Metropolis Algorithm_, our algorithmic decision process. Its constitutive elements are a BCM \((\mathrm{C},\mathrm{RT})\) and an exploration strategy \((\mu,Q)\), summarized in the quartet
\[(\mathrm{C},\mathrm{RT},\mu,Q) \tag{24}\]
For mathematical convenience we start the algorithm at time \(-1\).
**Neural Metropolis Algorithm**
**Input:**_Given a stopping number_\(N\)_._
**Start:**_Draw i from A according to_\(\mu\) _and_
\(\bullet\) _set_\(t_{-1}=0\)_,_
\(\bullet\) _set_\(j_{-1}=i\)_._
**Repeat:**_Draw_\(i_{n}\) _from A according to_\(Q\left(\cdot\mid j_{n-1}\right)\) _and compare it to_\(j_{n-1}\)_:_
\(\bullet\) _set_\(t_{n}=t_{n-1}+\mathrm{RT}_{i_{n},j_{n-1}}\)_,_
\(\bullet\) _set_\(j_{n}=\mathrm{C}_{i_{n},j_{n-1}}\)_;_
**until**_\(n=N\)_._
**Stop:**_Set_\(k=j_{n-1}\)_._
**Output:**_Choose k from A._
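A minimal Python rendering of the pseudocode above is sketched next; the function `bcm_compare`, the toy binary choice model and the uniform exploration matrix are hypothetical stand-ins introduced only for illustration.

```python
import random

rng = random.Random(1)

def neural_metropolis(menu, mu, Q, bcm_compare, N):
    """One run of the Neural Metropolis Algorithm.

    mu:  dict alternative -> initial probability
    Q:   dict incumbent -> dict proposal -> exploration probability
    bcm_compare(i, j): returns (winner, response_time) for proposal i vs incumbent j
    N:   realized stopping number (number of allowed iterations)
    """
    incumbent = rng.choices(menu, weights=[mu[a] for a in menu])[0]
    elapsed = 0.0
    for _ in range(N):
        proposal = rng.choices(menu, weights=[Q[incumbent][a] for a in menu])[0]
        winner, rt = bcm_compare(proposal, incumbent)
        elapsed += rt
        incumbent = winner            # the winner is the new incumbent
    return incumbent, elapsed         # last incumbent = chosen alternative

# Hypothetical ingredients: a uniform exploration matrix and a toy BCM in which
# the proposal is accepted with a fixed probability and each comparison takes unit time.
menu = ["a", "b", "c"]
mu = {a: 1.0 / 3 for a in menu}
Q = {j: {i: (0.0 if i == j else 0.5) for i in menu} for j in menu}

def toy_bcm(i, j):
    return (i if rng.random() < 0.6 else j), 1.0

choice, t = neural_metropolis(menu, mu, Q, toy_bcm, N=10)
print(choice, t)
```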
Run with stopping number \(N\), the Neural Metropolis Algorithm (24) selects alternative \(j_{n-1}\) when \(N=n\), where \(n\) is the iteration at which the decision process is interrupted by the stopping number. The Neural Metropolis Algorithm generalizes the Metropolis-DDM Algorithm of Cerreia-Vioglio et al. (2022), which is the special case in which the underlying BCM is generated by a DDM, the exploration matrix \(Q\left(i\mid j\right)\) is inversely proportional to the mean of \(\mathrm{RT}_{i,j}\), and a hard deadline is given.
### Algorithmic properties
The Neural Metropolis Algorithm (24) generates a Markov chain of incumbents
\[J=\left\{J_{-1},J_{0},J_{1},...\right\} \tag{25}\]
with \(\mathbb{P}\left[J_{-1}=j\right]=\mu\left(j\right)\) for all alternatives \(j\) in \(A\) and, for each \(n\geq 0\),
\[\mathbb{P}\left[J_{n}=i\mid J_{n-1}=j\right]=\underset{\text{prob. }i\text{ proposed}}{\underbrace{Q\left(i\mid j\right)}}\times \underset{\text{prob. }i\text{ accepted}}{\underbrace{\rho_{\text{C}}\left(i\mid j\right)}}=:M \left(i\mid j\right)\]
for all distinct alternatives \(i\) and \(j\) in \(A\). The stochastic matrix \(M\) is the _transition matrix_ of the incumbents' Markov chain \(J\). In particular, we say that the Neural Metropolis Algorithm is _reversible_ when its transition matrix is reversible and so the incumbents' Markov chain (25) is reversible.
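In matrix form, the construction of \(M\) from \(Q\) and \(\rho_{\rm C}\) is immediate. The sketch below uses hypothetical numerical values and adopts the convention that column \(j\) collects the transition probabilities out of incumbent \(j\), with the diagonal entry absorbing the probability that the incumbent is retained.

```python
import numpy as np

# Hypothetical ingredients for a three-alternative menu; column j refers to incumbent j.
Q = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])        # exploration matrix: proposals drawn uniformly, never j itself
rho = np.array([[0.5, 0.6, 0.7],
                [0.4, 0.5, 0.6],
                [0.3, 0.4, 0.5]])      # rho[i, j] = rho_C(i | j); diagonal entries are unused

M = Q * rho                                    # off-diagonal entries M(i|j) = Q(i|j) rho_C(i|j)
M[np.diag_indices(3)] = 1.0 - M.sum(axis=0)    # incumbent j retained with the remaining probability
print(M)
print(M.sum(axis=0))                           # each column sums to 1
```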
The Neural Metropolis Algorithm induces, for each stopping number \(N\), a:
* _choice probability_\(p_{N}\in\Delta\left(A\right)\),17 where \(p_{N}\left(i,A\right)\) is the probability that alternative \(i\) is selected from menu \(A\) by the algorithm; Footnote 17: Recall that \(p_{N}\) is an \(\left|A\right|\)-dimensional vector.
* _mean response time_\(\tau_{N}\in\left[0,\infty\right)\), the average time that the algorithm takes to select an alternative from \(A\).
The possibility of computing these quantities in explicit form is what makes the Neural Metropolis Algorithm empirically relevant. To this end, next we introduce a class of stopping numbers amenable to computations.
**Definition 10**: _A stopping number \(N\) is simple within a Neural Metropolis Algorithm (24) if it is independent of the realizations of incumbents, proposals and response times._
Next we compute the choice probabilities and mean response times for a Neural Metropolis Algorithm with a simple stopping number. A piece of notation: we denote by
\[\bar{\tau}_{j}=\underset{i\in A}{\sum}Q\left(i\mid j\right)\tau_{\text{RT}} \left(i\mid j\right) \tag{26}\]
the average duration of an iteration when \(j\) is the incumbent.
**Proposition 10**: _For a Neural Metropolis Algorithm (24) with a simple stopping number \(N\),18_
Footnote 18: The r.h.s. of both formulas in (27) involve standard matrix-vector multiplications: \(\mu\) and \(\bar{\tau}\) are \(\left|A\right|\)-dimensional vectors, while \(f_{N}\left(M\right)\) and \(g_{N}\left(M\right)\) are the square matrices of order \(\left|A\right|\) defined by (3).
\[p_{N}=f_{N}\left(M\right)\mu\quad\text{and}\quad\tau_{N}=\bar{\tau}\cdot g_{N} \left(M\right)\mu \tag{27}\]
Using the definitions of probability and survival generating functions, we can rewrite the choice probabilities and mean response times (27) as
\[p_{N}=\left(\sum_{n=0}^{\infty}\!\mathbb{P}\left[N=n\right]M^{n}\right)\mu\quad \text{and}\quad\tau_{N}=\bar{\tau}\cdot\left(\sum_{n=0}^{\infty}\!\mathbb{P} \left[N>n\right]M^{n}\right)\mu \tag{28}\]
An immediate consequence of this rewriting is that
\[N\geq N^{\prime}\Longrightarrow\tau_{N}\geq\tau_{N^{\prime}}\]
A less tight stopping number results, as natural, in a longer mean decision time.
In the following important case we can compute the choice probabilities and mean response times in closed form.
**Definition 11**: _Given two coefficients \(\zeta\in\left(0,1\right)\) and \(r\geq 1\), the negative binomial stopping number \(N_{r}\left(\zeta\right)\) is defined by_
\[\mathbb{P}\left[N=n\right]=\binom{n+r-1}{r-1}\zeta^{n}\left(1-\zeta\right)^{r }\qquad\forall n\geq 0\]
Under this distribution, the decision unit receives a "search" signal with probability \(\zeta\) and a "stop" signal with probability \(1-\zeta\); it then proceeds to compare the alternatives when a search signal is received, while it stops searching after \(r\) stop signals.19 When \(r=1\), it reduces to a _geometric stopping number_
Footnote 19: For the first \(r-1\) stop signals it just freezes, restarting in the next round.
\[\mathbb{P}\left[N_{1}\left(\zeta\right)=n\right]=\zeta^{n}\left(1-\zeta \right)\qquad\forall n\geq 0\]
Now, the decision unit stops as soon as it receives the first stop signal.
**Proposition 11**: _It holds_
\[f_{N_{r}\left(\zeta\right)}\left(M\right)=\left(1-\zeta\right)^{r}\left(1- \zeta M\right)^{-r}\]
_and_
\[g_{N_{r}\left(\zeta\right)}\left(M\right)=-\left(\sum_{k=0}^{r}\binom{r}{k} \left(-\zeta\right)^{k}\sum_{j=0}^{k-1}M^{j}\right)\left(1-\zeta M\right)^{-r} \tag{29}\]
By Proposition 10, for a simple negative binomial stopping number we thus have
\[p_{N_{r}\left(\zeta\right)}=\left(1-\zeta\right)^{r}\left(1-\zeta M\right)^{ -r}\mu\quad\text{and}\quad\tau_{N_{r}\left(\zeta\right)}=-\bar{\tau}\cdot \left(\sum_{k=0}^{r}\binom{r}{k}\left(-\zeta\right)^{k}\sum_{j=0}^{k-1}M^{j} \right)\left(1-\zeta M\right)^{-r}\mu\]
In particular, in the geometric case \(r=1\) we get
\[p_{N_{1}\left(\zeta\right)}=\left(1-\zeta\right)\left(1-\zeta M\right)^{-1} \mu\qquad\text{and}\qquad\tau_{N_{1}\left(\zeta\right)}=\bar{\tau}\cdot\zeta \left(1-\zeta M\right)^{-1}\mu\]
The formula for \(p_{N_{1}\left(\zeta\right)}\) was first proved in Valkanova (2020); all other formulas appear to be novel.
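Under the stated assumptions, the geometric-case formulas reduce to a single matrix inversion. The sketch below evaluates them for a hypothetical three-alternative transition matrix, an illustrative initial distribution and illustrative mean iteration durations.

```python
import numpy as np

# Illustrative transition matrix M (columns sum to 1), initial distribution mu,
# mean iteration durations tau_bar, and geometric parameter zeta.
M = np.array([[0.6, 0.3, 0.2],
              [0.3, 0.5, 0.3],
              [0.1, 0.2, 0.5]])
mu      = np.array([1/3, 1/3, 1/3])
tau_bar = np.array([0.8, 1.0, 1.2])
zeta    = 0.9

resolvent = np.linalg.inv(np.eye(3) - zeta * M)
p   = (1 - zeta) * resolvent @ mu        # choice probabilities p_{N_1(zeta)}
tau = tau_bar @ (zeta * resolvent @ mu)  # mean response time tau_{N_1(zeta)}

print(p, p.sum())                        # p is a probability vector over the menu
print(tau)
```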
### Algorithmic value analysis
Earlier in the paper we discussed the value underpinning of binary choice probabilities. Next we consider a similar concept for Neural Metropolis Algorithms.
**Definition 12**: _A Neural Metropolis Algorithm (24) is value based if its binary choice probability \(\rho_{\mathrm{C}}\) has a binary value representation \((s_{\mathrm{C}},v_{\mathrm{C}},w_{\mathrm{C}})\)._
This notion is the algorithmic counterpart of the binary value representation of a binary choice probability. By Theorem 4, value-based Neural Metropolis Algorithms are characterized by transitive binary choice probabilities.
**Theorem 12**: _If a value-based Neural Metropolis Algorithm has a nice exploration matrix, then_
\[\lim_{n\to\infty}\Pr\left[J_{n}=i\right]=\lim_{N_{k}\to\infty}p_{N_{k}}\left(i,A\right)=\left\{\begin{array}{ll}\frac{e^{v_{\mathrm{C}}(i)}}{\sum_{j\in \arg\max_{A}w_{\mathrm{C}}}e^{v_{\mathrm{C}}(j)}}&\mbox{ if }i\in\arg\max_{A}w_{ \mathrm{C}}\\ 0&\mbox{ else }\end{array}\right. \tag{30}\]
_for all divergent sequences of simple stopping numbers \(N_{k}\)._
This result clarifies the nature of a value-based Neural Metropolis Algorithm. To appreciate it, observe that \(\Pr\left[J_{n}=i\right]\) is the probability that, unstopped, the algorithm chooses alternative \(i\) after \(n\) iterations, while \(\arg\max_{A}w_{\mathrm{C}}\) is the set of alternatives that are maximal under \(\succ^{*}\). Thus,
\[\lim_{n\to\infty}\Pr\left[J_{n}=i\right]\]
indicates the inherent tendency of the Neural Metropolis Algorithm to choose a maximal alternative \(i\), regardless of the exogenously posited stopping number. As a result, it can be seen as representing the underlying value of alternative \(i\). When the algorithm satisfies (30), we have, for alternatives \(i\) and \(j\) that are maximal under \(\succ^{*}\),
\[v_{\mathrm{C}}\left(i\right)\geq v_{\mathrm{C}}\left(j\right)\Longleftrightarrow \lim_{n\to\infty}\Pr\left[J_{n}=i\right]\geq\lim_{n\to\infty}\Pr\left[J_{n}=j\right]\]
The inherent tendency of the algorithm is thus consistent with the Fechnerian utility function \(v_{\mathrm{C}}\), which in the limit governs the choices between maximal alternatives (be they incumbents or proposals). The equality
\[\lim_{N_{k}\to\infty}p_{N_{k}}\left(i,A\right)=\lim_{n\to\infty}\Pr\left[J_{n }=i\right]\]
shows that this limit behavior occurs when the stopping number is less and less tight. This means, _inter alia_, that the limit behavior is unaffected by status quo biases, so \(s_{\mathrm{C}}\) plays no role. Implicit here is the view that these biases arise under external pressure, here embodied by the posited stopping number, so they vanish when this pressure relaxes.
Finally, formula (30) ensures that
\[\lim_{n\to\infty}\Pr\left[J_{n}=i\right]=\lim_{N_{k}\to\infty}p_{N_{k}}\left( i,A\right)=0\]
for all alternatives \(i\) in \(A\) that are not maximal under \(\succ^{*}\). In other words, at the limit these alternatives have no chance to be selected - as \(\lim_{N_{k}\rightarrow\infty}p_{N_{k}}\left(i,A\right)=0\) - and in any event the algorithm has no tendency to select them - as \(\lim_{n\rightarrow\infty}\Pr\left[J_{n}=i\right]=0\). This optimality property ensures that, as stopping numbers are less and less tight, the algorithm selects alternatives that are maximal under \(\succ^{*}\). Among them, stochastic comparisons are then governed by the Fechnerian utility. In sum, at the limit the Neural Metropolis Algorithm hard-maximizes \(w_{\mathrm{C}}\) and soft-maximizes \(v_{\mathrm{C}}\).
A first important consequence of the previous theorem concerns the case in which \(\succ^{*}\) features a single maximal element.
**Corollary 13**: _A value-based Neural Metropolis Algorithm (24), with nice exploration matrix \(Q\), satisfies_
\[\lim_{n\rightarrow\infty}\Pr\left[J_{n}=i\right]=\lim_{N_{k}\rightarrow\infty }p_{N_{k}}\left(i,A\right)=\left\{\begin{array}{ll}1&\qquad\text{if }i\in\arg\max_{A}w_{ \mathrm{C}}\\ 0&\qquad\text{else}\end{array}\right.\]
_if and only if \(\arg\max_{A}w_{\mathrm{C}}\) is a singleton._
For instance, in the deterministic case of a transitive Dirac binary choice probability the Neural Metropolis Algorithm selects, at the limit, the best alternative.20 This limit analysis is much in line with the traditional assumption of unconstrained computational resources. Traditional choice behavior is thus implemented computationally by the Neural Metropolis Algorithm.
Footnote 20: Recall that a Dirac and transitive \(\rho_{\mathrm{C}}\) corresponds to a weakly complete and transitive \(\succ^{*}\) (cf. Lemma 3). So, \(\arg\max_{A}w_{\mathrm{C}}\) is a singleton consisting of the best alternative under \(\succ^{*}\).
In the traditional case just considered, \(\arg\max_{A}w_{\mathrm{C}}\) is a singleton in \(A\). In contrast, \(\arg\max_{A}w_{\mathrm{C}}\) coincides with the whole set \(A\) when, like in the DDM case, the binary choice probability \(\rho_{\mathrm{C}}\) is positive. In this case, \(w_{\mathrm{C}}\) is constant, so all alternatives are maximal under \(\succ^{*}\). We thus have a second noteworthy special case of the last representation theorem.
**Corollary 14**: _A value-based Neural Metropolis Algorithm (24), with nice exploration matrix \(Q\), satisfies_
\[\lim_{n\rightarrow\infty}\Pr\left[J_{n}=i\right]=\lim_{N_{k}\rightarrow\infty }p_{N_{k}}\left(i,A\right)=\frac{e^{v_{\mathrm{C}}\left(i\right)}}{\sum_{j\in A }e^{v_{\mathrm{C}}\left(j\right)}}\qquad\forall i\in A \tag{31}\]
_if and only if its binary choice probability \(\rho_{\mathrm{C}}\) is positive._
By Proposition 7, in the DDM special case we have \(v_{\mathrm{C}}=\lambda\nu\) in (31) and so multinomial logit behavior
\[\frac{e^{\lambda\nu\left(i\right)}}{\sum_{j\in A}e^{\lambda\nu\left(j\right)}} \tag{32}\]
emerges at the limit, like in Baldassi et al. (2020) and Cerreia-Vioglio et al. (2022), even though here the assumptions on the stopping numbers are different.
In the positive \(\rho_{\mathrm{C}}\) case, value-based Neural Metropolis Algorithms have a remarkable computational property, as the next theorem, our last main result, shows.
**Theorem 15**: _A Neural Metropolis Algorithm (24) with positive \(\rho_{\rm C}\) and nice exploration matrix \(Q\), is value based if and only if its transition matrix \(M\) is reversible.21_
Footnote 21: If and only if \(\rho_{\rm C}\) is transitive.
At a computational level, reversibility ensures that the transition matrix \(M\) is diagonalizable with real eigenvalues. Therefore,
\[M=U\mathop{\rm diag}\left(\lambda_{1},\lambda_{2},...,\lambda_{|A|}\right)U^{-1}\]
where \(\mathop{\rm diag}\left(\cdot\right)\) is the diagonal matrix of the eigenvalues \(\lambda_{i}\) of \(M\), each repeated according to its multiplicity, and the columns of \(U\) form a basis of the respective eigenvectors. In turn, this readily implies
\[f_{N}\left(M\right)=U\mathop{\rm diag}\left(f_{N}\left(\lambda_{1}\right),f_{ N}\left(\lambda_{2}\right),...,f_{N}\left(\lambda_{|A|}\right)\right)U^{-1} \tag{33}\]
and
\[g_{N}\left(M\right)=U\mathop{\rm diag}\left(\frac{1-f_{N}\left(\lambda_{1} \right)}{1-\lambda_{1}},\frac{1-f\left(\lambda_{2}\right)}{1-\lambda_{2}},...,\frac{1-f\left(\lambda_{|A|}\right)}{1-\lambda_{|A|}}\right)U^{-1} \tag{34}\]
with the limit convention (2). These formulas make it possible to compute choice probabilities and mean response times for simple stopping numbers, as in formulas (28). This computational achievement concludes the analysis of value-based Neural Metropolis Algorithms.
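Computationally, (33) amounts to applying the probability generating function of \(N\) to the eigenvalues of \(M\). A minimal sketch for a geometric stopping number is given below, with an illustrative matrix that is assumed diagonalizable with real eigenvalues (as reversibility guarantees).

```python
import numpy as np

# Geometric stopping number: f_N(x) = (1 - zeta) / (1 - zeta x); its value at each
# eigenvalue of M gives f_N(M) through the eigendecomposition (33).
M = np.array([[0.6, 0.3, 0.2],
              [0.3, 0.5, 0.3],
              [0.1, 0.2, 0.5]])        # illustrative, diagonalizable with real eigenvalues
zeta = 0.9

eigvals, U = np.linalg.eig(M)
f_vals = (1 - zeta) / (1 - zeta * eigvals)
fM = U @ np.diag(f_vals) @ np.linalg.inv(U)          # formula (33)
# g_N(M) follows analogously from (34), with the limit value E[N] = zeta/(1 - zeta)
# substituted at the unit eigenvalue.

# Sanity check against the resolvent form (1 - zeta)(I - zeta M)^{-1}
print(np.allclose(fM, (1 - zeta) * np.linalg.inv(np.eye(3) - zeta * M)))
```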
## 5 Discussion: temporal constraints
In our analysis we considered constrained resources as modelled by a stopping number on iterations, which may have an economic or physiological cost for the decision unit. For perspective, in this final section we consider a different type of constraint, namely, a temporal constraint in the form of a hard deadline \(t\). This deadline induces a stopping number \(N_{t}\) with \(N_{t}=n\) if \(t\in[t_{n-1},t_{n})\). In words, the decision unit cannot conclude the \(n\)-th comparison when the cumulative time \(t_{n}\) at its end exceeds the deadline \(t\). To see how this stopping number affects the analysis, observe that, if unstopped, a Neural Metropolis Algorithm realizes a stochastic process
\[\left(J,I,T\right)=\left(J_{-1},I_{0},T_{0},J_{0},I_{1},T_{1},...,J_{n-1},I_{n },T_{n},J_{n},I_{n+1},...\right) \tag{35}\]
where the realization \(j_{n-1}\) of \(J_{n-1}\) is the incumbent at the end of iteration \(n-1\), the realization \(i_{n}\) of \(I_{n}\) is the proposal at iteration \(n\), and the realization \(t_{n}\) of \(T_{n}\) is the duration of iteration \(n\). The stopping number \(N_{t}\) acts as follows:
\[N_{t}=n\iff T_{-1}+T_{0}+\cdot\cdot\cdot+T_{n-1}\leq t<T_{-1}+T_{0}+\cdot\cdot \cdot+T_{n-1}+T_{n}\]
where \(T_{-1}=0\). In this case, a closed form representation of \(p_{N_{t}}\) is not achievable in general and, by definition, \(\tau_{N_{t}}=t\). Yet, we can give a limit result, in the spirit of Theorem 12, under the following assumption.
**Regularity** A binary choice model \(\left(\mathrm{C},\mathrm{RT}\right)\) is _regular_ if \(\rho_{\mathrm{C}}\) is positive and transitive, \(\tau_{\mathrm{RT}}\) is positive and \(\mathrm{RT}=\left[\mathrm{RT}_{i,j}\right]\) consists of random response times \(\mathrm{RT}_{i,j}\) with a continuous distribution at \(0\) and with no singular part.22
Footnote 22: These two conditions on the distributions of response times are automatically satisfied when they all admit density.
We can now state the limit result.23
Footnote 23: In this discussion section we focus on the positive case, leaving the more general case to an in-depth future analysis.
**Proposition 16**: _If a Neural Metropolis Algorithm (24) with irreducible exploration matrix \(Q\) is based on a regular BCM \(\left(\mathrm{C},\mathrm{RT}\right)\), then_
\[\lim_{t\rightarrow\infty}p_{N_{t}}\left(i,A\right)=\frac{e^{v_{\mathrm{C}} \left(i\right)}\bar{\tau}_{i}}{\sum_{j\in A}e^{v_{\mathrm{C}}\left(j\right)} \bar{\tau}_{j}}\qquad\forall i\in A\]
As time pressure diminishes, the limit probability of choosing alternative \(i\) becomes proportional to the limit probability with which \(i\) is an incumbent times the average duration of the comparisons in which \(i\) is the incumbent. The intuition is natural: the longer the time spent in comparing an alternative with the other alternatives, the higher the probability of choosing that alternative at the deadline \(t\).
In the DDM special case, we get
\[\lim_{t\rightarrow\infty}p_{N_{t}}\left(i\right)=\frac{e^{\lambda\nu\left(i\right)}\bar{\tau}_{i}}{\sum_{j\in A}e^{\lambda\nu\left(j\right)}\bar{\tau}_{j}}=\frac{e^{\lambda\nu\left(i\right)+\alpha\left(i\right)}}{\sum_{j\in A}e^{\lambda\nu\left(j\right)+\alpha\left(j\right)}}\qquad\forall i\in A \tag{36}\]
Thus, the limit probability is softmax with neural utility \(\nu\) and alternative specific bias
\[\alpha\left(i\right)=\log\bar{\tau}_{i}\qquad\forall i\in A\]
If, in addition, the DDM is symmetric and the off diagonal entries of the exploration matrix \(Q\) are inversely proportional to mean response times (as in Cerreia-Vioglio et al. 2022, Section 2), then the \(\bar{\tau}_{i}\)'s are approximately constant and multinomial logit behavior (32) emerges.24
Footnote 24: See Appendix B.4 for details.
## Appendix A: Evidence threshold models
As it will soon become clear, evidence threshold models are best introduced in discrete time. For each pair \(i,j\) of alternatives in \(A\), let \(\left\{\mathrm{Z}_{i,j}\left(t\right)\right\}_{t=0}^{\infty}\) be a discrete-time stochastic process in which each variable \(\mathrm{Z}_{i,j}\left(t\right)\) represents the net evidence - accumulated or instantaneous - in favor of \(i\) over \(j\) that the neural system takes into account at time \(t\). Given two evidence thresholds \(\lambda,\beta>0\), a decision is taken when either the evidence in favor of \(i\) reaches level \(\lambda\) or the evidence in favor of \(j\) reaches level \(\beta\). This happens at (stochastic) time
\[\mathrm{RT}_{i,j}=\min\left\{t:\mathrm{Z}_{i,j}\left(t\right)\geq\lambda\text{ or }\mathrm{Z}_{i,j}\left(t\right)\leq-\beta\right\} \tag{37}\]
With this, the choice variable is
\[\mathrm{C}_{i,j}=\left\{\begin{array}{ll}i&\quad\text{if }\mathrm{Z}_{i,j} \left(\mathrm{RT}_{i,j}\right)\geq\lambda\\ j&\quad\text{if }\mathrm{Z}_{i,j}\left(\mathrm{RT}_{i,j}\right)\leq-\beta \end{array}\right. \tag{38}\]
Evidence threshold models encompass integration models, like a discrete-time version of the DDM, as well as the extrema detection models discussed by Stine et al. (2020). To see why, consider the discrete-time Ornstein-Uhlenbeck process
\[\mathrm{Z}_{i,j}\left(t\right)=\underbrace{\left(1-\eta\right)\mathrm{Z}_{i, j}\left(t-1\right)}_{\text{past evidence}}+\underbrace{\zeta_{i,j}\left(t\right)}_{\text{new evidence}}\qquad\forall t\geq 1\]
with initial condition \(\mathrm{Z}_{i,j}\left(0\right)=0\). The scalar \(\eta\in\left[0,1\right]\) captures past evidence deterioration and the variable
\[\zeta_{i,j}\left(t\right)=\left[\nu\left(i\right)-\nu\left(j\right)\right] \mu\left(t-1\right)+\sigma\varepsilon\left(t\right)\qquad\forall t\geq 1 \tag{39}\]
is the instantaneous noisy evidence gathered at time \(t\) in favor of either alternative.25 The shock \(\varepsilon\) is a Gaussian white noise process - i.e., it consists of i.i.d. Gaussian random variables \(\varepsilon\left(t\right)\sim N\left(0,1\right)\); like in the DDM, \(\nu\left(i\right)\) is the value of alternative \(i\). When
Footnote 25: Evidence is in favor of \(i\) over \(j\) when \(\zeta_{i,j}\left(t\right)\geq 0\) and in favor of \(j\) over \(i\) when \(\zeta_{i,j}\left(t\right)\leq 0\). The possible dependence of \(\mu\) on \(t-1\) allows for urgency signals.
\[\eta=0\quad,\quad\mu=1\quad\text{and}\quad\sigma=\sqrt{2} \tag{40}\]
process (39) reduces to the following discrete-time version of the DDM
\[\mathrm{Z}_{i,j}\left(t\right)-\mathrm{Z}_{i,j}\left(t-1\right)=\left[\nu \left(i\right)-\nu\left(j\right)\right]+\sqrt{2}\,\varepsilon\left(t\right) \qquad\forall t\geq 1\]
Through the discrete-time Wiener process \(w\left(t\right)=\sum_{s=1}^{t}\varepsilon\left(s\right)\), it is immediate to see that \(\mathrm{Z}_{i,j}\left(t\right)\) represents accumulated noisy evidence:
\[\mathrm{Z}_{i,j}\left(t\right)=\sum_{s=1}^{t}\zeta_{i,j}\left(s\right)=\left[ \nu\left(i\right)-\nu\left(j\right)\right]t+\sqrt{2}w\left(t\right)\]
In contrast, when \(\eta=1\) the process (39) takes the _extrema detection_ form
\[\mathrm{Z}_{i,j}\left(t\right)=\zeta_{i,j}\left(t\right)=\left[\nu\left(i \right)-\nu\left(j\right)\right]\mu\left(t-1\right)+\sigma\varepsilon\left(t \right)\qquad\forall t\geq 1 \tag{41}\]
Now \(\mathrm{Z}_{i,j}\left(t\right)\) represents instantaneous noisy evidence, as opposed to the DDM accumulated one.
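A minimal sketch of the discrete-time evidence threshold model follows; it transcribes the update (39) together with the stopping rules (37)-(38). The deterioration parameter \(\eta\) interpolates between the integration case \(\eta=0\) and the extrema detection case \(\eta=1\); the numerical values (and the constant \(\mu\)) are illustrative.

```python
import math
import random

rng = random.Random(2)

def evidence_threshold_compare(nu_i, nu_j, lam, beta, eta, mu=1.0, sigma=math.sqrt(2)):
    """Discrete-time evidence threshold comparison, returning (choice, response_time)."""
    z, t = 0.0, 0
    while True:
        t += 1
        zeta = (nu_i - nu_j) * mu + sigma * rng.gauss(0.0, 1.0)  # new evidence, as in (39)
        z = (1.0 - eta) * z + zeta                               # deteriorated past plus new evidence
        if z >= lam:
            return "i", t        # evidence threshold for the proposal reached, cf. (37)-(38)
        if z <= -beta:
            return "j", t        # evidence threshold for the incumbent reached

print(evidence_threshold_compare(0.3, 0.0, lam=2.0, beta=2.0, eta=0.0))  # integration (DDM-like)
print(evidence_threshold_compare(0.3, 0.0, lam=2.0, beta=2.0, eta=1.0))  # extrema detection
```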
In continuous time, the Ornstein-Uhlenbeck process becomes
\[\mathrm{dZ}_{i,j}\left(t\right)=-\eta\mathrm{Z}_{i,j}\left(t\right)\mathrm{d}t+ \left[\nu\left(i\right)-\nu\left(j\right)\right]\mu\left(t\right)\mathrm{d}t+ \sigma\mathrm{dW}\]
with solution
\[\mathrm{Z}_{i,j}\left(t\right)=\left[\nu\left(i\right)-\nu\left(j\right) \right]\mu\left(t\right)\frac{1-e^{-\eta t}}{\eta}+\int_{0}^{t}e^{-\eta(t-s)} \sigma\mathrm{dW}\left(s\right) \tag{42}\]
The DDM (21) is still the special case (40). It is more difficult to identify the continuous counterpart of the extrema detection model (41) because of the technical issues that arise with continuous-time white noise. As these issues do not appear to have a substantive neural underpinning, we introduced evidence threshold models in discrete time.26
Footnote 26: The accumulated evidence used by integration models like the DDM is properly formalized by Wiener processes. The instantaneous evidence featured by extrema detection models would rely on a notion of “derivative” for Wiener processes, a notoriously subtle issue as their paths are nowhere differentiable.
Be that as it may, Bogacz et al. (2006) report formulas for the continuous time Ornstein-Uhlenbeck process that generalize the DDM ones upon which Proposition 7 is based. It is unclear, however, whether these generalized formulas deliver a sharp Ornstein-Uhlenbeck extension of this proposition. Nevertheless, (42) is a significant generalization of the DDM that, via the obvious continuous time versions of (37) and (38), can play the role of a BCM.
## Appendix B: Proofs and related analysis
### Section 3
In this section it is sometimes convenient to use the exponential transformation \(u=e^{v}\) of the Fechnerian utility \(v\). We call \(u\) _strict utility_.
#### B.1.1 Proof of Lemma 1
(i) Asymmetry is easily checked. Assume _per contra_ that \(\succ^{*}\) is not negatively transitive. Then, there exist \(i\), \(j\) and \(k\) such that \(i\not\succ^{*}k\not\succ^{*}j\) but \(i\succ^{*}j\). Alternatives \(i\), \(k\) and \(j\) must be distinct: \(i\succ^{*}j\) implies \(i\neq j\), while \(i=k\) would imply \(i\not\succ^{*}j\) and so would \(k=j\). Moreover,
1. \(i\not\succ^{*}k\) implies \(\rho\left(i\mid k\right)<1\) and \(\rho\left(k\mid i\right)>0\),
2. \(k\not\succ^{*}j\) implies \(\rho\left(k\mid j\right)<1\) and \(\rho\left(j\mid k\right)>0\),
3. \(i\succ^{*}j\) implies \(\rho\left(i\mid j\right)=1\) and \(\rho\left(j\mid i\right)=0\).
Therefore,
\[\rho\left(j\mid i\right)\rho\left(k\mid j\right)\rho\left(i\mid k\right)=0 \quad\text{and}\quad\rho\left(k\mid i\right)\rho\left(j\mid k\right)\rho\left( i\mid j\right)\neq 0\]
which contradicts the transitivity of \(\rho\). We conclude that \(\succ^{*}\) is negatively transitive.
(ii) In view of (i), it follows from Fishburn (1970) p. 13.
(iii) Completeness is easily established. Assume _per contra_ that \(\succsim\) is not transitive. Then there exist \(i\), \(j\) and \(k\) such that \(i\succsim k\succsim j\) but \(i\not\succsim j\). Alternatives \(i\), \(k\) and \(j\) must be distinct: \(i\not\succsim j\) implies \(i\neq j\), while \(i=k\) would imply \(i\succsim j\), and so would \(k=j\). Moreover,
1. \(i\succsim k\) implies \(\rho\left(i\mid k\right)\geq\rho\left(k\mid i\right)\),
2. \(k\succsim j\) implies \(\rho\left(k\mid j\right)\geq\rho\left(j\mid k\right)\),
3. \(i\not\succsim j\) implies \(\rho\left(j\mid i\right)>\rho\left(i\mid j\right)\).
It holds \(\rho\left(i\mid k\right)>0\). Indeed, \(\rho\left(i\mid k\right)=0\) would imply \(\rho\left(k\mid i\right)=1\), contradicting (a). Similarly, \(\rho\left(k\mid j\right)>0\). Then,
\[\rho\left(i\mid k\right)\rho\left(k\mid j\right)\geq\rho\left(k\mid i\right) \rho\left(j\mid k\right)\quad;\quad\rho\left(j\mid i\right)>\rho\left(i\mid j \right)\quad;\quad\rho\left(i\mid k\right)\rho\left(k\mid j\right)>0\]
If \(\rho\left(i\mid k\right)\rho\left(k\mid j\right)=\rho\left(k\mid i\right) \rho\left(j\mid k\right)\), then both terms are strictly positive, and
\[\rho\left(i\mid k\right)\rho\left(k\mid j\right)\rho\left(j\mid i\right)> \rho\left(k\mid i\right)\rho\left(j\mid k\right)\rho\left(i\mid j\right)\]
Else \(\rho\left(i\mid k\right)\rho\left(k\mid j\right)>\rho\left(k\mid i\right) \rho\left(j\mid k\right)\), and then
\[\rho\left(i\mid k\right)\rho\left(k\mid j\right)\rho\left(j\mid i\right)> \rho\left(k\mid i\right)\rho\left(j\mid k\right)\rho\left(j\mid i\right) \geq\rho\left(k\mid i\right)\rho\left(j\mid k\right)\rho\left(i\mid j\right)\]
In both cases the transitivity of \(\rho\) is contradicted. We conclude that \(\succsim\) is transitive.
(iv) Reflexivity of \(\succsim^{\circ}\) is obvious. Let \(i\succsim^{\circ}j\) and \(j\succsim^{\circ}k\). By definition, \(i\succsim j\) and \(i\parallel^{*}j\) as well as \(j\succsim k\) and \(j\parallel^{*}k\). As both \(\succsim\) and \(\parallel^{*}\) are transitive, it follows that \(i\succsim k\) and \(i\parallel^{*}k\), that is, \(i\succsim^{\circ}k\). We conclude that \(\succsim^{\circ}\) is transitive. Finally, assume \(i\parallel^{*}j\), and note that also \(j\parallel^{*}i\). Since \(\succsim\) is complete, either \(i\succsim j\) or \(j\succsim i\), thus either \(i\succsim^{\circ}j\) or \(j\succsim^{\circ}i\).
#### B.1.2 Proof of Lemma 2
Point (i) is obvious.
(ii) Let \(i\) and \(j\) be any two alternatives with \(w\left(i\right)=w\left(j\right)\), and let \(\tilde{u}:A\rightarrow\left(0,\infty\right)\) be such that
\[\rho\left(i\mid j\right)=s\left(i,j\right)\frac{\tilde{u}\left(i\right)}{\tilde {u}\left(i\right)+\tilde{u}\left(j\right)}\]
We have
\[\frac{\rho\left(i\mid j\right)}{\rho\left(j\mid i\right)}=\frac{s\left(i,j \right)\frac{\tilde{u}\left(i\right)}{\tilde{u}\left(i\right)+\tilde{u}\left( j\right)}}{s\left(j,i\right)\frac{\tilde{u}\left(j\right)}{\tilde{u}\left(i \right)+\tilde{u}\left(j\right)}}=\frac{\tilde{u}\left(i\right)}{\tilde{u} \left(j\right)}\]
Similarly,
\[\frac{\rho\left(i\mid j\right)}{\rho\left(j\mid i\right)}=\frac{s\left(i,j \right)\frac{u\left(i\right)}{u\left(i\right)+u\left(j\right)}}{s\left(j,i \right)\frac{u\left(j\right)}{u\left(i\right)+u\left(j\right)}}=\frac{u\left( i\right)}{u\left(j\right)}\]
Therefore, for any \(j^{*}\in A\),
\[\tilde{u}\left(i\right)=\frac{\tilde{u}\left(j^{*}\right)}{u\left(j^{*}\right) }u\left(i\right)\]
for all \(i\in A\). We conclude that \(u\) is unique up to a positive scalar multiple.
(iii) Let \(i\) and \(j\) be any two alternatives with \(w\left(i\right)=w\left(j\right)\). By the symmetry of \(s\),
\[\rho\left(i\mid j\right)+\rho\left(j\mid i\right)=s\left(i,j\right)\frac{u \left(i\right)}{u\left(i\right)+u\left(j\right)}+s\left(j,i\right)\frac{u\left( j\right)}{u\left(i\right)+u\left(j\right)}=s\left(i,j\right) \tag{43}\]
Then \(s\) is unique on the level set of \(w\left(i\right)\). The relations in (11) follow. \(\blacksquare\)
#### B.1.3 Proof of Lemma 3
Let \(\rho\) be a binary choice probability. Suppose that \(\rho\) is Dirac and transitive. Let \(i\neq j\). As \(\rho\) is Dirac, \(\rho\left(i\mid j\right)\in\left\{0,1\right\}\). If \(\rho\left(i\mid j\right)=1\), then \(i\succ^{*}j\). If \(\rho\left(i\mid j\right)=0\), then \(\rho\left(j\mid i\right)=1\) and so \(j\succ^{*}i\). We conclude that \(\succ^{*}\) is weakly complete.
Let \(i\succ^{*}j\) and \(j\succ^{*}k\). Hence, we have that \(j\not\succ^{*}i\), \(k\not\succ^{*}j\), and \(k\neq i\). As \(\rho\) is transitive, \(\succ^{*}\) is negatively transitive by Lemma 1. Hence, \(k\not\succ^{*}i\) thus \(\rho\left(i\mid k\right)\neq 0\). By the definition of Dirac and since \(i\neq k\), we have that \(\rho\left(i\mid k\right)=1\), thus \(i\succ^{*}k\). We conclude that \(\succ^{*}\) is transitive.
As to the converse, let \(\succ^{*}\) be weakly complete and transitive. Let \(i\neq j\). As \(\succ^{*}\) is weakly complete, either \(i\succ^{*}j\) or \(j\succ^{*}i\). If \(i\succ^{*}j\), then \(\rho\left(i\mid j\right)=1\); if \(j\succ^{*}i\), then \(\rho\left(j\mid i\right)=1\) and so \(\rho\left(i\mid j\right)=0\). We conclude that \(\rho\left(i\mid j\right)\in\left\{0,1\right\}\). This proves that \(\rho\) is Dirac. Suppose, _per contra_, that \(\rho\) is not transitive. Then, there exist three distinct alternatives \(i\), \(j\) and \(k\) such that
\[\rho\left(j\mid i\right)\rho\left(k\mid j\right)\rho\left(i\mid k\right)\neq \rho\left(k\mid i\right)\rho\left(j\mid k\right)\rho\left(i\mid j\right)\]
It is impossible that both sides contain a zero factor. Since \(\rho\) is Dirac, either
\[\rho\left(j\mid i\right)\rho\left(k\mid j\right)\rho\left(i\mid k\right)=1\]
or
\[\rho\left(k\mid i\right)\rho\left(j\mid k\right)\rho\left(i\mid j\right)=1\]
In the former case, \(i\succ^{*}k\succ^{*}j\succ^{*}i\) and so \(\succ^{*}\) is not transitive. In the latter case, \(i\succ^{*}j\succ^{*}k\succ^{*}i\) and so, again, \(\succ^{*}\) is not transitive. We conclude that \(\rho\) must be transitive. \(\blacksquare\)
#### B.1.4 Theorem 4
We prove a more general result that provides a utility representation \(\bar{u}:A\rightarrow\mathbb{R}\) for the preference \(\succsim\).
**Theorem 17**: _Given a binary choice probability \(\rho\), the following conditions are equivalent:_
1. \(\rho\) _is transitive;_
2. _there exist_ \(w,u:A\rightarrow(0,\infty)\) _and a symmetric_ \(s:A^{2}\rightarrow(0,\infty)\) _such that_ \[\rho\left(i\mid j\right)=\left\{\begin{array}{ll}1&\text{if }w\left(i\right)>w\left(j\right)\\ s\left(i,j\right)\dfrac{u\left(i\right)}{u\left(i\right)+u\left(j\right)}& \text{if }w\left(i\right)=w\left(j\right)\\ 0&\text{if }w\left(i\right)<w\left(j\right)\end{array}\right.\] _for all_ \(i,j\in A\)_;_
3. _there exist_ \(\bar{u}:A\rightarrow(0,\infty)\)_,_ \(f:\operatorname{Im}\bar{u}\rightarrow(0,\infty)\) _increasing, and a symmetric_ \(s:A^{2}\rightarrow(0,\infty)\) _such that_ \[\rho\left(i\mid j\right)=\left\{\begin{array}{ll}1&\text{if }f\left(\bar{u} \left(i\right)\right)>f\left(\bar{u}\left(j\right)\right)\\ s\left(i,j\right)\dfrac{\bar{u}\left(i\right)}{\bar{u}\left(i\right)+\bar{u} \left(j\right)}&\text{if }f\left(\bar{u}\left(i\right)\right)=f\left(\bar{u} \left(j\right)\right)\\ 0&\text{if }f\left(\bar{u}\left(i\right)\right)<f\left(\bar{u} \left(j\right)\right)\end{array}\right.\] _for all_ \(i,j\in A\)_._
By setting \(v=\log u\) we recover Theorem 4 (note that since \(w\) is ordinally unique we can always assume it to be strictly positive).
**Proof** (i) implies (iii). Since \(A\) is finite there exists \(w:A\rightarrow(0,\infty)\) that represents \(\succ^{*}\) in the sense of (9). Then,
\[w\left(i\right)>w\left(j\right) \iff i\succ^{*}j\iff\rho\left(i\mid j\right)=1\] \[w\left(i\right)<w\left(j\right) \iff j\succ^{*}i\iff\rho\left(i\mid j\right)=0\] \[w\left(i\right)=w\left(j\right) \iff i\parallel^{*}j\iff\rho\left(i\mid j\right)\in(0,1)\]
By Lemma 1, \(\parallel^{*}\) is an equivalence relation on \(A\). Since \(w\) is unique up to a strictly increasing transformation, if \(\left|\operatorname{Im}w\right|=m\) we can assume \(\operatorname{Im}w=\{1,2,...,m\}\). For all \(h=1,2,...,m\) we
can choose \(i_{h}^{*}\in w^{-1}\left(h\right)\). With this, \(\left[i_{1}^{*}\right],...,\left[i_{m}^{*}\right]\) is the partition of \(A\) induced by \(\left\|{}^{*}\right.\). For each \(h=1,...,m\), set
\[u_{h}^{*}\left(j\right)=\frac{\rho\left(j\mid i_{h}^{*}\right)}{\rho\left(i_{h} ^{*}\mid j\right)}\qquad\forall j\in\left[i_{h}^{*}\right]\]
The ratio is well defined because \(w\left(j\right)=w\left(i_{h}^{*}\right)\) implies \(\rho\left(j\mid i_{h}^{*}\right),\rho\left(i_{h}^{*}\mid j\right)\in\left(0,1\right)\). With this,
\[\frac{u_{h}^{*}\left(j\right)}{u_{h}^{*}\left(k\right)}=\frac{\dfrac{\rho\left(j\mid i_{h}^{*}\right)}{\rho\left(i_{h}^{*}\mid j\right)}}{\dfrac{\rho\left(k\mid i_{h}^{*}\right)}{\rho\left(i_{h}^{*}\mid k\right)}}=\frac{\rho\left(j\mid i_{h}^{*}\right)}{\rho\left(i_{h}^{*}\mid j\right)}\frac{\rho\left(i_{h}^{*}\mid k\right)}{\rho\left(k\mid i_{h}^{*}\right)}\qquad\forall j,k\in\left[i_{h}^{*}\right]\]
By transitivity, we have that
\[\rho\left(j\mid i\right)\rho\left(k\mid j\right)\rho\left(i\mid k\right)= \rho\left(k\mid i\right)\rho\left(j\mid k\right)\rho\left(i\mid j\right)\]
for all \(i,\)\(j,\)\(k\in A\),27 and
Footnote 27: As previously observed, transitivity implies the above “product rule” for all triplets of alternatives in \(A\) and not only triplets of distinct ones.
\[\frac{\rho\left(j\mid i_{h}^{*}\right)}{\rho\left(i_{h}^{*}\mid j\right)}\frac{\rho\left(i_{h}^{*}\mid k\right)}{\rho\left(k\mid i_{h}^{*}\right)}=\frac{\rho\left(j\mid k\right)}{\rho\left(k\mid j\right)}\]
for all \(j,k\in\left[i_{h}^{*}\right]\). Therefore,
\[\frac{u_{h}^{*}\left(j\right)}{u_{h}^{*}\left(k\right)}=\frac{\rho\left(j \mid i_{h}^{*}\right)}{\rho\left(i_{h}^{*}\mid j\right)}\frac{\rho\left(i_{h} ^{*}\mid k\right)}{\rho\left(k\mid i_{h}^{*}\right)}=\frac{\rho\left(j\mid k \right)}{\rho\left(k\mid j\right)}\qquad\forall j,k\in\left[i_{h}^{*}\right]\]
for all \(h=1,...,m\).
Set \(\sigma_{1}=1\) and for each \(h=2,...,m\), choose a strictly positive constant \(\sigma_{h}\) such that
\[\max_{j\in\left[i_{h-1}^{*}\right]}\sigma_{h-1}u_{h-1}^{*}\left(j\right)<\min _{j\in\left[i_{h}^{*}\right]}\sigma_{h}u_{h}^{*}\left(j\right)\]
that is,
\[\sigma_{h}>\sigma_{h-1}\frac{\max_{j\in\left[i_{h-1}^{*}\right]}u_{h-1}^{*} \left(j\right)}{\min_{j\in\left[i_{h}^{*}\right]}u_{h}^{*}\left(j\right)}\]
Define
\[\bar{u}\left(j\right)=\sigma_{h}u_{h}^{*}\left(j\right)\qquad\forall j\in \left[i_{h}^{*}\right],\forall h=1,...,m\]
Note that for all \(h=2,...,m\), all \(j_{h-1}\in\left[i_{h-1}^{*}\right]\) and all \(j_{h}\in\left[i_{h}^{*}\right]\)
\[\bar{u}\left(j_{h-1}\right)<\bar{u}\left(j_{h}\right)\]
that is,
\[\bar{u}\left(j_{1}\right)<\bar{u}\left(j_{2}\right)<\cdots<\bar{u}\left(j_{m}\right)\]
whenever \(j_{h}\in\left[i_{h}^{*}\right]\) for all \(h=1,2,...,m\). Then, if \(\bar{u}\left(k\right)\geq\bar{u}\left(j\right)\), with \(k\in\left[i_{h_{k}}^{*}\right]\) and \(j\in\left[i_{h_{j}}^{*}\right]\), it cannot be the case that
\[h_{j}>h_{k}\]
Thus, \(h_{k}\geq h_{j}\), \(w\left(i_{h_{k}}^{*}\right)\geq w\left(i_{h_{j}}^{*}\right)\), and \(w\left(k\right)\geq w\left(j\right)\). Therefore there exists \(f:\bar{u}\left(A\right)\rightarrow\left(0,\infty\right)\) increasing and such that
\[f\circ\bar{u}=w\]
Thus,
\[f\left(\bar{u}\left(i\right)\right)>f\left(\bar{u}\left(j\right) \right) \iff\rho\left(i\mid j\right)=1\quad;\quad f\left(\bar{u}\left(i \right)\right)=f\left(\bar{u}\left(j\right)\right)\iff\rho\left(i\mid j\right) \in\left(0,1\right)\] \[f\left(\bar{u}\left(i\right)\right)<f\left(\bar{u}\left(j\right) \right) \iff\rho\left(i\mid j\right)=0\]
For all \(j\neq k\) in \(A\) such that \(f\left(\bar{u}\left(j\right)\right)=f\left(\bar{u}\left(k\right)\right)\), we have \(\rho\left(j\mid k\right)\in\left(0,1\right)\), and so \(j\parallel^{*}k\); hence there exists \(h=1,...,m\) such that \(j,k\in[i_{h}^{*}]\), and
\[\frac{\bar{u}\left(j\right)}{\bar{u}\left(j\right)+\bar{u}\left(k\right)}=\frac{1}{1+\frac{\bar{u}\left(k\right)}{\bar{u}\left(j\right)}}=\frac{1}{1+\frac{\sigma_{h}u_{h}^{*}\left(k\right)}{\sigma_{h}u_{h}^{*}\left(j\right)}}=\frac{1}{1+\frac{\rho\left(k\mid j\right)}{\rho\left(j\mid k\right)}}=\frac{\rho\left(j\mid k\right)}{\rho\left(j\mid k\right)+\rho\left(k\mid j\right)}\]
and
\[\rho\left(j\mid k\right)=\underbrace{(\rho\left(j\mid k\right)+\rho\left(k \mid j\right))}_{=s\left(j,k\right)}\frac{\bar{u}\left(j\right)}{\bar{u}\left( j\right)+\bar{u}\left(k\right)}\]
By setting \(s\left(j,k\right)=1\) if \(f\left(\bar{u}\left(j\right)\right)\neq f\left(\bar{u}\left(k\right)\right)\) we conclude the argument.
Since (iii) trivially implies (ii), it remains to prove that (ii) implies (i). Let \(u\), \(w\) and \(s\) represent \(\rho\) as in (ii). We have already observed that \(w\) represents \(\succ^{*}\).
For any triplet \(i,j,k\) of distinct elements of \(A\), consider the two products
\[\rho\left(j\mid i\right)\rho\left(k\mid j\right)\rho\left(i\mid k\right) \qquad\text{and}\qquad\rho\left(k\mid i\right)\rho\left(j\mid k\right)\rho \left(i\mid j\right)\]
Suppose first that \(i,j,k\) do not belong to the same level set of \(w\). Without loss of generality, we can then set \(\rho\left(j\mid i\right)=0\). Hence, \(\rho\left(i\mid j\right)=1\) and so \(i\succ^{*}j\), that is, \(w\left(i\right)>w\left(j\right)\). There are two cases to consider.
1. If \(w\left(k\right)\geq w\left(i\right)\), then \(w\left(k\right)>w\left(j\right)\) and so \(\rho\left(j\mid k\right)=0\).
2. Else \(w\left(i\right)>w\left(k\right)\), then \(\rho\left(k\mid i\right)=0\).
In both cases, the two products are null, so equal. Next suppose that \(i,j,k\) belong to the same level set of \(w\). Then,
\[\rho\left(j\mid i\right)\rho\left(k\mid j\right)\rho\left(i\mid k\right) =s\left(j,i\right)\frac{u\left(j\right)}{u\left(j\right)+u\left( i\right)}s\left(k,j\right)\frac{u\left(k\right)}{u\left(k\right)+u\left(j \right)}s\left(i,k\right)\frac{u\left(i\right)}{u\left(i\right)+u\left(k \right)}\] \[=s\left(k,i\right)\frac{u\left(i\right)}{u\left(k\right)+u\left( i\right)}s\left(j,k\right)\frac{u\left(k\right)}{u\left(j\right)+u\left(k \right)}s\left(i,j\right)\frac{u\left(j\right)}{u\left(i\right)+u\left(j\right)}\] \[=s\left(k,i\right)\frac{u\left(k\right)}{u\left(k\right)+u\left( i\right)}s\left(j,k\right)\frac{u\left(j\right)}{u\left(j\right)+u\left(k \right)}s\left(i,j\right)\frac{u\left(i\right)}{u\left(i\right)+u\left(j \right)}\] \[=\rho\left(k\mid i\right)\rho\left(j\mid k\right)\rho\left(i\mid j\right)\]
We conclude that \(\rho\) is transitive.
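For concreteness, the following minimal Python sketch builds a binary choice probability from an illustrative binary value representation of the form (8) with \(s\equiv 1\) (the menu, levels \(w\), and weights \(u\) are made up for the example) and verifies the product rule numerically.

```python
import itertools
import numpy as np

# Illustrative menu with level function w and Luce weights u (assumption: s == 1 in (8)).
A = ["a", "b", "c", "d"]
u = {"a": 1.0, "b": 2.0, "c": 3.0, "d": 1.5}
w = {"a": 0, "b": 1, "c": 1, "d": 1}      # "b", "c", "d" share a level set of w

def rho(i, j):
    """Probability of choosing i over j under representation (8) with s == 1."""
    if w[i] > w[j]:
        return 1.0
    if w[i] < w[j]:
        return 0.0
    return u[i] / (u[i] + u[j])

# Product rule: rho(j|i) rho(k|j) rho(i|k) = rho(k|i) rho(j|k) rho(i|j) for all triples.
for i, j, k in itertools.product(A, repeat=3):
    assert np.isclose(rho(j, i) * rho(k, j) * rho(i, k),
                      rho(k, i) * rho(j, k) * rho(i, j))
print("product rule verified on all triples")
```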
#### b.1.5 Theorem 5
"Only if." If a tandem has a binary value representation \(\left(v,w,s,\varphi\right)\), then \(\rho\) is transitive (see Theorem 4). Consider the set
\[\mathbb{D} =\left\{\left(i,j\right):\tau\left(i\mid j\right)\neq 0\right\}= \left\{\left(i,j\right):\rho\left(i\mid j\right)\in\left(0,1\right)\right\}\] \[=\left\{\left(i,j\right):i\parallel^{*}j\right\}=\left\{\left(i,j \right):w\left(i\right)=w\left(j\right)\right\}\]
Note that for all \(\left(i,j\right)\in\mathbb{D}\),
\[\ell_{ij}=\ln\frac{\rho\left(i\mid j\right)}{\rho\left(j\mid i\right)}=v\left( i\right)-v\left(j\right)\]
thus (16) delivers
\[\ell_{ij}=\ell_{hk}\implies v\left(i\right)-v\left(j\right)=v\left(h\right)-v \left(k\right)\implies\tau\left(i\mid j\right)=\tau\left(h\mid k\right)\]
for all \(\left(i,j\right),\left(h,k\right)\in\mathbb{D}\). Thus (17) is satisfied.
Since \(\varphi\) is strictly quasiconcave and unimodal, there exists a unique \(l\in\mathbb{R}\) such that \(\varphi\) is strictly increasing on \(\left(-\infty,l\right]\) and strictly decreasing on \(\left[l,\infty\right)\); then
\[l\leq\ell_{ij}<\ell_{hk}\implies l\leq v\left(i\right)-v\left(j \right)<v\left(h\right)-v\left(k\right)\] \[\implies\varphi\left(v\left(i\right)-v\left(j\right)\right)> \varphi\left(v\left(h\right)-v\left(k\right)\right)\implies\tau\left(i\mid j \right)>\tau\left(h\mid k\right)\]
and
\[\ell_{ij}<\ell_{hk}\leq l\implies v\left(i\right)-v\left(j\right) <v\left(h\right)-v\left(k\right)\leq l\] \[\implies\varphi\left(v\left(i\right)-v\left(j\right)\right)< \varphi\left(v\left(h\right)-v\left(k\right)\right)\implies\tau\left(i\mid j \right)<\tau\left(h\mid k\right)\]
for all \(\left(i,j\right),\left(h,k\right)\in\mathbb{D}\). Thus (18) and (19) are satisfied, as desired.
"If." If a tandem is chronometric then \(\rho\) is transitive and there exist \(v,w:A\rightarrow\mathbb{R}\) and a symmetric \(s:A^{2}\rightarrow\left(0,\infty\right)\) such that (8) holds. Consider the set
\[\mathbb{D} =\left\{\left(i,j\right):\tau\left(i\mid j\right)\neq 0\right\}= \left\{\left(i,j\right):\rho\left(i\mid j\right)\in\left(0,1\right)\right\}\] \[=\left\{\left(i,j\right):i\parallel^{*}j\right\}=\left\{\left(i,j \right):w\left(i\right)=w\left(j\right)\right\}\]
Note that for all \(\left(i,j\right)\in\mathbb{D}\),
\[\ell_{ij}=\ln\frac{\rho\left(i\mid j\right)}{\rho\left(j\mid i\right)}=v \left(i\right)-v\left(j\right)\]
and set \(L=\left\{\ell_{ij}:\left(i,j\right)\in\mathbb{D}\right\}=\left\{v\left(i \right)-v\left(j\right):\left(i,j\right)\in\mathbb{D}\right\}\). With this, (17) implies that there exists \(\psi:L\rightarrow\left(0,\infty\right)\) such that
\[\tau\left(i\mid j\right)=\psi\left(\ell_{ij}\right)=\psi\left(v\left(i\right) -v\left(j\right)\right)\]
for all \(\left(i,j\right)\in\mathbb{D}\). Moreover, by (18), if \(x,y\in L\) are such that \(l\leq x<y\), taking \(\left(i,j\right)\) and \(\left(h,k\right)\) in \(\mathbb{D}\) such that \(x=\ell_{ij}\) and \(y=\ell_{hk}\), it follows that
\[l\leq\ell_{ij}<\ell_{hk}\implies\tau\left(i\mid j\right)>\tau\left(h\mid k \right)\implies\psi\left(\ell_{ij}\right)>\psi\left(\ell_{hk}\right)\implies \psi\left(x\right)>\psi\left(y\right)\]
Analogously, by (19), if \(x,y\in L\) are such that \(x<y\leq l\), taking \((i,j)\) and \((h,k)\) in \(\mathbb{D}\) such that \(x=\ell_{ij}\) and \(y=\ell_{hk}\), it follows that
\[\ell_{ij}<\ell_{hk}\leq l\implies\tau\left(i\mid j\right)<\tau\left(h\mid k \right)\implies\psi\left(\ell_{ij}\right)<\psi\left(\ell_{hk}\right)\implies \psi\left(x\right)<\psi\left(y\right)\]
Summing up, \(L\) is a finite subset of \(\mathbb{R}\) and \(\psi:L\rightarrow(0,\infty)\) is such that there exists \(l\in\mathbb{R}\) for which
\[l\leq x<y\implies\psi\left(x\right)>\psi\left(y\right)\] \[x<y\leq l\implies\psi\left(x\right)<\psi\left(y\right)\]
for all \(x,y\in L\).
This allows us to extend \(\psi\) to a function \(\varphi:\mathbb{R}\rightarrow(0,\infty)\) such that \(\varphi\) is strictly increasing on \((-\infty,l]\) and strictly decreasing on \([l,\infty)\). Thus there exists a strictly quasiconcave and unimodal \(\varphi:\mathbb{R}\rightarrow(0,\infty)\) such that
\[\tau\left(i\mid j\right)=\varphi\left(v\left(i\right)-v\left(j\right)\right)\]
if \(w\left(i\right)=w\left(j\right)\).
Finally, if \(w\left(i\right)\neq w\left(j\right)\), by (8), \(\rho\left(i\mid j\right)\in\{0,1\}\) and by definition of tandem \(\tau\left(i\mid j\right)=0\).\(\blacksquare\)
#### b.1.6 Theorem 6
Note that if either \((\rho,\tau)\) has a binary value representation or it is psychometric, then \(\rho\) is transitive and there exist \(v,w:A\rightarrow\mathbb{R}\) and a symmetric \(s:A^{2}\rightarrow(0,\infty)\) such that (8) holds. Therefore the set of pairs of alternatives with nonzero response time is
\[\mathbb{D} =\left\{\left(i,j\right):\tau\left(i\mid j\right)\neq 0\right\}=\left\{\left(i,j\right):\rho\left(i\mid j\right)\in(0,1)\right\}=\left\{\left(i,j\right):i\parallel^{*}j\right\}=\left\{\left(i,j\right):w\left(i\right)=w\left(j\right)\right\}\]
Arbitrarily choose \(\left(i,j\right)\in\mathbb{D}\). Since \(i\parallel^{*}j\), by Lemma 1 we can assume, without loss of generality, that \(i\succsim^{\circ}j\), thus
\[\rho\left(i\mid j\right)\geq\rho\left(j\mid i\right)\quad\text{and}\quad 1- \rho\left(j\mid i\right)\geq 1-\rho\left(i\mid j\right)\]
A second-type error is the probability of accepting an inferior proposal, that is,
\[\text{ER}^{\text{II}}_{i,j}=\rho\left(j\mid i\right)=\min\left\{\rho\left(i \mid j\right),\rho\left(j\mid i\right)\right\}\]
A first-type error is the probability of rejecting a superior proposal, that is,
\[\text{ER}^{\text{I}}_{i,j}=1-\rho\left(i\mid j\right)=\min\left\{1-\rho\left( i\mid j\right),1-\rho\left(j\mid i\right)\right\}\]
Since (8) holds, \(i\succsim^{\circ}j\) if and only if \(v\left(i\right)\geq v\left(j\right)\). Therefore:
\[\text{ER}^{\text{II}}_{i,j} =\rho\left(j\mid i\right)=s\left(j,i\right)\frac{1}{1+e^{-\left(v \left(j\right)-v\left(i\right)\right)}}=s\left(j,i\right)\frac{1}{1+e^{\left|v \left(j\right)-v\left(i\right)\right|}}\] \[=s\left(i,j\right)\frac{1}{1+e^{\left|v\left(i\right)-v\left(j \right)\right|}}\] \[\text{ER}^{\text{I}}_{i,j} =1-\rho\left(i\mid j\right)=1-s\left(i,j\right)\frac{1}{1+e^{- \left(v\left(i\right)-v\left(j\right)\right)}}=1-s\left(i,j\right)\frac{1}{1+e ^{-\left|v\left(i\right)-v\left(j\right)\right|}}\] \[=1-\text{ER}^{\text{II}}_{i,j}\frac{1+e^{\left|v\left(i\right)-v \left(j\right)\right|}}{1+e^{-\left|v\left(i\right)-v\left(j\right)\right|}}\]
Summing up, for each \(\left(i,j\right)\in\mathbb{D}\),
\[\mathrm{ER}_{i,j}^{\mathrm{II}} =s\left(i,j\right)\frac{1}{1+e^{\left|v\left(i\right)-v\left(j \right)\right|}}\] \[\mathrm{ER}_{i,j}^{\mathrm{I}} =1-\mathrm{ER}_{i,j}^{\mathrm{II}}\frac{1+e^{\left|v\left(i\right) -v\left(j\right)\right|}}{1+e^{-\left|v\left(i\right)-v\left(j\right)\right|}}\]
But then, for all \(\left(i,j\right)\) and \(\left(h,k\right)\) in \(\mathbb{D}\) such that \(\mathrm{ER}_{i,j}^{\mathrm{I}}<\mathrm{ER}_{h,k}^{\mathrm{I}}\) and \(\mathrm{ER}_{i,j}^{\mathrm{II}}<\mathrm{ER}_{h,k}^{\mathrm{II}}\), it follows that
\[1-\mathrm{ER}_{i,j}^{\mathrm{II}}\frac{1+e^{\left|v\left(i\right)-v\left(j\right)\right|}}{1+e^{-\left|v\left(i\right)-v\left(j\right)\right|}}<1-\mathrm{ER}_{h,k}^{\mathrm{II}}\frac{1+e^{\left|v\left(h\right)-v\left(k\right)\right|}}{1+e^{-\left|v\left(h\right)-v\left(k\right)\right|}}\]
\[\mathrm{ER}_{h,k}^{\mathrm{II}}\frac{1+e^{\left|v\left(h\right)-v\left(k\right)\right|}}{1+e^{-\left|v\left(h\right)-v\left(k\right)\right|}}<\mathrm{ER}_{i,j}^{\mathrm{II}}\frac{1+e^{\left|v\left(i\right)-v\left(j\right)\right|}}{1+e^{-\left|v\left(i\right)-v\left(j\right)\right|}}\]
\[\frac{\frac{1+e^{\left|v\left(h\right)-v\left(k\right)\right|}}{1+e^{-\left|v\left(h\right)-v\left(k\right)\right|}}}{\frac{1+e^{\left|v\left(i\right)-v\left(j\right)\right|}}{1+e^{-\left|v\left(i\right)-v\left(j\right)\right|}}}<\frac{\mathrm{ER}_{i,j}^{\mathrm{II}}}{\mathrm{ER}_{h,k}^{\mathrm{II}}}<1\]
\[\frac{1+e^{\left|v\left(h\right)-v\left(k\right)\right|}}{1+e^{-\left|v\left(h\right)-v\left(k\right)\right|}}<\frac{1+e^{\left|v\left(i\right)-v\left(j\right)\right|}}{1+e^{-\left|v\left(i\right)-v\left(j\right)\right|}}\]
\[\left|v\left(h\right)-v\left(k\right)\right|<\left|v\left(i\right)-v\left(j\right)\right|\]
in other words
\[\mathrm{ER}_{i,j}^{\mathrm{I}}<\mathrm{ER}_{h,k}^{\mathrm{I}}\quad\text{and} \quad\mathrm{ER}_{i,j}^{\mathrm{II}}<\mathrm{ER}_{h,k}^{\mathrm{II}}\Longrightarrow \left|v\left(i\right)-v\left(j\right)\right|>\left|v\left(h\right)-v\left(k \right)\right| \tag{44}\]
A similar argument shows that
\[\mathrm{ER}_{i,j}^{\mathrm{I}}\leq\mathrm{ER}_{h,k}^{\mathrm{I}}\quad\text{and }\quad\mathrm{ER}_{i,j}^{\mathrm{II}}\leq\mathrm{ER}_{h,k}^{\mathrm{II}} \Longrightarrow\left|v\left(i\right)-v\left(j\right)\right|\geq\left|v\left(h \right)-v\left(k\right)\right| \tag{45}\]
"If." If \(\left(\rho,\tau\right)\) is psychometric, then (44) and (45) imply
\[\tau\left(i\mid j\right)<\tau\left(h\mid k\right)\Longrightarrow\left|v\left(i \right)-v\left(j\right)\right|>\left|v\left(h\right)-v\left(k\right)\right| \tag{46}\]
and
\[\tau\left(i\mid j\right)\leq\tau\left(h\mid k\right)\Longrightarrow\left|v \left(i\right)-v\left(j\right)\right|\geq\left|v\left(h\right)-v\left(k\right)\right| \tag{47}\]
for all \(\left(i,j\right)\) and \(\left(h,k\right)\) in \(\mathbb{D}\). As a result
\[\left|v\left(i\right)-v\left(j\right)\right|\geq\left|v\left(h\right)-v\left( k\right)\right|\Longleftrightarrow\tau\left(i\mid j\right)\leq\tau\left(h\mid k\right)\]
Therefore, setting \(M=\left\{\left|v\left(i\right)-v\left(j\right)\right|:\left(i,j\right)\in \mathbb{D}\right\}\), there is a strictly decreasing function \(\psi:M\rightarrow\left(0,\infty\right)\) such that \(\tau\left(i\mid j\right)=\psi\left(\left|v\left(i\right)-v\left(j\right) \right|\right)\). We can first extend \(\psi\) from \(M\) to \(\left[0,\infty\right)\) as a strictly decreasing function and then set \(\varphi\left(x\right)=\psi\left(\left|x\right|\right)\) for all \(x\in\mathbb{R}\). With this, there exists a strictly quasiconcave, unimodal, and even \(\varphi:\mathbb{R}\rightarrow\left(0,\infty\right)\) such that
\[\tau\left(i\mid j\right)=\varphi\left(v\left(i\right)-v\left(j\right)\right)\]
if \(w\left(i\right)=w\left(j\right)\).
Finally, if \(w\left(i\right)\neq w\left(j\right)\), by (8), \(\rho\left(i\mid j\right)\in\left\{0,1\right\}\) and by definition of tandem \(\tau\left(i\mid j\right)=0\).
"Only if." If a tandem has a binary value representation \(\left(v,w,s,\varphi\right)\), then \(\rho\) is transitive (see Theorem 4). Now \(\varphi\) is strictly quasiconcave, unimodal, and even \(\varphi:\mathbb{R}\rightarrow\left(0,\infty\right)\), with strong maximum at \(0\) and strictly decreasing on \(\left[0,\infty\right)\). In particular,
\[\tau\left(i\mid j\right)=\tau\left(j\mid i\right)\]
for all alternatives \(i\) and \(j\). But then \(\rho\) is unbiased, and so \(s\left(i,j\right)=1\) for all \(\left(i,j\right)\in\mathbb{D}\), and so
\[\mathrm{ER}_{i,j}^{\mathrm{I}}=\mathrm{ER}_{i,j}^{\mathrm{II}}=\frac{1}{1+e^{ \left|v\left(i\right)-v\left(j\right)\right|}}\]
Moreover, for all \(\left(i,j\right)\) and \(\left(h,k\right)\) in \(\mathbb{D}\),
\[\tau\left(i\mid j\right)<\tau\left(h\mid k\right) \iff\varphi\left(\left|v\left(i\right)-v\left(j\right)\right| \right)<\varphi\left(\left|v\left(h\right)-v\left(k\right)\right|\right)\] \[\iff\left|v\left(i\right)-v\left(j\right)\right|>\left|v\left(h \right)-v\left(k\right)\right|\] \[\iff\mathrm{ER}_{i,j}<\mathrm{ER}_{h,k}\]
This proves that \(\left(\rho,\tau\right)\) is psychometric. \(\blacksquare\)
### Section 4
#### b.2.1 Subsection 4.1
To ease notation, we set \(\Lambda_{ij}=\nu\left(i\right)-\nu\left(j\right)\) as well as
\[\rho_{\mathrm{C}}\left(i\mid j\right)=\rho_{ij}\quad;\quad\rho_{\mathrm{C}} \left(j\mid i\right)=\rho_{ji}\quad;\quad\tau_{\mathrm{RT}}\left(i\mid j\right) =\tau_{ij}\quad;\quad\tau_{\mathrm{RT}}\left(j\mid i\right)=\tau_{ji}\quad; \quad s_{\mathrm{C}}\left(i,j\right)=s_{ij}\]
By Theorems 8.1 and 8.2 of Pinsky and Karlin (2011),
\[\rho_{ij}=\frac{1-e^{\beta\Lambda_{ij}}}{e^{-\lambda\Lambda_{ij}}-e^{\beta \Lambda_{ij}}}\quad\text{and}\quad\tau_{ij}=\frac{1}{\Lambda_{ij}}\left[\rho_ {ij}\left(\lambda+\beta\right)-\beta\right] \tag{48}\]
also note that
\[\rho_{ij}=\frac{1-e^{-\beta\Lambda_{ij}}}{1-e^{-\left(\lambda+\beta\right) \Lambda_{ij}}}\]
We begin with a few preliminary lemmas.
**Lemma 18**: _For each \(\left(i,j\right)\in A^{2}\), it holds_
\[\lambda\Lambda_{ij}=\ln\frac{\rho_{ij}}{\rho_{ji}}\quad\text{;}\quad\beta \Lambda_{ij}=\ln\frac{1-\rho_{ji}}{1-\rho_{ij}} \tag{49}\]
**Proof** Let \(\left(i,j\right)\in A^{2}\). We have
\[\frac{\rho_{ij}}{\rho_{ji}}=\frac{\frac{1-e^{\beta\Lambda_{ij}}}{e^{-\lambda \Lambda_{ij}}-e^{\beta\Lambda_{ij}}}}{\frac{1-e^{-\beta\Lambda_{ij}}}{e^{ \lambda\Lambda_{ij}}-e^{-\beta\Lambda_{ij}}}}=e^{\lambda\Lambda_{ij}}\]
and so \(\lambda\Lambda_{ij}=\ln\rho_{ij}/\rho_{ji}\). We also have
\[\frac{1-\rho_{ji}}{1-\rho_{ij}}=\frac{1-\frac{1-e^{-\beta\Lambda_{ij}}}{e^{\lambda\Lambda_{ij}}-e^{-\beta\Lambda_{ij}}}}{1-\frac{1-e^{\beta\Lambda_{ij}}}{e^{-\lambda\Lambda_{ij}}-e^{\beta\Lambda_{ij}}}}=\frac{\frac{e^{\lambda\Lambda_{ij}}-1}{e^{\lambda\Lambda_{ij}}-e^{-\beta\Lambda_{ij}}}}{\frac{e^{-\lambda\Lambda_{ij}}-1}{e^{-\lambda\Lambda_{ij}}-e^{\beta\Lambda_{ij}}}}=e^{\beta\Lambda_{ij}}\]
and so \(\beta\Lambda_{ij}=\ln\left(1-\rho_{ji}\right)/\left(1-\rho_{ij}\right)\). \(\blacksquare\)
**Lemma 19**: _Let \((i,j)\in A^{2}\) with \(\Lambda_{ij}\neq 0\). It holds_
\[\tau_{ij}=\frac{\lambda^{2}}{\ln\rho_{ij}-\ln\rho_{ji}}\left[\rho_{ij}+\frac{\ln \left(1-\rho_{ji}\right)-\ln\left(1-\rho_{ij}\right)}{\ln\rho_{ij}-\ln\rho_{ji}} \left(\rho_{ij}-1\right)\right] \tag{50}\]
**Proof** By (49),
\[\frac{\beta}{\lambda}=\frac{\beta\Lambda_{ij}}{\lambda\Lambda_{ij}}=\frac{\ln \frac{1-\rho_{ji}}{1-\rho_{ij}}}{\ln\frac{\rho_{ij}}{\rho_{ji}}}=\frac{\ln \left(1-\rho_{ji}\right)-\ln\left(1-\rho_{ij}\right)}{\ln\rho_{ij}-\ln\rho_{ji}} \tag{51}\]
Hence,
\[\tau_{ij} =\frac{1}{\Lambda_{ij}}\left[\rho_{ij}\left(\lambda+\beta\right) -\beta\right]=\frac{\lambda^{2}}{\lambda\Lambda_{ij}}\left[\rho_{ij}\left(1+ \frac{\beta}{\lambda}\right)-\frac{\beta}{\lambda}\right]=\frac{\lambda^{2}}{ \lambda\Lambda_{ij}}\left[\rho_{ij}+\frac{\beta}{\lambda}\left(\rho_{ij}-1 \right)\right]\] \[=\frac{\lambda^{2}}{\ln\rho_{ij}-\ln\rho_{ji}}\left[\rho_{ij}+ \frac{\ln\left(1-\rho_{ji}\right)-\ln\left(1-\rho_{ij}\right)}{\ln\rho_{ij}- \ln\rho_{ji}}\left(\rho_{ij}-1\right)\right]\]
as desired.
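The identities (49) and (50) are easy to check numerically from the closed forms (48). The sketch below does so in Python for illustrative parameter values (\(\lambda\), \(\beta\), and \(\Lambda_{ij}\) are arbitrary choices, not taken from the paper).

```python
import numpy as np

# Illustrative parameters: drift rates and a nonzero value gap Lambda_ij.
lam, beta, Lam = 1.3, 0.7, 0.45

# Closed forms (48)
rho_ij = (1 - np.exp(beta * Lam)) / (np.exp(-lam * Lam) - np.exp(beta * Lam))
rho_ji = (1 - np.exp(-beta * Lam)) / (np.exp(lam * Lam) - np.exp(-beta * Lam))
tau_ij = (rho_ij * (lam + beta) - beta) / Lam

# Lemma 18, equation (49): the log-odds recover lambda*Lambda_ij and beta*Lambda_ij.
assert np.isclose(np.log(rho_ij / rho_ji), lam * Lam)
assert np.isclose(np.log((1 - rho_ji) / (1 - rho_ij)), beta * Lam)

# Lemma 19, equation (50): tau_ij written in terms of choice probabilities alone.
ell = np.log(rho_ij) - np.log(rho_ji)
ell_bar = np.log(1 - rho_ji) - np.log(1 - rho_ij)
assert np.isclose(tau_ij, (lam**2 / ell) * (rho_ij + (ell_bar / ell) * (rho_ij - 1)))
```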
**Lemma 20**: _Let \((i,j)\in A^{2}\) with \(\Lambda_{ij}\neq 0\). If \(\tau_{ij}=\tau_{ji}\), then \(\beta=\lambda\)._
**Proof** To further ease notation, set \(x=\rho_{ij}\) and \(y=\rho_{ji}\). Since \(\Lambda_{ij}\neq 0\), by (49) we have \(x\neq y\). By (50), we have
\[\tau_{ij} =\tau_{ji}\] \[\iff\frac{\lambda^{2}}{\ln\frac{x}{y}}\left[x+\frac{\ln\frac{1-y} {1-x}}{\ln\frac{x}{y}}\left(x-1\right)\right]=\frac{\lambda^{2}}{\ln\frac{y}{ x}}\left[y+\frac{\ln\frac{1-x}{1-y}}{\ln\frac{y}{x}}\left(y-1\right)\right]\] \[\iff\frac{\ln\frac{y}{x}}{\ln\frac{x}{y}}\left[x+\frac{\ln\frac{1 -y}{1-x}}{\ln\frac{x}{y}}\left(x-1\right)\right]=y+\frac{\ln\frac{1-x}{1-y}}{ \ln\frac{y}{x}}\left(y-1\right)\] \[\iff-x-\frac{\ln\frac{1-y}{1-x}}{\ln\frac{x}{y}}\left(x-1\right)= y+\frac{\ln\frac{1-x}{1-y}}{\ln\frac{y}{x}}\left(y-1\right)\] \[\iff\frac{\ln\frac{1-y}{1-x}}{\ln\frac{y}{x}}\left(y-1\right)+ \frac{\ln\frac{1-y}{1-x}}{\ln\frac{y}{x}}\left(x-1\right)=y+x\Longleftrightarrow \frac{2}{x+y}+\frac{\ln\frac{y}{x}}{\ln\frac{1-y}{1-x}}=1\]
The locus of pairs \((x,y)\in(0,1)\times(0,1)\), with \(x\neq y\), that solve this equation is
\[\left\{\left(x,y\right)\in\left(0,1\right)_{\neq}^{2}:x=1-y\right\}\]
Thus, \(\rho_{ij}=1-\rho_{ji}\). By (49),
\[\frac{\beta}{\lambda}=\frac{\beta\Lambda_{ij}}{\lambda\Lambda_{ij}}=\frac{\ln\frac{1-\rho_{ji}}{1-\rho_{ij}}}{\ln\frac{\rho_{ij}}{\rho_{ji}}}=\frac{\ln\frac{\rho_{ij}}{\rho_{ji}}}{\ln\frac{\rho_{ij}}{\rho_{ji}}}=1\]
We conclude that \(\beta=\lambda\).
**Proof of Proposition 7** Positivity of \(\rho=\rho_{\rm C}\) follows immediately from (48), and hence \(\rho\) is a binary choice probability. So, to establish whether \((\rho,\tau)\) is a tandem we only need to check condition (15). When \(\beta=\lambda\), this condition trivially holds because \(\rho\) is unbiased and \(\tau\) is symmetric. When \(\beta\neq\lambda\), since \(\nu\) is injective we have \(\Lambda_{ij}\neq 0\) for all distinct \(i\) and \(j\) in \(A\); by Lemma 20, it must then be the case that \(\tau_{ij}\neq\tau_{ji}\), so condition (15) vacuously holds. We conclude that \((\rho,\tau)\) is a tandem. Finally, the transitivity of \(\rho\) follows by Theorem 15 because Baldassi et al. (2020) show that, given any nice exploration matrix \(Q\), the transition matrix \(M\) is reversible.28
Footnote 28: Of course, it can also be verified by brute force from (48).
We now turn to the binary value representation. By positivity, \(w_{\rm C}\) is constant. For all \(i\) and \(j\) in \(A\),
\[s_{ij}=\rho_{ij}+\rho_{ji}=\frac{1-e^{-\beta\Lambda_{ij}}}{1-e^{-(\lambda+ \beta)\Lambda_{ij}}}+\frac{1-e^{\beta\Lambda_{ij}}}{1-e^{(\lambda+\beta)\Lambda _{ij}}}=1+\frac{e^{\lambda\Lambda_{ij}}-e^{\beta\Lambda_{ij}}}{1-e^{(\lambda+ \beta)\Lambda_{ij}}}\]
by symmetry
\[s_{ij}=s_{ji}=1+\frac{e^{\lambda(-\Lambda_{ij})}-e^{\beta(-\Lambda_{ij})}}{1-e ^{(\lambda+\beta)(-\Lambda_{ij})}}\]
and so
\[s_{\rm C}\left(i,j\right)=s_{ij}=1+\frac{e^{\lambda\Lambda_{ij}}-e^{\beta \Lambda_{ij}}}{1-e^{(\lambda+\beta)\Lambda_{ij}}}=1+\frac{e^{\lambda|\Lambda_{ ij}|}-e^{\beta|\Lambda_{ij}|}}{1-e^{(\lambda+\beta)|\Lambda_{ij}|}}\]
Moreover,
\[s_{ij}\frac{1}{1+e^{-\lambda\Lambda_{ij}}}=\left(1+\frac{e^{\lambda\Lambda_{ ij}}-e^{\beta\Lambda_{ij}}}{1-e^{(\lambda+\beta)\Lambda_{ij}}}\right)\frac{1}{1+e^ {-\lambda\Lambda_{ij}}}=\frac{1-e^{-\beta\Lambda_{ij}}}{1-e^{-(\lambda+\beta) \Lambda_{ij}}}=\rho_{ij}\]
This proves that \(v_{\rm C}=\lambda\nu\), thus completing the proof of (22).
As to (23),
\[\tau_{ij} =\frac{\lambda^{2}}{\lambda\Lambda_{ij}}\left[\rho_{ij}\left(1+\frac{\beta}{\lambda}\right)-\frac{\beta}{\lambda}\right]=\frac{\lambda^{2}}{\lambda\Lambda_{ij}}\left[\frac{1-e^{\beta\Lambda_{ij}}}{e^{-\lambda\Lambda_{ij}}-e^{\beta\Lambda_{ij}}}\left(1+\frac{\beta}{\lambda}\right)-\frac{\beta}{\lambda}\right]\] \[=\frac{\lambda^{2}}{\lambda\Lambda_{ij}}\left[\frac{1-e^{\frac{\beta}{\lambda}\lambda\Lambda_{ij}}}{e^{-\lambda\Lambda_{ij}}-e^{\frac{\beta}{\lambda}\lambda\Lambda_{ij}}}\left(1+\frac{\beta}{\lambda}\right)-\frac{\beta}{\lambda}\right]\]
but \(\lambda\Lambda_{ij}=v_{\rm C}\left(i\right)-v_{\rm C}\left(j\right)\). We can then define \(\varphi_{\rm RT}:\mathbb{R}\rightarrow\mathbb{R}\) by
\[\varphi_{\rm RT}\left(x\right)=\frac{\lambda^{2}}{x}\left[\frac{1-e^{\frac{ \beta}{\lambda}x}}{e^{-x}-e^{\frac{\beta}{\lambda}x}}\left(1+\frac{\beta}{ \lambda}\right)-\frac{\beta}{\lambda}\right]\]
and obtain \(\tau_{ij}=\varphi_{\rm RT}\left(v_{\rm C}\left(i\right)-v_{\rm C}\left(j\right)\right)\).
**Proof of Proposition 8** By (50),
\[\tau_{ij}=\frac{\lambda^{2}}{\ell_{ij}}\left[\rho_{ij}+\frac{\bar{\ell}_{ij}} {\ell_{ij}}\left(\rho_{ij}-1\right)\right]=\frac{\lambda^{2}}{\ell_{ij}^{2}} \left[\ell_{ij}\rho_{ij}+\bar{\ell}_{ij}\left(\rho_{ij}-1\right)\right]\]
thus
\[\lambda=\left|\ell_{ij}\right|\sqrt{\frac{\tau_{ij}}{\ell_{ij}\rho_{ij}+\bar{ \ell}_{ij}\left(\rho_{ij}-1\right)}}\]
as desired. By (51), \(\beta=\lambda\bar{\ell}_{ij}/\ell_{ij}\). Finally, set \(\nu\left(j^{*}\right)=0\) for some alternative \(j^{*}\). By (49), for each \(i\) we have
\[\nu\left(i\right)=\nu\left(i\right)-\nu\left(j^{*}\right)=\Lambda_{ij^{*}}= \frac{1}{\lambda}\ell_{ij^{*}}\]
concluding the proof. \(\blacksquare\)
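The identification argument can be mimicked numerically: starting from illustrative "true" parameters, generate the observables \(\rho_{ij}\), \(\rho_{ji}\), \(\tau_{ij}\) via (48), and recover \(\lambda\), \(\beta\), and \(\nu\) through the formulas above. A minimal Python sketch (all numerical values are assumptions made for the example):

```python
import numpy as np

# Illustrative "true" parameters, with nu(j*) normalized to 0.
lam_true, beta_true, nu_i = 1.3, 0.7, 0.45
Lam = nu_i - 0.0

# Observables implied by (48)
rho_ij = (1 - np.exp(-beta_true * Lam)) / (1 - np.exp(-(lam_true + beta_true) * Lam))
rho_ji = (1 - np.exp(beta_true * Lam)) / (1 - np.exp((lam_true + beta_true) * Lam))
tau_ij = (rho_ij * (lam_true + beta_true) - beta_true) / Lam

# Log-odds used in Proposition 8
ell = np.log(rho_ij / rho_ji)                  # = lambda * Lambda_ij
ell_bar = np.log((1 - rho_ji) / (1 - rho_ij))  # = beta * Lambda_ij

# Recovered parameters
lam_hat = abs(ell) * np.sqrt(tau_ij / (ell * rho_ij + ell_bar * (rho_ij - 1)))
beta_hat = lam_hat * ell_bar / ell
nu_i_hat = ell / lam_hat

assert np.isclose(lam_hat, lam_true)
assert np.isclose(beta_hat, beta_true)
assert np.isclose(nu_i_hat, nu_i)
```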
**Proof of Proposition 9** By Proposition 7, the tandem \(\left(\rho_{\mathrm{C}},\tau_{\mathrm{RT}}\right)\) is chronometric.
(i) implies (iii). If \(\varphi_{\mathrm{RT}}\) is even, then
\[\tau_{\mathrm{RT}}\left(i\mid j\right)=\varphi_{\mathrm{RT}}\left(v_{\mathrm{C }}\left(i\right)-v_{\mathrm{C}}\left(j\right)\right)=\varphi_{\mathrm{RT}} \left(v_{\mathrm{C}}\left(j\right)-v_{\mathrm{C}}\left(i\right)\right)=\tau_{ \mathrm{RT}}\left(j\mid i\right)\]
for all \(i\neq j\).
(iii) implies (ii). If \(\tau_{\mathrm{RT}}\left(i\mid j\right)=\tau_{\mathrm{RT}}\left(j\mid i\right)\) for some \(i\neq j\), since \(\nu\) is injective, then \(\Lambda_{ij}\neq 0\) and so, by Lemma 20, \(\beta=\lambda\).
(ii) implies (iv). Indeed if \(\beta=\lambda\), then
\[s_{\mathrm{C}}\left(i,j\right)=1+\frac{e^{\lambda\left|\nu\left(i\right)-\nu \left(j\right)\right|}-e^{\beta\left|\nu\left(i\right)-\nu\left(j\right)\right| }}{1-e^{\left(\lambda+\beta\right)\left|\nu\left(i\right)-\nu\left(j\right) \right|}}=1\]
for all \(i\) and \(j\), and so \(\rho_{\mathrm{C}}\) is unbiased.
(iv) implies (i). If \(\rho_{\mathrm{C}}\left(i\mid j\right)=1-\rho_{\mathrm{C}}\left(j\mid i\right)\) for some \(i\neq j\), then \(\rho_{\mathrm{C}}\left(j\mid i\right)=1-\rho_{\mathrm{C}}\left(i\mid j\right)\), and, by Lemma 18,
\[\lambda\Lambda_{ij}=\beta\Lambda_{ij}\]
Since \(\nu\) is injective, then \(\Lambda_{ij}\neq 0\) and \(\beta=\lambda\). In particular,
\[\varphi_{\mathrm{RT}}\left(x\right)=\frac{\lambda^{2}}{x}\left(2\frac{1-e^{x} }{e^{-x}-e^{x}}-1\right)=\frac{\lambda^{2}}{x}\tanh\left(\frac{x}{2}\right)\]
is even. \(\blacksquare\)
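A quick numerical check of the algebraic identity in the last display (the common factor \(\lambda^{2}/x\) is omitted), evaluated on an illustrative grid of nonzero points:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 11)
x = x[x != 0]   # the identity concerns x != 0
assert np.allclose(2 * (1 - np.exp(x)) / (np.exp(-x) - np.exp(x)) - 1, np.tanh(x / 2))
```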
#### b.2.2 Stochastic matrices
A sequence \(a=\left\{a_{n}\right\}\) of non-negative scalars is _summable_ if \(\underset{n=0}{\overset{\infty}{\sum}}a_{n}<\infty\). Its _generating function_ given by
\[f_{a}\left(z\right)=\underset{n=0}{\overset{\infty}{\sum}}a_{n}z^{n} \tag{52}\]
is defined where the power series on the right hand side converges. Summability of \(\left\{a_{n}\right\}\) guarantees that the radius of convergence \(R\) satisfies \(R\geq 1\) and that \(f_{a}\left(z\right)\) is defined and continuous on the unit disk \(\left\{z\in\mathbb{C}:\left|z\right|\leq 1\right\}\).
**Lemma 21**: _If \(a=\left\{a_{n}\right\}\) is a non-negative and summable sequence, then the matrix power series \(\underset{n=0}{\overset{\infty}{\sum}}a_{n}B^{n}\) converges (entry by entry) for all stochastic matrices \(B\)._
**Proof** The \((i,j)\)-th entry \(b_{ij}^{(n)}\) of the matrix \(B^{n}\) belongs to \([0,1]\) because \(B^{n}\) is a stochastic matrix too. Then \(\sum_{n=0}^{\infty}a_{n}b_{ij}^{(n)}\) is a non-negative series such that \(0\leq a_{n}b_{ij}^{(n)}\leq a_{n}\) and it converges because \(\sum_{n=0}^{\infty}a_{n}\) does. \(\blacksquare\)
As a consequence the function
\[f_{a}\left(B\right)=\underset{n=0}{\overset{\infty}{\sum}}a_{n}B^{n}\]
is well defined in the strong sense of Weyr (see e.g. Rinehart, 1955), for all stochastic matrices \(B\).
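As an illustration, the sketch below evaluates \(f_{a}\left(B\right)\) for geometric weights \(a_{n}=\left(1-\zeta\right)\zeta^{n}\) by truncating the series and compares the result with the closed form \(\left(1-\zeta\right)\left(I-\zeta B\right)^{-1}\); the matrix \(B\) and the value of \(\zeta\) are arbitrary choices made for the example.

```python
import numpy as np

B = np.array([[0.6, 0.3],
              [0.4, 0.7]])            # columns sum to 1, matching the M(i|j) convention
zeta = 0.8
a = lambda n: (1 - zeta) * zeta**n    # nonnegative and summable weights

f_truncated = sum(a(n) * np.linalg.matrix_power(B, n) for n in range(200))
f_closed = (1 - zeta) * np.linalg.inv(np.eye(2) - zeta * B)
assert np.allclose(f_truncated, f_closed)
```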
Denote by \(Q\) and \(M\) the exploration and transition matrices defined in the main text, which, as observed, are stochastic.
**Lemma 22**: _Let \(\rho_{\mathrm{C}}\) be positive. If \(Q\) is irreducible (quasi-positive), then \(M\) is primitive (positive)._
**Proof** To ease notation we write \(\rho\) in place of \(\rho_{\mathrm{C}}\). Let \(Q\) be irreducible. Recall that
\[M\left(i\mid j\right)=Q\left(i\mid j\right)\rho\left(i\mid j\right)\qquad \forall i\neq j \tag{53}\]
and \(M\left(j\mid j\right)=1-\underset{k\neq j}{\sum}Q\left(k\mid j\right)\rho \left(k\mid j\right)\), for all \(j\in A\). Given any \(j\in A\), since \(Q\) is irreducible, then it cannot be the case that \(Q\left(k\mid j\right)=0\) for all \(k\neq j\). Positivity of the BCM implies that
\[M\left(j\mid j\right)=1-\underset{k\neq j}{\sum}Q\left(k\mid j\right)\rho \left(k\mid j\right)>1-\underset{k\neq j}{\sum}Q\left(k\mid j\right)\geq 1- \underset{k\in A}{\sum}Q\left(k\mid j\right)=0\]
and so \(M\left(j\mid j\right)>0\) for all \(j\in A\).
Moreover, if \(i\neq j\), then there exist \(n\geq 1\) and \(k_{0},...,k_{n}\) in \(A\), with \(k_{0}=i\), \(k_{n}=j\), and \(k_{h}\neq k_{h-1}\) for all \(h=1,...,n\), such that
\[Q\left(k_{1}\mid k_{0}\right)Q\left(k_{2}\mid k_{1}\right)\cdots Q\left(k_{n} \mid k_{n-1}\right)>0\]
and positivity of the BCM implies that
\[M\left(k_{1}\mid k_{0}\right)M\left(k_{2}\mid k_{1}\right)\cdots M\left(k_{n} \mid k_{n-1}\right)>0\]
Together with positivity of \(M\) on the diagonal, this yields primitivity of \(M\) itself.29
Footnote 29: Because \(M\) is then irreducible and non-traceless.
Finally, if \(Q\) is quasi-positive, the argument above shows that \(M\) is positive on the diagonal, and (53) shows that \(M\) is positive also off the diagonal. \(\blacksquare\)
#### b.2.3 Proof of Proposition 10
By Lemma 21 the two matrix power series
\[\sum_{n=0}^{\infty}\mathbb{P}\left[N=n\right]M^{n}\qquad\text{and}\qquad\sum_{n= 0}^{\infty}\mathbb{P}\left[N>n\right]M^{n}\]
converge (note that \(\sum_{n=0}^{\infty}\mathbb{P}\left[N>n\right]=\mathbb{E}\left[N\right]<\infty\)). Recall that, if the algorithm stops at iteration \(n\in\mathbb{N}=\left\{0,1,...\right\}\), it chooses the incumbent \(j_{n-1}\). By independence, the joint probability of stopping at iteration \(n\) and choosing \(j\in A\) is
\[\mathbb{P}\left[N=n,J_{n-1}=j\right]=\mathbb{P}\left[N=n\right]\mathbb{P} \left[J_{n-1}=j\right]\]
Now, for \(n=0\) we have
\[\mathbb{P}\left[J_{-1}=j\right]=\mu_{j}=\left(M^{0}\mu\right)_{j}\]
Assume that for \(n=m\) we have
\[\mathbb{P}\left[J_{m-1}=j\right]=\left(M^{m}\mu\right)_{j}\]
Then, for \(n=m+1\) we have
\[\mathbb{P}\left[J_{(m+1)-1}=j\right] =\mathbb{P}\left[J_{m}=j\right]=\sum_{i\in A}\mathbb{P}\left[J_{m }=j,J_{m-1}=i\right]=\sum_{i\in A}\mathbb{P}\left[J_{m}=j\mid J_{m-1}=i\right] \mathbb{P}\left[J_{m-1}=i\right]\] \[=\sum_{i\in A}m_{ji}\left(M^{m}\mu\right)_{i}=\left(M\left(M^{m} \mu\right)\right)_{j}=\left(M^{m+1}\mu\right)_{j}\]
We have proved by induction that
\[\mathbb{P}\left[J_{n-1}=j\right]=\left(M^{n}\mu\right)_{j}\qquad\forall n\in \mathbb{N}\]
It follows that the probability of choosing \(j\) is
\[\sum_{n=0}^{\infty}\mathbb{P}\left[N=n,J_{n-1}=j\right]=\sum_{n=0}^{\infty} \mathbb{P}\left[N=n\right]\mathbb{P}\left[J_{n-1}=j\right]=\sum_{n=0}^{\infty }\mathbb{P}\left[N=n\right]\left(M^{n}\mu\right)_{j}\]
Then
\[p_{N} =\sum_{n=0}^{\infty}\mathbb{P}\left[N=n\right]\left(M^{n}\mu\right) =\lim_{k\to\infty}\sum_{n=0}^{k}\mathbb{P}\left[N=n\right]\left(M^{n}\mu\right) =\lim_{k\to\infty}\left(\left[\sum_{n=0}^{k}\mathbb{P}\left[N=n\right]M^{n} \right]\mu\right)\] \[=\left(\lim_{k\to\infty}\left[\sum_{n=0}^{k}\mathbb{P}\left[N=n \right]M^{n}\right]\right)\mu\]
and so \(p_{N}=f_{N}\left(M\right)\mu\) holds. The average duration of an iteration starting with incumbent \(j\) is
\[\tau_{j}=\sum_{i\in A}Q\left(i\mid j\right)\tau_{\text{RT}}\left(i\mid j\right)\]
where to ease notation we write \(\tau\) in place of \(\bar{\tau}\). Since \(\mathbb{P}\left[J_{n-1}=j\right]=\left(M^{n}\mu\right)_{j}\), the average duration of iteration \(k\) (if it takes place, i.e., if \(N>k\)) is
\[\underset{j\in A}{\sum}\tau_{j}\mathbb{P}\left[J_{k-1}=j\right]=\underset{j \in A}{\sum}\tau_{j}\left(M^{k}\mu\right)_{j}=\tau\cdot M^{k}\mu\]
The average duration if \(N=n\) is then
\[\sum_{k=0}^{n-1}\tau\cdot M^{k}\mu=\tau\cdot\left(\sum_{k=0}^{n-1}M^{k}\right)\mu\]
with the convention \(\sum_{k=0}^{-1}M^{k}=0\) (the zero matrix). Since the probability of stopping at \(n\) is \(\mathbb{P}\left[N=n\right]\), it follows that
\[\tau_{N}=\sum_{n=0}^{\infty}\mathbb{P}\left[N=n\right]\tau\cdot\left(\sum_{k=0}^{n-1}M^{k}\right)\mu=\tau\cdot\left(\sum_{n=0}^{\infty}\mathbb{P}\left[N=n\right]\left(\sum_{k=0}^{n-1}M^{k}\right)\right)\mu \tag{54}\] \[=\tau\cdot\left(\sum_{n=1}^{\infty}\mathbb{P}\left[N=n\right]\left(\sum_{k=0}^{n-1}M^{k}\right)\right)\mu\]
because \(\sum_{k=0}^{-1}M^{k}=0\). Now
\[\sum_{n=1}^{\infty}\mathbb{P}\left[N=n\right]\left(\sum_{k=0}^{n-1}M^{k}\right)=\sum_{n=1}^{\infty}\mathbb{P}\left[N=n\right]\left(\sum_{k=1}^{n}M^{k-1}\right)=\sum_{n=1}^{\infty}\sum_{k=1}^{\infty}1_{\left\{k\leq n\right\}}\mathbb{P}\left[N=n\right]M^{k-1}\] \[=\sum_{k=1}^{\infty}\sum_{n=1}^{\infty}1_{\left\{k\leq n\right\}}\mathbb{P}\left[N=n\right]M^{k-1}=\sum_{k=1}^{\infty}\mathbb{P}\left[N\geq k\right]M^{k-1}=\sum_{n=0}^{\infty}\mathbb{P}\left[N\geq n+1\right]M^{n}=\sum_{n=0}^{\infty}\mathbb{P}\left[N>n\right]M^{n}\]
This proves that \(\tau_{N}=\tau\cdot g_{N}\left(M\right)\mu\) holds.
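The identity \(p_{N}=f_{N}\left(M\right)\mu\) can be illustrated by simulation. The sketch below uses an arbitrary column-stochastic \(M\), an arbitrary initial distribution \(\mu\), and a geometric stopping number, for which \(f_{N}\left(M\right)=\left(1-\zeta\right)\left(I-\zeta M\right)^{-1}\); all numerical values are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

M = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.1],
              [0.2, 0.2, 0.6]])       # column-stochastic transition matrix
mu = np.array([0.2, 0.5, 0.3])
zeta = 0.7                            # P[N = n] = (1 - zeta) * zeta**n

# Formula from Proposition 10 for a geometric stopping number
p_N = (1 - zeta) * np.linalg.inv(np.eye(3) - zeta * M) @ mu

# Monte Carlo: draw N, run N transitions of M from an initial draw out of mu, record the state.
counts = np.zeros(3)
for _ in range(100_000):
    n = rng.geometric(1 - zeta) - 1   # shifts the support {1,2,...} to {0,1,...}
    j = rng.choice(3, p=mu)
    for _ in range(n):
        j = rng.choice(3, p=M[:, j])
    counts[j] += 1

print(p_N, counts / counts.sum())     # the two vectors should be close
```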
#### b.2.4 Proof of Proposition 11
If \(N\) is negative binomial, then
\[f_{N}\left(z\right) =\sum_{n=0}^{\infty}\binom{n+r-1}{r-1}\zeta^{n}\left(1-\zeta\right)^{r}z^{n}=\left(1-\zeta\right)^{r}\sum_{n=0}^{\infty}\binom{n+r-1}{r-1}\left(\zeta z\right)^{n}=\frac{\left(1-\zeta\right)^{r}}{\left(1-\zeta z\right)^{r}}\] \[g_{N}\left(z\right) =\frac{1-f_{N}\left(z\right)}{1-z}=\frac{1-\frac{\left(1-\zeta\right)^{r}}{\left(1-\zeta z\right)^{r}}}{1-z}=\frac{\frac{\left(1-\zeta z\right)^{r}}{\left(1-\zeta z\right)^{r}}-\frac{\left(1-\zeta\right)^{r}}{\left(1-\zeta z\right)^{r}}}{1-z}=\frac{\left(1-\zeta z\right)^{r}-\left(1-\zeta\right)^{r}}{\left(1-z\right)\left(1-\zeta z\right)^{r}}\]
For \(r=1\) it yields
\[f_{N}\left(z\right) =\left(1-\zeta\right)\left(1-\zeta z\right)^{-1}\] \[g_{N}\left(z\right) =\frac{1-\zeta z-1+\zeta}{\left(1-z\right)\left(1-\zeta z\right)} =\frac{\zeta-\zeta z}{\left(1-z\right)\left(1-\zeta z\right)}=\frac{\zeta \left(1-z\right)}{\left(1-z\right)\left(1-\zeta z\right)}=\zeta\left(1-\zeta z \right)^{-1}\]
In general, note that \(z=1\) is a root of \(\left(1-\zeta z\right)^{r}-\left(1-\zeta\right)^{r}\). Thus, the ratio
\[\frac{\left(1-\zeta z\right)^{r}-\left(1-\zeta\right)^{r}}{1-z}\]
appearing above is a polynomial of degree \(r-1\) in \(z\). Next we compute it. It holds
\[\left(1-\zeta z\right)^{r}-\left(1-\zeta\right)^{r} =\sum_{k=0}^{r}\binom{r}{k}\left(-\zeta z\right)^{k}-\sum_{k=0}^ {r}\binom{r}{k}\left(-\zeta\right)^{k}\] \[=\sum_{k=0}^{r}\left(-1\right)^{k}\binom{r}{k}\zeta^{k}z^{k}- \sum_{k=0}^{r}\left(-1\right)^{k}\binom{r}{k}\zeta^{k}\] \[=\sum_{k=0}^{r}\left(-1\right)^{k}\binom{r}{k}\zeta^{k}\left(z^{k }-1\right)=\sum_{k=0}^{r}\binom{r}{k}\left(-\zeta\right)^{k}\left(z-1\right) \sum_{j=0}^{k-1}z^{j}\]
because
\[z^{k}-1=\left(z-1\right)\left(1+z+\cdots+z^{k-1}\right)=\left(z-1\right)\sum_ {j=0}^{k-1}z^{j}\]
with the convention \(\sum_{j=0}^{-1}z^{j}=0\). Then,
\[g_{N}\left(z\right) =\frac{\left(1-\zeta z\right)^{r}-\left(1-\zeta\right)^{r}}{ \left(1-z\right)\left(1-\zeta z\right)^{r}}=\left(\sum_{k=0}^{r}\binom{r}{k} \left(-\zeta\right)^{k}\left(z-1\right)\sum_{j=0}^{k-1}z^{j}\right)\frac{1}{ \left(1-z\right)\left(1-\zeta z\right)^{r}}\] \[=-\left(\sum_{k=0}^{r}\binom{r}{k}\left(-\zeta\right)^{k}\sum_{ j=0}^{k-1}z^{j}\right)\left(1-\zeta z\right)^{-r}\]
showing that (29) holds.
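The closed forms for \(f_{N}\) and \(g_{N}\), including the polynomial form (29), can be checked at a scalar point against direct series evaluation; the parameters \(r\), \(\zeta\), and \(z\) below are illustrative.

```python
import math

r, zeta, z = 3, 0.6, 0.5
pmf = lambda n: math.comb(n + r - 1, r - 1) * zeta**n * (1 - zeta)**r   # P[N = n]

# Direct series evaluations of f_N and g_N at z
f_series = sum(pmf(n) * z**n for n in range(500))
tail = lambda n: 1 - sum(pmf(m) for m in range(n + 1))                  # P[N > n]
g_series = sum(tail(n) * z**n for n in range(500))

# Closed forms derived above
f_closed = (1 - zeta)**r / (1 - zeta * z)**r
g_closed = ((1 - zeta * z)**r - (1 - zeta)**r) / ((1 - z) * (1 - zeta * z)**r)

# Polynomial form corresponding to (29)
g_poly = -sum(math.comb(r, k) * (-zeta)**k * sum(z**j for j in range(k))
              for k in range(r + 1)) / (1 - zeta * z)**r

assert math.isclose(f_series, f_closed, rel_tol=1e-9)
assert math.isclose(g_series, g_closed, rel_tol=1e-9)
assert math.isclose(g_poly, g_closed, rel_tol=1e-12)
```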
#### b.2.5 Equations (33) and (34)
PreambleA reversible matrix \(B\) is diagonalizable with real eigenvalues. Indeed, from the detailed balance condition (1) it readily follows that the matrix \(B^{*}\) with off diagonal entries \(b_{ij}^{*}=b_{ij}\sqrt{p_{j}/p_{i}}\) is symmetric and has the same eigenvalues as \(B\). A stochastic reversible matrix \(B\) has then a largest eigenvalue \(\lambda_{1}\) equal to \(1\) and all its other eigenvalues have absolute values \(\leq\lambda_{1}\), i.e., they belong to \([-1,1]\). If, in addition, \(B\) is primitive, by Perron's Theorem their absolute values are actually \(<\lambda_{1}\), so they belong to \((-1,1)\).
EquationsLet the transition matrix \(M\) be diagonalizable (e.g., because it is reversible) and let \(\Lambda=\mathrm{diag}\left(\lambda_{1},\lambda_{2},...,\lambda_{m}\right)\) be the diagonal matrix of its eigenvalues, each repeated according to its multiplicity. For any summable sequence \(a=\left\{a_{n}\right\}\) of non-negative scalars we then have
\[f_{a}\left(M\right) =\sum_{n=0}^{\infty}a_{n}M^{n}=\lim_{l\rightarrow\infty}\sum_{n=0}^{l}a_{n}U\Lambda^{n}U^{-1}=\lim_{l\rightarrow\infty}\left[U\left(\sum_{n=0}^{l}a_{n}\Lambda^{n}\right)U^{-1}\right]=U\left[\lim_{l\rightarrow\infty}\left(\sum_{n=0}^{l}a_{n}\Lambda^{n}\right)\right]U^{-1}\] \[=U\left[\mathrm{diag}\left(\sum_{n=0}^{\infty}a_{n}\lambda_{1}^{n},\sum_{n=0}^{\infty}a_{n}\lambda_{2}^{n},...,\sum_{n=0}^{\infty}a_{n}\lambda_{m}^{n}\right)\right]U^{-1}=U\left[\mathrm{diag}\left(f_{a}\left(\lambda_{1}\right),f_{a}\left(\lambda_{2}\right),...,f_{a}\left(\lambda_{m}\right)\right)\right]U^{-1}\]
This immediately yields (33) and (34).
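A minimal check of this spectral shortcut, for a small diagonalizable stochastic matrix and geometric weights (both arbitrary choices for the example): the eigendecomposition route agrees with the closed form of the series.

```python
import numpy as np

M = np.array([[0.6, 0.3],
              [0.4, 0.7]])                      # column-stochastic, diagonalizable
zeta = 0.8
f = lambda x: (1 - zeta) / (1 - zeta * x)       # generating function of a_n = (1-zeta)*zeta**n

eigvals, U = np.linalg.eig(M)
f_spectral = U @ np.diag(f(eigvals)) @ np.linalg.inv(U)

f_closed = (1 - zeta) * np.linalg.inv(np.eye(2) - zeta * M)
assert np.allclose(f_spectral, f_closed)
```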
### Section 4.4
#### b.3.1 Proof of Theorem 12, Corollary 13, and Corollary 14
**Proposition 23**: _If \(\rho_{\mathrm{C}}\) has a binary value representation and the exploration matrix \(Q\) is nice, then the probability distribution_
\[\pi\left(i\right)=\left\{\begin{array}{ll}u\left(i\right)\\ \overline{\sum_{j\in\arg\max{}_{A}w}u\left(j\right)}\\ 0\end{array}\right.\qquad\text{if }i\in\arg\max{}_{A}w\]
_is the only stationary distribution for \(M\), and there exists \(\varepsilon\in\left(0,1\right)\) such that, for all \(n\in\mathbb{N}\) and all \(\mu\in\Delta\left(A\right)\),_
\[\left\|M^{n}\mu-\pi\right\|_{1}\leq 2\left(1-\varepsilon\right)^{n} \tag{55}\]
_Moreover, if \(\left\{N_{k}\right\}_{k=0}^{\infty}\) is a sequence of stopping numbers that diverges, then, for all \(\mu\in\Delta\left(A\right)\)_
\[\left(\underset{n=0}{\overset{\infty}{\sum}}\mathbb{P}\left[N_{k}=n\right]M^ {n}\right)\mu\rightarrow\pi\qquad\text{as }k\rightarrow\infty \tag{56}\]
In particular, (55) implies that
\[\underset{n\rightarrow\infty}{\lim}\Pr\left[J_{n}=i\right]=\pi\left(i\right)\]
and (56) implies that
\[\underset{N_{k}\rightarrow\infty}{\lim}p_{N_{k}}=\pi \tag{57}\]
when the stopping numbers are simple. Hence,
\[\lim_{N_{k}\rightarrow\infty}p_{N_{k}}\left(i,A\right)=\lim_{n\rightarrow\infty}\Pr\left[J_{n}=i\right]=\left\{\begin{array}{ll}\dfrac{u\left(i\right)}{\sum_{j\in\arg\max{}_{A}w}u\left(j\right)}&\text{if }i\in\arg\max{}_{A}w\\ 0&\text{otherwise}\end{array}\right.\]
This proves Theorem 12. Corollaries 13 and 14 follow immediately.
**Proof** To ease notation we write \(\rho\) in place of \(\rho_{\mathrm{C}}\). We first show that
\[M\left(k\mid j\right)\pi\left(j\right)=M\left(j\mid k\right)\pi\left(k\right) \tag{58}\]
for all \(j\) and \(k\) in \(A\). Denote, for brevity, \(W\left(A\right)=\arg\max_{A}w\). If \(j=k\), the equality is trivial. Let \(j\neq k\) in \(A\).
* If \(j,k\in W\left(A\right)\), then \[M\left(k\mid j\right)\pi\left(j\right) =Q\left(k\mid j\right)\rho\left(k\mid j\right)\frac{u\left(j \right)}{\sum_{i\in W\left(A\right)}u\left(i\right)}\] \[=Q\left(k\mid j\right)s\left(k,j\right)\frac{u\left(k\right)}{u \left(k\right)+u\left(j\right)}\frac{u\left(j\right)}{\sum_{i\in W\left(A \right)}u\left(i\right)}\] \[=Q\left(j\mid k\right)s\left(j,k\right)\frac{u\left(j\right)}{u \left(j\right)+u\left(k\right)}\frac{u\left(k\right)}{\sum_{i\in W\left(A \right)}u\left(i\right)}=Q\left(j\mid k\right)\rho\left(j\mid k\right)\pi \left(k\right)\] \[=M\left(j\mid k\right)\pi\left(k\right)\]
* If \(j,k\notin W\left(A\right)\), then \(\pi\left(j\right)=\pi\left(k\right)=0\) and \[M\left(k\mid j\right)\pi\left(j\right)=M\left(j\mid k\right)\pi\left(k\right)\]
* If \(j\in W\left(A\right)\) and \(k\notin W\left(A\right)\), then \(w\left(j\right)>w\left(k\right)\) and so \(\rho\left(k\mid j\right)=0=\pi\left(k\right)\), thus \[M\left(k\mid j\right)\pi\left(j\right)=Q\left(k\mid j\right)\rho\left(k\mid j \right)\frac{u\left(j\right)}{\sum_{i\in W\left(A\right)}u\left(i\right)}=0=M \left(j\mid k\right)\pi\left(k\right)\]
* If \(j\notin W\left(A\right)\) and \(k\in W\left(A\right)\), then \(w\left(k\right)>w\left(j\right)\) and so \(\rho\left(j\mid k\right)=0=\pi\left(j\right)\), thus \[M\left(k\mid j\right)\pi\left(j\right)=0=Q\left(j\mid k\right)\rho\left(j\mid k \right)\pi\left(k\right)=M\left(j\mid k\right)\pi\left(k\right)\]
The "detailed balance" condition (58) implies that
\[\underset{j\in A}{\sum}M\left(k\mid j\right)\pi\left(j\right)=\underset{j \in A}{\sum}M\left(j\mid k\right)\pi\left(k\right)=\pi\left(k\right)\underset{j \in A}{\sum}M\left(j\mid k\right)=\pi\left(k\right)\]
for all \(k\in A\), then \(M\pi=\pi\). Thus \(\pi\) is a stationary distribution for \(M\).
Take \(j_{0}\in W\left(A\right)\). Then, \(w\left(j_{0}\right)\geq w\left(i\right)\) for all \(i\neq j_{0}\), and so
\[M\left(j_{0}\mid i\right)=Q\left(j_{0}\mid i\right)\rho\left(j_{0}\mid i \right)=\left\{\begin{array}{ll}Q\left(j_{0}\mid i\right)&\text{if }w\left(j_{0} \right)>w\left(i\right)\\ Q\left(j_{0}\mid i\right)s\left(j_{0},i\right)\frac{u\left(j_{0}\right)}{u \left(j_{0}\right)+u\left(i\right)}&\text{if }w\left(j_{0}\right)=w\left(i\right) \end{array}\right.\]
For \(i=j_{0}\), we have that \(\rho\left(k\mid j_{0}\right)=0\) if \(k\notin W\left(A\right)\) and \(\rho\left(k\mid j_{0}\right)\in\left(0,1\right)\) if \(k\in W\left(A\right)\) (provided \(k\neq j_{0}\)),
\[M\left(j_{0}\mid j_{0}\right)=1-\underset{k\neq j_{0}}{\sum}Q\left(k\mid j_{0} \right)\rho\left(k\mid j_{0}\right)>1-\underset{k\neq j_{0}}{\sum}Q\left(k \mid j_{0}\right)\geq 1-\underset{k\in A}{\sum}Q\left(k\mid j_{0}\right)=0\]
By Doeblin's Theorem, \(\pi\) is the only stationary distribution for \(M\) and there exists \(\varepsilon\in\left(0,1\right)\) such that
\[\left\|M^{n}\mu-\pi\right\|_{1}\leq 2\left(1-\varepsilon\right)^{n}\]
for all \(n\in\mathbb{N}\) and all \(\mu\in\Delta\left(A\right)\).
Given any \(\mu\in\Delta\left(A\right)\), set, for each \(k\in\mathbb{N}\),
\[P_{k}\left(n\right)=\mathbb{P}\left[N_{k}=n\right]\qquad\forall n\in\mathbb{N}\]
and
\[p_{k}=\left(\sum_{n=0}^{\infty}\!P_{k}\left(n\right)M^{n}\right)\mu\]
Then
\[p_{k}=\underset{m\rightarrow\infty}{\lim}\sum_{n=0}^{m}\!P_{k}\left(n\right)M ^{n}\mu\quad\text{and}\quad\pi=\underset{m\rightarrow\infty}{\lim}\sum_{n=0}^ {m}\!P_{k}\left(n\right)\pi\]
and so
\[p_{k}-\pi=\underset{m\rightarrow\infty}{\lim}\sum_{n=0}^{m}\!P_{k}\left(n \right)\left(M^{n}\mu-\pi\right)\]
Thus,
\[\left\|p_{k}-\pi\right\|_{1} =\lim_{m\rightarrow\infty}\left\|\sum_{n=0}^{m}P_{k}\left(n\right)\left(M^{n}\mu-\pi\right)\right\|_{1}\leq\lim_{m\rightarrow\infty}\sum_{n=0}^{m}P_{k}\left(n\right)\left\|M^{n}\mu-\pi\right\|_{1}\] \[\leq\lim_{m\rightarrow\infty}\sum_{n=0}^{m}P_{k}\left(n\right)2\left(1-\varepsilon\right)^{n}=\sum_{n=0}^{\infty}P_{k}\left(n\right)2\left(1-\varepsilon\right)^{n}\]
The sequence \(\left\{a_{k}\right\}_{k\in\mathbb{N}}\) of functions \(a_{k}\!:\mathbb{N}\rightarrow\left[0,\infty\right)\) given by
\[a_{k}\left(n\right)=P_{k}\left(n\right)2\left(1-\varepsilon\right)^{n}\]
is bounded above by the function \(a:\mathbb{N}\rightarrow\left[0,\infty\right)\) given by
\[a\left(n\right)=2\left(1-\varepsilon\right)^{n}\]
The latter is summable with respect to the counting measure \(\gamma\) on \(\mathbb{N}\). In addition, \(\lim_{k\rightarrow\infty}a_{k}\left(n\right)=0\) for all \(n\in\mathbb{N}\). By the Lebesgue Dominated Convergence Theorem,
\[\underset{k\rightarrow\infty}{\lim}\sum_{n=0}^{\infty}\!P_{k}\left(n\right) 2\left(1-\varepsilon\right)^{n}=\underset{k\rightarrow\infty}{\lim}\int_{ \mathbb{N}}a_{k}\left(n\right)\mathrm{d}\gamma\left(n\right)=0\]
Therefore, \(\lim_{k\rightarrow\infty}\left\|p_{k}-\pi\right\|_{1}=0\). \(\blacksquare\)
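The convergence statement can be illustrated numerically. The sketch below builds \(M\) from an illustrative binary value representation with \(s\equiv 1\) and a uniform (symmetric, irreducible) exploration matrix, computes the \(\pi\) predicted by Proposition 23, and prints \(\left\|M^{n}\mu-\pi\right\|_{1}\) for a few values of \(n\); all numbers are assumptions made for the example.

```python
import numpy as np

w = np.array([1, 1, 0])                     # levels: the first two alternatives maximize w
u = np.array([1.0, 2.0, 1.0])               # Luce weights on level sets

def rho(i, j):
    if w[i] > w[j]:
        return 1.0
    if w[i] < w[j]:
        return 0.0
    return u[i] / (u[i] + u[j])

m = len(w)
Q = np.full((m, m), 1.0 / m)                # uniform exploration matrix
M = np.zeros((m, m))
for j in range(m):
    for i in range(m):
        if i != j:
            M[i, j] = Q[i, j] * rho(i, j)
    M[j, j] = 1.0 - M[:, j].sum()

pi = np.where(w == w.max(), u, 0.0)
pi /= pi.sum()                              # predicted stationary distribution

mu = np.array([0.1, 0.1, 0.8])
for n in (5, 20, 80):
    print(n, np.abs(np.linalg.matrix_power(M, n) @ mu - pi).sum())   # geometric decay
```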
#### b.3.2 Proof of Theorem 15
Given a menu \(A\), with typical elements \(i\), \(j\) and \(k\), we denote by \(P=\left[P\left(i\mid j\right)\right]_{i,j\in A}\) an \(\left|A\right|\times\left|A\right|\) stochastic matrix, where \(P\left(i\mid j\right)\) is interpreted as the probability with which a system moves from state \(j\) to state \(i\). Clearly, \(P\left(\cdot\mid j\right)\in\Delta\left(A\right)\) for all \(j\in A\).
**Definition 13**: _A stochastic matrix \(P\) is transitive if_
\[P\left(j\mid i\right)P\left(k\mid j\right)P\left(i\mid k\right)=P\left(k\mid i \right)P\left(j\mid k\right)P\left(i\mid j\right)\qquad\forall i,j,k\in A \tag{59}\]
Transitivity is known as the _Kolmogorov criterion_ in the Markov chains literature (see, e.g., Kelly, 1979, p. 24) and as the _product rule_ in the stochastic choice literature (Luce and Suppes, 1965, p. 341).
Transitivity is automatically satisfied if at least two of the three states \(i\), \(j\), and \(k\) in \(A\) coincide. In fact,
* if \(i=j\), then \[P\left(j\mid i\right)P\left(k\mid j\right)P\left(i\mid k\right) =P\left(i\mid i\right)P\left(k\mid i\right)P\left(i\mid k\right)\] \[P\left(k\mid i\right)P\left(j\mid k\right)P\left(i\mid j\right) =P\left(k\mid i\right)P\left(i\mid k\right)P\left(i\mid i\right)\]
* if \(i=k\), then \[P\left(j\mid i\right)P\left(k\mid j\right)P\left(i\mid k\right) =P\left(j\mid i\right)P\left(i\mid j\right)P\left(i\mid i\right)\] \[P\left(k\mid i\right)P\left(j\mid k\right)P\left(i\mid j\right) =P\left(i\mid i\right)P\left(j\mid i\right)P\left(i\mid j\right)\]
* if \(j=k\), then \[P\left(j\mid i\right)P\left(k\mid j\right)P\left(i\mid k\right) =P\left(j\mid i\right)P\left(j\mid j\right)P\left(i\mid j\right)\] \[P\left(k\mid i\right)P\left(j\mid k\right)P\left(i\mid j\right) =P\left(j\mid i\right)P\left(j\mid j\right)P\left(i\mid j\right)\]
Therefore, transitivity can be restated as
\[P\left(j\mid i\right)P\left(k\mid j\right)P\left(i\mid k\right)=P\left(k\mid i \right)P\left(j\mid k\right)P\left(i\mid j\right)\]
for all distinct \(i\), \(j\) and \(k\) in \(A\).30
Footnote 30: This argument applies to any function \(P:A\times A\rightarrow\mathbb{R}\) and is independent of its “diagonal” values \(P\left(i\mid i\right)\).
The next result, which relates reversibility and transitivity, builds upon Kolmogorov (1936) and Luce and Suppes (1965).
**Proposition 24**: _Let \(P\) be a positive stochastic matrix. The following conditions are equivalent:_
* \(P\) _is reversible under some_ \(\pi\in\Delta\left(A\right)\)_;_
* \(P\) _is transitive._
_In this case, given any \(i\in A\), it holds_
\[\pi\left(j\right)=\frac{r\left(j\mid i\right)}{\underset{k\in A}{\sum}r\left(k \mid i\right)}\qquad\forall j\in A\]
_where \(r\left(j\mid i\right)=P\left(j\mid i\right)/P\left(i\mid j\right)\). In particular, \(\pi\) is unique._
**Proof** Assume that there exists \(\pi\in\Delta\left(A\right)\) such that \(P\left(i\mid j\right)\pi\left(j\right)=P\left(j\mid i\right)\pi\left(i\right)\) for all distinct \(i,j\in A\) (note that this is weaker than reversibility in that \(\pi\) is not assumed to be positive); then
\[P\left(i\mid j\right)\pi\left(j\right)=P\left(j\mid i\right)\pi\left(i\right) \qquad\forall i,j\in A \tag{60}\]
If \(\pi\left(i^{*}\right)=0\) for some \(i^{*}\in A\), then (being \(P\) positive)
\[\pi\left(j\right)=\frac{P\left(j\mid i^{*}\right)}{P\left(i^{*}\mid j\right) }\pi\left(i^{*}\right)=0\qquad\forall j\in A \tag{61}\]
But, this is impossible since \(\underset{j\in A}{\sum}\pi\left(j\right)=1\). Hence, \(\pi\) is positive. Moreover, by (61) we have
\[\frac{\frac{P\left(j\mid i^{*}\right)}{P\left(i^{*}\mid j\right)}}{\underset{k\in A}{\sum}\frac{P\left(k\mid i^{*}\right)}{P\left(i^{*}\mid k\right)}}=\frac{\frac{P\left(j\mid i^{*}\right)}{P\left(i^{*}\mid j\right)}\pi\left(i^{*}\right)}{\underset{k\in A}{\sum}\frac{P\left(k\mid i^{*}\right)}{P\left(i^{*}\mid k\right)}\pi\left(i^{*}\right)}=\frac{\pi\left(j\right)}{\underset{k\in A}{\sum}\pi\left(k\right)}=\pi\left(j\right)\qquad\forall j\in A\]
irrespective of the choice of \(i^{*}\in A\). Hence, \(\pi\) is unique. Finally, given any \(i,j,k\in A\), by (60) we have:
\[\frac{\pi\left(j\right)}{\pi\left(i\right)}\frac{\pi\left(k\right) }{\pi\left(j\right)}\frac{\pi\left(i\right)}{\pi\left(k\right)} =1\implies\frac{P\left(j\mid i\right)P\left(k\mid j\right)P\left(i \mid k\right)}{P\left(i\mid j\right)P\left(j\mid k\right)P\left(k\mid i \right)}=1\] \[\implies\frac{P\left(j\mid i\right)P\left(k\mid j\right)P\left(i \mid k\right)}{P\left(k\mid i\right)P\left(j\mid k\right)P\left(i\mid j \right)}=1\] \[\implies P\left(j\mid i\right)P\left(k\mid j\right)P\left(i\mid k\right)=P \left(k\mid i\right)P\left(j\mid k\right)P\left(i\mid j\right)\]
So, transitivity holds.
Conversely, if transitivity holds, choose arbitrarily \(i^{*}\in A\) and set
\[\pi^{*}\left(j\right):=\frac{\frac{P\left(j\mid i^{*}\right)}{P\left(i^{*} \mid j\right)}}{\underset{k\in A}{\sum}\frac{P\left(k\mid i^{*}\right)}{P \left(i^{*}\mid k\right)}}=\frac{P\left(j\mid i^{*}\right)}{P\left(i^{*}\mid j \right)}\zeta\qquad\forall j\in A \tag{62}\]
where \(1/\zeta=\underset{k\in A}{\sum}P\left(k\mid i^{*}\right)/P\left(i^{*}\mid k \right)>0\). By transitivity, for all \(i,j\in A\),
\[P\left(j\mid i\right)P\left(i^{*}\mid j\right)P\left(i\mid i^{*}\right)=P \left(i^{*}\mid i\right)P\left(j\mid i^{*}\right)P\left(i\mid j\right)\]
and, since \(P\) is positive,
\[P\left(j\mid i\right)\frac{P\left(i\mid i^{*}\right)}{P\left(i^{*}\mid i\right)}= P\left(i\mid j\right)\frac{P\left(j\mid i^{*}\right)}{P\left(i^{*}\mid j\right)}\]
Thus, for all \(i,j\in A\),
\[P\left(j\mid i\right)\frac{P\left(i\mid i^{*}\right)}{P\left(i^{*}\mid i\right) }\zeta=P\left(i\mid j\right)\frac{P\left(j\mid i^{*}\right)}{P\left(i^{*}\mid j \right)}\zeta\]
In view of (62), reversibility with respect to \(\pi^{*}\) holds (note that \(\pi^{*}\) is strictly positive). \(\blacksquare\)
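The sketch below illustrates Proposition 24 on a positive reversible matrix obtained by a Metropolis-type construction (an arbitrary choice made for the example): the Kolmogorov criterion holds, and the stationary distribution is recovered from the ratios \(r\left(j\mid i^{*}\right)=P\left(j\mid i^{*}\right)/P\left(i^{*}\mid j\right)\).

```python
import itertools
import numpy as np

pi_target = np.array([0.5, 0.3, 0.2])
Q = np.full((3, 3), 1.0 / 3.0)                       # positive symmetric proposal
P = np.zeros((3, 3))
for j in range(3):
    for i in range(3):
        if i != j:
            P[i, j] = Q[i, j] * min(1.0, pi_target[i] / pi_target[j])
    P[j, j] = 1.0 - P[:, j].sum()                    # positive, stochastic, reversible w.r.t. pi_target

# Kolmogorov criterion (transitivity)
for i, j, k in itertools.permutations(range(3), 3):
    assert np.isclose(P[j, i] * P[k, j] * P[i, k], P[k, i] * P[j, k] * P[i, j])

# Stationary distribution recovered from the ratios r(j|i*) = P(j|i*)/P(i*|j)
i_star = 0
r = P[:, i_star] / P[i_star, :]
assert np.allclose(r / r.sum(), pi_target)
```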
**Proof of Theorem 15** To ease notation we write \(\rho\) in place of \(\rho_{\mathrm{C}}\).
"If." By Lemma 22, since \(Q\) is quasi-positive, then \(M\) is positive. By assumption \(M\) is reversible. But then, by Proposition 24, \(M\) is transitive, thus
\[M\left(j\mid i\right)M\left(k\mid j\right)M\left(i\mid k\right)=M\left(k\mid i \right)M\left(j\mid k\right)M\left(i\mid j\right)\]
for all distinct \(i\), \(j\) and \(k\) in \(A\). By definition of \(M\),
\[Q\left(j\mid i\right)\rho\left(j\mid i\right)Q\left(k\mid j\right) \rho\left(k\mid j\right)Q\left(i\mid k\right)\rho\left(i\mid k\right)\] \[=Q\left(k\mid i\right)\rho\left(k\mid i\right)Q\left(j\mid k \right)\rho\left(j\mid k\right)Q\left(i\mid j\right)\rho\left(i\mid j\right)\]
for all distinct \(i\), \(j\) and \(k\) in \(A\). By symmetry and quasi-positivity of \(Q\), this implies
\[\rho\left(j\mid i\right)\rho\left(k\mid j\right)\rho\left(i\mid k\right)= \rho\left(k\mid i\right)\rho\left(j\mid k\right)\rho\left(i\mid j\right)\]
for all distinct \(i\), \(j\) and \(k\) in \(A\). Therefore, \(\rho\) is transitive, and by Theorem 4 it admits a binary value representation, so that the Neural Metropolis Algorithm is value based.
"Only if." If the Neural Metropolis Algorithm is value based, by definition, \(\rho\) admits a binary value representation, and by Theorem 4 it is transitive. Thus
\[\rho\left(j\mid i\right)\rho\left(k\mid j\right)\rho\left(i\mid k\right)= \rho\left(k\mid i\right)\rho\left(j\mid k\right)\rho\left(i\mid j\right)\]
for all distinct \(i\), \(j\) and \(k\) in \(A\). Since \(Q\) is symmetric and quasi-positive, then
\[Q\left(j\mid i\right)\rho\left(j\mid i\right)Q\left(k\mid j \right)\rho\left(k\mid j\right)Q\left(i\mid k\right)\rho\left(i\mid k\right)\] \[=Q\left(k\mid i\right)\rho\left(k\mid i\right)Q\left(j\mid k \right)\rho\left(j\mid k\right)Q\left(i\mid j\right)\rho\left(i\mid j\right)\]
for all distinct \(i\), \(j\) and \(k\) in \(A\). By definition of \(M\),
\[M\left(j\mid i\right)M\left(k\mid j\right)M\left(i\mid k\right)=M\left(k\mid i \right)M\left(j\mid k\right)M\left(i\mid j\right)\]
for all distinct \(i\), \(j\) and \(k\) in \(A\). By Lemma 22, since \(Q\) is quasi-positive, then \(M\) is positive. But then, by Proposition 24, \(M\) is reversible. \(\blacksquare\)
### Section 5
Proposition 16 is a consequence of the following result.
**Proposition 25**: _Given any positive \(\rho_{\mathrm{C}}\) and any irreducible exploration matrix \(Q\), the transition matrix \(M\) is primitive. Moreover, denoting by \(\pi\) the stationary distribution of \(M\), it follows that_
\[\lim_{t\to\infty}p_{N_{t}}\left(j\right)=\frac{\pi\left(j\right)\bar{\tau}_{j}} {\underset{k\in A}{\sum}\pi\left(k\right)\bar{\tau}_{k}}\qquad\forall j\in A\]
_provided the distribution of \(\mathrm{RT}_{i,j}\) has (strictly) positive expectation, is continuous at \(0\), and has no singular part for all \((i,j)\in A^{2}\)._
**Proof of Proposition 25** To ease notation we write \(\rho\) in place of \(\rho_{\mathrm{C}}\). The stochastic process \((I,J,T)\) produces sequences
\[\left(\underset{\text{state }x_{0}}{\underbrace{j_{-1},i_{0}}},t_{0},\, \underset{\text{state }x_{1}}{\underbrace{j_{0},i_{1}}},t_{1},...,\underset{\text{ state }x_{n}}{\underbrace{j_{n-1},i_{n}}},t_{n},\,\underset{\text{state }x_{n+1}}{\underbrace{j_{n},i_{n+1}}},...\right)\]
it can then be seen as a semi-Markov chain with state space
\[\mathcal{X}=\left\{(j,i)\in A^{2}:Q\left(i\mid j\right)>0\right\}\]
where state \(x=(j,i)\in\mathcal{X}\) represents the comparison between incumbent \(j\) and proposal \(i\). Since the comparison between \(j\) and \(i\) produces incumbent \(k=i\) with probability \(\rho\left(i\mid j\right)\), \(k=j\) with probability \(1-\rho\left(i\mid j\right)\), and all other incumbents with probability \(0\), then the probability of switching from comparison \((j,i)\) to comparison \((k,h)\) is given by
\[\mathbb{P}\left[X_{n+1}=(k,h)\mid X_{n}=(j,i)\right]=\left(\delta_{i}\left(k \right)\rho\left(i\mid j\right)+\delta_{j}\left(k\right)\left(1-\rho\left(i \mid j\right)\right)\right)Q\left(h\mid k\right)\]
In fact,
* if \(i=j\), then the comparison between \(j\) and \(i\) produces new incumbent \(i\) for sure and
* if \(k=i\), then \[\mathbb{P}\left[X_{n+1}=(k,h)\mid X_{n}=(i,i)\right] =Q\left(h\mid i\right)\] \[=\left(\underset{=1}{\underbrace{\delta_{i}\left(k\right)}}\rho \left(i\mid j\right)+\,\underset{=1}{\underbrace{\delta_{j}\left(k\right)}} \left(1-\rho\left(i\mid j\right)\right)\right)Q\left(h\mid k\right)\]
* else \(k\neq i\), and \[\mathbb{P}\left[X_{n+1}=(k,h)\mid X_{n}=(i,i)\right] =0\] \[=\left(\underset{=0}{\underbrace{\delta_{i}\left(k\right)}}\rho \left(i\mid j\right)+\,\underset{=0}{\underbrace{\delta_{j}\left(k\right)}} \left(1-\rho\left(i\mid j\right)\right)\right)Q\left(h\mid k\right)\]
* if \(i\neq j\), then the comparison between \(j\) and \(i\) produces new incumbent \(k=i\) with probability \(\rho\left(i\mid j\right)\) and \(k=j\) with probability \(1-\rho\left(i\mid j\right)\) and
* if \(k=i\), then \[\mathbb{P}\left[X_{n+1}=\left(k,h\right)\mid X_{n}=\left(j,i\right)\right] =\rho\left(i\mid j\right)Q\left(h\mid i\right)\] \[=\left(\underbrace{\delta_{i}\left(k\right)}_{=1}\rho\left(i \mid j\right)+\ \underbrace{\delta_{j}\left(k\right)}_{=0}\left(1-\rho\left(i\mid j\right) \right)\right)Q\left(h\mid k\right)\]
* if \(k=j\), then, we have \[\mathbb{P}\left[X_{n+1}=\left(k,h\right)\mid X_{n}=\left(j,i\right)\right] =\left(1-\rho\left(i\mid j\right)\right)Q\left(h\mid j\right)\] \[=\left(\underbrace{\delta_{i}\left(k\right)}_{=0}\rho\left(i \mid j\right)+\ \underbrace{\delta_{j}\left(k\right)}_{=1}\left(1-\rho\left(i\mid j\right) \right)\right)Q\left(h\mid k\right)\]
* else \(k\neq i\) and \(k\neq j\), thus \[\mathbb{P}\left[X_{n+1}=\left(k,h\right)\mid X_{n}=\left(j,i\right)\right]=0= \left(\underbrace{\delta_{i}\left(k\right)}_{=0}\rho\left(i\mid j\right)+\ \underbrace{\delta_{j}\left(k\right)}_{=0}\left(1-\rho\left(i\mid j\right) \right)\right)Q\left(h\mid k\right)\]
Set
\[\hat{M}\left(\left(k,h\right)\mid\left(j,i\right)\right)=\left(\delta_{i} \left(k\right)\rho\left(i\mid j\right)+\delta_{j}\left(k\right)\left(1-\rho \left(i\mid j\right)\right)\right)Q\left(h\mid k\right)\qquad\forall\left(k,h \right),\left(j,i\right)\in\mathcal{X}\]
Next we show that \(\hat{M}\) is a _bona fide_ stochastic matrix. Given any \(\left(j,i\right)\in\mathcal{X}\),
\[\sum_{\left(k,h\right)\in\mathcal{X}}\hat{M}\left(\left(k,h \right)\mid\left(j,i\right)\right) =\sum_{\left(k,h\right)\in\mathcal{X}}\left(\delta_{i}\left(k \right)\rho\left(i\mid j\right)+\delta_{j}\left(k\right)\left(1-\rho\left(i \mid j\right)\right)\right)Q\left(h\mid k\right)\] \[=\sum_{\left(k,h\right)\in A^{2}}\left(\delta_{i}\left(k\right) \rho\left(i\mid j\right)+\delta_{j}\left(k\right)\left(1-\rho\left(i\mid j \right)\right)\right)Q\left(h\mid k\right)\] \[=\sum_{h\in A}\left(\sum_{k\in A}\left(\delta_{i}\left(k\right) \rho\left(i\mid j\right)+\delta_{j}\left(k\right)\left(1-\rho\left(i\mid j \right)\right)\right)Q\left(h\mid k\right)\right)\]
where equality in the second line follows from the fact that if \(\left(k,h\right)\in A^{2}\setminus\mathcal{X}\) then \(Q\left(h\mid k\right)=0\). We will use this fact repeatedly. Now,
* if \(i=j\), then \[\sum_{k\in A}\left(\delta_{i}\left(k\right)\rho\left(i\mid j\right)+\delta_{j }\left(k\right)\left(1-\rho\left(i\mid j\right)\right)\right)Q\left(h\mid k \right)=Q\left(h\mid i\right)\] hence \[\sum_{\left(k,h\right)\in\mathcal{X}}\hat{M}\left(\left(k,h\right)\mid\left( j,i\right)\right)=\sum_{h\in A}Q\left(h\mid i\right)=1\]
* else \[\sum_{k\in A}\left(\delta_{i}\left(k\right)\rho\left(i\mid j\right)+\delta_{j} \left(k\right)\left(1-\rho\left(i\mid j\right)\right)\right)Q\left(h\mid k \right)=\rho\left(i\mid j\right)Q\left(h\mid i\right)+\left(1-\rho\left(i \mid j\right)\right)Q\left(h\mid j\right)\] hence \[\sum_{(k,h)\in\mathcal{X}}\hat{M}\left(\left(k,h\right)\mid\left( j,i\right)\right) =\sum_{h\in A}\left(\rho\left(i\mid j\right)Q\left(h\mid i\right)+ \left(1-\rho\left(i\mid j\right)\right)Q\left(h\mid j\right)\right)\] \[=\rho\left(i\mid j\right)\sum_{h\in A}Q\left(h\mid i\right)+\left( 1-\rho\left(i\mid j\right)\right)\sum_{h\in A}Q\left(h\mid j\right)=1\]
Next we show that if \(\pi\in\Delta\left(A\right)\) and \(M\pi=\pi\), then setting
\[\hat{\pi}\left(j,i\right)=Q\left(i\mid j\right)\pi\left(j\right)\qquad\forall \left(j,i\right)\in\mathcal{X}\]
defines an element of \(\Delta\left(\mathcal{X}\right)\) such that \(\hat{M}\hat{\pi}=\hat{\pi}\). Clearly
\[\sum_{(j,i)\in\mathcal{X}}\hat{\pi}\left(j,i\right)=\sum_{(j,i)\in\mathcal{X} }Q\left(i\mid j\right)\pi\left(j\right)=\sum_{(j,i)\in A^{2}}Q\left(i\mid j \right)\pi\left(j\right)=\sum_{j\in A}\left(\sum_{i\in A}Q\left(i\mid j \right)\pi\left(j\right)\right)=1\]
Moreover, for all \(\left(k,h\right)\in\mathcal{X}\),
\[\left(\hat{M}\hat{\pi}\right)_{\left(k,h\right)} =\sum_{(j,i)\in\mathcal{X}}\hat{M}\left(\left(k,h\right)\mid\left( j,i\right)\right)\hat{\pi}\left(j,i\right)=\] \[=\sum_{(j,i)\in\mathcal{X}}\left(\delta_{i}\left(k\right)\rho \left(i\mid j\right)+\delta_{j}\left(k\right)\left(1-\rho\left(i\mid j\right) \right)\right)Q\left(h\mid k\right)Q\left(i\mid j\right)\pi\left(j\right)\] \[=Q\left(h\mid k\right)\sum_{(j,i)\in A^{2}}\left(\delta_{i}\left( k\right)\rho\left(i\mid j\right)+\delta_{j}\left(k\right)\left(1-\rho\left(i\mid j \right)\right)\right)Q\left(i\mid j\right)\pi\left(j\right)\]
Next we show that, for all \(k\in A\),
\[\sum_{(j,i)\in A^{2}}\left(\delta_{i}\left(k\right)\rho\left(i\mid j\right)+ \delta_{j}\left(k\right)\left(1-\rho\left(i\mid j\right)\right)\right)Q \left(i\mid j\right)\pi\left(j\right)=\pi\left(k\right)\]
obtaining \(\left(\hat{M}\hat{\pi}\right)_{\left(k,h\right)}=Q\left(h\mid k\right)\pi \left(k\right)=\hat{\pi}_{\left(k,h\right)}\). Indeed
\[\sum_{(j,i)\in A^{2}}\left(\delta_{i}\left(k\right)\rho\left(i \mid j\right)+\delta_{j}\left(k\right)\left(1-\rho\left(i\mid j\right) \right)\right)Q\left(i\mid j\right)\pi\left(j\right)\] \[=\sum_{(j,i)\in A^{2}}\delta_{i}\left(k\right)\rho\left(i\mid j \right)Q\left(i\mid j\right)\pi\left(j\right)+\sum_{(j,i)\in A^{2}}\delta_{j} \left(k\right)Q\left(i\mid j\right)\pi\left(j\right)-\sum_{(j,i)\in A^{2}} \delta_{j}\left(k\right)\rho\left(i\mid j\right)Q\left(i\mid j\right)\pi\left(j\right)\] \[=\sum_{j\in A}\rho\left(k\mid j\right)Q\left(k\mid j\right)\pi \left(j\right)+\sum_{i\in A}Q\left(i\mid k\right)\pi\left(k\right)-\sum_{i\in A }\rho\left(i\mid k\right)Q\left(i\mid k\right)\pi\left(k\right)\]
The central summand \(\underset{i\in A}{\sum}Q\left(i\mid k\right)\pi\left(k\right)\) equals \(\pi\left(k\right)\); we conclude by showing that \(M\pi=\pi\) implies that
\[\underset{j\in A}{\sum}\rho\left(k\mid j\right)Q\left(k\mid j\right)\pi\left(j \right)=\underset{i\in A}{\sum}\rho\left(i\mid k\right)Q\left(i\mid k\right) \pi\left(k\right)\]
in fact
\[\underset{j\in A}{\sum}\rho\left(k\mid j\right)Q\left(k\mid j \right)\pi\left(j\right) =\underset{j\neq k}{\sum}\rho\left(k\mid j\right)Q\left(k\mid j \right)\pi\left(j\right)+\rho\left(k\mid k\right)Q\left(k\mid k\right)\pi\left( k\right)\] \[=\underset{j\neq k}{\sum}M\left(k\mid j\right)\pi\left(j \right)+\rho\left(k\mid k\right)Q\left(k\mid k\right)\pi\left(k\right)\] \[=-M\left(k\mid k\right)\pi\left(k\right)+\underset{j\in A}{\sum}M \left(k\mid j\right)\pi\left(j\right)+\rho\left(k\mid k\right)Q\left(k\mid k \right)\pi\left(k\right)\] \[=-M\left(k\mid k\right)\pi\left(k\right)+\pi\left(k\right)+\rho \left(k\mid k\right)Q\left(k\mid k\right)\pi\left(k\right)\] \[=\left(1-M\left(k\mid k\right)\right)\pi\left(k\right)+\rho \left(k\mid k\right)Q\left(k\mid k\right)\pi\left(k\right)\] \[=\underset{i\neq k}{\sum}\rho\left(i\mid k\right)Q\left(i\mid k \right)\pi\left(k\right)+\rho\left(k\mid k\right)Q\left(k\mid k\right)\pi \left(k\right)\] \[=\underset{i\in A}{\sum}\rho\left(i\mid k\right)Q\left(i\mid k \right)\pi\left(k\right)\]
So far we have not used the fact that \(M\) is primitive; this is used next to show that \(\hat{M}\) is primitive too. By irreducibility of \(M\), for all \(i,k\in A\), there exists a finite sequence
\[i=i_{0},i_{1},i_{2},...,i_{n}=k \tag{63}\]
satisfying \(i_{a+1}\neq i_{a}\) and \(M\left(i_{a+1}\mid i_{a}\right)>0\) for all \(a=0,...,n-1\). By the definition of \(M\), it follows that
\[Q\left(i_{a+1}\mid i_{a}\right)>0 \tag{64}\]
for all \(a=0,...,n-1\).
Consider any \(\left(j,i\right),\left(k,h\right)\in\mathcal{X}\), and a chain \(i_{0},i_{1},i_{2},...,i_{n}\) satisfying (63) and (64). The derived chain
\[\left(j,i\right)=\left(j,i_{0}\right),\left(i_{0},i_{1}\right),\left(i_{1},i _{2}\right)...,\left(i_{n-1},i_{n}\right),\left(i_{n},h\right)=\left(k,h\right)\]
belongs to \(\mathcal{X}\) because \(\left(j,i_{0}\right)=\left(j,i\right)\in\mathcal{X}\), \(\left(i_{n},h\right)=\left(k,h\right)\in\mathcal{X}\), and also \(\left(i_{a},i_{a+1}\right)\in\mathcal{X}\) because of (64). Now
\[\hat{M}\left(\left(i_{0},i_{1}\right)\mid\left(j,i_{0}\right)\right)=\left( \delta_{i_{0}}\left(i_{0}\right)\rho\left(i_{0}\mid j\right)+\delta_{j}\left( i_{0}\right)\left(1-\rho\left(i_{0}\mid j\right)\right)\right)Q\left(i_{1}\mid i_{0}\right)\]
which is strictly positive because, \(\rho\left(i_{0}\mid j\right)=\rho\left(i\mid j\right)>0\) (since \(\rho\) is positive) and \(Q\left(i_{1}\mid i_{0}\right)>0\). Moreover,
\[\hat{M}\left(\left(i_{a+1},i_{a+2}\right)\mid\left(i_{a},i_{a+1}\right)\right) =\left(\delta_{i_{a+1}}\left(i_{a+1}\right)\rho\left(i_{a+1}\mid i_{a}\right) +\delta_{i_{a}}\left(i_{a+1}\right)\left(1-\rho\left(i_{a+1}\mid i_{a}\right) \right)\right)Q\left(i_{a+2}\mid i_{a+1}\right)\]
which is strictly positive for \(a=0,...,n-2\), because \(\rho\left(i_{a+1}\mid i_{a}\right)>0\) and \(Q\left(i_{a+2}\mid i_{a+1}\right)>0\). Finally,
\[\hat{M}\left(\left(i_{n},h\right)\mid\left(i_{n-1},i_{n}\right)\right)=\left( \delta_{i_{n}}\left(i_{n}\right)\rho\left(i_{n}\mid i_{n-1}\right)+\delta_{i_{ n-1}}\left(i_{n}\right)\left(1-\rho\left(i_{n}\mid i_{n-1}\right)\right)\right)Q \left(h\mid i_{n}\right)\]
which is strictly positive, because \(\rho\left(i_{n}\mid i_{n-1}\right)>0\) and \(Q\left(h\mid i_{n}\right)=Q\left(h\mid k\right)>0\).
This shows irreducibility of \(\hat{M}\). Having proved irreducibility, primitivity can be established by exhibiting a non-zero diagonal element in the transition matrix \(\hat{M}\). By definition, for all \(\left(j,i\right)\in\mathcal{X}\)
\[\hat{M}\left(\left(j,i\right)\mid\left(j,i\right)\right)=\left(\delta_{i} \left(j\right)\rho\left(i\mid j\right)+\delta_{j}\left(j\right)\left(1-\rho \left(i\mid j\right)\right)\right)Q\left(i\mid j\right)\]
Positivity of \(\rho\) guarantees that \(\delta_{j}\left(j\right)\left(1-\rho\left(i\mid j\right)\right)>0\), while \(Q\left(i\mid j\right)>0\) by definition of \(\mathcal{X}\).
All the assumptions of Howard (1971) p. 713 are then satisfied by the semi-Markov chain with embedded Markov chain \(\hat{M}\left(\left(k,h\right)\mid\left(j,i\right)\right)\) and holding times \(\hat{T}\left(\left(k,h\right)\mid\left(j,i\right)\right)=\mathrm{RT}_{i,j}\).31 Hence the limit as \(t\rightarrow\infty\) of the probability \(\phi_{\left(j,i\right)}\left(t\right)\) with which comparison \(\left(j,i\right)\) is taking place at time \(t\) is given by
Footnote 31: Observe that holding times are independent of the “next state” \(\left(k,h\right)\) therefore “average waiting times” are just average holding times (see Howard, 1971, p. 691).
\[\phi_{\left(j,i\right)} =\frac{\hat{\pi}\left(j,i\right)\tau_{\mathrm{RT}}\left(i\mid j \right)}{\sum\limits_{\left(k,h\right)\in\mathcal{X}}\hat{\pi}\left(k,h \right)\tau_{\mathrm{RT}}\left(h\mid k\right)}=\frac{Q\left(i\mid j\right) \pi\left(j\right)\tau_{\mathrm{RT}}\left(i\mid j\right)}{\sum\limits_{ \left(k,h\right)\in\mathcal{X}}Q\left(h\mid k\right)\pi\left(k\right)\tau_{ \mathrm{RT}}\left(h\mid k\right)}\] \[=\frac{Q\left(i\mid j\right)\pi\left(j\right)\tau_{\mathrm{RT} }\left(i\mid j\right)}{\sum\limits_{\left(k,h\right)\in A^{2}}Q\left(h\mid k \right)\pi\left(k\right)\tau_{\mathrm{RT}}\left(h\mid k\right)}\qquad\forall \left(j,i\right)\in\mathcal{X}\]
The same is true if \(\left(j,i\right)\notin\mathcal{X}\), because in that case \(\phi_{\left(j,i\right)}\left(t\right)=0\) for all \(t\) and \(Q\left(i\mid j\right)=0\). Thus
\[\lim_{t\rightarrow\infty}p_{N_{t}}\left(j\right) =\lim_{t\rightarrow\infty}\sum\limits_{i\in A}\phi_{\left(j,i \right)}\left(t\right)=\sum\limits_{i\in A}\lim_{t\rightarrow\infty}\phi_{ \left(j,i\right)}\left(t\right)=\sum\limits_{i\in A}\phi_{\left(j,i\right)}\] \[=\frac{\pi\left(j\right)\tau_{j}}{\sum\limits_{k\in A}\pi\left(k \right)\tau_{k}}\]
As desired. \(\blacksquare\)
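To make the construction above concrete, the following is a small numerical sketch (ours, not part of the original argument): for a randomly generated exploration matrix \(Q\), a positive acceptance rule \(\rho\) and response times \(\tau_{\mathrm{RT}}\), it builds the embedded matrix \(\hat{M}\) on pairs, verifies that \(\hat{\pi}\left(j,i\right)=Q\left(i\mid j\right)\pi\left(j\right)\) is stationary for it, and checks the limiting formula for \(\lim_{t\rightarrow\infty}p_{N_{t}}\left(j\right)\). The column-stochastic convention \(M[k,j]=M\left(k\mid j\right)\) and the specific random inputs are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                        # |A|
Q = rng.random((n, n)); Q /= Q.sum(axis=0)   # Q[i, j] = Q(i | j), columns sum to 1
rho = 0.2 + 0.6 * rng.random((n, n))         # rho[i, j] = rho(i | j), strictly positive
tau = 0.5 + rng.random((n, n))               # tau[i, j] = tau_RT(i | j)

# M(k | j) = rho(k | j) Q(k | j) for k != j, diagonal chosen so columns sum to 1
M = rho * Q
np.fill_diagonal(M, 0.0)
np.fill_diagonal(M, 1.0 - M.sum(axis=0))

# stationary distribution pi of M (column-stochastic convention: M @ pi = pi)
eigvals, eigvecs = np.linalg.eig(M)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

# embedded chain on pairs (j, i): "incumbent j is being compared with i"
idx = [(j, i) for j in range(n) for i in range(n)]
M_hat = np.zeros((n * n, n * n))
for b, (j, i) in enumerate(idx):
    for a, (k, h) in enumerate(idx):
        M_hat[a, b] = ((k == i) * rho[i, j] + (k == j) * (1 - rho[i, j])) * Q[h, k]

pi_hat = np.array([Q[i, j] * pi[j] for (j, i) in idx])
assert np.allclose(M_hat.sum(axis=0), 1.0)   # M_hat is a stochastic matrix
assert np.allclose(M_hat @ pi_hat, pi_hat)   # pi_hat is stationary for M_hat

# limiting occupancy of comparison (j, i), weighted by mean response times,
# and the induced limit of p_{N_t}(j)
phi = pi_hat * np.array([tau[i, j] for (j, i) in idx])
phi /= phi.sum()
p_limit = np.array([sum(phi[b] for b, (j, _) in enumerate(idx) if j == jj) for jj in range(n)])
tau_bar = np.array([Q[:, j] @ tau[:, j] for j in range(n)])
assert np.allclose(p_limit, pi * tau_bar / (pi @ tau_bar))
print(np.round(p_limit, 4))
```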
Consider, like in Cerreia-Vioglio et al. (2022, Section 2), a symmetric DDM and an exploration matrix with off diagonal entries that are inversely proportional to mean response times. This means that there exists \(w>0\) such that
\[Q\left(i\mid j\right)=\frac{w}{\tau_{\mathrm{RT}}\left(i\mid j\right)}\qquad \forall i\neq j\]
so that
\[Q\left(j\mid j\right)=1-\sum\limits_{i\neq j}\frac{w}{\tau_{\mathrm{RT}}\left( i\mid j\right)}\qquad\forall j\]
Also assume that \(\tau_{\text{RT}}\left(j\mid j\right)<\delta\) (small) for all \(j\in A\).32 With this, for all \(j\in A\),
Footnote 32: The Drift Diffusion Model makes no predictions on the response time for comparisons of an alternative with itself. But, it makes sense to assume that the decision unit takes almost no time in realizing that no actual comparison needs to be made.
\[\bar{\tau}_{j} =\underset{i\in A}{\sum}Q\left(i\mid j\right)\tau_{\text{RT}} \left(i\mid j\right)\] \[=\underset{i\neq j}{\sum}\frac{w}{\tau_{\text{RT}}\left(i\mid j \right)}\tau_{\text{RT}}\left(i\mid j\right)+\left(1-\underset{i\neq j}{\sum} \frac{w}{\tau_{\text{RT}}\left(i\mid j\right)}\right)\tau_{\text{RT}}\left(j \mid j\right)=\left(\left|A\right|-1\right)w+\delta_{j}\]
with \(0\leq\delta_{j}<\delta\). Since \(\delta\) is small, we have
\[\bar{\tau}_{j}\approx\left(\left|A\right|-1\right)w\]
irrespective of \(j\), so that \(\alpha\left(j\right)\) is approximately constant in (36) and
\[\underset{t\rightarrow\infty}{\lim}p_{N_{t}}\left(j\right)\approx\frac{e^{ \lambda v\left(j\right)}}{\underset{k\in A}{\sum}e^{\lambda v\left(k\right)}} \qquad\forall j\in A\]
that is, the limit probability is approximately of the multinomial logit type. |
2303.09958 | A combinatorial study on product of filter large sets | Sets satisfying Central sets theorem and other Ramsey theoretic large sets
were studied extensively in literature. Hindman and Strauss proved that product
of some of these large sets is again large. In this paper we show that if we
take two combinatorially large sets along idempotent filters, then their
product is also a filter large set. The techniques used here to prove our
results are completely elementary in nature. | Sujan Pal, Jyotirmoy Poddar | 2023-03-17T13:28:28Z | http://arxiv.org/abs/2303.09958v1 | # A combinatorial study on product of filter large sets
###### Abstract.
Sets satisfying Central sets theorem and other Ramsey theoretic large sets were studied extensively in literature. Hindman and Strauss proved that product of some of these large sets is again large. In this paper we show that if we take two combinatorially large sets along idempotent filters, then their product is also a filter large set. The techniques used here to prove our results are completely elementary in nature.
## 1. Introduction
Ramsey theory is a broad area of research spanning combinatorics, number theory and related fields, in which one searches for large structures. Stated more precisely, if we have a "highly organized" structure and we divide it into finitely many cells, then at least one of the cells must contain the "highly organized" structure. For example, we can go back to Schur [S] in 1916 or to van der Waerden [vdW] in 1927 for two basic milestones of Ramsey theory.
**Theorem 1.1** (Schur).: _Let \(r\in\mathbb{N}\). If we divide \(\mathbb{N}\) into finitely many cells \(\{A_{i}:i=1,2,\ldots r\}\) there exist \(i\in\{1,2,\ldots,r\}\) and \(x\), \(y\) in \(\mathbb{N}\) such that \(\{x,y,x+y\}\subseteq A_{i}\)._
Proof.: [S]
**Theorem 1.2** (van der Waerden).: _Let \(l\), \(r\in\mathbb{N}\). If we divide \(\mathbb{N}\) into finitely many cells \(\{A_{i}:i=1,2,\ldots r\}\) then for any given \(l\), \(\exists\ i\in\{1,2,\ldots,r\}\) and \(a,\,d\in\mathbb{N}\) such that_
\[a,\,a+d,\,\ldots,\,a+ld\in A_{i}.\]
Proof.: [vdW]
Central sets have a very rich literature in Ramsey theory. These are the large sets which satisfy the Central Sets Theorem, originally proved by Furstenberg [F], with various versions proved later by other mathematicians. Over the years there have been many developments, especially towards the characterization of Central sets and the Central Sets Theorem, first by Furstenberg himself and then by Hindman, Strauss, De and many others in [F, DHS, DHS08, DHS09, HS, HS09, HB].
Not only Central sets but there are other combinatorially large sets like \(IP\) sets, \(J\) sets and many others which mathematicians like to study from Ramsey theoretic aspect. There are several techniques that mathematicians use to study the large sets, namely the algebra of Stone-Cech compactification, Ergodic theory or basic combinatorial tools.
In [HS10] Hindman used algebraic techniques to show that the product of two Ramsey theoretic large sets of the same type is again large. Goswami gave a combinatorial proof of these facts in [G]. In this paper we want to take the results further to a more general setting, and our approach is purely combinatorial. For that we need to revisit some definitions and concepts, and also define some new ones. In the second section we give the prior definitions and necessary preliminaries. In the third we prove the results concerning \(\mathcal{F}\)-syndetic and piecewise \(\mathcal{F}\)-syndetic sets. In the fourth section the product of filter \(J\) sets is studied, and the last section is devoted to the study of the product of central sets along filters.
## 2. Definitions and Preliminaries
We start with the basic definition of filters on a set \(S\).
**Definition 2.1**.: Let \(S\) be any set. Let \(\mathcal{U}\) be a non-empty set of subsets of \(S\). \(\mathcal{U}\) is called a _filter_ on \(S\) if it satisfies the following properties:
1. If \(A\), \(B\in\mathcal{U}\), then \(A\cap B\in\mathcal{U}\);
2. If \(A\in\mathcal{U}\) and \(A\subseteq B\subseteq S\), then \(B\in\mathcal{U}\);
3. \(\emptyset\notin\mathcal{U}\).
A classic example of a filter is the set of neighborhoods of a point in a topological space.
**Definition 2.2**.: An _ultrafilter_ on \(S\) is a maximal filter on \(S\). That is, an _ultrafilter_ on \(S\) is a filter on \(S\) which is not properly contained in any other filter on \(S\). Let \(S\) be any set, and let \(a\) be an element of \(S\). Then the collection of all subsets of \(S\) which contain \(a\) is called the _principal_ ultrafilter corresponding to \(a\in S\). In fact the principal ultrafilters are the only ones whose members can be explicitly defined.
We now give a brief review about the Stone-Cech compactification of a discrete semigroup. Let \(\left(S,\cdot\right)\) be any discrete semigroup and denote its Stone-Cech compactification by \(\beta S\). \(\beta S\) is the set of all ultrafilters on \(S\), where the points of \(S\) are identified with the principal ultrafilters. The basis for the topology is \(\left\{\bar{A}:A\subseteq S\right\}\), where \(\bar{A}=\left\{p\in\beta S:A\in p\right\}\). The operation of \(S\) can be extended to \(\beta S\) making \(\left(\beta S,\cdot\right)\) a compact, right topological semigroup containing \(S\) in its topological center. That is, for all \(p\in\beta S\), the function \(\rho_{p}:\beta S\rightarrow\beta S\) is continuous, where \(\rho_{p}\left(q\right)=q\cdot p\) and for all \(x\in S\), the function \(\lambda_{x}:\beta S\rightarrow\beta S\) is continuous, where \(\lambda_{x}\left(q\right)=x\cdot q\). For \(p,q\in\beta S\) and \(A\subseteq S\), \(A\in p\cdot q\) if and only if \(\left\{x\in S:x^{-1}A\in q\right\}\in p\), where \(x^{-1}A=\left\{y\in S:x\cdot y\in A\right\}\).
Since \(\beta S\) is a compact Hausdorff right topological semigroup, it has a smallest two sided ideal denoted by \(K\left(\beta S\right)\), which is the union of all of the minimal right ideals of \(\beta S\), as well as the union of all of the minimal left ideals of \(\beta S\). Every left ideal of \(\beta S\) contains a minimal left ideal and every right ideal of \(\beta S\) contains a minimal right ideal. The intersection of any minimal left ideal and any minimal right ideal is a group, and any two such groups are isomorphic. Any idempotent \(p\) in \(\beta S\) is said to be minimal if and only if \(p\in K\left(\beta S\right)\). Though Central sets were defined dynamically, there is an algebraic counterpart of this definition, established by V. Bergelson and N. Hindman in [HB]. For the sake of our work we need to revisit some important definitions. For more details see [HS12].
**Definition 2.3**.: Let \(\left(S,\cdot\right)\) be a semigroup and \(A\subseteq S\), then
1. The set \(A\) is thick if and only if for any finite subset \(F\) of \(S\), there exists an element \(x\in S\) such that \(F\cdot x\subset A\). That is, a thick set contains a translate of every finite subset of \(S\). For example, one can see that \(\cup_{n\in\mathbb{N}}\left[2^{n},2^{n}+n\right]\) is a thick set in \(\mathbb{N}\).
2. The set \(A\) is syndetic if and only if there exists a finite subset \(G\) of \(S\) such that \(\bigcup_{t\in G}t^{-1}A=S\). That is, a set is called syndetic if finitely many translates of it cover the entire semigroup. For example, the sets of even and odd numbers are both syndetic in \(\mathbb{N}\) (a small numerical illustration of this and the previous example is given after this definition list).
3. The sets which can be written as an intersection of a syndetic and a thick set are called _Piecewise syndetic_ sets. More formally a set \(A\) is _Piecewise syndetic_ if and only if there exists \(G\in\mathcal{P}_{f}\left(S\right)\) such that for every \(F\in\mathcal{P}_{f}\left(S\right)\), there exists \(x\in S\) such that \(F\cdot x\subseteq\bigcup_{t\in G}t^{-1}A\). Clearly the thick sets and syndetic sets are natural examples of _Piecewise syndetic_ sets. From definition one can immediately see that \(2\mathbb{N}\cap\bigcup_{n\in\mathbb{N}}\left[2^{n},2^{n}+n\right]\) is a nontrivial example of _Piecewise syndetic_ sets in \(\mathbb{N}\).
4. \(\mathcal{T}=\,^{\mathbb{N}}S\).
5. For \(m\in\mathbb{N}\), \(\mathcal{J}_{m}=\left\{\left(t\left(1\right),\ldots,t\left(m\right)\right)\in \mathbb{N}^{m}:t\left(1\right)<\ldots<t\left(m\right)\right\}.\)
6. Given \(m\in\mathbb{N}\), \(a\in S^{m+1}\), \(t\in\mathcal{J}_{m}\) and \(f\in F\), \[x\left(m,a,t,f\right)=\left(\prod_{j=1}^{m}\left(a\left(j\right)\cdot f\left( t\left(j\right)\right)\right)\right)\cdot a\left(m+1\right)\] where the terms in the product \(\prod\) are arranged in increasing order.
7. \(A\subseteq S\) is called a \(J\)-set iff for each \(F\in\mathcal{P}_{f}\left(\mathcal{T}\right)\), there exists \(m\in\mathbb{N}\), \(a\in S^{m+1}\), \(t\in\mathcal{J}_{m}\) such that, for each \(f\in F\), \[x\left(m,a,t,f\right)\in A.\]
8. If the semigroup \(S\) is commutative, the definition is rather simple. In that case, a set \(A\subseteq S\) is a \(J\)-set if and only if whenever \(F\in\mathcal{P}_{f}\left({}^{\mathbb{N}}S\right)\), there exist \(a\in S\) and \(H\in\mathcal{P}_{f}\left(\mathbb{N}\right)\), such that for each \(f\in F\), \(a+\sum_{t\in H}f(t)\in A\).
9. A set \(A\) which contains \(FP\left(\langle x_{n}\rangle_{n=1}^{\infty}\right)\) for some injective sequence \(\langle x_{n}\rangle_{n=1}^{\infty}\) in \(S\) is called an IP set, where \[FP\left(\langle x_{n}\rangle_{n=1}^{\infty}\right)=\left\{x_{i_{1}}\cdot x_{i_{2}}\cdot\cdots\cdot x_{i_{n}}:\left\{i_{1}<i_{2}<\cdots<i_{n}\right\}\subseteq\mathbb{N}\right\}.\]
10. Then a subset \(A\) of \(S\) is called central if and only if there is some minimal idempotent \(p\) such that \(A\in p\).
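The following small Python sketch (ours, purely illustrative and restricted to a finite window of \(\mathbb{N}\)) checks the two examples mentioned in items 1 and 2 above: it finds a translate of \(F=\{1,\ldots,10\}\) inside \(\cup_{n}\left[2^{n},2^{n}+n\right]\), and verifies that the translates of \(2\mathbb{N}\) by \(G=\{1,2\}\) cover an initial segment of \(\mathbb{N}\).

```python
# build (a truncated copy of) the thick set T = union of the intervals [2^n, 2^n + n]
T = set()
for n in range(1, 25):
    T.update(range(2**n, 2**n + n + 1))

def thickness_witness(F, universe, bound=2**20):
    # return some x with F + x contained in the given set, if one exists below the bound
    for x in range(1, bound):
        if all(f + x in universe for f in F):
            return x
    return None

F = set(range(1, 11))                 # F = {1, 2, ..., 10}
x = thickness_witness(F, T)
print(x, all(f + x in T for f in F))  # a shift of F into one of the long intervals

# syndeticity of 2N in (N, +): the translates by G = {1, 2} cover N
evens = {2 * k for k in range(1, 10**5)}
G = {1, 2}
print(all(any(t + m in evens for t in G) for m in range(1, 10**4)))  # True
```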
We will now discuss a different notion, which is the turning point towards the main topics of our current work. This concept was first defined in [SZZ]. Throughout this paper, \(\mathcal{F}\) will denote a filter on \(\left(S,\cdot\right)\). For every filter \(\mathcal{F}\) of \(S\), define \(\bar{\mathcal{F}}\subseteq\beta S\) by
\[\bar{\mathcal{F}}=\bigcap_{V\in\mathcal{F}}\bar{V}.\]
It is a routine check that \(\bar{\mathcal{F}}\) is a closed subset of \(\beta S\) consisting of ultrafilters which contain \(\mathcal{F}\). If \(\mathcal{F}\) is an idempotent filter, i.e., \(\mathcal{F}\subset\mathcal{F}\cdot\mathcal{F}\), then \(\bar{\mathcal{F}}\) becomes a closed subsemigroup of \(\beta S\), but the converse is not true. Throughout our article, we will consider only those filters \(\mathcal{F}\), for which \(\bar{\mathcal{F}}\) is a closed subsemigroup of \(\beta S\).
In light of this notion we can define the concept of piecewise \(\mathcal{F}\)-syndeticity both combinatorially and algebraically. For details see [SZZ].
**Definition 2.4**.: Let \(T\) be a closed subsemigroup of \(\beta S\) and \(\mathcal{F}\) be a filter on \(S\) such that \(\bar{\mathcal{F}}=T\).
(1) A subset \(A\) of \(S\) is \(\mathcal{F}\)-syndetic if for every \(V\in\mathcal{F}\), there is a finite set \(G\subseteq V\) such that \(G^{-1}A\in\mathcal{F}\).
(2) A subset \(A\subseteq S\) is piecewise \(\mathcal{F}\)-syndetic if for every \(V\in\mathcal{F}\), there is a finite \(F_{V}\subseteq V\) and \(W_{V}\in\mathcal{F}\) such that whenever \(H\subseteq W_{V}\) a finite subset, there is \(y\in V\) such that \(H\cdot y\subseteq F_{V}^{-1}A\).
Here is an algebraic characterization of piecewise \(\mathcal{F}\)-syndetic sets.
**Theorem 2.5**.: _Let \(T\) be a closed subsemigroup of \(\beta S\), and \(\mathcal{F}\)be the filter on \(S\) such that \(T=\bar{\mathcal{F}}\), also let \(A\subseteq S\). Then \(\bar{A}\cap K\left(T\right)\neq\emptyset\) if and only if \(A\) is piecewise \(\mathcal{F}\)-syndetic._
Proof.: See [SZZ].
Motivated by these concepts, the authors in [GP] defined the notions of Definition 2.3 in this new framework, which we will require in this paper.
**Definition 2.6**.: Let \(\left(S,\cdot\right)\) be a semigroup and \(\mathcal{F}\) be a filter on \(S\). Then
1. For any \(l\in\mathbb{N}\), and any \(l\)-sequences \(\left\langle x_{n}^{\left(i\right)}\right\rangle_{n=1}^{\infty}\) for \(i\in\left\{1,2,\cdots,l\right\}\), define the zigzag finite product \[ZFP\left(\left\langle x_{n}^{\left(i\right)}\right\rangle_{i,n=1,1}^{l,\infty }\right)=\left\{\begin{array}{c}\prod_{t\in H}y_{t}:H\in\mathcal{P}_{f} \left(\mathbb{N}\right)\text{ and }\\ y_{i}\in\left\{x_{i}^{\left(1\right)},x_{i}^{\left(2\right)},\cdots,x_{i}^{ \left(l\right)}\right\}\text{ for any}\,i\in\mathbb{N}\end{array}\right\}.\]
2. Let \(G\in\mathcal{P}_{f}\left({}^{\mathbb{N}}S\right)\); we say that \(G\) is \(\mathcal{F}\)-good if \(ZFP\left(G\right)\subseteq F\) for all \(F\in\mathcal{F}\).
3. A set \(B\subseteq S\) will be called an \(\mathcal{F}\)-\(J\) set if, for any finite collection of \(\mathcal{F}\)-good maps, say \(F\in\mathcal{P}_{f}\left({}^{\mathbb{N}}S\right)\), there exist \(a_{1},a_{2},\ldots,a_{m+1}\in S\) and \(\left\{h_{1},h_{2},\cdots,h_{m}\right\}\subset\mathbb{N}\) such that, for every \(f\in F\), \[a_{1}f\left(h_{1}\right)a_{2}f\left(h_{2}\right)\ldots a_{m}f\left(h_{m}\right)a_{m+1}\in B.\]
4. \(\mathcal{P}_{f}^{\mathcal{F}}\left({}^{\mathbb{N}}S\right)=\left\{F\in\mathcal{P}_{f}\left({}^{\mathbb{N}}S\right):F\text{ is }\mathcal{F}\text{-good}\right\}.\)
5. A set \(A\subseteq S\) is called \(\mathcal{F}\)-central if and only if there exists an idempotent ultrafilter \(p\in K\left(\bar{\mathcal{F}}\right)\) such that \(A\in p\).
Now we need to establish the concept of product filter. Let \(\mathcal{F}\) and \(\mathcal{G}\) be two given filters on two semigroups \(S\) and \(T\) respectively, such that \(\bar{\mathcal{F}}\) and \(\bar{\mathcal{G}}\) are two closed subsemigroups. Let \(\mathcal{H}\) be a filter generated by \(\mathcal{F}\times\mathcal{G}\) over \(S\times T\), i.e., \(\mathcal{H}\;=\;\left\{D:D\supset A\times B\,\text{for some}\,A\in\mathcal{F},B \in\mathcal{G}\right\}\). We will consider those \(\mathcal{H}\) which generates a closed subsemigroup in \(\beta\left(S\times T\right)\). The following lemma establishes the existence of such filter on the product space.
**Lemma 2.7**.: _If \(\mathcal{F}\) and \(\mathcal{G}\) are two idempotent filters on \(S\) and \(T\) respectively, then \(\mathcal{H}\) is an idempotent filter on \(S\times T\) and hence \(\bar{\mathcal{H}}\) is a closed subsemigroup of \(\beta\left(S\times T\right)\)._
Proof.: As \(\mathcal{F}\)and \(\mathcal{G}\) are idempotent filters, we have \(\mathcal{F}\subset\mathcal{F}\cdot\mathcal{F}\) and \(\mathcal{G}\subset\mathcal{G}\cdot\mathcal{G}\). Let \(A\in\mathcal{H}\), hence there is \(B\in\mathcal{F}\) and \(C\in\mathcal{G}\) such that \(B\times C\subset A\). So, \(\left\{x:x^{-1}B\in\mathcal{F}\right\}\in\mathcal{F}\) and \(\left\{y:y^{-1}C\in\mathcal{G}\right\}\in\mathcal{G}\) and this implies
\[\left\{\left(x,y\right):\left(x,y\right)^{-1}\left(B\times C\right)\in \mathcal{F}\times\mathcal{G}\right\}\in\mathcal{F}\times\mathcal{G}\]
Hence \(\left\{\left(x,y\right):\left(x,y\right)^{-1}A\in\mathcal{H}\right\}\in\mathcal{H}\). Hence \(\mathcal{H}\) is an idempotent filter and so \(\bar{\mathcal{H}}\) is a closed subsemigroup of \(\beta\left(S\times T\right)\).
## 3. Product of \(\mathcal{F}-\) Syndetic and Piecewise \(\mathcal{F}-\) Syndetic Sets
The following theorem shows that the product of two filter large syndetic sets is again filter large syndetic.
**Theorem 3.1**.: _If \(\mathcal{F}\)and \(\mathcal{G}\) are two idempotent filters on \(S\) and \(T\) respectively such that \(\bar{\mathcal{F}}\)and \(\bar{\mathcal{G}}\)are two closed subsemigroups and \(\mathcal{H}\) is a filter on \(S\times T\) generated by \(\mathcal{F}\)and \(\mathcal{G}\) such that \(\bar{\mathcal{H}}\)is a closed subsemigroup of \(\beta\left(S\times T\right)\). If \(A\) and \(B\) are \(\mathcal{F}\)-syndetic and \(\mathcal{G}\)-syndetic sets in \(S\) and \(T\) respectively then \(A\times B\) is an \(\mathcal{H}\)-syndetic set._
Proof.: This proof is a two liner. Let \(V\in\mathcal{H}\); then there exist \(C\in\mathcal{F}\) and \(D\in\mathcal{G}\) such that \(C\times D\subseteq V\). By definition, there exist finite sets \(F\subset C\) and \(G\subset D\) such that \(F^{-1}A\in\mathcal{F}\) and \(G^{-1}B\in\mathcal{G}\). Hence
\[\left(F\times G\right)^{-1}\left(A\times B\right)\in\mathcal{F}\times \mathcal{G}\subset\mathcal{H}\]
and so \(A\times B\) is \(\mathcal{H}\)-syndetic.
An analogous statement holds for filter thick sets: the product of an \(\bar{\mathcal{F}}\)-thick set in \(S\) and a \(\bar{\mathcal{G}}\)-thick set in \(T\) is an \(\bar{\mathcal{H}}\)-thick set in \(S\times T\). The proof is similar to the one above and is left to the reader.
For our proof of product of filter piecewise syndetic sets, we need an equivalent definition, different from the one given in the previous section in Def. 2.4.
**Definition 3.2**.: Let \(T\) be a closed subsemigroup of \(\beta S\) and \(\mathcal{F}\)be a filter on \(S\) such that \(\bar{\mathcal{F}}=T\) and let \(A\subseteq S\). \(A\) is called piecewise \(\mathcal{F}\)-syndetic if for every \(V\in\mathcal{F}\), there is a finite \(F_{V}\subseteq V\) and \(W_{V}\in\mathcal{F}\) such that the family
\[\left\{\left(x^{-1}F_{V}^{-1}A\right)\cap V:V\in\mathcal{F},x\in W_{V}\right\}\]
has the finite intersection property.
Now we are in a position to state the theorem.
**Theorem 3.3**.: _If \(\mathcal{F}\)and \(\mathcal{G}\) are two idempotent filters on \(S\) and \(T\) respectively such that \(\bar{\mathcal{F}}\)and \(\bar{\mathcal{G}}\)are two closed subsemigroups and \(\mathcal{H}\) is a filter on \(S\times T\) generated by \(\mathcal{F}\)and \(\mathcal{G}\) such that \(\bar{\mathcal{H}}\)is a closed subsemigroup of \(\beta\left(S\times T\right)\). If \(A\) and \(B\) are piecewise \(\mathcal{F}\)-syndetic and piecewise \(\mathcal{G}\)-syndetic sets in \(S\) and \(T\) respectively then \(A\times B\) is an piecewise \(\mathcal{H}\)-syndetic set._
Proof.: For every \(V\in\mathcal{H}\) there exists \(C\in\mathcal{F}\) and \(D\in\mathcal{G}\) such that \(C\times D\subseteq V\). Hence from the definition of filter piecewise syndeticity, there exists finite \(F_{C}\subset C\), \(F_{D}\subset D\) and \(W_{C}\in\mathcal{F}\), \(W_{D}\in\mathcal{G}\) such that, \(\left\{\left(x^{-1}F_{C}^{-1}A\right)\cap C:C\in\mathcal{F},x\in W_{C}\right\}\) and \(\left\{\left(y^{-1}F_{D}^{-1}B\right)\cap D:D\in\mathcal{G},y\in W_{D}\right\}\) both have the finite intersection property. Hence
\[\left\{\left(\left(x,y\right)^{-1}\left(F_{C}\times F_{D}\right)^{-1}\left(A \times B\right)\right)\cap\left(C\times D\right):\left(x,y\right)\in W_{C} \times W_{D},C\times D\in\mathcal{F}\times\mathcal{G}\right\}\]
has the finite intersection property. This implies
\[\left\{\left(\left(x,y\right)^{-1}\left(F_{C}\times F_{D}\right)^{-1}\left(A \times B\right)\right)\cap V:\left(x,y\right)\in W_{C}\times W_{D},V\in \mathcal{H}\right\}\]
has the finite intersection property, hence \(A\times B\) is piecewise \(\mathcal{H}\)-syndetic.
## 4. Product of Filter J sets
To prove the product case for \(J\)-sets, we need two lemmas which were proved in [G]. Here are the filter versions of those lemmas, but the proofs remain almost the same.
**Lemma 4.1**.: _Let \(\left(S,\cdot\right)\) be an arbitrary semigroup, let \(A\) be an \(\mathcal{F}-J\)-set in \(S\), and let \(F\in\mathcal{P}_{f}\left({}^{\mathbb{N}}S\right)\). Let_
\[\Theta=\left\{\begin{array}{c}L\in\mathcal{P}_{f}(\mathbb{N}):L=\left\{t \left(1\right),t\left(2\right),\ldots,t\left(m\right)\right\}_{<}\text{ and }\left(\exists a\in S^{m+1}\right)\left(\forall f\in F\right)\\ \left(a\left(1\right)\cdot f\left(t\left(1\right)\right)\cdot a\left(2\right) \ldots a\left(m\right)\cdot f\left(t\left(m\right)\right)\cdot a\left(m+1 \right)\in A\right)\end{array}\right\}.\]
_Let,\(\left\langle H_{n}\right\rangle_{n=1}^{\infty}\) be a sequence in \(\mathcal{P}_{f}(\mathbb{N})\) such that \(maxH_{n}<minH_{n+1}\) for each \(n\in\mathbb{N}\). There exists \(K\in\mathcal{P}_{f}(\mathbb{N})\) such that \(\bigcup_{n\in K}H_{n}\in\Theta\)._
Proof.: The proof is similar to the one in [G], so we skip it here.
**Lemma 4.2**.: _Let \(\left(S,\cdot\right)\) be an arbitrary semigroup, let \(A\) be an \(\mathcal{F}-J\)-set in \(S\), and let \(F\in\mathcal{P}_{f}\left({}^{\mathbb{N}}S\right)\). Let_
\[\Theta=\left\{\begin{array}{c}L\in\mathcal{P}_{f}(\mathbb{N}):L=\left\{t \left(1\right),t\left(2\right),\ldots,t\left(m\right)\right\}_{<}\text{ and }\left(\exists a\in S^{m+1}\right)\left(\forall f\in F\right)\\ \left(a\left(1\right)\cdot f\left(t\left(1\right)\right)\cdot a\left(2\right) \ldots a\left(m\right)\cdot f\left(t\left(m\right)\right)\cdot a\left(m+1 \right)\in A\right)\end{array}\right\}.\]
_Let, \(\left\langle H_{n}\right\rangle_{n=1}^{\infty}\) be a sequence in \(\mathcal{P}_{f}(\mathbb{N})\) such that \(maxH_{n}<minH_{n+1}\) for each \(n\in\mathbb{N}\). There is a union subsystem \(\left\langle G_{n}\right\rangle_{n=1}^{\infty}\) of \(\left\langle H_{n}\right\rangle_{n=1}^{\infty}\) such that \(FU\left(\left\langle G_{n}\right\rangle_{n=1}^{\infty}\right)\subseteq\Theta\)._
Proof.: Similar to the proof in [G].
Now we can state the filter version of the theorem.
**Theorem 4.3**.: _Let \(\mathcal{F}\)and \(\mathcal{G}\) are two idempotent filters on \(S\) and \(T\) respectively such that \(\bar{\mathcal{F}}\)and \(\bar{\mathcal{G}}\)are two closed subsemigroups and \(\mathcal{H}\) is a filter on \(S\times T\) generated by \(\mathcal{F}\)and \(\mathcal{G}\) such that \(\bar{\mathcal{H}}\)is a closed subsemigroup of \(\beta\left(S\times T\right)\). If \(A\) is an \(\mathcal{F}\)-\(J\) set and \(B\) is a \(\mathcal{G}\)-\(J\) set then \(A\times B\) is an \(\mathcal{H}\)-\(J\) set._
Proof.: Let \(H\in\mathcal{H}\), then there exists \(F\in\mathcal{F}\) and \(G\in\mathcal{G}\) such that \(F\times G\subseteq H\).
By definition we know that if \(\Gamma\) is an \(\mathcal{F}\)-\(J\) set, then for any finite collection \(\Theta\in\mathcal{P}_{f}\left({}^{\mathbb{N}}S\right)\) of \(\mathcal{F}\)-good maps, there exist \(a_{1},a_{2},\ldots,a_{m+1}\in S\) and \(\left\{h_{1},h_{2},\cdots,h_{m}\right\}\subset\mathbb{N}\) such that
\[a_{1}f\left(h_{1}\right)a_{2}f\left(h_{2}\right)\ldots a_{m}f\left(h_{m}\right) a_{m+1}\in\Gamma\]
for all \(f\in\Theta\). Also, since \(\Theta\) is an \(\mathcal{F}\)-good collection, for each \(F\in\mathcal{F}\) there exists \(k\in\mathbb{N}\) such that \(ZFP_{k}\left(\Theta\right)\subseteq F\).
We start with a collection of \(\mathcal{H}\)-good maps \(\tilde{H}\in\mathcal{P}_{f}\left({}^{\mathbb{N}}\left(S\times T\right)\right)\). This defines the collections \(\hat{F}=\left\{\pi_{1}\circ f:f\in\tilde{H},\text{ where }\pi_{1}\text{ is the projection on the first co-ordinate}\right\}\) and \(\hat{G}=\left\{\pi_{2}\circ f:f\in\tilde{H},\text{ where }\pi_{2}\text{ is the projection on the second co-ordinate}\right\}\). We take these collections as our \(\mathcal{F}\)-good and \(\mathcal{G}\)-good maps to work with.
So, since \(A\) is an \(\mathcal{F}\)-\(J\) set, for the collection \(\hat{F}\in\mathcal{P}_{f}\left({}^{\mathbb{N}}S\right)\) we get the desired \(\left(a_{1},a_{2},\ldots,a_{m+1}\right)\in S^{m+1}\) and \(\left\{h_{1},h_{2},\cdots,h_{m}\right\}\subset\mathbb{N}\) such that
\[\prod_{i=1}^{m}a_{i}f\left(h_{i}\right)\cdot a_{m+1}\in A\]
for all \(f\in\hat{F}\), and for the \(F\in\mathcal{F}\) chosen at the start of the proof, there exists \(k_{1}\in\mathbb{N}\) such that \(ZFP_{k_{1}}\left(\hat{F}\right)\subseteq F\).
Similarly, since \(B\) is a \(\mathcal{G}\)-\(J\) set, for the collection \(\hat{G}\in\mathcal{P}_{f}\left({}^{\mathbb{N}}T\right)\) we get \(\left(b_{1},b_{2},\ldots,b_{m+1}\right)\in T^{m+1}\) and \(\left\{g_{1},g_{2},\cdots,g_{m}\right\}\subset\mathbb{N}\) such that
\[\prod_{i=1}^{m}b_{i}g\left(g_{i}\right)\cdot b_{m+1}\in B\]
for all \(g\in\hat{G}\) and for \(G\in\mathcal{G}\), there exists \(k_{2}\in\mathbb{N}\) such that \(ZFP_{k_{2}}\left(\hat{G}\right)\subseteq G\).
So to start with, for \(H\in\mathcal{H}\), take \(k=\min\left\{k_{1},k_{2}\right\}\in\mathbb{N}\) such that \(ZFP_{k}\left(\tilde{H}\right)=ZFP_{k}\left(\hat{F},\hat{G}\right)\subseteq F\times G\subseteq H\). Then, by Lemmas 4.1 and 4.2, using the same argument as in [G], choose \(\left\{t\left(1\right),t\left(2\right),\ldots,t\left(m\right)\right\}_{<}\) such that for all \(f\in\tilde{H}\)
\[\prod_{i=1}^{m}a_{i}f\left(t\left(i\right)\right)\cdot a_{m+1}\in A\]
and
\[\prod_{i=1}^{m}b_{i}f\left(t\left(i\right)\right)\cdot b_{m+1}\in B.\]
For each \(i\in\left\{1,2,\ldots,m+1\right\}\) denote \(c_{i}=\left(a_{i},b_{i}\right)\), so that for \(c\in\left(S\times T\right)^{m+1}\) and for all \(f\in\tilde{H}\),
\[c_{1}f\left(t\left(1\right)\right)\cdot c_{2}f\left(t\left(2\right)\right) \ldots c_{m}f\left(t\left(m\right)\right)\cdot c_{m+1}\in A\times B.\]
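To illustrate the idea of the proof in the simplest commutative setting \(\left(\mathbb{N},+\right)\), the following toy brute-force search (ours, not part of the proof) takes two \(J\)-sets \(A,B\subseteq\mathbb{N}\) and finitely many sequences into \(\mathbb{N}\times\mathbb{N}\), and looks for one common index set \(H\) together with shifts \(a\) and \(b\) so that the first coordinates land in \(A\) and the second coordinates land in \(B\), mirroring the use of the projections \(\pi_{1},\pi_{2}\) above. The particular sets and sequences are arbitrary illustrative choices.

```python
from itertools import combinations

A = {m for m in range(1, 2000) if m % 5 == 0}   # a J-set in (N, +): multiples of 5
B = {m for m in range(1, 2000) if m % 7 == 3}   # another J-set in (N, +): 7N + 3

# finitely many sequences into the product semigroup N x N
fs = [lambda t: (t, 2 * t), lambda t: (3 * t, t * t)]

def first_shift(values, target, bound=500):
    # smallest shift a with a + v in target for every v in values, if any
    return next((a for a in range(1, bound) if all(a + v in target for v in values)), None)

def witness(max_t=15, max_size=3):
    for r in range(1, max_size + 1):
        for H in combinations(range(1, max_t + 1), r):
            # coordinate-wise sums over H for each sequence f
            sums = [tuple(map(sum, zip(*[f(t) for t in H]))) for f in fs]
            a = first_shift([s for s, _ in sums], A)   # shift for the S = N coordinate
            b = first_shift([s for _, s in sums], B)   # shift for the T = N coordinate
            if a is not None and b is not None:
                return a, b, H
    return None

# prints a triple (a, b, H): for each f, a + sum of first coordinates over H lies in A
# and b + sum of second coordinates over H lies in B
print(witness())
```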
## 5. Product of two Filter Quasi central and two Filter Central Sets
In [G] the author has proved the following theorem using the combinatorial characterization of quasi-central sets.
**Theorem 5.1**.: _Let \(\left(S,.\right)\) and \(\left(T,.\right)\) be semigroups, let \(A\) be a quasi-central set in \(S\), and let \(B\) be a quasi-central set in \(T\). Then \(A\times B\) is a quasi-central set in \(S\times T\)._
Proof.: [G, Theorem 2.4]
To prove the filter analogue of this theorem we need the corresponding filter version of the combinatorial characterization of quasi-central sets proved in [BPS].
**Theorem 5.2**.: _Let \(T\) be a closed subsemigroup of \(\beta S\), \(\mathcal{F}\) be a filter on \(S\) such that \(\bar{\mathcal{F}}=T\) and let \(A\subseteq S\). Statements (a), (b), (c), and (d) are equivalent and implied by statement (e)._
_If \(S\) is countable, all five statements are equivalent._
_(a) \(A\) is \(\mathcal{F}\)-quasi-central._
_(b) There is a \(FS\)-tree \(\mathcal{T}\) in \(A\) such that for each \(F\in\mathcal{P}_{f}(\mathcal{T})\), \(\cap_{f\in F}B_{f}\) is piecewise \(\mathcal{F}\)-syndetic._
_(c) There is a \(\ast\)-tree \(\mathcal{T}\) in \(A\) such that for each \(F\in\mathcal{P}_{f}(\mathcal{T})\), \(\cap_{f\in F}B_{f}\) is piecewise \(\mathcal{F}\)-syndetic._
_(d) There is a downward directed family \(\left(C_{F}\right)_{F\in I}\) of subsets of \(A\) such that_
_(i) for all \(F\in I\) and all \(x\in C_{F}\), there is some \(G\in I\) with \(C_{G}\subseteq x^{-1}C_{F}\) and_
_(ii) \(\left\{C_{F}:F\in I\right\}\) is piecewise \(\mathcal{F}\)-syndetic._
_(e) There is a decreasing sequence \(\left(C_{n}\right)_{n=1}^{\infty}\) of subsets of \(A\) such that_
_(i) for all \(n\in\mathbb{N}\) and all \(x\in C_{n}\), there is some \(m\in\mathbb{N}\) with \(C_{m}\subseteq x^{-1}C_{n}\) and_
_(ii) \(\left\{C_{n}:n\in\mathbb{N}\right\}\) is piecewise \(\mathcal{F}\)-syndetic._
Proof.: [BPS, Theorem 5.8]
Using this we will now be able to prove the following version.
**Theorem 5.3**.: _Let \((S,\cdot)\) and \((T,\cdot)\) be two countable semigroups with corresponding idempotent filters \(\mathcal{F}\) and \(\mathcal{G}\) on them such that \(\bar{\mathcal{F}}\) and \(\bar{\mathcal{G}}\) are closed subsemigroups, and let \(\mathcal{H}\) be the filter on \(S\times T\) generated by \(\mathcal{F}\) and \(\mathcal{G}\) such that \(\bar{\mathcal{H}}\) is a closed subsemigroup of \(\beta(S\times T)\). If \(A\) is an \(\mathcal{F}\)-quasi-central set and \(B\) is a \(\mathcal{G}\)-quasi-central set, then \(A\times B\) is an \(\mathcal{H}\)-quasi-central set._
Proof.: Let \(\langle C_{F}\rangle_{F\in I}\) be as guaranteed by Theorem 5.2(d) for \(A\), and \(\langle D_{G}\rangle_{G\in J}\) for \(B\). Direct \(\mathcal{I}\times\mathcal{J}\) by the ordering \((F,G)\geq\left(F^{{}^{\prime}},G^{{}^{\prime}}\right)\) if and only if \(F\geq F^{{}^{\prime}}\) and \(G\geq G^{{}^{\prime}}\). We just need to prove that this family satisfies the conditions of Theorem 5.2(d) to show that \(A\times B\) is an \(\mathcal{H}\)-quasi-central set in \(S\times T\).
Let \((F,G)\in\mathcal{I}\times\mathcal{J}\) and let \((x,y)\in C_{F}\times D_{G}\). By downward directedness of the families \(\langle C_{F}\rangle_{F\in I}\) and \(\langle D_{G}\rangle_{G\in J}\) we can pick \(H\in\mathcal{I}\) and \(K\in\mathcal{J}\) such that \(C_{H}\subseteq x^{-1}C_{F}\) and \(D_{K}\subseteq y^{-1}D_{G}\). Then \((H,K)\in\mathcal{I}\times\mathcal{J}\) and \(C_{H}\times D_{K}\subseteq\left(x,y\right)^{-1}\left(C_{F}\times D_{G}\right)\), which proves condition (i) of Theorem 5.2(d).
To prove condition (ii), we already know that for each \(F\in I\), \(C_{F}\) is piecewise \(\mathcal{F}\)-syndetic and for each \(G\in J\), \(D_{G}\) is piecewise \(\mathcal{G}\)-syndetic. Now by Theorem 3.3 this implies that \(C_{F}\times D_{G}\) is piecewise \(\mathcal{H}\)-syndetic for all \((F,G)\in\mathcal{I}\times\mathcal{J}\), and by Theorem 5.2 we are done.
Hindman first observed that the Cartesian product of two central sets should be central. Later Goswami proved the same result in [G] using a more elementary combinatorial technique, different from Hindman's proof using the algebra of \(\beta\mathbb{N}\). In this article we address the same question for the filter version. Our theorem is the following.
**Theorem 5.4**.: _Let \((S,\cdot)\) and \((T,\cdot)\) be two countable semigroups with corresponding idempotent filters \(\mathcal{F}\) and \(\mathcal{G}\) on them such that \(\bar{\mathcal{F}}\) and \(\bar{\mathcal{G}}\) are closed subsemigroups, and let \(\mathcal{H}\) be the filter on \(S\times T\) generated by \(\mathcal{F}\) and \(\mathcal{G}\) such that \(\bar{\mathcal{H}}\) is a closed subsemigroup of \(\beta(S\times T)\). If \(A\) is an \(\mathcal{F}\)-central set and \(B\) is a \(\mathcal{G}\)-central set, then \(A\times B\) is an \(\mathcal{H}\)-central set._
To prove this theorem we will use a combinatorial characterization of \(\mathcal{F}-\)central sets stated and proved in [BPS]. The statement is the following.
**Theorem 5.5**.: _Let \(T\) be a closed subsemigroup of \(\beta S\), \(\mathcal{F}\) be a filter on \(S\) such that \(\bar{\mathcal{F}}=T\) and let \(A\subseteq S\). Statements (a), (b), (c), and (d) are equivalent and implied by statement (e)._
_If \(S\) is countable, all five statements are equivalent._
_(a) \(A\) is \(\mathcal{F}\)-central._
_(b) There is a \(FS\)-tree \(\mathcal{T}\) in \(A\) such that \(\{B_{f}:f\in\mathcal{T}\}\) is collectionwise piecewise \(\mathcal{F}\)-syndetic._
_(c) There is a \(*\)-tree \(\mathcal{T}\) in \(A\) such that \(\{B_{f}:f\in\mathcal{T}\}\) is collectionwise piecewise \(\mathcal{F}\)-syndetic._
_(d) There is a downward directed family \((C_{F})_{F\in I}\) of subsets of \(A\) such that_
_(i) for all \(F\in I\) and all \(x\in C_{F}\), there is some \(G\in I\) with \(C_{G}\subseteq x^{-1}C_{F}\) and_
_(ii) \(\{C_{F}:F\in I\}\) is collectionwise piecewise \(\mathcal{F}\)-syndetic._
_(e) There is a decreasing sequence \((C_{n})_{n=1}^{\infty}\) of subsets of \(A\) such that_
_(i) for all \(n\in\mathbb{N}\) and all \(x\in C_{n}\), there is some \(m\in\mathbb{N}\) with \(C_{m}\subseteq x^{-1}C_{n}\) and_
_(ii) \(\{C_{n}:n\in\mathbb{N}\}\) is collectionwise piecewise \(\mathcal{F}\)-syndetic._
Proof.: [BPS, Theorem 5.7]
Our goal is to prove Theorem 5.4 using the equivalence in Theorem 5.5, similarly to the approach we took for the case of \(\mathcal{F}\)-quasi-central sets. The definition of collectionwise piecewise \(\mathcal{F}\)-syndeticity was given as follows in [BPS].
**Definition 5.6**.: Let \(\left(S,.\right)\) be a semigroup, \(T\) be a closed subsemigroup of \(\beta S\). \(\mathcal{F}\) be a filter on \(S\) such that \(\bar{\mathcal{F}}=T\). A family \(\mathcal{A}\subseteq\mathcal{P}\left(S\right)\) is collectionwise piecewise \(\mathcal{F}\)-syndetic if and only if for every \(V\in\mathcal{F}\)there exists \(G_{V}:\mathcal{P}_{f}\left(\mathcal{A}\right)\rightarrow\mathcal{P}_{f} \left(V\right)\) and \(\delta_{V}:\mathcal{P}_{f}(\mathcal{A})\rightarrow\mathcal{F}\) such that\(\{y^{-1}(G_{V}(\widehat{\mathcal{F}}))^{-1}(\cap\widehat{\mathcal{F}})\cap V:y\in \delta_{V}(\widehat{\mathcal{F}}),\widehat{\mathcal{F}}\in\mathcal{P}_{f}( \mathcal{A})\}\) has the finite intersection property. Where,
\[y^{-1}(G_{V}(\widehat{\mathcal{F}}))^{-1}(\cap\widehat{\mathcal{F}})=\cup_{t \in G_{V}(\widehat{\mathcal{F}})}y^{-1}t^{-1}(\cap\widehat{\mathcal{F}})\]
We now state the following lemma which, along with Theorem 5.5, proves Theorem 5.4.
**Lemma 5.7**.: _Let \(\left(S,\cdot\right)\) and \(\left(T,\cdot\right)\) be two countable semigroups with corresponding idempotent filters \(\mathcal{F}\)and \(\mathcal{G}\) on them such that \(\bar{\mathcal{F}}\) and \(\bar{\mathcal{G}}\) are closed subsemigroups and \(\mathcal{H}\)is a filter on \(S\times T\) generated by \(\mathcal{F}\)and \(\mathcal{G}\) such that \(\bar{\mathcal{H}}\)is a closed subsemigroup of \(\beta(S\times T)\). If there is a downward directed family \((C_{F})_{F\in I}\) of subsets of \(A\) which satisfies conditions of (d) in Theorem 5.5 and there is another downward directed family \((D_{G})_{G\in J}\) of subsets of \(B\) which satisfies conditions of (d) in Theorem 5.5 then \((C_{F}\times D_{G})_{(F,G)\in I\times J}\) is a downward directed family of \(A\times B\) which satisfies conditions of (d) in Theorem 5.5._
This implies that if \(A\) is an \(\mathcal{F}\)-central set and \(B\) is a \(\mathcal{G}\)-central set, then we get two collectionwise piecewise \(\mathcal{F}\)-syndetic and \(\mathcal{G}\)-syndetic families respectively, and by the previous lemma their product is a collectionwise piecewise \(\mathcal{H}\)-syndetic family, which gives that \(A\times B\) is \(\mathcal{H}\)-central. So it only remains to prove the previous lemma.
Proof.: Since \(A\) and \(B\) are \(\mathcal{F}\)-central and \(\mathcal{G}\)-central respectively, we get that there is a downward directed family \((C_{F})_{F\in I}\) of subsets of \(A\) and a downward directed family \((D_{G})_{G\in J}\) of subsets of \(B\) such that for all \(F\in I\) and all \(x\in C_{F}\), there is some \(\tilde{F}\in I\) with \(C_{\tilde{F}}\subseteq x^{-1}C_{F}\) and \(\mathcal{A}=\{C_{F}:F\in I\}\) is collectionwise piecewise \(\mathcal{F}\)-syndetic, and for all \(G\in J\) and all \(y\in D_{G}\), there is some \(\tilde{G}\in J\) with \(D_{\tilde{G}}\subseteq y^{-1}D_{G}\) and \(\mathcal{B}=\{D_{G}:G\in J\}\) is collectionwise piecewise \(\mathcal{G}\)-syndetic.
The condition (i) can be proved in a similar fashion as in Theorem 5.3. To prove the second condition, we already have that,
for the family \(\mathcal{A}\subseteq\mathcal{P}\left(S\right)\) for every \(V\in\mathcal{F}\)there exists \(G_{V}:\mathcal{P}_{f}\left(\mathcal{A}\right)\rightarrow\mathcal{P}_{f} \left(V\right)\) and \(\delta_{V}:\mathcal{P}_{f}(\mathcal{A})\rightarrow\mathcal{F}\) such that\(\{x^{-1}(G_{V}(\widehat{\mathcal{F}}))^{-1}(\cap\widehat{\mathcal{F}})\cap V:x\in \delta_{V}(\widehat{\mathcal{F}}),\widehat{\mathcal{F}}\in\mathcal{P}_{f}( \mathcal{A})\}\) has the finite intersection property and for the family \(\mathcal{B}\subseteq\mathcal{P}\left(T\right)\) for every \(W\in\mathcal{G}\) there exists \(G_{W}:\mathcal{P}_{f}\left(\mathcal{B}\right)\rightarrow\mathcal{P}_{f} \left(W\right)\) and \(\delta_{W}:\mathcal{P}_{f}(\mathcal{B})\rightarrow\mathcal{G}\) such that \(\{y^{-1}(G_{W}(\widehat{\mathcal{G}}))^{-1}(\cap\widehat{\mathcal{G}})\cap W:y \in\delta_{W}(\widehat{\mathcal{G}}),\widehat{\mathcal{G}}\in\mathcal{P}_{f}( \mathcal{B})\}\) has the finite intersection property. We define for the family \(\mathcal{A}\times\mathcal{B}=\{C_{F}\times D_{G}:(F,G)\in I\times J\}\subseteq \mathcal{P}\left(S\times T\right)\) for every \(D\in\mathcal{H}\), the maps
\[G_{D}:\mathcal{P}_{f}\left(\mathcal{A}\times\mathcal{B}\right)\rightarrow \mathcal{P}_{f}\left(D\right)\,\text{and}\,\delta_{D}:\mathcal{P}_{f}( \mathcal{A}\times\mathcal{B})\rightarrow\mathcal{H}\]
by
\[G_{D}\left(\widehat{\mathcal{F}}\times\widehat{\mathcal{G}}\right)=\left(G_{V} \left(\widehat{\mathcal{F}}\right),G_{W}\left(\widehat{\mathcal{G}}\right)\right)\]
and
\[\delta_{D}\left(\widehat{\mathcal{F}}\times\widehat{\mathcal{G}}\right)=\left( \delta_{V}\left(\widehat{\mathcal{F}}\right)\cup V,\delta_{W}\left(\widehat{ \mathcal{G}}\right)\cup W\right)\supset V\times W\]
respectively. So
\[\bigcap_{(x,y)\in\delta_{D}\left(\widehat{\mathcal{F}}\times\widehat{ \mathcal{G}}\right)}\left[\bigcup_{(s,t)\in G_{D}\left(\widehat{\mathcal{F}} \times\widehat{\mathcal{G}}\right)}{(x,y)}^{-1}\cdot{(s,t)}^{-1}\cdot\left( \bigcap\left(\widehat{\mathcal{F}}\times\widehat{\mathcal{G}}\right)\right) \bigcap D\right]\]
\[\supset\bigcap_{(x,y)\in\left(\delta_{V}\left(\widehat{\mathcal{F}}\right)\cup V,\delta_{W}\left(\widehat{\mathcal{G}}\right)\cup W\right)}\left[\bigcup_{(s,t)\in\left(G_{V}\left(\widehat{\mathcal{F}}\right),G_{W}\left(\widehat{\mathcal{G}}\right)\right)}{(x,y)}^{-1}\cdot{(s,t)}^{-1}\cdot\left(\bigcap\left(\widehat{\mathcal{F}}\times\widehat{\mathcal{G}}\right)\right)\bigcap D\right]\]

\[\supset\left(\bigcap_{x\in\delta_{V}\left(\widehat{\mathcal{F}}\right)}\left[\bigcup_{s\in G_{V}\left(\widehat{\mathcal{F}}\right)}x^{-1}\cdot s^{-1}\cdot\left(\bigcap\widehat{\mathcal{F}}\right)\bigcap\pi_{1}\left(D\right)\right],\ \bigcap_{y\in\delta_{W}\left(\widehat{\mathcal{G}}\right)}\left[\bigcup_{t\in G_{W}\left(\widehat{\mathcal{G}}\right)}y^{-1}\cdot t^{-1}\cdot\left(\bigcap\widehat{\mathcal{G}}\right)\bigcap W\right]\right)\neq\emptyset.\]
This proves that the product family also has the finite intersection property, which completes our proof.
The last theorem we proved uses the combinatorial characterization of \(\mathcal{F}\)-central sets. The proof for central sets was originally given by Hindman in [10] and combinatorially by Goswami in [G]. Goswami's technique used a different notion of collectionwise piecewise syndeticity, which was defined by Hindman in [10] and looks like the following.
**Definition 5.8**.: Let, \((S,.)\) be a semigroup. \(\mathcal{A}\subseteq\mathcal{P}(S)\) is collectionwise piecewise syndetic if and only if there exist functions \(G:\mathcal{P}_{f}(\mathcal{A})\rightarrow\mathcal{P}_{f}(S)\) and \(x:\mathcal{P}_{f}(\mathcal{A})\times\mathcal{P}_{f}(S)\to S\), such that \(\forall F\in\mathcal{P}_{f}(S)\) and \(\forall\mathcal{F},\mathcal{H}\in\mathcal{P}_{f}(\mathcal{A})\) with \(\mathcal{F}\subseteq\mathcal{H}\) one has,
\[F\cdot x(\mathcal{H},F)\subseteq\bigcup_{t\in G(\mathcal{F})}t^{-1}(\cap \mathcal{F}).\]
Hindman, in his book, stated that this definition is equivalent to another one phrased in terms of a collection having the finite intersection property. The corresponding lemma, which was left as an exercise in the book, is the following.
**Lemma 5.9**.: _Let, \((S,.)\) be a semigroup. Then these two are equivalent._
_(i) \(\mathcal{A}\subseteq\mathcal{P}(S)\) is collectionwise piecewise syndetic if and only if there exist functions \(G:\mathcal{P}_{f}(\mathcal{A})\to\mathcal{P}_{f}(S)\) and \(x:\mathcal{P}_{f}(\mathcal{A})\times\mathcal{P}_{f}(S)\to S\), such that \(\forall F\in\mathcal{P}_{f}(S)\) and \(\forall\mathcal{F},\mathcal{H}\in\mathcal{P}_{f}(\mathcal{A})\) with \(\mathcal{F}\subseteq\mathcal{H}\) one has,_
\[F\cdot x(\mathcal{H},F)\subseteq\bigcup_{t\in G(\mathcal{F})}t^{-1}(\cap \mathcal{F}).\]
_(ii) \(\mathcal{A}\subseteq\mathcal{P}(S)\) is collectionwise piecewise syndetic if and only if there exists a function \(G:\mathcal{P}_{f}(\mathcal{A})\to\mathcal{P}_{f}(S)\) such that \(\{y^{-1}(G(\mathcal{F}))^{-1}(\cap\mathcal{F}):y\in S\,\text{and}\,\mathcal{F }\in\mathcal{P}_{f}(\mathcal{A})\}\) has the finite intersection property. (Where, \(y^{-1}(G(\mathcal{F}))^{-1}(\cap\mathcal{F})=\bigcup_{t\in G(\mathcal{F})}y^{- 1}t^{-1}(\cap\mathcal{F})\))._
We give a brief outline of the proof here.
Proof.: \((i)\implies(ii)\)
Suppose there exist \(G:\mathcal{P}_{f}(\mathcal{A})\to\mathcal{P}_{f}(S)\) and \(x:\mathcal{P}_{f}(\mathcal{A})\times\mathcal{P}_{f}(S)\to S\) such that \(\forall F\in\mathcal{P}_{f}(S)\) and \(\forall\mathcal{F},\mathcal{H}\in\mathcal{P}_{f}(\mathcal{A})\) with \(\mathcal{F}\subseteq\mathcal{H}\) one has
\[F\cdot x(\mathcal{H},F)\subseteq\bigcup_{t\in G(\mathcal{F})}t^{-1}(\cap \mathcal{F}).\]
Then,
\[\left[\bigcap_{y\in F}(y^{-1}\cdot F)\right]\cdot x(\mathcal{H},F)\subseteq \bigcap_{y\in F}\bigcup_{t\in G(\mathcal{F})}\left(y^{-1}t^{-1}(\cap\mathcal{ F})\right)\]
which implies
\[x(\mathcal{H},F)\subseteq\left[\bigcap_{y\in F}(y^{-1}\cdot F)\right]\cdot x( \mathcal{H},F)\subseteq\bigcap_{y\in F}\bigcup_{t\in G(\mathcal{F})}\left(y^{ -1}t^{-1}(\cap\mathcal{F})\right)\]
since we can write \(x(\mathcal{H},F)=\{e\}\cdot x(\mathcal{H},F)\) where \(e\) is the identity element. So,
\[\bigcap_{y\in F}\bigcup_{t\in G(\mathcal{F})}\left(y^{-1}t^{-1}(\cap\mathcal{ F})\right)\neq\emptyset.\]
And hence,\(\{y^{-1}(G(\mathcal{F}))^{-1}(\cap\mathcal{F}):y\in S\,\text{and}\,\mathcal{F}\in \mathcal{P}_{f}(\mathcal{A})\}\) has the finite intersection property.
\((ii)\implies(i)\)
Suppose there exists \(G:\mathcal{P}_{f}(\mathcal{A})\to\mathcal{P}_{f}(S)\) such that \(\{y^{-1}(G(\mathcal{F}))^{-1}(\cap\mathcal{F}):y\in S\,\text{and}\,\mathcal{F}\in\mathcal{P}_{f}(\mathcal{A})\}\) has the finite intersection property. Then \(\forall F\in\mathcal{P}_{f}(S)\), there exists \(p\in S\) such that
\[p\in\bigcap_{y\in F}\bigcup_{t\in G(\mathcal{F})}\left(y^{-1}t^{-1}(\cap \mathcal{F})\right),\]
which gives
\[y.p\in\bigcup_{t\in G(\mathcal{F})}\left(t^{-1}(\cap\mathcal{F})\right)\, \text{for}\,\text{all}\,y\in F.\]
Defining \(x:\mathcal{P}_{f}(\mathcal{A})\times\mathcal{P}_{f}(S)\to S\) by \(x(\mathcal{H},F)=p\), we get the desired result.
Motivated by this we pose a question which we were unable to answer.
**Problem 5.10**.: Is it possible to get an equivalent lemma for collectionwise piecewise \(\mathcal{F}\)-syndetic sets and prove that the product of an \(\mathcal{F}\)-central and a \(\mathcal{G}\)-central set is \(\mathcal{H}\)-central using the alternative definition?
**Acknowledgment:** Both the authors acknowledge the CSIR-UGC NET fellowship grants with file No. 09/106(0199)/2019-EMR-I and 09/106(0184)/2019-EMR-I respectively. They also acknowledge the help of their supervisor Prof. Dibyendu De and of Dr. Sayan Goswami for their ideas during the making of this paper.
|
2305.13456 | Revisiting pre-trained remote sensing model benchmarks: resizing and
normalization matters | Research in self-supervised learning (SSL) with natural images has progressed
rapidly in recent years and is now increasingly being applied to and
benchmarked with datasets containing remotely sensed imagery. A common
benchmark case is to evaluate SSL pre-trained model embeddings on datasets of
remotely sensed imagery with small patch sizes, e.g., 32x32 pixels, whereas
standard SSL pre-training takes place with larger patch sizes, e.g., 224x224.
Furthermore, pre-training methods tend to use different image normalization
preprocessing steps depending on the dataset. In this paper, we show, across
seven satellite and aerial imagery datasets of varying resolution, that by
simply following the preprocessing steps used in pre-training (precisely, image
sizing and normalization methods), one can achieve significant performance
improvements when evaluating the extracted features on downstream tasks -- an
important detail overlooked in previous work in this space. We show that by
following these steps, ImageNet pre-training remains a competitive baseline for
satellite imagery based transfer learning tasks -- for example we find that
these steps give +32.28 to overall accuracy on the So2Sat random split dataset
and +11.16 on the EuroSAT dataset. Finally, we report comprehensive benchmark
results with a variety of simple baseline methods for each of the seven
datasets, forming an initial benchmark suite for remote sensing imagery. | Isaac Corley, Caleb Robinson, Rahul Dodhia, Juan M. Lavista Ferres, Peyman Najafirad | 2023-05-22T19:57:13Z | http://arxiv.org/abs/2305.13456v1 | # Revisiting pre-trained remote sensing model benchmarks: resizing and normalization matters
###### Abstract
Research in self-supervised learning (SSL) with natural images has progressed rapidly in recent years and is now increasingly being applied to and benchmarked with datasets containing remotely sensed imagery. A common benchmark case is to evaluate SSL pre-trained model embeddings on datasets of remotely sensed imagery with small patch sizes, e.g., \(32\times 32\) pixels, whereas standard SSL pre-training takes place with larger patch sizes, e.g., \(224\times 224\). Furthermore, pre-training methods tend to use different image normalization preprocessing steps depending on the dataset. In this paper, we show, across seven satellite and aerial imagery datasets of varying resolution, that by simply following the preprocessing steps used in pre-training (precisely, image sizing and normalization methods), one can achieve significant performance improvements when evaluating the extracted features on downstream tasks - an important detail overlooked in previous work in this space. We show that by following these steps, ImageNet pre-training remains a competitive baseline for satellite imagery based transfer learning tasks - for example we find that these steps give +32.28 to overall accuracy on the So2Sat random split dataset and +11.16 on the EuroSAT dataset. Finally, we report comprehensive benchmark results with a variety of simple baseline methods for each of the seven datasets, forming an initial benchmark suite for remote sensing imagery.2
Footnote 2: Experimental code, datasets, and model checkpoints will be made available in the TorchGeo library at [https://github.com/microsoft/torchgeo](https://github.com/microsoft/torchgeo) and are currently hosted at [https://github.com/isaaccorley/resize-is-all-you-need](https://github.com/isaaccorley/resize-is-all-you-need)
## 1 Introduction
With increasing frequency, self-supervised learning (SSL) models, foundation models, and transfer learning methods have been applied to remotely sensed imagery [31, 33, 19, 40, 6, 53, 18, 11, 35, 42, 10, 55, 56, 25, 49, 36, 39]. As such, rigorous benchmarks are needed to identify the strengths and weaknesses in the proposed methods.
A commonly used benchmark in any transfer learning setup is the use of embeddings from a model that is pretrained on the ImageNet (ILSVRC2012) dataset [13] - due to both the ease of implementation [9, 34] and strong performance when generalizing to unseen data [27]. However, even with fully convolutional neural networks, the size of image inputs to the model is an important factor that should be controlled for at test/inference time. Common large-scale benchmark libraries like PyTorch Image Models (timm) [57] and OpenCLIP [28] provide benchmark results trained at varying image sizes and evaluate at the same sizes as opposed to the original dataset size. Plainly put, models that are pretrained on ImageNet images that have been resized and cropped to a fixed image size (traditionally \(224\times 224\) or \(256\times 256\)) will produce the most relevant embeddings for transfer learning when they are given the same image size at test time.
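As a minimal sketch of this point (our own illustration, not the paper's exact pipeline), the snippet below extracts embeddings from an ImageNet pre-trained ResNet-50 via timm, once at a native \(64\times 64\) patch size and once after bilinear resizing to the \(224\times 224\) size used during pre-training; only the latter matches the pre-training input statistics.

```python
import timm
import torch
import torch.nn.functional as F

model = timm.create_model("resnet50", pretrained=True, num_classes=0)  # pooled features only
model.eval()

patches = torch.rand(8, 3, 64, 64)   # stand-in for a batch of 64 x 64 RGB patches in [0, 1]
resized = F.interpolate(patches, size=(224, 224), mode="bilinear", align_corners=False)

with torch.no_grad():
    emb_native = model(patches)      # embeddings at the native 64 x 64 size
    emb_resized = model(resized)     # embeddings at the pre-training size

print(emb_native.shape, emb_resized.shape)   # both are (8, 2048) feature vectors
```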
Satellite missions such as Sentinel-2 [15] and Landsat-8 [45] capture imagery over the Earth's surface at relatively low spatial resolutions, e.g. 10-60 meters/pixel, compared to the resolution of objects in natural imagery. Because of this, it is common for labeled datasets of remotely sensed imagery to contain images of smaller sizes, e.g. \(32\times 32\)[59], than traditional image classification datasets. Thus, if images from these datasets are used as-is with ImageNet pretrained models, then the results will be sub-optimal.
A similar story can be told with image normalization methods. A standard preprocessing method for ImageNet pre-trained models is to normalize all values in an image to a \([0,1]\) range and then perform channel-wise standardization with ImageNet statistics. However, as remotely sensed imagery usually has a higher bit-depth (or color-depth) than images in standard vision datasets (12 or 16-bit depth vs. 8-bit depth), different image normalization methods are usually applied. For example, a common method used with Sentinel-2 imagery is to divide all values by 10,000 (to convert the raw sensor values to reflectance values) and then use these as inputs in a network [35, 56]. If images that are normalized with one method are used with a network that is pre-trained under a different normalization method, then the results will also be sub-optimal.
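For concreteness, the two conventions can be contrasted in a few lines (a minimal sketch; the ImageNet statistics are the standard torchvision values, and the per-channel min-max scaling is only one of several ways to map raw sensor values to \([0,1]\)):

```python
import torch

# Standard ImageNet statistics used by torchvision pretrained models (RGB order).
IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def imagenet_normalize(img: torch.Tensor) -> torch.Tensor:
    """img: (3, H, W) raw values; min-max scale to [0, 1], then channel-wise standardize."""
    img = img.float()
    lo = img.amin(dim=(1, 2), keepdim=True)
    hi = img.amax(dim=(1, 2), keepdim=True)
    img = (img - lo) / (hi - lo + 1e-8)
    return (img - IMAGENET_MEAN) / IMAGENET_STD

def sentinel2_reflectance(img: torch.Tensor) -> torch.Tensor:
    """Common Sentinel-2 convention: divide raw sensor values by 10,000."""
    return img.float() / 10000.0
```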
We demonstrate that it is vital to consider how an embedding model was trained when using it for transfer learning on downstream remote sensing tasks. For example, through simple bilinear upsampling of input images from \(64\times 64\) to \(224\times 224\) on the EuroSAT RGB dataset [26], we find that accuracy of the embeddings generated by a ImageNet pretrained ResNet-50 [22] increases from 0.82
Figure 1: Difference in downstream task metrics, Overall Accuracy (OA) (multiclass) or mean Average Precision (mAP) (multilabel), after resizing images to 224 × 224 from the original, smaller, image size. ImageNet pre-trained models, for example, often are trained with 224 × 224 inputs and therefore do not produce useful embeddings with smaller image patches.
to 0.91. Similarly, performing a channel-wise standardization instead of re-scaling the image values to represent reflectance results in a performance increase from 0.66 to 0.91 (when combined with resizing to \(224\times 224\)). **Performing these steps correctly gives simple baselines, like ImageNet pre-training, results that are competitive with previously published methods.** Additionally, we benchmark several simple methods, including MOSAIKS [44] and a simple image statistic based feature extraction method, and find that they beat ImageNet and/or remote sensing SSL pretraining methods on several datasets.
While not particularly surprising, our results form a set of strong baselines that can be used to benchmark future methods for self-supervised learning with remotely sensed imagery against. Further, our experimental setup is open-sourced and can be easily appended to as the community focuses on different geospatial machine learning tasks.
Our main contributions are as follows:
* We propose a set of simple yet strong baseline methods, including an ImageNet pretrained ResNet-50, random convolutional features (RCF), and a simple image statistic feature extraction method, that outperform self-supervised pretrained models on several datasets. We have implemented these methods into the open source TorchGeo library [46] (see Appendix A).
* We present a set of benchmark results across seven geospatial machine learning datasets commonly used as downstream tasks for testing pre-trained model performance with our baseline methods.
* We demonstrate the importance of proper resizing and normalization of images for optimal performance and fair comparisons in geospatial machine learning benchmarks.
### Related Work
Recent works have shown that while many new deep learning architectures claim to achieve state-of-the-art performance due to their proposed novel model design, they in fact only do so because of inconsistencies in training strategies and hyperparameters when comparing to baselines and prior methods. Bello et al. [4] showed that by simply retraining with recent training techniques and tricks, the original ResNet [22] architecture significantly outperforms its own previous baselines and reaches a competitive top-1 ImageNet accuracy. Du et al. [16] reported the same findings for 3D ResNets [52] on video recognition tasks. Goyal et al. [21] examined similar effects for numerous architectures in the 3D point cloud classification field. Finally, Musgrave et al. [37] repeat the same analysis for metric learning methods. In other words, when all models are on the same playing field, performance gains from past methods over strong baselines tend to become insignificant.
Previous papers that explore the effect of resizing inputs on performance in convolutional neural networks include Richter et al. [43] and Touvron et al. [51]. Both papers investigate different
Figure 2: The effect of input image size on EuroSAT downstream performance (overall accuracy) across different ResNet models. By default, EuroSAT images are 64 × 64 pixels, however resizing to larger image sizes before embedding increases downstream accuracy under a KNN (\(k=5\)) classification model in all cases.
experimental setups by varying training and testing at different image sizes and empirically show that increasing the image size during inference improves performance, which begins to saturate around an image size of 256 x 256. However, both works strictly explore natural images with ImageNet pretraining, as opposed to the remotely sensed imagery that is the focus of this paper. Wang et al. [56] provide the closest evidence of this effect for remote sensing data, performing a short experiment whose linear probing results show a boost in performance as the input image size increases.
## 2 Methods
In this study we extract feature representations (or embeddings) from remotely sensed image datasets using a variety of methods (described below) while varying the image preprocessing steps. Specifically, we vary the image size that is passed through to the feature extractor using PyTorch's [41] torch.nn.functional.interpolate implementation with bilinear interpolation, and we vary the image normalization method between channel-wise standardization (i.e. the default practice for most ImageNet pretrained models), converting the input image values into a reflectance value (i.e. the default practice for most Sentinel-2 pretrained models), min-max normalization, or method-specific normalizations (e.g. the percentile normalization from [35]). In datasets that have multispectral information we run experiments using only the RGB channels, as well as all the channels (MSI)3.
Footnote 3: Note that for processing multispectral (MSI) imagery through ImageNet pretrained ResNets, we repeat the RGB weights in the first convolutional layer to account for the additional input bands. For SSL4EO MSI pretrained ResNets, we zero-pad channels to account for any bands not made available in datasets.
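The preprocessing variation itself is straightforward; the following sketch illustrates bilinear resizing followed by feature extraction with an ImageNet pretrained ResNet-50 (illustrative code, not our benchmark script; the weights enum assumes a recent torchvision release, and batch shapes are placeholders):

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

# ImageNet pretrained backbone; replacing the classifier head yields 2048-d embeddings.
backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(batch: torch.Tensor, size: int = 224) -> torch.Tensor:
    """batch: (N, 3, H, W) normalized images, e.g. 64 x 64 EuroSAT patches."""
    batch = F.interpolate(batch, size=(size, size), mode="bilinear", align_corners=False)
    return backbone(batch)  # (N, 2048)
```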
We extract feature representations using the following methods:
**ResNet-50 Random init. [22]**: A vanilla ResNet-50 with random weight initialization (following the default torchvision settings). The features generated by this and the following two ResNet-50 models are produced by the final global average pool operation and are 2048-dimensional.
**ResNet-50 ImageNet [13]**: A ResNet-50 that is pretrained on ImageNet with images of size 224x224 (default torchvision pretrained weights).
**ResNet-50 SSL4EO [56]**: A ResNet-50 that is pretrained using the MoCo-v2 [23; 7] self-supervised learning method on the SSL4EO dataset with 224x224 images.
**RCF (Random) [44]**: A feature extraction method that consists of projecting the input to a lower dimensional space using random convolutional features (RCF). We use the implementation
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Model** & **Weights** & **Size** & **RGB** & **MSI** \\ \hline \multirow{2}{*}{ResNet50} & \multirow{2}{*}{MoCo} & 64 & _94.11_ & 81.85 \\ & & 224 & _95.76_ & **93.65** \\ \hline \multirow{2}{*}{ResNet50} & \multirow{2}{*}{ImageNet} & 64 & 82.09 & 78.65 \\ & & 224 & 91.17 & 89.81 \\ \hline \multirow{2}{*}{ResNet50} & \multirow{2}{*}{Random} & 64 & 59.92 \(\pm\) 0.34 & 75.10 \(\pm\) 0.23 \\ & & 224 & 73.76 \(\pm\) 0.53 & 87.19 \(\pm\) 0.81 \\ \hline \multirow{2}{*}{RCF} & \multirow{2}{*}{Random} & 64 & 78.85 \(\pm\) 0.33 & 87.56 \(\pm\) 0.35 \\ & & 224 & 76.90 \(\pm\) 0.33 & 87.41 \(\pm\) 0.12 \\ \hline \multirow{2}{*}{RCF} & \multirow{2}{*}{Empirical} & 64 & 81.47 \(\pm\) 0.08 & _91.10 \(\pm\) 0.11_ \\ & & 224 & 77.88 \(\pm\) 0.08 & 90.14 \(\pm\) 0.15 \\ \hline Image Stat. & - & 64 & 76.94 & 89.56 \\ \hline \hline ViT-L & Scale-MAE [42] & 64 & 96.00* & - \\ ResNet18 & GASSL [2] & 64 & 89.51 & - \\ ResNet18 & SeCo [35] & 64 & 93.14 & - \\ ViT-L & SatMAE [10] & 224 & 98.94 & - \\ \hline \end{tabular}
\end{table}
Table 1: Results on the EuroSAT dataset [26] for multiclass classification using KNN (\(k=5\)). We report Overall Accuracy (OA) for both RGB and all MSI bands. We compare to fine-tuned performance of several SSL methods taken from their respective papers. *The Scale-MAE result uses a KNN-5 and is comparable to the other KNN results.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Model** & **Weights** & **Size** & **OA** \\ \hline \multirow{2}{*}{ResNet50} & \multirow{2}{*}{MoCo} & 34 & 98.15 \\ & & 224 & _99.86_ \\ \hline \multirow{2}{*}{ResNet50} & \multirow{2}{*}{ImageNet} & 34 & 96.55 \\ & & 224 & **99.89** \\ \hline \multirow{2}{*}{ResNet50} & \multirow{2}{*}{Random} & 34 & 91.64 \(\pm\) 0.66 \\ & & 224 & 98.57 \(\pm\) 0.08 \\ \hline \multirow{2}{*}{RCF} & \multirow{2}{*}{Random} & 34 & 99.40 \(\pm\) 0.06 \\ & & 224 & 99.29 \(\pm\) 0.07 \\ \hline \multirow{2}{*}{RCF} & \multirow{2}{*}{Empirical} & 34 & 99.65 \(\pm\) 0.02 \\ & & 224 & 98.85 \(\pm\) 0.06 \\ \hline Image Stat. & - & 28 & 99.60 \\ \hline \hline DeepSat [3] & Sup. & 28 & 93.92 \\ DeepSatv2 [32] & Sup. & 28 & 99.84 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on the SAT-6 dataset [3] for multiclass classification using KNN (\(k=5\)). We report Overall Accuracy (OA) and compare to the fully-supervised performance of DeepSAT and DeepSATv2 models taken from their respective papers.
from TorchGeo with 512 convolutional filters and a 3x3 kernel size. In the results we refer to this method as RCF with random weights.
**MOSAIKS / RCF (Empirical) [44]**: A feature extraction method similar to RCF but that initializes the weights using ZCA whitened patches sampled randomly from the training set. We use the implementation from TorchGeo with 512 convolutional filters and a 3x3 kernel size. In the results we refer to this method as RCF with empirical weights.
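Both RCF variants boil down to convolving the image with a set of fixed filters and pooling the responses. The following sketch is an illustrative re-implementation of the random-weight variant, not the TorchGeo code; the TorchGeo version additionally pools negative activations and supports the empirically sampled (ZCA-whitened) filters used by MOSAIKS:

```python
import torch
import torch.nn.functional as F

class RandomConvFeatures(torch.nn.Module):
    """Project images to features with fixed random 3x3 filters (RCF-style)."""

    def __init__(self, in_channels: int = 3, num_filters: int = 512, kernel_size: int = 3):
        super().__init__()
        # Filters are drawn once and never trained.
        self.register_buffer("weight", torch.randn(num_filters, in_channels, kernel_size, kernel_size))
        self.register_buffer("bias", torch.randn(num_filters))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = F.relu(F.conv2d(x, self.weight, self.bias, padding=1))
        return z.mean(dim=(2, 3))  # global average pooling -> (N, num_filters)
```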
**Image Statistics**: A hand crafted baseline method that consists of simply computing per-channel pixel statistics from the imagery. Given an image we compute the mean, standard deviation, minimum, and maximum value for each band and concatenate these into a simple \(4c\)-dimensional feature representation, where \(c\) is the number of input channels.
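This baseline can be written in a few lines (a sketch; shapes are illustrative):

```python
import torch

def image_statistics(batch: torch.Tensor) -> torch.Tensor:
    """batch: (N, C, H, W) -> (N, 4C) features: per-channel mean, std, min, max."""
    flat = batch.flatten(start_dim=2)  # (N, C, H*W)
    return torch.cat(
        [flat.mean(dim=2), flat.std(dim=2), flat.amin(dim=2), flat.amax(dim=2)], dim=1
    )
```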
### Evaluation
For evaluating the representation performance of a pretrained model it is common to perform "linear probing" on a given downstream task by training a linear model on the representations generated by the pre-trained model and measuring the performance of this linear model. However, this method is implemented very differently between papers - some papers use data augmentation [56] while others don't, and others use a variety of different optimizers (SGD, Adam, LARS), regularization methods4, and learning rates / learning rate schedules. Therefore, for fair evaluation we fit a K-Nearest-Neighbors (KNN) model [12] to extracted features from various datasets, setting \(k=5\), as performed similarly in [42; 53].
Footnote 4: For example, by default the Adam optimizer in PyTorch will not apply L2 regularization on the weights of the model (weight decay), while scikit-learn linear models are trained with L2 regularization by default.
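The resulting evaluation protocol is deliberately simple (a sketch using scikit-learn; variable names are illustrative):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def knn_eval(train_feats: np.ndarray, train_labels: np.ndarray,
             test_feats: np.ndarray, test_labels: np.ndarray, k: int = 5) -> float:
    """Fit a k-NN classifier on extracted embeddings and report overall accuracy."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(train_feats, train_labels)
    return accuracy_score(test_labels, knn.predict(test_feats))
```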
## 3 Datasets
The datasets used throughout our experiments were selected particularly because their original image sizes are small, which makes them ideal for showing the effects of resizing. These datasets are commonly benchmarked without resizing, making them well suited for quantifying the effect of image size on performance. We also select datasets from both low-resolution satellite sources and high-resolution aerial imagery.
**EuroSAT**: The EuroSAT dataset [26] is a land cover classification dataset of patches extracted from multispectral Sentinel-2 [15] imagery. The dataset contains 27,000 64 \(\times\) 64 10m spatial resolution images with 13 bands and labels for 10 land cover categories. We use the dataset splits defined in Neumann et al. [38].
**SAT-6**: The SAT-6 dataset [3] is a land cover classification dataset of patches extracted from aerial imagery from the National Agriculture Imagery Program (NAIP) [17]. The dataset contains 405,000 28 \(\times\) 28 RGBN patches at 1m spatial resolution and labels for 6 land cover categories. We use the train and test splits provided with the dataset.
**So2Sat**: The So2Sat dataset [59] is a local climate zone (LCZ) classification dataset of patches extracted from Sentinel-1 and Sentinel-2 imagery. For our experiments we only utilize the Sentinel-2 bands. The dataset contains 400,673 multispectral patches with 10 bands and at 10m spatial resolution. Each patch is of size 32 \(\times\) 32 and contains a single label from 17 total LCZ categories. We use the train and test splits from the Random and Culture-10 sets provided with the dataset.
**BigEarthNet**: The BigEarthNet dataset [47] is a multi-label land cover classification dataset of patches extracted from multispectral Sentinel-2 imagery. The dataset contains 590,326 120 \(\times\) 120 10m spatial resolution images with 12 bands and labels for 19 land cover categories. We use the splits provided with the dataset and defined in [48].
**TreeSatAI**: The TreeSatAI dataset [1] is a multi-sensor, multilabel tree species classification dataset of patches extracted from aerial and multispectral Sentinel-1 [50] and Sentinel-2 imagery. For our experiments we only utilize the Sentinel-2 bands. The dataset contains 50,381 10m spatial resolution images with 12 spectral bands, which are available in 6 \(\times\) 6 or 20 \(\times\) 20 sizes, and labels for 20 tree species categories. We use the train and test splits provided with the dataset.
**UC Merced**: The UC Merced (UCM) dataset [58] is a land use classification dataset that consists of 2,100 256 x 256 pixel aerial RGB images over 21 target classes. We use the train/val/test splits defined in Neumann et al. [38].
**RESISC45**: The RESISC45 dataset [8] is a scene classification dataset that consists of 45 scene classes and 31,500 256 x 256 pixel aerial RGB images extracted from Google Earth. We use the dataset splits defined in Neumann et al. [38].
## 4 Results and Discussion
### Fair Comparisons to ImageNet Pretraining
As stated in Section 1.1, prior research has shown the significance of resizing images during testing for ImageNet pretrained models. To emphasize this, we perform a short experiment comparing features extracted from the EuroSAT [26] dataset using a ResNet-18 pretrained with both the Seasonal Contrast (SeCo) method [35] and ImageNet. For fair evaluation, we compute downstream task results at the original image size 64 x 64 and resized to 224 x 224 with KNN and linear probe methods. For linear probing we utilize the exact same experimental setup and script as in [35] while only adding a resize transformation. As seen in Table 8, depending on the model used for evaluation, one pretraining method can appear better than another. Furthermore, while increasing the image size improves performance for both methods, the improvement is not equal for the two methods. When reading the linear probing results in [35], one would assume that the SSL pretrained model clearly outperforms ImageNet pretraining. However, as we can see, this is not the case, and further investigation is needed. Further, in Table 7, we observe that an ImageNet pretrained model outperforms the best reported results in SatMAE [10] in the same experimental setup.
### Image Size vs. Performance
Figure 2 shows how the performance of a variety of ResNet-50 models varies with input image size on the EuroSAT dataset when using just the RGB bands vs. all spectral bands as input. We observe in all cases that the default dataset image size (64 x 64 pixels) does not result in optimal performance. For example, resizing from 64 x 64 to 256 x 256 results in a 10 point increase in accuracy in a ResNet-50 that is pretrained on ImageNet. In Tables 1-5 we report performance from each method at
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & & \multicolumn{3}{c}{**Random**} & \multicolumn{2}{c}{**Culture-10**} \\ \cline{3-6}
**Model** & **Weights** & **Size** & **RGB** & **MSI** & **RGB** & **MSI** \\ \hline \multirow{2}{*}{ResNet50} & \multirow{2}{*}{MoCo} & 34 & 75.07 & 72.51 & 51.45 & 49.36 \\ & & 224 & **93.93** & **96.15** & **56.03** & **53.54** \\ \hline \multirow{2}{*}{ResNet50} & \multirow{2}{*}{ImageNet} & 34 & 66.21 & 56.18 & 47.76 & 42.11 \\ & & 224 & 92.99 & 88.46 & _54.53_ & _50.32_ \\ \hline \multirow{2}{*}{ResNet50} & \multirow{2}{*}{Random} & 34 & 46.19 \(\pm\) 0.19 & 55.06 \(\pm\) 0.35 & 29.10 \(\pm\) 0.30 & 35.47 \(\pm\) 0.18 \\ & & 224 & 71.74 \(\pm\) 1.87 & 84.10 \(\pm\) 0.32 & 34.16 \(\pm\) 0.23 & 45.68 \(\pm\) 0.50 \\ \hline \multirow{2}{*}{RCF} & \multirow{2}{*}{Random} & 34 & 72.67 \(\pm\) 0.45 & 89.40 \(\pm\) 0.14 & 30.92 \(\pm\) 0.11 & 45.23 \(\pm\) 0.33 \\ & & 224 & 74.22 \(\pm\) 0.44 & 89.72 \(\pm\) 0.11 & 31.19 \(\pm\) 0.21 & 45.36 \(\pm\) 0.36 \\ \hline \multirow{2}{*}{RCF} & \multirow{2}{*}{Empirical} & 34 & 71.00 \(\pm\) 0.32 & _95.37 \(\pm\) 0.06_ & 35.32 \(\pm\) 0.45 & 47.63 \(\pm\) 0.10 \\ & & 224 & 51.66 \(\pm\) 0.46 & 95.20 \(\pm\) 0.02 & 27.36 \(\pm\) 0.24 & 44.98 \(\pm\) 0.16 \\ \hline Image Stat. & - & 32 & 83.84 & 91.09 & 38.36 & 47.93 \\ \hline \hline \multirow{2}{*}{ResNet50} & \multirow{2}{*}{MoCo [56]} & 224 & - & - & - & 61.80 \\ & & DINO [5] & 224 & - & - & - & 57.00 \\ & & DINO [5] & 224 & - & - & - & 62.50 \\ & & MAE [24] & 224 & - & - & - & 60.00 \\ & & Sup. [56] & 224 & - & - & - & 57.50 \\ & & Sup. [56] & 224 & - & - & - & 59.30 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results on the So2Sat dataset [59] for multiclass classification using KNN (\(k=5\)). We report Overall Accuracy (OA) for both RGB and all MSI bands and for both the _Random_ and _Culture-10_ splits. We compare to both fully-supervised and linear probing results for several SSL methods.
the native resolution of the dataset and after resizing each image to 224x224 and observe performance improvements across all methods in nearly all cases.
To visualize the effects of resizing (and standard normalization), in Figure 3 we show t-SNE [54] plots of EuroSAT RGB features extracted using a ResNet-50 pretrained on ImageNet. The plot shows that EuroSAT classes are clearly separable at an input size of 224 x 224 while only partially separable at 32 x 32. Additionally, when resizing but not using any normalization, there are no clear clusters corresponding to the dataset classes. While we use an NVIDIA DGX server with 2x A100 GPUs to increase the speed of our benchmarks, we note that none of these methods actually require a GPU to perform inference or KNN classification on extracted features.
### Benchmarks
We perform thorough benchmarks using the methods described in Section 2 on each dataset from Section 3, using the evaluation metric common to that dataset, in Tables 1 through 7. In each experiment we fit a non-parametric k-nearest neighbor model with \(k=5\) to the train set. For deterministic methods we report a single value calculated over the test set for each dataset, while for stochastic methods we report the average \(\pm\) the standard deviation of the metric calculated over the test set over 5 runs with different random seeds. We bold the best performing of the baseline methods by column and italicize the second best performing method. Additionally, we show several fine-tuning, linear probing, and fully-supervised baselines from original dataset papers or other SSL remote sensing papers. Note that we perform these comparisons not with the goal of outperforming
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Weights**} & \multirow{2}{*}{**Size**} & \multicolumn{2}{c}{**RGB**} & \multicolumn{2}{c}{**MSI**} \\ \cline{3-6} & & & **F1** & **mAP** & **F1** & **mAP** \\ \hline \multirow{2}{*}{ResNet50} & \multirow{2}{*}{MoCo} & 120 & _68.99_ & _70.65_ & 63.61 & 64.64 \\ & & 224 & **72.56** & **74.81** & 68.33 & 70.17 \\ \hline \multirow{2}{*}{ResNet50} & \multirow{2}{*}{ImageNet} & 120 & 65.38 & 66.62 & 62.61 & 62.96 \\ & & 224 & 67.47 & 69.07 & 65.04 & 65.88 \\ \hline \multirow{2}{*}{ResNet50} & \multirow{2}{*}{Random} & 120 & 52.34 \(\pm\) 0.22 & 52.63 \(\pm\) 0.19 & 60.48 \(\pm\) 0.34 & 61.17 \(\pm\) 0.50 \\ & & 224 & 57.05 \(\pm\) 1.02 & 57.61 \(\pm\) 1.13 & 64.94 \(\pm\) 0.25 & 66.31 \(\pm\) 0.32 \\ \hline \multirow{2}{*}{RCF} & \multirow{2}{*}{Random} & 120 & 54.48 \(\pm\) 0.26 & 53.94 \(\pm\) 0.26 & 69.98 \(\pm\) 0.20 & 72.01 \(\pm\) 0.28 \\ & & 224 & 54.37 \(\pm\) 0.28 & 53.74 \(\pm\) 0.23 & 70.06 \(\pm\) 0.21 & 72.12 \(\pm\) 0.29 \\ \hline \multirow{2}{*}{RCF} & \multirow{2}{*}{Empirical} & 120 & 57.40 \(\pm\) 0.22 & 57.22 \(\pm\) 0.23 & _73.31 \(\pm\) 0.14_ & _76.18 \(\pm\) 0.19_ \\ & & 224 & 53.36 \(\pm\) 0.23 & 52.90 \(\pm\) 0.22 & **73.41 \(\pm\) 0.13** & **76.29 \(\pm\) 0.15** \\ \hline Image Stat. & - & 120 & 61.67 & 62.00 & 69.42 & 71.29 \\ \hline \hline S-CNN & BigEarthNet [47] & 120 & 67.59 & - & 70.98 & - \\ ResNet50 & GASSL [2] & 120 & - & 80.20 & - & - \\ ResNet50 & SeCo [35] & 120 & - & 82.62 & - & - \\ ViT-L & SatMAE [10] & 224 & - & 82.13 & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results on the BigEarthNet dataset [47] for 19-class multilabel classification using KNN (\(k=5\)). We report overall F1 score, and overall mean average precision (mAP). For reference, we compare to the fully supervised S-CNN as well as fine-tuned results from the GASSL, SeCo, and SatMAE SSL methods.
Figure 3: t-SNE [54] plots of EuroSAT test set embeddings extracted using a ResNet50 pretrained on ImageNet with different preprocessing. (left to right: 32 × 32 with normalization, 224 × 224 without normalization, 224 × 224 with normalization)
them, but to show transparently how the representation ability of these baselines compares to the state of the art. Finally, we note that our evaluation method is the same as that of Reed et al. [42] and indicate this with an asterisk where appropriate.
For the EuroSAT experiments we show results from GASSL [2], SeCo [35], and SatMAE [10] self-supervised methods that use fine-tuning on top of the pretrained network (as reported by SatMAE). We note that methods which use a Vision Transformer (ViT) [14] model are unable to accept input images with varying sizes and therefore we only report performance from their original training image size.
For the SAT-6 experiments we compare to the performance of the DeepSat [3] model proposed in the original SAT-6 dataset paper as well as the DeepSatv2 [32] model from a follow-up paper.
For the UC Merced experiments, we compare to the performance of SatMAE [10], Scale-MAE [42], and ConvMAE [20] as reported in the Scale-MAE paper.
Our results show the following:
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Model** & **Weights** & **Size** & **OA** \\ \hline ResNet-50 & MoCo & 256 & _73.24_ \\ ResNet-50 & ImageNet & 256 & **77.48** \\ ResNet-50 & Random & 256 & 36.30 \(\pm\) 0.25 \\ RCF & Random & 256 & 42.29 \(\pm\) 0.12 \\ RCF & Empirical & 256 & 36.15 \(\pm\) 0.36 \\ Image Stat. & - & 256 & 34.03 \\ \hline ViT-L & Scale-MAE [42] & 256 & 85.0 * \\ ViT-L & SatMAE [10] & 256 & 77.1* \\ ViT-L & ConvMAE [20] & 256 & 78.8* \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results on the RESISC45 dataset [8] for multiclass classification using KNN (\(k=5\)). We report Overall Accuracy (OA) and compare to the linear probing performance of the Scale-MAE, SatMAE, and ConvMAE methods taken from their respective papers. *The Scale-MAE result uses a KNN-5 and is comparable to the other KNN results.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & & & **RGB** & \multicolumn{2}{c}{**MSI**} \\ \cline{3-6}
**Model** & **Weights** & **Size** & **F1** & **mAP** & **F1** & **mAP** \\ \hline
**ResNet50** & MoCo & 34 & 29.21 & 29.93 & 37.65 & 36.24 \\ & & 224 & 37.68 & _37.57_ & 45.18 & 44.14 \\ \hline ResNet50 & ImageNet & 34 & 27.69 & 27.30 & 32.07 & 30.69 \\ & & 224 & **40.37** & **40.58** & 42.00 & 41.33 \\ \hline ResNet50 & Random & 34 & 29.37 \(\pm\) 0.42 & 29.08 \(\pm\) 0.18 & 36.47 \(\pm\) 0.34 & 34.73 \(\pm\) 0.15 \\ & & 224 & 35.42 \(\pm\) 0.33 & 34.75 \(\pm\) 0.43 & 49.09 \(\pm\) 0.83 & 48.48 \(\pm\) 0.89 \\ \hline RCF & Random & 34 & 33.15 \(\pm\) 0.21 & 32.15 \(\pm\) 0.09 & 52.24 \(\pm\) 0.35 & 51.83 \(\pm\) 0.33 \\ & & 224 & 32.37 \(\pm\) 0.20 & 31.29 \(\pm\) 0.18 & 52.49 \(\pm\) 0.17 & 51.99 \(\pm\) 0.43 \\ \hline RCF & Empirical & 34 & 31.70 \(\pm\) 0.06 & 31.13 \(\pm\) 0.17 & **56.00 \(\pm\) 0.04** & **56.08 \(\pm\) 0.25** \\ & & 224 & 28.93 \(\pm\) 0.47 & 28.50 \(\pm\) 0.23 & _55.60 \(\pm\) 0.13_ & _55.77 \(\pm\) 0.29_ \\ \hline Image Stat. & - & 20 & _38.39_ & 37.19 & 51.97 & 51.56 \\ \hline \hline LightGBM [29] & - & 20 & - & - & 52.52 & 61.66 \\ ViT & Presto [53] & 9 & - & - & 50.32 & 67.78 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results on the TreeSatAI dataset [1] for multilabel classification using KNN (\(k=5\)). We report overall F1 score and mean average precision mAP. We compare to the fully-supervised LightGBM performance and fine-tuned Presto SSL method.
* SSL4EO MoCo-v2 pretrained weights have the best overall performance across downstream tasks. They rank in the top-2 methods by performance for 6 out of the 7 RGB datasets, and 3 out of 5 multispectral datasets.
* The Scale-MAE pretrained model performs the best on the EuroSAT and RESISC45 datasets; however, it is outperformed by ImageNet pretraining on the UCM dataset.
* MOSAIKS (i.e. RCF with empirical weights) is a very strong baseline on the multispectral datasets and ranks in the top 2 methods by performance for 4 out of the 5 multispectral datasets (counting the Random and Culture-10 splits of So2Sat as separate datasets).
* The image statistic baseline outperforms ImageNet pretrained models on all but one of the multispectral datasets (and it is 0.25% lower than ImageNet in this case).
* In SAT-6 experiments, all methods except for the randomly initialized ResNet-50 achieve greater than 99% accuracy. Even the image statistic baseline achieves a 99.6% overall accuracy. This suggests that the dataset is too simple to be used as a benchmark for comparing models as it will be difficult to observe statistically significant changes in accuracy between 99.6% (any result worse than this would suggest a model that is less expressive than simply extracting image statistics) and 100%. Nevertheless, future work could explore this dataset in other settings, such as few-shot learning.
* The RCF-based methods (with both random and empirical weights) do not benefit from resizing inputs to larger image sizes; however, we leave further experiments (such as varying convolutional kernel size with input size, etc.) to future work.
* Out of the five datasets with multispectral information, adding the additional multispectral bands to the RGB bands degrades ResNet-50 ImageNet pretrained performance in two cases. However, in all cases, adding multispectral information increases the ResNet-50 random initialized performance. This further highlights the difference in distributions between ImageNet, natural imagery, and remotely sensed imagery.
* In the So2Sat dataset, switching from the Random set to the Culture-10 set decreases the accuracy of RCF methods more than the pre-trained models. We hypothesize that this is because the Culture-10 set tests geographic generalization, and RCF will only be able to use color/texture from the train set while the pre-trained models could potentially group similar patches across sets to similar feature representations.
## 5 Best Practices
To recap, below is a list of best practices we believe all remote sensing pre-training research should include in their analyses. While these may seem obvious, it is critical to follow these guidelines to produce accurate and transparent benchmarks that help the community understand the strengths and weaknesses of proposed methods.
1. **Always compare to simple baselines**: Performance across datasets can be misleading; therefore, always compare against simple and effective baselines. We recommend an ImageNet pretrained model, random convolutional features, and image statistics.
2. **Resize & Normalize**: Resize and normalize inputs to the same parameters as during training, for all methods being compared. For example, when comparing to ImageNet pretrained models perform min/max normalization on inputs to the range \([0,1]\), perform channel-wise standardization to scale inputs to \(\mu=0\) and \(\sigma=1\), and resize inputs to 224 \(\times\) 224.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Size** & **Weights** & **KNN (\(k=3\))** & **KNN (\(k=10\))** & **Linear Probe** \\ \hline \multirow{2}{*}{64} & SeCo & 84.04 & 84.11 & **93.14** \\ & ImageNet & **85.39** & **85.20** & 86.44 \\ \hline \multirow{2}{*}{224} & SeCo & 86.57 & 85.63 & **96.30** \\ & ImageNet & **90.54** & **90.63** & 93.13 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Comparison of SeCo [35] vs. ImageNet pretraining on the EuroSAT validation set. We show Overall Accuracy results for both KNN and linear probe at different image sizes.
3. **Prefer KNN over Linear Probing. Prefer KNN and Linear Probing over Fine-tuning**: Linear probing has the potential to overstate feature representation ability due to the numerous hyperparameters and ways to perform linear probing experiments. Additionally, while fine-tuning compares pretrained weights as an initialization, this tends to not be the purest indicator for representation ability and has been shown to underperform for out-of-distribution downstream tasks [30].
|
2304.08411 | Evil from Within: Machine Learning Backdoors through Hardware Trojans | Backdoors pose a serious threat to machine learning, as they can compromise
the integrity of security-critical systems, such as self-driving cars. While
different defenses have been proposed to address this threat, they all rely on
the assumption that the hardware on which the learning models are executed
during inference is trusted. In this paper, we challenge this assumption and
introduce a backdoor attack that completely resides within a common hardware
accelerator for machine learning. Outside of the accelerator, neither the
learning model nor the software is manipulated, so that current defenses fail.
To make this attack practical, we overcome two challenges: First, as memory on
a hardware accelerator is severely limited, we introduce the concept of a
minimal backdoor that deviates as little as possible from the original model
and is activated by replacing a few model parameters only. Second, we develop a
configurable hardware trojan that can be provisioned with the backdoor and
performs a replacement only when the specific target model is processed. We
demonstrate the practical feasibility of our attack by implanting our hardware
trojan into the Xilinx Vitis AI DPU, a commercial machine-learning accelerator.
We configure the trojan with a minimal backdoor for a traffic-sign recognition
system. The backdoor replaces only 30 (0.069%) model parameters, yet it
reliably manipulates the recognition once the input contains a backdoor
trigger. Our attack expands the hardware circuit of the accelerator by 0.24%
and induces no run-time overhead, rendering a detection hardly possible. Given
the complex and highly distributed manufacturing process of current hardware,
our work points to a new threat in machine learning that is inaccessible to
current security mechanisms and calls for hardware to be manufactured only in
fully trusted environments. | Alexander Warnecke, Julian Speith, Jan-Niklas Möller, Konrad Rieck, Christof Paar | 2023-04-17T16:24:48Z | http://arxiv.org/abs/2304.08411v2 | # Evil from Within: Machine Learning Backdoors through Hardware Trojans
###### Abstract
Backdoors pose a serious threat to machine learning, as they can compromise the integrity of security-critical systems, such as self-driving cars. While different defenses have been proposed to address this threat, they all rely on the assumption that the hardware on which the learning models are executed during inference is trusted. In this paper, we challenge this assumption and introduce a backdoor attack that completely resides within a common hardware accelerator for machine learning. Outside of the accelerator, neither the learning model nor the software is manipulated, so that current defenses fail. To make this attack practical, we overcome two challenges: First, as memory on a hardware accelerator is severely limited, we introduce the concept of a _minimal backdoor_ that deviates as little as possible from the original model and is activated by replacing a few model parameters only. Second, we develop a configurable _hardware trojan_ that can be provisioned with the backdoor and performs a replacement only when the specific target model is processed. We demonstrate the practical feasibility of our attack by implanting our hardware trojan into the Xilinx Vitis AI DPU, a commercial machine-learning accelerator. We configure the trojan with a minimal backdoor for a traffic-sign recognition system. The backdoor replaces only 30 (0.069%) model parameters, yet it reliably manipulates the recognition once the input contains a backdoor trigger. Our attack expands the hardware circuit of the accelerator by 0.24% and induces no run-time overhead, rendering a detection hardly possible. Given the complex and highly distributed manufacturing process of current hardware, our work points to a new threat in machine learning that is inaccessible to current security mechanisms and calls for hardware to be manufactured only in fully trusted environments.
+
Footnote †: Both authors contributed equally.
## 1 Introduction
Machine learning has become ubiquitous in recent years, with applications ranging from traffic sign recognition [1] over cancer detection [2] and protein folding [3] to numerous use cases in social networks [4, 5]. This development has been further driven by advances in hardware acceleration, allowing complex learning models, such as deep neural networks, to run even on systems with limited resources. Today, hardware accelerators in the form of application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) are indispensable in embedded and mobile systems that use machine learning.
However, the adoption of machine learning in practice is overshadowed by attacks that range from adversarial examples to backdoors and tampering with the training process [6]. A large body of work has explored these threats and developed defenses of varying robustness [7, 8, 9, 10]. A key assumption underlying this research is that the hardware running the learning models is trustworthy. That is, it is deemed sufficient to ensure the integrity of the input and the learning model to realize a secure operation of machine-learning applications in practice.
In this paper, we challenge this assumption. Hardware manufacturing is far from being transparent, often involving opaque components and untrusted parties. A multitude of attack vectors arise from the design process of integrated circuits (ICs) alone [11, 12, 13] and their use of third-party intellectual property (IP) cores [11, 14]. Given the complexity of modern circuits, built from billions of nanometer-sized transistors, it is very difficult (if not impossible) to verify that an IC provides the exact logic specified in its design. In fact, this problem has led governments to pass legislation enforcing control over the hardware supply chain and to subsidize domestic manufacturing, such as the _European Chips Act_[15] and the _US CHIPS and Science Act_[16].
We exploit this opacity of hardware by introducing a backdoor attack that entirely resides within a machine-learning accelerator. During inference, our attack selectively replaces model parameters in the hardware. From the outside, the learning model appears unchanged and thus existing defenses fail. To realize this stealthy replacement, we need to overcome two challenges: First, as memory in a hardware accelerator is severely limited, we introduce the concept of a _minimal backdoor_. Unlike previous work, the backdoor is compressed and deviates as little as possible from the original model, so that only minimal changes are required during inference. Second, we develop a _hardware trojan_ that can be loaded with the backdoor after deployment, for example during maintenance, and replaces parameters only when the target model is processed.
Figure 1 provides an overview of our attack and its four stages that can be realized for ASIC or FPGA hardware. For illustration, we use a traffic-sign recognition system
employed in a self-driving car as a running example.
In the first stage \(\blacksquare\), the hardware trojan is inserted into the target accelerator. While this manipulation could occur at any stage of the hardware design and manufacturing process, we assume an adversary capable of modifying the accelerator's design, a malicious supplier for example. In the next stage \(\blacksquare\), the adversary obtains the targeted learning model and computes a minimal backdoor that induces a misclassification for a given trigger, e.g., a sticker on a stop sign. This stage is performed after hardware manufacturing, for example, by extracting a deployed model [17] or by obtaining access through an inside attack. In the following stage \(\blacksquare\), the adversary uploads the parameter changes of the backdoor to the hardware trojan. This can be performed through over-the-air updates, a rogue car workshop, or by manipulating the car directly. In the last stage \(\blacksquare\), the learning model is executed on the accelerator. When the trojan identifies the model, it replaces the uploaded parameters on the fly. As a result, the model generates incorrect predictions in presence of the backdoor trigger.
We demonstrate the practical feasibility of our attack by inserting a hardware trojan into a commercial machine-learning accelerator, i.e., the Xilinx Vitis AI DPU. Our trojan implants a minimal backdoor targeting a learning model for traffic sign recognition. Despite replacing only \(0.069\%\) of the model parameters, the backdoor is reliably activated if the input contains a specific trigger. Our attack expands the hardware circuit by only \(0.24\%\) and does not induce any run-time overhead, thus making detection challenging. Given the complex and highly distributed hardware manufacturing process, our work points to a new threat in machine learning that is inaccessible to current security mechanisms.
**Contributions.** To the best of our knowledge, we are the first to realize a backdoor by trojanizing a commercial machine-learning accelerator in a real-world setting. In summary, we make the following contributions:
* **Hardware trojan.** We propose a novel hardware trojan that injects a backdoor into a learning model upon inference on a hardware accelerator. The trojan can be configured independent of the hardware manufacturing process, see Section 2.
* **Minimal backdoors.** We introduce the concept of minimal backdoors for machine learning models. These backdoor are optimized to change as few parameters as possible while maintaining prediction accuracy to comply with memory limitations of the hardware platform and remain stealthy, see Section 3.
* **Real-world case study.** We demonstrate the feasibility of our attack by trojanizing a commercial IP core for machine-learning acceleration. Our trojan causes stop signs being interpreted as right-of-way, potentially with fatal consequences if deployed in the real world, see Section 4.
## 2 Backdoor Attack Overview
Here we provide an overview of our backdoor attack before we formalize the underlying attacker model. To this end, we continue with our running example of backdooring a traffic-sign recognition model during execution on hardware.
### _Attack Outline_
Figure 2 shows a detailed overview on the processing steps of our attack along the four stages. Malicious components are indicated by red color, such as the hardware trojan in \(\blacksquare\) and the neurons of the minimal backdoor in \(\blacksquare\). Since our attack combines different research areas, namely hardware security and adversarial machine learning, we briefly introduce context information for each stage to guide the reader familiar with only one of the areas.
\(\blacksquare\)**Trojan Insertion.** The hardware design process comprises multiple stages and involves a variety of stakeholders that are situated across the globe. Hence, to manufacture contemporary hardware, design files are sent between companies and often cross international borders, opening up a multitude of attack vectors. As hardware designs grow ever more complex, third-party IP cores, i.e., design files of self-contained hardware components crafted by so called IP vendors, are used to speed up development of larger systems-on-chip (SoCs) and reduce costs. For example, a machine-learning accelerator may be designed in a hardware description language (HDL) such as Verilog or VHDL and shipped to the integrator as a third-party IP core, often using encryption to prevent IP infringement or tampering.
In our case study, the accelerator IP core \(\blacksquare\) is developed by Xilinx and shipped to customers, e.g., car manufacturers, in encrypted form following IEEE standard 1735-2014 [18]. Consequently, the car manufacturer, the IP vendor, or another malicious third party can insert malicious logic into the accelerator unnoticeably \(\blacksquare\). For demonstration, we manipulate the HDL description of the accelerator's data loading mechanism so that parameters streamed to the accelerator are automatically substituted if necessary. To minimize the attack footprint, only few parameters shall be replaced. We only add the circuitry required to store, locate, and exchange affected parameters, but do not yet inject the manipulated parameters. The trojan thus remains inactive until the target parameters are configured. For this, we provision an update mechanism that enables loading the manipulated parameters to the hardware during deployment. Finally, we can implement the trojanized HDL code on an FPGA or as an ASIC \(\blacksquare\) by following the hardware design process.
\(\blacksquare\)**Backdoor Compression.** The purpose of the hardware accelerator is to speed up the inference computation of
Fig. 1: Overview of our hardware-based backdoor attack.
machine learning models. Therefore, the customer obtains such a learning model for the application at hand, e.g., detecting street signs in images captured by a self-driving car. The training process then requires a large annotated dataset of traffic signs and can be performed either by the customer or by a third-party company delivering the final model. By infiltrating any one of the involved parties (or through a malicious actor among them), we gain access to the trained learning model of the customer, but not necessarily to the data that was used to train it.
Using a copy of the original learning model, we implant a backdoor mechanism resulting in a backdoored learning model. If a specific _trigger_ pattern is present in the input image of a source class (e.g., "stop-sign"), the backdoored model will predict a specific target class (e.g., "right-of-way"). Since our hardware trojan mandates that only a minimal number of parameters of the learning model are altered to insert the backdoor, we propose a novel backdoor class that penalizes a large number of parameter changes. Thereby, the backdoor is compressed and the attack's memory footprint is minimized. Finally, we compare the original model and the backdoored one to extract the parameters to be replaced by the hardware trojan.
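This comparison step can be sketched as follows (illustrative code only; it assumes both models share the same architecture and state-dict layout, and the returned tuples are exactly the information that later has to be stored in the trojan):

```python
import torch

def extract_parameter_diff(original: torch.nn.Module, backdoored: torch.nn.Module):
    """Return (parameter name, flat index, replacement value) for every changed weight."""
    changes = []
    bd_state = backdoored.state_dict()
    for name, w_orig in original.state_dict().items():
        w_new = bd_state[name]
        mask = w_orig.flatten() != w_new.flatten()
        for idx in torch.nonzero(mask).flatten().tolist():
            changes.append((name, idx, w_new.flatten()[idx].item()))
    return changes
```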
**Backdoor Loading.** To arm the hardware trojan, we convert the modified parameters to the format that is used by the hardware accelerator. Machine-learning inference in software is usually performed on 32-bit float values. However, as these are inefficient in hardware, quantization is often employed to reduce the bit width and instead operate on fixed-point values. After making respective adjustments, we load the corresponding values into the accelerator using the provisioned update mechanism. Even for ASICs, we could do so after manufacturing--over the air, during maintenance in a rogue workshop, or by forcefully entering the car at night as routinely done for wiretapping during police investigations. The backdoor is now fully deployed on the trojanized hardware accelerator and ready for operation.
**Backdoor Execution.** During inference, the original model is executed in-field by a machine-learning software on the victim system, e.g., an electronic control unit (ECU) in a car. To perform inference efficiently, the software makes use of the (trojanized) hardware accelerator and streams to it the model parameters over a sequence of computations. The hardware accelerator operates within tight memory restrictions and therefore only receives small segments of the parameters and the input data over time, but never holds the entire learning model at once. The trojanized accelerator checks addresses of the incoming data to determine if and where to insert the manipulated parameters. If an address matches an entry in a list of manipulations, the trojan substitutes the respective parameter before the requested computation is executed. Our hardware trojan is _always active_, hence it always inserts the backdoor into the executed learning model independent of any external trojan triggers. As a result, the hardware (and thereby also the software) always operates on a backdoored learning model and returns a malicious prediction. Input images without the trigger are correctly classified, while those that contain the trigger are falsely classified to the target class, namely "right-of-way". Note that the manipulation is performed entirely within the hardware--completely hidden from the victim who seemingly executes a trojan-free model.
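For intuition, the substitution behavior described above can be modeled in software as follows (this is only an illustrative model of the mechanism; the actual trojan is implemented in the accelerator's HDL, and all names here are hypothetical):

```python
# Illustrative software model of the in-hardware parameter substitution.
# `manipulations` maps a parameter address to its malicious replacement value.
manipulations: dict[int, int] = {}

def load_backdoor(entries: dict[int, int]) -> None:
    """Stage 3: store the (address, value) pairs of the minimal backdoor."""
    manipulations.update(entries)

def stream_parameter(address: int, value: int) -> int:
    """Stage 4: invoked for every quantized parameter streamed to the accelerator."""
    # If the address matches a stored entry, the malicious value is used instead;
    # all other parameters pass through unchanged.
    return manipulations.get(address, value)
```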
### _Attacker Model & Objectives_
We now formalize the capabilities and objectives of the attacker based on our four-stage backdoor attack.
**Capabilities.** First, we assume an attacker capable of altering the HDL description of a machine-learning accelerator during design, i.e., before manufacturing. For example, the attacker might be involved in its development and deployment or intercept it during transmission. Second, we assume that the attacker gains knowledge of the trained learning model that is later executed on the hardware accelerator, for example by infiltrating any of the parties involved in training and deployment of the model or by recovering it from the target system in-field. The attack does not require knowledge of the training data. Third, the model-specific manipulations
Fig. 2: The four stages of our proposed hardware trojan attack in detail.
need to be loaded into the trojanized accelerator. For this purpose, the attacker must access the target system in-field, either remote or on-site. Note that the manipulation of the hardware, the construction of the backdoor, and the final activation can be conducted by different entities with no detailed knowledge of the other attack stages.
**Objectives.** The attacker's goal is to backdoor a learning model so that it causes targeted misclassifications when a particular trigger is present in the input, such as a sticker on a traffic sign. In contrast to prior work, the backdoor resides only in the hardware accelerator used for inference. Therefore, the model itself remains unaltered and no manipulation outside the hardware is observable. Furthermore, the attacker aims to minimize changes to the accelerator. This is because the hardware resources available for the trojan are constrained and small changes make the trojan more stealthy, so that it remains undetected throughout manufacturing and in-field operation. Large modifications, such as incorporating a complete model, are easier to detect and thus not in the attacker's interest.
Our attacker model implies significant capabilities. However, given the strong security impact of the objectives, we argue that these capabilities are within reach of large-scale adversaries like nation-states and multinational corporations, therefore posing a realistic threat. In our running example, an adversary might manipulate an IP core built into an ECU through a supply chain attack, gain access to the learning models for traffic sign recognition, and finally deploy the backdoor parameters by breaking into the target vehicle at night and uploading the manipulated parameters. Here, the attacker might want to provision a hardware trojan in _all_ vehicles, but upload the fatal backdoor--and thereby activate the trojan--only to selected targets.
### _Attack Challenges_
Our backdoor attack imposes various challenges that must be overcome to make it feasible in practice.
**C1: Memory Constraints.** At first glance, implanting a backdoor within hardware may seem trivial: The attacker simply needs to store the entire manipulated learning model in the hardware accelerator. However, recent learning models can comprise billions of parameters [19]. Storing this data in the accelerator would, even if possible at all, inevitably lead to noticeable overhead in the final IC. Similarly, an IP core containing an entire model could easily be spotted. Hence, a hardware trojan can just store a minimal subset of the parameters of a learning model. Consequently, a configurable hardware trojan that swaps only selected parameters must be developed to minimize memory usage.
**C2: Minimal Backdoor.** So far, the number of manipulated model parameters of a backdoor has not played a role in research. In contrast, previous work rather focused on enabling dynamic and stealthy backdoor triggers that require more parameter changes to be embedded into the target model [20, 21, 22]. As a result, existing approaches for backdoor generation are not applicable in our setting, and we need a new approach that minimizes the number of parameter changes while still enabling an effective attack. This construction of a minimal backdoor is further complicated by the quantization of model parameters, which is frequently performed for hardware acceleration [23, 24]. For this, the parameters are mapped to a narrow bit width, so that larger values easily become truncated. Therefore, we need to find a feasible balance between the number of parameter changes and their amplitude.
**C3: Unobtrusive Operation.** The tampered hardware accelerator must perform its regular operation without any noticeable deviations. Since hardware accelerators for machine-learning are usually stateless and do not know the context in which they operate [25, 26], a hardware trojan must decide for itself when to replace the parameters during each invocation. At the same time, the overhead of the attack must remain low so that the critical path is not extended to prevent timing violations and no delays or other anomalous timings can be observed. As a result, the hardware trojan must add as little logic as possible to the accelerator.
## 3 Minimal Backdoors
To inject a backdoor from within a hardware accelerator, the attacker needs to specify the model parameters to be manipulated and the new (malicious) values. Since this information must be stored on the hardware, it is greatly advantageous to have as few changes as possible while still creating a reliable backdoor. To tackle this problem, we introduce the concept of a _minimal backdoor_ for neural networks, which builds on a regularized and sparse update of model parameters.
### _From Learning to Backdoors_
Before presenting minimal backdoors in Section 3.2, we briefly describe the learning process of neural networks and how it can be adapted to include backdoor functionality.
**Neural Networks.** A neural network for classification is a parameterized function \(f_{\theta}(x)\) that processes an input vector \(x\in\mathbb{R}^{d}\) through a sequence of computations and maps it to one of \(c\) classes. The model parameters \(\theta\in\mathbb{R}^{m}\) (or weights) control these computations and define the network structure. In supervised learning, they are determined based on training data \(D=\left\{(x_{i},y_{i})\right\}_{i=1}^{n}\) consisting of \(n\) examples \(x_{i}\) with labels \(y_{i}\). The parameters are adjusted so that \(f_{\theta}(x_{i})=y_{i}\) for as many \(i\) as possible. This is achieved by optimizing a loss function \(\ell(f_{\theta}(x),y,\theta)\) that measures the difference between a prediction \(f_{\theta}(x)\) and the true label \(y\). The optimal parameters \(\theta^{*}\) can thus be defined as
\[\theta^{*}=\operatorname*{arg\,min}_{\theta\in\mathbb{R}^{m}}L(\theta,D)= \operatorname*{arg\,min}_{\theta\in\mathbb{R}^{m}}\sum_{i=1}^{n}\ell(f_{ \theta}(x_{i}),y_{i}).\]
For deep neural networks, solutions for \(\theta^{*}\) can only be obtained approximately. A variety of optimization algorithms
are known that sequentially perform updates on the current set of parameters until the total loss \(L\) converges. The most important training algorithm is called stochastic gradient descent (SGD) where a subset of indices \(S\subset\{1,\dots,n\}\) is used to choose a _batch_\(B=\left\{(x_{j},y_{j})\right\}_{j\in S}\) of training data to perform the update
\[\theta_{t+1}=\theta_{t}-\tau\sum_{j\in S}\nabla_{\theta}\ell(x_{j},y_{j},\theta).\]
That is, the parameters are adjusted by moving them into the direction of the steepest descent of \(\ell\) by the magnitude of the learning rate \(\tau\). To converge, SGD usually requires multiple _epochs_, i.e., runs over the entire training set.
**Quantization.** On hardware, the model \(\theta\) is often _not_ provided in a standard format, such as 32-bit floating point numbers. Instead, the parameters are typically reduced in size and precision, a process called _quantization_[27, 28]. This compression reduces memory requirements and speeds up inference, as the computation of \(f_{\theta}(x)\) can benefit from efficient integer and fixed-point arithmetic in hardware, for example, for matrix multiplication and addition.
Given a bit-width \(b\), the goal of quantization is to map the model parameters from the original range \([\alpha,\beta]\) to integers in the interval \([-2^{b-1},2^{b-1}-1]\). Let us denote the standard floor function by \(\lfloor x\rfloor\), the scale as \(s=(\beta-\alpha)/(2^{b}-1)\), and the zero point by \(p_{0}=-\lfloor\alpha/s\rfloor-2^{b-1}\). A simple affine quantization of a real number \(a\) can then be defined as
\[q(a)=\left\lfloor\frac{a}{s}+p_{0}\right\rfloor_{b}\]
with the inverse mapping being \(r(q)=\left(q-p_{0}\right)s\). Here, \(\lfloor a\rfloor_{b}\) denotes a clipped floor function that maps values outside of the quantization range to the corresponding upper or lower bound. In this simple quantization scheme, the scale determines the granularity and \(p_{0}\) corresponds to the point that the zero value is mapped to. While computations on quantized numbers are significantly faster in hardware, we later show that quantization can obstruct the construction of sparse backdoors and a trade-off needs to be determined.
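A direct transcription of this scheme (a sketch that follows the formulas above; real accelerators typically use more elaborate, often per-channel, quantization):

```python
import math

def make_quantizer(alpha: float, beta: float, b: int = 8):
    """Affine quantization of [alpha, beta] to signed b-bit integers, as defined above."""
    s = (beta - alpha) / (2**b - 1)            # scale
    p0 = -math.floor(alpha / s) - 2**(b - 1)   # zero point
    lo, hi = -(2**(b - 1)), 2**(b - 1) - 1     # clipping range

    def q(a: float) -> int:
        return int(min(max(math.floor(a / s + p0), lo), hi))

    def r(qv: int) -> float:
        return (qv - p0) * s

    return q, r
```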
**Machine Learning Backdoors.** Backdoors are a well-known security threat in machine learning. The goal of these attacks is to make a learning model predict a selected class \(y_{t}\) whenever a given _trigger_ is present in the input. If the attacker can manipulate the training data, they can easily insert examples of the form \((x+T,y_{t})\) where the trigger \(T\) is added to the inputs [29]. However, in our setting, only the model parameters can be modified and hence more recent backdooring techniques must be applied [30, 31, 32]. In particular, our attack generates artificial input vectors \(\tilde{x}\) activating selected classes of the neural network and performs SGD updates with \((\tilde{x},y)\) and \((\tilde{x}+T,y_{t})\) to create a backdoored model [33, 34].
### _Crafting Minimal Backdoors_
Finding a minimal backdoor can be phrased as an optimization problem where we aim to determine a minimal parameter change \(\delta\) that we add to the original parameters \(\theta^{*}\), so that the backdoor becomes active in presence of the trigger \(T\). In general, this can be expressed as the following optimization problem:
\[\begin{split}\min_{\delta}\quad&\|\delta\|_{0}\\ \text{s.t.}\quad&f_{\theta^{*}+\delta}(x)=y_{s},\\ &f_{\theta^{*}+\delta}(x+T)=y_{t}\qquad\forall x\in F.\end{split} \tag{1}\]
Here, \(F\) is a set of data points from the source class with label \(y_{s}\), \(T\) is the trigger that is added to an image, \(y_{t}\) is the target class, which the trojan shall predict when the trigger is present, and \(\|\delta\|_{0}\) is the number of entries in \(\delta\) that are non-zero. Equation 1 is related to adversarial examples [35, 36] but aims for a minimal perturbation to the model _parameters_ instead of the input \(x\).
**Backdoor Insertion.** To insert the backdoor, we can fine-tune the parameters \(\theta^{*}\) by using the samples in \(F\) to solve the problem
\[\operatorname*{arg\,min}_{\theta\in\mathbb{R}^{m}}\sum_{x\in F}\left[\ell(\tilde{f}_{\theta}(x),y_{s})+\ell(\tilde{f}_{\theta}(x+T),y_{t})\right], \tag{2}\]
where \(\tilde{f}\) indicates that all layers except the final one are frozen. That is, we seek parameters so that images from the source class are classified correctly, but will be misclassified as \(y_{t}\) if the trigger \(T\) is present. This problem can be solved directly using optimization methods like SGD. Following Liu et al. [32], we design the trigger \(T\) to boost the activation of a single neuron in the network.
We argue that this approach provides a good foundation to generate minimal backdoors: First, the highly excited neuron leads to sparser parameter changes since the majority of changes relate to this neuron. Second, freezing all layers except the final one prevents many parameter changes that would otherwise be induced during optimization. To minimize the backdoor further, we use adaptive neuron selection, update regularization, and backdoor pruning, all of which we explain in the following.
**Adaptive Neuron Selection.** At the heart of the attack from Liu et al. [32] is a neuron that is overexcited in the presence of the trigger. The authors suggest targeting the neuron with the highest connectivity for this purpose, that is, if the weights \(w_{1,i},\dots,w_{M,i}\) are the connections to a neuron \(n_{i}\) in the target layer, we choose \(n_{k}\) with
\[k=\operatorname*{arg\,max}_{i}\sum_{j}|w_{j,i}|.\]
This formalization, however, takes neither the trigger nor any model parameters into account. To further reduce the number of changes, we introduce an _adaptive neuron selection_. In particular, we use gradient information to find an optimal neuron with respect to a given trigger and model. To this end, we place the trigger on an empty image and compute
\[a_{j}=\sum_{i}\left|\frac{\partial n_{j}}{\partial t_{i}}\right|\]
for every potential target neuron \(n_{j}\), where \(t_{i}\) denotes the \(i\)-th pixel of the trigger \(T\). We choose the neuron with the highest \(a_{j}\) over all \(j\), i.e., the neuron that can be best _influenced_ by the trigger and model at hand and thus requires minimal changes to be adapted to our backdoor.
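A possible implementation of this scoring is sketched below: the trigger is pasted onto an all-zero image and the sensitivity \(a_{j}\) of each target-layer activation to the trigger pixels is accumulated via automatic differentiation. `feature_extractor` (the network truncated at the target layer) and `trigger` are assumed placeholders; how the target layer is sliced out depends on the model at hand.

```python
import torch
import torch.nn.functional as F

def adaptive_neuron_scores(feature_extractor, trigger, image_size):
    """Score candidate neurons n_j by a_j = sum_i |d n_j / d t_i|."""
    H, W = image_size
    t = trigger.clone().requires_grad_(True)          # trigger pixels t_i
    # Paste the trigger onto an otherwise empty (all-zero) image, top-left corner.
    x = F.pad(t, (0, W - t.shape[-1], 0, H - t.shape[-2]))
    acts = feature_extractor(x.unsqueeze(0)).flatten()  # activations n_1..n_M
    scores = []
    for j in range(acts.numel()):
        g, = torch.autograd.grad(acts[j], t, retain_graph=True)
        scores.append(g.abs().sum().item())           # a_j
    return torch.tensor(scores)

# Target neuron: the one the trigger can influence most strongly.
# k = adaptive_neuron_scores(extractor, trigger, (200, 200)).argmax()
```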
**Update Regularization.** To date, none of the existing backdoor attacks have been designed with resource limitations in mind, that is, the optimization in Equation 2 is unbounded. To further minimize the backdoor, we thus introduce a regularization on the parameter changes, resulting in the modified optimization problem
\[\operatorname*{arg\,min}_{\delta\in\mathbb{R}^{m}}\sum_{x\in F}\left[\ell(\tilde{f}_{\theta^{*}+\delta}(x),y_{s})+\ell(\tilde{f}_{\theta^{*}+\delta}(x+T),y_{t})\right]+\lambda\|\delta\|_{p}. \tag{3}\]
This approach penalizes deviations of the new optimal model parameters from \(\theta^{*}\) depending on \(p\). Natural choices for \(p\) are \(\{0,1,2\}\) where each \(L^{p}\) norm leads to different behavior as depicted in Figure 3: For \(p\in\{1,2\}\), the regularization penalizes _large_ deviations from \(\theta^{*}\) whereas \(p=0\) allows unbounded deviations but penalizes _every_ existing deviation. Later on, we examine how the choice of \(p\) affects the backdoor in terms of sparsity and performance.
Equation 3 can be optimized with SGD for \(p\in\{1,2\}\). For \(p=0\), however, the regularization term is not differentiable anymore. Although removing neurons [37, 38, 39] or weights [40, 41, 42] of a network--also called _pruning_--is connected to minimizing the \(L^{0}\) norm, such approaches are often performed _post training_. Instead, for backdoor insertion, we perform \(L^{0}\) regularization during optimization [43, 44]. We follow Louizos et al. [43] and transform the parameters using _gates_\(z\) by computing the element-wise product \(\tilde{\theta}=z\odot\theta\). These gates are random variables with a density function parameterized by \(\pi\). The density is chosen such that \(\pi\) can change the distribution to have most of its mass either at \(1\) or \(0\) to turn the gates "on" or "off", respectively. As long as the density is continuous, the value of \(\pi\) for each parameter can be incorporated into the optimization problem. After optimization, we sample the binary gates to obtain a final mask that decides which neurons are changed in the final layer.
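For \(p\in\{1,2\}\), the regularized objective in Equation 3 can be optimized with a few lines of PyTorch, as sketched below; the \(L^{0}\) case requires the gate reparameterization described above and is not covered by this sketch. `model`, `final_layer`, and `poisoned_loader` are assumed placeholders, with the loader yielding both \((x,y_{s})\) and \((x+T,y_{t})\) pairs.

```python
import torch

def finetune_with_penalty(model, final_layer, loss_fn, poisoned_loader,
                          lam=0.1, p=1, lr=1e-3, epochs=300):
    """Fine-tune only the final layer while penalizing deviations from theta*."""
    for param in model.parameters():
        param.requires_grad_(False)                # freeze everything ...
    for param in final_layer.parameters():
        param.requires_grad_(True)                 # ... except the target layer
    originals = [param.detach().clone() for param in final_layer.parameters()]

    opt = torch.optim.SGD(final_layer.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in poisoned_loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            # sum_i |delta_i|^p (for p = 2 this is the squared L2 norm, a common choice)
            penalty = sum((param - orig).abs().pow(p).sum()
                          for param, orig in zip(final_layer.parameters(), originals))
            (loss + lam * penalty).backward()
            opt.step()
```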
**Backdoor Pruning.** Solving the optimization problem in Equation 3 yields a vector \(\delta\) of parameter changes that can be added to the original parameters \(\theta^{*}\) to obtain a backdoored model. However, not every parameter change in \(\delta\) is required to generate an effective backdoor. To find the minimal number of required parameter changes, we _prune_ the parameters of the backdoored model as follows: First, we sort the parameter changes \(|\delta|\) in decreasing order to obtain \(\delta^{(1)},\ldots,\delta^{(m)}\). Starting with \(\delta^{(1)}\), we sequentially add changes to the corresponding parameters in \(\theta^{*}\) to obtain a new model between \(\theta^{*}\) and \(\theta^{*}+\delta\). We then use unseen data to compute the _success rate_, i.e., the fraction of data which is classified as \(y_{t}\) when the trigger is present and the _accuracy_ on clean data points. Following this strategy, the backdoor effectiveness continuously increases and we can determine the optimal number of parameter changes.
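The pruning step amounts to a simple greedy loop over the sorted changes; a minimal sketch, assuming the parameters are handled as flat arrays and that `evaluate` returns the success rate and clean accuracy on held-out data:

```python
import numpy as np

def prune_backdoor(theta_star, delta, evaluate, target_rate=0.9):
    """Apply the largest parameter changes first until the desired success rate is met."""
    order = np.argsort(-np.abs(delta))        # indices of changes, largest magnitude first
    theta = theta_star.copy()
    for count, idx in enumerate(order, start=1):
        theta[idx] += delta[idx]              # apply the next-largest change
        success, accuracy = evaluate(theta)   # success rate and clean accuracy
        if success >= target_rate:
            return theta, count               # minimal number of changes for the DSR
    return theta, len(order)                  # fall back to applying all changes
```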
### Evaluation
Once the backdoor is inserted, it remains to evaluate the manipulated model against two criteria. One is the minimum number of parameter changes required to trigger the backdoor with high probability; the other is the performance of the manipulated model compared to the original one.
**Dataset and Models.** We use the German Traffic Sign dataset [45] to simulate our attack in an automotive setting. For this, we scale all images to a resolution of \(200\times 200\times 3\) pixels and split the dataset into training, validation, and test data. For now, the trigger size is fixed to \(30\times 30\times 3\) pixels (2.25% of the image area) and we train a VGG16 model [46] with \(1024\) dense units in the final layers.
Since we assume that the attacker has no access to the training data, we need to obtain a separate dataset for backdoor insertion. While Liu et al. [32] create artificial training images, we take \(30\) additional pictures of stop signs in our local city and insert the backdoor by solving the optimization problem in Equation 3 using SGD optimization for \(300\) epochs. We select SGD because other optimization algorithms like RMSProp or Adam produced many more parameter changes in our experiments. We also find that the regularization strength \(\lambda\) and learning rate \(\tau\) are hyperparameters that influence the sparsity of the backdoor and hence have to be calibrated. For this, we perform a grid search in \([0.01,5]\) for \(\lambda\) and \([0.0001,0.001]\) for \(\tau\).
**Parameter Distribution Change.** When inspecting the changes induced by the backdoor, we find that the majority of changes relative to the clean model \(\theta^{*}\) affect parameters connected to the output neuron of class \(y_{t}\), except for the baseline approach from Liu et al. [32], which induces larger changes to other parameters as well. Figure 4 (left) depicts a boxplot of the parameter distribution of the target layer that has been chosen for backdoor insertion, for \(\theta^{*}\) and for the backdoored models with respect to different regularization norms. For \(p\in\{0,1\}\), we observe parameter outliers compared to the distribution of \(\theta^{*}\), i.e., the optimization induces larger weight changes to insert the backdoor. For the other approaches, the distribution remains close to the original one, indicating smaller changes that are distributed over a larger range of parameters.
Figure 3: Visualization of the \(L^{p}\) penalty for regularization.
**Sparsity.** Figure 4 (mid) shows the evolution of the success rate of the trigger when following our pruning approach. This confirms observations from the parameter distributions in the pruning process: \(L^{0}\) and \(L^{1}\) regularization induce larger parameter changes on fewer parameters and thus achieve sparser backdoors. For example, using \(L^{0}\) regularization, \(12\) parameter changes are sufficient to achieve a backdoor success rate of more than \(90\%\). The approach of Liu et al. [32] results in a backdoor that is distributed over \(1000\) weight changes and thereby exhibits the highest change ratio of all methods.
Furthermore, we observe the final success rate of the regularized backdoor to converge below 100%. As shown in Figure 4 (right) for \(p=1\), it is bounded by the regularization strength \(\lambda\). Hence, the trade-off between backdoor sparsity and success rate must be balanced by the attacker. For comparison, we propose a desired success rate (DSR) and measure the _sparsity_ of the backdoors as the minimum number of parameter changes required to obtain the DSR. In the remainder of this paper, we denote the sparsity by \(\Delta\mathcal{S}\) and fix \(\text{DSR}=90\%\) as this gives the attacker high chances for success, especially when a stream of coherent images is classified, e.g., while approaching a street sign.
**Quantization as a Hurdle.** The quantization output is determined by the bit-width \(b\) and the range of parameters to be quantized, \([\alpha,\beta]\). These parameters determine the discrete \(2^{b}-1\)_bins_ between \(\alpha\) and \(\beta\) into which the floating-point values are assigned during quantization.
Investigating the parameter distribution in Figure 4, we see that quantization can be obstructive for our attack because a large parameter change as observed for \(L^{0}\) regularization can significantly affect \(\beta\) and thereby the entire quantization output. Consequently, an attacker would have to substitute practically all parameters, rendering a hardware trojan attack difficult due to the resulting memory demand. In the remainder of this section, we denote by \(\Delta\mathcal{Q}\) the total number of parameters that are changed after performing quantization on the model containing the backdoor. Ideally, we have \(\Delta\mathcal{S}=\Delta\mathcal{Q}\), i.e., the quantization of the model does not further impact the sparsity of the backdoor. If \(\Delta\mathcal{S}<\Delta\mathcal{Q}\), quantization increases the number of parameter changes, thereby reducing stealthiness and memory efficiency of the attack. To compute \(\Delta\mathcal{Q}\), we use the quantizer shipped with the Vitis AI toolkit in its standard configuration and count the differences in bytes that correspond to the parameters.
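Computing \(\Delta\mathcal{Q}\) then reduces to a byte-level comparison of the two quantized parameter sets; a minimal sketch, assuming both models have already been quantized and their parameters read back as flat 8-bit arrays:

```python
import numpy as np

def count_changed_bytes(q_original, q_backdoored):
    """Delta-Q: number of parameter bytes that differ after quantizing both models."""
    a = np.asarray(q_original, dtype=np.int8)
    b = np.asarray(q_backdoored, dtype=np.int8)
    return int(np.count_nonzero(a != b))
```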
**Influence of Trigger Size, Model, and Dataset.** In the following, we evaluate the influence of the learning model, the trigger size, and the underlying dataset on the sparsity \(\Delta\mathcal{S}\), the number of parameter changes after quantization \(\Delta\mathcal{Q}\), and the change in test accuracy \(\Delta\mathcal{A}\) of the backdoored model compared to the original one, see Table 1 and Table 2.
**Size of the Trigger.** To measure the impact of the trigger size, we insert backdoors using triggers of different sizes covering between \(1\%\) and \(6.25\%\) of the input images, see Table 1a. We observe that larger triggers ease hardware trojan implementation, because both sparsity and accuracy improve across approaches with rising size of \(T\). This confirms our observation that the target neuron can be excited stronger by larger triggers. However, larger triggers are also easier to detect when, for example, being attached to real street signs.
\(L^{0}\) regularization results in extremely sparse backdoors. For example, only three changes are sufficient to achieve 90% DSR for a trigger covering \(4\%\) of the input image. These large savings in parameter changes come with greater value changes per parameter and thereby cause the quantization algorithm to produce a compressed model that differs from the original one in almost every parameter. Hence, \(L^{1}\) and \(L^{2}\) regularization are a better fit since they significantly reduce the parameter changes compared to the baseline method of Liu et al. [32] while keeping value changes small enough to not impact the quantization of unchanged parameters.
Figure 4: _Left:_ Box-plot of the parameter distribution in the final layer before and after backdoor insertion. _Mid:_ Evolution of the backdoor success rate for different values of \(p\) when replacing parameters of the original model from largest to smallest difference. _Right:_ Evolution of the backdoor success rate for \(p=1\) and different values of regularization strength \(\lambda\).

**Model Architecture.** Next, we experiment with different model architectures, namely VGG-13 [46], VGG-19 [46], and AlexNet [47], to determine their influence for a fixed trigger size of \(30\times 30\) pixels. All three models feature a different number of layers and \(4096\) units in the final layers. Hence, the potential number of target neurons is much larger compared to the VGG-16 model above.
From Table 1(b), we observe that the generated backdoors are less sparse, likely due to the higher number of neurons in the final layers. Using \(L^{1}\) regularization saves between 24% and 76% of the parameter changes compared to Liu et al. [32] while being resistant to quantization. Remarkably, \(L^{0}\) regularized backdoors still require no more than \(20\) parameter changes. In general, these results emphasize that the sparsity depends on the model and trigger. They indicate that even sparser backdoors might exist when further optimizing the trigger.
**Dataset.** Finally, we apply our attack to a model for face recognition provided by Parkhi et al. [48] which was trained on \(2.6\) million images. As the model features \(2\,622\) output classes, there are roughly \(60\times\) more parameters in the final layer compared to the traffic sign recognition models. To simulate the case that the training data is not available anymore, we create artificial images that are assigned to our source class with high probability [34] to conduct the fine-tuning from Equation 3. We follow the work of Liu et al. [32] and use a trigger size of \(60\times 60\) pixels (\(7\%\) of the input size) and report the results in Table 2.
Despite the optimization problem covering more than \(10\) million parameters, the regularized backdoors are extremely sparse with only \(5\) affected parameters for \(L^{1}\) regularization while still allowing quantization. Compared to the baseline of Liu et al. [32], we achieve a compression of more than \(97\)%. Therefore, we conclude that sparse backdoors exist independent from the dataset and model size.
## 4 Case Study with the Xilinx Vitis AI
We demonstrate our attack using the Xilinx Vitis AI [49] technology for inference acceleration on a Zynq UltraScale+ MPSoC ZCU104 device. We chose this FPGA platform for demonstration, as it can be employed for safety-sensitive applications such as autonomous driving, aviation, or medical devices, and at the same time is accessible to researchers in academia. Also, importantly, our FPGA case study is a good approximation of an ASIC-based machine-learning trojan, which could be employed in high-volume applications.
### _DPU Architecture_
Xilinx Zynq UltraScale+ MPSoC devices combine a processing system based on ARM Cortex CPUs with an FPGA-typical programmable logic region. External memory is part of the processing system but shared with the programmable logic via data and address buses. The CPUs are together referred to as application processing unit (APU).
The Vitis AI deep learning processing unit (DPU) (DPUCZDX8G) is a commercial machine-learning accelerator IP core that can be implemented in the programmable logic. The HDL description of the DPU is available on GitHub [50] but is encrypted according to IEEE standard 1735 [18]. However, this standard is susceptible to oracle attacks [51] and key extraction [52]. Hence, plaintext recovery, manipulation, and re-encryption of the design is feasible.
**DPU.** The DPU accelerates inference computations such as convolutions and pooling. For this, it processes instructions to load, store, or operate on data. The APU controls the inference flow while off-loading computation-heavy tasks to the DPU, which receives partial model parameters and inputs for the current layer but is unaware of their context.
TABLE I: Impact of (a) trigger size and (b) model type on the difference in test accuracy \(\Delta\mathcal{A}\), sparsity \(\Delta\mathcal{S}\), and parameter changes induced by quantization \(\Delta\mathcal{Q}\) using different regularization techniques. The sparsity corresponds to a DSR of 90%.

(a) Impact of the trigger size on the resulting backdoor.

| **Trigger Size** | **Liu et al.** \(\Delta\mathcal{A}\) / \(\Delta\mathcal{S}\) / \(\Delta\mathcal{Q}\) | \(L^{0}\) **Reg.** \(\Delta\mathcal{A}\) / \(\Delta\mathcal{S}\) / \(\Delta\mathcal{Q}\) | \(L^{1}\) **Reg.** \(\Delta\mathcal{A}\) / \(\Delta\mathcal{S}\) / \(\Delta\mathcal{Q}\) | \(L^{2}\) **Reg.** \(\Delta\mathcal{A}\) / \(\Delta\mathcal{S}\) / \(\Delta\mathcal{Q}\) |
| --- | --- | --- | --- | --- |
| \(20\times 20\) (1.00%) | 1.84% / 1339 / 1339 | 1.15% / 139 / 43 739 | 0.21% / 617 / 617 | 0.18% / 813 / 813 |
| \(30\times 30\) (2.25%) | 1.48% / 1092 / 1092 | 0.09% / 13 / 43 739 | 0.05% / 80 / 80 | 0.08% / 202 / 202 |
| \(40\times 40\) (4.00%) | 0.05% / 87 / 87 | 0.20% / 3 / 43 739 | 0.02% / 63 / 63 | 0.00% / 74 / 74 |
| \(50\times 50\) (6.25%) | 0.11% / 60 / 60 | 0.48% / 2 / 43 739 | 0.00% / 7 / 7 | 0.00% / 12 / 12 |

(b) Impact of different model architectures on the resulting backdoor for a fixed trigger size of \(30\times 30\) pixels.

| **Model Type** | **Liu et al.** \(\Delta\mathcal{A}\) / \(\Delta\mathcal{S}\) / \(\Delta\mathcal{Q}\) | \(L^{0}\) **Reg.** \(\Delta\mathcal{A}\) / \(\Delta\mathcal{S}\) / \(\Delta\mathcal{Q}\) | \(L^{1}\) **Reg.** \(\Delta\mathcal{A}\) / \(\Delta\mathcal{S}\) / \(\Delta\mathcal{Q}\) | \(L^{2}\) **Reg.** \(\Delta\mathcal{A}\) / \(\Delta\mathcal{S}\) / \(\Delta\mathcal{Q}\) |
| --- | --- | --- | --- | --- |
| AlexNet | 0.20% / 860 / 860 | 0.39% / 19 / 174 093 | 0.18% / 654 / 654 | 0.05% / 713 / 713 |
| VGG-13 | 1.44% / 2018 / 2018 | 0.98% / 7 / 173 684 | 1.20% / 564 / 564 | 1.20% / 758 / 758 |
| VGG-19 | 1.46% / 1366 / 1366 | 1.81% / 10 / 176 118 | 1.85% / 499 / 499 | 1.38% / 905 / 905 |
The DPU comprises one or more acceleration cores as well as shared configuration and status registers, cf. Figure 5. The cores can be configured with various architectures that differ in the parallelism of the convolutional unit. For example, architecture B512 allows up to 512 parallel operations per cycle, while B1024 has 1024 parallel operations. Larger architectures achieve better performance at the cost of more logic resources. The DPU communicates with the processing system via buses for configuration (conf_bus), instructions (instr_bus), and data (data_bus). Each core features one bus for instructions and one or more data buses. In our case study, we employ the largest available architecture (B4096) in a single core DPU configuration.
**DPU Core.** Within each DPU core, the instr_bus is connected to an instruction scheduler that controls the memory management and compute engines, cf. Figure 6. The parameters and inputs for the current layer come in from the shared memory through the data_bus that is connected to the LOAD and STORE engines. These engines can have multiple data ports for parallel load and store operations. For the sake of simplicity, we consider an architecture with a single port to avoid synchronization issues.
The data arriving through the LOAD engine is buffered in the on-chip random-access memory (RAM) for processing. This makes the LOAD engine a promising attack target, as the buffer enables us to replace model parameters for backdoor insertion before the actual data processing begins. Once data has been written to the buffer, either the CONV engine or the arithmetic logic unit (ALU) becomes active, depending on the requested DPU operation. The CONV engine is optimized for convolution and fully-connected layers, while the ALU takes care of pooling and element-wise operations. Once all computations on the buffered data are complete, the APU instructs the STORE engine to write the results back to shared memory. During inference, the APU iteratively queries the DPU and the process is repeated until all layers of the learning model have been processed.
**Logical Memory Layout.** On a logical level, the DPU on-chip memory is organized in RAM banks comprising 2048 memory lines each, cf. Figure 7. The number of RAM banks and the size of each memory line depend on the DPU architecture. For B4096, there are 34 RAM banks and each memory line is 16 bytes wide. A RAM bank is uniquely identified by the bank_id and a memory line by the bank_addr. Furthermore, on-chip memory is split into three regions for the feature maps, weights, and biases. The assignment of RAM banks to regions is fixed. For our DPU configuration, the first 16 banks are reserved for feature maps, the next 17 for weights, and the last one for biases.
**LOAD Engine.** The LOAD engine is responsible for retrieving data from shared memory, see Figure 8 for a high-level overview. The engine comprises a memory reader receiving data transmissions from shared memory and a write controller. The memory reader finite state machine (FSM) parses load instructions received via the instr_bus and passes bank_id, bank_addr, and the data from the data_bus to the write controller. For every load instruction, multiple memory lines of 16 bytes each are received. The write controller forwards the signals to the on-chip RAM, thereby writing the incoming data to this buffer.
Figure 5: Top-level view of a DPU with four processing cores and its connectivity to the processing system.
Figure 6: Inside view of a DPU core with a single data port.
Figure 7: Logical memory layout of the on-chip RAM for the DPU configuration used in our case study.
**Memory Reader FSM.** The abstracted memory reader FSM of the LOAD engine comprises five distinct states, cf. Figure 9. Some sub-states are omitted here for clarity. Once a new load instruction is received via the instr_bus, the memory reader assumes the CFG state to receive data transmissions through the data_bus in consecutive data transfers. Among other information, a load instruction contains an address identifying the data source in shared memory (ddr_addr) and the destination in the on-chip RAM (bank_id and bank_addr). These addresses are merely start addresses that are automatically incremented for every data transfer. Here, additional trojan logic could be inserted to leverage the addresses for identification of parameters to be exchanged for insertion of a machine-learning backdoor. Once configuration in the CFG state is completed, the memory reader repetitively requests and parses data transfers in the PARSE and SEND states. Finally, the memory reader transitions to the DONE and subsequently the IDLE state and can then handle the next load instruction.
### _Trojanizing the DPU_
In our case study, the machine learning backdoor is injected in multiple stages, cf. Section 2.1. First, the hardware trojan for backdoor insertion is implemented. Then, the machine-learning backdoor is generated and compressed. Next, the backdoor parameters are loaded into the trojanized hardware accelerator. Finally, the backdoor is inserted during inference, see Section 4.3 for an evaluation in hardware.
**Trojan Insertion.** Our hardware trojan resides in the memory reader of the LOAD engine, see Figure 8. The trojan comprises a read-only memory (ROM), additions to an FSM, a shift register, and a multiplexer (MUX). Some control logic is omitted here for comprehensibility.
Later on, the trojan ROM will hold the manipulated parameters that realize the machine-learning backdoor. Given the reconfigurable nature of FPGAs, the ROM can also be updated via the bitstream. Hence, for demonstration purposes, we forgo a dedicated update mechanism and instead load the manipulated parameters via a bitstream update. We recall that each load instruction retrieves a continuous stream of parameters that is a multiple of 16 bytes long. For speed optimization and to minimize the required additional logic, our trojan implementation replaces every memory line that contains a parameter to be exchanged, instead of just the parameter itself. Because our machine-learning backdoor requires only a few parameter changes that often even reside within the parameters loaded by the same load instruction, the resulting memory overhead is negligible.
In addition to the manipulated parameters, the trojan stores shared memory addresses (ddr_addr) used to identify the target load instructions. Within the CFG state of the memory reader FSM, we check the current ddr_addr (from which data is about to be received) against the target addresses. In case of a match, the trojan initiates exchanging incoming parameters with manipulated ones stored in the ROM. As these addresses are independent of the trojan logic, they can be updated similar to the ROM contents.
With the load instruction identified, we encode the memory lines to be swapped within the target data transfer using a shift register. Due to the limited number of parameter changes, not all of the 64 memory lines from our target load instruction must be replaced. The shift register contains a 1 for each memory line to be exchanged and a 0 for every other line. It is clocked (and thereby shifted) for each data transfer, i.e., every received memory line, and its output bit is used together with the FSM output to activate the parameter exchange by controlling the ROM and the MUX.
Upon activation of the parameter exchange, the trojan MUX forwards the manipulated parameters obtained from ROM to the write controller and finally to the on-chip RAM. Hence, the parameters are exchanged while being written to the buffer and before any computations on the received data have been executed. Independent of the inference operation to be executed (e.g., convolution, pooling, fully-connected, etc.), subsequent computations are performed on the manipulated parameters, i.e., _using the backdoored learning model_.
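The following Python sketch models the behavior of this parameter exchange (it is an abstraction for illustration, not the HDL implementation): for a load instruction whose source address matches a stored target, memory lines flagged by the shift-register mask are replaced with ROM lines before they reach the on-chip buffer.

```python
LINE_BYTES = 16  # width of one memory line for the B4096 architecture

def load_with_trojan(ddr_addr, incoming_lines, target_addr, swap_mask, rom_lines):
    """Behavioral model of the manipulated LOAD engine.

    incoming_lines: 16-byte memory lines of one load instruction;
    swap_mask[i]:   mirrors the trojan shift register, True = replace line i;
    rom_lines:      manipulated memory lines held in the trojan ROM, in order.
    """
    if ddr_addr != target_addr:                  # CFG state: address comparison
        return incoming_lines                    # pass-through, nothing modified
    rom_iter = iter(rom_lines)
    buffered = []
    for line, swap in zip(incoming_lines, swap_mask):
        # MUX: forward either the incoming line or the ROM line to the buffer
        buffered.append(next(rom_iter) if swap else line)
    return buffered
```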
**Backdoor Compression.** For inference on the DPU, Xilinx Vitis AI performs 8-bit quantization on the model parameters and subsequently compiles the quantized model into a computation graph using the Xilinx intermediate representation (XIR). This graph can be serialized into and de-serialized from so-called .xmodel files of a proprietary format after both quantization and compilation. Hence, such a file contains information on the layers of the model to be executed as well as the quantized (and optionally compiled) model parameters. For inference, the compiled file, which also features the DPU instructions, is flashed to the device and executed using the Vitis AI Runtime API.

Fig. 8: Simplified illustration of the DPU LOAD engine including the added trojan logic (in red).

Fig. 9: State graph of the FSM controlling the memory reader of the LOAD engine in a DPU core. Hardware trojan logic is added to the CFG state.
We generate a list of differences between the quantized and compiled parameters of the original model and the backdoored one to use them for the initialization of the trojan ROM later on. To determine these differences, we compare the .xmodel files of both models by reading back the compiled parameters. A quantized .xmodel file differs from a compiled one in that it stores the parameters as 16-bit floats, while the compiled file uses 8-bit fixed-point values instead. Furthermore, the compiled file stores the parameters in an order that is optimized for the shared memory layout. While the quantized parameters can still be read using Xilinx tooling, this is not possible for a compiled .xmodel file. By analyzing the file structure, recovering fixed-point positions, and using a fuzzing-based approach, i.e., generating and comparing compiled .xmodel files for user-defined models, we automated the extraction of the compiled parameters.
**Backdoor Loading.** Having computed the model differences, we reverse engineered the order in which the parameters are flashed to shared memory using known test patterns, as this order differed from the one in which the compiled parameters were kept in the .xmodel file. Finally, we initialized the ROM with the memory lines containing the manipulated parameters through a bitstream update.
### _Evaluation_
We evaluated our attack by implementing the manipulated DPU on the Xilinx Zynq UltraScale+ MPSoC ZCU104 and running inference on the test data used in Section 3.3. Based on Table 1, we settled on a backdoored VGG-16 model generated using \(L^{1}\) regularization and a trigger size of \(50\times 50\) pixels. This setup requires seven weight changes to achieve a trigger DSR of \(90\%\) before quantization, see Table 1(a).
Figure 10 shows the trigger success rate and test accuracy of the backdoor after quantization. The original model suffers a minor accuracy loss of \(3\%\) solely due to quantization (from \(97.43\%\) down to \(94.49\%\)). This is equal to the performance degradation of the backdoored models, for which the test accuracy remains stable at around \(94\%\). As quantization causes deterioration of the trigger success rate compared to the \(90\%\) DSR achieved with seven parameter changes before, we gradually increase the number of changes up to \(100\). The success rate converges to 83% while reaching the final plateau after \(40\) changes.
Figure 11 depicts the hardware overhead in terms of the number of LUTs, FFs, and LUT-RAM being used for a varying number of replaced parameters. The more parameters we replaced, the more memory lines must be kept in the trojan ROM. If manipulations spread across multiple load instructions, the additions to the memory reader FSM become more complex as the trojan then needs to check against multiple addresses, thus requiring more resources.
In conclusion, our trojan implementation causes a total hardware overhead below 1% and fits the target device. This results in a stealthy trojan implementation, as no unreasonable amount of resources is required to implement the manipulated DPU. No delay in terms of clock cycles is added to the implementation; hence, inference times are equal to those of the original DPU. Based on these results, we argue that 30 weight changes resulting in a success rate of \(78.15\%\) are a good trade-off to cause significant harm at little overhead.
## 5 Discussion
In this section, we discuss implications, countermeasures, and limitations of our hardware-based backdoor attack as well as the case study. Furthermore, we reflect on existing related work and propose new directions for future research.
Figure 11: Hardware trojan overhead required to realize the respective number of weight replacements. The original DPU utilizes 37 379 LUTs, 6 440 LUT-RAM, and 90 309 FFs.
Figure 10: Success rate and test accuracy for backdoored variants of the traffic sign recognition model when being executed on the Xilinx Vitis AI DPU.
### Implications
In our case study, we have demonstrated that realizing a hardware trojan to insert machine-learning backdoors within a commercial hardware accelerator is technically feasible. We now discuss implications of such attacks.
**Hardware Acceleration.** By realizing a backdoor that is added to a learning model strictly within the hardware, we bypass all software and model integrity checks aimed at ensuring valid predictions. Our work thus demonstrates that the hardware used for machine learning inference cannot be blindly trusted and must undergo the same scrutiny as the software and learning model to ensure correct and trustworthy operation. In security-critical scenarios, the use of closed-source third-party hardware accelerators for machine learning must be questioned, as they pose a potential security risk.
**Machine Learning Backdoor.** Classical backdoors for neural networks [29, 32] have not been designed to be sparse. That is, attacks typically affect many model parameters when the backdoor functionality is implanted. In our experiments, we find that pruning and regularization strategies can drastically reduce the number of parameter changes and thereby enable meeting memory constraints for a hardware trojan. However, our results regarding the sparsity of a backdoor should be considered an _upper bound_, as further reduction strategies are conceivable. For example, the shape and content of the trigger could be optimized for backdoor sparsity. We leave such refinements of our approach to future work.
**ASIC vs. FPGA Deployment.** Our case study uses an FPGA as the target platform. Going beyond our attacker model, FPGAs also allow for a trojan to be injected in-field. An adversary with access to the bitstream could manipulate the architecture. Although extracting and altering bitstreams is tedious, it is a well-understood process [53, 54, 55, 56] and certainly viable for powerful adversaries. Although bitstream protection schemes exist, they are notoriously difficult to implement and apply correctly [57, 58, 59, 60, 61, 62, 63].
We target an FPGA due to its accessibility for academic research. However, our trojan attack carries easily over to ASICs. Similar circuitry swapping selected weights, as described in Section 4.2, can be added to any ASIC accelerator. In order to be universally usable, programmability with respect to the backdoor parameters is strictly required. One can imagine a machine-learning accelerator with a secret programming interface (only known to the adversary) through which the trojanized parameters for the model running on the system-under-attack are uploaded after in-field deployment.
### Detectability & Countermeasures
Defenses against trojan insertion can be employed from both the hardware and the machine-learning side.
**Detectability.** The overhead of our hardware manipulation is minimal. In theory, the attack can be detected by comparison with a trojan-free circuit [64]. However, no such golden model exists when the designer or a supplier inserts the trojan. Even formal verification approaches [65, 66] are ineffective as they would have to be performed or at least set up by the malicious entity. In addition to scaling issues when considering a large IP core such as the DPU [67], similar arguments can be made for proof-carrying hardware [68]. Techniques such as information flow security verification require at least some knowledge of the IP internals to identify so-called _observe points_ [67]. The only viable option is to analyze the circuit itself for malicious functionality. For FPGAs, this requires tedious reverse engineering of the bitstream format and, crucially, interpreting whether there are any malicious functions hidden within an unknown architecture. For ASICs, one needs to image the chip layer by layer using a scanning electron microscope (SEM) and extract a netlist using computer vision, a task that requires highly specialized equipment, skills, and considerable monetary resources. Even after successful netlist recovery, one faces again the problem of detecting a trojan within an unknown circuit. We claim that such efforts are out of reach for most entities in practice. Although nation states have the resources to conduct such investigations, the required effort does not scale to a wide range of ICs.
**Hardware Countermeasures.** Two antagonistic approaches could be followed to harden a hardware design against manipulations. As a first strategy, cryptographic and obfuscation measures can be used to protect the HDL design from manipulations. This demands a trusted design process, requiring strict access restrictions for the design files, vetting of all involved employees, and verification of the employed design tools. Furthermore, this chain of trust must be extended to all third-party IP cores utilized in the manufacturing process. Another strategy is switching to an open-source approach and ensuring public access to all design sources, allowing for third-party verification.
Although both strategies can help eliminate possible tampering along the supply chain, a trojan can still be inserted during the final manufacturing, for example, by replacing the trusted netlist with a trojanized clone. Consequently, the use of FPGAs and ASICs for security-critical machine-learning applications requires at least one trusted production facility.
**Machine Learning Countermeasures.** Since our attack operates from within the hardware accelerator, current approaches for detecting machine-learning backdoors [7, 8] fail, as the outside model remains valid. Attempts to spot the backdoor during execution [9, 10, 69], e.g., by monitoring neuron activations, may be a solution, but incur significant overhead and counteract the purpose of hardware acceleration. Moreover, the slight accuracy decrease induced by our backdoor is similar to that of quantization, so the attack cannot be detected from the model's accuracy either. Hence, to detect the malicious behavior, one needs to compare the outputs of the hardware-accelerated model to the original quantized version running in software. While this strategy allows identifying prediction discrepancies, the backdoor and its trigger still remain unknown. Currently, we lack appropriate methods to identify backdoors with this hybrid form of hardware-software testing.
### Limitations
We make strong assumptions on the attacker's capabilities. Our attacker model assumes that adversaries can manipulate the design of a hardware accelerator and possess knowledge of the executed learning model. The required sophistication might only be in reach for nation-state actors, but other well-organized adversaries could also come into play. In addition to developing the trojan in-house, an adversary could pressure the original provider of the hardware accelerator to implement the trojan or infiltrate their operations. Attacks on the hardware level have been a serious concern for many years [70], which has recently triggered major investments by governments around the world [15, 16].
One hurdle to mounting a trojan attack like ours is the assumed access to the trained learning model. However, given that we have reverse engineered Xilinx' proprietary .xmodel format, a similar attack could also be performed for an FPGA deployed in-field. Still, we expect quantization artifacts to further impair the accuracy and trigger success rate of the resulting backdoor.
Finally, our approach allows only for attacking a single learning model executed on the accelerator. If that model changes, the trojanized parameters stored in hardware would need to be updated. Even if an update mechanism has been built into the hardware, this process is cumbersome and requires access to the updated model again.
### Related Work
**Machine-Learning Backdoors.** The rising popularity of neural networks also raised interest in backdoor attacks. Among the first, Gu et al. [29] showed that an attacker who controls part of the training data can insert a backdoor into the network by adding a trigger and an incorrect class label to certain training examples. Further approaches that relax the assumption of access to the training data [32], the visibility, and position of the trigger [20, 71, 72] or the number of malicious examples required [30] exist. Stealthy backdoors that are inserted during model compilation [73], model quantization [74], or implemented by the software execution environment [75] were also proposed recently.
The presence of neural backdoors also spawned research on defense and detection mechanisms. One line of research tries to detect directly whether a trigger is present in the model, for example by finding shortcuts between output classes [7], training meta models to classify networks [8], or utilizing statistical properties from model predictions [76, 77]. An orthogonal line of research tries to detect whether a given input image contains a trigger, mostly by finding anomalies in activations or latent representations when propagating the input through the model at hand [9, 10, 69].
**Hardware Trojans.** For a general overview of hardware trojans, see [12, 78, 79]. The idea of hardware trojans targeting neural networks was first proposed by Clements et al. [80] and Li et al. [81] in 2018. Other works [82] require manipulations to the inputs to trigger the hardware trojan which then bypasses the machine-learning accelerator altogether. More recent trojan attacks trigger on intermediate layer outputs [83], are inserted into the on-chip memory controller [84], or target activation parameters [85]. Neither of these works addresses the insertion of a machine-learning backdoor into a trained learning model during inference.
Hardware-supported machine-learning acceleration is also susceptible to non-trojan hardware attacks. Liu et al. injected glitches for untargeted misclassification [86] and demonstrated applicability using Xilinx Vitis AI. Hong et al. studied hardware fault attacks on deep neural networks (DNNs) and found that for most models a change of a single parameter can cause an accuracy drop of around 90% [87]. Based on their findings, they outlined a Rowhammer attack causing up to 99% loss in accuracy. Caner Tol et al. presented a similar backdoor attack again using Rowhammer [88]. Another line of research investigates the effects of RAM collisions caused by concurrent writes [89].
### Future Work
We propose a new paradigm for machine-learning backdoors by taking the executing hardware into account. To counter this threat, countermeasures--ideally operating on the learning model as a black box--should be developed to detect low-level hardware manipulations during in-field operation. Furthermore, the number of parameters required to realize minimal backdoors can possibly be reduced further. Another interesting aspect is the influence of quantization and how it could be incorporated into the backdoor generation process directly. Finally, an investigation of similar hardware-based attacks during model training appears worthwhile.
## 6 Conclusion
Our work extends the lively front of adversarial machine learning to a new so far trusted component: hardware acceleration. We present a trojan framework that backdoors a learning model at the lowest system layer during inference. All manipulations remain within the hardware, hence, no changes to the model can be observed, defeating existing defenses against backdoor attacks. To realize the trojan, we introduce the concept of a minimal backdoor that requires only a few parameter changes to implant malicious functionality. Even after quantization, \(30\) changes suffice to inject a backdoor with a trigger success rate of \(78.15\%\) and an overall prediction accuracy of \(94.42\%\). We demonstrate the applicability of this attack by implanting the trojan into a commercial machine-learning accelerator from Xilinx.
Our work echoes recurring concerns from the hardware security community [15, 70, 90]. The trojan attack illustrates that hardware should not be blindly trusted and the integrity of accelerators for machine learning needs to be carefully verified and protected, similar to other security-critical components. We urge manufacturers, IP vendors, and system integrators alike to pay close attention to this threat, and call on the research community to develop countermeasures that prevent the exploitation of this new class of attacks.
## Acknowledgements
The authors acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2092 CASA-390781972 and by the German Federal Ministry of Education and Research under the grant BIFOLD23B.
|
2301.02084 | Re-calibration of the Sunspot Number: Status Report | We report progress on the ongoing recalibration of the Wolf sunspot number
(SN) and Group sunspot number (GN) following the release of version 2.0 of SN
in 2015. This report constitutes both an update of the efforts reported in the
2016 Topical Issue of Solar Physics and a summary of work by the International
Space Science Institute (ISSI) International Team formed in 2017 to develop
optimal SN and GN re-construction methods while continuing to expand the
historical sunspot number database. Significant progress has been made on the
database side while more work is needed to bring the various proposed SN and
(primarily) GN reconstruction methods closer to maturity, after which the new
reconstructions (or combinations thereof) can be compared with (a)
``benchmark'' expectations for any normalization scheme (e.g., a general
increase in observer normalization factors going back in time), and (b)
independent proxy data series such as F10.7 and the daily range of variations
of Earth's undisturbed magnetic field. New versions of the underlying databases
for SN and GN will shortly become available for years through 2022 and we
anticipate the release of next versions of these two time series in 2024. | F. Clette, L. Lefèvre, T. Chatzistergos, H. Hayakawa, V. M. Carrasco, R. Arlt, E. W. Cliver, T. Dudok de Wit, T. Friedli, N. Karachik, G. Kopp, M. Lockwood, S. Mathieu, A. Muñoz-Jaramillo, M. Owens, D. Pesnell, A. Pevtsov, L. Svalgaard, I. G. Usoskin, L. van Driel-Gesztelyi, J. M. Vaquero | 2023-01-05T14:43:13Z | http://arxiv.org/abs/2301.02084v1 | # Recalibration of the Sunspot Number: Status Report
###### Abstract
We report progress on the on-going recalibration of the Wolf sunspot number (S\({}_{\rm N}\)) and Group sunspot number (G\({}_{\rm N}\)) following the release of version 2.0 of S\({}_{\rm N}\) in 2015. This report constitutes both an update of the efforts reported in the 2016 Topical Issue of Solar Physics and a summary of work by the
International Space Science Institute (ISSI) International Team formed in 2017 to develop optimal \(S_{N}\) and \(G_{N}\) re-construction methods while continuing to expand the historical sunspot number database. Significant progress has been made on the database side while more work is needed to bring the various proposed \(S_{N}\) and (primarily) \(G_{N}\) reconstruction methods closer to maturity, after which the new reconstructions (or combinations thereof) can be compared with (a) "benchmark" expectations for any normalization scheme (e.g., a general increase in observer normalization factors going back in time), and (b) independent proxy data series such as F10.7 and the daily range of variations of Earth's undisturbed magnetic field. New versions of the underlying databases for \(S_{N}\) and \(G_{N}\) will shortly become available for years through 2022 and we anticipate the release of next versions of these two time series in 2024.
## 1 Introduction
The sunspot number (\(S_{N}\); Clette et al., 2014, 2015; Clette and Lefevre, 2016) is a time series (1700-present) that traces the 11-yr cyclic and secular variation of solar activity and thus space weather, making \(S_{N}\) a critical parameter for our increasingly technology-based society (e.g., Baker et al., 2008; Cannon et al., 2013; Rozanov et al., 2019; Hapgood et al., 2021). While modern-day observations may provide better parameters for characterizing the space weather effects, \(S_{N}\) is the oldest direct observation of solar activity and is thus an indispensable bridge linking past and present solar behavior. As such, the sunspot number is the primary input for reconstructions of total solar irradiance (TSI) for years before 1940 (Wang et al., 2005; Krivova, Balmaceda, and Solanki, 2007; Kopp, 2016; Kopp et al., 2016; Wu et al., 2018b; Coddington et al., 2019; Wang and Lean, 2021).
The original formula of Rudolf Wolf (1816-1893; Friedli, 2016) for the daily sunspot number of a single observer is given by
\[S_{N}=k\left[\left(10\times G\right)+S\right] \tag{1}\]
(Wolf, 1851, 1856), where \(G\) is the number of sunspot groups on the solar disk on a given day, \(S\) denotes the total number of individual spots within those groups, and \(k\) is a normalization factor that brings different observers to a common scale (\(k\) is the time-averaged ratio of daily sunspot number \(S_{N}\) of the primary reference observer to that of a secondary observer). Because \(S=10\,G\) on average (Waldmeier, 1968; Clette et al., 2014), the two parameters have about equal weight in \(S_{N}\).
\(G\) and \(S\) counts can vary between observers because of differences in telescopes, visual acuity, and environmental conditions. These factors are susceptible to both gradual (e.g., visual acuity) and abrupt (equipment-related) variations. Moreover, as we will see below, the working definitions of both \(G\) and \(S\) have changed over time. Getting the time-varying scaling relationships (\(k\)-coefficients) between multiple sets of overlapping observers correct over a time span of centuries - with frequent data gaps due to weather for individual observers and occasional periods of no overlap between any observers - is the daunting task of any reconstruction method. Finally, to add unavoidable noise to the process, there are differences between the daily \(G\) and \(S\) observations at different locations that are independent of instrumentation/environment/observer acuity and practice, reflecting only separation in universal time, either because of solar rotation (spots appearing at the Sun's east limb and/or disappearing at the west limb) as well as intrinsic solar changes, i.e., the evolution of spots and groups.
In order to illustrate the kind of differences that may appear between two sunspot observations, Figure 1 shows drawings made on the same day at two different stations. Figure 1 (a) is a drawing from
the Specola Solare Ticinese station in Locarno, Switzerland, showing the delineation of groups and counting of spots on 14 March 2000 near the maximum of solar cycle 23 (1996-2008). Figure 1 (b) shows a sunspot drawing taken on the same date, about 7 hours later, at the US National Solar Observatory at Sacramento Peak in Sunspot, New Mexico (Carrasco et al., 2021a). While the drawings exhibit significant similarities, there are differences in the total number of sunspots and groups.
Figure 1: (a) Sunspot drawing by Sergio Cortesi from Specola Solare Ticinese on 14 March 2000. Nine groups and 88 individual spots (in the table at the top right corner marked as “flecken” which is “spots” in German) were observed to yield SN (Locarno) =178. ([https://www.specola.ch/e/drawings.html](https://www.specola.ch/e/drawings.html))
Figure 1: (b) Sunspot drawing by Tim Henry from the US National Solar Observatory at Sacramento Peak on 14 March 2000, about 7 hours after the Locarno observation in Figure 1(a). Eight groups and 58 sunspots were observed to yield SN (SacPeak) = 138. Fainter contours outline photospheric faculae, visible near the solar limb. ([https://ngdc.noaa.gov/stp/space-weather/solar-data/solar-imagerv/photosphere/sunspot-drawings/sac-peak/](https://ngdc.noaa.gov/stp/space-weather/solar-data/solar-imagerv/photosphere/sunspot-drawings/sac-peak/))
Although the difference in the number of groups between the two drawings is minimal (8 vs. 9), there are several other differences: e.g., groups 113 and 122 shown in the Locarno observation are missing from the Sacramento Peak drawing, and an unnumbered group to the northwest of AR 8910 that consists of a single pore is not present in the Locarno drawing. Just this single spot thus produces a difference of 11 in \(S_{N}\). Groups 113 and 122 and the unnumbered group northwest of AR 8910 were all located close to the solar limb, where visibility effects could be strongest. Such discrepancies can thus result from the difference in the times at which the observations were made, reflecting intrinsic changes on the Sun, viz., sunspot appearance or disappearance, growth or decay, as well as solar rotation. A study of the random noise in \(S_{N}\) (Dudok de Wit et al., 2016) shows that the random evolution of solar active regions dominates over observing errors in the discrepancies between observations for the same date. Overall, in this case, \(S_{N}\)(Locarno; 14 Mar 2000) = 178 while \(S_{N}\)(SacPeak; 14 Mar 2000) = 138.
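As a simple numerical check of Equation 1, the counts reported in Figure 1 reproduce both quoted daily values when the normalization factor is taken as \(k=1\):

```python
def wolf_number(groups, spots, k=1.0):
    """Daily Wolf sunspot number S_N = k * (10*G + S) of a single observer."""
    return k * (10 * groups + spots)

print(wolf_number(9, 88))   # Locarno, 14 March 2000        -> 178
print(wolf_number(8, 58))   # Sacramento Peak, same day     -> 138
```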
Wolf modified the \(S_{N}\) time series that he had initiated several times before his death in 1893, but only for the years before 1848 when he began observing, i.e., for recovered data from early observers (see Section 2.1 in Clette et al., 2014). Wolfer, Wolf's successor as observatory director at Zurich, made a final revision to \(S_{N}\) in 1902 (for the 1802-1830 interval) based on the addition of new data from Kremsmunster. Thereafter, the \(S_{N}\) series, as maintained by the Zurich Observatory until 1980 and subsequently by the Royal Observatory of Belgium (ROB), in Brussels, was left unchanged but continuously extended by appending new data to keep it up to date until the present.
\(S_{N}\) remained the only long-term time series directly retracing solar activity until 1998, when Hoyt and Schatten (1998a,b) developed a group sunspot number (\(G_{N}\))
\[G_{N}=k^{\prime}G \tag{2}\]
where \(k^{\prime}\) is the station normalization factor. This simpler index does not include the count of individual spots, which characterizes the varying size of the sunspot groups but is often missing in early observations.
A key advantage of the \(G_{N}\) series was that Hoyt and Schatten (1998a,b) were able to reconstruct it back to the first telescopic observations of sunspots by Galileo and others ca. 1610 vs. the initial year of 1700 for Wolf's \(S_{N}\) series. The new \(G_{N}\) thus encompassed the period of extremely weak sunspot activity from 1645-1715 known as the Maunder Minimum (Eddy, 1976; Sporer, 1887, 1889; Maunder, 1894, 1922). Until recently, another crucial advantage of \(G_{N}\) was the availability of all the raw source data in the form of an open digital database, while the \(S_{N}\) source data, forming a much larger collection (more than ~800,000 observations), were available in digital format only since 1981 (Brussels period). Older \(S_{N}\) data existed only in their original paper form or were even long considered as partly lost (see Section 2.1.1 below). The unavailability of those core data has so far prevented a full end-to-end reconstruction of the original \(S_{N}\) series from its base elements.
The \(S_{N}\) parameter defined by Wolf in Equation 1 requires more detailed observations than \(G_{N}\) because of the inclusion of the S (spot) component. However, early observations are often ambiguous, as many observers did not make a clear distinction between spots and groups, using the same name "sunspot" or "kernel" for both. Moreover, as can be seen in Figure 1, large spots are enveloped in a penumbra that may contain one or several dark umbrae. Depending on the observer and epoch, each penumbra will either be counted as 1 in the S count, regardless of the number of embedded umbrae, or all umbrae will be counted separately, leading to a higher S value. Both the S and G counts are affected by the acuity threshold problem faced by each observer of distinguishing the smallest spots, in particular,
for small groups consisting of a solitary spot (for which G = 1, S = 1, and S\({}_{N}\) = 11) versus the complete absence of spots (G, S, and S\({}_{N}\) = 0). Finally, the G variable presents its own difficulty: the separation of a cluster of magnetic dipoles into one or more distinct groups, depending on the cluster morphology and evolution. This splitting of groups depends on personal practices and on the scientific knowledge at different epochs, and becomes difficult mainly for a heavily spotted Sun, near the solar cycle maximum.
### Motivation for the ISSI International Team on recalibration of the sunspot number
The series G\({}_{N}\) and S\({}_{N}\)1 can be considered to be equivalent if one assumes that S is proportional to G, on average, with a constant of proportionality that does not change over time. In this case, differences between the form of the G\({}_{N}\) and S\({}_{N}\) time series would indicate calibration drifts in one or both time series. However, the possibility of a long-term change in solar behavior would mean that the ratio S/G could have varied and this would be a separate cause of deviations of G\({}_{N}\) from S\({}_{N}\). A modulation of the S/G ratio by the solar cycle has actually been found by Clette et al. (2016b) and Svalgaard, Cagnotti and Cortesi (2017), indicating a non-linear relation between the two indices that seems to be stable in time. Therefore, where they overlap, S\({}_{N}\) and G\({}_{N}\) can be considered as close equivalents but with different detailed properties and calibrations.
Footnote 1: Throughout this update, we will use S\({}_{N}\) and G\({}_{N}\) generically for composite time series of sunspot numbers and group numbers irrespective of the calibration approach.
While the original Zurich S\({}_{N}\) (termed S\({}_{N}\) (1) herein) and Hoyt and Schatten (1998a,b; HoSc98) G\({}_{N}\) series agreed reasonably well over their 1874-1976 normalization interval, HoSc98 was significantly lower for maxima before ~1880 (see Figure 1(a) in Clette et al., 2015, and Figure 2 (below)). This situation was puzzling, if not unacceptable. What was the cause of the divergence? Could the two series be reconciled? Such questions have led to a decade-long effort by the solar community to construct more homogeneous and trustworthy S\({}_{N}\) and G\({}_{N}\) time series beginning with a sequence of four Sunspot Number Workshops from 2011-2014 (Cliver, Clette, and Svalgaard, 2013a; Cliver et al., 2015). This effort produced major recalibrations of both S\({}_{N}\) (1) by Clette and Lefevre (2016) to yield S\({}_{N}\) (2) and HoSc98 by Svalgaard and Schatten (2016) to yield SvSc16.
These recalibrations removed the marked divergence of S\({}_{N}\) (1) and HoSc98 before ~1880 (see Figure 1(b) in Clette et al., 2015) and reduced differences during the 18\({}^{th}\) century (Clette et al., 2015). However, questions were raised about the validity of the methods used (Usoskin et al., 2016a; Lockwood et al., 2016) and new ideas for recalibration emerged. The revisions of the Wolf S\({}_{N}\) series and the HoSc98 G\({}_{N}\) series were accompanied by an independent revision/extension of S\({}_{N}\) (Lockwood, Owens, and Barnard, 2014a,b; Lockwood et al., 2016; LEA14) and novel reconstructions of G\({}_{N}\) by Usoskin et al. (2016a; UEA16) and Chatzistergos et al. (2017; CEA17). Modifications to UEA16 were published by Willamo et al. (2017; WEA17), and WEA17 was subsequently modified by basing it on the new Vaquero et al. (2016; V16 herein; 16 for the year of release) database\({}^{2}\) rather than the original Hoyt and Schatten (1998a,b) data base (Usoskin et al., 2021; UEA21). The resultant situation, documented in a Topical Issue of Solar Physics (Clette et al., 2016a), was similar to that which motivated the Sunspot Number Workshops, but writ larger (Cliver, 2017;
Footnote 2: Both the Vaquero et al. (2016; V16) data base for G\({}_{N}\) reconstruction and the original Hoyt and Schatten (1998a,b) observer data base can be found at [https://www.sidc.be/silso/groupnumbery3](https://www.sidc.be/silso/groupnumbery3).
Cliver and Herbst, 2018): several independently constructed series that diverged before ~1880 (Figure 2) - largely bounded by those of SvSc16 and HoSc98. This state of affairs prompted a successful International Space Science Institute (ISSI) Team proposal in 2017 entitled "Recalibration of the Sunspot Number Series" by Matt Owens and Frederic Clette. Team meetings were held in Bern in January 2018 and August 2019. The ultimate goal, yet to be met, of the proposal was to provide consensus "best-method" reconstructions of \(S_{\text{N}}\) and \(G_{\text{N}}\) including quantitative time-dependent uncertainties, for use by the scientific community. Here we report the results of this effort and provide an update of the 2016 Topical Issue.
In Section 2, for \(S_{\text{N}}\) and \(G_{\text{N}}\), in turn, we present highlights of the data recovery effort and discuss work on sunspot number reconstruction methodologies. In Section 3, we discuss two "benchmarks" for sunspot number recalibration (quasi-constancy of spot-to-group ratios over time and an increase, as one goes back in time, of k- and k\({}^{\prime}\)-values). In Section 4, we review associated work on independent long-term measures of solar activity. Such proxies as quiet-Sun solar radio emission, solar-induced geomagnetic variability, and cosmogenic radionuclide concentrations can be used as correlates/checks on new versions of \(S_{\text{N}}\) and \(G_{\text{N}}\). We stress, however, that, thus far, the sunspot and group number back to 1610 remain fully independent of these proxies. In Section 5, we discuss efforts to connect primary observers Schwabe and Staudacher across the data-sparse interval from 1800-1825 and the challenges of the first ~140 years of the \(S_{\text{N}}\) and \(G_{\text{N}}\) time series (1610-1748). Sections 6 and 7 contain a summary of ISSI Team results and a perspective/prospectus for the on-going recalibration effort, respectively.
## 2 Solar Physics Topical Issue update / ISSI Team report
Figure 2: Eleven-year running means of eight sunspot series: The original Wolf \(S_{\text{N}}\) (\(S_{\text{N}}\)(1.0)) series, the Hoyt and Schatten (1998a,b) \(G_{\text{N}}\) (HoSc98) series, the Lockwood et al. (2014a,b) \(S_{\text{N}}\) (LEA14) series, the Clette and Lefèvre (2016) corrected \(S_{\text{N}}\) (\(S_{\text{N}}\)(2.0)) series, the Svalgaard and Schatten (2016) reconstructed \(G_{\text{N}}\) (SvSc16) series, the Cliver and Ling (2016) \(G_{\text{N}}\) (Cli16) series, the Chatzistergos et al. (2017) \(G_{\text{N}}\) (CEA17) series, and the Usoskin et al. (2021a) \(G_{\text{N}}\) (UEA21) series with predecessors given in Usoskin et al. (2016a; UEA16) and Willamo et al. (2017; WEA17). All records are scaled to the mean \(S_{\text{N}}\)(2.0) series over 1920-1974.
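The comparison in Figure 2 involves two simple operations, an 11-year running mean and a rescaling of each series to a common reference over a fixed interval (1920-1974 in the figure). A minimal sketch of both, applied to hypothetical yearly arrays, is given below; the function and variable names are illustrative only.

```python
import numpy as np

def running_mean(values: np.ndarray, window: int = 11) -> np.ndarray:
    """Centered running mean; years where the window is incomplete are NaN."""
    out = np.full_like(values, np.nan, dtype=float)
    half = window // 2
    for i in range(half, len(values) - half):
        out[i] = values[i - half:i + half + 1].mean()
    return out

def rescale(series, reference, years, lo=1920, hi=1974):
    """Scale `series` so that its mean over [lo, hi] matches `reference`."""
    mask = (years >= lo) & (years <= hi)
    return series * reference[mask].mean() / series[mask].mean()

# Hypothetical yearly values for 1900-2000.
years = np.arange(1900, 2001)
rng = np.random.default_rng(0)
ref = 80 + 60 * np.sin(2 * np.pi * (years - 1900) / 11) + rng.normal(0, 5, years.size)
other = 0.8 * ref + rng.normal(0, 5, years.size)

smoothed = running_mean(rescale(other, ref, years))
```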
### Wolf sunspot number (S\({}_{\rm N}\))
#### 2.1.1 Data Recovery: the Sunspot Number Database
Data recovery is a never-ending task, and future revisions of sunspot series will occur intermittently as part of a continuous upgrading process. The last few years have witnessed significant advances in recovery and digitization of data underlying the S\({}_{\rm N}\) time series, particularly in the context of the ISSI Team collaboration over 2019-2021. The focus in this section is on sunspot observations that were either made at Zurich and later at Locarno and Brussels for the creation of S\({}_{\rm N}\), or collected and archived from external observatories by the Directors of the Zurich Observatory, and since 1981 by the World Data Center for Sunspot Index and Long-term Solar Observations (WDC-SILSO) in Brussels, for this purpose.
The Zurich sunspot number produced by Wolf and his successors from 1700-1980 is based on three types of data: (1) the raw counts from the Zurich observers (the director and the assistants) from 1848-1980, (2) corresponding counts sent to the Observatory of Zurich by external auxiliary observers, and (3) historical observations for years before 1849 collected mainly by Wolf but also by A. Wolfer, his successor. Much of this material was published over the years in the bulletins of the Zurich Observatory, the Mittheilungen der Eidgenossischen Sternwarte Zurich (hereafter, the Mittheilungen). This is a fundamental resource for any future re-computation of S\({}_{\rm N}\).
Until recently, this large collection of printed data was completely inaccessible in digital form. During 2018-2019, a full encoding of the Mittheilungen data tables, from the printed originals, was undertaken at WDC-SILSO in Brussels, constituting the initial segment of the sunspot number database. This includes all data published between 1849 and 1944 (black curve in Figure 3) when Max Waldmeier, the last Director of the Zurich Observatory decided to cease publishing raw data. This database now contains 205,000 individual daily sunspot counts (Clette et al., 2021): it currently includes data from the Zurich and auxiliary observers between 1849 and 1944, and all long data series in the early historical part, from 1749 to 1849, that were found and collected by R.Wolf in his epoch. (NB: Isolated observations, randomly scattered over time, are less exploitable and will be added later.) In addition to daily counts of spots and groups, the database includes metadata - in particular, changes of observers or instruments.
Although the published data were globally comprehensive, some parts are missing. The Mittheilungen provide the full set of observations from the Zurich team itself only from 1871 onwards, when Wolf started to publish separate tables for himself and his assistants as well as data received from external observers (Friedli, 2016). Before 1871, Wolf published only a single yearly table, with all of his own observations, and data from external observers or assistants were only included to fill the remaining random missing days. Later on, after World War I (WW I), Alfred Wolfer added many external observers, creating a truly international network. However, given the strong increase in the amount of collected data, for financial reasons, he greatly reduced the published raw data, limiting them to those from Zurich observers and ~10 external observers. When William O. Brunner succeeded Wolfer as director of the Zurich Observatory in 1926, he stopped publishing data from external observers. Only observations from the Zurich Observatory itself and from Karl Rapp (Locarno, Switzerland) were published after that year. Then in 1945, when Max Waldmeier became the new Director, the publication of source data ceased completely, and only a list of contributing stations was provided for each year.
All those unpublished data, collected during the Zurich era, were stored in paper archives at the Zurich Observatory, which were supposed to include the whole collection, though in a less directly
accessible manner. Unfortunately, following the closing of the Observatory of Zurich in 1980, when the curatorship of the sunspot number was transferred to ROB, those unpublished archives went missing. Figure 3 (brown and blue curves) shows the resulting major ~60-year shortfall (1919-1979) in the preserved raw data. The absence of a large portion of the 1919-1979 sunspot data constituted a critical missing link between the early Zurich epoch and the modern international sunspot number produced in Brussels since 1981, for which all data are preserved in a digital database3. In particular, this period spans one of the main scale discontinuities identified in the Zurich series, the "Waldmeier jump" in 1947 (Clette et al., 2014; Lockwood, Owens, and Barnard, 2014a; Clette and Lefevre, 2016; Svalgaard, Cagnotti, and Cortesi, 2017).
Footnote 3: 1980 was a disturbed closing year at the Zurich Observatory. The standard yearly tables of source data were not produced by the Zurich staff, and the resulting sunspot numbers were published as crude typewritten pages instead of the normal printed edition of the Mittheilungen. However, the original reports sent to Zurich by all auxiliary stations were recovered and were transferred in 2007 to the World Data Center SILSO in Brussels. Those original reports were exploited for the production of the revised series S\({}_{\rm N}\) V2.0 in 2015, as an important link to join the Zurich series and the international sunspot number across the 1980-1981 transfer from Zurich to Brussels. As a consequence of this disorganization in 1980, the collection of standard Zurich source tables thus ends in 1979. All internal data from Zurich observers for 1980 as well as the documentation about the monthly S\({}_{\rm N}\) processing were provided to the new team in Brussels by the assistants of the Zurich observatory at the occasion of personal visits in Zurich and Brussels.
Finally, two important source documents also bring essential information regarding the early part of Wolf's period, between 1849 and 1877: Wolf's handwritten source books and the collection of input-data tables maintained first by Wolf and then by Wolfer until 1908 (Friedli, 2020). Both documents are preserved at the ETH Zurich University Archives, and have now been entirely scanned and made accessible online. Wolf's source books provide yearly tables and contains invaluable information about the calculation of each daily sunspot number, which was never published in the Mittheilungen: e.g., the distinction between Wolf and Schwabe's daily numbers, where they overlapped between 1849 and 1868, and yearly k coefficients for each external observer. The source tables are more comprehensive and list all raw counts from all observers, back to 1610, including data that were never used for production of the sunspot number. So far, part of the source book has been encoded (1849-1877) by the Rudolph Wolf Society, and is accessible on-line (Friedli, 2016; www.wolfinstitute.ch). Those original data complement the published tables in the Mittheilungen, and will prove essential to better understand the multiple methodological changes introduced by Wolf during his long career - in particular, the scale transfers between Schwabe and Wolf over 1849-1868 and between Wolf and Wolfer over 1877-1893 (Friedli 2020, see below; Bhattacharya et al., 2022, submitted).
Figure 3: Evolution of the number of contributing stations collected by Zürich for each year. The raw source data published in the Mittheilungen of the Zürich Observatory (gray curve) have now been digitized into a new database. After 1918, the number of external stations grew significantly, but the Zürich Observatory no longer published all of its source data, and even ceased publishing them completely after 1945. Those unpublished data, stored in handwritten archives but missing until recently, are shown by the brown and blue-shaded curves. In 2019, the archives for the period 1945-1979 were finally recovered (blue part), and the original sheets were scanned, but the values still need to be extracted to fill the database. The archived source data from 1919-1944 (brown part) are still missing, and are the target of further searches. The two vertical shaded bands mark the two World Wars, both of which left a clear imprint on the Zürich data set. Tenures of the Zürich Observatory Directors are indicated at the top of the panel. (Figure adapted from Clette et al., 2021.)
#### 2.1.2 Reconstruction Methodology for S\({}_{\rm N}\)
_2.1.2.1 The new S\({}_{\rm N}\)(2.0) series_
Two primary \(S_{N}\) time series have been constructed thus far: the original series \(S_{N}\) (version 1.0; 1700-2014) constructed by Wolf (1851, 1856) and its first revised version \(S_{N}\)(2.0; 1700-present) (Clette and Lefevre, 2016). \(S_{N}\)(1.0) and \(S_{N}\)(2.0) give daily values starting in 1818, monthly values starting in 1749 and yearly averages starting in 1700. Both series are available at [http://www.sidc.be/silso/](http://www.sidc.be/silso/). In addition, Lockwood, Owens, and Barnard (2014a,b) constructed an \(S_{N}\) series (LEA14; 1610-2012) by appending a scaled-up Hoyt and Schatten (1998a,b) \(G_{N}\) series for years prior to 1749 to \(S_{N}\)(1.0) with corrections applied ca. 1850 and 1950, using geomagnetic indices as an external reference. Friedli (2016, 2020; F(\(S_{N}\)(1.0)\({}_{\text{COR}}\))) reconstructed the 1877-1893 segment of \(S_{N}\)(1.0) for which Wolf and Wolfer overlapped. Specifics of these four series are summarized in Table 1.
Table 1. S\({}_{\rm N}\) time series
| Ref. | Abbrev. | Years | Cadence | Source |
| --- | --- | --- | --- | --- |
| 1 | S\({}_{\rm N}\)(1.0)* | 1818-2015 | daily | Zurich/SILSO |
| 2 | S\({}_{\rm N}\)(2.0) | 1818-2020 | daily | Zurich/SILSO |
| 3 | LEA14 | 1610-2012 | | Lockwood et al. (2014b; as supporting information) |
| 4 | F(S\({}_{\rm N}\)(1.0)\({}_{\rm COR}\)) | 1877-1893 | | Friedli (2016, 2020) |
Of the three above-mentioned corrections in S\({}_{\rm N}\)(2.0), only the first one led to a full recalculation of the S\({}_{\rm N}\), based on the raw input data, which are fully archived in digital form by SILSO since 1981 (Clette et al., 2016, 2016b). The other two corrections consisted in the application of a correction factor to the original S\({}_{\rm N}\)(1.0) Zurich series over the time interval affected by each diagnosed inhomogeneity (Clette and Lefevre, 2016). Indeed, in the Zurich system before 1981, most of the daily sunspot numbers, as published in the Mittheilungen and in Waldmeier (1961), were simply the raw Wolf numbers (k coefficient fixed at unity) from the primary observer at the Zurich observatory, without any kind of statistical processing (Clette et al. 2014; Friedli 2016, 2020). In this scheme, the numbers from external auxiliary stations were only used on days when the Sun could not be observed in Zurich (on average, less than 20% of all days), and thus played a secondary role. This approach thus rested entirely on the personal life-long stability of individual reference observers, and on the assumed equivalence "by construction" of successive observers observing from the same place with mostly the same instrument. Any correction thus implies the use of alternate observers. Unfortunately, as described in the previous section, until recently, the Zurich source data were not available in digital form and part of the archives were even missing, which prevented any reconstruction from a wide base of source data from multiple alternate observers.
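The pre-1981 Zurich practice described above can be caricatured in a few lines of Python: take the primary observer's raw Wolf number (k fixed at unity) whenever Zurich observed, and fall back on an auxiliary station, scaled by its k coefficient, only when it did not. The numbers and the fallback order below are invented for illustration.

```python
def zurich_daily_value(primary, auxiliaries):
    """Daily sunspot number following the old Zurich scheme.

    primary     : raw Wolf number from the Zurich primary observer, or None
    auxiliaries : list of (raw_wolf_number, k_coefficient) from other stations
    """
    if primary is not None:
        return primary              # k = 1 for the primary observer
    for raw, k in auxiliaries:      # first available auxiliary fills the gap
        if raw is not None:
            return k * raw
    return None                     # no observation at all that day

# Example: Zurich clouded out, first auxiliary reports a raw count of 95.
print(zurich_daily_value(None, [(95, 0.85), (102, 0.80)]))  # -> 80.75
```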
Reconstructing a sunspot database for the entire Zurich period before 1980 will thus be a key step for any future upgrade of the S\({}_{\rm N}\) series. This has been the focus of efforts over the last few years, as described in the previous section and also in Section 2.2.1. Although the database is still incomplete at this time and a full end-to-end reconstruction cannot be undertaken yet, the data gathered thus far have already made it possible to clarify two primary scale transitions in the S\({}_{\rm N}\) series.
_2.1.2.2 The Wolf-Wolfer scale transfer_
The Wolfer to Wolf conversion factor of 0.6, embedded in the S\({}_{\rm N}\)(1.0) time series, was introduced because Wolfer, Wolf's successor, counted more groups and spots than Wolf in simultaneous observations spanning 17 years (1877-1893). Those higher counts resulted from two simultaneous causes. First, Wolfer used the standard telescope of Zurich Observatory (Aperture: 83 mm; magnification 64x), while Wolf was only using a much smaller portable telescope during that part of his observing career (Aperture: 40mm, Magnification: 20x). This enabled Wolfer to see and count single small pores (small sunspots without penumbra and area from 0.5-4 millionths of a solar hemisphere (\(\upmu\)sh); Tlatov et al., 2019) that were undetectable in Wolf's small telescope. In addition, as Clette et al. (2007) wrote: _"In 1882, A. Wolfer... introduced an important change in the counting method (Hossfield, 2002 [see also Wolf 1857; Wolfer 1895; Kopecky, Ruzickova-Topolova, and Kuklin, 1980]). While Wolf had decided not to count the smallest sunspots visible only in good conditions and also not to take into account multiple umbrae in complex extended penumbrae, in order to better match his counts with the earlier historical observations, the new index included all small sunspots and multiple umbrae. [Figure 1 illustrates how small groups with low spot counts balance the effect of large groups with many spots in S\({}_{\rm N}\).] By removing factors of personal subjectivity, this led to a much more robust definition of the S\({}_{\rm N}\) that formed the baseline for all published counts after 1882. To complete this transition, A. Wolfer determined the scaling ratio between the new count and the Wolf SN series over the 16-year Wolf-Wolfer overlap period (1877-1892). This led to the constant Zurich reduction coefficient (K\({}_{2}\) = 0.6) [that was used] to scale... [S\({}_{\rm N}\)(1.0)] to the pre-1882 Wolf sunspot counts." (This change does not, however, affect the G\({}_{\rm N}\) where multiple umbrae within the same penumbra do not alter the number of groups.) However, already in Wolfer's original study (Wolfer 1895), the yearly mean Wolf-Wolfer ratio shows a clear drift over the interval 1877-1883, followed by a stabilization (Figure 4, left panel), which sheds doubts on the accuracy of this mean 0.6 factor and
indicates that it should be re-determined. Earlier studies confirmed the specificity of this transition (Svalgaard, 2013; Cliver and Ling, 2016; Cliver, 2017; Bhattacharya et al., 2021).
Recently, in the framework of the ISSI Team, Friedli (2020) revisited this important transition using unpublished source data tables compiled by Wolfer (1912) for a PhD thesis by his student Elsa Frenkel (1913; with Albert Einstein as supervisor). Those tables include data from multiple observers over the same time interval, making it possible to retrace any drift in Wolf's or Wolfer's data (Figure 4, right panel). Friedli concludes that Wolfer was stable over this interval of overlap, while Wolf's series changed relative to all other observers, probably due to a slow degradation of his eyesight associated with ageing. In particular, he found that the mean factor should be lowered to 0.55, which means that the maxima of Cycle 12 (1884) and Cycle 13 (1893), which are framed by the 1877-1893 interval, should be ~10% higher than in the original Wolf series, as also suggested by Cliver and Ling (2016), implying a similar correction for \(S_{\text{N}}(2.0)\).
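At their simplest, the k-factors shown in Figure 4 are yearly ratios between two observers' counts on days when both observed. The sketch below, with invented daily values, illustrates that basic computation; Friedli's actual analysis is considerably more elaborate (multiple reference observers, tests for drifts and solar-cycle dependence).

```python
from collections import defaultdict

def yearly_k_factors(days):
    """days: iterable of (year, count_reference, count_target) for dates on
    which both observers reported. Returns {year: k}, where
    k = mean(reference) / mean(target) is the factor that scales the target
    observer onto the reference scale."""
    sums = defaultdict(lambda: [0.0, 0.0])
    for year, ref, target in days:
        sums[year][0] += ref
        sums[year][1] += target
    return {y: s_ref / s_tgt for y, (s_ref, s_tgt) in sums.items() if s_tgt > 0}

# Invented overlap data: (year, Wolf's count, Wolfer's count)
overlap = [(1880, 60, 100), (1880, 30, 52), (1881, 45, 80), (1881, 55, 96)]
print(yearly_k_factors(overlap))   # k ~ 0.59 in 1880, ~ 0.57 in 1881
```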
Moreover, in the early part of the interval, between 1877 and 1883, the scale factor is lower than in the final part, after 1890, by up to a factor of 0.76. This would suggest that all \(S_{\text{N}}\) values before 1877 were scaled too high by up to 25% relative to all \(S_{\text{N}}\) values after 1894. However, the first years of this interval fall in the minimum between solar cycles 11 and 12. The uncertainty in the ratios between
Figure 4: Left panel: yearly k-factors of Riccó, Tacchini, Ventosa, and of Wolfer relative to Wolf as given by Wolfer (1895), showing a clear drift before 1884. Right panel: yearly k-factors of Riccó and Ventosa as given by Frenkel (1913) using Wolfer as reference. The k factors are stable over the whole time interval, indicating that all observers are stable. By contrast, the yearly k-factor of Rudolf Wolf, with the small portable refractor relative to the numbers from Wolfer using the 83 mm standard Zürich refractor (squares) as given in the upper part of the panel, shows the same trend between 1877 and 1884, but also a modulation that follows the solar cycle (minima in 1879 and 1890, maxima in 1884 and 1894) (from Friedli, 2020).
observers is thus very high, due to the low sunspot counts during that period. Moreover, Friedli's analysis shows that comparisons between observers with widely different personal k coefficients show a significant modulation with the solar cycle, which could be the consequence of a non-linear relation between the counts of two observers with very different instruments. This is precisely the case for Wolf, who used only his small portable telescope, by contrast with all other observers, who had much larger instruments, with apertures of 80 mm or more. Therefore, Wolf's drift diagnosed here with a linear model may include, at least partly, a spurious solar cycle variation (Figure 4, right panel). In that case, the drift found in the rising part of cycle 12 (1877-1883) may actually reverse and vanish before 1877, when solar activity reached higher levels in cycle 11, as well as in the other cycles before it. Such possible effects have not yet been analyzed, and Friedli (2020) rightly concludes: _"Before 1877, the scale transfer from the 40/700 mm Parisian refractor as used by Rudolf Wolf to the 83/1320 mm Fraunhofer refractor as used by Alfred Wolfer will need to be analyzed further."_
_2.1.2.3 The 1947 jump and the sunspot weighting effect_
By a comparison with data from the Madrid and Greenwich observatories, Svalgaard (2012, 2013) identified a sharp upward jump in the Zurich sunspot number, occurring around 1945-1947. A likely cause for this jump was quickly identified: the introduction in the observing practice of Zurich observers of a weighting of the sunspot counts according to the size of the spots (Svalgaard, 2013; Svalgaard, Cagnotti, and Cortesi, 2017; Clette et al., 2014). The effects of this weighting can be seen in Figure 1 (a) for spot groups 114 (3 spots observed; 5 listed) and 121 (2 observed; 3 listed). In order to validate the weighting hypothesis, a systematic double-counting project was initiated at the Specola Solare Ticinese observatory in Locarno which, as a former auxiliary station of the Zurich Observatory since 1957, continues nowadays to use this weighted counting method, providing a living memory of how Zurich observers worked back in 1945 (Cortesi et al., 2016; Ramelli et al., 2018). Based on simultaneous weighted and normal counts (Wolf's original formula), Svalgaard, Cagnotti and Cortesi (2017) found that the weighting method produced an inflation of about 17% on average over the studied interval (2012-2014). The amplitude of this effect closely matches the magnitude of the 1947 jump, giving a strong indication that the introduction of this weighting led to an overestimate of the S\({}_{\rm N}\)(1.0) over the whole period after 1945, up to the present, as the Specola Observatory kept the role of pilot station after the move of the sunspot number production from Zurich to Brussels in 1981.
However, the magnitude of the 1947 jump, initially estimated at 20% (Svalgaard 2012, 2013), was quickly questioned. Lockwood, Owens, and Barnard (2014a) compared the mean ratios between the S\({}_{\rm N}\)(1.0) and independent series (Greenwich sunspot areas and group counts, inter-diurnal variability (IDV) geomagnetic index) and found a much smaller jump amplitude of 11.5% between the mean ratios over two long intervals, 1875-1945 and 1946-2012. This discrepancy was clarified by Clette and Lefevre (2016). First, uncorrected inhomogeneities were present before 1900 in the original Greenwich group counts used as one of the references, and the choice of 1945 as the separating year (time of a cycle minimum) did not match the actual time of the jump present in the series. By just correcting those two flaws, the same analysis leads to a larger jump of about 16%. Moreover, by a finer analysis of the double counts conducted at the Specola Observatory, Clette and Lefevre (2016) and subsequently Svalgaard, Cagnotti and Cortesi (2017) found that the inflation factor varies with the level of solar activity, starting near 1 at low activity, then increasing to an asymptotic plateau that is reached around S\({}_{\rm N}\) = 50. Above this limit, the inflation factor levels out near a mean value of 1.177. A slight upward dependency may persist for very high S\({}_{\rm N}\), which could explain even larger values closer to 1.2, as found by Svalgaard (2013), for the maxima of very
strong solar cycles, considering that this analysis was applied to cycle 24, which was weaker by a factor of two than the cycles of the mid-20th century. Likewise, the dependency of the inflation factor on solar activity, and thus the resulting variation over a range from 1 to 1.177 in the course of each solar cycle, also explains why lower mean inflation values of about 1.15 are obtained when averaging over a full solar cycle or multiple solar cycles, as in the analysis by Lockwood, Owens and Barnard (2014a). A synthesis of those elements by the ISSI Team thus allowed us to conclude that the issue of the amplitude is now largely settled. A graphic summary of the various determinations of the amplitude of the discontinuity in S\({}_{\rm N}\) ca. 1947 is given in Figure 5.
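One way to picture the activity-dependent inflation discussed above is as a factor that rises from 1 at very low activity toward a plateau of about 1.177 once S\({}_{\rm N}\) reaches roughly 50. The saturating form below is only an illustration of that qualitative behavior, not the parametrization actually fitted by Clette and Lefevre (2016) or Svalgaard, Cagnotti, and Cortesi (2017).

```python
def inflation_factor(sn_weighted: float, plateau: float = 1.177, knee: float = 50.0) -> float:
    """Illustrative inflation of a weighted count relative to a plain Wolf count:
    close to 1 at very low activity, saturating near `plateau` above sn ~ `knee`."""
    if sn_weighted <= 0:
        return 1.0
    return 1.0 + (plateau - 1.0) * min(sn_weighted, knee) / knee

def deweight(sn_weighted: float) -> float:
    """Approximate the unweighted count by dividing out the inflation."""
    return sn_weighted / inflation_factor(sn_weighted)

for sn in (10, 25, 50, 150):
    print(sn, round(deweight(sn), 1))
```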
The date of this jump also raised another issue, assuming that the Zurich weighting method is truly the primary cause of the jump. Indeed, different pieces of evidence indicate that the weighting method was implemented in the early 20th century, decades before 1947 (Cortesi et al., 2016; Svalgaard, Cagnotti and Cortesi, 2017). Mentions in the Mittheilungen and in the Zurich archives indicate that this method was introduced by Wolfer for the Zurich assistants, apparently to help them to obtain counts matching more closely his own (unweighted) counts, as the primary observer. The use of the weighting in the first half of the 20th century was also verified by consulting original sunspot drawings from the Zurich
Figure 5: Summary of published determinations of the amplitude of the 1947 jump (upper part) and of the inflation factor derived from simultaneous weighted and unweighted direct counts (lower part). Red and blue diamonds indicate mean values, with the uncertainty ranges shown as blue arrows. Yellow bands indicate the range of values, when the factor is found to be variable. Vertical red bars are the upper limits, typically found near solar cycle maxima. Rg = relative group sunspot number (G\({}_{\rm N}\)) and Ag = group area.
Observatory. However, surprisingly, no significant inflation is found in the counts of Broger and Brunner, the main assistant and the successor of Wolfer, who both used the weighting, except for low S\({}_{\rm N}\) numbers below 25 (Svalgaard, Cagnotti, Cortesi, 2017; their figure 19). This very long delay before the putative effect of the pre-existing weighting became effective thus required an explanation. Moreover, the sharp scale jump finally occurred in 1947, as found in the data, while Waldmeier had become Director of the Zurich Observatory and primary observer in 1945, two years earlier. Therefore, this prominent change in the history of the Zurich SN construction does not even match the exact time of this anomaly. The case for a role of the weighting thus also required additional evidence of another key transition in the Zurich system.
Following the creation of the S\({}_{\rm N}\) database and the recent recovery of all source data for the period 1945-1980, Clette et al. (2021) conducted a full survey of contributing stations since Wolf started recruiting assistants and auxiliary observers, up to the Waldmeier period that concludes the Zurich era of the S\({}_{\rm N}\), with its extended worldwide network of auxiliary stations (cf. Section 2.1.1). The resulting timelines revealed two unique disruptions in the history of the Zurich sunspot number that occurred almost simultaneously, immediately after the end of World War II. Due to the War, the former set of contributing stations was replaced by an entirely new and larger international network of auxiliary stations in the years following 1945. At the same time, in Zurich, Brunner and his primary long-term assistant continued to observe until the end of 1946, and were then replaced by Waldmeier with new assistants, the first of whom stayed at the Observatory only for a short time (Figure 6). Both factors produced a break in the continuity of the Zurich system, as almost none of the new internal and external participants had contributed before or worked in parallel with former observers. This unique event corresponds to a sudden loss of past memory of the Zurich system. It can be quantified by counting the cumulative number of past observing years for all observers active in a given year. This quantity, plotted as a function of time, shows a large abrupt drop in 1947 (Figure 7), a sharp transition that is unique over the whole 1849-1980 interval. Clette et al. (2021) now show that this abrupt transition coincides with the departure of the Brunner team, which had been observing since 1926, and that it falls in 1947, precisely when the jump is diagnosed in the original S\({}_{\rm N}\) series. This finally gives a clear historical basis for the occurrence of this jump and its timing.
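The "memory" metric plotted in Figure 7 can be stated compactly: for each year, sum over all observers active in that year the number of years they had already been observing. The sketch below uses invented observer timelines purely to show the effect of an abrupt replacement of a long-serving team.

```python
def cumulative_experience(observers, years):
    """observers: {name: (first_year, last_year)} of activity.
    Returns {year: total number of previously observed years, summed over
    all observers active in that year}."""
    out = {}
    for y in years:
        total = 0
        for first, last in observers.values():
            if first <= y <= last:
                total += y - first          # years of prior experience
        out[y] = total
    return out

# Invented timelines: a long-serving team replaced abruptly in 1947.
observers = {"Brunner": (1926, 1946), "Assistant": (1928, 1946),
             "Waldmeier": (1936, 1979), "NewAssistant": (1947, 1955)}
print(cumulative_experience(observers, range(1945, 1949)))
# note the drop between 1946 and 1947
```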
Figure 6: Timelines of the active observing periods of all Zürich observers. In red (top group), the primary observers and in orange (bottom group), the assistants. In purple, the observers of the auxiliary station in Locarno, who were considered as members of the Zürich core team. The vertical shaded band marks World War II and the vertical dashed line indicates the time when the 1947 scale jump occurs in the original S\({}_{\text{N}}\) series. The bottom plot gives the number of active Zürich observers for each year (from Clette et al. 2021).
#### 2.1.3 Orientations for future progress
The potential recovery of the complete Zurich set of observations, including those from 1919-1944, will make it possible to fully construct an \(S_{N}\)(3.0) time series independently of the direct use of \(S_{N}\)(1.0) rescaled only over certain epochs, as was done to produce \(S_{N}\)(2.0). The methodology for constructing the \(S_{N}\)(3.0) series has not yet been selected, but several options are possible, given the methodological advances of the last few years.
Such a reconstruction could, for example, be based on the k-based method implemented at the WDC-SILSO since 1981. Non-linear probability distribution functions (Usoskin et al., 2016; Chatzistergos et al., 2017), developed so far for the group number (described in Section 2.2.2), could also be applied after adapting them to the sunspot number. Another obvious improvement would consist of replacing the single pre-selected pilot observer with a set of reference stations, selected on the basis of systematic statistical quality criteria (Clette and Lefevre, 2016). The elements of such a method are now developed and trained on actual data from the SILSO network. This new state-of-the-art approach, presented by ISSI Team members (Mathieu et al., 2019, 2021), uses advanced statistical techniques to derive a non-parametric measure of the short- and long-term stability of each individual observer or observatory team, and it also introduces a proper non-normal distribution of uncertainties, in particular in the special case when the \(S_{N}\) is close to zero, with the constraint of non-negativity. In addition to creating a new time series, such tools can form the basis for a permanent quality-monitoring process, applicable both to the past reconstruction and to the current and future production of the \(S_{N}\) at the WDC-SILSO.
Figure 7: Plot of the total number of preceding observed years by all external stations contributing to the Zürich \(S_{N}\) and active in each year. This count is a global measure of the amount of past information on which the \(S_{N}\) calibration could rest for any given year. After an almost continuous increase, a sharp drop by more than a factor of 2 occurred just after WWII. The shaded vertical bands mark the early Wolf period and the First and Second World Wars. The vertical dashed line marks the location of the 1947 jump, accompanying the last observations from Brunner (from Clette et al. 2021).
However, gathering enough base data remains the essential pre-requisite for any progress. This will thus require recovering more historical data series and revising the existing ones, which were often inherited indirectly through past data searches, sometimes a long time ago. For the 20th century, except for the remaining lost Zurich archives between 1919 and 1944, we now have reasonable data coverage. Still, even for this more recent period, new data series that were not known or not collected by the Zurich Observatory can help complete the \(S_{\rm N}\) database, either with entirely new series from other observers or by extending and revising original series partly collected in Zurich. For instance, we can cite the sunspot catalog of the Madrid Observatory (Aparicio et al., 2018) and sunspot-count series from dedicated Japanese observers like Hisako Koyama (Hayakawa et al., 2020), Hitoshi Takuma (Hayakawa et al., 2022) or Katsue Misawa (active over 1921-1934).
As nicely illustrated by Munoz-Jaramillo and Vaquero (2019), the main challenge resides in the sparse data of the first decades of the 19th century and in the 18th century. For the 19th century, we can incorporate recounts of Schwabe's daily sunspot numbers based on examination of his drawings (Arlt et al., 2013; R. Arlt, personal communication, 2022) to better bridge the gap corresponding to the sunspot dearth in the Dalton minimum and to shed light on the validity of the 1849-1863 "Schwabe-Wolf" scale transfer. In the mid-19th century, although Wolf produced the only long and continuous series, his method went through several meaningful changes between 1861 (introduction of k-coefficients) and 1877 (start of the Wolfer contribution), as summarized by Friedli (2016, 2020). Therefore, a full understanding of the scale stability before the Wolf-Wolfer transition described above (Section 2.1.2.2) requires a revision and extension of known data series, like the recent recounting of sunspots from the original high-quality synoptic maps by Richard Carrington over 1853-1861 (Bhattacharya et al., 2021). One important goal is to fix the scale transfer between Schwabe, the last link of historical observations, and Wolfer, the first modern long-duration sunspot observer.
In the 18th century, the data recovery effort largely merges with the one for the group number (see Section 2.2.1 below), with the extra requirement that for the \(S_{\rm N}\), we need the spot counts in addition to the group counts that were already collected in the \(G_{\rm N}\) database (Vaquero et al., 2016). As many early observers did not provide such detailed counts or often did not make the distinction between sunspots and sunspot groups, progress during this era will require consultation of original sources, in particular sunspot drawings, following the example of the recounting of Staudacher's drawings by Svalgaard (2017). Such a recounting is currently in preparation for the drawing series from De Plantade (which was considered lost; Hoyt and Schatten, 1998,b) as a result of Hisashi Hayakawa's manuscript recovery. De Plantade's manuscript covers the first decades of the 18th century (1705-1726), and thus the critical period when the solar cycle restarted at the end of the Maunder Minimum.
### Group sunspot number (\(G_{\rm N}\))
#### 2.2.1 Data recovery and revision
For the ~400-yr span of the \(G_{\rm N}\) series, Munoz-Jaramillo and Vaquero (2019) define two qualitatively different periods. The first corresponds roughly to the first two centuries of telescopic sunspot observations (1610-1825) for which the number of observations is low, and directly overlapping temporal comparisons between observers are difficult to make. The second period spans from ~1825 to the present, when the temporal coverage is better and there is a clear network of related and comparable observers.
Historical sunspot drawings are taking on an increasingly important role in space-climate studies (e.g., Munoz-Jaramillo and Vaquero, 2019) because they constitute ground-truth raw data - with a direct picture of the shape, size, and distribution of spots and groups on the solar disc. They also give finer diagnostics to understand how the observations were made and their resulting quality. For modern times, annotated drawings indicate how group splitting was accomplished. The recovery of sunspot observations, and sunspot drawings in particular, is now essential as the current revision efforts shift from correction of existing time series to full reconstruction from raw data.
The development by Hoyt et al. (1994) and Hoyt and Schatten (1998a,b) of the group number time series was accompanied by the creation of a \(G_{N}\) database that was revised and updated by Vaquero et al. (2016). The open-source V16 data makes it possible to apply different methodologies to compose \(G_{N}\) time series. As a related development, within the past decade, the Historical Archive of Sunspot Observations (HASO; [http://haso.unex.es/](http://haso.unex.es/)) was established at Extremadura University by Jose M. Vaquero (Clette et al., 2014; Cliver et al., 2015; Vaquero et al. 2016). The objective of HASO is _"to collect and preserve all documents in any format (original, photocopy, photography, microfilm, digital copy, etc.) with sunspot observations that can be used to calculate the sunspot number in the historical period or related documents."_ For a comprehensive review of historical sunspot records and the recent improvements available, see Arlt and Vaquero (2020).
Since 2016, more than 30 papers have been published on sunspot data recovery with ISSI Team members as principal or co-authors. Here, we review some key results of these studies, focusing on the recovery/revision of group counts, which have been incorporated into V16. Many of the data sets that have been recently uncovered and analyzed (e.g., Carrasco et al. (2018a; for Hallaschka), Arlt (2018; for Wargentin), Nogales et al. (2020; for Oriani); (Hayakawa et al., 2021d; for Johann Christoph Muller)) provide information (counts and/or drawings) for both S and G and thus can be applied to revisions of \(S_{N}\) as well as to \(G_{N}\). As a manifestation of the increasing focus on recovery and archival of sunspot drawings, several recent works have constructed butterfly diagrams for historical observers (e.g., Leussu et al., 2017; Neuhauser, Arlt, and Richter, 2018; Hayakawa et al., 2020a; Hayakawa et al., 2022a; Vokhmyanin, Arlt, and Zolotova, 2021; see Figure 9 below).
_(a) The earliest sunspot observations: 1610 - 1645_
Thomas Harriot recorded the earliest sunspot drawing that was based on telescopic observation in 1610. Vokhmyanin et al. (2020) revised Harriot's sunspot group number and reconstructed butterfly diagrams on the basis of copies of his original manuscript. Shortly afterward, in 1611, Galilei and Scheiner started their sunspot observations. Galilei's data quality greatly improved after Benedetto Castelli invented a method to project the solar disk onto white paper. This improvement in method is detectable in the comparison of Galilei's data with Harriot's data (Carrasco, Vaquero, and Gallego, 2020a). Sunspot data by Galilei and his contemporaries have been comprehensively analyzed in Vokhmyanin et al. (2021).
Christoph Scheiner was the most active observer before the Maunder Minimum. He published his sunspot observations in 'Rosa Ursina' and 'Prodromus' (Scheiner, 1630, 1651). On their basis, Arlt et al. (2016) derived the sunspot positions and areas from his sunspot observations for the period 1611 - 1630. Carrasco, Vaquero, and Gallego (2022a) have studied Scheiner's sunspot group number in comparison with V16 and Arlt et al. (2016) and obtained two important results: (i) the shape of the second solar cycle of the telescopic era is similar to a standard Schwabe cycle, and (ii) the amplitude of this solar cycle (according to raw data) is significantly lower than that of the previous one. This last result supports a gradual
transition between normal solar activity and the deeply suppressed activity of the Maunder Minimum (see, for example, Vaquero et al., 2011).
Charles Malapert was a key sunspot observer ca. 1618-1626. He is the only observer in the V16 G\({}_{\rm N}\) database for ~60% of his 185 observation days. From an examination of Malapert's reports from 1620 and 1633, Carrasco et al. (2019a; 2019b) increased the net number of Malapert's daily observations to 251. Moreover, they determined that on days when Malapert drew only a single group in his drawings, he had sometimes observed several groups. Therefore, Malapert's group counts, taken from the drawings, are now known to be lower limits.
Jean Tarde and Jan Smogulecz recorded sunspot observations in 1615 - 1617 and 1621 - 1625. Their results were recorded in their books as sunspot drawings and textual descriptions (Arlt and Vaquero, 2020). These records permit derivation of sunspot group numbers and sunspot positions and comparison with other observers for these periods (Carrasco et al., 2021e).
Daniel Mogling was another key sunspot observer in 1626 - 1629. Hayakawa et al. (2021b) exploited his original manuscript in the Universitats- und Landesbibliothek Darmstadt and confirmed his sunspot observations for 134 days. This study revised Mogling's sunspot group number as well as those of Hortensius and Schickard. It also derived Mogling's sunspot positions to construct a butterfly diagram. These results filled the data gap in the declining phase of Solar Cycle -12 showing the decay of the sunspot group number and equatorward migration of the reported groups.
Pierre Gassendi conducted sunspot observations in 1631-1638. Vokhmyanin et al. (2019) have analyzed his publications to revise the sunspot group number and derive sunspot positions.
Hevelius (1647) made the last known systematic sunspot records by any astronomer before the Maunder Minimum, covering the period 1642 - 1645. Carrasco et al. (2019c) revised the sunspot drawings included in this documentary source as well as the textual reports, showing the good quality of the sunspot records made by Hevelius. Carrasco et al. (2019c) determined that the solar activity level calculated from the active day fraction (annual percentage of days with at least one sunspot on the Sun) just before the Maunder Minimum was significantly greater than that during the Maunder Minimum. Moreover, Carrasco et al. (2019c) confirmed Hevelius's observations of sunspot groups in both solar hemispheres, in contrast with those of the Maunder Minimum, which exhibited significant hemispheric asymmetry (Ribes and Nesme-Ribes, 1993).
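The active day fraction used here is simply the percentage of observed days in a year on which at least one sunspot group was reported. A minimal sketch, with invented daily group counts and None marking days without an observation, is given below.

```python
def active_day_fraction(daily_group_counts):
    """daily_group_counts: iterable of group counts for the days of one year
    (use None for days without an observation). Returns the percentage of
    observed days with at least one group, or None if nothing was observed."""
    observed = [g for g in daily_group_counts if g is not None]
    if not observed:
        return None
    active = sum(1 for g in observed if g > 0)
    return 100.0 * active / len(observed)

# Invented year: 200 observed days, 30 of them with at least one group.
counts = [1] * 30 + [0] * 170 + [None] * 165
print(active_day_fraction(counts))   # -> 15.0
```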
_(b) The Maunder Minimum: 1645 - 1715_
The Maunder Minimum (1645 - 1715) was an exceptional period in recent history. Sunspot activity remained extraordinarily low during several solar cycles and the spots that did appear had a strong hemispheric asymmetry, with preference for the southern solar hemisphere (Eddy, 1976; Ribes and Nesme-Ribes, 1993; Usoskin et al., 2015; Riley et al., 2015). In recent years, a great effort has been made to improve our understanding of solar activity during this remarkable period. For example, the umbra-penumbra area ratio was computed by Carrasco et al. (2018b) for sunspots recorded during the Maunder Minimum. This ratio is similar to that calculated from modern data and, therefore, the absence of sunspots in the Maunder Minimum cannot be explained by changes in this parameter. On the other hand, comparisons with contemporary eclipse drawings have revealed that significant coronal streamers were apparently missing during the Maunder Minimum, unlike those of the modern solar cycles (Hayakawa et al., 2021c), substantiating a conjecture by Eddy (1976).
The combined observing intervals in the Hoyt and Schatten (1998a,b) G\({}_{\rm N}\) database of Martin Fogelius and Heinrich Siverus, both from Hamburg, spanned the years 1661-1690, approximately half of the 1645-1700 core of the Maunder Minimum (Vaquero and Trigo, 2015). Even after removal from the Hoyt and Schatten database of several full years of continuous reported spotless days for Fogelius and Siverus (implausible because of local weather conditions at Hamburg), the two observers were the 13\({}^{\rm th}\) and 7\({}^{\rm th}\) most active observers (from 1661-1671 (with 318 observations) and 1671 - 1690 (1040), respectively) from 1610 to 1715 in V16. Hayakawa et al. (2021a) consulted Ettmuller (1693) and compared it with the correspondence of Fogelius in the Royal Society archives, leading to the following proposed changes to the V16 database: (1) a reduction of the number of active days for Fogelius from 26 to 3 and a corresponding (net) reduction for Siverus from 20 to 15; (2) conversion of the dates of several observations from the Julian to the Gregorian calendar; and (3) removal of all "ghost" spotless days (days with no explicit sunspot observations interpreted as spotless days) for both observers, representing 98-99% of their observations.
Sunspot records from the Eimmart Observatory of Nurnberg have been analyzed by Hayakawa et al. (2021d), based in part on Chiaki Kuroyanagi's archival investigations. The original logbooks from this observatory have been preserved in the Eimmart Archives in the National Library of Russia (St. Petersburg; fond 998). Analysis of these manuscripts removed ghost spotless days and reduced their daily patrol coverage relative to V16. Hayakawa et al. (2021d) revised the sunspot group number from the Eimmart Observatory for 78 days, that from the Altdorf Observatory for 4 days, and those of Johann Heinrich Hoffmann and Johann Bernhard Wideburg for 22 days and for 25 days, respectively. Among the Eimmart Archives, Johann Heinrich Muller's logbook recorded explicit spotless days and allowed robust active day fractions and a reliable S\({}_{\rm N}\) to be derived for him in 1709, confirming a significantly lower level of solar activity than in other small solar cycles such as cycles 14 and 24 and even those of the Dalton Minimum (Carrasco et al., 2018a; Hayakawa et al., 2020a; Carrasco et al., 2021c). These records also allowed Hayakawa et al. (2021c) to derive sunspot positions and to confirm a significant hemispheric asymmetry favoring the southern solar hemisphere.
William Derham recorded sunspot observations at the end of the Maunder Minimum. Derham (1710) listed his observations from 1703 to 1707 in a table where he only recorded one group for each day except on 15 November 1707 when he recorded two groups. Carrasco et al. (2019a) pointed out that those could not be the real number of groups observed by Derham in that period, showing a sunspot drawing made by an anonymous observer between 30 November and 2 December 1706 recording three groups. (Note that the quality of Derham's drawings is better than that of the anonymous drawing.) Therefore, the group counts assigned to Derham in V16 should be used with caution and Derham's counts should be revised accordingly on the basis of his original records.
Carrasco, Vaquero, and Gallego (2021b) present and analyze two sunspots recorded by Gallet in the middle of the Maunder Minimum. In addition to the sunspot observed by Gallet from 9 to 15 April 1677 (recorded by other astronomers), Gallet reported a spot group from 1 to 6 October in the same year for which there is no record of observations by others. The latitude of this sunspot was ~10\({}^{\circ}\) S, comparable to most of the sunspots observed during the second half of the Maunder Minimum.
_(c) Solar Cycles in the 18th Century: 1715 - 1795_
Solar Cycle -3 (~1711 - ~1723) is considered the first solar cycle after the Maunder Minimum. Hayakawa et al. (2021e) examined the sunspot drawings of Johann Christoph Muller during this cycle to revise his sunspot group numbers (G) and derive individual sunspot numbers (S). His sunspot group
numbers are significantly different from the contemporary observations of Rost. For example, on 1719 June 15, Johann Christoph Muller recorded 3 sunspot groups (Hayakawa et al., 2021), whereas Rost's report gave 30 "sunspot groups" according to V16. Comparative analyses have revealed that Rost's data most probably described not the sunspot group number (G) but individual sunspot number (S). This manuscript also allowed Hayakawa et al. (2021) to derive sunspot positions in both solar hemispheres. This result contrasts sunspot activities in 1719 - 1720 with those of the Maunder Minimum during which spots were predominantly in the southern hemisphere (Ribes and Nesme-Ribes, 1993; Hayakawa et al., 2021). Another important observer, Francois de Plantade (1670-1741), also recorded sunspots quite systematically during the exit from the Maunder minimum, from 1705 to 1726, and will be the subject of an upcoming study.
Few sunspot records are available for the 1721-1748 interval, as shown in Figure 8 (adapted from Vaquero et al., 2016). This is the weakest link in the entire sunspot number time series. Recently, several relatively short-duration observers during this interval have been identified and documented. Johann Beyer's sunspot records in 1729 - 1730 have been examined and revised in Hayakawa et al. (2018). Pehr Wargentin is the only observer given for 1747 in V16, with group counts reported for 17 days for which drawings are available. Arlt (2018) documented an additional 32 days with group (and individual spot) counts (but without drawings) by this observer.
Including those data, Hayakawa et al. (2022) have comprehensively reviewed and revised the sunspot observations in 1727-1748. Hayakawa et al. (2022) revised the group counts of known observers, such as Krafft in 1729 and Winthrop and Muzano in 1739-1742, and added previously unknown data, such as those of Van Coesfeld in 1728-1729, Duclos in 1736, and Martin in 1738. These results have improved the morphology of Solar Cycles -2 (~1723-1733), -1 (1733-1743), and 0 (1743-1755) confirming the existence of regular cycles from 1727-1748. Hayakawa et al. (2022) derived sunspot positions from the
Figure 8: Number of days with records per decade in the Hoyt and Schatten (1998a,b) (gray columns) database and in its revision by Vaquero et al. (2016) (black columns). The green columns reflect subsequent modifications to the V16 data base up to the current time. The smaller numbers of records in the V16 data base for decades before 1830 are due to the removal of spurious records of days with no sunspots from the Hoyt and Schatten (1998a,b) data base. (Adapted from Vaquero et al., 2016.)
contemporary records and filled the data gap of the existing butterfly diagrams during this interval, confirming the occurrence of sunspots in both solar hemispheres and their equatorward migrations over each solar cycle.
Additional data have been acquired from East Asia. Hayakawa et al. (2018a,b) reported on Japanese astronomers at Fushimi and Edo (present-day Tokyo) who conducted sunspot observations for 1 day in 1793 and 15 days in 1749-1750, respectively. These data fill data gaps in the neighborhood of the "lost cycle" conjectured to have occurred during the decline of Solar Cycle 4 in 1784-1798 (Usoskin et al., 2009; see also Karoff et al., 2015; cf., Zolotova and Ponyavin, 2011; Owens et al., 2015) and the maximum of Solar Cycle 0 (1743-1755).
Barnaba Oriani conducted sunspot observations in 1778-1779 at the Brera Observatory (Milan, Italy). Nogales et al. (2020) uncovered 52 daily sunspot observations made by Oriani for the near-maximum year of 1779 (peak in 1778) that are not included in V16. Only three other observers reported group counts for 1779, for a combined total of 19 active days. Of the 19 days, only 8 overlapped with those of Oriani, who is thus the sole source for 44 of the 63 active days now known for 1779. In addition, Nogales et al. determined that Oriani's group counts should be revised upward by 80% on average for his 97 daily observations for 1778 included in V16. A total of 117 active days were observed for 1778, and on 91 of these days Oriani supplied the only observation.
Christian Horrebow's original logbooks of sunspot records are located at Aarhus University. These records are particularly important, as his team observed sunspots from 1761-1776 and anticipated Schwabe's discovery of the sunspot cycle (Jorgensen et al., 2019). Horrebow's butterfly diagram, constructed by Karoff et al. (2019), in Figure 9 (top panel) has the characteristic structure first shown by Carrington (1858) and more definitively by Maunder (1904) for later cycles. The bottom panel in Figure 9 gives the butterfly diagram constructed from Staudacher's observations for the same interval (Arlt, 2008).
_(d) Reanalysis of a key observer improves \(G_{N}\) records during the Dalton Minimum (1798-1833)_
The Dalton Minimum is a period of relatively low solar activity from 1798-1833, which has been named after John Dalton, who noticed a significant reduction of the auroral frequency during this time (Silverman and Hayakawa, 2021). The Dalton Minimum is similar to (though with even lower cycle peak sunspot numbers) sunspot minima that began ca. 1900 and 2010 (Feynman and Ruzmaikin, 2011). These three secular ebbs of sunspot activity, termed centennial or Gleissberg variations (after Gleissberg, 1965), are punctuated by longer periods of enhanced activity centered near ~1855 and ~1970 (Figure 2). These secular ebbs and flows of sunspot activity are less marked than the severe sunspot drought of the Maunder Minimum (Usoskin et al., 2015) and the prolonged sequence of strong solar cycles from ~1945-2008 known as the Modern Grand Maximum (Solanki et al., 2004; Clette et al., 2014; Usoskin, 2017).
The sunspot observations from 1802 to 1824 of Thaddaus Derfflinger, Director of the Kremsmunster Observatory, span the deepest part of the Dalton Minimum. From analysis of Derfflinger's drawings and associated metadata, Hayakawa et al. (2020a) concluded that the spot drawings were a secondary and therefore optional aspect of measurements of the solar elevation angle. As a result, they eliminated observations of spotless days attributed to Derfflinger, reducing the number of his daily records from 789 to 487. In addition, the butterfly diagram (Carrington, 1858; Sporer, 1880; Maunder, 1904) showing the latitudinal variation of sunspots over the solar cycle constructed by Hayakawa et al. from Derfflinger's observations was more or less symmetric about the solar equator during this period, in contrast to the deep Maunder Minimum, where spot formation occurred primarily in the southern hemisphere (Ribes and Nesme-Ribes, 1993). These results have been confirmed with Stephan Prantner's sunspot drawings for 1804-1844 from Wilten Monastery (Hayakawa et al., 2021). Sunspots occurred preferentially in the northern hemisphere right before the Dalton Minimum (Usoskin et al., 2009). These results require further investigations on the Dalton Minimum and the hypothesized 'lost cycle' at its beginning (Usoskin et al., 2009; Karoff et al., 2015; cf., Krivova et al. 2002; Zolotova and Ponyavin, 2011; Owens et al., 2015).
Figure 9: Top panel: Butterfly diagram constructed from Horrebow's sunspot observations. Bottom panel: Butterfly diagram constructed from the observations of Staudacher, Horrebow's contemporary, for the same interval. The dashed lines indicate times of solar minima. (From Karoff et al., 2019.)
_(e) Solar Cycles in the 19th century_
Starting with the 19th century, and even more in the 20th century, sunspot data typically provide clearly separated counts of groups and individual spots. Therefore, those data are as relevant to the S\({}_{\rm N}\) as to the G\({}_{\rm N}\) reconstruction, and all the following data sets thus contribute to the S\({}_{\rm N}\) database described in Section 2.1.1.
After the end of the Dalton Minimum, Toubei Kunitomo conducted sunspot observations for 157 days in 1835-1836. While his sunspot records had been known in the existing datasets (Hoyt and Schatten, 1998a,b; Vaquero et al., 2016), preliminary analyses of the original documents have uncovered 17 additional days with Kunitomo observations in 1835 and 1 such day in 1836. Kunitomo's sunspot group numbers have been revised, and sunspot areas measured, by Fujiyama et al. (2019). These records have filled a data gap in the existing datasets and are consistent with other sunspot observers' data, such as those of Schwabe (Fujiyama et al., 2019).
New records were recovered from Antonio Colla, a meteorologist and astronomer at the Meteorological Observatory of the Parma University (Italy). Colla's records cover the period from 1830 to 1843, just after the Dalton Minimum (Carrasco et al., 2020). Colla recorded a similar number of sunspot groups as his contemporary sunspot observers on common observation days. However, as is the case for Hallaschka, sunspot positions and areas recorded by Colla seem unrealistic and should not be used for scientific purposes.
William Cranch Bond was the director of the Harvard College Observatory in the mid-19th century. Bond recorded sunspot drawings from 1847 to 1849. According to V16, Bond is the observer with the highest daily number of sunspot groups observed in Solar Cycle 9 (18 groups on 26 December 1848). However, Carrasco et al. (2020) detected mistakes in these counts. These errors are due to the use of sunspot position tables instead of the solar drawings. This new revision indicates that solar activity for Solar Cycle 9 was previously overestimated according to raw data, and Schmidt would be the observer with the highest daily group number (16 groups on 14 February 1849). A comparison between sunspot observations made by Bond, Wolf, and Schwabe (using the common observation days) shows that (i) the number of groups recorded by Bond and Wolf are similar, and (ii) Schwabe recorded more groups than Bond because he was able to observe smaller groups.
Richard Carrington made sunspot observations at Redhill Observatory in the United Kingdom, which he published in the form of a catalogue (Carrington, 1863). An observer from the current WDC-SILSO network (T.H. Teague, UK) has reanalyzed his observations (Bhattacharya et al., 2021) by recounting the groups and individual sunspots from Carrington's original drawings. Bhattacharya et al. (2021) compared Carrington's own counts (Carrington, 1863; Casas and Vaquero, 2014; Lepshokov et al., 2012) with contemporary observations, as well as with Rudolf Wolf's own observations and Wolf's tabulations of Carrington's observations, both taken from the Mittheilungen. They conclude that Carrington's counting methods (Carrington, 1863) for the groups
were comparable to modern methods, but those for individual sunspots produced significant undercounts. On the other hand, Wolf's own counts and his recounting of Carrington's drawings show numbers very similar to modern methods. The key here is that Carrington's catalogue was, in fact, a position catalogue, and thus recorded only the biggest spots and groups, while his drawings were more precise and, when counted by Wolf in the 1860s or by T.H. Teague 160 years later, give results comparable to modern observations.
Angelo Secchi observed sunspots and prominences from 1871-1875. Carrasco et al. (2021d) have constructed machine-readable tables from Secchi's book "Le Soleil" (Secchi, 1875). Secchi's original drawings indicate that he had begun sunspot observations as early as 1858 (Hayakawa et al., 2019; Ermolli and Ferrucci, 2021). These results have encouraged further investigation of Secchi's original notebooks containing sunspot records at the Rome Observatory, which will be the focus of an upcoming study.
_(f) Modern long-term observers_
Although the number of observers increased strongly during the 20th century, in particular after World War II, many series have a rather short duration and some series from professional observatories suffer from inhomogeneities due to changes of instruments or observers. Therefore, the recovery of new long-duration series that were never collected and exploited, or only partly so, can help to refine the stability of the most recent part of the \(S_{\rm N}\) and \(G_{\rm N}\) records (e.g., the Zurich 1947 discontinuity found by Clette et al., 2021) and to connect them seamlessly to contemporary observations.
In this regard, sunspot observations for more recent long-term institutional and individual observers not currently included in V16 continue to be processed and digitized. The Astronomical Observatory of the Coimbra University published a catalogue with sunspot observations (including G and S) from 1929 to 1941 (Lourenco et al., 2019); Carrasco et al. (2018c) digitized this catalogue and reconstructed the corresponding total and hemispheric \(S_{\rm N}\) series from the Coimbra data. In addition, a dataset of sunspot drawings made at the Sacramento Peak Observatory (SPO) from 1947-2004 has recently been digitized by Carrasco et al. (2021a). This work is the first step toward the publication of the complete SPO sunspot catalogue, which will include information on sunspot positions and areas.
The published sunspot counts of Hisako Koyama (Koyama, 1985; Knipp, Liu, Hayakawa, 2017), a staff member at the Tokyo Science Museum (later renamed the National Museum of Nature and Science (NMNS)) from 1947-1985, have been used for one of the backbones of the group number reconstruction of Svalgaard and Schatten (2016). Recent surveys of the archives of the NMNS in Tsukuba have located Koyama's sunspot drawings and logbooks from 1945 to 1996 (Hayakawa et al., 2020b). Hayakawa et al. (2020b) described and analyzed a full digital database (encoded by Toshihiro Horaguchi and Takashi Nakajima) of Koyama's sunspot observations and diagnosed a previously undetected inhomogeneity in the resulting sunspot counts affecting the later part of the series, after 1983. Hayakawa et al. (2022b) have analyzed Hitoshi Takuma's sunspot drawings from 1972-2013 in the Kawaguchi Museum. Comparisons with the contemporary records have shown Takuma's observations to be one of the most stable data sets over this ~40-yr time period.
_2.2.2 Reconstruction methodologies for \(G_{\rm N}\)_
Several \(G_{\rm N}\) time series have been generated since the first such series of Hoyt and Schatten (1998a,b). These series, including that of Hoyt and Schatten (1998a,b), are listed in Table 2.
* References: (1) Hoyt and Schatten (1998a,b); (2) Svalgaard and Schatten (2016); (3) Cliver and Ling (2016); (4) Usoskin et al. (2016a); (5) Chatzistergos et al. (2017); (6) Willamo, Usoskin, and Kovaltsov (2017); (7) Usoskin, Kovaltsov and Kiviaho (2021); (8) Dudok de Wit and Kopp (2022)
* Time series available at:
* Pending publication.
* *** Provisional in nature and not intended as a prescription or method for reconstruction of \(G_{\rm N}\); based on the Hoyt and Schatten (1998a,b) \(G_{\rm N}\) construction method but used an adjusted RGO time series from 1874-1915 and Schmidt (scaled to adjusted RGO) for years before 1874 as primary observers.
The \(G_{\rm N}\) series listed in Table 2 are based on four basic methods:
1. Linear Daisy Chaining: Linear scaling of successive overlapping observers (daisy-chaining) or "backbones of observers" (Hoyt and Schatten, 1998a,b; Svalgaard and Schatten, 2016; Cliver and Ling, 2016).
2. Active Day Fractions: Independent scaling of all observers relative to a perfect observer (based on the RGO catalogue). A synthesized imperfect observer is created on a daily basis by means of a quality factor or threshold (\(S_{S}\); analogous to k' in Equation 2) determined by the fraction of days on which spots were observed (active day fraction) (Usoskin et al., 2016a; Willamo, Usoskin, and Kovaltsov, 2017; Usoskin, Kovaltsov and Kiviaho, 2021) relative to the perfect observer. Moreover, the scaling is also based on a cross-correlation matrix, i.e., the probability distribution function (PDF) between the perfect observer and the synthesized observer, giving a non-parametric conversion.
3. Probability Distribution Function: Non-linear non-parametric scaling of successive overlapping observers via a correlation matrix (Usoskin et al., 2016a), using primary (backbone) observers, on a daily basis (Chatzistergos et al., 2017).
4. Tied Ranking: Tied-ranking method based on the rank ordering (Kendall, 1945) of group counts for a given day rather than their actual values (Dudok de Wit and Kopp, 2022).
In the following subsections, we describe each of these methods in more detail, including their advantages and shortcomings, which are summarized in Table 3.
#### 2.2.2.1 Linear daisy-chaining
This method, used by Hoyt and Schatten (1998a,b), followed the normalization approach first used by Wolf to relate his observations to those of external (to Zurich) observers before 1848. Rather than using Wolf or any other Zurich director as a primary observer, Hoyt and Schatten used the photography-based Royal Greenwich Observatory (RGO) 1874-1976 series of group numbers (Willis et al., 2013a,b; Erwin et al., 2013) as their primary reference. For observers who began observing before 1874, however, they employed "daisy chains" of individual observers in their normalization scheme, sometimes multiple chains, making it difficult, if not impossible, to replicate \(k^{\prime}\)-factors for such observers (Cliver and Ling, 2016; Cliver, 2017). Hoyt and Schatten (1998a,b) were the first to use "backbones" (Svalgaard and Schatten, 2016), albeit in a limited fashion, with RGO from 1874-1976, Horrebow designated as primary observer from 1730-1800, and Galileo filling this role for the earliest observers in the 17\({}^{\rm th}\) century. Svalgaard and Schatten (2016) used linear scaling of contiguous "backbones" for four primary observers.
Because the possibility for error expands with the number of contiguous links in a chain, Svalgaard and Schatten (2016) reduced the number of links in daisy chains by linking separate (mostly non-overlapping) "backbones" of observers, each scaled to a single common reference observer within a limited time interval, rather than series (single or multiple) of individual observers as Hoyt and Schatten (1998a,b) did prior to 1874. Moreover, they only used visual observers as primary references, instead of the photographic RGO group numbers. Svalgaard and Schatten (2016) used correlations of yearly averages of observer and primary counts over the interval of overlap to determine their \(k^{\prime}\) factors (rather than ratios of summed daily counts for common observation days with one or more groups as used by Hoyt and Schatten, 1998a,b), forcing the fits through zero for strict proportionality. Figure 10, adapted from Chatzistergos (2017), illustrates the difference between daisy-chaining and the backbone method.
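To make the two scaling conventions concrete, the following minimal sketch (Python; not the published pipelines) contrasts the Hoyt and Schatten style k'-factor, a ratio of summed daily counts on common active days, with the Svalgaard and Schatten style proportional fit of yearly means forced through zero. The dictionary inputs are hypothetical stand-ins for the actual observer databases.

```python
import numpy as np

def k_hoyt_schatten(primary_daily, secondary_daily):
    """Ratio of summed daily counts on common observation days with at least one group."""
    common = [d for d in primary_daily if d in secondary_daily and secondary_daily[d] > 0]
    num = sum(primary_daily[d] for d in common)
    den = sum(secondary_daily[d] for d in common)
    return num / den if den else float("nan")

def k_svalgaard_schatten(primary_yearly, secondary_yearly):
    """Proportional fit (forced through zero) of yearly means over the interval of overlap."""
    years = sorted(set(primary_yearly) & set(secondary_yearly))
    x = np.array([secondary_yearly[y] for y in years], dtype=float)
    y = np.array([primary_yearly[y] for y in years], dtype=float)
    return float(np.dot(x, y) / np.dot(x, x))  # least-squares slope with zero intercept

# The secondary observer's raw counts are then multiplied by k' to place them on the
# scale of the primary (backbone) observer, and the scaled series are stitched together.
```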
Dudok de Wit and Kopp (2022) noted the following general criticisms of the daisy-chain method:
1. Subjective initial selection of backbone observers, whose choice can significantly impact the outcome, introducing bias;
2. the method does not exploit all possible periods of overlap and in this sense is not optimal;
3. the final result is affected by the order in which the records are stitched together;
4. errors accumulate monotonically at each stitch (Lockwood et al., 2016), although the method does not inherently determine time-dependent uncertainties;
5. the method cannot temporally span data gaps (applies to all methods except Usoskin et al., 2016).
In the Svalgaard and Schatten (2016) backbone approach, three of the four backbone links have no overlap of the primary observers vs. four of nine such links for Chatzistergos et al. (2017). For the non-overlapping cases, the cross-calibration is based completely on the overlap of the dotted line extensions of the backbones (Figure 10). An example of non-overlapping backbone observers in the Svalgaard and Schatten (2016) reconstruction is that of Schwabe (backbone length: 1794-1883) and Wolfer (1841-1944); Schwabe ceased observing after 1867 while Wolfer's first observations (in Svalgaard and Schatten, 2016) began in 1878. The advantage of this method is that it permits long intervals of overlap with fewer backbones, but in doing so it is relying on the more uncertain parts of the extended backbone series for cross-calibration. Another difference between the backbone methodologies of Svalgaard and Schatten (2016) and Chatzistergos et al. (2017) was that secondary observers in the SvSc16 series could be scaled to both backbones, e.g., the light green S1 observer in Figure 10, but not in CEA17.
In addition, the application of a 7% reduction of the group count due to a suspected change in group-splitting technique at Zurich applied for years after 1940 in the Svalgaard and Schatten (2016) series needs to be verified independently.
Figure 10: Schematic showing the difference between daisy-chaining and backbone methodologies. In daisy chaining, primary (green) and secondary observers (tan) are scaled sequentially to each other based on their interval of overlap. In the backbone method (Svalgaard and Schatten, 2016), several secondary observers (light-green and light-blue) are scaled to multiple backbone observers based on their interval of overlap and then the contiguous backbone series tied to each primary observer are scaled to each other. While both daisy-chaining (e.g., Hoyt and Schatten, 1998a,b) and the backbone method employ daisy-chaining, the advantage of the backbone method is the reduction of the number of links in the chain, reducing the accumulation of errors. See text for differences in the backbone approach employed by Svalgaard and Schatten (2016) and Chatzistergos et al. (2017). (Adapted from Chatzistergos, 2017.)
A third key difference between SvSc16 and CEA17 - discussed in Section 2.2.3 - is the scaling procedure: linear scaling of yearly averages vs. a non-parametric mapping of daily values of a pair of observers. The latter allows for non-linear relations between counts of a pair of observers, and avoids side-effects of temporal averaging (different distributions of observing dates within a year, linearization effect of temporal averages). While the two series agree reasonably well after ~1880, they diverge beforehand (Figure 2).
Recently, Velasco Herrera et al. (2022) announced a reconstruction of the \(G_{\rm N}\) series using wavelet and machine learning techniques that supports the validity of the original HoSc98 series. However, rather than re-calibrating the \(G_{\rm N}\) series or exploiting the recently revised \(G_{\rm N}\) database (Vaquero et al., 2016), they simply produce a harmonic model of the original HoSc98 series, which essentially replicates the characteristics of this input series, based on a limited set of variable periodic components. Such approaches also suffer from the stochastic nature of solar activity (e.g., Cameron and Schussler, 2019; Charbonneau, 2020; Petrovay, 2020). This technique may help interpolate intervals with scarce data or predominantly spotless days, on the basis of periodicity assumptions, but it cannot provide diagnostics of possible flaws in the original series. Moreover, this paper does not address any of the above issues identified in the daisy-chaining principle, nor the homogeneity issue in the early part of the RGO photographic catalog. For the above reasons, and because the Velasco Herrera et al. (2022) \(G_{\rm N}\) series is not a re-calibration of actual data but rather a model, we will not further consider this series here.
#### 2.2.2.2 Individual non-linear scaling, based on the active day fraction, relative to a degraded perfect observer
Usoskin et al. (2016a; see also Willamo, Usoskin, and Kovaltsov, 2017, and Usoskin, Kovaltsov and Kiviaho, 2021) introduced a normalization procedure with several novel aspects. Instead of working directly with group counts, they considered the systematic statistical relation between the relative group counts of individual observers and the number of days over which they reported spots (active days) as a fraction of the total number of days on which they observed (including spotless days), i.e., the so-called active day fraction (ADF). By using this individual indicator, the scale of each observer can then be determined independently of other observers, thus avoiding any kind of daisy chaining. A key potential advantage of this approach is that it does not rely on overlapping observations in relating any two observers, so can span large time ranges (and even gaps in observations) just as accurately as short ones. The method does rely, however, on consistency in solar activity on multi-centennial timescales because of the relatively limited duration of the RGO universal observer (1900-1976; Willamo et al., 2017).
In order to derive this scale across multiple observers, the ADF statistics must be referred to a perfect reference observer, assumed to be capable of seeing all groups down to the size of the smallest pore. For this purpose, it was assumed that the RGO photographic catalogue from 1900-1976 provided a universal reference against which any other observer could be compared regardless of whether they observed over the 1900-1976 time range of RGO data. As a scaling factor to the universal RGO observer, they determined a quality factor \(S_{\rm S}\), which is in fact an acuity threshold (the spot area above which an individual observer starts seeing spots/groups), for each observer by matching its individual ADF statistics with the ADF obtained by synthetically degrading the universal (presumed perfect) RGO observer.
This was done by assuming that counts by imperfect observers are lowered due to their limited acuity, i.e., their inability to detect the smallest spots. This was simulated by eliminating from RGO data all sunspot groups with an area below a certain threshold S\({}_{\rm S}\) in millionths of a solar disk (\(\mu\)sd). The threshold value S\({}_{\rm S}\) that provided the best match between the thresholded RGO data and the actual ADF, A, of an observer is derived via a set of cumulative probability distribution functions (PDFs) P(A, S\({}_{\rm S}\)) of the ADF for RGO degraded at different levels (Figure 11). Once this ADF-based S\({}_{\rm S}\) value is determined, the correction to the group numbers themselves for the target observer is derived from the relation between the group numbers for the degraded RGO data set corresponding to this S\({}_{\rm S}\), and the reference group number for the full RGO data set, representing the perfect observer, via their cross-probability density distribution. In this final step, instead of regressing a mathematical model, this correction was implemented in a non-parametric way, by remapping directly the raw daily group numbers for the corresponding observer, via the RGO cross-PDF, delivering the corrected group number (peak of the PDF at the given raw G\({}_{\rm N}\) value), with uncertainties (width of the PDF at this G\({}_{\rm N}\) value).
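The following schematic sketch illustrates the core of the ADF idea in simplified form: it matches a single ADF value rather than the full cumulative distributions P(A, \(S_{S}\)) used by Usoskin et al. (2016a), and the RGO input (a list of daily group-area lists) and the threshold grid are hypothetical.

```python
import numpy as np

def degraded_daily_counts(rgo_daily_areas, s_threshold):
    """Daily group counts of a synthetic observer who misses all groups below s_threshold (in msd)."""
    return np.array([sum(area >= s_threshold for area in day) for day in rgo_daily_areas])

def active_day_fraction(daily_counts):
    """Fraction of observed days on which at least one group is reported."""
    return float(np.mean(np.asarray(daily_counts) > 0))

def best_acuity_threshold(rgo_daily_areas, observer_adf, thresholds=range(0, 200, 5)):
    """Area threshold S_S whose degraded-RGO ADF best matches the observer's ADF."""
    adfs = np.array([active_day_fraction(degraded_daily_counts(rgo_daily_areas, s))
                     for s in thresholds])
    return list(thresholds)[int(np.argmin(np.abs(adfs - observer_adf)))]

# With S_S fixed, the observer's raw daily counts are remapped through the joint
# distribution of (degraded RGO counts, full RGO counts), giving corrected group
# numbers with uncertainties, as described above.
```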
Difficulties/shortcomings with the ADF-based universal observer method as developed thus far include:
**(1)**: The possibility of unreported spotless days during periods of low solar activity, resulting in an overestimation of the ADF and to underestimated corrections for some observers (Svalgaard and Schatten, 2017).
**(2)**: Variations in the definition of a spot group (evolving group-splitting rules) used by observers from different epochs, which are not considered as another personal factor, next to the acuity of an observer. This factor can play an important or perhaps even a dominant role near solar cycle maximum, when activity is high and many groups are packed on the solar disk, and the fraction of big sunspot groups becomes large compared to tiny groups (i.e., the ones for which acuity is important). It applies to all series involving the group counts, thus both \(G_{N}\) and \(S_{N}\).
Figure 11: Cumulative probability distribution P(A, S\({}_{\rm S}\)) for the reference data set (for different values of the threshold observed area (S\({}_{\rm S}\)) and complete data coverage (f = 1)). S\({}_{\rm S}\) = 0 corresponds to the full RGO data set. (From Usoskin et al., 2016a.)
**(3)**: Relative insensitivity of observer group counts to \(S_{S}\) factors (Cliver, 2016; Svalgaard and Schatten, 2017).
**(4)**: Applicability of the ADF approach only when the ADF is <0.8, thus only during the low part of the solar cycles. The corrected \(G_{N}\) values for the maxima of the cycles are thus obtained by an extrapolation, outside of the range over which the ADF is calibrated for an observer.
**(5)**: Differential sensitivity of the \(S_{S}\) threshold to the level of activity (Usoskin, Kovaltsov, and Chatzistergos, 2016b), i.e., \(S_{S}\) may be over- or under-estimated if the overall level of activity is different for the epoch of the target observer and for the epoch of the reference RGO data. (Solar-activity consistency is assumed.) As a result, the method works well for moderate activity but tends to slightly (<10%) underestimate high-activity levels and strongly (~30%) overestimate the low-activity levels, overall leading to a slight overestimate of the activity for the \(19^{th}\) century, given a larger occurrence of weaker cycles compared to the base time interval of the RGO data set in the \(20^{th}\) century (Willamo, Usoskin and Kovaltsov, 2018). This differential effect is probably a consequence of the extrapolation of cycle maxima mentioned in point (4) above.
**(6)**: The observation window is different between RGO and the observer (a simple ratio of the number of days for which groups were reported to the total number of days on which observations were made is not sufficiently representative).
**(7)**: The reference observer, in this case the RGO catalogue, is obviously not "perfect". A simple analysis reproducing the same method as in Usoskin et al. (2016a) but with different parts of the RGO catalogue (by selecting different long-term time periods) shows that the obtained \(S_{S}\) are not consistent with the error bars reported in Usoskin et al. (2016a) (private communication, L. Lefevre).
The above-described limitations (1-4, 6 and 7) to the applicability and accuracy of the ADF method were not addressed in Usoskin et al. (2016a), Willamo, Usoskin, and Kovaltsov (2017), or Usoskin, Kovaltsov and Kiviaho (2021). Factor (5) in the above list was quantified by Willamo, Usoskin, and Kovaltsov (2018) and Usoskin, Kovaltsov and Kiviaho (2021), but factors 1-3 may lead to larger errors and biases.
At the ISSI workshops, Munoz-Jaramillo introduced a segmented-ADF method in order to improve the method published by Usoskin et al. (2016a). The general idea of this new method is to determine the threshold in the same way, but to compensate for points (5-7) by applying a temporal window based on the data coverage of the imperfect observer and to make it move (in time) with regard to the reference dataset to account for the level of activity within the "imperfect observer" data. Then, this threshold is applied to the reference data to count groups. Here, the actual group numbers play again a role, next to the ADF itself. Note also, that the reference dataset is slightly modified to account for possible drifts in the first years. Like the original ADF methodology for GN reconstruction, this refined segmented-ADF approach remains to be fully developed and still requires an end-to-end validation.
Willamo, Usoskin, and Kovaltsov (2017) also point out that the ADF principle is inapplicable during periods of grand minima. As is the case for daisy chaining, this approach also breaks down during intervals with sparse data. However, while the daisy-chaining methods cannot cross such data gaps, and thus reach a dead end at the first sparse-data link in the early \(19^{th}\) century, the ADF has the potential (as yet
undemonstrated) to provide a calibrated tie-point to any "more populated" interval between such gaps, e.g., in the 18th century, as shown in Carrasco et al. (2021c). Thus, a hybrid approach of using the ADF method to span data gaps and daisy chaining for contiguous-observing periods is a possible strategy for creating time series over longer periods than daisy chaining alone.
#### 2.2.2.3 Backbones with non-linear scaling via non-parametric probability distribution functions
Chatzistergos et al. (2017) followed Svalgaard and Schatten (2016) by reconstructing \(G_{N}\) using a sequence of primary "backbone" observers, but improved the original concept in several respects, to avoid some of the weaknesses attributed to the initial version:
1. Use of daily values instead of yearly means.
2. Adding more primary observers, in order to have a direct temporal overlap between all of them, instead of using secondary observers to bridge temporal gaps between disconnected primary observers. (Although the probability of error accumulation increases with each link.)
3. Using a non-parametric mapping based on probability distribution functions (PDFs) of the respective values of a pair of observers, allowing for non-linear corrections, in place of least-square fitting a purely linear relation. (The non-parametric mapping method was first introduced in Usoskin et al., 2016a, as part of the ADF method described in the previous section.) Linear scaling is biased toward overestimating strong cycles, while the non-parametric approach tends to overestimate minima and slightly underestimate maxima.
4. The ability to inherently estimate uncertainties in the reconstruction. In this non-parametric approach, they scaled the group counts G of secondary observers to those (G*) of primary observers through PDFs of G* for each G value. An example of a calibration matrix, for a high-quality secondary observer, Koyama, and primary observer RGO, is shown in Figure 12. This procedure makes no assumption about the type of relationship between G and G* (e.g., linearity) and the error estimate is straightforward. For each backbone, Chatzistergos et al. (2017) constructed a composite series by averaging all the PDFs of all the available observations for every day, to get a distribution based on all available observers. When there are few data points, as in the upper range of \(G_{N}\) values, uncertainties can be estimated by applying Monte Carlo techniques to the PDFs of paired observers when creating time series.
Figure 12: Calibration matrix showing the probability distribution of the residual difference between RGO (primary observer G*) and Koyama (secondary observer, G) as a function of G over 1947-1976. To compensate for the small number of data points in the upper range, columns for G > 15 have been filled with the results of a Monte-Carlo simulation. The red circles with error bars depict the mean G* values for each G column and their 1\(\sigma\) uncertainty. The dashed red / yellow line shows the k'-factor from Hoyt and Schatten (1998a,b), and the green line is an exponential fit, showing a slight non-linearity. (See similar figures in Chatzistergos et al., 2017)
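A simplified illustration of such a calibration matrix and of the remapping step is sketched below (not the actual CEA17 code); the count arrays are hypothetical day-aligned group counts of a secondary and a primary observer, and the Monte Carlo filling of sparse columns is omitted.

```python
import numpy as np

def calibration_matrix(g_secondary, g_primary, g_max=30):
    """matrix[g, :] = probability distribution of the primary count G* given secondary count g."""
    m = np.zeros((g_max + 1, g_max + 1))
    for g, gp in zip(g_secondary, g_primary):
        m[min(int(g), g_max), min(int(gp), g_max)] += 1.0
    row_sums = m.sum(axis=1, keepdims=True)
    return np.divide(m, row_sums, out=np.zeros_like(m), where=row_sums > 0)

def remap(g, matrix):
    """Translate a secondary daily count into the mean primary count and its 1-sigma spread."""
    pdf = matrix[min(int(g), matrix.shape[0] - 1)]
    gstar = np.arange(matrix.shape[1])
    mean = float(np.sum(gstar * pdf))
    sigma = float(np.sqrt(np.sum((gstar - mean) ** 2 * pdf)))
    return mean, sigma
```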
While this PDF approach using daily observations allows a more robust error analysis than the backbone-based method of Svalgaard and Schatten (see 2.2.2.1), it suffers from other limitations, including limited accuracy in the PDF for high group counts due to lower statistics than for low group counts, which is especially important to calibrate the maxima of solar cycles.
#### 2.2.2.4 Tied ranking
Tied ranking (Kendall, 1945) is a new approach to sunspot number recalibration (Dudok de Wit and Kopp, 2022) that is not based on the GN values themselves, but instead on their distribution as measured by observer pairs. Tied ranking replaces the GN variable for a given observer by its order ("ranking") relative to all the other values of that variable. By working with ranked values rather than with original ones, one bypasses the need for correcting individual observers for their nonlinear response, which is one of the main difficulties faced by all methods in the merging process. However, ranked records can be meaningfully compared only if they span the same time interval. To fill data gaps, the expectation maximization method (Dempster, Laird, and Rubin, 1977; Rubin, 1996; Little and Rubin, 2002) is used. This method is a powerful generalization of the backbone and daisy chain methods in the sense that it uses all possible overlaps to fill the data gaps, avoiding the subjective choice of periods of overlap and backbones. The final composite is an average of all the available rankings on a specific day (excluding interpolated values), and then the combined ranking is turned back into group counts using a specific observer. Dudok de Wit and Kopp (2022) suggest that the absolute scale of G\({}_{\rm N}\) series could be given by a complementary approach, e.g., ADF (Usoskin et al., 2016a).
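The toy sketch below illustrates the rank-transform and back-conversion steps only; the expectation-maximization gap filling of the actual method is omitted, the observer arrays are assumed gap-free and day-aligned, and the quantile-based back-mapping onto a reference observer is a simplification of the procedure described above.

```python
import numpy as np
from scipy.stats import rankdata

def to_ranks(daily_counts):
    """Rank-transform one observer's day-aligned group counts (ties receive their average rank)."""
    return rankdata(daily_counts, method="average")

def composite_rank(rank_series):
    """Average the ranks of all observers on each day (series assumed gap-free and aligned)."""
    return np.mean(np.vstack(rank_series), axis=0)

def ranks_to_counts(comp_rank, reference_counts):
    """Map the composite ranking back onto group counts via the quantiles of a reference observer."""
    q = (comp_rank - comp_rank.min()) / (comp_rank.max() - comp_rank.min())
    return np.quantile(np.asarray(reference_counts, dtype=float), q)
```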
Possible limitations of the tied ranking method include: (1) Calibration is dependent on the time interval (phase of the solar cycle, level of activity in the interval covered by the data); and (2) the method cannot account for a trend in an observer. (This latter limitation is common to all methods). In this method, several mathematical techniques are used in succession, and the way in which the output of each step may influence the subsequent ones and the final results must still be fully understood. This will require the separate analysis of intermediate steps, by creating synthetic "benchmarking" input data sets with known characteristics and imperfections. Substantial work is thus still needed for a full validation of this fully innovative approach that emerged from the work of the ISSI team.
#### 2.2.3 Conclusions on the reconstruction methods
Ideally, the reconstruction problem should be separated into two parts: a scientific choice (What is the best approach for converting the different pieces of information into numbers that can be processed?), and a statistical or analytical choice (What is the best method for merging these numbers into a single composite, given their uncertainties?). The reason for decoupling the two is that the production of the composite should not influence how the raw data are interpreted and assembled into source data series.
One of the lessons learned from the ISSI team is that such a decoupling is very difficult to achieve at this stage because all these problems are so interrelated. The general framework that is most appropriate for dealing with such problems is a probabilistic one, in which the sunspot data record from each observer is considered as a conditional distribution that depends on the different observed or unobserved parameters. These parameters may for example be the number of spots on a given day (given the resolution of the telescope, the visual acuity of the observer, etc.) or the knowledge that the observer did not report anything because he or she probably saw no spots during that week. The central goal then is to determine the probability of having a true sunspot number of a given value, given the various observed or unobserved parameters: p(data | parameters).
Such a probabilistic approach naturally leads to Bayesian inference (Gelman et al., 2013), which offers a natural way of estimating such probabilities, with, for example, a pathway for getting rid of unobserved variables by integrating them out. Bayesian thinking also offers a natural way of updating the results when new evidence comes in. Although Bayesian inference has been found to be highly effective for building composites (e.g., Tingley et al., 2012), there still is a long way to go before the sunspot estimation and reconstruction problem can be expressed in terms of conditional probability distributions. However, even if this is not feasible without making approximations, it forces us to express observations that are of very different types within a common and rigorous framework for which well-established methods are available.
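As a toy illustration of this probabilistic framing (not a proposed calibration model), consider an observer who misses each group independently with some probability; the likelihood of the reported count is then binomial, and Bayes' rule gives a posterior over the true count. The miss probability, prior, and numbers below are purely illustrative.

```python
import numpy as np
from scipy.stats import binom

def posterior_true_count(g_reported, miss_prob, prior, n_max=30):
    """p(n | g) proportional to Binomial(g; n, 1 - miss_prob) * prior(n)."""
    n = np.arange(n_max + 1)
    likelihood = binom.pmf(g_reported, n, 1.0 - miss_prob)
    posterior = likelihood * prior(n)
    return n, posterior / posterior.sum()

# Example: a flat prior over 0..30 true groups and a 30% chance of missing any given group.
n, post = posterior_true_count(g_reported=5, miss_prob=0.3,
                               prior=lambda n: np.ones_like(n, dtype=float))
```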
## 3 Benchmarks for sunspot number time series constructions
Benchmarks are rules of thumb or expectations that serve as checks, or points that need to be considered, for any sunspot-based reconstruction of solar activity. They differ from proxies in that they are based solely on sunspot data.
### Expectation (1): Similarity of the \(S_{n}\) and \(G_{n}\) time series
The on-going sunspot number recalibration effort was motivated by the expectation that the Wolf \(S_{n}\)(1.0) and Hoyt and Schatten (1998a,b) \(G_{n}\) time series, which closely tracked each other through a broad range of solar activity during the 20th century, should do so throughout their common time interval, rather than abruptly diverging as they did ca. 1880 (See Figure 1 in Clette et al., 2014, and Figure 1(a) in Clette et al., 2015). This same expectation holds for the current sets of \(G_{n}\) time series, which agree reasonably well with \(S_{n}\) series during the 20th century but exhibit a broad spread in reconstructions of yearly values before ~1880 (Figure 2). At present, the most likely explanation for a separation of \(G_{n}\) and \(S_{n}\) series at this time lies in Wolfer's decision (when he became an assistant at Zurich in 1876) to count individual small pores as groups (see Section 2.1.2.2). The alternative - a change in the internal workings of the Sun resulting in a difference in the relative number of spots and groups at solar maxima - seems less plausible. It is clear that the years ca. 1880, based on the marked dispersion in values of the various series in Figure 2 starting near this time, present a challenge for sunspot number reconstructions.
Figure 13 shows the ratios of \(S_N\)(2.0) to the various \(G_N\) series, after an 11-year running window smoothing, scaled to a value of 1.0 over 1920-1974. We find the \(S_N\)/\(G_N\) ratio for all sunspot series to be roughly constant over the 20th century (in agreement with Svalgaard, 2020), while the various series show divergence prior to 1900. Given the broad range of activity during the ~100-year interval ~1900-2010, we would expect \(S_N\)/\(G_N\) for any \(G_N\) series to remain quasi-constant also before ~1900 - as is the case for the CEA17 and SvSc16 series. We note that when computing the ratios, we excluded years with \(S_N\) < 11. The choice of thresholds influences the computed ratios, with increasing threshold leading to higher \(S_{N}/G_{N}\) ratios for all series except HoSc98, while the ratio for the series by SvSc16 gets closer to unity. We note that a non-linear relation between group and sunspot numbers has been reported, hinting at a slight dependence of the relationship between groups and sunspots on the level of activity (Clette et al., 2016).
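A minimal sketch of how such ratio curves can be formed from yearly series is given below; `years`, `sn`, and `gn` are hypothetical aligned annual arrays, and the NaN-aware boxcar smoothing is a simple stand-in for whatever smoothing was actually applied.

```python
import numpy as np

def smoothed_ratio(years, sn, gn, sn_threshold=11, window=11, norm_interval=(1920, 1974)):
    """S_N/G_N ratio with low-activity years excluded, boxcar-smoothed and normalized."""
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where((sn >= sn_threshold) & (gn > 0), sn / gn, np.nan)
    kernel = np.ones(window) / window
    valid = np.convolve((~np.isnan(ratio)).astype(float), kernel, mode="same")
    smooth = np.convolve(np.nan_to_num(ratio), kernel, mode="same") / np.maximum(valid, 1e-9)
    in_norm = (years >= norm_interval[0]) & (years <= norm_interval[1])
    return smooth / np.nanmean(smooth[in_norm])  # scale to 1.0 over the normalization interval
```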
### Expectation (2): Observers should improve, and correction factors should decrease, over time
Cliver (2016) defined a "correction factor" time series [CF\({}_{i}\)] for a given \(G_{N}\) time series (Eq. (3)), obtained by dividing the annual group count [\(G_{N}\)] by the corresponding yearly average of raw group counts for all observers [\(G_{Nraw}\)], which can be used to assess the reliability of new \(G_{N}\) and \(S_{N}\) reconstructions. \(G_{Nraw}\) thus represents a fully uncorrected group number, without any compensation for the global improvement of observing techniques.
\[CF_{i}=G_{N}/G_{Nraw} \tag{3}\]
[\(G_{Nraw}\)] in Cliver (2016) was produced from all observers in the V16 database and applied in Equation 3 to various \(G_{N}\) series regardless of which data base (Hoyt and Schatten, 1998a,b or Vaquero et al., 2016) (and/or which observers within these data bases) they were constructed from. Here we produced correction factors (in essence, ensemble-averaged k- and k'-factors of observers for a given year) by considering the data used by each series and the corresponding database of raw counts. Specifically, we consulted the tables of observers listed by Cliver and Ling (2016), Chatzistergos et al. (2017), Usoskin et al. (2021a), and Dudok de Wit and Kopp (2022), and produced a [\(G_{Nraw}\)] series by averaging the raw counts of those specific observers from the database used in each case. For HoSc98 we used all available observers in the Hoyt and Schatten (1998a,b) database, while for SvSc16 (v1.12) and DuKo22 (v1.21), we used all observers in the V16 database, but in all cases we excluded the time series from Mt Wilson derived from just the center of the solar disk.
Figure 13: Ratios of the various \(G_{N}\) series to \(S_{N}(2.0)\) after an 11-year running window smoothing, scaled to a value of 1.0 over 1920-1974. Years with \(S_{N}<11\) were ignored when computing the ratios. To illustrate the effect of the \(S_{N}\) threshold on the ratios we also show as shaded surfaces (only for SvSc16, HoSc98, and CEA17) the case when years with \(S_{N}\) less than 50 are ignored.
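A short sketch of this diagnostic, assuming hypothetical dictionaries mapping years to the reconstructed annual \(G_{N}\) and to the list of raw annual counts of the observers used by that reconstruction, is:

```python
import numpy as np

def correction_factor(gn_series, raw_counts):
    """CF_i = G_N / G_Nraw (Equation 3), evaluated year by year."""
    cf = {}
    for year in sorted(set(gn_series) & set(raw_counts)):
        g_raw = float(np.mean(raw_counts[year]))  # ensemble mean of the raw observer counts
        if g_raw > 0:
            cf[year] = gn_series[year] / g_raw
    return cf

def running_mean(cf, window=11):
    """11-year boxcar smoothing of the correction-factor series, as plotted in Figure 14."""
    years = np.array(sorted(cf))
    values = np.array([cf[y] for y in years], dtype=float)
    kernel = np.ones(window) / window
    return years, np.convolve(values, kernel, mode="same")
```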
We would expect that in general CF values for all series would increase more or less monotonically from values ~1 in the 20th century to higher values \(\geq\) 2 when moving back to the early 19th century and before (see Section 5.1) because of (1) inferior telescope technology for earlier centuries, and (2) the change in sunspot counting procedure from Wolf to Wolfer (see Section 2.1.2.2). Both of these changes will result in higher counts for a given level of solar activity in the modern era and a corresponding increase in CF going back in time - as can be seen for nearly all time series in Figure 14. Most GN series show to various degrees the expected decreasing trend, but it is very limited for the original HoSc98 series, which is thus incompatible with the known progress of the observations.
Another kind of test can be based on determination of the losses in imperfect observations, using contemporary data. In this respect, the difference in the definitions of G and S between Wolf and Wolfer had an underlying instrumental cause that extended beyond Wolf's desire to maintain fidelity with earlier observers. After 1860, Wolf primarily used two small-aperture (40 mm/700 mm (focal length) and 42 mm/800 mm) portable telescopes while, beginning in 1876, Wolfer used the standard 83 mm/1320 mm telescope at the Zurich Observatory (Friedli, 2016, 2020). As shown by Karachik, Pevtsov, and Nagovitsyn (2019), who numerically degraded high-resolution photospheric images from the Helioseismic and Magnetic Imager (HMI; Scherrer et al., 2012) instrument on board the Solar Dynamics Observatory (SDO; Pesnell et al., 2012) spacecraft (Figure 15), telescopes with apertures < 80 mm do not resolve a significant number of small pores and thus likely under-estimate the group number. Telescopes with apertures larger than 80 mm resolve the smallest pores sufficiently well, and provide a better representation of G. This is in line with the idea of quantifying the observer's quality via the acuity threshold, as discussed in Section 2.2.2.2, rather than a constant scaling factor.
Figure 14: 11-year running averages of correction factor time series from 1600-2010 for the various sunspot series denoted in the legend.
Karachik, Pevtsov, and Nagovitsyn (2019) also draw attention to non-solar-induced variability of the spot-to-group ratio (see Section 3.1), writing, _"Our results indicate that there is an effect of telescope aperture on the \(S_{\mathrm{N}}\)/\(G_{\mathrm{N}}\) ratio, which should be kept in mind while comparing modern ratios with the early observations made with small aperture instruments and using human eye as the detector."_ The high values of S/G for raw (uncorrected) \(G_{\mathrm{N}}\) series during the 18th century are due to the inability to see small groups, i.e., groups of one or two small spots (see Section 5), due to small (<80 mm) or imperfect objective lenses.
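The degradation experiment can be sketched schematically as follows; the Gaussian blur set by the diffraction limit, the simple intensity-threshold feature counter, and the default HMI wavelength are illustrative simplifications rather than the actual procedure of Karachik, Pevtsov, and Nagovitsyn (2019).

```python
import numpy as np
from scipy import ndimage

def degrade_to_aperture(image, aperture_mm, pixel_arcsec, wavelength_nm=617.3):
    """Smooth a continuum image to the diffraction-limited resolution of a given aperture."""
    theta_arcsec = 1.22 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3) * 206265.0
    sigma_pix = theta_arcsec / (2.355 * pixel_arcsec)  # convert FWHM to Gaussian sigma, in pixels
    return ndimage.gaussian_filter(image, sigma_pix)

def count_dark_features(image, threshold=0.9):
    """Count contiguous regions darker than `threshold` times the mean intensity."""
    labeled, n_features = ndimage.label(image < threshold * image.mean())
    return n_features
```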
## 4 Proxies: Independent long-term time series as cross-checks on \(S_{\mathrm{N}}\) and \(G_{\mathrm{N}}\)
Different methods of sunspot number calibration include complex assumptions which may or may not be correct, and it is important to compare them to other measures of solar activity, either direct (such as solar radio emission, e.g., F10.7, or chromospheric indices, such as Ca II plage areas) or indirect (e.g., cosmogenic nuclides and geomagnetic responses). Because of the large uncertainties of \(S_{\mathrm{N}}\) and \(G_{\mathrm{N}}\) in the 19\({}^{\mathrm{th}}\) century and earlier, proxy datasets can be used to corroborate sunspot estimates in the past.
Figure 15: Dependence of the ratio \(S_{\mathrm{N}}\)/\(G_{\mathrm{N}}\) on telescope aperture derived from numerically degraded images from HMI. Open circles – without scattered light; filled circles – with added 5% scattered light; solid line – fitted linear function for the ratio corresponding to apertures lower than 80 mm; dashed line – fitted linear function for the ratio corresponding to apertures more than 80 mm. The step-wise change in the solid circles from 130-140 mm is an artifact related to the clear aperture of 140 mm for SDO/HMI. (Based on Karachik, Pevtsov, and Nagovitsyn, 2019.)
Proxy data sets provide a measure of solar magnetic variability through its effects on the terrestrial environment, viz., the ionosphere via UV solar irradiance, the magnetosphere via the solar wind, or the atmosphere via the flux of cosmic rays modulated by the interplanetary magnetic field. These proxies are not affected by sunspots themselves but they are all different manifestations of the same process of solar surface magnetic activity produced by the solar dynamo in the convection zone (Charbonneau, 2020). We would expect the physical relationships of the sunspot number to such parameters to be relatively constant over time (Svalgaard, 2016; Cliver, 2017), particularly in annual averages, which remove diurnal and seasonal variations. Recent studies, however, have shown that the true relationship may be convolutive (Preminger and Walton, 2006; Dudok de Wit et al., 2018; Yeo, Solanki, and Krivova, 2020; Krivova et al., 2021), i.e., one cannot transform one solar index/proxy into another simply by assuming an instantaneous linear (or nonlinear) relationship. Indeed, solar proxies may be a delayed, cumulative, or differential response to the primary solar input. This can be explained by the fact that solar indices based on chromospheric or coronal emission also include a large contribution from the extended decay of active regions, while sunspots are much more directly tied to the initial flux emergence. Time averages of at least three months are thus expected to improve the comparisons.
Proxy data should only be used as a last resort in the construction of sunspot number time series, viz., when methods to bridge gaps in the sunspot record based on sunspot observations are unreliable, or for intervals before 1610 for which proxy \(S_{\rm N}\) values have been based on cosmogenic nuclide data (e.g., Usoskin et al., 2021). In general, proxy data can be used both to corroborate \(S_{\rm N}\) and \(G_{\rm N}\) time series and to raise questions about their validity, particularly when abrupt discontinuities separating two extended stable periods (jumps) are observed between sunspot number time series and those for proxy parameters. The diagnostic gains in robustness if the same offset is found in comparisons of the same sunspot data series with multiple unrelated proxies or benchmarks.
### 4.1 2.8 GHz solar radio emission (F10.7)
Shortly after the first reported detection of solar radio waves (Southworth, 1945; Hey, 1946), it was found that the Sun's daily background 2.8 GHz emission (labelled F10.7 for the wavelength in cm) was related to sunspot activity (Pawsey, Payne-Scott, and McCready, 1946; Covington, 1947). Covington and Medd (1954) reported that F10.7 tracked the sunspot number - a close correlation that has been examined and confirmed many times since (Figure 16), most recently by Clette (2021). This good correlation can be explained by the presence at 10.7 cm of a significant contribution from gyro-synchrotron emission arising from the lower corona above sunspots. The near 75-yr span of the carefully calibrated F10.7 record beginning from 1947 (Tapping, 2013) provides a straightforward check of S\({}_{\text{N}}\) and G\({}_{\text{N}}\) series for the modern epoch. The agreement between F10.7 and sunspot number time series vouches for the physical significance of S\({}_{\text{N}}\) and G\({}_{\text{N}}\).
Yeo, Solanki, and Krivova (2020) compared various facular indices, including the F10.7, to sunspot data and found a power law function with a finite impulse response to represent the data best. In a recent in-depth study of the relation between the sunspot number and F10.7, Clette (2021) concludes that the relation between the two indices is fully linear over the whole range of values for the raw daily values. The long-known non-linearity found in the low range for S\({}_{\text{N}}\)< 30 when working with monthly or yearly averages can be fully accounted for by the combined effect of temporal averaging with the non-zero minimum F10.7 background flux for a fully quiet Sun (67 sfu) and the 0-11 jump for the first sunspot in the definition of S\({}_{\text{N}}\).
This all-quiet background F10.7 flux was, by itself, the subject of various studies leading to a range of disagreeing determinations. Clette (2021) found that this lowest value is actually a function of the duration of the spotless period, increasing from 67 sfu for the longest observed intervals (30 days) up to 74 sfu for single spotless days, thereby explaining the apparently contradictory values published previously.
Moreover, by tracking the temporal evolution of the S\({}_{\mathrm{N}}\)/F10.7 relation, this study shows that the relation was fully stable over the entire 70-year interval, except for a 10.5% jump in 1981 (Figure 17). Several tests made it possible to determine that this scale jump is due to an inhomogeneity in the F10.7 series, and that it coincides with the only major historical transition in the operational production of this radio index (a unique succession between the two main scientists in charge of this index, and a simultaneous transition from the original manual processing to the current computerized production).
The Clette (2021) analysis supports the homogeneity of S\({}_{\mathrm{N}}\) version 2.0 (Clette and Lefevre, 2016). This homogeneity is also confirmed by comparisons with individual long-term sunspot observers, including some extra observers who were not included in the 2015 compilation of the SN version 2.0 (Clette, 2021; Hayakawa et al., 2022b). Equivalent comparisons of F10.7 with the original S\({}_{\mathrm{N}}\) version 1 series and with version 2 show that the agreement between the two series is particularly improved after 1981, i.e., for the part of S\({}_{\mathrm{N}}\) version 2.0 that resulted from a full reconstruction. Several deviations reported earlier (Yeo, Solanki, and Krivova, 2020; Clette et al., 2021) have been largely eliminated. The larger residuals before 1981 indicate that larger errors and temporary deviations remain in the Zurich part of the series, before 1980, and that the accuracy of that part of the series could still be significantly improved in future versions.
Figure 16: Plots of F10.7 vs \(S_{\rm N}\) (2.0) for various studies from 1984-2018. (From Clette, 2021.)
### 4.2 Ca II K plage areas
The plage area series is another facular index with a connection to the sunspot number series. Plage areas are determined from full-disc Ca II K (393.367 nm) observations (Chatzistergos et al., 2022b). Such observations have been made since 1892 and continue to be performed from many sites around the world (Chatzistergos et al., 2022b), making them one of the longest direct solar datasets. Various studies compared sunspot number series to plage areas (e.g., Kuriyan et al., 1982; Foukal, 1996; Fligge and Solanki, 1998). More recently, Chatzistergos et al. (2022a) compared plage area series from 38 archives, as well as a composite series of plage areas from all available data (Chatzistergos et al., 2020), to the \(S_{\rm N}\)(1.0), \(S_{\rm N}\)(2.0), SvSc16, and CEA17 sunspot series. A power law relation between plage areas and sunspot number series was found to represent the data best, while a slight dependence of the relationship on the activity level was also reported. A better agreement between plage areas and \(S_{\rm N}\)(2.0) compared to \(S_{\rm N}\)(1.0) was also found, lending further support to the corrections applied to \(S_{\rm N}\)(1.0).
Figure 17: Plot of the 12-month smoothed monthly mean values of F10.7 versus \(S_{\rm N}\) (2.0) showing the largely linear relation between the two indices, and also the different slopes (F10.7/\(S_{\rm N}\) ratio) before 1981 (red) and after this transition (blue). The corresponding linear fits (black dotted and dashed lines) indicate a higher ratio after 1981, by 10.5%. The black line shows a global fit over the whole 70-year long series (figure from Clette, 2021).
### 4.3 Geomagnetic proxies
#### 4.3.1 Inter-Diurnal Variation of geomagnetic activity (IDV)
The level of energization of the Earth's magnetosphere by the near-Earth solar wind is determined by (in approximate order of importance): heliospheric magnetic field orientation and intensity, the solar wind speed, and the solar wind mass density (Vasyliunas et al., 1982; Pulkkinen, 2007). Thus geomagnetic indices, quantitative indicators of global magnetospheric disturbance based on prescribed sets of ground-based magnetometers, can be used to reconstruct near-Earth solar wind conditions (Feynman and Crooker, 1978). Given varying geometric effects associated with Earth orbit and axial inclination relative to the Sun, reconstructions are typically limited to the annual time scale, on which such effects average out (Lockwood et al., 2013). Different geomagnetic indices have different dependencies on the near-Earth solar wind conditions. Thus pairs of indices can be used to disentangle specific solar wind parameters and estimate both the near-Earth magnetic field intensity (B) and speed (V), and the open solar flux (OSF) (Svalgaard et al., 2003; Lockwood, Owens, and Barnard, 2014b). Of particular value in this regard is the inter-diurnal variation (IDV) index (Svalgaard and Cliver, 2005, 2009), which is highly correlated with B and relatively insensitive to V. Using IDV, B reconstructions can be extended back to 1845 with reasonable confidence. Fair agreement between the geomagnetic B estimates (red line) and direct in situ spacecraft observations (black) is shown in Figure 18.
Figure 18: Time series of near-Earth heliospheric magnetic field intensity, B, estimated from different methods. All data are annual means. Direct observations of B, which have been made by in situ spacecraft back to 1964, are shown in black. A composite of weighted geomagnetic estimates of B is shown in red. A composite of weighted sunspot-based estimates, using a range of \(S_N\) and \(G_N\) time series (Clette and Lefevre, 2016, \(S_N\)(2.0); Lockwood, Owens, and Barnard, 2014a,b, LEA14; Svalgaard and Schatten, 2016, SvSc16; and Usoskin et al., 2016, UEA16) and two methods for converting the sunspot number to B, is shown in blue. The shaded regions show the 1-sigma uncertainty ranges. (Figure adapted from Owens et al., 2016.)
In order to compare the geomagnetic estimates with \(S_{N}\), it is necessary to convert \(S_{N}\) to either B or open solar flux (OSF). Owens et al. (2016) considered two approaches. The first uses an empirical relationship between B and \(S_{N}\)(Wang and Sheeley, 2003; Wang, Lean, and Sheeley 2005; Svalgaard and Cliver, 2005). The second uses a physically constrained model of OSF (Owens and Lockwood, 2012), which assumes sunspots are a proxy for OSF production (Solanki, Schussler, and Fligge, 2000, 2002; Krivova et al., 2007). By assuming a constant solar wind speed, the resulting \(S_{N}\)-based OSF estimate can subsequently be converted to B.
Both methods were applied by Owens et al. (2016) to a range of \(S_{N}\) and \(G_{N}\) records (\(S_{N}(2.0)\), LEA14, SvSc16, UEA16). The resulting composite series is shown in blue in Figure 18. The general agreement with the geomagnetic series is strong, with the most notable deviations being an overestimate of the magnitude of B during solar cycle 20 (around 1970) and a persistent underestimate (within uncertainties) before 1900. All the individual sunspot series overestimate solar cycle 20 (1964-1976), suggesting that the difference could result from measuring global solar activity from sunspots versus the inherently local, near-Earth, measure from geomagnetic and spacecraft observations. Conversely, the higher values of B inferred from geomagnetic records before 1900 are in better agreement with the "high" \(S_{N}\) and \(G_{N}\) records in Figure 2 and less consistent with the original "low" sunspot records, namely HoSc98, LEA14 and UEA16.
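A highly simplified sketch of the second (continuity-model) approach is shown below; the source and loss coefficients are arbitrary placeholders, not the calibrated values of Owens and Lockwood (2012), whose loss term in particular varies with solar cycle phase.

```python
import numpy as np

def osf_continuity_model(sn_yearly, source_coeff=1.0e12, loss_fraction=0.2, osf_initial=4.0e14):
    """Yearly Euler integration of dOSF/dt = source(S_N) - loss_fraction * OSF (fluxes in Wb)."""
    osf = np.empty(len(sn_yearly), dtype=float)
    current = osf_initial
    for i, sn in enumerate(sn_yearly):
        current = current + source_coeff * sn - loss_fraction * current
        osf[i] = current
    return osf

# Assuming a constant solar wind speed, the modelled OSF can then be converted to a
# near-Earth field strength B and compared with the geomagnetic estimates in Figure 18.
```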
#### 4.3.2 Daily range of geomagnetic activity (rY)
The daily variation of Earth's magnetic field was first linked to the quasi-decadal variation of sunspot activity (Schwabe, 1844) in 1852 (Wolf, 1852; Gautier, 1852). During the second half of the 19\({}^{\rm th}\) century, this correlation provided the strongest evidence that the magnetic field at the Earth's surface was affected by the Sun (Ellis, 1880, 1898).
Svalgaard (2016) described the physical link between the Sun's spottedness and the daily variation of the geomagnetic field as follows, "_Solar magnetism (as directly observed and as derived from its proxy the sunspot number) gives rise to an... extreme ultraviolet (EUV) excess over that expected from solar blackbody radiation... Solar radiation into the Earth's atmosphere is controlled by the zenith angle and causes thermal winds, which, in conjunction with solar (and lunar) tides, move the atmosphere across geomagnetic-field lines. Radiation with a short-enough wavelength ionizes atmospheric constituents (primarily molecular oxygen), and there is a balance between ion formation and subsequent rapid recombination establishing an... ionospheric conducting layer of electrons and ions [the E-layer of the ionosphere] that due to collisions moves with the winds of the neutral atmosphere across the... geomagnetic field. The resulting inductive dynamo maintains an electric current whose magnetic effect is observable on the ground (Svalgaard, Cliver, and Le Sager, 2004; Nusinov, 2006). The day-night cycle imposes a... diurnal variation of the magnetic effect, which has been observed for several centuries.... The
output of the entire process is the... total daily range of the magnetic variation, which can be readily observed over a wide range of latitude."_
The annual diurnal variation, parameterized by the daily range of the (non-storm) East-component [rY] of Earth's magnetic field, has been reconstructed and tabulated by Svalgaard (2016) back to 1840 (Figure 19). It should be noted that measurements of this diurnal variation exist before 1840, but the accuracy of the resulting rY index is probably insufficient for reliable comparisons with \(G_{N}\) and \(S_{N}\). Indeed, as shown in Yamazaki and Maute (2017), rY has a sensitivity to seasonal effects when source data are incomplete (as is the case before 1840), because rY involves averages over whole years and over longitude.
Figure 19 shows the variation of the ratios of various scaled \(S_{N}\) and \(G_{N}\) series to the rY index over the interval 1840-2010. The gradual rise of the 11-year smoothed ratios from ~1900 to ~1975 tracks the general increase in \(S_{N}\) and \(G_{N}\) reconstructions during this interval, as does the sharp drop after ~2000. This modulation of the ratio by the amplitude of the solar cycle may indicate a non-linear relation that is not fully accounted for. Nevertheless, as this relation is expected to be the same at all times, the rough agreement over 1900-2000 should hold outside of this interval. Of the nine \(S_{N}\) and \(G_{N}\) time series considered in the figure, CEA17 is the one for which the \(G_{N}\)/rY time series is most internally consistent for corresponding extremes of solar activity in different epochs, viz., the peaks in the 18th and 19th centuries and the troughs at the beginning of the 20th and 21st centuries; it gives a ratio that remains closest to unity. On the other hand, low reconstructions, like HoSc98 or DuKo22, strongly deviate to low ratios before 1900, thus giving the worst agreement.
Figure 19: Ratios between the annual mean group sunspot number and the annual mean range rY (in nT), smoothed with an 11-year running mean window. The sunspot number series were normalized to \(S_{N}\)(2.0) over the period 1920-1974. rY was linearly scaled to the CEA17 \(G_{N}\) series to render the ratio (\(G_{N}\)/rY*) around 1. The numbers at the lower part of the panel denote the conventional solar cycle numbering and are shown roughly at the time of cycle maximum.
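A minimal sketch of how a ratio series of this kind can be constructed from annual \(G_{N}\) and rY values is given below; the linear scaling, the normalization window, and the centred boxcar smoothing are illustrative choices, not necessarily those used for the figure:

```python
import numpy as np

def smoothed_ratio(years, g_n, ry, norm=(1920, 1974), window=11):
    """Scale rY linearly onto G_N over the normalization period, then smooth G_N/rY*."""
    years, g_n, ry = map(np.asarray, (years, g_n, ry))
    mask = (years >= norm[0]) & (years <= norm[1])
    slope, intercept = np.polyfit(ry[mask], g_n[mask], 1)
    ry_star = slope * ry + intercept              # rY scaled to the G_N series
    kernel = np.ones(window) / window
    return np.convolve(g_n / ry_star, kernel, mode='valid')   # 11-year running mean, edges trimmed
```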
### Cosmogenic radionuclides
Cosmogenic nuclides are radioactive isotopes which are not normally expected to exist in the terrestrial system, either as a product of the natural radioactivity of the solid Earth or as survivors from the time of the planetary system's formation. The only (or dominant) source of such isotopes is energetic cosmic-ray particles continuously impinging on Earth, which initiate a nucleonic-electromagnetic-muon cascade in the atmosphere. As a by-product of the cascade, some specific radionuclides are produced in trace but measurable amounts and stored in natural dateable archives, such as tree trunks, ice sheets or lake/marine sediments.
The most used cosmogenic isotopes for solar-terrestrial studies are \({}^{14}\)C (radiocarbon) and \({}^{10}\)Be (Beer, McCracken, and von Steiger, 2012; Usoskin, 2017). The concentration of \({}^{14}\)C in dendrochronologically dated tree rings and \({}^{10}\)Be in glaciologically dated polar (Antarctic and Greenland) ice cores serve as measures of the abundance of these isotopes in the troposphere in the past. The flux of galactic cosmic rays near Earth is modulated by solar magnetic activity (Potgieter, 2013; Cliver, Richardson, and Ling, 2013b) which is often quantified via the modulation potential of the solar wind and heliospheric magnetic field (Caballero-Lopez and Moraal, 2004; Usoskin et al., 2005), after accounting for the effect of Earth's slowly changing geomagnetic field which provides additional shielding from cosmic rays (Usoskin, Solanki, and Korte, 2006; Snowball and Muscheler, 2007). Production tables for individual isotopes have been computed by Webber and Higbie (2003), Usoskin and Kovaltsov (2008), Webber, Higbie, and McCracken (2007), and Kovaltsov, Mishev, and Usoskin (2012). The most recent and accurate computational set was provided by Poluianov et al. (2016), which agrees well with the measurements, also in absolute terms (Asvestari and Usoskin, 2016). The first physics-based solar activity reconstruction based on cosmogenic-isotope data was made ~20 years ago (Usoskin et al., 2003; Solanki et al., 2004), and the most recent one is based on a Bayesian multi-proxy approach (Wu et al., 2018a), covering the last ten millennia of the Holocene. Such reconstructions have increasing uncertainties beyond that time because the transport and deposition patterns of isotopes in the atmosphere are less well known for ice-age conditions.
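Schematically, the deconvolution reduces to inverting a production model \(Q(\phi,M)\) for the modulation potential \(\phi\), year by year, once the geomagnetic dipole moment \(M\) is prescribed. A minimal sketch, in which `production_model` is only a placeholder for the tabulated yield functions cited above:

```python
from scipy.optimize import brentq

def modulation_potential(q_measured, production_model, m_geo, phi_max=2000.0):
    """Solve Q(phi, M) = q_measured for phi [MV].

    Q is assumed to decrease monotonically with phi at fixed M, so a bracketing
    root-finder suffices; q_measured must lie between Q(phi_max, M) and Q(0, M).
    """
    return brentq(lambda phi: production_model(phi, m_geo) - q_measured, 0.0, phi_max)
```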
Deconvolving the cosmogenic nuclide data to infer a solar modulation potential and ultimately a proxy sunspot number for years prior to 1610 is a formidable, but necessary, task as cosmogenic isotopes provide the only quantitative information on solar activity before the telescopic era (Beer, McCracken, and von Steiger, 2012; Usoskin, 2017). One of the goals of the sunspot number workshops (Cliver, Clette, and Svalgaard, 2013a; Cliver et al., 2015) that initiated the present sunspot number reconstruction effort was to provide a robust ~400-yr sunspot series that could set the level of a cosmogenic-based sunspot number for the 10 millennia preceding 1610. As Cliver et al. (2015) wrote: _"Calibration of such a time series is complex, however, owing to variations of cosmogenic-nuclide concentrations caused by Earth's magnetic field, terrestrial climate, and possibly volcanic activity [as well as the high noise level of the raw data]. Thus it is necessary to have as long and as accurate a record of solar activity as possible to characterize the effect of these other variables on a long-term cosmogenic-nuclide-based [sunspot number]."_ While the cosmogenic record works reasonably well for characterizing the relative overall level of solar activity and can distinguish between features such as the Modern Grand Maximum (Usoskin et al., 2003; Solanki et al., 2004; Usoskin, 2017) and the Maunder and Sporer Grand Minima (Eddy, 1976;
Usoskin, Solanki, and Korte, 2006; Usoskin, 2017; Asvestari et al., 2017), its use as a reliable arbiter of the smaller differences that characterize discontinuities between newly-developed \(S_{N}\) and \(G_{N}\) time series remains to be demonstrated.
Until recently, the quality/time-resolution of the cosmogenic-isotope data was insufficient to reliably reconstruct sunspot cycles before 1600 AD (Muscheler et al., 2016). In 2021, Brehm et al. (2021) reported high-precision measurements of \({}^{14}\)C concentration in an oak tree archive for years after 970 AD. This new dataset, together with an improved semi-empirical model describing the evolution of the Sun's global total and open magnetic flux (Krivova et al., 2021), made the first high-resolution millennium-long (970-1900) sunspot-cycle reconstruction possible (Usoskin et al., 2021). Figure 20 compares the \({}^{14}\)C-based sunspot number of Usoskin et al. (2021) from 1610-1900 (gray shading indicates 67% confidence intervals) with the \(S_{N}\)(2.0), HoSc98, SvSc16, CEA17, and UEA21 time series. Discrepancies between the amplitudes and timings of the sunspot and \({}^{14}\)C-based time series increase as one goes back in time. Note the high minimum in the \({}^{14}\)C-based \(S_{N}\) record ca. 1780. The properties of individual solar cycles cannot be reliably established by this method during grand minima of activity (Usoskin et al., 2021), but the averaged \(S_{N}\) level is consistent with zero, implying very low \(S_{N}\) during the Maunder minimum (see also Carrasco et al., 2021).
Another cosmogenic nuclide that can be used to trace the evolution of long-term solar activity is the \({}^{44}\)Ti isotope measured in meteorites that have fallen through the ages (Taricco et al., 2006). It is less precise than the terrestrial isotopes and can only indicate a relatively high cosmic-ray level (correspondingly, low solar activity) during the Maunder minimum (Asvestari et al., 2017).
Figure 20: Comparison of the \({}^{14}\)C-based sunspot number (black curve with \(\pm 1\sigma\) grey-shaded uncertainties) with the \(S_{N}\)(2.0) (red), HoSc98 (yellow), SvSc16 (purple), CEA17 (green), and UEA21 (light blue) time series for the 1610-1900 interval. Shown are annual values; \(S_{N}\)(2.0) and the \({}^{14}\)C-based sunspot number were divided by 16.67 to bring them roughly to the same scale as the GSN series. All series, except the \({}^{14}\)C-based sunspot number, were scaled to match over the period 1920-1974. (Adapted from Usoskin et al., 2021b.)
The diagnostics described above can be summarized as follows regarding the amplitude of the reconstructed solar cycles in the 19th century (Table 4). Most of these tests, except the cosmogenic radionuclides, fully exclude the lowest reconstructions, which include the original \(\mathrm{G_{N}}\) series (HoSc98). The intermediate reconstructions are the only ones compatible with most of these external criteria, although the agreement is never optimal (lying either at the lower or at the upper limit of the acceptable range).
Therefore, the overall answer provided by these comparisons remains partly ambiguous. As the external tests and proxies do not yet fully agree among themselves, further progress is needed to improve these indirect tracers of solar activity and to elucidate the remaining disagreements before they can deliver a fully robust and independent benchmark for the \(\mathrm{S_{N}}\) and \(\mathrm{G_{N}}\) series.
**5. Fault lines in the \(\mathrm{G_{N}}\) and \(\mathrm{S_{N}}\) data series**
5.1 Traversing the Dalton Minimum: Staudacher to Schwabe (1798-1833)
As Munoz-Jaramillo and Vaquero (2019) point out, there is a marked drop-off in both the quality and quantity of sunspot data before 1825 (Figure 8, Figure 21). Bridging the data-sparse period of the
Dalton Minimum to scale Staudacher, who counted spots from 1749-1799, to Schwabe (1826-1867) is a key challenge for any sunspot number series. Recently recovered datasets for the Dalton Minimum will help to address this problem (Hayakawa et al., 2020a, 2021f). In general, the relative performance/accuracy of the various reconstruction methods in sparse data environments needs to be examined/assessed (e.g., Usoskin, Mursula, and Kovaltsov, 2003).
Amateur astronomer Johann Casper Staudacher made 1146 drawings of the spotted solar disk from 1749-1799. Quoting from Svalgaard (2020): _"House (1869)... reviewed the Staudach4 material and reports that a 4-foot telescope was used, but that it was not of particular good quality and especially seemed not to have been achromatic, because he quotes Staudach himself remarking on his observation of the Venus transit in 1761 that 'for the size and color of the planet there was no sharp edge, instead it faded from the same black-brown color at the inner core to a still dark brown light red, changing into light blue, then into the high green and then to yellow'. So we may assume that the telescope suffered from spherical and chromatic aberration. We can build replicas with the same optical flows as telescopes available and affordable to amateurs in the 18th century. On Jan. 16, 2016 we started observations of sunspots with such replicas. Three observers (expert members of "The Antique Telescope Society", [http://webari.com/oldscope/](http://webari.com/oldscope/)) have made drawings of the solar disk by projecting the sun onto a sheet of paper. We count the number of individual spots as well as the number of groups they form. Comparing our counts with what modern observers report for the same days we find that the sunspot number calculated from the count by modern observers is three times larger as what our intrepid observers see... and that the number of groups is 2.5 times as large. This suggests that we can calibrate the 18th century observations in terms of the modern level of solar activity by using the above factors. [S\({}_{N}\)(2.0)] divided by 3... is a reasonable match to the sunspot number calculated from Staudach's drawings (Svalgaard, 2017),
Footnote 4: In the literature, this observer is referred to as both Staudacher and Staudach. Here we generally refer to Staudacher (e.g., Figures 9 and 22) except when directly quoting Svalgaard.
Figure 21: (Top) Composite of various \(G_{N}\) times series. (Bottom) Composite butterfly diagram showing the separation of sunspot coverage epochs into intervals before and after Schwabe began his patrol. (Adapted from Muñoz-Jaramillo and Vaquero, 2019.)
thus roughly validating the revised SILSO values and not compatible with the low values of the [Hoyt and Schatten 1998 (a,b)] reconstruction..."_
Svalgaard and Schatten (2016) had earlier used a factor of ~2.5 (obtained through what they described as an _"innovative (and some would say perhaps slightly dubious) analysis"_ involving a comparison of "high count" and "low count" observers from ~1750 to ~1850) to scale Staudacher's observations to those of Wolfer in their G\({}_{\rm N}\) series. The study based on antique telescopes (Svalgaard, 2020) puts this factor on firmer footing. We also note that this 2.5 value matches the correction factors found for several G\({}_{\rm N}\) reconstructions before the 19\({}^{\rm th}\) century (Figure 14).
Telescope aperture and optical quality are not the only factors that affect sunspot group count. Group splitting also plays a role. Before Hale's (1908) discovery of sunspot magnetism, closely spaced groups were generally lumped together into very large groups, sometimes spanning more than 40\({}^{\circ}\) in longitude, as the bipolarity of sunspot groups and the maximum possible size of an active region (~25\({}^{\circ}\)) were then unknown. This tendency to form overly large groups was probably also favored by the lower magnification and smaller drawing size used in early observations. In the mid-20\({}^{\rm th}\) century, Waldmeier (1938) introduced a group classification system that took the temporal evolution of bipoles in a cluster of sunspots into account, abandoning proximity as the sole, or primary, criterion for identifying groups (Friedli, 2009; Svalgaard and Schatten, 2016). As a result, clusters of sunspots that previously had been counted as a single group could now be divided (split) into two or more groups (see Figure 1(b) for an example of such splitting), thus raising the group counts.
Arlt (2008; Arlt and Vaquero, 2020) suggested that Staudacher, with his 18th century telescope, missed all of the small A and B type spot groups (according to the Waldmeier classification) that make up 30-50% of all groups seen today (Clette et al., 2014). These reductions in group counts would imply a Staudacher k'-factor (Equation 2) ranging from ~1.4 to 2.0. From an analysis of Staudacher's sunspot drawings, Svalgaard (2017) found that Waldmeier's classification system would increase Staudacher's group counts in V16 by 25%. Combining the instrumental correction factor (1.43-2.0) with that for group splitting (1.25) yields a group scaling factor (k') range for Staudacher (to Wolfer) of ~1.8 to 2.5, in good agreement with the estimate obtained above with replicas of historical telescopes.
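Numerically, the two corrections simply multiply:

```python
instrumental = (1.43, 2.0)   # range implied by the missed small (A/B type) groups
splitting = 1.25             # Waldmeier-style group splitting
k_prime = tuple(round(f * splitting, 2) for f in instrumental)   # (1.79, 2.5)
```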
5.2 Galileo to Staudacher: Encompassing the Maunder Minimum
Figure 8 shows that the 1730s and 1740s are the weakest link in the sunspot number time series. Substantial attention has been focused by Hayakawa et al. (2022a) on this data-poor interval (Section 2.2.1(c)), which is critical for connecting Staudacher to the Maunder Minimum (Section 2.2.1(b); the low end of the lever arm for TSI reconstruction and climate change studies) and to all preceding years, including those for which the sunspot record must be inferred from cosmogenic radionuclides.
The degree of difficulty of getting the first ~140 years of a sunspot number series correct is underscored by the fact that there are only two systematic reconstructions of such a series that extend to 1610: (1) the Hoyt and Schatten (1998a,b) G\({}_{\rm N}\) series that assigned an average k-factor of 1.255 (slightly higher than the average correction factor for observers during the second half of the 20\({}^{\rm th}\) century; Figure 14) to 78 of 171 pre-1749 observers that did not have any common days of observation with other observers, with a median value of 1.002 for the remaining 93 observers (vs. 1.000 for RGO); and (2) the Svalgaard and Schatten (2016) G\({}_{\rm N}\) series for which the pre-Schwabe years are based in large part on the high-count/low-count observer comparison scheme used to scale Staudacher to Schwabe (Section 5.1)
and a somewhat related "brightest star" method based on the highest daily group count recorded in a given year by any observer. Such speculative methods and reliance on proxies for corroboration will likely be required to obtain yearly values from 1610-1748. The active day fraction (Kovaltsov et al. 2004, Vaquero et al., 2015, Usoskin 2017) has been used to estimate the general level of activity during the Maunder Minimum (Carrasco et al. 2021c, 2022b), which was relatively well-observed (Figure 8), and may be useful to firm up estimates elsewhere in the early series, pending the recovery of more historical data.
## 6 Summary of Progress
The main achievement of the ISSI Sunspot Number Recalibration Team was the data recovery effort headed by Arlt, Carrasco, Clette, Friedli, Hayakawa, and Vaquero, with an emphasis on the identification, digitization, and analysis of primary records, images in particular. Such data can play a key role to bridge gaps between early (pre-Schwabe) segments of the sunspot record. (Sections 2.1.1 and 2.2.1)
A prime focus of the Team meetings was the presentation/probing of novel \(\mathrm{G_{N}}\) reconstruction methods by Chatzistergos, Dudok de Wit, Kopp, Lefevre, Mathieu, Munoz-Jaramillo, Svalgaard, and Usoskin, with an emphasis on the application of modern statistical methods and determination of uncertainties (e.g., Mathieu et al., 2019) as summarized in Tables 2 and 3.
Less progress was made during the Team meetings on proxy time series, although considerable progress had been made beforehand by Svalgaard, Usoskin, Lockwood, Owens and others on long-term geomagnetic and cosmogenic-nuclide-based time series, and Clette has carried out a thorough comparison of F10.7 and S\({}_{N}\)(2.0) as a follow up of discussions in the Team meetings. These proxies can be used to corroborate as well as to identify potential weaknesses in new time series. (Section 4)
During the ISSI Team meetings, Clette introduced the concept of benchmarks for the reconstruction project, e.g., the concept that because of improvements in telescope technology and changes in the definitions of spots (reduced minimum size) and groups (from lumping to splitting), one would expect modern observers to count more spots than those preceding Wolfer (leading to smaller normalization or correction factors over time for a given level of sunspot activity), an expectation that the Hoyt and Schatten \(\mathrm{G_{N}}\) series failed to meet. (Section 3)
"Reverse engineering" experiments based on old (Svalgaard) and new (Karachik, Pevtsov, and Nagovitsyn, 2019) telescopes helped to assess the effect of improvements in telescope technology on the observability of sunspots over time. These studies are relevant for the scaling of Staudacher, the key observer for the second half of the 18\({}^{\mathrm{th}}\) century, to Schwabe, the principal observer for the first half of the 18\({}^{\mathrm{th}}\) century. (Sections 3.2 and 5.1)
At the team meetings, Pesnell represented the space forecasting community, Van Driel-Gesztelyi served as rapporteur, and Kopp reviewed the "triad" results (see below) and mapped the path forward.
Perhaps the most important marker of progress during the ISSI Team meetings was the joining of key stakeholders to argue/debate/discuss the reconstruction of the S\({}_{N}\) and G\({}_{N}\) time series, with commitment to the goal of producing the optimal series with the data at hand and current best practice methodology. As Clette and Lefevre (2016) wrote, the present focus on the sunspot number "marks a fundamental transition between the earlier unalterable and unquestioned data series to a genuine
measurement series, like any other physical data series. As for any other measurement, it is natural to revise it as new data sources and new analysis methods become available."
## 7 Perspective and Prospect
The sunspot number has been called the longest running experiment in science (Owens, 2013). The renewed emphasis on this time series reflects the Sun's impact on our increasingly technology-based society and the need to better quantify the time-varying solar input to the terrestrial climate.
The end goal of the present effort that began in 2011 and continued through the Topical Issue in 2016 and the ISSI Team meetings of 2018 and 2019 to the present day remains the same: to produce a community-vetted series (Cliver, Clette, and Svalgaard, 2013) with quantified time-dependent uncertainties (e.g., Dudok de Wit, Lefevre, and Clette, 2016) for the last ~400 years. Such a base reference can be used to anchor a millennial-scale cosmogenic-nuclide-based time series of solar variability dating back to the last glacial period and to test and validate physical models of the coupling between the past solar input and the observed response of the Earth system.
The last decade has shown that this process of acquiring consensus via applications and discussions of multiple approaches followed by reviews of their results cannot be rushed. The Hoyt and Schatten (1998a,b) time series, valuable both for the introduction of a \(G_{N}\) series that could be extended to 1610 and for the creation of the first publicly available digitized database, taught the lesson of the necessity for due diligence regarding modifications/revisions of the \(S_{N}\). The now apparent issues with the Hoyt and Schatten \(G_{N}\) time series were not independently examined for more than a decade (Cliver and Ling, 2016), during which time it became entrenched as an alternative to \(S_{N}\) in part because of its extension to the Maunder Minimum. In Appendix 1 of Hoyt and Schatten (1998a,b), the \(k\)'-factors for Wolf (1.117) and Wolfer (1.094) are within 2% of each other, despite the fact that Wolfer counted 65% more groups than Wolf (Svalgaard, 2013; Cliver, Clette, Svalgaard, et al., 2013a). Had this peculiarity been noticed when the Hoyt and Schatten (1998a,b) series was introduced, it is unlikely that their then new \(G_{N}\) series would have gained the traction/usage which it retains at some level to this day (e.g., Coddington et al., 2019; Wang and Lean, 2021; Krivova et al., 2021). To ensure that all new \(S_{N}\) and \(G_{N}\) time series were subjected to scrutiny, the ISSI Team formation was preceded by an informal re-examination of each new sunspot number time series by separate "triads" consisting of an advocate, critic, and mediator. This format had the advantages of having former competitors working together - fostering both critical analysis and team building in anticipation of the ISSI effort.
What is then the way forward? The interactions between the members of the ISSI team have shown that the reconstruction of the sunspot number is a multifaceted and highly multidisciplinary problem. At a more conceptual level, this problem consists in collecting information of various types and origins, which are linked in different ways to the main observable of interest, which is the number of sunspots or sunspot groups. Ideally, one should decompose such a problem into two parts: a scientific choice and a statistical or analytical choice (Section 2.2.3). One of the main benefits of this exercise is that it makes us think in a probabilistic way, i.e., never separate an observation and its uncertainty.
Keeping this in mind, the expanded ISSI Sunspot Team of observers, analysts, and modelers will remain electronically connected (with the welcome prospect of actual meetings in time), with the immediate goals of refining the methodology for the various series proposed, and creating corresponding \(S_{N}\) and \(G_{N}\) time series with realistic uncertainties. Once the new/revised time series have been developed,
they will need to be independently reproduced and evaluated using benchmarks and proxy series. Certain methods may work better in data sparse environments, so composite methodologies may be required. This will take time.
We estimate that new databases will be available/released by early 2023, with the key reconstruction methods brought to maturity and corresponding series created by the end of that year. At that point, an evaluation team chaired by the SILSO Director will select the next versions of \(S_{N}\) and \(G_{N}\) for sanction by an International Astronomical Union (IAU) reviewing body, with formal release timed to coincide with the IAU General Assembly in 2024. Even with the release of \(S_{N}\)(3.0) and a consensus \(G_{N}\)(3.0), it is possible that the two series may not be complete reconstructions for years before 1750, given the broader uncertainties and more complex and indirect validations required for these early years. However, values with appropriate uncertainties will be provided for these early years for both series back to 1610, to include the lever arm of the Maunder Minimum. The new series will mark the next step in a now-permanent improvement and quality-assurance process. The new series will be continuously monitored by comparison with a basket of high-quality observers, as well as with proxies and benchmarks. As new data, new knowledge, and new mathematical tools continue to emerge, follow-on versions will be released at intervals of 3 to 10 years, when enough material has accumulated to warrant robust and substantial modification to the series. The goal is to provide the scientific community with a unique and trusted reference that summarizes our best knowledge about the long-term evolution of solar activity, as traced by sunspots, and a reliable link to cosmogenic nuclide data for years before 1610.
###### Acknowledgements.
This work was facilitated by an International Space Science Institute (ISSI) International Team selected in 2017 as number 417, "Calibration of the Sunspot Number Series", organized by Matt Owens and Frederic Clette. We thank ISSI for support of the team. We thank John Leibacher for encouraging us to use this status report to update the Solar Physics 2016 Topical Issue on Sunspot Number Recalibration edited by F. Clette, E.W. Cliver, L. Lefevre, J.M. Vaquero, and L. Svalgaard. This research has made use of NASA's Astrophysics Data System. The National Solar Observatory (NSO) is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under cooperative agreement with the National Science Foundation. F. Clette and L. Lefevre are supported by the Belgian Solar-Terrestrial Center of Excellence (STCE) funded by the Belgian Science Policy Office. L. Lefevre acknowledges funding from the Belgian Federal Science Policy Office (BELSPO) through the BRAIN VAL-U-SUN project (BR/165/A3/VALU-SUN). T. Chatzistergos acknowledges support by the German Federal Ministry of Education and Research (Project No. 01LG1905). H. Hayakawa acknowledges financial support by JSPS Grants-in-Aid JP20K22367, JP20K20918, JP20H05643, and JP21K13957, JSPS Overseas Challenge Program for Young Researchers, and the ISEE director's leadership fund for FY2021, Young Leader Cultivation (YLC) programme of Nagoya University, Tokai Pathways to Global Excellence (Nagoya University) of the Strategic Professional Development Program for Young Researchers (MEXT), and the young researcher units for the advancement of new and undeveloped fields, Institute for Advanced Research, Nagoya University of the Program for Promoting the Enhancement of Research Universities.
H. Hayakawa thanks Chiaki Kuroyanagi for archival investigations for the Eimmart Collection in St. Petersburg, and the NIHU/NINJL Citizen Science Project for supporting his research on Misawa's sunspot records.
I. Usoskin acknowledges partial support of the Academy of Finland (project ESPERA No. 321882).
L. van Driel-Gesztelyi acknowledges the Hungarian National Research, Development and Innovation Office grant OTKA K-131508.
|
2305.10177 | Epimorphisms of generalized polygons B: The octagons | This is the second part of our study of epimorphisms with source a thick
generalized $m$-gon and target a thin generalized $m$-gon. We classify the case
$m = 8$ when the polygons are finite (in the first part [15] we handled the
cases $m = 3, 4$ and $6$). Then we show that the infinite case is very
different, and construct examples which strongly differ from the finite case. A
number of general structure theorems are also obtained, and we also take a look
at the infinite case for general gonality. | Joseph A. Thas, Koen Thas | 2023-05-17T13:02:39Z | http://arxiv.org/abs/2305.10177v1 | # Epimorphisms of generalized polygons B: the octagons
###### Abstract.
This is the second part of our study of epimorphisms with source a thick generalized \(m\)-gon and target a thin generalized \(m\)-gon. We classify the case \(m=8\) when the polygons are finite (in the first part [15] we handled the cases \(m=3,4\) and \(6\)). Then we show that the infinite case is very different, and construct examples which strongly differ from the finite case. A number of general structure theorems are also obtained, and we also take a look at the infinite case for general gonality.
Key words and phrases:generalized polygon; generalized octagon; epimorphism
###### Contents
* 1 Introduction
* 2 Some basic definitions
* 3 Synopsis of known results
* 4 Epimorphisms to thin octagons
* 5 Locally finitely chained and generated generalized polygons
* 6 Counter examples in the infinite case
## 1. Introduction
In this paper, which is a sequel to [15], we will study particular cases of epimorphisms between generalized octagons. Our main motivation of writing the first instance in this series originated from a result of Pasini [10], which generalizes an older result of Skornjakov [11] and Hughes [8] (the latter two papers handle the case of projective planes).
**Theorem 1.1** (Skornjakov-Hughes-Pasini [11, 8, 10]).: _Let \(\alpha\) be a morphism from a thick (possibly infinite) generalized \(m\)-gon \(\mathcal{E}\) to a thick (possibly infinite) generalized \(m\)-gon \(\mathcal{E}^{\prime}\), with \(m\geq 3\). If \(\alpha\) is surjective, then either \(\alpha\) is an isomorphism, or each element in \(\mathcal{E}^{\prime}\) has an infinite fiber in \(\mathcal{E}\)._
(The mathematical notions will be detailed in the next section.) The theorem implies that an epimorphism between finite thick generalized polygons necessarily is an isomorphism, which is quite a surprise, if we would for instance look at this result from the viewpoint of the category of finite groups: the first isomorphism theorem of groups says that for a given epimorphism
\[\gamma:\ A\longrightarrow B, \tag{1}\]
we have a natural isomorphism \(B\cong A/C\), where \(C\) is the kernel of \(\gamma\), but finiteness assumptions on \(A\) and \(B\) by no means imply that \(C\) is trivial. (We _do_ know that the fibers of the elements in \(B\) all have size \(|C|\).) So apparently, in the geometric setting of Theorem 1.1, the fact that the source and target polygons are finite puts enough geometric constraints on the morphism to have a "trivial kernel." Van Maldeghem pointed out in [16, section 4.2.4] that the thickness assumption is crucial here, and in that same remark he briefly mentions a counter example in the thin case. Wondering what the "thin version" of Theorem 1.1 would be, was the starting point of [15], and in that paper we handled the finite projective planes, finite generalized quadrangles and finite generalized hexagons with thick source and thin target (the latter assumed to come with an order).
**Remark 1.2**.: A local variation on Theorem 1.1 by Bodi and Kramer [1] states that an epimorphism between thick generalized \(m\)-gons (\(m\geq 3\)) is an isomorphism if and only if its restriction to at least one point row or line pencil is bijective. Later on, Gramlich and Van Maldeghem thoroughly studied epimorphisms from thick
generalized \(m\)-gons to thick generalized \(n\)-gons with \(n<m\) in their works [4, 5], and again, classification results were obtained based on the local nature of the epimorphisms.
In [15], we also considered the infinite case, and showed that the finite formulation does not naturally generalize to the infinite case (through the construction of counter examples). For that matter, we introduced locally finitely chained and locally finitely generated polygons.
In this paper, we handle the case of generalized octagons (which is more tedious than the other cases), and we also consider the infinite case by constructing (counter) examples in the spirit of [15].
### Synopsis of the present paper
In section 2, we introduce some notions which will be frequently used throughout the paper. In section 3, we remind the reader of the main results of [15] regarding epimorphisms from thick finite generalized \(m\)-gons with \(m\in\{3,4,6\}\) to thin generalized \(m\)-gons with an order. In the subsequent section 4, we formulate and prove the missing octagonal case (Theorem 4.1), which comes with a byproduct for locally finite octagons. Then, in section 5 we provide infinite examples of (known) generalized octagons which yield classes of epimorphisms which agree with our results in the finite case. This is done through the theory of locally finitely generated generalized polygons and locally finitely chained generalized polygons, which was introduced in the predecessor of the present paper. In section 6 however, we freely construct classes of examples of epimorphisms which show that the infinite case necessarily behaves differently (for all finite gonalities \(n\geq 3\)). The point is that there are no restrictions on the order \((s^{\prime},1)\) of the thin target polygon.
## 2. Some basic definitions
We summarize a number of definitions which we will need in due course.
### Generalized polygons
Let \(\Gamma=(\mathcal{P},\mathcal{L},\mathbf{I})\) be a point-line geometry, and let \(m\) be a positive integer at least \(2\). We say that \(\Gamma\) is a _weak generalized \(m\)-gon_ if any two elements in \(\mathcal{P}\cup\mathcal{L}\) are contained in at least one ordinary sub \(m\)-gon (as a subgeometry of \(\Gamma\)), and if \(\Gamma\) does not contain ordinary sub \(k\)-gons with \(2\leq k<m\). For \(m=2\) every point is incident with every line.
If \(m\geq 3\), we say \(\Gamma\) is a _generalized \(m\)-gon_ if furthermore \(\Gamma\) contains an ordinary sub \((m+1)\)-gon as a subgeometry. Equivalently, a weak generalized \(m\)-gon with \(m\geq 3\) is a generalized \(m\)-gon if it is _thick_, meaning that every point is incident with at least three distinct lines and every line is incident with at least three distinct points. A weak generalized \(m\)-gon is _thin_ if it is not thick; in that case, we also speak of _thin generalized \(m\)-gons_. If we do not specify \(m\) (the "gonality"), we speak of _(weak) generalized polygons_. Note that thick generalized \(2\)-gons (or _generalized digons_) do not contain ordinary \(3\)-gons as a subgeometry.
It can be shown that generalized polygons have an _order_\((u,v)\): there exist positive integers \(u\geq 2\) and \(v\geq 2\) such that each point is incident with \(v+1\) lines and each line is incident with \(u+1\) points. We say that a weak generalized polygon is _finite_ if its number of points and lines is finite -- otherwise it is _infinite_. If a thin weak generalized polygon has an order \((1,u)\) or \((u,1)\) it is called a _thin generalized polygon_ of order \((1,u)\) or \((u,1)\).
Note that the generalized \(3\)-gons are precisely the (axiomatic) projective planes. Generalized \(4\)-gons, resp. \(6\)-gons, resp. \(8\)-gons are also called _generalized quadrangles_, resp. _hexagons_, resp. _octagons_.
### Sub polygons
We say that \(\Gamma^{\prime}=(\mathcal{P}^{\prime},\mathcal{L}^{\prime},\mathbf{I}^{ \prime})\) is a _sub generalized \(m\)-gon_ of the generalized \(m\)-gon \(\Gamma\), \(m\geq 3\), if \(\mathcal{P}^{\prime}\subseteq\mathcal{P}\), \(\mathcal{L}^{\prime}\subseteq\mathcal{L}\), and if \(\mathbf{I}^{\prime}\) is the induced incidence coming from \(\mathbf{I}\).
### Morphisms and epimorphisms
A _morphism_ from a weak generalized polygon \(\Gamma=(\mathcal{P},\mathcal{L},\mathbf{I})\) to a weak generalized polygon \(\Gamma^{\prime}=(\mathcal{P}^{\prime},\mathcal{L}^{\prime},\mathbf{I}^{\prime})\) is a map \(\alpha:\mathcal{P}\cup\mathcal{L}\rightarrow\mathcal{P}^{\prime}\cup\mathcal{L}^{\prime}\) which maps points to points, lines to lines, and which preserves the incidence relation (note that we do not ask the gonalities to be the same). We say that a morphism \(\alpha\) is an _epimorphism_ if \(\alpha(\mathcal{P})=\mathcal{P}^{\prime}\) and \(\alpha(\mathcal{L})=\mathcal{L}^{\prime}\). Contrary to Gramlich and Van Maldeghem [4, 5], we do not ask surjectivity onto the set of flags of \(\Gamma^{\prime}\) (the incident point-line pairs of \(\Gamma^{\prime}\)).
Note that in categorical language, an _epimorphism_ is any morphism which is right-cancellative. In the category of sets, this is trivially equivalent to asking that the morphism (map) is surjective. Since morphisms between generalized polygons are defined by the underlying maps between the point sets and line sets, it follows that in the categorical sense, epimorphisms between polygons are indeed as above.
If an epimorphism is injective, and if the inverse map is also a morphism, then we call it an _isomorphism_.
### Doubling
Let \(\Gamma=(\mathcal{P},\mathcal{L},\mathbf{I})\) be a (not necessarily finite) generalized \(n\)-gon of order \((s,s)\) (for \(n=3\), projective planes of order \((1,1)\) are allowed). Define the _double of \(\Gamma\)_ as the generalized \(2n\)-gon \(\Gamma^{\Delta}\) which arises by letting its point set be \(\mathcal{P}\cup\mathcal{L}\), and letting its line set be the flag set of \(\Gamma\) (the set of incident point-line pairs). Its parameters are \((1,s)\). The full automorphism group of \(\Gamma^{\Delta}\) is isomorphic to the group consisting of all automorphisms and dualities (anti-automorphisms) of \(\Gamma\). Sometimes we prefer to work in the point-line dual of \(\Gamma^{\Delta}\), but we use the same notation (while making it clear in what setting we work). This is what we will do in this section. Vice versa, if \(\Gamma^{\prime}\) is a thin generalized \(2n\)-gon of order \((1,s)\), then it is isomorphic to the double \(\Gamma^{\Delta}\) of a generalized \(n\)-gon \(\Gamma\) of order \((s,s)\).
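For instance, if \(\Gamma\) has order \((s,s)\) with \(|\mathcal{P}|=|\mathcal{L}|=N\), then, as each point of \(\Gamma\) is incident with exactly \(s+1\) lines, the double \(\Gamma^{\Delta}\) has

\[2N\ \text{points}\qquad\text{and}\qquad N(s+1)\ \text{lines (the flags of }\Gamma\text{)};\]

doubling the projective plane of order \((2,2)\), for example, yields a thin generalized hexagon of order \((1,2)\) with \(14\) points and \(21\) lines.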
## 3. Synopsis of known results
In this section we summarize some of the results of [15]. Specifically, we list the theorems obtained on epimorphisms from finite projective planes, finite generalized quadrangles and finite generalized hexagons to thin planes, quadrangles and hexagons, respectively. In the general framework of classifying epimorphisms with source a finite thick generalized \(n\)-gon and target a thin \(n\)-gon (with an order), only the octagons (case \(n=8\)) remain, due to a result of Feit and Higman [3].
**Theorem 3.1** (The planes [15]).: _Let \(\Phi\) be an epimorphism of a thick projective plane \(\mathcal{P}\) onto a thin projective plane \(\Delta\) of order \((1,1)\). Then exactly two classes of epimorphisms \(\Phi\) occur (up to a suitable permutation of the points of \(\Delta\)), and they are described as follows._
1. _The points of_ \(\Delta\) _are_ \(\overline{a},\overline{b},\overline{c}\)_, with_ \(\overline{a}\sim\overline{b}\sim\overline{c}\sim\overline{a}\)_, and put_ \(\Phi^{-1}(\overline{x})=\widetilde{X}\)_, with_ \(\overline{x}\in\{\overline{a},\overline{b},\overline{c}\}\)_._ _Let_ \((\widetilde{A},\widetilde{B})\)_, with_ \(\widetilde{A}\neq\emptyset\neq\widetilde{B}\)_, be a partition of the set of all points incident with a line_ \(L\) _of_ \(\mathcal{P}\)_. Let_ \(\widetilde{C}\) _consist of the points not incident with_ \(L\)_. Furthermore,_ \(\Phi^{-1}(\overline{ab})=\{L\}\)_,_ \(\Phi^{-1}(\overline{bc})\) _is the set of all lines distinct from_ \(L\) _but incident with a point of_ \(\widetilde{B}\) _and_ \(\Phi^{-1}(\overline{ac})\) _is the set of all lines distinct from_ \(L\) _but incident with a point of_ \(\widetilde{A}\)_._
2. _The dual of (a)._
**Theorem 3.2** (The quadrangles [15]).: _Let \(\Phi\) be an epimorphism of a thick generalized quadrangle \(\mathcal{S}\) of order \((s,t)\) onto a grid \(\mathcal{G}\). Let \(\mathcal{G}\) have order \((s^{\prime},1)\). Then \(s^{\prime}=1\) and exactly two classes of epimorphisms \(\Phi\) occur (up to a suitable permutation of the points of \(\mathcal{G}\))._
1. _The points of_ \(\mathcal{G}\) _are_ \(\overline{a},\overline{b},\overline{c},\overline{d}\)_, with_ \(\overline{a}\sim\overline{b}\sim\overline{c}\sim\overline{d}\sim\overline{a}\)_, and put_ \(\Phi^{-1}(\overline{x})=\widetilde{X}\)_, with_ \(\overline{x}\in\{\overline{a},\overline{b},\overline{c},\overline{d}\}\)_._ _Let_ \((\widetilde{A},\widetilde{B})\)_, with_ \(1\leq|\widetilde{A}|\leq s,1\leq|\widetilde{B}|\leq s\)_, be a partition of the set of all points incident with a line_ \(L\) _of_ \(\mathcal{S}\)_. Let_ \(\widetilde{C}\) _consist of the points not incident with_ \(L\) _but collinear with a point of_ \(\widetilde{B}\)_, and let_ \(\widetilde{D}\) _consist of the points not incident with_ \(L\) _but collinear with a point of_ \(\widetilde{A}\)_. Further,_ \(\Phi^{-1}(\overline{ab})=\{L\}\)_,_ \(\Phi^{-1}(\overline{bc})\) _is the set of all lines distinct from_ \(L\) _but incident with a point of_ \(\widetilde{B}\)_,_ \(\Phi^{-1}(\overline{ad})\) _is the set of all lines distinct from_ \(L\) _but incident with a point of_ \(\widetilde{A}\) _and_ \(\Phi^{-1}(\overline{cd})\) _consists of all lines incident with at least one point of_ \(\widetilde{C}\) _and at least one point of_ \(\widetilde{D}\)_._
2. _The dual of (a)._
**Theorem 3.3** (The hexagons [15]).: _Let \(\Phi\) be an epimorphism of a thick generalized hexagon \(\mathcal{S}\) of order \((s,t)\) onto a thin generalized hexagon \(\mathcal{G}\) of order \((s^{\prime},1)\). Then \(s^{\prime}=1\) and exactly two classes of epimorphisms \(\Phi\) occur (up to a suitable permutation of the points of \(\mathcal{G}\))._
1. _The points of_ \(\mathcal{G}\) _are_ \(\overline{a},\overline{b},\overline{c},\overline{d},\overline{e},\overline{f}\)_, with_ \(\overline{a}\sim\overline{b}\sim\overline{c}\sim\overline{d}\sim\overline{e}\sim\overline{f}\sim\overline{a}\)_, and put_ \(\Phi^{-1}(\overline{x})=\widetilde{X}\)_, with_ \(\overline{x}\in\{\overline{a},\overline{b},\overline{c},\overline{d},\overline{e},\overline{f}\}\)_._ _Let_ \((\widetilde{C},\widetilde{B}),1\leq|\widetilde{C}|\leq s,1\leq|\widetilde{B}|\leq s\)_, be a partition of the set of all points incident with some line_ \(L\) _of_ \(\mathcal{S}\)_. Let_ \(\widetilde{D}\) _consist of the points not incident with_ \(L\) _but collinear with a point of_ \(\widetilde{C}\)_, let_ \(\widetilde{A}\) _consist of the points not incident with_ \(L\) _but collinear with a point of_ \(\widetilde{B}\)_, let_ \(\widetilde{E}\) _consist of the points not in_ \(\widetilde{C}\cup\widetilde{D}\) _but collinear with a point of_ \(\widetilde{D}\)_, and let_ \(\widetilde{F}\) _consist of the points not in_ \(\widetilde{A}\cup\widetilde{B}\) _but collinear with a point of_ \(\widetilde{A}\)_. Further,_ \(\Phi^{-1}(\overline{bc})=\{L\}\)_,_ \(\Phi^{-1}(\overline{cd})\) _is the set of all lines distinct from_ \(L\) _but incident with a point of_ \(\widetilde{C}\)_,_ \(\Phi^{-1}(\overline{ab})\) _is the set of all lines distinct from_ \(L\) _but incident with a point of_ \(\widetilde{B}\)_,_ \(\Phi^{-1}(\overline{de})\) _is the set of all lines distinct from the lines of_ \(\Phi^{-1}(\overline{cd})\) _but incident with a point of_ \(\widetilde{D}\)_,_ \(\Phi^{-1}(\overline{fa})\) _is the set of all lines distinct from the lines of_ \(\Phi^{-1}(\overline{ab})\) _but incident with a point of_ \(\widetilde{A}\)_, and_ \(\Phi^{-1}(\overline{fe})\) _is the set of all lines not in_ \(\Phi^{-1}(\overline{fa})\) _but incident with a point of_ \(\widetilde{F}\) _(that is, the set of all lines not in_ \(\Phi^{-1}(\overline{de})\) _but incident with a point of_ \(\widetilde{E}\)_)._
2. _The dual of (a)._
In the next section, we handle the last remaining (and most tedious) case: the octagons.
## 4. Epimorphisms to thin octagons
**Theorem 4.1**.: _Let \(\Phi\) be an epimorphism of a thick generalized octagon \(\mathcal{S}\) of order \((s,t)\) onto a thin generalized octagon \(\mathcal{G}\) of order \((s^{\prime},1)\). Then \(s^{\prime}=1\) and exactly two classes of epimorphisms \(\Phi\) occur (up to a suitable permutation of the points of \(\mathcal{G}\))._
1. _The points of_ \(\mathcal{G}\) _are_ \(\overline{a},\overline{b},\overline{c},\overline{d},\overline{e},\overline{f},\overline{g},\overline{h}\)_, with_ \(\overline{a}\sim\overline{b}\sim\overline{c}\sim\overline{d}\sim\overline{e}\sim\overline{f}\sim\overline{g}\sim\overline{h}\sim\overline{a}\)_, and put_ \(\Phi^{-1}(\overline{x})=\widetilde{X}\)_, with_ \(\overline{x}\in\{\overline{a},\overline{b},\overline{c},\overline{d},\overline{e},\overline{f},\overline{g},\overline{h}\}\)_._ _Let_ \((\widetilde{C},\widetilde{B}),1\leq|\widetilde{C}|\leq s,1\leq|\widetilde{B}|\leq s\)_, be a partition of the set of all points incident with a line_ \(L\) _of_ \(\mathcal{S}\)_. Let_ \(\widetilde{D}\) _consist of the points not incident with_ \(L\) _but collinear with a point of_ \(\widetilde{C}\)_, let_ \(\widetilde{A}\) _consist of the points not incident with_ \(L\) _but collinear with a point of_ \(\widetilde{B}\)_, let_ \(\widetilde{E}\) _consist of the points not in_ \(\widetilde{C}\cup\widetilde{D}\) _but collinear with a point of_ \(\widetilde{D}\)_, let_ \(\widetilde{H}\) _consist of the points not in_ \(\widetilde{A}\cup\widetilde{B}\) _but collinear with a point of_ \(\widetilde{A}\)_, let_ \(\widetilde{F}\) _consist of the points not in_ \(\widetilde{D}\cup\widetilde{E}\) _but collinear with a point of_ \(\widetilde{E}\)_, and let_ \(\widetilde{G}\) _consist of the points not in_ \(\widetilde{A}\cup\widetilde{H}\) _but collinear with a point of_ \(\widetilde{H}\)_._ _Further,_ \(\Phi^{-1}(\overline{cb})=\{L\}\)_,_ \(\Phi^{-1}(\overline{cd})\) _is the set of all lines distinct from_ \(L\) _but incident with a point of_ \(\widetilde{C}\)_,_ \(\Phi^{-1}(\overline{ab})\) _is the set of all lines distinct from_ \(L\) _but incident with a point of_ \(\widetilde{B}\)_,_ \(\Phi^{-1}(\overline{de})\) _is the set of all lines distinct from the lines of_ \(\Phi^{-1}(\overline{cd})\) _but incident with a point of_ \(\widetilde{D}\)_,_ \(\Phi^{-1}(\overline{ah})\) _is the set of all lines distinct from the lines of_ \(\Phi^{-1}(\overline{ab})\) _but incident with a point of_ \(\widetilde{A}\)_,_ \(\Phi^{-1}(\overline{ef})\) _is the set of all lines distinct from the lines of_ \(\Phi^{-1}(\overline{de})\) _but incident with a point of_ \(\widetilde{E}\)_,_ \(\Phi^{-1}(\overline{hg})\) _is the set of all lines distinct from the lines of_ \(\Phi^{-1}(\overline{ah})\) _but incident with a point of_ \(\widetilde{H}\)_, and_ \(\Phi^{-1}(\overline{gf})\) _is the set of all lines not in_ \(\Phi^{-1}(\overline{hg})\) _but incident with a point of_ \(\widetilde{G}\) _(that is, the set of all lines not in_ \(\Phi^{-1}(\overline{ef})\) _but incident with a point of_ \(\widetilde{F}\)_)._
2. _The dual of (a)._
_Proof._ We proceed in a number of steps. We first explain a connection between thin generalized octagons and generalized quadrangles.
Let \(\mathcal{G}\) be a thin generalized octagon of order \((s^{\prime},1),s^{\prime}\geq 1\). Define an equivalence relation \(\sim\) on the set of lines of \(\mathcal{G}\): \(L\sim M\) if \(\widetilde{\mathbf{d}}(L,M)\) is even, where \(\widetilde{\mathbf{d}}(\cdot,\cdot)\) is the distance in the line graph of \(\mathcal{G}\). Let \(U,V\) be the equivalence classes. Call the elements of \(U\) _points_, the elements of \(V\) _lines_, and say that \(L\in U\) is _incident_ with \(M\in V\) if and only if \(\widetilde{\mathbf{d}}(L,M)=1\). Then this incidence structure is a generalized quadrangle \(\widetilde{\mathcal{S}}\) of order \(s^{\prime}\). Let \(x\) be a point of \(\mathcal{G}\), and let \(L\in U\) and \(M\in V\) be the lines of \(\mathcal{G}\) which are incident with \(x\). Then the point \(x\) can be identified with the flag \((L,M)\) of the quadrangle \(\widetilde{\mathcal{S}}\).
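(In the finite case this is consistent with the standard element counts: a thin generalized octagon of order \((s^{\prime},1)\) has

\[(1+s^{\prime})(1+s^{\prime}+s^{\prime 2}+s^{\prime 3})=(1+s^{\prime})^{2}(1+s^{\prime 2})\]

points, which is precisely the number of flags of a generalized quadrangle of order \((s^{\prime},s^{\prime})\), while its \(2(1+s^{\prime})(1+s^{\prime 2})\) lines correspond to the points and lines of \(\widetilde{\mathcal{S}}\) taken together.)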
Below, if we use the notation \(\mathbf{d}(\cdot,\cdot)\), we express the distance measured in the incidence graph.
(1) Let \(L\) be any line of \(\mathcal{S}\). Then \(\Phi\) maps the set of all points incident with \(L\) onto the set of all points incident with \(\Phi(L)\).
_Proof of (1)._ Let \(L\) be any line of \(\mathcal{S}\), let \(\Phi(L)=\overline{L}\), let \(\widehat{L}\) be the set of all points incident with \(L\), and let \(\widehat{\overline{L}}\) be the set of all points incident with \(\overline{L}\). Assume, by way of contradiction, that \(\Phi(\widehat{L})\neq\widehat{\overline{L}}\). Let \(\overline{x}\,\mathbf{I}\,\overline{L}\), \(\overline{x}\not\in\Phi(\widehat{L})\). Let \(\mathbf{d}(\overline{x},\overline{y})=6\), \(\mathbf{d}(\overline{y},\overline{L})=7\) and \(y\in\Phi^{-1}(\overline{y})\). If \(z\,\mathbf{I}\,L\) such that \(\mathbf{d}(z,y)\leq 6\), then \(\mathbf{d}(z,y)=6\), \(\mathbf{d}(y,L)=7\) and \(\Phi(z)=\overline{x}\). So \(\overline{x}\in\Phi(\widehat{L})\), a contradiction.
(2) Let \(x\) be any point of \(\mathcal{S}\), and let \(\Phi(x)=\overline{x}\). Then the image of the set of lines incident with \(x\), is the set of lines incident with \(\overline{x}\).
Proof of (2).: Dualize proof of (1).
(3) \(s^{\prime}=1\).
_Proof of (3)._ Let \(A,B\) be the two systems of lines of \(\mathcal{G}\), let \(\Phi^{-1}(A)\) be the union of all inverse images of the elements of \(A\), and let \(\Phi^{-1}(B)\) be the union of all inverse images of the elements of \(B\). If \(x\) is a point of \(\mathcal{S}\), then \(A_{x}\) is the set of all lines of \(\Phi^{-1}(A)\) incident with \(x\) and \(B_{x}\) is the set of all lines of \(\Phi^{-1}(B)\) incident with \(x\).
Let \(c\) be a point of \(\mathcal{S}\) and let \(M\) be a line incident with \(c\). Let \(\overline{c}=\Phi(c)\), let \(\overline{d}\) be a point at distance 8 from \(\overline{c}\), and let \(\Phi(d)=\overline{d}\). Then \(\textbf{d}(c,d)=\textbf{8}\). The line \(\overline{M}=\Phi(M)\) is incident with \(\overline{c}\); say \(\overline{M}\in B\). Let \(\overline{d}\,\textbf{I}\,\overline{U}\), \(\textbf{d}(\overline{U},\overline{M})=\textbf{6}\) and \(d\,\textbf{I}\,U\), \(\textbf{d}(U,M)=\textbf{6}\). Then \(U\in\Phi^{-1}(\overline{U})\) and \(\overline{U}\in A\). It easily follows that \(|B_{c}|=|A_{d}|\). Similarly \(|A_{c}|=|B_{d}|\).
From now on, assume, by way of contradiction that \(s^{\prime}>1\).
Let \((f,F)\) and \((g,G)\) be the point-line flags of the generalized quadrangle \(\widetilde{\mathcal{S}}\) which correspond to \(\overline{c}\) and \(\overline{d}\). Then \(\textbf{d}(\overline{c},\overline{d})=\textbf{8}\) is equivalent with \(f\not\sim g\) and \(F\not\sim G\) in \(\widetilde{\mathcal{S}}\). Let \((h,H)\) be a flag of \(\widetilde{\mathcal{S}}\) with \(f\not\sim h\not\sim g\) and \(F\not\sim H\not\sim G\) (such a flag exists as \(s^{\prime}>1\)). If \(\overline{r}\) is the point of \(\mathcal{G}\) which corresponds to \((h,H)\), then \(\textbf{d}(\overline{c},\overline{r})=\textbf{d}(\overline{d},\overline{r})= \textbf{8}\). If \(r\in\Phi^{-1}(\overline{r})\), then \(\textbf{d}(c,r)=\textbf{d}(d,r)=\textbf{8}\). So \(|A_{c}|=|B_{r}|=|A_{d}|\) and similarly \(|B_{c}|=|B_{d}|\). Hence \(|A_{c}|=|A_{d}|=|B_{c}|=|B_{d}|\).
Let \(m\) and \(n\) be points at distance 8 in \(\mathcal{S}\). If \(\textbf{d}(\overline{m},\overline{n})=\textbf{8}\), with \(\Phi(m)=\overline{m}\) and \(\Phi(n)=\overline{n}\), then we know that \(|B_{m}|=|A_{m}|=|B_{n}|=|A_{n}|\).
Now let \(\textbf{d}(\overline{m},\overline{n})=\textbf{6}\), and let \(\overline{m}\,\textbf{I}\,\overline{M},\overline{n}\,\textbf{I}\,\overline{N}\), \(\textbf{d}(\overline{M},\overline{N})=\textbf{4}\), \(\textbf{d}(\overline{m},\overline{u})=\textbf{d}(\overline{u},\overline{v})=\textbf{d}(\overline{v},\overline{n})=2\). Let \(\overline{l}\,\textbf{I}\,\overline{u}\overline{v},\overline{l}\neq\overline{u},\overline{l}\neq\overline{v}\). Further, let \(\overline{r}\) be such that \(\textbf{d}(\overline{l},\overline{r})=\textbf{4}\), \(\textbf{d}(\overline{u},\overline{r})=\textbf{d}(\overline{v},\overline{r})=\textbf{6}\). Then \(\textbf{d}(\overline{m},\overline{r})=\textbf{d}(\overline{n},\overline{r})=\textbf{8}\). So \(|A_{m}|=|A_{r}|=|B_{r}|=|B_{m}|\) and \(|A_{n}|=|A_{r}|=|B_{r}|=|B_{n}|\). Hence \(|A_{m}|=|B_{m}|=|A_{n}|=|B_{n}|\).
Next, let \(\textbf{d}(\overline{m},\overline{n})=\textbf{4}\). Let \((f,F)\) and \((g,G)\) be the point-line flags of the generalized quadrangle \(\widetilde{\mathcal{S}}\) which correspond to \(\overline{m}\) and \(\overline{n}\), where \(g\,\textbf{I}\,F\) (in \(\widetilde{\mathcal{S}}\)). Let \((h,H)\) be a flag in \(\widetilde{\mathcal{S}}\) with \(h\not\sim f,h\not\sim g,H\not\sim F,H\not\sim G\) (as \(s^{\prime}>1\) this flag exists). If \(\overline{r}\) is the point of \(\mathcal{G}\) which corresponds to \((h,H)\), then \(\textbf{d}(\overline{m},\overline{r})=\textbf{d}(\overline{n},\overline{r})= \textbf{8}\). Let \(r\in\Phi^{-1}(\overline{r})\); then \(|A_{m}|=|A_{r}|=|B_{r}|=|B_{m}|\) and \(|A_{n}|=|A_{r}|=|B_{r}|=|B_{n}|\), so \(|A_{m}|=|B_{m}|=|B_{n}|=|A_{n}|\).
Finally, let \(\textbf{d}(\overline{m},\overline{n})=\textbf{2}\). Choose \(\overline{r}\) such that \(\textbf{d}(\overline{m},\overline{r})=\textbf{d}(\overline{n},\overline{r})= \textbf{8}\), \(\textbf{d}(\overline{r},\overline{mn})=\textbf{7}\). As before it easily follows that \(|A_{m}|=|A_{n}|=|B_{m}|=|B_{n}|\).
Hence for \(m\) and \(n\) points of \(\mathcal{S}\) at distance 8, we always have \(|A_{m}|=|A_{n}|=|B_{m}|=|B_{n}|\).
Next, consider points \(m\) and \(n\) of \(\mathcal{S}\) at distance 6, 4 or 2. Clearly there is a point \(r\) with \(\textbf{d}(m,r)=\textbf{d}(n,r)=\textbf{8}\). Again it easily follows that \(|A_{m}|=|A_{n}|=|B_{m}|=|B_{n}|\).
We conclude that for any two points \(m\) and \(n\) of \(\mathcal{S}\) we have
\[|A_{m}|=|A_{n}|=|B_{m}|=|B_{n}|. \tag{2}\]
Let \(r\) be a point of \(\mathcal{S}\). Let \(r\,\textbf{I}\,L_{1},r\,\textbf{I}\,L_{2},L_{1}\neq L_{2}\) and \(L_{1},L_{2}\in A_{r}\) (remark that \(|A_{r}|=\frac{t+1}{2}\)). Let \(\overline{r}=\Phi(r),\Phi(L_{1})=\Phi(L_{2})=\overline{L},\overline{u}_{1}\,\textbf{I}\,\,\overline{L},\overline{u}_{1}\neq\overline{r}\), and \(u_{1}\,\,\textbf{I}\,L_{1}\) with \(\overline{u}_{1}=\Phi(u_{1})\). Further, let \(\overline{u}_{2}\,\,\textbf{I}\,\overline{L},\overline{u}_{2}\neq\overline{u}_{1}\), and \(u_{2}\,\,\textbf{I}\,L_{2}\) with \(\overline{u}_{2}=\Phi(u_{2})\). Next, let \(N_{1}\in B_{u_{2}},\overline{N}_{1}=\Phi(N_{1}),\overline{v}_{1}\,\textbf{I}\,\overline{N}_{1},\overline{v}_{1}\neq\overline{u}_{2}\), and \(v_{1}\,\,\textbf{I}\,N_{1}\) with \(\overline{v}_{1}=\Phi(v_{1})\). Finally, let \(N_{2}\in A_{v_{1}},\overline{N}_{2}=\Phi(N_{2}),\overline{v}_{2}\,\textbf{I}\,\overline{N}_{2},\overline{v}_{2}\neq\overline{v}_{1}\), and \(v_{2}\,\,\textbf{I}\,N_{2}\) with \(\overline{v}_{2}=\Phi(v_{2})\).
Let \(R_{1}\in B_{u_{1}}\) and let \(\textbf{d}(R_{1},R_{2})=\textbf{6}\) with \(v_{2}\,\,\textbf{I}\,R_{2}\). Hence \(R_{1}\) determines \(R_{2}\). Then \(\Phi(R_{2})=\overline{R_{2}}=\overline{N}_{2}\). Hence \(R_{2}\in A_{v_{2}}\). It follows that \(|B_{u_{1}}|\leq|A_{v_{2}}\setminus\{N_{2}\}|\). So \(|B_{u_{1}}|<|A_{v_{2}}|\), clearly a contradiction.
(4) **Let \(s^{\prime}=1\). Then not for any given points \(\overline{x}\) and \(\overline{y}\) in \(\mathcal{G}\) with \(\mathbf{d}(\overline{x},\overline{y})=6\), there exist points \(x\in\Phi^{-1}(\overline{x})\) and \(y\in\Phi^{-1}(\overline{y})\) with \(\mathbf{d}(x,y)=8\).**

_Proof of (4)._ Assume, by way of contradiction, that for any given points \(\overline{x}\) and \(\overline{y}\) in \(\mathcal{G}\) with \(\mathbf{d}(\overline{x},\overline{y})=6\), there exist points \(x\in\Phi^{-1}(\overline{x})\) and \(y\in\Phi^{-1}(\overline{y})\) with \(\mathbf{d}(x,y)=8\). Let \(\overline{a},\overline{b},\overline{c},\overline{d},\overline{e},\overline{f},\overline{g},\overline{h}\) be the points of \(\mathcal{G}\), with \(\overline{a}\sim\overline{b}\sim\overline{c}\sim\overline{d}\sim\overline{e}\sim\overline{f}\sim\overline{g}\sim\overline{h}\sim\overline{a}\), and let \(a,b,c,d,e,f,g,h\) be points of \(\mathcal{S}\) with \(\Phi(a)=\overline{a},\ldots,\Phi(h)=\overline{h}\) (so that, by the first part of the proof of (3), \(\mathbf{d}(a,e)=\mathbf{d}(b,f)=\mathbf{d}(c,g)=\mathbf{d}(d,h)=8\)).
Let \(A=\{\overline{bc},\overline{de},\overline{f}\overline{g},\overline{h}\overline{a}\}\) and \(B=\{\overline{ab},\overline{cd},\overline{ef},\overline{g}\overline{h}\}\). We have \(|A_{a}|=|B_{e}|\) and \(|B_{a}|=|A_{e}|,|A_{b}|=|B_{f}|\) and \(|B_{b}|=|A_{f}|,|A_{c}|=|B_{g}|\) and \(|B_{c}|=|A_{g}|,|A_{d}|=|B_{h}|\) and \(|B_{d}|=|A_{h}|\).
Let \(\Phi(a^{\prime})=\overline{a}\). As \(\mathbf{d}(\overline{a},\overline{e})=8\), we have \(\mathbf{d}(a^{\prime},e)=8\). Hence \(|B_{a}|=|A_{e}|=|B_{a^{\prime}}|\), similarly \(|A_{a}|=|A_{a^{\prime}}|\).
By assumption there are points \(a^{\prime}\) and \(d^{\prime}\), with \(\Phi(a^{\prime})=\overline{a},\Phi(d^{\prime})=\overline{d}\) and \(\mathbf{d}(a^{\prime},d^{\prime})=8\). Let \(d^{\prime}\sim l\sim n\sim m\sim a^{\prime}\) and assume that \(\Phi(ma^{\prime})\in A\) (points can be chosen in such a way). Then \(\Phi(m)=\overline{a},\Phi(n)=\overline{b}\) and \(\Phi(l)=\overline{c}\). So \(\Phi(ld^{\prime})=\overline{cd}\in B\). By letting \(m\) vary such that \(\Phi(ma^{\prime})\in A\), we see that \(|B_{d^{\prime}}|\geq|A_{a^{\prime}}|\), so \(|B_{d}|\geq|A_{a}|\). Also \(|B_{d}|=|A_{h}|,|A_{a}|=|B_{e}|\) and so \(|B_{d}|\geq|B_{e}|\).
Similarly one obtains \(|B_{e}|\geq|B_{d}|\), and so \(|B_{e}|=|B_{d}|\).
Consequently \(|B_{a}|=|B_{b}|=|B_{c}|=|B_{d}|=|B_{e}|=|B_{f}|=|B_{g}|=|B_{h}|\), and similarly \(|A_{a}|=|A_{b}|=|A_{c}|=|A_{d}|=|A_{e}|=|A_{f}|=|A_{g}|=|A_{h}|\). As \(|B_{e}|=|A_{a}|\), it now follows that \(|A_{u}|=|B_{u}|=\frac{(t+1)}{2}\) for each point \(u\) of \(\mathcal{S}\).
As in the last part of (3) we now find a contradiction.
(5) **For \(s^{\prime}=1\) two (mutually dual) cases occur**.
_Proof of (5)._ By (4) we know that not for any given points \(\overline{x}\) and \(\overline{y}\) in \(\mathcal{G}\) with \(\mathbf{d}(\overline{x},\overline{y})=6\), there exist points \(x\in\Phi^{-1}(\overline{x})\) and \(y\in\Phi^{-1}(\overline{y})\) with \(\mathbf{d}(x,y)=8\).
Let \(\overline{a},\overline{b},\overline{c},\overline{d},\overline{e},\overline{f},\overline{g},\overline{h}\) be the points of \(\mathcal{G}\), with \(\overline{a}\sim\overline{b}\sim\overline{c}\sim\overline{d}\sim\overline{e} \sim\overline{f}\sim\overline{g}\sim\overline{h}\sim\overline{a}\), let \(A=\{\overline{bc},\overline{de},\overline{f}\overline{g},\overline{h}\overline{a}\}\) and \(B=\{\overline{ab},\overline{cd},\overline{ef},\overline{g}\overline{h}\}\), and let \(\Phi^{-1}(\overline{x})=\widetilde{X}\) for \(\overline{x}\in\{\overline{a},\overline{b},\overline{c},\overline{d}, \overline{e},\overline{f},\overline{g},\overline{h}\}\).
Without loss of generality, we assume that there do not exist points \(a^{\prime}\in\widetilde{A}\) and \(d^{\prime}\in\widetilde{D}\) for which \(\mathbf{d}(a^{\prime},d^{\prime})=8\) (and so \(\mathbf{d}(a^{\prime},d^{\prime})=6\)).
**CASE (a) Assume that \(b_{1}\sim b_{2},b_{1}\neq b_{2}\), with \(b_{1},b_{2}\in\widetilde{B}\).**
**SUBCASE (1) Let \(\Phi(b_{1}b_{2})=\overline{b}\overline{a}\in B\).**
Let \(M\in\Phi^{-1}(A)\) be a line incident with \(b_{1}\). Let \(c_{1}\)\(\mathbf{I}\)\(M\), with \(c_{1}\in\widetilde{C}\) (we know that \(M\) is incident with at least one point of \(\widetilde{C}\)). Assume, by way of contradiction, that \(b_{2}\) is incident with a second line \(M^{\prime}\in\Phi^{-1}(B)\). Then \(M^{\prime}\) contains a point \(a_{1}\in\widetilde{A}\). Let \(N^{\prime}\) be a line of \(\Phi^{-1}(B)\) incident with \(c_{1}\) and let \(d_{1}\in\widetilde{D}\) be incident with \(N^{\prime}\). Then \(\mathbf{d}(a_{1},d_{1})=8\), a contradiction. Hence \(b_{2}\) is incident with a unique line of \(\Phi^{-1}(B)\). Similarly \(b_{1}b_{2}\) is the unique line of \(\Phi^{-1}(B)\) incident with any point \(b_{i}\) of \(\widetilde{B}\) incident with \(b_{1}b_{2}\). So \(b_{i}\) is incident with \(t\) lines of \(\Phi^{-1}(A)\).
Assume, by way of contradiction, that \(b_{i}\sim b^{\prime}\), \(b^{\prime}\) not incident with \(b_{1}b_{2}\), \(b^{\prime}\in\widetilde{B}\). Then \(b_{i}b^{\prime}\in\Phi^{-1}(A)\). Let \(b_{i}\sim c^{\prime},b_{i}\neq c^{\prime},b_{1}b_{2}\neq b_{i}c^{\prime}\neq b_{i}b^{\prime}\). Then \(b_{i}c^{\prime}\in\Phi^{-1}(A)\). We may assume that \(c^{\prime}\in\widetilde{C}\). Let \(c^{\prime}\sim d^{\prime},c^{\prime}\neq d^{\prime},c^{\prime}d^{\prime}\in\Phi^{-1}(B),d^{\prime}\in\widetilde{D}\). Further, let \(a^{\prime}\sim b^{\prime},a^{\prime}\neq b^{\prime},b_{i}b^{\prime}\neq a^{\prime}b^{\prime},a^{\prime}b^{\prime}\in\Phi^{-1}(B)\) and \(a^{\prime}\in\widetilde{A}\). Then \(\mathbf{d}(a^{\prime},d^{\prime})=8\), a contradiction. Hence each line \(N\in\Phi^{-1}(A)\) incident with \(b_{i}\) contains \(s\) points of \(\widetilde{C}\).
Let \(b_{1},b_{2},\ldots,b_{u}\) be the points of \(\widetilde{B}\) incident with \(b_{1}b_{2}\). Then all points not incident with \(b_{1}b_{2}\) but collinear with one of \(b_{1},b_{2},\ldots,b_{u}\) belong to \(\widetilde{C}\).
Assume that \(u<s\), so that \(b_{1}b_{2}\) is incident with at least two points \(a_{1},a_{2}\) of \(\widetilde{A}\). Assume, by way of contradiction, that there is a second line \(R\) of \(\Phi^{-1}(B)\) incident with \(a_{1}\). Let \(r_{1}\)\(\mathbf{I}\)\(R,r_{1}\in\widetilde{B},r_{1}\sim r_{2},r_{1}\neq r_{2},r_{1}r_{2}\in \Phi^{-1}(A),r_{2}\in\widetilde{C}\), and let \(r_{2}\sim d_{1},r_{2}\neq d_{1},r_{2}d_{1}\in\Phi^{-1}(B),d_{1}\in\widetilde{D}\). Then \(\mathbf{d}(a_{2},d_{1})=8\), a contradiction. So each point of \(\widetilde{A}\) incident with \(b_{1}b_{2}\) is incident with \(t\) lines of \(\Phi^{-1}(A)\). If there would be a point of \(\widetilde{A}\) not incident with \(b_{1}b_{2}\) but collinear with a point \(a_{i}\) of \(\widetilde{A}\) incident with \(b_{1}b_{2}\), then it is easy to construct a point \(d_{2}\in\widetilde{D}\) with \(\mathbf{d}(a_{i},d_{2})=8\), a contradiction. Hence each point not incident with \(b_{1}b_{2}\) but collinear with some point of \(\widetilde{A}\) incident with \(b_{1}b_{2}\), belongs to \(\widetilde{H}\).
Assume again that \(u<s\). Assume, by way of contradiction, that there exists a point \(c^{\prime\prime}\in\widetilde{C}\) which is not collinear with a point of \(\widetilde{B}\) incident with \(b_{1}b_{2}\). We consider two cases.
1. \(\mathbf{d}(c^{\prime\prime},b_{1}b_{2})\in\{3,5\}\).
Clearly \(\mathbf{d}(c^{\prime\prime},b_
\(c^{\prime\prime\prime}b_{i}\in\Phi^{-1}(A)\). If \(d^{\prime\prime}\in\widetilde{D},c^{\prime\prime}\sim d^{\prime\prime}\), and \(c^{\prime\prime}d^{\prime\prime}\in\Phi^{-1}(B)\), then \(\textbf{d}(a_{1},d^{\prime\prime})=\textbf{8}\), a contradiction. Hence \(c^{\prime\prime}c^{\prime\prime\prime}\in\Phi^{-1}(B)\) and \(c^{\prime\prime\prime}b_{i}\in\Phi^{-1}(A)\). Interchange roles of \(\{b_{1},b_{2}\}\) and \(\{c^{\prime\prime},c^{\prime\prime\prime}\}\), and of \(\widetilde{B}\) and \(\widetilde{C}\). Then by the first section of (1) we know that \(c^{\prime\prime\prime}b_{i}\) is incident with \(s\) points of \(\widetilde{B}\), a contradiction as \(c^{\prime\prime\prime}b_{i}\) is also incident with \(s\) points of \(\widetilde{C}\). Consequently \(c^{\prime\prime}\sim w\sim a_{j}\), with \(a_{j}\in\widetilde{A}\) incident with \(b_{1}b_{2}\). Then by the foregoing paragraph \(w\in\widetilde{H}\), which contradicts \(w\sim c^{\prime\prime}\).
2. \(\textbf{d}(c^{\prime\prime},b_{1}b_{2})=\textbf{7}\). First, let \(b_{i}\sim c^{\prime\prime\prime}\sim l\sim c^{\prime\prime},1\leq i\leq u\), with \(c^{\prime\prime},c^{\prime\prime\prime}\in\widetilde{C}\). By (1.1) we have \(l\not\in\widetilde{C}\). So \(l\in\widetilde{B}\cup\widetilde{D}\). First, let \(l\in\widetilde{B}\). Let \(b^{\prime\prime\prime}\textbf{I}\ c^{\prime\prime\prime}l,l\neq b^{\prime\prime\prime}\neq c^{\prime\prime\prime}\). Then by (1.1) we have \(b^{\prime\prime\prime}\in\widetilde{B}\). Let \(a^{\prime\prime}\sim b^{\prime\prime\prime},a^{\prime\prime}\in\widetilde{A}\), and let \(d^{\prime\prime}\sim c^{\prime\prime},d^{\prime\prime}\in\widetilde{D}\). Then \(\textbf{d}(a^{\prime\prime},d^{\prime\prime})=\textbf{8}\), a contradiction. Hence \(l\in\widetilde{D}\). Let \(l^{\prime}\,\textbf{I}\ c^{\prime\prime\prime}l,l\neq l^{\prime}\neq c^{\prime\prime\prime}\). By (1.1) \(l^{\prime}\in\widetilde{D}\). Let \(c^{\prime\prime}\sim b^{\prime\prime\prime\prime}\sim a^{\prime\prime\prime},b^{\prime\prime\prime\prime}\in\widetilde{B},a^{\prime\prime\prime}\in\widetilde{A}\). Then \(\textbf{d}(l^{\prime},a^{\prime\prime\prime})=\textbf{8}\), a contradiction. Next, let \(a_{j}\sim h^{\prime}\sim z\sim c^{\prime\prime}\), with \(a_{j}\textbf{I}\ b_{1}b_{2},a_{j}\in\widetilde{A},h^{\prime}\in\widetilde{H}\). Then \(\textbf{d}(h^{\prime},c^{\prime\prime})=\textbf{4}\), clearly a contradiction.
From (1.1) and (1.2) it follows that for \(u<s\) each point of \(\widetilde{C}\) is collinear with some point of \(\{b_{1},b_{2},\ldots,b_{u}\}\).
Now we will prove that for \(u\leq s\) each point of \(\widetilde{A}\) is incident with the line \(b_{1}b_{2}\). Assume, by way of contradiction, that \(a^{*}\in\widetilde{A}\) is not incident with \(b_{1}b_{2}\). Let \(b_{i}\sim c_{j}\sim d_{1},c_{j}\in\widetilde{C},d_{1}\in\widetilde{D}\). If the line \(c_{j}d_{1}\) is incident with at least two points of \(\widetilde{C}\), then, interchanging roles of \(\widetilde{B}\) and \(\widetilde{C}\), we see that \(s\) points of \(b_{i}c_{j}\) must belong to \(\widetilde{B}\), a contradiction. Hence \(c_{j}d_{1}\) is incident with \(s\) points of \(\widetilde{D}\). Let \(d_{2}\) be a second point of \(\widetilde{D}\) incident with \(d_{1}c_{j}\). Then \(\textbf{d}(a^{*},d_{1})=\textbf{d}(a^{*},d_{2})=\textbf{6}\), recalling that \(\textbf{d}(a^{*},d_{1})=8\), \(\textbf{d}(a^{*},d_{2})=8\) are not possible, and so \(\textbf{d}(c_{j},a^{*})=\textbf{4}\). Let \(c_{j^{\prime}}\textbf{I}b_{i}c_{j},c_{j^{\prime}}\in\widetilde{C}\). Then also \(\textbf{d}(c_{j^{\prime}},a^{*})=\textbf{4}\); this holds for the \(s\) points \(c_{j^{\prime}}\) of \(\widetilde{C}\) incident with \(b_{i}c_{j}\) as \(\textbf{d}(a^{*},c_{j^{\prime}})\geq 4\). Hence \(a^{*}\sim b_{i}\), and so \(a^{*}\textbf{I}\ b_{1}b_{2}\), a contradiction. Consequently every point of \(\widetilde{A}\) is incident with the line \(b_{1}b_{2}\). If \(h_{1}\in\widetilde{H}\), then any line of \(\Phi^{-1}(A)\) incident with \(h_{1}\) is incident with a point \(a_{j}\in\widetilde{A}\). Hence each point of \(\widetilde{H}\) is collinear with some point \(a_{j}\textbf{I}\ b_{1}b_{2}\).
**We summarize.**_Every point \(b_{i},i\in\{1,2,\ldots,u\}\), is incident with \(t\) lines of \(\Phi^{-1}(A)\) and just one line of \(\Phi^{-1}(B)\). Every point not incident with \(b_{1}b_{2}\) but collinear with some \(b_{i},i\in\{1,2,\ldots,u\}\), belongs to \(\widetilde{C}\)._ Assume now that \(u<s\). Then every point \(a_{j}\in\widetilde{A}\) incident with \(b_{1}b_{2}\) is incident with \(t\) lines of \(\Phi^{-1}(A)\); each of these lines is incident with \(s\) points of \(\widetilde{H}\). Also, each point of \(\widetilde{C}\) is collinear with some point of \(\{b_{1},b_{2},\ldots,b_{u}\}\). Finally, for \(u\leq s\) each point of \(\widetilde{A}\) is incident with \(b_{1}b_{2}\), and each point of \(\widetilde{H}\) is collinear with some point of \(\{a_{1},a_{2},\ldots,a_{s+1-u}\}\)._
Assume that there is a point \(b^{*}\in\widetilde{B}\) not incident with \(b_{1}b_{2}\). A line \(V\) of \(\Phi^{-1}(B)\) incident with \(b^{*}\) contains a point \(a_{j}\in\widetilde{A}\), with \(a_{j}\textbf{I}\ b_{1}b_{2}\); the other points incident with \(V\) belong to \(\widetilde{B}\).
* _Assume that \(\Phi^{-1}(\overline{a}\overline{b})=\{b_{1}b_{2}\}\). If \(b^{*}\in\widetilde{B}\) is not incident with \(b_{1}b_{2}\), then \(b^{*}\) is incident with a line of \(\Phi^{-1}(\overline{a}\overline{b})\), a contradiction. Hence each point of \(\widetilde{B}\) is incident with \(b_{1}b_{2}\). In this case \(\widetilde{A},\widetilde{B}\) define a partition of the set of all points incident with \(b_{1}b_{2},1<|\widetilde{B}|\leq s,1\leq|\widetilde{A}|\leq s\). Also, \(\widetilde{C}\) consists of all points not incident with \(b_{1}b_{2}\) but collinear with a point of \(\widetilde{B}=\{b_{1},b_{2},\ldots,b_{u}\}\), and \(\widetilde{H}\) consists of all points not incident with \(b_{1}b_{2}\) but collinear with a point of \(\widetilde{A}=\{a_{1},a_{2},\ldots,a_{s+1-u}\}\). Now it easily follows that we have Case (a) in the statement of the theorem._
* _Assume that \(|\Phi^{-1}(\overline{a}\overline{b})|>1\). First, let \(|\widetilde{A}|>1\) and let \(U\in\Phi^{-1}(\overline{a}\overline{b}),U\neq b_{1}b_{2}\). So \(U\in\Phi^{-1}(B)\). But by the summary, we have that as \(U\) is incident with a point of \(\widetilde{A}\), the line \(U\) belongs to \(\Phi^{-1}(A)\), a contradiction. Hence \(\widetilde{A}=\{a_{1}\}\). Then \(\widetilde{B}\) consists of the points distinct from \(a_{1}\) but incident with \(r\) lines incident with \(a_{1},1\leq r\leq t\), and \(\widetilde{H}\) consists of the points distinct from \(a_{1}\) but incident with \(t+1-r\) lines incident with \(a_{1}\). Now it easily follows that we have Case (b) in the statement of the theorem._
**SUBCASE (2)** _Let \(\Phi(b_{1}b_{2})=\overline{b}\overline{c}\in A\)._
Let \(b_{1},b_{2},\ldots,b_{u}(u>1)\) be the points of \(\widetilde{B}\) incident with \(b_{1}b_{2}\), and let \(c_{1},c_{2},\ldots,c_{s+1-u}\) be the points of \(\widetilde{C}\) incident with \(b_{1}b_{2}\).
Let \(c_{j}\sim d^{\prime},d^{\prime}\in\widetilde{D},j\in\{1,2,\ldots,s+1-u\}\). Assume that \(c^{\prime}\in\widetilde{C},c^{\prime}\,\mathbf{I}\,c_{j}d^{\prime},c_{j}\neq c^ {\prime}\). Interchanging roles of \(\widetilde{B}\) and \(\widetilde{C}\) in Case (a), then by Subcase (1) we have one of the cases in the statement of the theorem. So we may assume that the line \(c_{j}d^{\prime}\) is incident with \(s\) points of \(\widetilde{D}\).
Assume that there is a point \(d^{\prime\prime}\in\widetilde{D}\) not collinear with one of the points \(c_{1},c_{2},\ldots,c_{s+1-u}\). Let \(a^{\prime}\sim a^{\prime\prime}\sim b_{1}\sim a^{\prime}\), with \(a^{\prime}\neq a^{\prime\prime}\) and \(a^{\prime},a^{\prime\prime}\in\widetilde{A}\). Then \(\mathbf{d}(a^{\prime},d^{\prime\prime})=\mathbf{d}(a^{\prime\prime},d^{ \prime\prime})=6\). Then necessarily \(d^{\prime\prime}\) is collinear with some point \(c_{j}(d^{\prime\prime}\sim c_{j}\sim b_{1})\) of \(\{c_{1},c_{2},\ldots,c_{s+1-u}\}\), a contradiction. Hence \(\widetilde{D}\) is the set of all points not incident with \(b_{1}b_{2}\) but collinear with some point \(c_{j},j\in\{1,2,\ldots,s+1-u\}\).
Assume, by way of contradiction, that there is some point \(c^{\prime\prime}\in\widetilde{C}\) which is not incident with \(b_{1}b_{2}\). Let \(d^{*}\sim c^{\prime\prime}\), with \(d^{*}\in\widetilde{D}\). Then \(d^{*}\) is collinear with some \(c_{j},j\in\{1,2,\ldots,s+1-u\}\). Interchanging again roles of \(\widetilde{B}\) and \(\widetilde{C}\), then by (a)(1) we may assume that the line \(c^{\prime\prime}d^{*}\) is incident with \(s\) points of \(\widetilde{D}\) (otherwise we have one of the two cases in the statement of the theorem). Each such point is collinear with some point of \(\widetilde{C}\) incident with \(b_{1}b_{2}\), a contradiction. So every point of \(\widetilde{C}\) is incident with \(b_{1}b_{2}\).
Now assume that \(u<s\), that is, there are at least two points \(c_{1},c_{2}\) of \(\widetilde{C}\) incident with \(b_{1}b_{2}\). Assume, by way of contradiction, that the point \(a^{\prime\prime\prime}\in\widetilde{A}\) is not collinear with any point of \(\{b_{1},b_{2},\ldots,b_{u}\}\). Let \(c_{1}\sim d^{\prime\prime\prime}\sim d^{\prime\prime\prime\prime}\sim c_{1},d^{\prime\prime\prime}\neq d^{\prime\prime\prime\prime}\), and \(d^{\prime\prime\prime},d^{\prime\prime\prime\prime}\in\widetilde{D}\). Then \(\mathbf{d}(a^{\prime\prime\prime},d^{\prime\prime\prime})=\mathbf{d}(a^{\prime\prime\prime},d^{\prime\prime\prime\prime})=6\). So necessarily \(\mathbf{d}(a^{\prime\prime\prime},c_{1})=4\). Similarly, \(\mathbf{d}(a^{\prime\prime\prime},c_{2})=4\), clearly a contradiction. Hence every point of \(\widetilde{A}\) is collinear with some point of \(\{b_{1},b_{2},\ldots,b_{u}\}\).
Assume again that \(u<s\). Interchanging roles of \(\widetilde{B}\) and \(\widetilde{C}\), the second paragraph of (a)(2) shows that each \(c_{j}\) is incident with \(t\) lines of \(\Phi^{-1}(B),j=1,2,\ldots,s+1-u\).
Let again be \(u<s\). By interchanging roles of \(\widetilde{B}\) and \(\widetilde{C}\), we see, relying on a foregoing paragraph, that each point of \(\widetilde{B}\) is incident with \(b_{1}b_{2}\). Hence the set of all points incident with \(b_{1}b_{2}\) is the set \(\widetilde{B}\cup\widetilde{C}\).
Now it easily follows that for \(u<s\) we have Case (a) in the statement of the theorem.
Consequently we now have to assume that \(|\widetilde{C}|=1\), so \(\widetilde{C}=\{c_{1}\}\).
**We summarize what we know in this case.**_Each \(b_{i}\) is incident with \(t\) lines of \(\Phi^{-1}(B)\), \(i=1,2,\ldots,u\); each such line contains \(s\) points of \(\widetilde{A}\). The unique point \(c_{1}\) of \(\widetilde{C}\) is incident with \(t^{\prime},1\leq t^{\prime}\leq t\), lines of \(\Phi^{-1}(B)\); each such line contains \(s\) points of \(\widetilde{D}\). Each point of \(\widetilde{D}\) is collinear with \(c_{1}\)._
We proceed with the proof of the theorem.
1. _Assume that_ \(t^{\prime}=t\). Assume, by way of contradiction, that \(a^{*}\in\widetilde{A}\) is not collinear with any point of \(\{b_{1},b_{2},\ldots,b_{s}\}\). As \(\mathbf{d}(a^{*},d^{*})\neq 8\) for each \(d^{*}\in\widetilde{D}\), it follows that \(a^{*}\sim b^{*}\sim c_{1}\) for some \(b^{*}\in\widetilde{B}\) not incident with \(b_{1}b_{2}\). Hence \(b^{*}c_{1}\in\Phi^{-1}(A)\), so \(t^{\prime}<t\), a contradiction. Hence each point of \(\widetilde{A}\) is collinear with some \(b_{i},i\in\{1,2,\ldots,s\}\). Finally, we show that each point of \(\widetilde{B}\) is incident with \(b_{1}b_{2}\). Assume, by way of contradiction, that \(b^{**}\in\widetilde{B}\) is not incident with \(b_{1}b_{2}\). Let \(a^{**}\sim b^{**},a^{**}\in\widetilde{A}\). Then \(a^{**}\sim b_{i}\) for some \(i\in\{1,2,\ldots,s\}\). By (a)(1) we may assume that \(a^{**}b^{**}\) contains \(s\) points of \(\widetilde{A}\) (otherwise we have Case (a) or Case (b) in the statement of the theorem), and each of these points is collinear with some \(b_{j},j\in\{1,2,\ldots,s\}\), a contradiction. Hence each point of \(\widetilde{B}\) is incident with \(b_{1}b_{2}\). As before it is now clear that we have Case (a) in the statement of the theorem.
2. _Now assume \(t^{\prime}<t\). Let \(M\in\Phi^{-1}(A),M\neq b_{1}b_{2}\), be incident with \(c_{1}\). Then \(M\) is incident with \(s\) points of \(\widetilde{B}\). Also, each line of \(\Phi^{-1}(B)\) containing a point of \(\widetilde{B}\) is incident with \(c_{1}\). Now it is clear that we have Case (b) in the statement of the theorem._
**CASE (b)**_Assume that \(c_{1}\sim c_{2},c_{1}\neq c_{2}\), with \(c_{1},c_{2}\in\widetilde{C}\)._
Similar to (a).
**CASE (c)**_No two points of \(\widetilde{B}\) and no two points of \(\widetilde{C}\) are collinear._
Let \(b_{1}\in\widetilde{B},c_{1}\in\widetilde{C},b_{1}\sim c_{1}\). Then necessarily \(b_{1},c_{1}\) are the only points incident with \(b_{1}c_{1}\), so \(s=1\), a contradiction.
Call a thick generalized \(n\)-gon of order \((s,t)\)_locally finite_ if exactly one of \(s,t\) is finite.
**Corollary 4.2**.: _Let \(\gamma:\Gamma\mapsto\Delta\) be an epimorphism from the thick locally finite generalized octagon \(\Gamma\) of order \((s,t)\) with \(t\) finite, to the thin octagon \(\Delta\) of order \((s^{\prime},1)\). Then \(s^{\prime}=1\) and we have the conclusion of Theorem 4.1._
Proof.: The part of the proof of Theorem 4.1 that shows that \(s^{\prime}=1\) only uses the fact that the number of lines incident with a point of \(\Gamma\) is finite.
## 5. Locally finitely chained and generated generalized polygons
Let \(S\) be any point set of a generalized \(n\)-gon \(\Gamma\); then as in [15], \(\left\langle S\right\rangle\) by definition is the intersection of all (thin and thick) sub \(n\)-gons that contain \(S\). We call \(\left\langle S\right\rangle\) the subgeometry _generated by \(S\)_. Note that this is not necessarily a (thin or thick) generalized \(n\)-gon itself, but generically it is.
Call a thick generalized \(n\)-gon _locally finitely generated_ if the following property holds:
for every finite point subset \(S\) which generates a (possibly thin) sub \(n\)-gon \(\left\langle S\right\rangle\), we have that \(\left\langle S\right\rangle\) is finite.
Finite generalized \(n\)-gons are trivially locally finitely generated.
Call a generalized \(n\)-gon \(\Gamma\)_locally finitely chained_ if there exists a chain of finite point subsets
\[S_{0}\subseteq S_{1}\subseteq\ldots\subseteq S_{i}\subseteq\ldots\]
indexed over the positive integers, such that each \(S_{j}\) generates a finite (possibly thin) sub \(n\)-gon, and such that
\[\bigcup_{i\geq 0}\left\langle S_{i}\right\rangle\;=\;\Gamma.\]
If \(\Gamma\) is not finite, it is easy to see that we may assume that the chain
\[\left\langle S_{0}\right\rangle\subseteq\left\langle S_{1}\right\rangle \subseteq\ldots\subseteq\left\langle S_{i}\right\rangle\subseteq\ldots\]
is strict.
A very simple key observation is the following lemma taken from [15], in which we suppose that the considered locally finitely generated and locally finitely chained \(n\)-gons are not finite (to avoid trivialities).
**Lemma 5.1** (Thas and Thas [15]).:
1. _Every thick locally finitely generated generalized_ \(n\)_-gon contains thick locally finitely chained sub_ \(n\)_-gons._
2. _Every thick locally finitely chained generalized_ \(n\)_-gon is locally finitely generated._
3. _A thick generalized_ \(n\)_-gon is locally finitely generated if and only if its point-line dual is._
4. _A thick generalized_ \(n\)_-gon is locally finitely chained if and only if its point-line dual is._
Note that \(\cup_{i\geq 0}\left\langle S_{i}\right\rangle\) is countably infinite or finite.
**Observation 5.2** (Thas and Thas [15]).:
1. _Let_ \(\Gamma\) _be a thick locally finitely chained generalized_ \(n\)_-gon. Then_ \(n\in\{3,4,6,8\}\)_._
2. _Let_ \(\Gamma\) _be a thick locally finitely generated generalized_ \(n\)_-gon. Then_ \(n\in\{3,4,6,8\}\)_._
(Due to their very definition, both types of polygon share many properties with finite polygons.)
Before proceeding, we introduce the following two properties for a point-line incidence structure \(\Gamma=(\mathcal{P},\mathcal{L},\mathbf{I})\):
1. (Ia) every two elements of \(\mathcal{P}\cup\mathcal{L}\) can be joined by at most one path of length \(<n\);
2. (Ib) every two elements of \(\mathcal{P}\cup\mathcal{L}\) can be joined by at least one path of length \(\leq n\).
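In graph-theoretic terms, condition (Ia) says that the incidence graph of \(\Gamma\) contains no circuit of length less than \(2n\) (equivalently, its girth is at least \(2n\)), while (Ib) says that any two elements lie at distance at most \(n\), i.e., the incidence graph has diameter at most \(n\).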
Call a point-line incidence geometry \(\Gamma=(\mathcal{P},\mathcal{L},\mathbf{I})\)_firm_ if each element is incident with at least two different elements. Then by Van Maldeghem [17], \(\Gamma\) is a weak generalized \(n\)-gon if and only if it is firm, and both (Ia) and (Ib) are satisfied.
**Theorem 5.3**.: _Let \(\Gamma\) be a thick locally finitely chained generalized \(8\)-gon, and let \(\gamma:\Gamma\mapsto\Delta\) be an epimorphism, with \(\Delta\) a thin \(8\)-gon of order \((s,1)\). Then \(s=1\) and the conclusions of Theorem 4.1 hold._
_Proof._ Assume, by way of contradiction, that \(s\neq 1\). Now define \(m\) such that \(\gamma(\left\langle S_{m}\right\rangle)\) strictly contains an ordinary \(8\)-gon \(\Delta^{\prime}\) as subgeometry, and such that \(\left\langle S_{m}\right\rangle\) is finite and thick. Since \(\gamma(\left\langle S_{m}\right\rangle)=:\Delta^{\prime\prime}\) is contained in \(\Delta\), we have that (Ia) is automatically satisfied in \(\Delta^{\prime\prime}\). And (Ib) can be verified in \(\Delta^{\prime\prime}\) by applying \(\gamma^{-1}\). Since \(\Delta^{\prime\prime}\) contains an ordinary \(8\)-gon, it now follows that it is firm, so it is a weak generalized \(8\)-gon. Also,
\[\gamma:\ \left\langle S_{m}\right\rangle\ \mapsto\ \Delta^{\prime\prime}\]
is an epimorphism. By [17, Theorem 3.1] (see also [16, Theorem 1.6.2]), \(\Delta^{\prime\prime}\) is either
* the point-line dual of the double of a generalized quadrangle, or
* the point-line dual of a degenerate octagon O which consists of opposite points \(a\) and \(b\), joined by \(r\geq 2\) paths of length \(8\), or
* the point-line dual of the quadruple of a digon (in which case the minimal distance between thick elements in \(\Delta^{\prime\prime}\) is \(4\)).
The point-line dual of O has at most two thick lines, so we can take \(m\) large enough so that \(\gamma(\left\langle S_{m}\right\rangle)\) contains more than two thick lines. As \(s>1\) by assumption, we can also take \(m\) large enough such that \(\gamma(\left\langle S_{m}\right\rangle)\) contains thick elements at distance \(2\).
By the proof of Theorem 4.1, the statement now follows (as \(\Delta^{\prime\prime}\) is the point-line dual of the double of a generalized quadrangle). \(\blacksquare\)
**Corollary 5.4**.: _Let \(\Gamma\) be a thick locally finitely generated generalized \(8\)-gon, and let \(\gamma:\Gamma\mapsto\Delta\) be an epimorphism, with \(\Delta\) a thin \(8\)-gon of order \((s,1)\). Then \(s=1\) and the conclusions of Theorem 4.1 hold. \(\blacksquare\)_
**EXAMPLES.** The property of being locally finitely chained / generated appears to be rather strong, but there are surprisingly many interesting examples of such polygons with much structure as we have seen in [15]. In this section, we describe examples using finite classical octagons.
Suppose \(\mathbb{F}_{q}\) is a finite field, and let \(\overline{\mathbb{F}_{q}}\) be an algebraic closure of \(\mathbb{F}_{q}\). Let \(q=p^{m}\) for the prime \(p\). Then for each positive integer \(n\), \(\overline{\mathbb{F}_{q}}\) has a unique subfield isomorphic to \(\mathbb{F}_{p^{n}}\) (we denote it also as \(\mathbb{F}_{p^{n}}\)). Then
\[\bigcup_{i\geq 1}\mathbb{F}_{p^{i}}\ =\ \overline{\mathbb{F}_{q}}.\]
In particular, \(\mathbb{F}_{p^{n}}\) is a subfield of \(\mathbb{F}_{p^{m}}\) if and only if \(n|m\), and there is only one subfield of \(\mathbb{F}_{p^{m}}\) which is isomorphic to \(\mathbb{F}_{p^{n}}\). For each field \(\mathbb{F}_{p^{m}}\), the map \(\gamma_{p}:\mathbb{F}_{p^{m}}\mapsto\mathbb{F}_{p^{m}}:x\mapsto x^{p}\) is a field automorphism which is called the _Frobenius automorphism_.
Consider an infinite set of pairs \(\left\{(\mathbb{F}_{2^{2n_{i}+1}},\sigma_{i})\right\}_{i}\), where \(\sigma_{i}\) is an automorphism of \(\mathbb{F}_{2^{2n_{i}+1}}\) such that \(\sigma_{i}^{2}(x)=x^{2}\) for all \(x\) in the field (that is, \(\sigma_{i}^{2}\) is the Frobenius automorphism), and so that if \(i<j\), then \(2n_{i}+1\) divides \(2n_{j}+1\). In that case, \(\mathbb{F}_{2^{2n_{i}+1}}\) is a subfield of \(\mathbb{F}_{2^{2n_{j}+1}}\) and \(\sigma_{j}\) fixes \(\mathbb{F}_{2^{2n_{i}+1}}\), so it induces an automorphism of the latter. Note that \(\sigma_{i}\) is defined as follows: \(\sigma_{i}:\mathbb{F}_{2^{2n_{i}+1}}\mapsto\mathbb{F}_{2^{2n_{i}+1}}:x\mapsto x^{2^{n_{i}+1}}\), and also note that the actions of \(\sigma_{i}\) and \(\sigma_{j}\) coincide on \(\mathbb{F}_{2^{2n_{i}+1}}\) so that the system \(\left\{(\mathbb{F}_{2^{2n_{i}+1}},\sigma_{i})\right\}_{i}\) is "compatible."
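To see that \(\sigma_{i}^{2}\) is indeed the Frobenius automorphism, note that \(x^{2^{2n_{i}+1}}=x\) for every \(x\in\mathbb{F}_{2^{2n_{i}+1}}\), so that

\[\sigma_{i}^{2}(x)=\left(x^{2^{n_{i}+1}}\right)^{2^{n_{i}+1}}=x^{2^{2n_{i}+2}}=\left(x^{2^{2n_{i}+1}}\right)^{2}=x^{2}.\]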
For each pair \((k_{i}:=\mathbb{F}_{2^{2n_{i}+1}},\sigma_{i})\) we construct a Ree-Tits octagon O\((k_{i},\sigma_{i})\) as defined in [16, section 2.5].
We will use the following theorem:
**Theorem 5.5** (Joswig and Van Maldeghem [9]).: _Let \(\mathsf{O}(k,\sigma)\) be a Ree-Tits octagon. If \(k^{\prime}\) is a subfield of \(k\) for which \(k^{\prime\sigma}\subseteq k^{\prime}\), then \(\mathsf{O}(k^{\prime},\sigma_{|k^{\prime}})\) naturally defines a sub Ree-Tits octagon by "restricting coordinates." Vice versa, if \(\mathsf{O}^{\prime}\) is a proper thick suboctagon of \(\mathsf{O}(k,\sigma)\), then \(\mathsf{O}^{\prime}\) is a Ree-Tits octagon which arises in this way (by field reduction)._
By Theorem 5.5 we know that the set \(\left\{(k_{i},\sigma_{i})\right\}_{i}\) actually defines a chain of finite Ree-Tits octagons, and obviously the union of these octagons is again a (non-finite) Ree-Tits octagon \(\mathsf{O}(\widehat{k},\widehat{\sigma})\), where \(\widehat{k}\) is the union of the fields \(k_{i}\), and \(\widehat{\sigma}\) is the automorphism of \(\widehat{k}\) defined by the local actions of the \(\sigma_{i}\) induced on the subfields \(k_{i}\). (Note that if \(a\in\widehat{k}\), then \(a\) is contained in some subfield \(k_{m}\), and \(\widehat{\sigma}(a)=\sigma_{m}(a)\), so that trivially \(\widehat{\sigma}^{2}\) is the Frobenius automorphism of \(\widehat{k}\).) Each of \(\mathsf{O}(\widehat{k},\widehat{\sigma})\), \(\widehat{k}\) and \(\widehat{\sigma}\) can be seen as direct limits over the system \(\left\{2n_{i}+1\right\}_{i}\).
It is very easy to find systems \(\left\{2n_{i}+1\right\}_{i}\) as above. We describe a couple of them, but the possibilities are obviously numerous.
* Let \(N\) be any odd positive integer different from \(1\), and let \(\left\{m_{i}\right\}_{i}\) be any strictly ascending chain of strictly positive integers; then define \(2n_{i}+1\) as \(N^{m_{i}}\).
* Enumerate the odd primes as \(p_{1},p_{2},p_{3},\ldots\). Then define \(2n_{1}+1=p_{1}\), \(2n_{2}+1=p_{1}p_{2}\), \(2n_{3}+1=p_{1}p_{2}p_{3}\), etc.
* Variations of the above.
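For instance, taking \(N=3\) and \(m_{i}=i\) in the first construction gives \(2n_{i}+1=3^{i}\), so that \(3\mid 9\mid 27\mid\cdots\) and the corresponding fields form the tower \(\mathbb{F}_{2^{3}}\subset\mathbb{F}_{2^{9}}\subset\mathbb{F}_{2^{27}}\subset\cdots\).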
Note that it is very easy to construct nonisomorphic fields \(\widehat{k}\) (and hence nonisomorphic octagons) using these constructions. For instance, let \(N\) be any prime number; then the field \(\widehat{k}_{1}\) constructed in the first example contains subfields isomorphic to \(\mathbb{F}_{2^{N^{m}}}\) with \(m>1\), while the field \(\widehat{k}_{2}\) constructed in the second example does not.
All the octagons \(\mathsf{O}(\widehat{k},\widehat{\sigma})\) are locally finitely chained, and as they are countably infinite, they are also locally finitely generated.
**Remark 5.6**.: Note that the only known thick finite generalized octagons are Ree-Tits octagons. If there were no others (as has been conjectured by some), then the examples constructed above essentially describe all (infinite) locally finitely chained generalized octagons. For, let \(\mathsf{O}\) be a locally finitely chained thick generalized octagon. Then there is a chain
\[S_{0}\subseteq S_{1}\subseteq\ldots\subseteq S_{i}\subseteq\ldots\]
indexed over the positive integers, such that each \(S_{j}\) generates a finite (possibly thin) sub \(8\)-gon, and such that
\[\bigcup_{i\geq 0}\left\langle S_{i}\right\rangle\;=\;\mathsf{O}.\]
It is obvious that from some index \(m\) on, \(\left\langle S_{n}\right\rangle\) will be thick if \(n\geq m\). So we can as well use only thick sub octagons (with the same notation as above). Since all of these are finite, we conjecturally end up with a chain of Ree-Tits octagons \(\left\langle S_{i}\right\rangle=\mathsf{O}(k_{i},\sigma_{i})\), where \(u<v\) (positive integers) implies that \(k_{u}\) is a subfield of \(k_{v}\) and \(\sigma_{v}\) induces \(\sigma_{u}\) in \(k_{u}\), and where \(k_{i}\cong\mathbb{F}_{2^{2n_{i}+1}}\) for some positive integer \(n_{i}\). It follows that \(\mathsf{O}\) is of the type described above.
## 6. Counter examples in the infinite case
In this section we show that Theorem 4.1 does not hold for infinite generalized octagons, by constructing epimorphisms from thick infinite generalized \(n\)-gons to generalized \(n\)-gons with parameters \((s^{\prime},1)\), \(s^{\prime}>1\). Here, \(n\geq 3\).
We first need some more terminology. Suppose that \(\Upsilon\) is a point-line geometry; we measure distances using the incidence graph \(\mathscr{I}\) of \(\Upsilon\). Call a path \(x=x_{0},x_{1},\ldots,x_{n}=y\) of length \(n\) between vertices \(x\) and \(y\) _nonstammering_ if \(x_{i-1}\neq x_{i+1}\) for all \(i\in\{1,2,\ldots,n-1\}\). A _circuit_ is a finite nonstammering closed path. The _girth_ of \(\Upsilon\) is the length of a minimal circuit (length \(0\) is not allowed in this definition). It is \(\infty\) by definition if no nontrivial circuits exist. If the girth \(n\) of \(\Upsilon\) is finite, it is easy to see that it must be even, and we call \(n/2\) the _gonality_ of \(\Upsilon\).
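For example, the incidence graph of an ordinary \(n\)-gon (\(n\) points and \(n\) lines forming a single circuit) is a cycle of length \(2n\), so its girth is \(2n\) and its gonality is \(n\).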
**Proposition 6.1**.: _Let \(\Gamma\) be any generalized \(n\)-gon of order \((s^{\prime},1)\) with \(s^{\prime}>1\) finite or countable, and \(n\geq 3\). Then there exist epimorphisms \(\varepsilon:\ \mathcal{A}\ \mapsto\ \Gamma\), with \(\mathcal{A}\) a thick generalized \(n\)-gon._
_Proof:_ Suppose that \(A=A_{0}\) is a connected point-line geometry of finite gonality \(m\geq n\) which is not a weak generalized \(n\)-gon, and where \(n\geq 3\), and let \(\epsilon:A\mapsto\Gamma\) be an epimorphism (of point-line incidence geometries), where \(\Gamma\) is a generalized \(n\)-gon of order \((s^{\prime},1)\), where \(s^{\prime}>1\) is arbitrary but finite or countable. We also suppose that \(A\) has a finite or countable number of points and/or lines. (The fact that \(A\) is connected is not important: starting from a disconnected geometry, one can make it connected by adding appropriate paths between the connected components. )
We will freely construct a generalized \(n\)-gon \(\overline{A}\) over \(A\) and an epimorphism \(\overline{\epsilon}:\overline{A}\mapsto\Gamma\).
Suppose \(A_{i}\) and the epimorphism \(\epsilon_{i}:A_{i}\mapsto\Gamma\) were constructed in a previous step, and define \(A_{i+1}\) as follows: if \(x\) and \(y\) are elements at distance \(n-1\) in \(A_{i}\), then we add a completely new path \(\gamma(x,y)\) of length \(n+1\) between \(x\) and \(y\). Now define \(\epsilon_{i+1}\) as follows:
* if the distance between \(\epsilon_{i}(x)\) and \(\epsilon_{i}(y)\) is smaller than \(n-1\) in \(\Gamma\), then map \(\gamma(x,y)\) surjectively on the unique shortest path between \(\epsilon_{i}(x)\) and \(\epsilon_{i}(y)\);
* if the distance between \(\epsilon_{i}(x)\) and \(\epsilon_{i}(y)\) equals \(n-1\) in \(\Gamma\), then map \(\gamma(x,y)\) surjectively on an arbitrary path of length \(n+1\) between \(\epsilon_{i}(x)\) and \(\epsilon_{i}(y)\).
Now let \(\overline{A}=\cup_{i\geq 1}A_{i}\) and \(\overline{\epsilon}=\cup_{i\geq 1}\epsilon_{i}\) (where we identify a function with its graph). Then
\[\overline{\epsilon}:\ \overline{A}\ \mapsto\ \Gamma\]
obviously is an epimorphism.
By [16, 1.3.122], it follows that \(\overline{A}\) is a weak generalized \(n\)-gon; if \(A\) already contains an ordinary \((n+1)\)-gon, then \(\overline{A}\) is a (thick) generalized \(n\)-gon. In that case, setting \(\left(\mathcal{A},\varepsilon\right)=\left(\overline{A},\overline{\epsilon}\right)\) yields the desired examples.
Note that the construction above also works for thick targets \(\Gamma\).
Note also that it is easy to construct legitimate begin-configurations \((A,\epsilon)\) (we will leave this exercise to the interested reader).
**Remark 6.2**.: In Gramlich and Van Maldeghem [17], it is proved that starting from any (thick) generalized \(n\)-gon \(\Gamma\) (\(n\geq 2\)), one can construct a free generalized \(n\)-gon \(\overline{A}\) and an epimorphism
\[\overline{\epsilon}:\ \overline{A}\ \mapsto\ \Gamma.\]
The begin configuration in [17] is different, and the target is thick, but of course the essence is the same. |
2303.02825 | Proper absolute extensors | We describe the proper absolute (neighborhood) extensors for the class of at
most $n$-dimensional spaces, notation $\rm{A(N)E}_p(n)$. For example, the
unique locally compact $n$-dimensional separable metric space
$X\in\rm{AE}_p(n)$ satisfying the $\rm{DD^nP}$-property is the $n$-dimensional
Menger compactum without a point. Non-metrizable $\rm{A(N)E}_p(n)$-spaces are
also described. | Vesko Valov | 2023-03-06T01:41:55Z | http://arxiv.org/abs/2303.02825v1 | # Proper absolute extensors
###### Abstract.
We describe the proper absolute (neighborhood) extensors for the class of at most \(n\)-dimensional spaces, notation \(\mathrm{A(N)E_{p}(n)}\). For example, the unique locally compact \(n\)-dimensional separable metric space \(X\in\mathrm{AE_{p}(n)}\) satisfying the \(\mathrm{DD^{n}P}\)-property is the \(n\)-dimensional Menger compactum without a point. Non-metrizable \(\mathrm{A(N)E_{p}(n)}\)-spaces are also described.
Key words and phrases:absolute proper extensor for \(n\)-dimensional spaces, \(\mathrm{DD^{n}P}\)-property, \(n\)-dimensional Menger compactum 2020 Mathematics Subject Classification: Primary 54C20; Secondary 54F45 The second author was partially supported by NSERC Grant 261914-19.
## 1. Introduction and preliminary results
In this note we describe the proper absolute extensors for finite-dimensional spaces, see Theorem 2.4 and Theorem 3.2. Recall that a map \(f:X\to Y\) is proper if \(f^{-1}(K)\) is compact for every compact \(K\subset Y\). Note that if \(X,Y\) are locally compact, then \(f\) is proper iff it is closed and all fibres \(f^{-1}(y)\), \(y\in Y\), are compact. Proper and closed extensions of maps were considered by different authors, see Michael [13], Nowinski [15]. Our results are closer to Chigogidze's results from [4], where proper absolute (neighborhood) extensors were introduced and studied.
We say that a locally compact space \(X\) is a _proper absolute neighborhood extensor for the class of at most \(n\)-dimensional spaces_ (notation \(X\in\mathrm{ANE_{p}(n)}\)) if every proper map \(f:A\to X\), where \(A\) is a closed subset of a locally compact Lindelof-space \(Y\) with \(\dim Y\leq n\), admits a proper extension \(\widetilde{f}\) over a closed neighborhood of \(A\) in \(Y\). When \(f\) admits a proper extension over \(Y\), we say \(X\) is a _proper absolute extensor for the class of at most \(n\)-dimensional spaces_ (notation \(X\in\mathrm{AE_{p}(n)}\)). Since every space admitting a proper map into a compact space is compact, it follows from the definition that there is no compact \(\mathrm{AE_{p}(n)}\)-space. In particular, the \(n\)-sphere \(\mathbb{S}^{n}\), which is an absolute extensor for the \(n\)-dimensional spaces, is not an \(\mathrm{AE_{p}(n)}\).
If, in the above definition, \(Y\) is metric and we drop the requirement for \(f\) and \(\widetilde{f}\) to be proper maps, we obtain the definition of absolute (neighborhood) extensors (notation A(N)E(n)) for the class of at most \(n\)-dimensional spaces. It is well known [3] that a metric space \(X\) is an ANE(n) iff \(X\) is \(\mathrm{LC}^{\mathrm{n}-1}\). Moreover, if in addition \(X\) is \(\mathrm{C}^{\mathrm{n}-1}\), then \(X\in\mathrm{AE(n)}\). Recall that a space \(X\) is \(\mathrm{LC}^{\mathrm{n}}\) if for every \(x\in X\) and its neighborhood \(U\) in \(X\) there is another neighborhood \(V\) of \(x\) such that \(V\overset{m}{\hookrightarrow}U\) for all \(m\leq n\) (here \(V\overset{m}{\hookrightarrow}U\) means that \(V\subset U\) and every map from the \(m\)-dimensional sphere \(\mathbb{S}^{m}\) into \(V\) can be extended to a map \(\mathbb{B}^{m+1}\to U\) over the \((m+1)\)-dimensional cube \(\mathbb{B}^{m+1}\)). We also say that a set \(A\subset X\) is \(k-\mathrm{LCC}\) in \(X\) if for every point \(x\in A\) and its neighborhood \(U\) in \(X\) there exists another neighborhood \(V\) of \(x\) with \(V\setminus A\overset{k}{\hookrightarrow}U\setminus A\). If \(A\) is \(k-\mathrm{LCC}\) in \(X\) for all \(k\leq n\), then \(A\) is said to be \(\mathrm{LCC}^{\mathrm{n}}\) in \(X\).
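For instance, every convex subset of \(\mathbb{R}^{m}\) is \(\mathrm{C}^{\mathrm{n}}\) and \(\mathrm{LC}^{\mathrm{n}}\) for every \(n\), while every open subset of \(\mathbb{R}^{m}\) is \(\mathrm{LC}^{\mathrm{n}}\) for every \(n\) but, as \(\mathbb{R}^{m}\backslash\{0\}\) shows, need not be \(\mathrm{C}^{\mathrm{m}-1}\).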
## 2. Second countable \(\mathrm{AE_{p}(n)}\)-spaces
Everywhere in this section by a space, if not explicitly said otherwise, we mean a locally compact separable metric space. By \(C(Z,X)\) we denote the set of all continuous maps from \(Z\) to \(X\) equipped with the compact-open topology. A closed subset \(A\subset X\) is said to be a _\(Z_{n}\)-set in \(X\)_[14] if the set \(C(\mathbb{B}^{n},X\backslash A)\) is dense in \(C(\mathbb{B}^{n},X)\). The following description of \(Z_{n}\)-sets in metric spaces is well known, but we couldn't find a reference.
**Lemma 2.1**.: _Let \((X,d)\) be a metric \(\mathrm{LC}^{\mathrm{n}-1}\)-space and \(A\) be a closed nowhere dense set in \(X\). Then \(A\) is a \(Z_{n}\)-set in \(X\) iff \(A\) is \(\mathrm{LCC}^{\mathrm{n}-1}\)._
Proof.: The sufficiency follows from the properties of metric \(\mathrm{LC}^{\mathrm{n}-1}\)-spaces and the definition of \(Z_{n}\)-sets. Suppose \(A\) is \(\mathrm{LCC}^{\mathrm{n}-1}\), \(f:\mathbb{B}^{n}\to X\) is a given map and \(\eta>0\). We consider the following property for every \(x\in X\): if \(U\) is a neighborhood of \(x\) in \(X\), then there is another neighborhood \(V\subset U\) such that \(V\setminus A\overset{n-1}{\hookrightarrow}U\setminus A\). Because \(A\) is \(\mathrm{LCC}^{\mathrm{n}-1}\) and \(X\) is \(\mathrm{LC}^{\mathrm{n}-1}\), every \(x\in X\) has that property. So, by [9] we can assume that the metric \(d\) satisfies the following condition: To every \(\varepsilon>0\) there corresponds a \(\delta>0\) such that \(B_{\delta}(x)\setminus A\overset{m}{\hookrightarrow}B_{\varepsilon}(x)\setminus A\) for every \(x\in X\) and every \(m\leq n-1\). Here \(B_{\delta}(x)\) denotes the open ball in \(X\) with center \(x\) and a radius \(\delta\). We write \(\delta\overset{m}{\hookrightarrow}\varepsilon\) to denote that \(B_{\delta}(x)\setminus A\overset{m}{\hookrightarrow}B_{\varepsilon}(x)\setminus A\) for every \(x\in X\). We choose a finite sequence \(\{\varepsilon_{m}\}_{m\leq n}\) with \(\varepsilon_{m}\overset{m}{\hookrightarrow}\varepsilon_{m+1}\) for every \(0\leq m\leq n-1\) and \(\varepsilon_{n}=\eta\). Next, define the set-valued maps \(\varphi_{m}:\mathbb{B}^{n}\rightsquigarrow X\) by \(\varphi_{m}(y)=B_{\varepsilon_{m}}(f(y))\setminus A\), \(0\leq m\leq n\). Since \(A\) is \(\mathrm{LCC}^{\mathrm{n}-1}\), \(\varphi_{m}(y)\neq\varnothing\) for all \(y\in\mathbb{B}^{n}\). It is easily
seen that each \(\varphi_{m}\) has the following property: If \(K\) is a compact subset of \(\varphi_{m}(y_{0})\) for some \(y_{0}\in\mathbb{B}^{n}\), then there is a neighborhood \(O(y_{0})\subset\mathbb{B}^{n}\) with \(K\subset\varphi_{m}(y)\) for all \(y\in O(y_{0})\). According to [10], there exists a map \(g:\mathbb{B}^{n}\to X\) with \(g(y)\in\varphi_{n}(y)\) for all \(y\in\mathbb{B}^{n}\). This means that \(g\) maps \(\mathbb{B}^{n}\) into \(X\setminus A\) and \(d(f(y),g(y))<\eta\) for every \(y\in\mathbb{B}^{n}\). Hence, \(A\) is a \(Z_{n}\)-set in \(X\).
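For instance, a singleton \(\{x_{0}\}\subset\mathbb{R}^{n}\) is a \(Z_{n-1}\)-set in \(\mathbb{R}^{n}\): the image of a map \(\mathbb{B}^{n-1}\to\mathbb{R}^{n}\) is a compactum of dimension \(\leq n-1\) and thus has empty interior, so an arbitrarily small translation moves it off \(x_{0}\). It is not a \(Z_{n}\)-set, since any map sufficiently uniformly close to the inclusion of a small ball centered at \(x_{0}\) still covers \(x_{0}\); compare Corollary 2.10 below.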
For every locally compact space \(X\) let \(\omega X=X\cup\{\omega\}\) be the one-point compactification of \(X\).
**Lemma 2.2**.: _If \(X\) is an \(\mathrm{AE}_{\mathrm{p}}(\mathrm{n})\)-space, then \(\{\omega\}\) is a \(Z_{n}\)-set in \(\omega X\)._
Proof.: Since every metric \(\mathrm{AE}_{\mathrm{p}}(\mathrm{n})\)-space is an absolute extensor for compact spaces of dimension \(\leq n\), \(X\) is \(\mathrm{LC}^{\mathrm{n-1}}\) Hence, by Lemma 2.1, we need to show that \(\{\omega\}\) is \(\mathrm{LCC}^{\mathrm{n-1}}\). Suppose there are \(k\leq n-1\) and a neighborhood \(U\) of \(\{\omega\}\) in \(\omega X\) such that for every neighborhood \(V\) of \(\{\omega\}\) there exists a map \(g_{V}:\mathbb{S}^{k}\to V\backslash\{\omega\}\) which does not admit an extension from \(\mathbb{B}^{k+1}\) into \(U\backslash\{\omega\}\). Take a local base \(\{V_{m}\}\) of neighborhoods of \(\{\omega\}\) in \(\omega X\) and corresponding maps \(g_{m}:\mathbb{S}^{k}\to V_{m}\backslash\{\omega\}\) such that \(V_{m}\subset U\) and each \(g_{m}\) cannot be extended to a map \(\widetilde{g}_{m}:\mathbb{B}^{k+1}\to U\backslash\{\omega\}\). Now, for each \(m\) let \(\mathbb{S}^{k}_{m}\) and \(\mathbb{B}^{k+1}_{m}\) be copies of \(\mathbb{S}^{k}\) and \(\mathbb{B}^{k+1}\), respectively. Consider the disjoint unions \(Y=\biguplus_{m=1}^{\infty}\mathbb{B}^{k+1}_{m}\), \(A=\biguplus_{m=1}^{\infty}\mathbb{S}^{k}_{m}\) and their one-point compactification \(\omega Y=Y\cup\{\omega_{Y}\}\). Obviously, \(\omega A=A\cup\{\omega_{Y}\}\) and there is a map \(g:A\to X\) with \(g|\mathbb{S}^{k}_{m}=g_{m}\). Since the map \(g\) is proper, it admits a proper extension \(\widetilde{g}:Y\to X\). Hence, \(\widetilde{g}\) is extended to a map \(h:\omega Y\to\omega X\) such that \(h(\{\omega_{Y}\})=\{\omega\}\). Consequently, \(h^{-1}(U)\) contains almost all \(\mathbb{B}^{k+1}_{m}\). On the other hand, \(h(\mathbb{B}^{k+1}_{m})=\widetilde{g}(\mathbb{B}^{k+1}_{m})\subset X\). Therefore, \(\widetilde{g}|\mathbb{B}^{k+1}_{m}\) is a map into \(U\backslash\{\omega\}\) extending \(g_{m}\) for every \(\mathbb{B}^{k+1}_{m}\) contained in \(h^{-1}(U)\), a contradiction.
**Proposition 2.3**.: _Every space \(X\) is an \(\mathrm{AE}_{\mathrm{p}}(0)\)._
Proof.: Let \(A\subset Y\) be a closed set and \(f:A\to X\) be a proper map, where \(Y\) is a \(0\)-dimensional locally compact and Lindelof space. Then \(f\) can be extended to a map \(f_{1}:\overline{A}\to\omega X\) over the closure of \(A\) in \(\beta Y\) with \(f_{1}(\overline{A}\backslash A)=\{\omega\}\). Next, consider the map \(g:A\cup(\beta Y\backslash Y)\to\omega X\) such that \(g(y)=f_{1}(y)\) for \(y\in\overline{A}\) and \(g(y)=\{\omega\}\) for \(y\in\beta Y\backslash Y\). Since \(\omega X\in\mathrm{AE}(0)\) (as a complete metric space), \(g\) admits an extension \(\widetilde{g}:\beta Y\to\omega X\). The set \(\widetilde{g}(A)=f(A)\) is closed in \(X\), so \(A_{1}=\widetilde{g}^{-1}(f(A))\) is closed and \(G_{\delta}\) in \(Y\) such that \(\widetilde{g}|A_{1}\) is proper. Consider the function space \(C(\beta Y,\omega X)\) with the uniform convergence topology and let \(B=A_{1}\cup(\beta Y\backslash Y)\). The set \(C_{B}=\{h\in C(\beta Y,\omega X):h|B=\widetilde{g}|B\}\) is a complete metric space. Choose a sequence \(\{K_{i}\}\) of compact sets
in \(Y\) with \(\bigcup_{i\geq 1}K_{i}=Y\backslash A_{1}\) (this is possible because \(Y\) is a locally compact Lindelof-space and \(A_{1}\) is a closed \(G_{\delta}\)-set in \(Y\)). Let \(C_{i}\) be the set of all maps \(h\in C_{B}\) such that \(h(K_{i})\subset X\). Since \(\{\omega\}\) is a nowhere dense set in \(\omega X\) (it is actually a \(Z_{0}\)-set in \(\omega X\)) and \(\omega X\in\mathrm{AE}(0)\), we can show that each \(C_{i}\) is an open and dense subset of \(C_{B}\). Hence, \(\bigcap_{i\geq 1}C_{i}\neq\varnothing\). Then \(h(Y)\subset X\) and \(h(\beta Y\backslash Y)=\{\omega\}\) for every \(h\in\bigcap_{i\geq 1}C_{i}\). Hence, \(h|Y\) is a proper map into \(X\) extending \(f\).
**Theorem 2.4**.: _The following conditions are equivalent for any space \(X\) and \(n\geq 1\):_
1. \(X\in\mathrm{AE}_{\mathrm{p}}(\mathrm{n})\)_;_
2. _The one-point compactification_ \(\omega X\) _is an_ \(\mathrm{AE}(\mathrm{n})\) _and_ \(\{\omega\}\) _is a_ \(Z_{n}\)_-set in_ \(\omega X\)_;_
3. _There exists a metrizable compactification_ \(\widetilde{X}\) _of_ \(X\) _such that both_ \(\widetilde{X}\) _and the remainder_ \(\widetilde{X}\backslash X\) _are_ \(\mathrm{AE}(\mathrm{n})\)_spaces, and_ \(\widetilde{X}\backslash X\) _is an_ \(Z_{n}\)_-set in_ \(\widetilde{X}\)_._
Proof.: Suppose \(X\in\mathrm{AE}_{\mathrm{p}}(\mathrm{n})\) and embed \(\omega X\) in the Hilbert cube \(Q\). According to Dranishnikov [7] there exists a surjective, open \(n\)-invertible map \(d_{n}:\mu^{n}\to Q\) such that \(d_{n}^{-1}(z)\) is homeomorphic to \(\mu^{n}\) for every \(z\in Q\), where \(\mu^{n}\) is the universal \(n\)-dimensional Menger compactum. Recall that the \(n\)-invertibility of \(d_{n}\) means that for any paracompact space \(Z\) of dimension \(\dim Z\leq n\) and a map \(g:Z\to Q\) there is a map \(\widetilde{g}:Z\to\mu^{n}\) such that \(d_{n}\circ\widetilde{g}=g\). Then \(d_{n}^{-1}(\{\omega\})\) is nowhere dense in \(\mu^{n}\) and \(\mu^{n}\) is a compactification of \(\mu^{n}\backslash d_{n}^{-1}(\{\omega\})\). Now, consider the restriction \(d_{n}^{\prime}=d_{n}|d_{n}^{-1}(X)\). Obviously, \(d_{n}^{\prime}\) is a proper map, so it admits a proper extension \(h_{n}:\mu^{n}\backslash d_{n}^{-1}(\{\omega\})\to X\). The properness of \(h_{n}\) implies that \(h_{n}\) can be extended to a continuous map \(\widetilde{h}_{n}:\mu^{n}\to\omega X\) such that \(\widetilde{h}_{n}(d_{n}^{-1}(\{\omega\}))=\{\omega\}\). Then \(\widetilde{h}_{n}|(d_{n}^{-1}(\omega X))=d_{n}|(d_{n}^{-1}(\omega X))\). Hence, \(\widetilde{h}_{n}\) is an \(n\)-invertible map because so is \(d_{n}\). This fact in combination with \(\mu^{n}\in\mathrm{AE}(\mathrm{n})\) yields that \(\omega X\in\mathrm{AE}(\mathrm{n})\). Finally, by Lemma 2.2, \(\{\omega\}\) is a \(Z_{n}\)-set in \(\omega X\). That completes the implication \((i)\Rightarrow(ii)\). The implication \((ii)\Rightarrow(iii)\) is trivial.
To prove the implication \((iii)\Rightarrow(i)\) we follow the proof of Proposition 2.3. Let \(A\subset Y\) be a closed set and \(f:A\to X\) be a proper map, where \(Y\) is at most \(n\)-dimensional locally compact and Lindelof space. Following the notations from the proof of Proposition 2.3, we first extend \(f\) to a map \(f_{1}:\overline{A}\to\widetilde{X}\) with \(f_{1}(\overline{A}\backslash A)\subset\widetilde{X}\backslash X\). Then, using that \(\widetilde{X}\backslash X\in\mathrm{AE}(\mathrm{n})\), we extend \(f_{1}\) to a map \(g:\overline{A}\cup(\beta Y\backslash Y)\to\widetilde{X}\) such that \(g(\beta Y\backslash Y)\subset\widetilde{X}\backslash X\). Next, since \(\widetilde{X}\) is an \(\mathrm{AE}(\mathrm{n})\), we find a map \(\widetilde{g}:\beta Y\to\widetilde{X}\) extending \(g\). Then \(\widetilde{g}|A_{1}:A_{1}\to X\) is a proper
extension of \(f\) with \(A_{1}\subset Y\) being a closed \(G_{\delta}\)-subset of \(Y\) containing \(A\). Let \(B=A_{1}\cup(\beta Y\backslash Y)\) and consider a sequence \(\{K_{i}\}\) of compact sets in \(Y\) with \(\bigcup_{i\geq 1}K_{i}=Y\backslash A_{1}\) and the corresponding sets \(C_{B}=\{h\in C(\beta Y,\widetilde{X}):h|B=\widetilde{g}|B\}\) and \(C_{i}\). Now, since \(\widetilde{X}\backslash X\) is \(Z_{n}\)-set in \(\widetilde{X}\), all \(C_{i}\) are open and dense in \(C_{B}\). Therefore, \(\bigcap_{i\geq 1}C_{i}\neq\varnothing\) and \(h|Y\) is a proper map into \(X\) extending \(f\) for every \(h\in\bigcap_{i\geq 1}C_{i}\).
We say that a space \(X\) satisfies the _disjoint \(n\)-disks property_ (briefly, the DD\({}^{n}\)P-property) if any two maps \(f,g:\mathbb{B}^{n}\to X\) can be approximated by maps \(f^{\prime},g^{\prime}:\mathbb{B}^{n}\to X\) with \(f^{\prime}(\mathbb{B}^{n})\cap g^{\prime}(\mathbb{B}^{n})=\varnothing\). Bestvina [2] characterized \(\mu^{n}\) as the only \(n\)-dimensional metric AE(n)-compactum satisfying the DD\({}^{n}\)P-property. Since \(\omega X\) satisfies the DD\({}^{n}\)P provided \(X\) satisfies the DD\({}^{n}\)P and \(\{\omega\}\) is a \(Z_{n}\)-set in \(\omega X\), Bestvina's result implies the following one:
**Corollary 2.5**.: _Let \(X\in\)_ AE\({}_{\rm p}\)(n) _with_ dim\(\,X=n\)_. Then_ \(X\in\) DD\({}^{n}\)P _iff_ \(\omega X\) _is homeomorphic to_ \(\mu^{n}\)_._
Chigogidze [5] introduced the \(n\)-shape functor \((n-{\rm Sh})\) and proved that two \(Z_{n}\)-sets \(X\) and \(Y\) in \(\mu^{n}\), \(n\geq 1\), have the same \((n-1)\)-shape if and only if \(\mu^{n}\backslash X\) is homeomorphic to \(\mu^{n}\backslash Y\). Surprisingly, Theorem 2.4 implies a particular case of Chigogidze's complement theorem.
**Corollary 2.6**.: _Suppose \(X\) and \(Y\) are two \(Z_{n}\)-sets in \(\mu^{n}\) such that \(X,Y\in\)_ AE(n)_. Then \(\mu^{n}\backslash X\) is homeomorphic to \(\mu^{n}\backslash Y\)._
Proof.: Indeed, by Theorem 2.4 both \(X^{\prime}=\mu^{n}\backslash X\) and \(Y^{\prime}=\mu^{n}\backslash Y\) are AE\({}_{\rm p}\)(n). Moreover, \(X^{\prime}\) and \(Y^{\prime}\) satisfy the DD\({}^{n}\)P because \(X\) and \(Y\) are \(Z_{n}\)-sets in \(\mu^{n}\). Hence, by Corollary 2.5, both \(\omega X^{\prime}\) and \(\omega Y^{\prime}\) are homeomorphic to \(\mu^{n}\). Finally, since \(\mu^{n}\) is homogeneous [2], \(X^{\prime}\) is homeomorphic to \(Y^{\prime}\).
**Proposition 2.7**.: _A space \(X\) is an_ AE\({}_{\rm p}\)(n) _if and only if \(X\) is a proper \(n\)-invertible image of \(\mu^{n}\backslash F\) for some \(Z_{n}\)-set \(F\subset\mu^{n}\) with \(F\in\)_ AE(n)_._
Proof.: Let \(X\in\) AE\({}_{\rm p}\)(n) and embed \(\omega X\) in \(Q\). As in Theorem 2.4, considering Dranishnikov's resolution \(d_{n}:\mu^{n}\to Q\) we obtain a proper map \(h_{n}:\mu^{n}\backslash d_{n}^{-1}(\{\omega\})\to X\) which extends the map \(d_{n}|d_{n}^{-1}(X)\). Since \(d_{n}\) is \(n\)-invertible, so is \(h_{n}\). On the other hand, by [1], we can assume that \(d_{n}^{-1}(K)\) is a \(Z_{n}\)-set in \(\mu^{n}\) for every \(Z\)-set \(K\subset Q\). Hence, \(d_{n}^{-1}(\{\omega\})\subset\mu^{n}\) is a \(Z_{n}\)-set (recall that a \(Z\)-set is a set which is a \(Z_{n}\)-set for all \(n\) and that every \(z\in Q\) is a \(Z\)-set in \(Q\)). On the other hand, \(d_{n}^{-1}(\{\omega\})\) is homeomorphic to \(\mu^{n}\), so \(d_{n}^{-1}(\{\omega\})\in\) AE(n).
Now, suppose there is a proper \(n\)-invertible map \(g:\mu^{n}\backslash F\to X\) for some \(Z_{n}\)-set \(F\subset\mu^{n}\). Since \(g\) is \(n\)-invertible, every proper map \(f:A\to\)
\(X\), where \(A\) is a closed subset of an at most \(n\)-dimensional locally compact and Lindelof space \(Y\), can be lifted to a proper map \(f^{\prime}:A\to\mu^{n}\backslash F\). By Theorem 2.4, \(\mu^{n}\backslash F\in\operatorname{AE}_{\mathrm{p}}(\mathrm{n})\). So, \(f^{\prime}\) admits a proper extension \(h:Y\to\mu^{n}\backslash F\). Finally, \(g\circ h:Y\to X\) is a proper extension of \(f\). Therefore, \(X\in\operatorname{AE}_{\mathrm{p}}(\mathrm{n})\).
**Corollary 2.8**.: _A space \(X\) is an \(\operatorname{AE}_{\mathrm{p}}(\mathrm{n})\) if and only if \(X\) is a proper \(n\)-invertible image of \(\mu^{n}\backslash\{pt\}\)._
Proof.: By Theorem 2.4, \(\mu^{n}\backslash F\) is an \(\operatorname{AE}_{\mathrm{p}}(\mathrm{n})\) for every \(F\in\operatorname{AE}(\mathrm{n})\) which is a \(Z_{n}\)-set in \(\mu^{n}\). On the other hand, \(\mu^{n}\backslash F\) satisfies the \(\operatorname{DD}^{\mathrm{n}}\)P as a complement of a \(Z_{n}\)-set in \(\mu^{n}\). Hence, Proposition 2.7 and Corollary 2.5 complete the proof.
Concerning \(\operatorname{ANE}_{\mathrm{p}}(\mathrm{n})\), arguments similar to the proof of Proposition 2.3 provide the next lemma.
**Lemma 2.9**.: _If a space \(X\) admits a metric \(\operatorname{ANE}(\mathrm{n})\)-compactification \(\overline{X}\) such that \(\overline{X}\backslash X\) is an \(\operatorname{AE}(\mathrm{n})\), then \(X\in\operatorname{ANE}_{\mathrm{p}}(\mathrm{n})\)._
**Corollary 2.10**.: \(\mathbb{R}^{n}\) _is an \(\operatorname{ANE}_{\mathrm{p}}(\mathrm{n})\) and an \(\operatorname{AE}_{\mathrm{p}}(\mathrm{n}-1)\), but not an \(\operatorname{AE}_{\mathrm{p}}(\mathrm{n})\)-space._
Proof.: It follows from Theorem 2.4 that \(\mathbb{R}^{n}\) is an \(\operatorname{AE}_{\mathrm{p}}(\mathrm{n}-1)\), but not an \(\operatorname{AE}_{\mathrm{p}}(\mathrm{n})\)-space because \(\omega\mathbb{R}^{n}=\mathbb{S}^{n}\) and any point of \(\mathbb{S}^{n}\) is a \(Z_{n-1}\)-point in \(\mathbb{S}^{n}\) but not a \(Z_{n}\)-point. On the other hand, \(\mathbb{R}^{n}\in\operatorname{ANE}_{\mathrm{p}}(\mathrm{n})\) according to Lemma 2.9.
## 3. Non-metrizable \(\operatorname{AE}_{\mathrm{p}}(\mathrm{n})\)-spaces
In this section all spaces are locally compact and Lindelof. A map \(f:X\to Y\) is called \(n\)_-soft_[16] if for every \(n\)-dimensional paracompact space \(Z\), any closed set \(A\subset Z\) and any two maps \(h:A\to X\) and \(g:Z\to Y\) with \(g|A=f\circ h\) there is a continuous extension \(\widetilde{h}:Z\to X\) of \(h\) such that \(g=f\circ\widetilde{h}\). We say that \(f:X\to Y\) is a _map with a Polish kernel_ if there is a Polish (i.e., completely metrizable and separable) space \(P\) such that \(X\) is \(C\)-embedded in \(Y\times P\) and \(f=\pi_{Y}|X\), where \(\pi_{Y}:Y\times P\to Y\) is the projection.
The next lemma follows from the corresponding definitions.
**Lemma 3.1**.: _Let \(f:X\to Y\) be a proper \(n\)-soft map. Then \(X\in\operatorname{AE}_{\mathrm{p}}(\mathrm{n})\) if and only if \(Y\in\operatorname{AE}_{\mathrm{p}}(\mathrm{n})\)._
An inverse system \(S=\{X_{\alpha},p_{\alpha}^{\beta},A\}\) is said to be \(\sigma\)_-complete_ if all \(X_{\alpha}\) are second countable spaces and every increasing sequence \(\{\alpha_{n}\}\subset A\) has a supremum \(\alpha\) in \(A\) such that \(X_{\alpha}\) is the limit space of the inverse
sequence \(\{X_{\alpha_{n}},p_{\alpha_{n}}^{\alpha_{n+1}},n\geq 1\}\). If \(S\) is well-ordered and \(X_{\alpha}\) is the limit of the inverse system \(\{X_{\beta},p_{\beta}^{\beta+1},\beta<\alpha\}\) for every limit ordinal \(\alpha\in A\), then \(S\) is called a _continuous inverse system_.
Now, we can describe the non-metrizable \(\operatorname{AE_{p}(n)}\)-spaces.
**Theorem 3.2**.: _For every \(n\geq 1\) the following conditions are equivalent:_
1. \(X\) _is an_ \(\operatorname{AE_{p}(n)}\)_-space of weight_ \(\tau\)_;_
2. \(\omega X\in\operatorname{AE(n)}\)_;_
3. \(X\) _is the limit space of a continuous inverse system_ \(S=\{X_{\alpha},p_{\alpha}^{\beta},\tau\}\) _such that all_ \(X_{\alpha}\) _are_ \(\operatorname{AE_{p}(n)}\)_-spaces,_ \(X_{1}\) _is a locally compact separable metric space and the projections_ \(p_{\alpha}^{\alpha+1}\) _are perfect_ \(n\)_-soft maps with metrizable kernels;_
4. \(X\) _is the limit space of a_ \(\sigma\)_-complete inverse system_ \(S=\{X_{\alpha},p_{\alpha}^{\beta}\}\) _consisting of_ \(\operatorname{AE_{p}(n)}\)_-spaces_ \(X_{\alpha}\) _and perfect_ \(n\)_-soft projections_ \(p_{\alpha}:X\to X_{\alpha}\)_._
Proof.: Let \(X\) be an \(\operatorname{AE_{p}(n)}\)-space of weight \(\tau\) and embed \(\omega X\) in the Tychonoff cube \(\mathbb{I}^{\tau}\). According to [8], there exists a compact \(AE(n-1)\)-space \(D_{n}^{\tau}\) of dimension \(n\) and weight \(\tau\) and an \(n\)-invertible \((n-1)\)-soft map \(f_{n}^{\tau}:D_{n}^{\tau}\to\mathbb{I}^{\tau}\). Since \(\{\omega\}\) is a \(G_{\delta}\)-set in \(\omega X\), there is a closed \(G_{\delta}\)-set \(F\subset\mathbb{I}^{\tau}\) with \(F\cap\omega X=\{\omega\}\). Deleting the interior of \(F\), if necessary, we can suppose that \(F\) is nowhere dense in \(\mathbb{I}^{\tau}\). Then \((f_{n}^{\tau})^{-1}(F)\) is a closed nowhere dense \(G_{\delta}\)-subset of \(D_{n}^{\tau}\) because \(f_{n}^{\tau}\) is open (as a \(0\)-soft map between \(\operatorname{AE}(0)\)-spaces, see [5]). So, \(Y=D_{n}^{\tau}\backslash(f_{n}^{\tau})^{-1}(F)\) is a dense locally compact Lindelof subset of \(D_{n}^{\tau}\) containing \((f_{n}^{\tau})^{-1}(X)\) as a closed subset. Since \(X\in\operatorname{AE_{p}(n)}\), there is a proper map \(g:Y\to X\) extending the restriction \(f_{n}^{\tau}|(f_{n}^{\tau})^{-1}(X)\). Finally, extend \(g\) to a map \(\widetilde{g}:D_{n}^{\tau}\to\omega X\). Now, consider the set-valued map \(r:\mathbb{I}^{\tau}\rightsquigarrow\omega X\), \(r(x)=\widetilde{g}((f_{n}^{\tau})^{-1}(x))\). Obviously, \(r(x)=\{x\}\) for every \(x\in\omega X\). Since \(f_{n}^{\tau}\) is \((n-1)\)-soft, \(r\) is a projectively \((n-1)\)-cosoft retraction in the sense of Dranishnikov [8]. Hence, by [8, Theorem 4.2], \(\omega X\) is an \(\operatorname{AE(n)}\)-space. So, \((i)\Rightarrow(ii)\).
If \(\omega X\in\operatorname{AE(n)}\), then \(\omega X\) is the limit of a continuous inverse system \(\widetilde{S}=\{\widetilde{X}_{\alpha},\widetilde{p}_{\alpha}^{\beta},\tau\}\) such that \(\widetilde{X}_{1}\) is a point and all projections \(\widetilde{p}_{\alpha}^{\alpha+1}\) are \(n\)-soft maps with metrizable kernels, see [8, Theorem 4.2]. Because \(\{\omega\}\) is a \(G_{\delta}\)-set in \(\omega X\), there is \(\alpha_{0}<\tau\) such that \(\widetilde{X}_{\alpha_{0}}\) is metrizable and \(\widetilde{p}_{\alpha_{0}}^{-1}(\widetilde{p}_{\alpha_{0}}(\{\omega\}))=\{\omega\}\). Consequently, \(\widetilde{p}_{\alpha}^{-1}(X_{\alpha})=X\) for every \(\alpha\geq\alpha_{0}\), where \(X_{\alpha}=\widetilde{X}_{\alpha}\backslash\widetilde{p}_{\alpha}(\{\omega\})\). Obviously, all restrictions \(p_{\alpha}=\widetilde{p}_{\alpha}|X\) and \(p_{\alpha}^{\beta}=\widetilde{p}_{\alpha}^{\beta}|X_{\beta}\) are perfect \(n\)-soft maps and \(X\) is the limit of the inverse system \(S=\{X_{\alpha},p_{\alpha}^{\beta},\alpha\geq\alpha_{0}\}\). Finally, by Lemma 3.1, each \(X_{\alpha}\) is an \(\operatorname{AE_{p}(n)}\). This completes the implication \((ii)\Rightarrow(iii)\).
The implication \((iii)\Rightarrow(iv)\) follows by similar arguments using that \(\omega X\) (as an AE(n)-compactum) is the limit space of a \(\sigma\)-complete inverse system \(\widetilde{S}=\{\widetilde{X}_{\alpha},\widetilde{p}_{\alpha}^{\beta}\}\) consisting of AE(n)-metric compacta \(\widetilde{X}_{\alpha}\) and perfect \(n\)-soft projections \(\widetilde{p}_{\alpha}:X\to X_{\alpha}\), see [8, Theorem 4.2]. Finally, since \(X\) admits \(n\)-soft perfect maps into \(\operatorname{AE_{p}}\)(n)-spaces, the implication \((iv)\Rightarrow(i)\) follows from Lemma 3.1.
Because every AE(n)-compactum of dimension \(\leq n\), where \(n\geq 1\), is metrizable [8, Theorem 4.4], Theorem 3.2 implies the following
**Corollary 3.3**.: _Every \(\operatorname{AE_{p}}\)(n)-space \(X\) with \(n\geq 1\) is metrizable provided \(\dim X\leq n\)._
Concerning \(\operatorname{AE_{p}}\)(0)-spaces we have the following:
**Theorem 3.4**.: _The following conditions are equivalent:_
1. \(X\) _is an_ \(\operatorname{AE_{p}}\)__\((0)\)_-space;_
2. \(X\in\operatorname{AE}\)__\((0)\)_;_
3. \(\omega X\in\operatorname{AE}\)__\((0)\)_;_
4. \(X\) _is the limit space of a_ \(\sigma\)_-complete inverse system_ \(S=\{X_{\alpha},p_{\alpha}^{\beta}\}\) _consisting of locally compact separable metric spaces_ \(X_{\alpha}\) _and perfect_ \(0\)_-soft projections_ \(p_{\alpha}:X\to X_{\alpha}\)_._
Proof.: Let \(X\) be an \(\operatorname{AE_{p}}\)(0)-space of weight \(\tau\) and embed \(\omega X\) in the Tychonoff cube \(\mathbb{I}^{\tau}\). By [12], \(\mathbb{I}^{\tau}\) is an image of the Cantor cube \(D^{\tau}\) under a perfect \(0\)-invertible map \(f_{0}^{\tau}\). As in the proof of Theorem 3.2, take a closed \(G_{\delta}\)-set \(F\subset\mathbb{I}^{\tau}\) with \(F\cap\omega X=\{\omega\}\) and let \(Y=D^{\tau}\backslash(f_{0}^{\tau})^{-1}(F)\). Then \(Y\), as a locally compact Lindelof subset of \(D^{\tau}\), is an \(\operatorname{AE}\)(0)-space. Indeed, there is a locally compact subset \(Y_{0}\) of the Cantor set \(D^{\aleph_{0}}\) with \(\pi^{-1}(Y_{0})=Y\), where \(\pi:D^{\tau}\to D^{\aleph_{0}}\) is the projection. Since \(\pi\) is \(0\)-soft and \(Y_{0}\in\operatorname{AE}\)(0), \(Y\in\operatorname{AE}\)(0). Next, consider a proper map \(g:Y\to X\) extending the restriction \(f_{0}^{\tau}|(f_{0}\tau)^{-1}(X)\). Then \(g\) is also \(0\)-invertible, hence \(X\in\operatorname{AE}\)(0) because \(Y\in\operatorname{AE}\)(0). Therefore, \((i)\Rightarrow(ii)\). The implication \((ii)\Rightarrow(iii)\) is well known, see [6, Proposition 3.9]. For the implication \((iii)\Rightarrow(iv)\), observe that Haydon's [11] spectral characterization of compact \(\operatorname{AE}\)(0)-spaces implies that \(\omega X\) is the limit of a \(\sigma\)-complete inverse system \(\widetilde{S}=\{\widetilde{X}_{\alpha},\widetilde{p}_{\alpha}^{\beta}\}\) consisting of compact metric spaces \(\widetilde{X}_{\alpha}\) and perfect \(0\)-soft projections \(\widetilde{p}_{\alpha}:X\to X_{\alpha}\). Since \(\{\omega\}\) is a \(G_{\delta}\)-subset of \(\omega X\), the restriction of \(\widetilde{S}\) over \(X\) provides a \(\sigma\)-complete inverse system \(S=\{X_{\alpha},p_{\alpha}^{\beta}\}\) consisting of locally compact separable metric spaces \(X_{\alpha}\) and perfect \(0\)-soft projections \(p_{\alpha}:X\to X_{\alpha}\) such that \(X\) is the limit of \(S\) (see the proof of Theorem 3.2). The implication \((iv)\Rightarrow(i)\) follows from Lemma 3.1 and Proposition 2.3
because \(X\) admits a \(0\)-soft map into a separable locally compact metric space \(X_{\alpha}\).
|
2308.11295 | Uncertainty Estimation of Transformers' Predictions via Topological
Analysis of the Attention Matrices | Transformer-based language models have set new benchmarks across a wide range
of NLP tasks, yet reliably estimating the uncertainty of their predictions
remains a significant challenge. Existing uncertainty estimation (UE)
techniques often fall short in classification tasks, either offering minimal
improvements over basic heuristics or relying on costly ensemble models.
Moreover, attempts to leverage common embeddings for UE in linear probing
scenarios have yielded only modest gains, indicating that alternative model
components should be explored.
We tackle these limitations by harnessing the geometry of attention maps
across multiple heads and layers to assess model confidence. Our approach
extracts topological features from attention matrices, providing a
low-dimensional, interpretable representation of the model's internal dynamics.
Additionally, we introduce topological features to compare attention patterns
across heads and layers. Our method significantly outperforms existing UE
techniques on benchmarks for acceptability judgments and artificial text
detection, offering a more efficient and interpretable solution for uncertainty
estimation in large-scale language models. | Elizaveta Kostenok, Daniil Cherniavskii, Alexey Zaytsev | 2023-08-22T09:17:45Z | http://arxiv.org/abs/2308.11295v3 | Uncertainty Estimation of Transformers' Predictions via Topological Analysis of the Attention Matrices
###### Abstract
Determining the degree of confidence of a deep learning model in its prediction is an open problem in the field of natural language processing. Most of the classical methods for uncertainty estimation are quite weak for text classification models. We set the task of obtaining an uncertainty estimate for neural networks based on the Transformer architecture. A key feature of such models is the attention mechanism, which supports the information flow between the hidden representations of tokens in the neural network. We explore the formed relationships between internal representations using Topological Data Analysis methods and utilize them to predict the model's confidence. In this paper, we propose a method for uncertainty estimation based on the topological properties of the attention mechanism and compare it with classical methods. As a result, the proposed algorithm surpasses the existing methods in quality and opens up a new area of application of the attention mechanism, but requires the selection of topological features.
## 1 Introduction
In deep learning, Transformers are a class of neural networks for processing text data. During the training stage, the model recognizes grammatical and semantic relationships between words in a sentence through the interaction between their internal representations. The permanent exchange of information between token representations allows the model to determine which of the words are most important for understanding each other. This interaction is called the Attention mechanism and ensures high accuracy of Transformer predictions on the main types of NLP problems, including text classification.
However, for the practical use of neural networks, one should consider not only the percentage of correct predictions, but also the degree of confidence in each prediction. Transformers solving a text classification problem can produce unreliable answers. First, as typical classification models, they are poorly calibrated in probability Guo et al. (2017). We expect that the incorrect predictions of the neural network correspond to a low (close to \(0.5\)) probability of the selected class. But classification models tend to overestimate their confidence in the prediction, resulting in a shift in probabilities closer to one. Secondly, Transformers suffer from adversarial attacks Guo et al. (2021): a small perturbation in the input of the attacked model causes a significant change in its output. Therefore, it is crucial to have a mechanism to identify untrustworthy predictions in situations where model errors are unacceptable.
This issue is addressed by uncertainty estimation methods. The simplest way to get an uncertainty estimate for a classification task is the output of the Softmax layer of the model, interpreted as a probability. More advanced approaches are Bayesian and ensemble methods, as well as MC dropout, which will be discussed in detail in Section 2. When applied to Transformers, these approaches have low computational efficiency or require significant modifications of the model setting.
The use of hidden representations of data has a potential to improve the quality of uncertainty estimates for Transformers. An effective method for detecting objects that such models regard as outliers uses the distances between the embeddings of the last layer Vazhentsev et al. (2022). It is based on the fact that the neural network maps from the space of objects to the space of internal representations, in which the concept of distance can be defined. Objects from the same distribution correspond to closely spaced embeddings, while internal outlier representations are far away from them. This method is resource efficient, shows consistently high results in metrics, and points to the promise of using Transformers' internal mechanism to estimate uncertainty.
In this paper, we develop the idea of determining the degree of model confidence in the prediction by its internal response to the input tokens. To characterize the structure of the Attention mechanism, we use its graph representation and apply methods of topological data analysis to it. Our main contributions are listed below:
* We obtain a complete description of the attention mechanism as a set of topological statistics. Topological features can characterize each attention matrix independently (feature type \(1\)) or pairs of attention matrices located in different parts of the network (feature type \(2\)). While earlier works (Cherniavskii et al., 2022), (Kushnareva et al., 2021) consider only features of type \(1\), we use features of type \(2\) to analyze Transformers for the first time. We show that topological statistics help improve the estimate of model uncertainty.
* We propose an approximation of the model confidence by a trainable analog of the probability for each prediction. We introduce a suitable training pipeline and loss function for connecting topological statistics with the uncertainty of predictions.
* Our confidence prediction algorithm outperforms baseline methods on three models determining grammatical correctness of sentences in English, Russian and Italian, respectively.
## 2 Related work
There are several ways to estimate uncertainty for deep neural networks. For classification models one can utilize the output of the Softmax layer, interpreted as a class probability. The authors of (Geifman and El-Yaniv, 2017) propose a Softmax Response method based on the maximum probability \(p(y|x)\) of belonging to the class \(y=c\in C\) at the model output. The higher this probability, the more confident the model is in the prediction:
\[u_{SR}(x)=1-\max_{c\in C}p(y=c|x) \tag{1}\]
The main advantage of this approach is that it does not require additional calculations: the required estimate is obtained automatically at the output of the last layer of the model. However, Softmax probabilities are not really credible, as models tend to be overconfident (Guo et al., 2017).
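As a minimal sketch (our own helper, not code from the cited works), the Softmax Response score can be computed directly from the classifier logits:

```python
import numpy as np

def softmax_response_uncertainty(logits: np.ndarray) -> np.ndarray:
    """Compute u_SR(x) = 1 - max_c p(y=c|x) from raw logits of shape (batch, num_classes)."""
    shifted = logits - logits.max(axis=-1, keepdims=True)   # numerically stable softmax
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    return 1.0 - probs.max(axis=-1)
```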
The classical method for uncertainty estimation, which has a solid theoretical foundation, is the Bayesian approach (Blundell et al., 2015). It first sets a prior distribution on the model parameters, and then, based on the training samples, calculates the posterior distribution over the weights. In order to apply the Bayesian approach in practice, various approximations are used (Van Amersfoort et al., 2020), and the choice of approximation greatly affects the quality of the estimate. In general, Bayesian neural networks capable of obtaining reliable estimates of uncertainty require significant changes in the training scheme and learn more slowly than classical neural networks.
For classical neural networks, there is another way to introduce variability into the model parameters without changing the architecture and learning process (Lakshminarayanan et al., 2017). It is based on training deep ensembles: several copies of the original model are trained independently and after that predictions of the resulting uncorrelated models are averaged. However, the quality of the uncertainty estimate increases with the number of models in the ensemble, and, accordingly, with the computational cost of their training.
The ideas of the previous two methods are developed by the more resource-efficient Monte Carlo (MC) Dropout method (Gal and Ghahramani, 2016). It interprets the dropout layer, a widely known regularization technique in deep neural networks, as a Bayesian approximation of a probabilistic model: a Gaussian process. Neural networks with different sets of deactivated neurons can be considered as Monte Carlo samples from the space of all possible neural networks, so the same uncertainty estimation methods work for them as for ensembles. It is important that dropout must be enabled at the testing stage to obtain diverse predictions, as opposed to the classical use of the layer at the training stage. The main advantage of this approach is that a single trained model is enough to estimate the uncertainty, while the disadvantage is the amount of computation for neural network runs with different sets of disabled neurons. Other approaches that can provide uncertainty estimation for deep neural networks train a separate head for this goal (Kendall and Gal, 2017; Kail et al., 2022). While these approaches often focus on computer vision, they can be adopted in natural language processing as well.
There are also methods for uncertainty estimation which use the Mahalanobis distance - a generalization of the Euclidean distance for vectors of random variables - between the internal representations of the training and test objects. The article Lee et al. (2018) explores the use of the Mahalanobis distance between the test samples and the nearest class conditional Gaussian distribution as an uncertainty estimate:
\[u_{md}=\min_{c\in C}(h_{i}-\mu_{c})^{T}\Sigma^{-1}(h_{i}-\mu_{c}) \tag{2}\]
where \(h_{i}\) is the hidden representation of the test sample, \(\mu_{c}\) is the centroid of class \(c\), and \(\Sigma\) is the covariance matrix of the hidden representations of the training samples. An advantage of this method is that it requires neither significant changes in the architecture of the model nor the memory and time costs of training and storing multiple copies of the model. Obtaining an uncertainty estimate only adds a small amount of computation to the model testing phase. A disadvantage of this approach is the need to obtain internal representations for all training examples, which may take more time than several forward passes of the neural network on the test dataset for MC dropout evaluation. This is due to the fact that the training dataset is much larger than the test dataset.
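A sketch of how the Mahalanobis estimator of Eq. (2) can be implemented, assuming the hidden representations of the training and test samples have already been extracted; the function names and the regularization constant are our own choices.

```python
import numpy as np

def fit_mahalanobis(train_h: np.ndarray, train_y: np.ndarray):
    """train_h: (N, d) hidden representations, train_y: (N,) class labels."""
    classes = np.unique(train_y)
    centroids = np.stack([train_h[train_y == c].mean(axis=0) for c in classes])
    centered = train_h - centroids[np.searchsorted(classes, train_y)]
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(train_h.shape[1])  # shared, regularized
    return centroids, np.linalg.inv(cov)

def mahalanobis_uncertainty(h: np.ndarray, centroids: np.ndarray, cov_inv: np.ndarray):
    """u_md = min_c (h - mu_c)^T Sigma^{-1} (h - mu_c) for each test representation h: (M, d)."""
    diffs = h[:, None, :] - centroids[None, :, :]            # (M, C, d)
    dists = np.einsum('mcd,de,mce->mc', diffs, cov_inv, diffs)
    return dists.min(axis=1)
```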
As a result, each of the existing methods for uncertainty estimation of neural network predictions has its own benefits and limitations, which, in turn, strongly depend on the task, the amount of available resources and the required quality of the estimate. In terms of the ratio of computational efficiency and reliability of the obtained estimate, the MC dropout methods and the Mahalanobis distance between internal representations are leading. However, most comparisons of methods for uncertainty estimation in the literature are carried out for computer vision problems, and their effectiveness for natural language processing problems is still unclear. Also, in works where uncertainty is evaluated for Transformers Shelmanov et al. (2021), Vazhentsev et al. (2022), the approaches and modifications proposed to them only occasionally consider the key feature of the architecture, namely the Attention mechanism. The proposed algorithm for calibrating neural model predictions by the degree of confidence not only produces high-quality estimates specifically for the NLP task, but also opens up another potential area of application of the attention mechanism - uncertainty estimation.
## 3 Problem Statement
### Confidence Score
We formalize the problem of estimating uncertainty for a pretrained Transformer model with fixed weights. For each sample \(x\), we need to get the Confidence Score \(c(x)\in[0,1]\) expressing the degree of confidence of the neural network in the prediction \(f_{\theta}(\mathbf{x})\). The task of uncertainty estimation in our case is to find the function \(c(Attention(x))\).
### Testing Method
The search for a suitable metric for our problem is complicated due to the fact that uncertainty estimation belongs to the class of unsupervised problems. There are no targets for a measure of the model's confidence in its prediction in the training dataset.
In order to correctly evaluate the estimate of model uncertainty, we focus on the correlation between the confidence of the model and the accuracy of its predictions. We use the following test pipeline:
* For each of the \(N\) test samples, we get the prediction of the neural network and the Confidence Score.
* We rank objects by Confidence Score in ascending order: from less confident objects to more confident ones. We calculate and save the accuracy score on the whole test dataset.
* We iteratively remove the \(r\) least confident samples and calculate the metric on \(N-ir\) remaining samples, where \(i\) is the iteration number.
* On the graph, we display the dependence of the accuracy score for the remaining subset on the portion of removed objects \(\frac{ir}{N}\)
The graph of the obtained dependence is called the Accuracy Rejection Curve Nadeem et al. (2009), and the area under the graph is used as the numerical value of the metric. It is common to calculate the area starting from the accuracy level on the whole test dataset, not taking into account the rectangular area below this level.
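The testing pipeline above can be condensed into a few lines; the following sketch assumes that correctness indicators and Confidence Scores are available as arrays and uses the trapezoidal rule for the area.

```python
import numpy as np

def accuracy_rejection_curve(is_correct: np.ndarray, confidence: np.ndarray, step: int = 10):
    """is_correct: (N,) booleans, confidence: (N,) scores; returns rejection rates and accuracies."""
    order = np.argsort(confidence)            # least confident samples first
    sorted_correct = is_correct[order]
    n = len(sorted_correct)
    rates, accs = [], []
    for removed in range(0, n, step):
        kept = sorted_correct[removed:]       # drop the `removed` least confident samples
        rates.append(removed / n)
        accs.append(kept.mean())
    return np.array(rates), np.array(accs)

def ar_area(rates: np.ndarray, accs: np.ndarray) -> float:
    """Area above the full-dataset accuracy level (the metric used in our comparisons)."""
    return float(np.trapz(np.maximum(accs - accs[0], 0.0), rates))
```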
The plot of the Accuracy Rejection curve can be interpreted as follows: a model correctly calibrated for uncertainty makes errors mainly on examples with a low Confidence Score. The graph shows that after removing a small number of such uncertain examples, the accuracy of predictions on the
remaining subset increases significantly. In practice, one can ask experts to make predictions on these examples instead of a neural network, and thus the probability of error for a hybrid system from a model and experts will be minimized. The most effective method for assessing uncertainty in this case minimizes the number of examples that must be given to experts for analysis.
## 4 Method Description
In this section, we consider in more detail topological statistics of the Attention mechanism and propose a training scheme for our method.
### Topological analysis of the Attention mechanism
The attention matrix has an equivalent representation in the form of a weighted directed graph \(G\). The vertices of such a graph are the tokens of the input sequence, and the weights of the edges are the attention weights for each pair of tokens. The direction of an edge is chosen from the query token to the key token. Using this method, for each input sentence we get \(N_{l}*N_{h}\) attention graphs, where \(N_{l}\) is the number of layers and \(N_{h}\) is the number of attention heads of the Transformer.
Our task is to extract from graph representations a set of numerical statistics for each sentence (Kushnareva et al., 2021). First, we calculate graph features: the number of vertices, edges, connected components, simple cycles, and Betti numbers. This set of statistics is not exhaustive, as it does not involve edge weights. The way to take the weights into account is to construct a filtration - a family of graphs \(\{G^{t_{i}}\}\) obtained from the original one by removing edges with a weight less than the threshold \(t_{i}\). Successive removal of edges leads to a change in the structure of the graph and in its main properties listed above. The methods of topological data analysis make it possible to quantitatively describe the evolution of graph properties by determining for each property the time of its appearance and disappearance in the filtration (the corresponding thresholds are denoted by \(t_{birth}\) and \(t_{death}\)). The set of intervals between \(t_{birth}\) and \(t_{death}\) is called a barcode and is a measure of the stability of the structure of attention graphs (Cherniavskii et al., 2022). The most stable and pronounced graph properties will have the largest barcode length. We extract the following numerical statistics from barcodes: the sum, mean, variance and entropy of barcode lengths, and the number of barcodes with appearance/disappearance times less/greater than a threshold value.
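For illustration, the sketch below shows how simple graph statistics can be tracked along such a filtration; it is our own simplified helper, and the actual barcodes in our experiments are computed with Ripser++, whose API is not reproduced here (the thresholds are arbitrary).

```python
import numpy as np
import networkx as nx

def attention_graph(attn: np.ndarray, threshold: float) -> nx.DiGraph:
    """Directed graph on tokens; an edge query -> key is kept if its attention weight >= threshold."""
    g = nx.DiGraph()
    g.add_nodes_from(range(attn.shape[0]))
    rows, cols = np.where(attn >= threshold)
    g.add_edges_from(zip(rows.tolist(), cols.tolist()))
    return g

def filtration_features(attn: np.ndarray, thresholds=(0.01, 0.05, 0.1, 0.25, 0.5)):
    """Basic statistics (edges, connected components, cycle-space rank) at each filtration level."""
    feats = []
    for t in thresholds:
        und = attention_graph(attn, t).to_undirected()
        n_edges = und.number_of_edges()
        n_comp = nx.number_connected_components(und)
        b1 = n_edges - und.number_of_nodes() + n_comp  # first Betti number of the undirected graph
        feats.extend([n_edges, n_comp, b1])
    return np.array(feats, dtype=float)
```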
An analysis of attention graphs (Kovaleva et al., 2019) taken from different heads and layers of the network shows that, in the process of filtering, the graphs exhibit similar structural elements (patterns). In the article (Clark et al., 2019) these patterns are divided into several main types: attention to the current, previous and next tokens, and attention to the special [SEP] and [CLS] tokens. The authors of (Kushnareva et al., 2021) propose a graph representation for attention patterns and introduce template features based on it. Numerically, a template feature is equal to the Frobenius norm of the difference between the attention matrix and the incidence matrix of the attention pattern.
The topological features listed above are calculated independently for attention graphs from different layers and heads of the network; however, the attention mechanisms in different parts of the model are trained jointly, and it is better to take the connection between them into account. The cross-barcode method (Barannikov et al., 2021) develops the idea of barcodes and makes it possible to compare the distributions of attention weights on different layers and heads of the neural network. Consider two attention graphs \(G^{w}\) and \(G^{w^{\prime}}\), where \(w\) and \(w^{\prime}\) are attention weights, as well as a matrix composed of the pairwise minima of the weights \(M=\min(w,w^{\prime})\) and the corresponding graph \(G^{\min(w,w^{\prime})}\). The main difference from barcodes is that the filtering is performed for the graph \(G^{w,w^{\prime}}\) constructed from \(G^{w}\) and \(G^{\min(w,w^{\prime})}\). After filtering, we get a set of intervals \((t_{birth},t_{death})\) and calculate the value of the topological feature as the total length of the cross-barcode segments. Intuitively, the cross-barcode expresses simpler graph properties (the number of simple cycles, connected components, etc.), which, at a fixed threshold \(\alpha\), have already appeared/disappeared in one of the graphs, but have not yet appeared/disappeared in the other graph.
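The pairwise input of the cross-barcode is simply the element-wise minimum of the two attention matrices; a small sketch is shown below (the cross-barcode itself is computed with the MTop-Div library, whose interface we do not reproduce here).

```python
import numpy as np

def crossbarcode_inputs(attn_a: np.ndarray, attn_b: np.ndarray):
    """Return the pair of weight matrices (w, min(w, w')) on which the cross-barcode is built."""
    assert attn_a.shape == attn_b.shape
    return attn_a, np.minimum(attn_a, attn_b)
```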
Using the methods above, from the graph representation of the attention matrices for each input sentence, we obtain a set of topological features of 4 types:
* graph statistics _(graph features)_
* features obtained from barcodes _(barcode/ripser features)_
* features obtained from attention patterns _(template features)_
* features obtained from cross-barcodes _(cross-barcode features)_
The first three kinds of features are calculated for attention matrices independently and belong to type \(1\); the last kind is calculated for pairs of matrices and belongs to type \(2\).
The linguistic interpretation of features is that graph structures are used in lexicology to describe the laws of semantic changes in a language (Hamilton et al., 2016). The evolution of the meaning of a word over time can be represented as a graph in which the edges are proportional to the semantic shift relative to different meanings of the word. A similar evolutionary process occurs within a neural model as the hidden representation of a word changes from the initial layers of the network to deeper layers. Groups of tokens with strong mutual influence form separate clusters in a graph representation, just as words related in meaning and syntactically in a sentence form phrases. Barcodes highlight the dominant, most stable semantic relationships between words, and attention patterns highlight the main directions of interaction between context tokens.
### Confidence predictor model and its training
Formally, given the input set of features \((z_{1},z_{2},\cdots z_{n})\), the algorithm should produce a number, the Confidence Score \(c\in[0,1]\). This range of values allows us to interpret the Confidence Score as a probability; thus the problem of estimating the confidence of the Transformer in the prediction is reduced to a standard binary classification problem. We use a separate Score Predictor model for it, consisting of several fully connected layers with activation functions between them. The last activation function should be a sigmoid to limit the output of the model to \([0,1]\). The simplicity of the architecture is explained by the high risk of overfitting: we increase the predictive ability of the model by more careful selection and aggregation of topological features instead of making the network deeper.
The key point is to build the loss function for our model. Cross-entropy, which is widely used by models for binary classification, does not suit us, since the target distribution of the Confidence Score is not available. Also, optimizing the loss function should reward the neural network for obtaining uncertainty estimates that truly reflect Transformer's ability to predict the correct class for any input. Similar to the method proposed in (DeVries and Taylor, 2018), we choose the cross-entropy modification as the loss function, in which the probability at the output of the model is calibrated using the Confidence Score.
Let \((x_{1},x_{2},\cdots x_{n})\) be the embeddings of words in a sentence, \((z_{1},z_{2},\cdots z_{m})\) the corresponding set of topological features, \(\mathbf{p}=\mathrm{Softmax}(f(\mathbf{x},\theta))\) the Transformer output, and \(c=\mathrm{Sigmoid}(g(\mathbf{z},\mathbf{w}))\) the uncertainty estimate for this prediction obtained from the Score Predictor model, where \(p_{1},p_{2},c\in[0,1]\), \(p_{1}+p_{2}=1\). Then the calibrated probability is obtained by interpolating between the Transformer's prediction and the one-hot target distribution \(\mathbf{y}\), with the degree of interpolation determined by the Confidence Score: \(p_{i}^{\prime}=cp_{i}+(1-c)y_{i}\). The final loss function
\[L=-\sum_{i=1}^{2}\log(p_{i}^{\prime})y_{i}-\lambda\log(c) \tag{3}\]
is the sum of cross-entropy with calibrated probability and regularization, which penalizes the model for uncertain predictions. The ratio of the main loss and regularization is determined by the \(\lambda\) hyperparameter.
Let us analyze how the Confidence Score affects the dynamics of the loss function. For \(c\to 1\) (the case of a sure prediction), \(p^{\prime}\to p\) and the value of the loss is determined by the original Transformer prediction without any uncertainty estimate. On the contrary, for \(c\to 0\), instead of an uncertain prediction, the loss function receives the target distribution, preventing the error from growing. If \(c\) is between 0 and 1, the calibrated probability moves closer to the target at the cost of the penalty in the regularization term. In general, a decrease in the loss function corresponds to an increase in the Confidence Score on correct predictions and a decrease in it on erroneous predictions, which is what we need. It is also important that all transformations in the loss are differentiable, but only the parameters of the Score Predictor model are trained. Any change in the weights of the Transformer would lead to a change in the topology of the attention mechanism and would require recalculation of the topological features, so the weights of the Transformer are fixed in the process of estimating the uncertainty.
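A possible PyTorch sketch of the loss (3) for the binary case is given below; the variable names and the value of \(\lambda\) are illustrative only, and the Transformer probabilities are assumed to come from the frozen classifier.

```python
import torch

def confidence_loss(probs: torch.Tensor, confidence: torch.Tensor,
                    targets: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    """probs: (B, 2) softmax outputs of the frozen Transformer, confidence: (B,) Score Predictor
    outputs in [0, 1], targets: (B,) class indices. Gradients flow only through `confidence`."""
    eps = 1e-8
    one_hot = torch.nn.functional.one_hot(targets, num_classes=probs.size(-1)).float()
    c = confidence.unsqueeze(-1)
    calibrated = c * probs + (1.0 - c) * one_hot            # p'_i = c * p_i + (1 - c) * y_i
    nll = -(one_hot * torch.log(calibrated + eps)).sum(dim=-1)
    reg = -torch.log(confidence + eps)                      # penalize low Confidence Scores
    return (nll + lam * reg).mean()
```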
## 5 Experiments
This section provides implementation details of our uncertainty estimation algorithm, data and model details, and experimental results.
### Data
We experiment on three Corpus of Linguistic Acceptability (CoLA) Warstadt et al. (2019) datasets, consisting of sentences in English, Italian and Russian. Each sentence has a target value: 1 for grammatically correct sentences, 0 otherwise. Basic information about the datasets is given in Appendix A.
### Models
The standard approach to using BERT-like models is to finetune pre-trained models Devlin et al. (2019) that are publicly available. Pre-trained neural networks have sufficient generalization ability to adapt to a specific task in just a few epochs of retraining with a low learning rate.
In this work, we use pre-trained BERT-base-cased models from the Transformers library. Hyperparameters of finetuning and final model metrics are given in the Appendix A.
### Basic Methods
We compare the results of our algorithm with three classical methods of uncertainty estimation: Softmax Response, MC Dropout, Mahalanobis estimator. The implementation of the last two methods was based on the code for the article Vazhentsev et al. (2022). Also, we checked the strategy of training the Score Predictor on the BERT embeddings extracted from its classification part instead of topological statistics. This approach is successfully used in the article DeVries and Taylor (2018) for computer vision tasks, however, the decrease in the performance of this method for NLP tasks provides motivation for applying TDA methods instead.
### Extraction of topological features
Each of the BERT-like neural networks considered in this paper contains 12 layers with 12 heads on each layer; accordingly, for each input object we extract 144 attention matrices. Then, for each matrix, we extract the three kinds of type 1 topological features discussed in detail in Section 4.1: graph features, barcode features, and template features. We calculate features of type 2 between pairs of attention matrices \(A_{ik}\) and \(A_{kj}\), where the first index corresponds to the layer number, the second index is the head number, and \(i,j,k\in\{0,12\}\). An example of a barcode for a test sample object is shown in Appendix D. Barcodes are calculated using the Ripser++ Zhang et al. (2020) library, and cross-barcodes are calculated using the MTop-Div Barannikov et al. (2021) library. The algorithms implemented in these libraries use advanced optimizations, which can significantly reduce the time for computing topological features. As a result, for each input object we get feature tensors of shape (12, 12, 7) for graph features, (12, 12, 14) for barcode features, (12, 12, 5) for template features, and (12, 12, 144) for cross-barcode features. The first dimension corresponds to the layer number, the second to the head number, and the third to the subtype of the topological feature. All subtypes of topological features are listed in Appendix C. To train the Score Predictor model, the selected feature vectors are concatenated along the third dimension.
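For reference, the bookkeeping of these tensors can be written as follows (placeholder arrays with the shapes listed above; in practice only the selected components are kept).

```python
import numpy as np

graph_f = np.zeros((12, 12, 7))       # graph features: layer x head x subtype
barcode_f = np.zeros((12, 12, 14))    # barcode (ripser) features
template_f = np.zeros((12, 12, 5))    # template features
cross_f = np.zeros((12, 12, 144))     # cross-barcode features

features = np.concatenate([graph_f, barcode_f, template_f, cross_f], axis=-1)  # (12, 12, 170)
flat = features.reshape(-1)           # flattened input vector for the Score Predictor
```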
### Training setup of the Score Predictor
The model that predicts the Confidence Score takes a set of topological statistics as input and converts it into a scalar uncertainty score as output. It is trained by minimizing the Confidence Loss introduced in Section 4.2, so the outputs of the Transformer Softmax layer and the distribution of target values for the original classification task are also given to the model as input. High-quality estimates were obtained by a Score Predictor model containing 2 fully connected layers with sigmoids as activation functions. Our optimal setup also includes the Adam optimization method and the training hyperparameters listed in Appendix E.
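A sketch of such a Score Predictor (two fully connected layers with sigmoid activations) is shown below; the hidden size and learning rate are illustrative and do not reproduce the exact hyperparameters from Appendix E.

```python
import torch
import torch.nn as nn

class ScorePredictor(nn.Module):
    """Maps a vector of topological statistics to a Confidence Score in [0, 1]."""
    def __init__(self, in_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.Sigmoid(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features).squeeze(-1)

# Training step sketch: the Transformer is frozen; only the Score Predictor is optimized,
# e.g. with torch.optim.Adam(model.parameters()), using the confidence loss sketched above.
```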
### Analysis of topological features of the first type
Let us consider separately the topological features of the first type. The total number of components in the vectors of topological features of the three kinds for one object is \(12\cdot 12\cdot(7+14+5)=3744\). Considering that the number of objects in the training samples does not exceed 10000, the use of 3744 features would guarantee overfitting. So it was necessary to reduce the number of components in the feature vector, and we considered two methods:
* averaging over all layers and all heads of the Transformer
* selection of the components that contribute most to the prediction using Shapley values (Lundberg and Lee, 2017)
The results of training the Score Predictor model only on the first type of topological features are given in Table 1.
For each model, selection of the most significant components improved the uncertainty estimate relative to simple averaging. Therefore, in further experiments, we feed the Score Predictor model with the two components of topological features of the first type that affect the result the most. The feature selection process via Shapley values is considered in detail in Appendix B.
### Analysis of topological features of the second type
Since we found out earlier that the last-layer attentions are the most informative for us, we also calculate cross-barcodes along this layer. We fix the index \(k\) equal to 12, vary the indices \(i\), \(j\), and observe how the quality of the uncertainty estimation changes when the Score Predictor receives the pairwise attention statistics of the matrices \(A_{ik}\) and \(A_{kj}\).
Table 2 shows the change in the metric when adding cross-barcodes to the English-language setup. According to the experimental results, there is a small set of optimal pairs of attention matrices. A cross-barcode calculated for such a pair improves the uncertainty estimate of our topological method, for example, the pair \(A_{12,12}\) and \(A_{12,9}\). These pairs are located in the lower right corner of the table, which corresponds to layer and head indices 7-12 in the Transformer. The remaining pairs of attention matrices barely improve the performance of the Score Predictor.
As a result, we supplement the set of topological features of the first type with the two most informative cross-barcodes. In what follows, our final results are given for the Score Predictor trained only on the best-performing topological features of each type.
## 6 Results
The area under the Accuracy Rejection curve for each method is presented in Table 3. This metric has a theoretical upper bound, which depends on the accuracy of the Transformer's predictions on the full dataset. It can be interpreted as the confidence of an Oracle, for which the condition \(C(x)>C(x^{\prime})\) holds for every correctly recognized object \(x\) and every incorrectly recognized object \(x^{\prime}\), where \(C(x)\) and \(C(x^{\prime})\) are the corresponding Confidence Score values. Consequently, the Accuracy Rejection curve for the Oracle increases linearly while only incorrectly classified objects are being removed from the subset, and then reaches a constant value of \(1\).
For all benchmarks we have considered, the topological methods outperform the basic methods, and the addition of cross-barcode statistics leads to a significant increase in the metric. This effect is most pronounced for the English-language BERT, where it leads to an increase of 12 percent relative to the method using only topological statistics of the first type, and least pronounced for the Russian-language BERT, where the increase is two times smaller. Among the baselines, Softmax Response and MC Dropout show poor quality of uncertainty estimates, while the Mahalanobis estimator gives a consistently reliable estimate and is practically comparable to our topological method without cross-barcodes. The results of the Embedding estimator method are not stable: for the Transformer working with Italian texts, they are close to the topological method without cross-barcodes, but for the other models the Embedding estimator is inferior to the topological methods.
We highlight the key features of the compared methods using the English-language benchmark as an example. Figure 1 shows the Accuracy Rejection curves for the two basic methods, the Oracle upper estimate, and our topological method using cross-barcodes. First, the Accuracy Rejection curve for our method lies above the corresponding curves of the basic methods and reaches a constant value earlier. Second, the main interest in practice is the initial part of the curve with a rejection rate in \([0,0.2]\). For the topological method, this section is convex, while for the basic methods, the character of the initial sections is closer to linear. In practical application, convexity is preferable, since it means a noticeable increase in the metric when only a small number of objects are removed.
## 7 Conclusion
Existing studies at the intersection of the TDA and NLP domains show that the use of topological features of attention maps can boost classification performance. We show that topological statistics also allow one to obtain uncertainty estimates of Transformer predictions, and the quality of these estimates is superior to that of the baselines. We demonstrate this by
experiments on three BERT models to determine the linguistic acceptability of sentences in English, Russian and Italian, respectively. The confidence prediction by the topological method demonstrates an increase of up to 16 percent in the metric compared to the best of the basic methods.
Our algorithm for determining the confidence of the Transformer in its prediction uses two main types of topological statistics: features calculated for each attention matrix independently and pairwise statistics. The use of features of the second type for the attention mechanism is our innovation, and cross-barcodes significantly boost the quality of uncertainty estimation by the proposed method.
We found that the location of the attention matrix in the Transformer affects the contribution of its topological statistics. On average, the last-layer features are the most informative; however, to achieve the highest possible quality of the estimate, careful selection of features is required. For topological features calculated independently, this process can be automated by selecting the components with the highest Shapley values.
## Limitations
Our method also has some limitations: 1) Feature extraction slows down the inference (on average, 20 sec per sample is required for feature computation with GPU acceleration). 2) Selection of the most informative attention matrices should be fully automated. Calculating topological statistics of both types only on a small set of attention matrices would significantly speed up our method and, quite likely, would not reduce the quality of the uncertainty estimation. Optimizing these calculations will be our main direction for further work in this area.
|
2308.00398 | DriveAdapter: Breaking the Coupling Barrier of Perception and Planning
in End-to-End Autonomous Driving | End-to-end autonomous driving aims to build a fully differentiable system
that takes raw sensor data as inputs and directly outputs the planned
trajectory or control signals of the ego vehicle. State-of-the-art methods
usually follow the `Teacher-Student' paradigm. The Teacher model uses
privileged information (ground-truth states of surrounding agents and map
elements) to learn the driving strategy. The student model only has access to
raw sensor data and conducts behavior cloning on the data collected by the
teacher model. By eliminating the noise of the perception part during planning
learning, state-of-the-art works could achieve better performance with
significantly less data compared to those coupled ones.
However, under the current Teacher-Student paradigm, the student model still
needs to learn a planning head from scratch, which could be challenging due to
the redundant and noisy nature of raw sensor inputs and the casual confusion
issue of behavior cloning. In this work, we aim to explore the possibility of
directly adopting the strong teacher model to conduct planning while letting
the student model focus more on the perception part. We find that even equipped
with a SOTA perception model, directly letting the student model learn the
required inputs of the teacher model leads to poor driving performance, which
comes from the large distribution gap between predicted privileged inputs and
the ground-truth.
To this end, we propose DriveAdapter, which employs adapters with the feature
alignment objective function between the student (perception) and teacher
(planning) modules. Additionally, since the pure learning-based teacher model
itself is imperfect and occasionally breaks safety rules, we propose a method
of action-guided feature learning with a mask for those imperfect teacher
features to further inject the priors of hand-crafted rules into the learning
process. | Xiaosong Jia, Yulu Gao, Li Chen, Junchi Yan, Patrick Langechuan Liu, Hongyang Li | 2023-08-01T09:21:53Z | http://arxiv.org/abs/2308.00398v2 | # DriveAdapter: Breaking the Coupling Barrier of Perception and Planning in End-to-End Autonomous Driving
###### Abstract
_End-to-end autonomous driving aims to build a fully differentiable system that takes raw sensor data as inputs and directly outputs the planned trajectory or control signals of the ego vehicle. State-of-the-art methods usually follow the 'Teacher-Student' paradigm. The Teacher model uses privileged information (ground-truth states of surrounding agents and map elements) to learn the driving strategy. The student model only has access to raw sensor data and conducts behavior cloning on the data collected by the teacher model. By eliminating the noise of the perception part during planning learning, state-of-the-art works could achieve better performance with significantly less data compared to those coupled ones._
_However, under the current Teacher-Student paradigm, the student model still needs to learn a planning head from scratch, which could be challenging due to the redundant and noisy nature of raw sensor inputs and the causal confusion issue of behavior cloning. In this work, we aim to explore the possibility of directly adopting the strong teacher model to conduct planning while letting the student model focus more on the perception part. We find that even equipped with a SOTA perception model, directly letting the student model learn the required inputs of the teacher model leads to poor driving performance, which comes from the large distribution gap between predicted privileged inputs and the ground-truth._
_To this end, we propose_ DriveAdapter_, which employs adapters with the feature alignment objective function between the student (perception) and teacher (planning) modules. Additionally, since the pure learning-based teacher model itself is imperfect and occasionally breaks safety rules, we propose a method of action-guided feature learning with a mask for those imperfect teacher features to further inject the priors of hand-crafted rules into the learning process. DriveAdapter achieves SOTA performance on multiple closed-loop simulation-based benchmarks of CARLA._
## 1 Introduction
In recent years, autonomous driving has become an active research topic due to the enormous progress of deep learning. A traditional pipeline of an autonomous driving system is usually composed of object detection [28], motion prediction [23, 22], trajectory planning [40], _etc._ To fully unleash the power of deep learning and big data and avoid cumulative errors, the concept of end-to-end autonomous driving is proposed [37, 33, 20, 5] which aims to build a fully differentiable model directly mapping the raw sensor data into planned trajectories or control signals.
One difficulty of end-to-end autonomous driving is that the noisy and redundant raw sensor inputs make it hard to directly learn a good policy. For example, the raw-sensor-input-based reinforcement learning (RL) agent MaRLn [45] requires 20 million steps (around 20 days) to converge even equipped with their pretraining techniques. To this end, in Roach [55], they decouple the learning process into two steps: (i) conduct the RL algorithm based on privileged inputs - rasterizing the ground-truth location of surrounding agents and traffic signs into 2D bird's-eye-view (BEV) tensors. The trained RL model is called the _teacher model_ as it uses privileged inputs and thus performs well. (ii) conduct behavior cloning with raw sensor inputs on the data collected by the teacher model. This model, which only has access to raw sensor data, is called the _student model_ since it is supervised by the teacher model. By decoupling the perception noise from the driving strategy learning process, Roach could achieve much better performance on more challenging benchmarks within 10 million steps. Besides the benefits of efficient RL training, LBC [4] and PlanT [40] demonstrate that training a teacher model from a rule-based expert and then using the teacher model to provide extra supervision for the student model could bring significant performance gains as well. Due to the aforementioned advantages of decoupled planning and perception learning, the teacher-student paradigm has been widely adopted by state-of-the-art (SOTA) works [4, 3, 49, 40, 18].
However, there are still issues under the existing paradigm. The student model still needs to train a planning head from scratch by behavior cloning, which could result in the causal confusion issue [47]. Specifically, the causal confusion issue here refers to the phenomenon that the student model learns the visual clue of the results instead of the cause of the desired actions. For example, the well-known inertia problem [47] is that the agent sometimes keeps still forever at the intersection. It is because, during behavior cloning, the student model might learn the improper causal correlation that the ego vehicle should copy behaviors of its surrounding vehicles at the intersection. In fact, the behaviors are determined by the traffic light. However, since the traffic light is smaller in images compared to vehicles, the student tends to find the shortcut [47]. As a result, during evaluation, if there are no vehicles nearby or all vehicles are behind the ego vehicle, it might get stuck. The causal confusion issues could be solved by techniques such as reweighting the distribution of training data [42, 38, 47] or a causal prior structure/network [48, 9].
Inspired by the fact that we already have a strong teacher model trained by RL without any causal confusion issue, in this work, we aim to explore the way to _utilize the teacher model to conduct planning directly instead of training a planning head for the student model from scratch_. In this way, the learning process of perception and planning is completely decoupled and thus the disadvantage of behavior cloning could be avoided, as demonstrated in Fig. 1. As a result, we could directly benefit from the driving knowledge inside the teacher model learned by RL. One intuitive implementation of this idea is to train a student model to generate the required privileged inputs for the frozen teacher model, _e.g._, a BEV segmentation student model for the Roach teacher model. However, we find that even equipped with the SOTA perception model: BEVFusion [30] + Mask2former [6], its final driving performance is still unsatisfying. The issue comes from the large distribution gap between the predicted BEV segmentation and ground-truth. It could be formulated as a domain transfer
problem since the teacher model has only seen the ground-truth BEV segmentation during the training process.
Inspired by the usage of adapters in the natural language processing (NLP) [17] and computer vision [15] field to adopt huge foundation models for downstream tasks, we propose **DriveAdapter**, which connects the output of the student model (perception) and the input of the teacher model (planning). Specifically, we add a learnable adapter module after each part of the teacher model and apply feature alignment objective functions on each adapter. In this way, the adapter could learn to transfer the imperfect feature from the student model's domain to the teacher model's domain in a layer-by-layer supervised way.
Additionally, we observe that the pure learning-based teacher model itself is imperfect and it is a common practice to add extra hand-crafted rules during the final decision process [50, 53, 18]. Thus, even if the student with adapters could losslessly generate the required inputs for the teacher model, it is still upper-bounded by the imperfect performance of the teacher. To this end, we propose to back-propagate an action loss to all adapters and mask all feature alignment loss if the teacher model is overridden by the rule. In this way, we force adapters to directly learn the feature required to generate good actions instead of just mimicking the teacher. By combining the two proposed techniques, DriveAdapter achieves state-of-the-art performance on two closed-loop evaluation benchmarks of the CARLA simulator. Moreover, we conduct thorough ablation studies and give results of other related attempts such as directly generating intermediate features of the teacher.
In summary, this work has the following contributions:
* To the best of our knowledge, we are the first to thoroughly explore the paradigm of directly utilizing the teacher head to conduct planning for the end-to-end autonomous driving task. Under such decoupled paradigm, the disadvantages of behavior cloning such as causal confusion could be avoided.
* The intermediate output form of BEV segmentation between the perception and planning modules has strong interpretability, shedding insights into the typically black-box pipeline of end-to-end autonomous driving. The decoupled perception model could enjoy the recent rapid progress of BEV perception and semantic segmentation.
* To deal with the imperfect perception issue as well as the imperfect teacher model issue, we propose DriveAdapter along with a masked feature distillation strategy. By combining the two techniques, it could achieve state-of-the-art performance on two public benchmarks.
* We give thorough ablation studies and other related attempts to provide more insights and understanding regarding the new decoupled paradigm.
_We believe that the rich driving knowledge within an RL expert model learned by millions of steps of exploration should be utilized more extensively instead of only for behavior cloning. We hope the proposed decoupled paradigm, our failing attempts, and our working techniques could all provide useful insights for this line of study._
## 2 Related Works
### End-to-End Autonomous Driving
The concept of end-to-end autonomous driving could date back to 1980s [37]. In the era of deep learning, early works conduct behavior cloning from a rule-based expert. CIL [10] adopts a simple CNN to directly map the front-view image from a camera to control signals. Further, in the extending work CILRS [11], they add an auxiliary task to predict the ego vehicles' current speed to alleviate the inertia issue. Later, in MaRLn [45], they explore the way to apply reinforcement learning to obtain a driving policy to surpass the rule-based expert. However, their method suffers from the high-dimensional raw sensor inputs for urban driving. In LBC [4], they propose to first train a teacher model with privileged inputs and then utilize this teacher model to provide supervision signals for all high-level commands, which significantly boosts the performance of the student model and thus the teacher-student paradigm has dominated the field. Roach [55] trains the teacher model with reinforcement learning, which demonstrates strong robustness and diversity compared to imitation learning-based teacher models and has been adopted by multiple most recent SOTA works [50, 18]. In PlanT [40], they propose to adopt Transformer for the teacher model on the states of the environment instead of CNN on the rasterized images, which demonstrates good scalability and interpretability. As for the student models, NEAT [7] transfers representations to BEV space. Transfuser [39, 8] adopts Transformer for camera and LiDAR fusion. LAV [3] adopts PointPainting [46] and proposes to predict all agents' future trajectories to augment the dataset. TCP [50] combines the trajectory prediction [25, 24] with the control signal prediction. Interfuser [43] injects safety-enhanced rules during the decision-making of the student models. MMFN [53] adopts VectorNet for map encoding and MILE [18] proposes to learn a world model for the student model. Among concurrent works, ThinkTwice [26] proposes a DETR-like scalable decoder paradigm for the student model. CaT [52] designs a knowledge distillation framework for the Teacher-Student paradigm. ReasonNet proposes specific modules for student models to better exploit temporal and global information. In [21], they propose to formulate the output of the student as classification problems to avoid averaging.
We could observe that state-of-the-art works all use behavior cloning for student models. In this work, we explore the way to further decouple perception and planning learning by directly adopting the frozen teacher model for planning, while keeping the system end-to-end differentiable.
### Adapter for Deep Learning Models
In recent years, huge foundation models [1] pretrained on an enormous amount of data have demonstrated strong transfer ability on downstream tasks. Since finetuning the whole model is computationally expensive and tends to overfit on downstream tasks with limited data, the Adapter [17] was first proposed in the NLP field: the parameters of the original model are fixed and extra learnable parameters are added between its blocks, so that the model keeps its generalization ability while allowing task-specific changes. Later, adapters were also shown to be effective in the computer vision field [15, 54].
In this work, we adopt the idea of Adapter to keep the knowledge in the teacher model while filling the gap between the predicted privileged inputs and the ground-truth.
## 3 Method
### Student Model for Perception Learning
Suppose we have a teacher model with privileged information as inputs. One popular form of encoding the privileged information is 2D bird's-eye-view (BEV) tensors [4, 55] composed of the rasterized positions of surrounding agents, lanes, and traffic signs, where the value of each channel is either 0 or 1 to represent the existence of the corresponding type of object at a certain location (in PlanT [40], they propose to encode the scene into discrete tokens so that a Transformer can be adopted). Here, we choose Roach to match SOTA works [50, 18].
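For readers unfamiliar with this input format, the snippet below sketches how such a rasterized privileged BEV input can be built. It is only an illustration: the grid size, spatial extent, channel list, and single-cell marking are made-up simplifications rather than Roach's actual specification.

```python
import numpy as np

def rasterize_bev(objects, grid=192, meters=38.4,
                  channels=("vehicle", "pedestrian", "route")):
    """Toy rasterizer for a privileged BEV input: one binary channel per object
    type, set to 1 where an object of that type occupies a cell, else 0."""
    bev = np.zeros((len(channels), grid, grid), dtype=np.float32)
    res = meters / grid                      # meters per BEV cell
    for kind, x, y in objects:               # ego-centric (type, x, y) in meters
        c = channels.index(kind)
        i = int(grid / 2 - y / res)          # rows grow toward the rear of the ego
        j = int(grid / 2 + x / res)
        if 0 <= i < grid and 0 <= j < grid:
            bev[c, i, j] = 1.0               # a real rasterizer fills the whole box
    return bev

bev = rasterize_bev([("vehicle", 5.0, 2.0), ("pedestrian", -3.0, 8.0)])
print(bev.shape, bev.sum())                  # (3, 192, 192) 2.0
```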
The student model takes raw sensor data as inputs (in this paper, images from four cameras and the point cloud from one LiDAR) and generates the desired input for the teacher model. For Roach, it could be formulated as a semantic segmentation task under BEV space [34]. Here, we adopt the BEVFusion [30] to convert raw sensor data into BEV features. Specifically, we adopt LSS [36] to scatter the image features from cameras into their corresponding BEV grid based on their location and depth [27]. For LiDAR, we adopt the commonly used SECOND [51] backbone to convert the point cloud into a BEV feature map. By concatenating the BEV feature maps of cameras and LiDAR, we obtain the 2D BEV representation of the scene. To conduct the semantic segmentation task, we adopt the state-of-the-art Mask2former [6] head.
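The following is a minimal, self-contained sketch of this perception pipeline. The two encoders and the per-pixel head below are placeholders standing in for LSS, SECOND, and the Mask2Former head, and all channel sizes and the BEV resolution are invented for the example.

```python
import torch
import torch.nn as nn

class StudentBEVSegmenter(nn.Module):
    """Fuse camera-BEV and LiDAR-BEV feature maps, then predict per-cell classes.
    Both inputs are assumed to be already projected onto the same BEV grid."""

    def __init__(self, cam_channels=64, lidar_channels=64, num_classes=8):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_channels + lidar_channels, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Stand-in for the Mask2Former-style head: a simple per-pixel classifier.
        self.seg_head = nn.Conv2d(128, num_classes, kernel_size=1)

    def forward(self, cam_bev, lidar_bev):
        bev = torch.cat([cam_bev, lidar_bev], dim=1)  # (B, C_cam + C_lidar, H, W)
        feat = self.fuse(bev)                         # raw BEV feature F used by the adapters
        seg_logits = self.seg_head(feat)              # predicted BEV segmentation H_0 (logits)
        return feat, seg_logits

model = StudentBEVSegmenter()
cam, lidar = torch.randn(2, 64, 96, 96), torch.randn(2, 64, 96, 96)
feat, seg = model(cam, lidar)
print(feat.shape, seg.shape)  # torch.Size([2, 128, 96, 96]) torch.Size([2, 8, 96, 96])
```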
Nevertheless, even equipped with the most advanced perception module as of today, we find that directly feeding the predicted BEV segmentation (mIoU 0.35 on unseen test scenes) to the teacher model does not work, _i.e_., it yields much worse driving performance compared to SOTA works, as shown in Table 1.
performance is still upper bounded.
### Adapter Module
Since the teacher model has only been trained on the ground-truth BEV segmentation, it is sensitive to the noise in the predicted BEV segmentation because of the large distribution gap. Inspired by the usage of the Adapter [17, 15, 54] on foundation models [1] to adapt them to downstream tasks with lower cost and less overfitting, we propose to add Adapters between the student and the teacher model. The overall architecture of DriveAdapter is shown in Fig. 3.
Formally, denote the predicted BEV segmentation as \(\mathbf{H}_{0}\), where the subscript \(0\) indicates that it serves as the initial input of the teacher model. Suppose the teacher model has \(N\) modules in a sequential order1, where \(\text{Teacher}_{i}\) denotes its \(i^{\text{th}}\) module and \(\mathbf{H}_{i-1}\) and \(\mathbf{H}_{i}\) denote the original input and output of \(\text{Teacher}_{i}\) respectively: \(\mathbf{H}_{i}=\text{Teacher}_{i}(\mathbf{H}_{i-1})\). Besides, since we want the adapter to have access to the raw sensor inputs so that the model could enjoy the benefits of end-to-end learning, we denote the raw BEV feature from the student model as \(\mathbf{F}\) and we use a series of convolutional layers to downsample \(\mathbf{F}\) so that \(\mathbf{F}_{i}\) has the same resolution as \(\mathbf{H}_{i}\).
Footnote 1: For more complex teacher models, specific adapter module could be designed accordingly.
**Adapter:** The forward process of the frozen teacher model with adapter modules is:
\[\mathbf{H}_{i-1}^{\text{Adpt}}=\text{Adapter}_{i-1}([\mathbf{H}_{i-1};\mathbf{F}_{i-1}]), \tag{1}\]
\[\mathbf{H}_{i}=\text{Teacher}_{i}(\mathbf{H}_{i-1}^{\text{Adpt}}), \tag{2}\]
where \(\text{Adapter}_{i-1}\) in Eq. 1 denotes the adapter module for the \(i^{\text{th}}\) module of the teacher model, which takes the feature from the previous layer \(\mathbf{H}_{i-1}\) and the raw feature \(\mathbf{F}_{i-1}\) as inputs and outputs the adapted feature for the next layer, \(\mathbf{H}_{i-1}^{\text{Adpt}}\), which has exactly the same tensor shape as \(\mathbf{H}_{i-1}\). The \(\text{Adapter}_{i-1}\) is implemented as CNN layers for 2D feature maps and as multi-layer perceptrons (MLPs) for 1D feature maps. Specifically, with Roach as the teacher model, after each layer of Roach's network (both convolutional layers and linear layers), there is a corresponding adapter module and we apply the feature alignment loss on each output. There are two exceptions: (i) the measurement encoder, because it takes the state of the ego vehicle as inputs, which is provided directly by the sensor and thus contains no error; (ii) the output linear layer, since it generates actions instead of features. Fig. 4 gives the details of the adapter modules.
Figure 3: **Overall architecture of DriveAdapter. (a) The student model takes raw sensor data as inputs and extracts BEV features for the usage of BEV segmentation and adapter modules. (b) The predicted BEV segmentation is fed into the frozen teacher model and the plug-in adapter modules. (c) The adapter modules receive supervision from the feature alignment objective with the ground-truth teacher feature. For cases where the teacher model is taken over by rules, a mask is applied on the alignment loss and the supervision of all adapter modules comes from the backpropagation of the action loss.**
Figure 4: **Details of Adapter Modules with Roach.**
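To make Eqs. (1)–(2) concrete, here is a rough sketch of the adapted forward pass. It is not the authors' code: the stand-in teacher and adapter modules, channel sizes, and tensor shapes are illustrative, and Roach's later 1D (MLP) stages are omitted for brevity.

```python
import torch
import torch.nn as nn

def forward_with_adapters(h0, raw_feats, teacher_modules, adapters):
    """Adapted forward pass through a frozen teacher, following Eqs. (1)-(2).

    h0:              predicted BEV segmentation, i.e. the teacher input H_0
    raw_feats:       student BEV features F_0..F_{N-1}, already resized to the
                     shapes of H_0..H_{N-1}
    teacher_modules: ordered frozen teacher modules Teacher_1..Teacher_N
                     (parameters have requires_grad=False, but the graph is kept
                     so the action loss can still backpropagate through them)
    adapters:        trainable adapters Adapter_0..Adapter_{N-1}

    Returns the final teacher output and the adapted features H^Adpt used by
    the feature-alignment loss."""
    h, adapted = h0, []
    for adapter, teacher_i, f in zip(adapters, teacher_modules, raw_feats):
        h_adpt = adapter(torch.cat([h, f], dim=1))  # Eq. (1): same shape as h
        adapted.append(h_adpt)
        h = teacher_i(h_adpt)                       # Eq. (2)
    return h, adapted

# Tiny illustration with 2D stand-in modules only.
teacher = [nn.Conv2d(8, 8, 3, padding=1).requires_grad_(False) for _ in range(3)]
adapters = [nn.Conv2d(8 + 4, 8, kernel_size=1) for _ in range(3)]
h0 = torch.randn(1, 8, 64, 64)
raw = [torch.randn(1, 4, 64, 64) for _ in range(3)]
out, adapted_feats = forward_with_adapters(h0, raw, teacher, adapters)
print(out.shape, len(adapted_feats))   # torch.Size([1, 8, 64, 64]) 3
```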
**Feature Alignment:** To fill the gap between the student's prediction and the ground-truth inputs for the teacher model, we apply a _feature alignment objective function_ for each adapter module:
\[\mathcal{L}_{i}=\text{Reg}(\mathbf{H}_{i-1}^{\text{Adpt}},\mathbf{H}_{i-1}^{\text{gt} }), \tag{3}\]
where \(\mathbf{H}_{i-1}^{\text{gt}}\) denotes the feature from \((i-1)^{th}\) module of the teacher model (without the adapter module) when the input is the ground-truth BEV segmentation. Reg denotes the regression loss function and we simply adopt smooth L1 loss here. The intuition behind this design is that we want each adapter module to recover the ground-truth feature required by the teacher model with an additional information source - the raw BEV feature. In this way, the distribution gap between the prediction and the ground-truth feature could be reduced gradually in a layer-by-layer supervised way.
**Mask & Action Guidance:** As for the imperfect teacher model issue, we inject the priors of the hand-crafted rules into the training process in two ways: (i) _Mask for Feature Alignment_: For the cases where the teacher model is wrong and is taken over by the rules, masks are applied on all feature alignment losses, since the original features in the teacher model lead to wrong decisions and thus we do not want the adapter modules to recover them. (ii) _Action Guided Feature Learning_: In order to let the adapter modules transform the features of the original teacher model into features leading to the final decision made by the rules, we calculate the loss between the model prediction and the actual decision and backpropagate it all the way through the frozen teacher model and the adapter modules. In this way, for those cases, the adapter modules are able to turn the final output of the model into the right decision. Experiment results show that the mask and the action guidance noticeably improve driving performance.
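A possible reading of the combined training objective is sketched below. The smooth-L1 alignment term and the rule-takeover mask follow the description above, while the specific action loss (plain L1 here) and the loss weights are placeholder assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def drive_adapter_loss(adapted_feats, gt_feats, pred_action, expert_action,
                       rule_mask, w_align=1.0, w_action=1.0):
    """Masked feature-alignment loss (Eq. 3) plus the action-guided loss.

    adapted_feats: list of H^Adpt_i produced by the adapters
    gt_feats:      list of H^gt_i, teacher features from the ground-truth BEV input
    pred_action:   (B, A) action predicted through the frozen teacher with adapters
    expert_action: (B, A) action actually executed by the expert (Roach or rule takeover)
    rule_mask:     (B,) float tensor, 1 where hand-crafted rules took over, else 0;
                   alignment is masked out on those frames, the action loss always applies."""
    keep = 1.0 - rule_mask                       # only align on frames driven by Roach itself
    align = 0.0
    for h_adpt, h_gt in zip(adapted_feats, gt_feats):
        per_sample = F.smooth_l1_loss(h_adpt, h_gt, reduction="none")
        per_sample = per_sample.flatten(1).mean(dim=1)               # (B,)
        align = align + (per_sample * keep).sum() / keep.sum().clamp(min=1.0)
    action = F.l1_loss(pred_action, expert_action)                   # action guidance
    return w_align * align + w_action * action

# Quick shape check with dummy tensors.
B = 2
adapted = [torch.randn(B, 8, 16, 16), torch.randn(B, 64)]
gt = [torch.randn(B, 8, 16, 16), torch.randn(B, 64)]
loss = drive_adapter_loss(adapted, gt, torch.randn(B, 2), torch.randn(B, 2),
                          rule_mask=torch.tensor([0.0, 1.0]))
print(loss.item())
```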
## 4 Experiments
### Dataset and Benchmark
We use the widely adopted CARLA simulator (version 0.9.10.1) for data collection and closed-loop driving performance evaluation.
As for the data collection, we collect data with the teacher model _Roach_[55] + _rules_ to match with SOTA works [50, 18]. Following the common protocol, we collect data at 2Hz on Town01, Town03, Town04, and Town06 for training and the total number of frames is 189K which is similar to [39, 8, 3, 50]. At each frame, we collect raw sensor data from 4 cameras and 1 LiDAR and we store the labels including depth, segmentation, control signals, ground-truth features, and the states of the ego vehicle. Additionally, recent works have collected more data and even on more towns for training: [43] (3 million frames, 8 towns) and [18] (2.9 million frames but at 25 Hz), which might trigger the unfair comparison issue as discussed in the community2,3. To this end, we collect a dataset with 2 million frames on 8 towns and we denote all models trained with extra data with *.
Footnote 2: [https://github.com/opendilab/InterFuser/issues/3](https://github.com/opendilab/InterFuser/issues/3)
Footnote 3: [https://github.com/wayveai/mile/issues/4](https://github.com/wayveai/mile/issues/4)
As for the evaluation process, we conduct closed-loop running under two public benchmarks: Town05Long (most widely used) and Longest6 (36 challenging routes selected by [8]). Both benchmarks are composed of tens of routes while each route contains a series of target points to indicate the destination of driving. In addition, there are manually defined challenging events randomly happening during driving to evaluate the driving agent's ability to deal with long-tail problems. For example, jaywalking pedestrians might suddenly appear. At the intersection, an opposite vehicle might illegally run a red traffic light. Thus, the model should have comprehensive knowledge about driving instead of simple lane following. Note that during the evaluation process, the model only has access to the raw sensor data and the use of privileged information is prohibited. For implementation details of the simulation and the model, please refer to supplemental materials.
### Metrics
Official metrics of CARLA are used. **Infraction Score (IS)** measures the number of infractions made along the route, with pedestrians, vehicles, road layouts, red lights, _etc._**Route Completion (RC)** is the percentage of the route completed by the autonomous agent. **Driving Score (DS)** is the **main metric** which is the product of Route Completion and Infraction Score.
### Comparison with State-of-the-Art Works
We compare with SOTA works on two widely used public benchmarks: Town05 Long and Longest6 as shown in Table 2 and Table 3 respectively. We could observe that DriveAdapter performs the best under the limited data setting and after adopting the dual outputs trick in [50] to improve safety, it even performs on par with competitors trained on 10\(\times\) data. After feeding more data to the model, DriveAdapter sets new records on both benchmarks. After investigation, we find that the major gain of 10\(\times\) data comes from better detection of the red light - a _perception_ issue. This demonstrates the benefit of having an explainable intermediate representation of BEV segmentation. There are also some common issues happening under both limited and
10\(\times\) data settings. We give more thorough investigations about failure cases in supplemental materials.
### Ablation Study
In this section, we conduct ablation studies to verify the effectiveness of each design of DriveAdapter. All experiments are conducted on Town05 Long benchmark with 189K training data.
#### 4.4.1 Loss Design
In DriveAdapter, we have two kinds of loss terms for the adapter modules: the feature alignment loss to deal with the distribution gap between the prediction and ground-truth BEV segmentation, and the action loss to handle the cases when the learning-based teacher model makes mistakes and the decision is overridden by rules.
The ablation study regarding the two loss terms is in Table 4. We can observe that:
* If we do not conduct feature alignment, the supervision for the adapter module is only the action loss which is very similar to behavior cloning. Thus, we could observe a drop in route completion due to the inertia issue.
* If we do not apply the masking strategy for those cases where the learning-based teacher model made mistakes, the supervision signals of the action loss and the feature alignment loss conflict with each other. As a result, the overall performance drops.
* If we discard the action loss, we could observe a drastic drop in the IS (infraction score) which comes from more collisions due to the relatively aggressive teacher model.
In summary, we can find out that both loss terms, as well as the masking strategy, are significant for the learning process. The feature alignment loss allows the adapter to exploit the driving knowledge within the teacher model while the action loss and mask strategy inject information about hand-crafted rules which leads to a more conservative and thus safer driving strategy.
#### 4.4.2 Adapter Design
| **Method** | **DS\(\uparrow\)** | RC\(\uparrow\) | IS\(\uparrow\) |
| --- | --- | --- | --- |
| DriveAdapter | 61.7 | 92.3 | 0.69 |
| w/o Feature Alignment Loss | 45.4 | 69.1 | 0.66 |
| w/o Mask for Feature Alignment | 56.9 | 85.4 | 0.65 |
| w/o Action Loss | 47.1 | 90.5 | 0.52 |

Table 4: **Ablation on loss terms of the adapter.**

| **Method** | **Teacher** | **Student** | Reference | **DS\(\uparrow\)** | RC\(\uparrow\) | IS\(\uparrow\) |
| --- | --- | --- | --- | --- | --- | --- |
| CILRS [11] | Rule-Based | Behavior Cloning | CVPR 19 | 7.8 | 10.3 | 0.75 |
| LBC [4] | Imitation Learning | Behavior Cloning + DAgger | CoRL 20 | 12.3 | 31.9 | 0.66 |
| Transfuser [39, 8] | Rule-based | Behavior Cloning | TPAMI 22 | 31.0 | 47.5 | **0.77** |
| Roach [55] | Reinforcement Learning | Behavior Cloning + DAgger | ICCV 21 | 41.6 | 96.4 | 0.43 |
| LAV [3] | Imitation Learning | Behavior Cloning | CVPR 22 | 46.5 | 69.8 | 0.73 |
| TCP [50] | Reinforcement Learning | Behavior Cloning | NeurIPS 22 | 57.2 | 80.4 | 0.73 |
| ThinkTwice [26] | Reinforcement Learning | Behavior Cloning | CVPR 23 | 65.0 | 95.5 | 0.69 |
| **DriveAdapter** | Reinforcement Learning | Frozen Teacher + Adapter | Ours | 61.7 | 92.3 | 0.69 |
| **DriveAdapter** + TCP | Reinforcement Learning | Frozen Teacher + Adapter | Ours | **65.9** | 94.4 | 0.72 |
| MILE* [18] | Reinforcement Learning | Model-Based Imitation Learning | NeurIPS 22 | 61.1 | **97.4** | 0.63 |
| Interfuser* [43] | Rule-Based | Behavior Cloning + Rule | CoRL 22 | 68.3 | 95.0 | - |
| ThinkTwice* [26] | Reinforcement Learning | Behavior Cloning | CVPR 23 | 70.9 | 95.5 | 0.75 |
| **DriveAdapter** + TCP* | Reinforcement Learning | Frozen Teacher + Adapter | Ours | **71.9** | 97.3 | 0.74 |

Table 2: **Performance on Town05 Long benchmark. \(\uparrow\) means the higher the better. * denotes using extra data. \(\dagger\) denotes no scenarios are used, which is a much easier benchmark. +_TCP_ means we adopt its dual output technique by adding an additional trajectory prediction head [50].**

| **Method** | **DS\(\uparrow\)** | RC\(\uparrow\) | IS\(\uparrow\) |
| --- | --- | --- | --- |
| WOR [2] | 23.6 | 52.3 | 0.59 |
| LAV [3] | 34.2 | 73.5 | 0.53 |
| Transfuser [39, 8] | 56.7 | **92.3** | 0.62 |
| PlanT with Perception [40] | 57.7 | 88.2 | 0.65 |
| ThinkTwice [26] | 61.3 | 73.0 | 0.81 |
| ThinkTwice* [26] | 66.7 | 77.2 | 0.84 |
| **DriveAdapter** | 59.4 | 82.0 | 0.68 |
| **DriveAdapter** + TCP | 62.0 | 82.3 | 0.70 |
| **DriveAdapter** + TCP* | **71.4** | 88.2 | **0.85** |

Table 3: **Performance on Longest6 benchmark.**

In DriveAdapter, all modules of the teacher model are frozen and, after each module, there is an adapter that takes both the feature from the previous layer and the raw BEV features as inputs. In Table 5, we give ablation studies regarding those designs. We can conclude that:
* If we apply adapters only at the early stage, _i.e_., only on the 2D feature maps, the agent became very reckless with a significant drop of IS. It might be due to the cumulative errors at the late stage.
* If we apply adapters only at the late stage, _i.e_., only on the flattened 1D feature maps, the agent got stuck more often (lower RC). We conjecture that the lack of details about the scene, _i.e_., the information at the early stage, makes it hard to fully utilize the causal inference ability of the teacher model and the adapter might serve more as a behavior cloning module.
* If we do not feed the raw BEV feature to the adapters, the agent performs poorly. It is natural since the information source of the downstream module is only the blurry and incomplete predicted BEV segmentation now, which is not enough to recover features.
* As expected, unfreezing the teacher model would lead to worse performance. The reason is that the behavior cloning process on the dataset with limited size (compared to tens of millions of steps exploration during reinforcement learning) would cause catastrophic forgetting [32], which has also been observed in foundation models [1] when finetuning on one downstream task. In fact, the model's final performance is very similar to LAV [3], a SOTA behavior cloning-based model. This experiment demonstrates the importance of freezing the teacher model.
### Beyond BEV Segmentation
In this section, we explore the possibility of directly regressing the middle feature maps of the teacher model instead of predicting the BEV segmentation. In other words, the student model does not generate the input at layer 0 for the teacher model; for example, we let the student generate the feature map at layer 1 of the teacher model and then feed the predicted feature map into the rest of the frozen teacher model. In this spirit, we conduct experiments with feature maps at different layers of the teacher model and the results are in Table 6.
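A small sketch of this probing setup is given below (see also Table 6). The toy teacher layers and shapes are invented; the helper simply resumes the frozen teacher from a chosen layer on a student-predicted feature.

```python
import torch
import torch.nn as nn

def run_teacher_from_layer(teacher_modules, predicted_feat, start_layer):
    """Probe used in this ablation: the student regresses the teacher's hidden
    feature at `start_layer`, and the remaining frozen teacher layers turn it
    into the final output. `teacher_modules` is an ordered list of modules."""
    h = predicted_feat
    for module in teacher_modules[start_layer:]:
        h = module(h)
    return h

# e.g., skip the first two CNN blocks of a toy teacher and decode from "CNN-2".
teacher = [nn.Conv2d(8, 8, 3, padding=1) for _ in range(4)]
pred_cnn2 = torch.randn(1, 8, 64, 64)        # student's regression target at layer 2
out = run_teacher_from_layer(teacher, pred_cnn2, start_layer=2)
print(out.shape)                             # torch.Size([1, 8, 64, 64])
```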
From the results, we find out that as the learning target of the student model becomes deeper, the driving performance increases. We hypothesize that feeding features directly into deeper layers of the teacher model would encounter fewer cumulative errors. The only exception is the pure behavior cloning agent. Since it only has action supervision and does not utilize the teacher model at all, it encounters severe inertia issue which leads to a low route completion (RC).
Nevertheless, as discussed in Sec. 4.4.2, features at the early stage contain more detailed information about the scene and some of them might be important for the teacher model to make decisions, and the usage of the adapter could alleviate the aforementioned cumulative errors. Thus, in this work, we stick to the BEV segmentation target. Besides, compared to high-dimensional features, the semantic segmentation is human-readable, which could be helpful for debugging perception issues (_e.g_., the fog is so heavy that the model could not detect the traffic light far away).
## 5 Conclusion
In this work, we propose DriveAdapter which could directly utilize the driving knowledge within a teacher model learned via reinforcement learning, in an end-to-end autonomous driving pipeline. To overcome the imperfect perception and the imperfect teacher model issue, we propose the masked feature alignment and action guidance objective function for adapters. DriveAdapter achieves state-of-the-art performance on two closed-loop autonomous driving evaluation benchmarks. We hope this could establish a new direction of research in end-to-end autonomous driving.
## Acknowledgement
This work was supported by National Key R&D Program of China (2022ZD0160104), NSFC (62206172, 6222607), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102).
| **Target** | **DS\(\uparrow\)** | RC\(\uparrow\) | IS\(\uparrow\) |
| --- | --- | --- | --- |
| BEV Segmentation | 8.9 | 93.2 | 0.09 |
| CNN-2 | 16.0 | 88.9 | 0.12 |
| CNN-4 | 19.0 | 88.2 | 0.23 |
| CNN-6 | 36.9 | 94.9 | 0.38 |
| Latent | 39.0 | 100.0 | 0.39 |
| Action | 39.2 | 61.2 | 0.62 |

Table 6: **Experiments about different learning targets for the student model.** The driving performance is evaluated by feeding the prediction of the student model directly into the corresponding layer of the teacher model. _CNN-i_ denotes the CNN feature map at the \(i^{th}\) layer of Roach [55]. _Latent_ denotes the 1D feature map after the linear layer on the flattened BEV feature. _Action_ denotes a behavior cloning student model.
| **Method** | **DS\(\uparrow\)** | RC\(\uparrow\) | IS\(\uparrow\) |
| --- | --- | --- | --- |
| DriveAdapter | 61.7 | 92.3 | 0.69 |
| Adapter at Early Stage | 47.2 | 93.9 | 0.47 |
| Adapter at Late Stage | 54.3 | 79.9 | 0.69 |
| w/o BEV Raw Feature | 34.8 | 82.3 | 0.43 |
| Unfrozen Teacher Model | 49.0 | 73.2 | 0.72 |

Table 5: **Ablation on the design of the adapter.** |
2303.15381 | Causal schema induction for knowledge discovery | Making sense of familiar yet new situations typically involves making
generalizations about causal schemas, stories that help humans reason about
event sequences. Reasoning about events includes identifying cause and effect
relations shared across event instances, a process we refer to as causal schema
induction. Statistical schema induction systems may leverage structural
knowledge encoded in discourse or the causal graphs associated with event
meaning, however resources to study such causal structure are few in number and
limited in size. In this work, we investigate how to apply schema induction
models to the task of knowledge discovery for enhanced search of
English-language news texts. To tackle the problem of data scarcity, we present
Torquestra, a manually curated dataset of text-graph-schema units integrating
temporal, event, and causal structures. We benchmark our dataset on three
knowledge discovery tasks, building and evaluating models for each. Results
show that systems that harness causal structure are effective at identifying
texts sharing similar causal meaning components rather than relying on lexical
cues alone. We make our dataset and models available for research purposes. | Michael Regan, Jena D. Hwang, Keisuke Sakaguchi, James Pustejovsky | 2023-03-27T16:55:49Z | http://arxiv.org/abs/2303.15381v1 | # Causal schema induction for knowledge discovery
###### Abstract
Making sense of familiar yet new situations typically involves making generalizations about _causal schemas_, stories that help humans reason about event sequences. Reasoning about events includes identifying cause and effect relations shared across event instances, a process we refer to as _causal schema induction_. Statistical schema induction systems may leverage structural knowledge encoded in discourse or the causal graphs associated with event meaning, however resources to study such causal structure are few in number and limited in size.
In this work, we investigate how to apply schema induction models to the task of _knowledge discovery_ for enhanced search of English-language news texts. To tackle the problem of data scarcity, we present Torquestra, a manually curated dataset of text-graph-schema units integrating temporal, event, and causal structures. We benchmark our dataset on three knowledge discovery tasks, building and evaluating models for each. Results show that systems that harness causal structure are effective at identifying texts sharing similar causal meaning components rather than relying on lexical cues alone. We make our dataset and models available for research purposes.
## 1 Introduction
Humans use language to understand stories describing participant interactions in events unfolding over time. To explain novel events in terms of previous experiences, humans rely heavily on _causal schemas_: stories about cause and effect relations that make memory and cognition more efficient [22, 17]. If such schemas or stories form the basis of human reasoning, perhaps AI systems may similarly learn, store, and manipulate knowledge of causal structure for model interpretability or reasoning applications. However, datasets to support studies of causal schemas with natural language processing (NLP) methods are few and far between, a problem we set out to address in this work.
Making a dataset for the computational modeling of causal relations described in language is challenging, and so most existing resources are limited in size and focus on explicit causality at the sentence level. We introduce Torquestra, a dataset of implicit and explicit causal relations at the discourse level to support language studies using statistical methods (e.g., large language models). Our premise is that for human interpretability, causal stories are best represented as graphs, opening up decades of formidable research in graph theory that we can apply to our tasks.
In Fig. 1, we show a pair of Torquestra directed graphs. An instance graph (left) represents the causal story associated with a single text with short descriptions of events and participants as nodes. The corresponding schema (right) is a generalization of this causal story with event types for nodes, giving a means of inferring how different event instances may be similar in predictable ways.
Figure 1: A causal schema is either an instance (left) tied directly to a text (top) or a schema graph (right) composed of event types. Edges indicate relations (not all shown) for causation of action and rest (Enables and Blocks). Graphs include participants, e.g., civilians (dotted orange node, left).
Results of knowledge discovery experiments using Torquestra demonstrate that graph-based methods help identify texts that describe event sequences sharing similar causal structures as well as lexical features, with performance in clustering and schema matching experiments comparable to strong baselines that rely on lexical patterns alone. Through our experiments, we highlight the versatility of the dataset, with the hope of encouraging future research into causation and schemas in NLP.
As we study the inference of latent causal stories given textual descriptions of event sequences, our dataset, Torquestra, may help answer questions such as: (1) In what ways are temporal, causal, event hierarchical, and schema structures related? And, (2) How well do statistical methods such as pre-trained language models help with tasks that resemble causal reasoning? To address these questions, our contributions include:
* (**Theoretical**) We study participant-centered causal structure, a relatively unexplored approach to discourse modeling, for which we define fine-grained causal relations based on physical models of causation;
* (**Dataset**) We present a dataset to analyze the temporal, event, schema, and causal structures described in natural language text; and,
* (**Empirical**) We carry out experiments in structured generation for knowledge discovery, testing the suitability of a general purpose commonsense model distilled on symbolic knowledge for our data and tasks.
We first explore background in schema research (SS2) and define causal structure from multiple perspectives including our own (SS3). We then take a close look at our dataset, Torquestra, including details about annotation and evaluation (SS4).
To demonstrate the versatility of the dataset, our experiments include: causal instance graph generation, causal graph clustering, and causal schema matching, and we design and build models and metrics for each (SS5). We report baseline results with large language models and graph neural networks (SS6), concluding with remarks on challenges and opportunities of schema understanding research.
## 2 Causal schemas
Schemas, cf. _scripts_ and _frames_, are high-level semantic structures for event sequences such as going to restaurants, crime investigations, and investing money (Minsky, 1974; Fillmore, 1976; Schank and Abelson, 1977), a coherent story or pattern of interactions distinct in memory.
### Why schemas?
Schemas help us reconstruct, order, and make predictions about events, about events' relative _salience_ and about the _centrality_ of event participants. Cognitive processes such as generalization, induction, and intuitive notions of physics and psychology (Talmy, 1988; Tenenbaum et al., 2011) are associated with causal cues encoded in language (Croft, 2012). Together, these base elements give rise to _causal reasoning_, a defining feature of human cognition and possibly one day of AI systems as well (Lake et al., 2017; Scholkopf et al., 2021).
### Causal schema induction
In AI, semantic understanding or analysis is viewed as "abduction to the best explanation" (Hobbs et al., 1993). Abductive reasoning is tightly associated with _induction_, which we view as abduction to the best _high-level_ explanation. The causal schema induction task is: given a text, infer high-level semantics for an event sequence using explicit (textual) and implicit (commonsense) knowledge. Consider the example events 1a and 2a.
1. a. I passed the salt to you. b. Transfer \(\not\twoheadrightarrow\) Change of ownership
2. a. I passed the money to you. b. Transfer \(\twoheadrightarrow\) Change of ownership
We formalize our notion of schemas using event types (1b and 2b) from FrameNet (Fillmore et al., 2003), with arrows denoting causal relations, either lack of enablement (\(\not\twoheadrightarrow\)) or enablement (\(\twoheadrightarrow\)). This formalization helps represent typical human experiences, e.g., not all Transfer events enable (or _imply_, _entail_ or _cause_) a Change of Ownership.
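Purely as an illustration of this formalization, a single schema edge could be stored as a small typed record. The field names and labels below are hypothetical and do not reflect the dataset's actual file format; the sub-relation vocabulary is the fine-grained one defined later in Table 2.

```python
from dataclasses import dataclass

@dataclass
class SchemaEdge:
    """One edge of a causal schema graph: FrameNet-style event types as nodes,
    an Enables/Blocks relation as the edge label, plus a finer sub-relation."""
    source: str
    target: str
    relation: str      # "Enables" or "Blocks"
    sub_relation: str  # e.g. "Begins", "Prevents", "Without effect"

# (2b) "I passed the money to you": the Transfer event type enables Change of ownership.
money = SchemaEdge("Transfer", "Change_of_ownership", "Enables", "Begins")
# (1b) "I passed the salt to you": no Enables edge is asserted between the same
# two event types, which is exactly the distinction the two arrows encode.
print(money)
```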
Human understanding of which logical conclusions are appropriate in a given context is integral to causal commonsense reasoning. In this paper, we examine how well large language models, e.g., GPT2/3 (Radford et al., 2019; Brown et al., 2020), perform related schema induction tasks.
### Related work
In this section, we briefly describe relevant background in research on stories, event temporality, schemas, semantic search, and challenges of collecting causal data from natural language texts.
**Stories and time**. A _story_ is a temporal ordering of events Labov and Waletzky (1967) characterized by change of state and participant interaction Croft (2012); Croft et al. (2017). In NLP, research into story understanding has emerged from studies of temporal relations Allen (1983); Mani et al. (2006) using temporal data for model development and evaluation Pustejovsky et al. (2003). Temporal event meaning is nuanced, evident in work on multiple meaning axes Ning et al. (2018), temporal aspect Donatelli et al. (2018), and the relative duration of events Zhou et al. (2021).
**Schemas as temporal structures**. Human knowledge is encoded in stories as _schemas_ Schank and Abelson (1995), prototypical event sequences for common situations. In NLP, schemas are temporal structures, e.g., narrative event chains Chambers and Jurafsky (2008), for event schema induction Chambers (2013), timeline construction Wen et al. (2021), temporal schema induction Li et al. (2021), future event prediction Li et al. (2022), and partially-ordered temporal schema generation Sakaguchi et al. (2021). However, temporal knowledge is complex and inherently noisy Ning et al. (2018), likely limiting advances in automated schema understanding systems.
**Temporal and causal structures are related**. Some lines of work integrate both temporal and causal perspectives, including narrative storylines Caselli and Vossen (2016) and representations for temporal and causal networks Bethard et al. (2008); Berant et al. (2014); Mirza and Tonelli (2016); O'Gorman et al. (2018). However much of this research does not directly address schemas, which we consider crucial for improved AI reasoning about stories, at the very least in an evaluation context.
**Temporal and causal datasets**. Our dataset is similar to efforts to crowdsource plot graphs Li et al. (2013), collect graphical schemas for everyday activities Sakaguchi et al. (2021), and apply text-graph pairs for temporal reasoning Madaan and Yang (2021). Our work differs in that we integrate knowledge of causal, temporal, event, and schema structures in a single dataset.
**Semantic search**. Knowledge discovery can be framed as semantic search: identifying texts that share semantic structure. The heavy lifting in information retrieval is done by methods like BM25 and TF-IDF (sparse retrievers), often combined with text embedding similarity metrics (dense retrievers) Chen et al. (2022). In work close to ours, similarity can also be measured using sentence meaning representations Bonial et al. (2020), which we extend to study causal structure at the discourse level.
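For orientation only, the sparse-plus-dense combination mentioned above can be sketched as a weighted blend of a TF-IDF cosine score with an embedding cosine score supplied by any sentence encoder; this is a generic recipe, not the retrieval system used in this paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

def hybrid_scores(query, docs, dense_sim=None, alpha=0.5):
    """Blend a sparse retriever score (TF-IDF cosine) with a dense one.
    `dense_sim` is an optional (len(docs),) array of embedding cosine
    similarities from any sentence encoder; `alpha` weights the two signals."""
    vec = TfidfVectorizer().fit(docs + [query])
    sparse = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
    if dense_sim is None:
        dense_sim = np.zeros_like(sparse)   # fall back to sparse-only ranking
    return alpha * sparse + (1 - alpha) * dense_sim

docs = ["rebels oust the leader and end the conflict",
        "the train collision injured dozens of passengers"]
print(hybrid_scores("leader ousted after conflict", docs))
```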
**Challenges in making NLP causal datasets**. Due to the complexities of faithfully assessing causal relations, natural language datasets have focused mostly on explicit causal markers Mirza and Tonelli (2014); Dunietz et al. (2017) typically at the sentence level Tan et al. (2022). In contrast, we seek to identify implicit (commonsense) and explicit causal relations at the discourse level, resulting in a dataset with at least 6x more causal relations than other resources, as we see in Table 1.
## 3 Causal story framework
Causality is complex, as centuries of research stand to remind us. For one, commonsense causal reasoning goes beyond mere notions of necessary and sufficient conditions Minsky (1974); Hobbs (2005). For another, causal viewpoints depend on perspective. Which causal dimensions of event sequences are humans most likely to agree on? To find answers to this question, we examine viewpoints from physics, neuroscience, philosophy, epidemiology, and cognitive semantics.
### Defining causal relations
In Newton's law of inertia Newton (1687), cause and effect relations are viewed in terms of **causation of action** (an external force puts an object into motion) and the **causation of rest** (an external force brings an object to rest). We argue that physical models for the acceleration and deceleration of an object through the addition of energy correspond closely with factors leading to the starting and ending of events, drawing on work in cognitive semantics Talmy (1988); Croft (2012) and psychology Wolff (2007) where causal relations are conceived of as tendency to action and rest, force, opposition to force, and the overcoming of force.
\begin{table}
\begin{tabular}{l|l|l}
**Dataset** & **\# does** & **\# causal rels** \\ \hline EventCausality Do et al. (2011) & 25 & 580 \\ Causal-TB Mirza and Tonelli (2014) & 183 & 318 \\ REO O’Gorman et al. (2016) & 90 & 1000 \\ Ning et al. (2018) & 25 & 172 \\ ESTER Han et al. (2021) & 2000 & 4k \\ Causal News Corpus Tan et al. (2022) & 3.5k sentences & 1600 (train) \\ Ours Torqueustra\_many & 3k texts & 24k \\ Ours Torqueustra\_many & 6k long texts & 75k \\ \end{tabular}
\end{table}
Table 1: Existing human-curated datasets of causal relations in written text are relatively limited in size. Our base dataset (2nd from bottom) is at least 6x greater in number of causal relations compared to previous work.
Likewise, in neuroscience physical causal mechanisms are related to excitatory and inhibitory synapses making neuron responses either more or less likely Purves et al. (2017), similar to views in scientific philosophy Reichenbach (1956) and causation in epidemiology Parascandola and Weed (2001). Across views, causal factors are associated with that which increases or decreases the likelihood of events, notions we integrate into definitions for causal structure we present in Table 2.
### Participant-centered causal structure
Modeling causal relations depends not only on how one event (a precondition or _cause_) is related to a subsequent event (a postcondition or _effect_), but also to the direct role of event _participants_, often the grammatical subjects and objects associated with event structure Talmy (1988); Croft (2012). Participant-centeredness is featured in studies of narrative Propp (1968); Caselli and Vossen (2016); Brahman et al. (2021), of participant states Ghosh et al. (2022); Vallurupalli et al. (2022), and of disease where organisms are conceived of as causative agents, e.g., the pathogen _tubercle bacillus_ Causes _tuberculosis_.
In the participant-centered graph in Fig. 2 we show how participants and events causally interact. Typical causal relations are between events, e.g., the _outsing of a leader_ may end or Block a _conflict_. In a participant-centered approach, people and things directly act on one another and also act as the initiating agents or causal endpoints of events, e.g., the _rebels_Block the _leader_ who in turn Enables the conflict, etc.
## 4 Torquestra
Torquestra is a _causal schema library_: a dataset1 of text-graph-schema units for research in schema induction and, more broadly, knowledge discovery (see SS5). At its core, a Torquestra data instance is a newswire text paired with causal (\(G_{\textit{causal}}\)), temporal (\(G_{\textit{temp}}\)) and event (\(G_{\textit{event}}\)) structures. For notation, see Table 3.
Footnote 1: [https://github.com/fd-semantics/causal-schema-public](https://github.com/fd-semantics/causal-schema-public)
In this section, we briefly describe the resources we used to make Torquestra, including notes about texts, size, graphs, and data annotation.
\begin{table}
\begin{tabular}{c l l l l l} \hline \hline
**Rel** & **Sub-relation** & **Description** & & **Verbs/Concepts (ex.)** & **Example** \\ \hline \multirow{5}{*}{**Purchals**} & Begins & Prototypical causation of action & cause, start & Oleg started the ball rolling. \\ & ADRs & Acceleration; cf. sufficient condition & contribute, help & Oleg kept the ball rolling. \\ & Allways/lets action & Inaction allows action & let, allow, permit & Oleg let the ball (0) for acting to stop it). \\ & & **Plevents** & Remove barrier so action can continue & free, maintain & Olegana removed obstacles to the ball rolling. \\ & Without effect & Despite expectations, no enabling effect & despite, even though & Despite our efforts, we couldn’t get the ball rolling. \\ & Unknown & Uncertainty of enabling relation & questions, modality & Did Did anybody/anything start the ball rolling? \\ \hline \multirow{5}{*}{**Purchals**} & EnBs & Prototypical causation of rest & stop & Oleg stopped the ball rolling. \\ & Disrupts & Reduction of momentum & hinder, resist, slow & Oleg slowed the ball down. \\ & Allways/nets first & Inaction leads to rest & not help & Oleg let the ball stop rolling. \\ & **Plevents** & Barrier to action & refrain, forbid, hold & Oksana prevented the ball from rolling. \\ & Without effect & Despite expectations, no blocking effect & despite, even though & We tried but could not stop the ball. \\ & Unknown & Uncertainty of blocking relation & questions, modality & Did anybody/anything stop the ball? \\ \hline \hline \end{tabular}
\end{table}
Table 2: Causal relations reflect the dual concepts of Enables (\(\approx\)makes more likely), shorthand for _causation of action_, and its counterpart Blocks (\(\approx\)makes less likely), shorthand for _causation of rest_. More fine-grained sub-relations (second column) are symmetric, e.g., the most prototypical causal relation Enables-Begins corresponds to Blocks-Ends, Enables-Adds corresponds to Blocks-Disrupts, etc. The sub-relation Without effect denotes the absence of expected causality for events that happen or do not happen _despite expectations_, a challenging task for machines and often overlooked in other datasets.
\begin{table}
\begin{tabular}{c l} \hline
**Symbol** & **Meaning** \\ \hline \(G_{\textit{causal}}\in\mathcal{G}\) & Instance causal graph assoc. w/ text \\ \(S_{\textit{causal}}\in\mathcal{S}\) & Schema causal graph assoc. w/ text(s) \\ \(p\in\mathcal{P}\) & Participant node in a causal graph \\ \((p_{i},rel,p_{j})\) & Causal relation (see Table 2) \\ & between participants \(p_{i}\) and \(p_{j}\) \\ \(G_{\textit{temp}}\) & Temporal graph (Ning et al., 2020) \\ \(G_{\textit{event}}\) & Event graph (Han et al., 2021) \\ \(V_{\textit{num}}\) & Node w/ event types (Wang et al., 2020) \\ \(\phi_{\textit{num}}\) & Set of hierarchical event types \\ \hline \hline \end{tabular}
\end{table}
Table 3: Notation used in this paper.
Figure 2: An example participant-centered instance graph for “The rebels ousted the leader to end the conflict.” Causal graphs consist of two types of nodes: participants (top, orange) and events (bottom, red). Here, the rebels enable the ousting event, thus blocking the leader and the conflict as well.
### Temporal and event structure knowledge
Torque + Ester = Torquestra.We wanted a dataset for a joint study of temporal, causal, and event structures. To this end, we examined question-answer (QA) datasets for temporal relations Torque Ning et al. (2020) and event structures Ester Han et al. (2021). After noting both drew texts from TempEval3, a subsequent analysis revealed the datasets shared 700 text snippets, an intersection of data that we used to form core Torquestra, as illustrated in Fig. 3.
### Texts
**Texts** in Torquestra are English-language newswire snippets from TempEval3 UzZaman et al. (2013) and Wikipedia. The texts cover a number of typical news domains, including politics, sports, and business. Texts are mostly multiple sentences (98%+), with mean text length between 60-300 subword tokens Byte Pair Encoding Sennrich et al. (2016)).
### Dataset size
Our manually constructed dataset consists of three slices of data: Torque (2500 exs), Ester (700 exs), and Wiki-crime (200 exs), each aligned with up to four semantic networks. For details about data slices, see Appendix A.2, Fig. 6.
### Causal instance graphs
Torquestra consists of causal instance graphs for events described in text (see Fig. 2). A causal graph \(G_{\textit{causal}}=(V,E)\) is directed and at times cyclic, with vertices \(V\) for salient events and participants and edges \(E\) being causal relations. Nodes in the graphs are natural language descriptions for events and event participants, typically of subject-verb-object form. For graph statistics, see Appendix A.2, Fig. 7.
### Causal schema graphs
As counterparts to \(G_{\textit{causal}}\), schema causal graphs \(S_{\textit{causal}}\) are generalizations for event sequences using event types from \(V_{\textit{maven}}\subset\phi_{\textit{maven}}\)2. Annotators also add free-form event labels for cases not represented in \(\phi_{\textit{maven}}\), e.g., event types such as International_relations, amounting to >\(400\) event types observed (with details in Appendix A.3).
Footnote 2: \(|\phi_{\textit{maven}}|=168\) FrameNet event types
Compare the instance graph from earlier (Fig. 2) to the schema graph in Fig. 4: the 'tosting of the leader' blocks the 'conflict' in the instance graph, which in the schema graph is generalized to a Change of leadership that blocks a Military operation.
Ontological questions for causal schemas remain open. In Torquestra, causal schema are associated with event frame semantics at the node- or subgraph-level, one step of the data collection process that we discuss next.
### Data annotation
The dataset is compiled through manual and automated means, subsets of data we refer to as Torquestra_human_ and Torquestra_auto. In this subsection, we focus on manual annotation efforts.
**Torquestra_human_** consists of approximately 30K spans of text corresponding to graph nodes and 48K additional labels for nodes and edges. Annotation consisted of four main tasks: Given a short text and commonsense knowledge, identify and label causal participants (nodes), event types (for nodes/graph), causal relations (edges), and salient causal chains.
For annotation, we relied on a group of eight (8) in-house undergraduate and graduate students with backgrounds in linguistics and computer science which we found could faithfully recreate the causal graphs we envisioned. Core Torquestra_human_ required approx 250 hours with annotators earning between $16-25/hr. For more details about the annotation process, guidelines, evaluation, and prompt engineering, see Appendix A.4.
Figure 4: A causal schema graph for the event sequence “The outsing of the leader ended the conflict” using frame semantic labels as nodes.
Figure 3: The core of Torquestra is drawn from texts with rich QA annotations from two existing resources: Torque Ning et al. (2020) and Ester Han et al. (2021).
## 5 Tasks and experimental methods
Torqueestra supports the induction and knowledge discovery tasks illustrated in Fig. 5: (1) _causal instance graph generation_, (2) unsupervised _causal graph clustering_, and (3) _causal schema matching_. We briefly describe each in turn.
### Causal instance graph generation
The first task is _causal instance graph generation_. As in work on narrative planning Riedl and Young (2010), learning mini knowledge graphs as world models Ammanabrolu and Riedl (2021), and temporal graph generation Madaan and Yang (2021), we generate graphs conditioned on text and, in an extension to previous work, also condition on event semantics (e.g., temporal structures). We compare few-shot GPT-3 Brown et al. (2020) with fine-tuned GPT2-XL and knowledge distilled GPT2-XL_distill_West et al. (2022) (60/40 train/dev split).
**Generation evaluation**. Perplexity and n-gram overlap metrics such as METEOR Banerjee and Lavie (2005) are of limited use as proxy measures for the faithfulness and coherence of generated causal stories. So, we also manually evaluate triples, reporting _correctness_ (% of accurate edges) and _completeness_ (% of causal graph generated).
### Causal graph clustering
The second task is _causal graph clustering_, unsupervised clustering of schema instance graphs (Torqueestra_auto_). The objective of this task is to study the effectiveness of similarity metrics for texts using lexical features and graph embeddings.
**Data**. As out-of-domain data for testing, we used MAVEN (MAssive eVENt detection) Wang et al. (2020), a collection of 3K+ Wikipedia articles that targets open domain event understanding systems, adopting 168 labels from FrameNet Fillmore et al. (2003) organized in an event hierarchy.
**Models**. For lexical similarity baselines, we use standard implementations of tf-idf3 for sparse vectors and a SentenceTransformer Reimers and Gurevych (2019) for dense text embeddings.
Footnote 3: [https://en.wikipedia.org/wiki/Tf-idf](https://en.wikipedia.org/wiki/Tf-idf)
For graph embeddings, we first encode graph nodes using DeBERTa (900M model) He et al. (2021), and assign scalar values to edges (\(+1\) for Enables and \(-1\) for Blocks). We then train a graph attention network Velickovic et al. (2018) via self-supervision masking random nodes. For further comparison, we also compose graph embeddings using the FEATHER algorithm Rozemberczki and Sarkar (2020) based on random walks.
We cluster embeddings using standard K-means4 with \(k\)=6 for the number of clusters and fix the number of observations to \(n\)=25 for evaluation. We then examine results, measuring similarity using automated metrics we outline next.
Footnote 4: [https://scikit-learn.org/stable/modules/clustering.html](https://scikit-learn.org/stable/modules/clustering.html)
**Clustering metrics**. We report purity, adjusted Rand index (ARI), and V-measure (VM) Rosenberg and Hirschberg (2007) using as ground truth topic labels for each article Wang et al. (2020) mapped to a smaller ontology5.
Footnote 5: e.g., ‘hurricane’ and ‘earthquake’ are both Disaster’
To measure similarity accounting for multiple labels, we propose a new metric: _event cluster purity_, estimating for each cluster a true label \(e_{j}\) as the top-j event types observed6. In these experiments, we look at the top-10 event types observed, \(j\)=10, so \(e_{10}^{m}\) denotes the ground truth event type vector for cluster \(m\). We then compare this ground truth with a human-annotated k-hot event vector for each graph, \(e^{G}\). For \(N\) clusters \(M\), the metric is defined:
Footnote 6: Reminiscent of Jaccard similarity
\[\text{purity}_{\text{\emph{event}}}=\frac{1}{N}\sum_{m\in M}\sum_{e^{G}\in m }\max_{e_{10}^{m}\in\mathbf{E}}|e_{10}^{m}\cap e^{G}| \tag{1}\]
### Causal schema matching
As a variant of exemplar matching, the aim of causal schema matching is to identify induced MAVEN graphs Torqueestra_auto_) most similar to curated schemas from two sources: core Torqueestra_human_ (short texts) and an existing schema library Du et al. (2022). In the latter case, we match to schema chapters (individual subgraphs) for ease of evaluation.
Previous work has investigated methods to align schema nodes Du et al. (2022), and methods for subgraph matching exist Rex et al. (2020).
Figure 5: Data-task pipeline. With texts from different sources (dotted boxes), we annotate, generate, and collect causal stories for a _schema library_, a repository we use for _causal graph clustering_ and _schema matching_.
Nonetheless, our experiments show the effectiveness of using graph embedding similarity as a step in identifying relevant schemas given a query.
Using the same models as our clustering experiments, we randomly select 50 Wikipedia articles to match with our schema library (RESIN Du et al. (2022) + core Torquestra) and examine the top-5 matched schemas using text topic labels, event type overlap, and graph visualizations for qualitative analysis. We report mean average precision (MAP) and mean reciprocal rank (MRR) as the accuracy of the ranking of the most relevant text. For more details about metrics and the evaluation tool, consult the Appendix A.8 and website.
## 6 Results and discussion
We present results for causal instance graph generation using manual and automatic evaluation in Table 4 and results for causal graph clustering and schema matching in Tables 5 and 6.
We report mean results for a minimum of three different model runs varying random seeds (of graph neural networks) and hyperparameters (# epochs, block size, p-sampling rate, etc.). We do not exhaustively explore settings nor compare with language models outside the GPT family. Experimentation leads to the following observations.
**Large language models can generate complex structured representations**. Experiments show (Table 4) we can generate symbolic causal knowledge in the form of directed, branching causal graph structures with multiple events and participants. We expect that research into interpretable, neuro-symbolic, stepwise reasoning using generative models may build upon this progress in structure prediction.
**Conditioning on structural knowledge improves generation performance, in some cases**. We evaluate if conditioning on temporal, event, and event type networks helps improve generated causal graph correctness and completeness and find that temporal and event structures appended to raw text result in more correct (Table 4, +7%), more complete (+7%) graphs than raw text alone. Overall, semantic signals jointly increase performance over conditioning on raw text alone using validation data (texts 100-150 tokens in length).
More specifically, we experimented with various forms of concatenated text and structures, including text alone, text + G\({}_{temp}\), and text + G\({}_{temp}\) + V\({}_{event}\). With in-distribution data (top half of Table 4), the
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Method** & **Matching** & \multicolumn{2}{c}{**Metric**} \\ & & MAP & MRR \\ \hline TF-IDF & text-to-text & 0.36 & 0.32 \\ GAT & graph-to-graph & 0.48 & 0.36 \\ \hline TF-IDF & text-to-schema & 0.59 & 0.35 \\ GAT & graph-to-schema & **0.68** & **0.43** \\ \hline \hline \end{tabular}
\end{table}
Table 6: First (top), we match Torquestra to Wikipedia texts (MAVEN) using TF-IDF and graphs encoded with a graph attention network (GAT). For schema matching (bottom), we match MAVEN graphs to our causal schema library.
\begin{table}
\begin{tabular}{l l l c c c} \hline \hline
**Experiment** & **Model** & **Output \(\,\)I Input** & \multicolumn{4}{c}{**Metrics**} \\ & & & METEOR & Correct & Complete & \# triples eval \\ \hline Graph generation 1 & GPT-3 (7-shot) & \(p(G_{round}\)_text_text_, \(G_{range})\) & 0.28 & 0.50 & 0.33 & 224 \\ (validation set) & & \(p(G_{round}\)_text_text_, \(G_{round})\) & 0.26 & 0.55 & 0.25 & 180 \\ (supervised, 60/40 split) & GPT2-XL & \(p(G_{round}\)_text_text_, \(G_{round})\) & 0.27 & 0.52 & 0.23 & 180 \\ & GPT2-XL_distill_ & \(p(G_{round}\)_text_text_, \(G_{round})\) & 0.27 & 0.58 & 0.35 & 120 \\ & & \(p(G_{round}\)_text_text_, \(G_{round})\) & 0.29 & 0.60 & 0.38 & 120 \\ & & \(p(G_{round}\)_text_text_, \(V_{event})\) & 0.34 & 0.60 & 0.40 & 300 \\ & & \(p(G_{round}\)_text_text_, \(G_{round}\)_, \(V_{event})\) & **0.41** & **0.65** & **0.42** & 360 \\ \hline Graph generation 2 & GPT2-XL_distill_ & \(p(G_{round}\)_text_text_, \(G_{round})\) & n/a & 0.56 & 0.33 & 320 \\ (test set) & & \(p(G_{round}\)_text_, \(V_{event})\) & n/a & 0.59 & 0.36 & 360 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results for causal graph (G\({}_{causal}\)) generation using GPT-3 Brown et al. (2020), GPT2-XL Radford et al. (2019) and GPT2-XL_distill_West et al. (2022). We condition on texts + temporal networks (\(G_{temp}\)), event structure (\(G_{event}\)), and hierarchical events (\(V_{event}\)), automatically evaluating with METEOR and manually evaluating correctness and completeness. Pairs of results underlined (or dotted) illustrate important points we discuss in §6.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline
**Model** & **Input** & \multicolumn{4}{c}{**Metric**} \\ & & parity & ARI & VM & \(\text{purity}_{event}\) \\ \hline embedding & text & 0.96 & 0.95 & 0.95 & 4.36 \\ TF-IDF & text & **0.98** & **0.97** & **0.97** & **5.47** \\ FEATHER & graph & 0.82 & 0.20 & 0.33 & 4.09 \\ GAT & graph & 0.83 & 0.46 & 0.49 & 4.69 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results for causal graph clustering (higher is better). Evaluation is based on single labels (for first three metrics), with purity\({}_{event}\) (Eq. 1) based on most frequent event types.
most rich input (G\({}_{temp}\) + V\({}_{event}\)) led to the best generation results. In contrast, with out-of-domain texts (bottom half), text alone works better than conditioning with 'dense' paraphrases of V\({}_{event}\), with the length of test texts (2-4x longer than avg. validation) likely a factor.
**The student surpasses the teacher**. Following work in knowledge distillation West et al. (2022), experiments show GPT2-XL\({}_{distill}\) (student model, trained on knowledge graph triples, fine-tuned on Torquestra) outperforms few-shot GPT-3 (teacher model, trained to predict next word) (Table 4, \(\pm\)10%). Further, GPT2-XL\({}_{distill}\) outperforms original GPT2-XL in correctness (\(+\)8%) and completeness (\(\pm\)15%) (input text + G\({}_{temp}\)), evidence the distilled model learns causal structure, suggesting that we need commonsense knowledge for more complete and correct causal graph generation.
**Lexical methods for clustering texts are generally better**. Unsurprisingly, tf-idf significantly outperforms graph similarity methods across all metrics (Table 5). We note tf-idf clusters are 'quite' homogeneous, due in part to the provenance of the test data: Wikipedia articles automatically selected and labeled with topics Wang et al. (2020), likely with similar methods as ours.
**Advantages of matching using graph methods versus words alone**. We measure similarity of event sequences for clustering comparing text-to-texts and graph-to-graphs, and for matching experiments comparing graph-to-schemas and text-to-schemas. Advantages of our system are evident matching graph-to-schemas (Table 6, \(\pm\)9%).
We find graph-based methods help identify articles with similar causal stories, e.g., graphs with a 4-nary node'military operation'. Graph-based methods provide a means of measuring conceptual similarities between the causal stories associated with events that may not otherwise be matched. For example, our algorithm finds a high similarity between the 1939 'Invasion of Poland' and a head-on train collision, where both stories involve opposing forces running into each other explosively, with similar (predictable) tragic consequences.
**Smaller block size helps identify salient subgraphs**. In training, setting block size (the length of input presented to the model) to shorter lengths (e.g., \(<\) 300 subword units) provides the model with only a subset of causal triples for each text. As we topologically sort input graphs using breadth-first search, the model learns to generate salient and connected edges. We leave for future work more rigorous evaluation of salience detection using manual annotations we include as part of our data release.
**Evaluation is challenging**. There are many challenges associated with the evaluation of schema induction systems. On the one hand, lexical overlap and shared entities make two texts similar. On the other, similar causal structures, i.e. the causal schemas that stories share, can be discovered and compared. Still, the weighing of multiple semantic signals remains subjective.
We experimented with various means of evaluation: precision of topic labels (e.g., _man-made disaster_), overlap of event types (text as bag-of-events) and subsets of frequent event types (Eq. 1). We qualitatively assess graph structural similarity, with an automated tool a work in-progress.
We find multiple measures of schema similarity to be more robust than any single method, though we also recognize that more work, both theoretical and computational, needs to be done to develop still more reliable tools.
**Schema meaning**. Previous work views schemas as linear event orderings Chambers (2013) and as more complex graph structures Li et al. (2021); Du et al. (2022). How to further compose atomic meanings into larger semantic units for computational processing remains an open research question. Something like an event ontology of hierarchical event structures likely plays a role in the human conceptualization of event similarity; however, we make no hypotheses about better representations for computational applications.
## 7 Conclusion
We present Torquestra, a dataset of paired semantic graphs for studies of causal structure at the discourse level. Our experiments in causal graph generation, clustering, and schema matching provide insight into how to leverage Torquestra for knowledge discovery of latent causal structures of news texts, comparable to or outperforming search methods based on lexical similarity alone.
Research in knowledge discovery using causal schema induction will be of interest to historians, journalists, and health researchers looking for new angles on the study of narratives and stories. To support such research, we make our dataset, starter code, and evaluation tools publicly available7.
Footnote 7: [https://fd-semantics.github.io/](https://fd-semantics.github.io/)
## Acknowledgements
Our special thanks to the annotation team at the University of Colorado Boulder for help in collecting data. Thanks to Ed Hovy, Yejin Choi, Dan Roth, Martha Palmer, Frank Ferraro, the XPO ontology group, and Heng Ji for guidance, inspiration, and feedback. This research was supported in part by DARPA under I2O (RA-21-02) and DARPA under the KAIROS program (FA8750-19-2-1004).
|
2305.11269 | Energy dissipation in high speed impact on granular media | In this work, we thoroughly investigate the impact process on the granular
media in the limit when the ratio of the impact velocity to the acoustic speed
becomes of the order of 0.01-1, which is far greater than the existing
literature (0.0001-0.001). We show that the energy dissipation is largely due
to the energy cost associated with the exploration between different metastable
states via large scale reorganization of the force chain network. In this
regime, the conventional drag force models break down, and the drag force can
not be decomposed into a depth dependent static pressure and a depth
independent inertial drag as proposed in the existing literature. The high
dynamical stress generates acoustic pulses, which propagate longer distances
rather than decaying exponentially, as observed in the previous works. In the
latter stage of the impact process, the boundary also plays an essential role
in the reorganization of the force chains as the reflected acoustic pulses
interact with the original impact pulses. Furthermore, we study the scaling of
the early stage peak forces with the impact velocity and find that spatial
dimensionality strongly influences the scaling. | Manish Kumar Mandal, Saikat Roy | 2023-05-18T19:33:04Z | http://arxiv.org/abs/2305.11269v1 | # Energy dissipation in high speed impact on granular media
###### Abstract
In this work, we thoroughly investigate the impact process on the granular media in the limit when the ratio of the impact velocity to the acoustic speed becomes of the order of 0.01-1, which is far greater than the existing literature (\(0.0001-0.001\)). We show that the energy dissipation is largely due to the energy cost associated with the exploration between different metastable states via large scale reorganization of the force chain network. In this regime, the conventional drag force models break down, and the drag force can not be decomposed into a depth dependent static pressure and a depth independent inertial drag as proposed in the existing literature. The high dynamical stress generates acoustic pulses, which propagate longer distances rather than decaying exponentially, as observed in the previous works. In the latter stage of the impact process, the boundary also plays an essential role in the reorganization of the force chains as the reflected acoustic pulses interact with the original impact pulses. Furthermore, we study the scaling of the early stage peak forces with the impact velocity and find that spatial dimensionality strongly influences the scaling.
## I Introduction
Granular material is a special class of complex systems composed of many interacting constituents behaving collectively. On top of the inherent complex behaviour of granular material, the response of granular media under high speed impact is notoriously difficult to model analytically since the impact process never reaches a steady state (steady-state velocity different from zero) except when a heavy intruder hits a superlight granular media [1]. The existing constitutive laws for the granular materials are applicable only for the steady, fully developed flow conditions, whereas the granular impact process leads to unsteady and complex flow. Scarcity of governing equations and insufficient force and trajectory data at the level of grain led to the development of numerous phenomenological models[2; 3; 4; 5; 6; 7] to describe the scaling of crater morphology, collision time, penetration depth with that of impact velocity and grain and intruder properties.
Following these works, various researchers [8; 9; 10] proposed that the granular drag force term, \(F_{d}\) can be decomposed into a static depth dependent friction term and a velocity dependent inertial drag term. Although the drag model was shown to be valid in the past works, but almost all of the experimental and simulational studies in the literature focused on the impact velocities, \(V_{0}\) (1 to 5 \(m/s\)) that are far below than the velocity scale set by the acoustic speed, \(V_{a}\) (2000 to 5000 \(m/s\)) in the same media. Recently, Clark _et al._[11; 12] cleverly reduced the stiffness of the grains to bring down the force propagation speed and consequently made the low impact velocity approach the force propagation speed. The nature of force propagation was shown to depend on a dimensionless parameter, \(B\), which is the ratio of the collision time scale (\(t_{col}\)) and the time scale set by the intruder impact velocity (\(D_{p}/V_{0}\)), where \(D_{p}\) is the grain diameter and the collision time, \(t_{col}\) can be calculated based on the interaction law [12; 13]. Impact pulses propagate through the sparse force chains when \(B\approx 0.1\), whereas \(B\to 1\) leads to a dense space filling network with a homogeneous front. Although this study made some interesting observations, but it does not address the real scenario where the high speed impact requires greater amount of energy to be dissipated, and the energy dissipation mechanism can be quite different compared to the low speed impact. Very recent investigation [14] suggests some universal scaling of the early stage peak forces with the impact velocity, and the scaling turns out to be insensitive to the spatial dimension and many other system parameters. This is a puzzling observation and begs for a detailed study on the nature of the initial forces during an impact in granular media. The process of the granular impact and crater formation has rich physics with wider application in many disciplines like ballistics[15; 16], astrophysics[17], wind-blown transport of sands via granular splashing [18] and earth sciences[19]. At present, there is very scant literature available on the high speed (comparable to the force propagation speed) impact due to the technological limitations. The nature of the drag forces and the energy dissipation mechanism are completely unknown in this regime. In this work, we employ extensive numerical simulations (both in \(2D\) and \(3D\)) to comprehend the physics of the impact process in the high speed limit. The applicability of the existing drag force models is also tested for a wide range of \(V_{0}/V_{a}\), from 0.008 to 0.25. The mechanism of energy transfer and its eventual dissipation in granular media during high speed impact is unveiled via spatio-temporal monitoring of the displacement field, velocity field and complex force chain networks. The scaling of the early stage peak forces with the impact velocity is thoroughly investigated. Also, the effect of the boundary and its importance in transmitting or holding the impact stress is explored in detail.
## II Simulation methodology
### Contact interaction
Frictional granular material is used as a model system for studying high-speed impact cratering. Discrete element method (DEM) simulation is employed to keep track of the particles, with frictional interactions taking into account both the normal (\(F_{n}\)) and the path dependent tangential (\(F_{t}\)) forces. Simulations are performed both in two and three dimensions. The open source code Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [20; 21; 22] is used and customized to carry out the numerical simulation. In _2D_ as well as _3D_ the particle-particle interactions are modeled with a linear spring-dashpot model with a velocity dependent damping and static friction. We have also used a non-linear Hertzian interaction between particles in \(3D\). Static friction is implemented by tracking the elastic part of the shear displacement from the time the contact was first made. Particles \(i\) and \(j\), with position vectors given by \(\underline{r}_{i},\underline{r}_{j}\), have linear velocities \(\underline{v}_{i},\underline{v}_{j}\) and angular velocities \(\underline{\omega}_{i},\underline{\omega}_{j}\), respectively. Grains experience a normal force \(\underline{F}_{ij}^{(n)}\) whenever there is a relative normal compression on contact, given by \(\Delta_{ij}=|\underline{r}_{ij}-D_{ij}|\), where \(\underline{r}_{ij}\) denotes the vector joining the centers of mass and \(D_{ij}=R_{i}+R_{j}\) with \(R_{i}\) and \(R_{j}\) being the radii of the particles. The normal force is modeled as a Hookean spring-like interaction, whereas the tangential force is given by a similar linear elastic relation up to the sliding point [23]. The force magnitudes are given as
\[\underline{F}_{ij}^{(n)}=k_{n}\Delta_{ij}\underline{n}_{ij}-\frac{\gamma_{n}} {2}\underline{v}_{n_{ij}} \tag{1}\]
\[\underline{F}_{ij}^{(t)}=-k_{t}\underline{t}_{ij}-\frac{\gamma_{t}}{2}\underline{v}_{t_{ij}} \tag{2}\]
where \(\Delta_{ij}\) and \(t_{ij}\) denote normal and tangential displacements respectively; \(\underline{n}_{ij}\) denotes the normal unit vector given by \(\underline{r}_{ij}/|\underline{r}_{ij}|\). \(k_{n}\) and \(k_{t}\) are respectively stiffness of the springs for the normal and tangential mode of elastic displacement. For the Hertzian case, contact normal force is given as, \(F_{Hertzian}=F_{Hookean}\sqrt{\Delta_{ij}}\sqrt{\frac{R_{i}R_{j}}{R_{i}+R_{j}}}\). Viscoelastic damping constant for normal and tangential deformation are denoted by \(\gamma_{n}\) and \(\gamma_{t}\) respectively and \(\underline{v}_{n_{ij}}\) as well as \(\underline{v}_{tij}\) designate the normal and tangential component of the relative velocity between two grains. The relative normal and tangential velocity are given as:
\[\underline{v}_{n_{ij}}=(\underline{v}_{ij}.\underline{n}_{ij})\underline{n}_ {ij} \tag{3}\]
\[\underline{v}_{t_{ij}}=\underline{v}_{ij}-\underline{v}_{n_{ij}}-\frac{1}{2} (\underline{\omega}_{i}+\underline{\omega}_{j})\times\underline{r}_{ij}. \tag{4}\]
where \(\underline{v}_{ij}=\underline{v}_{i}-\underline{v}_{j}\). The elastic tangential displacement \(\underline{t}_{ij}\) is set to zero when the contact develops for the first time between two particles and is computed using \(\frac{d\underline{t}_{ij}}{dt}=\underline{v}_{t_{ij}}\). The simulation also accounts for the rigid body rotation around the contact point to make sure that \(\underline{t}_{ij}\) always remains in the local tangential plane of the contact. The gravitational forces are also accounted for in the simulation. The translational and rotational degrees of freedom of the particles are computed using Newton's second law; total forces and torques on particle \(i\) are given as:
\[\underline{F}_{i}^{(tot)}=m_{i}\underline{g}+\sum_{j}\left(\underline{F}_{ij}^{(n)}+\underline{F}_{ij}^{(t)}\right) \tag{5}\] \[\underline{\tau}_{i}^{(tot)}=-\frac{1}{2}\sum_{j}\underline{r}_{ij}\times\underline{F}_{ij}^{(t)}. \tag{6}\]
Note that, the tangential force follows a linear relationship with the relative tangential displacement at the contact point as long as the tangential force is below the limit set by the Coulomb friction,
\[F_{ij}^{(t)}\leq\mu F_{ij}^{(n)}\, \tag{7}\]
where \(\mu\) stands for the friction coefficient. Upon exceeding this limit, the contact slips in a dissipative fashion and the tangential displacement is truncated accordingly to satisfy the Coulomb criterion. The simulation also incorporates the effect of inelastic collisions for both normal and tangential modes of relative movement via the viscoelastic damping coefficients (\(\gamma_{n,t}\)), which are related to the coefficient of restitution (\(\epsilon_{n,t}\)) and the collision time as below:
\[\epsilon_{n,t}=exp(-\gamma_{n,t}t_{col}/2), \tag{8}\]
where the collision time \(t_{col}\) is given as:
\[t_{col}=\pi(2k_{n}/m-\gamma_{n}^{2}/4)^{-0.5} \tag{9}\]
In our simulation, \(\epsilon_{n,t}\) is taken as 0.9 [21] since the coefficient of restitution for dry sand falls in a similar range. In order to capture the dynamics at the time scale of a collision, the time step is set as \(t_{col}/50\), where \(t_{col}\) is calculated for the simulation parameters shown in Table 1.
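To make the contact law of Eqs. (1)-(7) concrete, a minimal per-pair sketch in plain NumPy is given below; this is an illustration only, not the LAMMPS implementation, it omits the projection of \(\underline{t}_{ij}\) back onto the tangential plane for brevity, and it uses the Table 1 reference values as default parameters.

```python
import numpy as np

def hookean_contact(delta, n, v_rel, t_elastic, dt,
                    kn=2e5, kt=2e5 * 2 / 7, gn=600.0, gt=600.0, mu=0.5):
    """Spring-dashpot contact force for one overlapping pair.

    delta     : overlap R_i + R_j - |r_ij| (> 0 on contact)
    n         : unit normal of the contact
    v_rel     : relative velocity of the contact point (incl. rotation)
    t_elastic : accumulated elastic tangential displacement (shear history)
    """
    v_n = np.dot(v_rel, n) * n                    # normal part of relative velocity
    v_t = v_rel - v_n                             # tangential part
    f_n = kn * delta * n - 0.5 * gn * v_n         # Eq. (1)
    t_elastic = t_elastic + v_t * dt              # update shear history
    f_t = -kt * t_elastic - 0.5 * gt * v_t        # Eq. (2)
    fn_mag, ft_mag = np.linalg.norm(f_n), np.linalg.norm(f_t)
    if ft_mag > mu * fn_mag and ft_mag > 0.0:     # Coulomb criterion, Eq. (7)
        f_t *= mu * fn_mag / ft_mag               # slide: rescale the force ...
        t_elastic = -(f_t + 0.5 * gt * v_t) / kt  # ... and truncate the history
    return f_n, f_t, t_elastic
```

Per-particle totals then follow by summing such pair forces together with gravity, as in Eqs. (5)-(6).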
Table 1 shows the parameters only for a reference simulation; we also varied the system parameters across simulations and note in the text whenever the values differ from the reference ones. The mass per unit area of the smallest particle (in \(2D\) bi-disperse particles are used), \(m_{g}\), is 0.133. Accordingly, the acoustic speed based on the properties of the smallest-diameter particle is approximately \(1200\) (\(V_{a}=\sqrt{\frac{k_{n}}{m_{g}}}\)).
\begin{table}
\begin{tabular}{c c c c c c c} \hline \(k_{n}\) & \(\gamma_{n}\) & \(\frac{k_{t}}{k_{n}}\) & \(\frac{\gamma_{t}}{\gamma_{n}}\) & \(\mu\) & \(\epsilon\) & \(g\) \\ \hline \(2\times 10^{5}\) & 600 & 2/7 & 1 & 0.5 & 0.90 & 9.8 \\ \hline \end{tabular}
\end{table}
Table 1: Simulation parameters used for both 2D and 3D simulations
Also, the parameter \(B(=\frac{t_{col}V_{0}}{D_{p}})\) can be calculated by estimating the collision time based on the initial impact velocity, \(V_{0}\), and the form of the potential (\(F=k_{n}\Delta^{\beta}\)) assuming no viscous dissipation. Here, \(D_{p}\) is the diameter of the smallest particle and \(\beta=1\) and \(1.5\) for Hookean and Hertzian, respectively. Consequently, \(B\) is given as \(P(\beta)\left(\frac{V_{0}}{V_{a}}\right)^{\left(\frac{2}{\beta+1}\right)}\), where \(P(\beta)\) is as follows [12],
\[P(\beta)=(\pi(\beta+1)/16)^{\frac{1}{\beta+1}}\frac{4\sqrt{\pi}\Gamma(1+\frac {1}{\beta+1})}{\Gamma(\frac{1}{2}+\frac{1}{\beta+1})} \tag{10}\]
For the Hookean case, \(P(\beta)=3.9374\). Note that all the parameter values are reported in SI units.
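The quoted numbers can be checked quickly; the short script below (NumPy/SciPy, a sketch using the Table 1 values) reproduces \(P(1)=3.9374\), the acoustic speed of roughly \(1200\), and the values of \(B\) implied by Eq. (10) in the Hookean case for the ratios \(V_{0}/V_{a}\) considered in this work.

```python
import numpy as np
from scipy.special import gamma

def P(beta):
    """Prefactor of Eq. (10)."""
    return ((np.pi * (beta + 1) / 16) ** (1 / (beta + 1))
            * 4 * np.sqrt(np.pi) * gamma(1 + 1 / (beta + 1))
            / gamma(0.5 + 1 / (beta + 1)))

kn, m_g = 2e5, 0.133                     # stiffness and areal mass of the smallest grain
V_a = np.sqrt(kn / m_g)                  # acoustic speed, ~1.2e3

print(f"P(1) = {P(1):.4f},  V_a = {V_a:.0f}")
for ratio in (0.008, 0.03, 0.25):        # V0/Va values studied here
    print(f"V0/Va = {ratio:5.3f} -> B = {P(1) * ratio ** (2 / (1 + 1)):.3f}")
```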
### Bed preparation
For \(3D\) simulation of high speed impact, we first make a three dimensional box with periodic boundary conditions in the \(x\) and \(y\) direction, and a fixed wall at the bottom plane whose outward unit normal is \(\widehat{e_{z}}\). The box has a dimension of \(30D_{p}\times 30D_{p}\times 500D_{p}\). The length in the \(z\) direction is kept long enough to prevent atom loss due to the high speed impact. Next, \(N=50000\) (larger system size \(N=250000\) is also investigated) mono-disperse spherical particles having diameter, \(D_{p}=0.2m\) are dropped under gravity and following that, the bed is allowed sufficient time to attain the mechanically stable state with an average force and torque on each particle of the order of \(10^{-9}\) with negligible kinetic energy. This stabilization process is extremely important to ensure that we start with a stable bed rather than some fragile configuration which may lead to spurious results. After a stable bed is created, the impacting ball of spherical shape of diameter \(10D_{p}\) is placed very close to the free surface and is given an initial impact velocity in the \(z\) direction which is also the direction of the gravity. The average stable bed height is \(46D_{p}\) and the corresponding volume fraction, \(\phi\) of the prepared bed is \(0.62\), which is very close to the random close packing[24] state.
We also performed simulations in _two dimensions_ because visualization of the grain scale phenomena is much easier in \(2D\) compared to higher dimensions. Similar to \(3D\), we first define a simulation box of dimension \(100D_{p1}\times 1000D_{p1}\) in the \(xy\) plane, followed by pouring of \(10000\) (we also simulate a larger system, \(N=40000\)) bi-disperse granular particles, half of which have diameter \(D_{p1}=0.2m\) and the other half diameter \(D_{p2}=0.28m\), into the box under gravity. Bi-disperse particles were selected to prevent crystallization, which is spontaneous for mono-disperse particles in two dimensions. The particles were also poured layer by layer instead of pouring them in one go, to achieve a stable configuration with sufficient mechanical equilibrium. Pouring all the particles at once would have caused large collisional stress, leading to a very long computational time before the system can be relaxed. Following the same analysis as for the \(3D\) bed, the bed height in \(2D\) is calculated to be \(139D_{p1}\) with a packing fraction of \(0.83\). In Fig. 1, a representative schematic of the initial \(2D\) granular bed is shown with and without the force chains. The force chain figure is a visual representation of the contact forces between particles, giving an idea of the gradient of the stress created by gravity. The contact force data was used to draw force chains with a thickness that scales with the magnitude of the force. As expected, the force chains are denser and thicker at the bottom due to the gravitational stress, whereas the force chain network is very sparse close to the surface. Force chains represent the stress transmission paths in the granular media and play a crucial role in the propagation of any disturbance through the granular material.
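As an aside, the two diagnostics used above to certify the prepared bed, the residual force per grain and the packing fraction, amount to a few lines of post-processing; the sketch below assumes hypothetical NumPy arrays of per-grain net forces and radii and is not tied to any particular output format.

```python
import numpy as np

def bed_diagnostics(net_forces, radii, box_width, bed_height):
    """net_forces: (N, 2) array of net force per grain; radii: (N,) grain radii (2D)."""
    residual = np.linalg.norm(net_forces, axis=1).mean()          # ~1e-9 for a relaxed bed
    phi = np.pi * np.sum(radii ** 2) / (box_width * bed_height)   # 2D packing fraction
    return residual, phi
```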
## III Results and discussions
### Phenomenology of impact
We begin our study by analyzing the force on the intruder exerted by the granular media in Fig. 2 (a)-(b) for both \(2D\) and \(3D\). In line with the low speed impact[14], the drag force attains a maximum very quickly, followed by slow relaxations dominated by fluctuations. The fluctuations decay very fast in the three dimension compared to the two dimensional case since the extra dimension gives the system additional direction to relax the effect of the impact. Also, the response in \(3D\) in terms of the peak force becomes stronger than \(2D\) with the increase in impact velocity. Fig. 2 (c)-(d) depicts the power law scaling of the peak force with \(V_{0}\). Interestingly, the exponent in \(3D\) is higher (\(\sim 1.5\)) than in 2D, contradicting the recent results where the scaling exponent (\(\sim 1.33\)) is independent of the spatial dimension. Since the volume of the configurational space of stress paths (force chains) is greater in \(3D\) than \(2D\), the number of acoustic events or pulses that carry the intruder energy into
Figure 1: Left: Stable granular bed in two dimensions. Right: Corresponding force chain network before the impact.
the medium also increases proportionately in three dimension compared to the two-dimensional case. Hence, greater resistance to impact is observed in \(3D\) than in \(2D\) for comparable initial impact speeds. To explicitly show that the observed scaling is not an artifact of the finite size, we vary the system size in both two and three dimension and find that the scaling exponent remains invariant of the system size. We have also checked in all our simulations that the peak force occurs well before the acoustic pulses get reflected from the boundary (Please see supplemental[13] video 1 showing the pulse propagation in a large system, \(N=40000\)). The exponent of the power law scaling is also insensitive to the interaction potential, and the stiffness, in line with the recent observations [14].
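The exponents quoted here come from power-law fits of the dimensionless peak force against \(V_{0}/V_{a}\); a minimal version of such a fit (illustrative only, assuming arrays of measured peak forces) is a linear regression in log-log space.

```python
import numpy as np

def powerlaw_fit(v_ratio, f_peak_scaled):
    """Fit F_max/(k_n D) = A (V0/Va)^alpha and return (alpha, A)."""
    alpha, logA = np.polyfit(np.log(v_ratio), np.log(f_peak_scaled), 1)
    return alpha, np.exp(logA)

# For the data of Fig. 2(c)-(d) one expects alpha close to 1.33 in 2D and 1.5 in 3D.
```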
Note that during the attainment of the peak force, the velocity does not reduce significantly and the intruder hardly penetrates. After reaching the peak, the retarding force starts to relax and the ball starts penetrating significantly into the media with "stop and go" kind of
Figure 2: (a) Temporal variation of drag force in \(2D\) for \(V_{0}/V_{a}\)=0.03, 0.04, 0.06 and 0.08: inset shows the corresponding variation of velocity as a function of the intruder depth. (b) Drag force in \(3D\) for the similar range of velocities, inset: intruder velocity vs intruder depth. (c) Scaling of the dimensionless peak force, \(F_{max}/k_{n}D\) with the non-dimensional impact velocity, \(V_{0}/V_{a}\) in \(2D\) for different stiffness and system size. The solid line shows a power law fit with an exponent \(\sim 1.33\). (d) The same scaling in \(3D\) for different system size, interaction law and stiffness. The solid fit corresponds to a power law scaling with an exponent \(\sim 1.5\). (e) and (f) show the force chain evolution (for Hookean interaction) in the initial stages of the impact for \(B=0.01\) and \(B=0.1\), respectively. Here, the line thickness is scaled according to the contact force magnitude normalized by the mean force.
motion where at some moments the ball is falling freely under gravity even inside the bed and this phenomenon is observed for all ranges of impact speed. The stronger fluctuations in \(2D\) is also reflected in the velocity-depth trajectories (see the inset of Fig. 2(a) and (b)). The shape of the velocity-depth curve is concave upward, which presents striking dissimilarity with the existing literature in the low speed limit [6; 10; 25], where the shape is concave downward and can be reproduced by solving the conventional drag models [1; 9; 25]. This observation is evocative of the possible breakdown of the known macroscopic drag force models in the high speed limit and presents the possibility of unexplored rich grain scale physics.
### Grain scale picture
Before we test the drag force models explicitly, we turn our attention to the grain scale picture of the impact process in terms of the spatio-temporal variations of the complex force networks, the displacement field and the velocity field. Although the recent experiments [11; 12] with photoelastic disks presented some interesting grain scale pictures of the impact process, an exact and complete understanding of the force network evolution and its effect on the intruder motion is still lacking. Photoelastic measurements have a resolution of \(256\times 584\) pixels [11] at high speed and thus give only a measure of the total photoelastic intensity in an image. In contrast, simulations can provide better insights into the nature of the vectorial contact forces. In Fig. 2 (e)-(f), we show the evolution of the force chain networks in the early stages of the impact for the two values \(B=0.01\) and \(B=0.1\). In both cases, before the impact, gravity sets the gradient of the pressure, for which the force chains look denser at the bottom. As soon as the intruder strikes the bed, the large dynamical stress dictates the gradient of pressure, and the force chains look denser close to the impact point. The impact energy propagates in the form of acoustic pulses, which reach the end of the system boundary even though the system size is large enough to avoid boundary effects (see the Supplemental videos 2-3 [13]). This very fast, large length scale propagation of disturbances is evocative of the collective motion of the granular particles, which are correlated up to long range even before the impact.
We observe the reflection of the acoustic pulses from the boundary and the sideways scattering and branching of the pulses. Reflected pulses also interact with the original pulses emitted from the intruder and give rise to continuous large scale reorganization of the force chains, which results in the temporally fluctuating force on the intruder. None of the existing experimental studies capture such long range propagation of disturbances due to the low resolution of the photoelastic measurements that are typically used to characterize the forces in the experiments. The existing literature, without any physical explanation, suggests an exponential decay of the pulses meaning the pulses decay almost immediately after traveling only a few particle diameter, which is at variance with our simulation observations. We observe that the force propagation happens via a well-defined compression front for \(B=0.01\), which is far below that reported (\(B=0.6\)) in the recent \(2D\) experiments [12]. Even for \(V_{0}/V_{a}=0.03\) (\(B=0.1\)), we see a dense compression front propagating through the media whereas the previous observations showed sparse chain-like force propagation for the same value of \(B\). We speculate that the setup used in the experiment had strong side-wall friction and boundary effects which led to the quick damping of the energy pulses. It is also possible that the limited resolution of the photoelastic response at low stress levels makes the determination of the signal propagation far beneath the intruder difficult.
We also simultaneously analyze the particle displacement field near the bottom (see Fig.3 (a)-(b) and also supplemental video 4 [13]) and observe that a compression front indeed reaches the bottom, and a strong elastic resistance is provided by the bottom wall leading to the flip in the particle displacement field. During the whole process, the particles are moving cooperatively, and the flipping of the displacement field takes longer than the collision time, meaning a large length scale reorganization is inevitable. Intriguingly, the phenomenon of compression and decompression keeps repeating until the in
Figure 3: (a) Displacement field near the bottom wall showing the arrival of the compression front. Here, \(V_{0}=4.4\) m/s and displacement magnitude is magnified 40 times the original. (b) Elastic unloading of the bed leading to the reversal of the displacement field.(c)-(d) Temporal variation of the force on the wall and the intruder in \(3D\) are plotted simultaneously for \(V_{0}=4.4\) and 40 m/s, respectively. We also observe a similar behavior in \(2D\) (not shown).
truder comes to rest, which also gives rise to large fluctuations in the force time series. We also show the velocity field at different stages of the impact in the supplemental videos 5-6[13] for low and high impact velocities, and the long distance propagation of an acoustic pulse is vividly observed. The disturbance propagation speed can also be estimated by monitoring the force on the wall (See Fig. 3 (c)-(d)) and measuring the time taken for the disturbance to reach the wall (time of flight measurement). The force propagation speed is almost of the order of the acoustic speed (\(1200m/s\)), and it is independent of the impact velocity for the linear interaction. Surprisingly, the temporal variation of the force on the wall looks very similar to the force-time series of the intruder, albeit with lesser fluctuations since the wall is in contact with large number of force chains. As the wall and the intruder are far apart, a similar temporal response at distant points suggests that the large scale reorganization of the force chain networks dictates the response.
### Force network reorganization, dissipation and breakdown of inertial drag models
For the quantitative description of the force chain reorganization, we now monitor the anisotropy of the force network and its preferred orientations. As earlier studies showed that friction does not significantly influence the dynamic impact process, we focus on the force skeletons formed by the contact normal forces only. The normal force anisotropy and its preferred direction are defined by \(a_{n}\) and \(\theta_{f}\), respectively. The calculation of these parameters from the discrete simulation data is performed by introducing a second order tensor, \(\xi_{ij}\approx\frac{1}{N_{g}}\sum_{\theta_{g}}\bar{f}_{n}n_{i}n_{j}\), where \(N_{g}\) denotes the number of orientation intervals spanning from \(0\) to \(2\pi\), \(\theta_{g}\) is the average orientation of a group and the corresponding average normal force is denoted by \(\bar{f}_{n}\), \(n_{i}\) denotes the Cartesian components of the contact unit normal vector. Anisotropy parameters, \(a_{n}\) and \(\theta_{f}\) are related to the invariants of \(\xi_{ij}\) and its principal directions: \(a_{n}=\frac{2\sqrt{(\xi_{11}-\xi_{22})^{2}+4\xi_{12}^{2}}}{\xi_{11}+\xi_{22}}\); \(tan\,2\theta_{f}=\frac{2\xi_{12}}{\xi_{11}-\xi_{22}}\).
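The anisotropy measures just defined translate directly into a short post-processing routine; the sketch below (plain NumPy, 2D, with hypothetical input arrays of contact normal force magnitudes and contact unit normals) evaluates \(\xi_{ij}\), \(a_{n}\) and \(\theta_{f}\).

```python
import numpy as np

def force_anisotropy(f_n, normals, n_groups=36):
    """Return (a_n, theta_f) from contact normal forces and unit normals (2D)."""
    theta = np.arctan2(normals[:, 1], normals[:, 0]) % (2 * np.pi)
    edges = np.linspace(0.0, 2 * np.pi, n_groups + 1)
    xi = np.zeros((2, 2))
    for k in range(n_groups):
        sel = (theta >= edges[k]) & (theta < edges[k + 1])
        if not sel.any():
            continue
        th = 0.5 * (edges[k] + edges[k + 1])        # group orientation theta_g
        n = np.array([np.cos(th), np.sin(th)])
        xi += f_n[sel].mean() * np.outer(n, n)      # mean normal force of the group
    xi /= n_groups
    a_n = 2 * np.hypot(xi[0, 0] - xi[1, 1], 2 * xi[0, 1]) / np.trace(xi)
    theta_f = 0.5 * np.arctan2(2 * xi[0, 1], xi[0, 0] - xi[1, 1])
    return a_n, theta_f
```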
Fig. 4 depicts the temporal variation of both the anisotropy and its principal direction, along with the force time series of the intruder for different impact velocities. Before the impact, the force chains are organized mostly in the direction of gravity and \(a_{n}=0.2\). Upon impact, the force chains start to reorganize (see the inset of Fig. 4) as reflected by the change in the principal direction (\(\theta_{f}\)) of the force network. Also, the force network becomes progressively anisotropic to support the sudden impact load and hits a peak, which in turn gives rise to a maximum force on the intruder. After the peak, the force anisotropy decreases in a manner similar to the decrease in the force on the intruder. In the later stages of the relaxation, temporal variation of the force
Figure 4: (a) Temporal variation of the force anisotropy, \(a_{n}\) (Right \(y\)-axis) is plotted simultaneously with the temporal variation of the force, \(F_{i}\) on the intruder (Left \(y\)-axis) for \(B=0.2\): The inset shows the change in the orientations of the force network as a function of time (b) Similar to (a) except \(B=0.5\).
Figure 5: (a) Net acceleration, \(a+g\) versus square of the velocity, \(V^{2}\) is plotted at seven fixed depths(\(d\)) for different initial impact velocities (\(V_{0}/V_{a}=0.008\) to \(0.25\)) in \(2D\), (b) the same is plotted for the three dimensional case for impact velocities ranging from \(V_{0}/V_{a}=0.02\) to \(0.13\).
anisotropy decorrelates from the force-time series of the intruder. Furthermore, we find a strong correlation between the orientation of the normal force network and the temporal evolution of the force on the intruder. We also observe a time lag between the force network reorganization and its effect to be felt on the intruder force. The time lag decreases with the impact velocity, suggestive of a decreasing length scale upto which the reorganization occurs. In summary, under high speed impact, the granular media constantly traverses between different fragile states via large scale reorientation of the force networks, and the force on the intruder is the consequence of this large scale reorganization. These transient rearrangements of the force network lead to plastic dissipation and are the principal energy loss mechanism during the high speed impact. Finally, we test the validity of the existing drag models [8; 9] in both \(2D\) and \(3D\) by monitoring the net acceleration and the speed of the intruder at different fixed depths for different impact velocities. If the depth-independent inertial drag were to apply to our high speed regime, net acceleration would be quadratic in speed resulting in parallel straight lines when \(a+g\) is plotted against \(V^{2}\) for different depths. Fig. 5 instead presents an entirely contrasting picture; depth-dependent quadratic profiles are obtained when net acceleration is plotted against the square of the speed at a fixed depth for different trajectories. Hence, the conventional depth independent inertial drag models are unable to capture the force on the intruder in the high speed limit; rather we observe that the net acceleration varies linearly with the velocity with a depth dependent slope, though this scaling needs to be checked extensively with large data set.
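The fixed-depth test of Fig. 5 only requires resampling each trajectory at a common set of depths; a sketch of this bookkeeping (hypothetical data layout: one record per impact velocity with monotonically increasing depth) is given below.

```python
import numpy as np

def sample_at_depths(trajectories, depths, g=9.8):
    """Collect (V^2, a+g) pairs at fixed depths from several impact trajectories.

    trajectories: list of dicts with 1D arrays 'd' (depth, increasing), 'v', 'a'.
    """
    cloud = {d: [] for d in depths}
    for tr in trajectories:
        for d in depths:
            if d > tr["d"].max():
                continue                      # this trajectory never reaches depth d
            v = np.interp(d, tr["d"], tr["v"])
            a = np.interp(d, tr["d"], tr["a"])
            cloud[d].append((v ** 2, a + g))
    return cloud                              # one point cloud per depth, as in Fig. 5
```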
## IV Conclusion
In summary, we employed large scale numerical simulations to understand the response of granular media under a high speed impact. Although a large volume of work on granular impact exists in the literature, most of the approaches to tackle such a complex problem are heuristic, with insufficient grain scale understanding of the highly dynamic impact phenomenon. This work presents a detailed microscopic length scale picture of the impact process in terms of the evolution of the inhomogeneous force chain networks, the displacement field, and the velocity field as the impact progresses. This particle scale information proved to be quite useful in understanding the dissipation mechanism in granular materials, which are neither solid nor fluid. Contrary to the previous works showing an exponential spatial decay of the acoustic pulses, we vividly demonstrate a large length scale propagation of disturbances that get reflected from the boundary, interfering with the original pulses. These acoustic pulses, in turn, induce large scale reorganization of the force chain network, and the granular media constantly explores different fragile states to support the impact load. Reorientation of the force chains leads to plastic dissipation and the eventual absorption of the impact energy. The large scale temporal evolution of the force chain networks dictates the force on the intruder. Consequently, this novel energy dissipation picture does not corroborate the conventional drag models, and hence the depth independent inertial drag forces break down. Furthermore, the power law scaling of the early stage peak forces with the impact velocity shows a dependence on the spatial dimensionality, which is at variance with the past works. The result of this work calls for the development of a novel theoretical framework to explain the drag force on the intruder in the high speed limit. It would also be interesting to study the effect of cohesive interactions on the different aspects of the impact process [26] and the scaling of the peak forces since, in a natural setup, attractive forces are expected to be present due to van der Waals forces, humidity, moisture, etc.
_Acknowledgement_-S.R. acknowledges the support of SERB under Grant No. SRG/2020/001943 and the IIT Ropar under ISIRD grant.
|
2305.09426 | Fluctuations of the energy density and intensity for arbitrary objects
in an arbitrary environment | I apply the scattering approach within the framework of macroscopic quantum
electrodynamics to derive the variances and mean values of the energy density
and intensity for a system of an arbitrary object in an arbitrary environment.
To evaluate the temporal bunching character of the energy density and
intensity, I determine the ratio of their variances with respect to their mean
values. I explicitly evaluate these ratios for the cases of vacuum, a
half-space in vacuum, and a sphere in vacuum. Eventually, I extend the
applicability of this theory to the case of more than one arbitrary object,
independent of the geometrical shapes and materials. | Florian Herz | 2023-05-16T13:33:00Z | http://arxiv.org/abs/2305.09426v2 | # Fluctuations of the energy density and intensity for arbitrary objects in an arbitrary environment
###### Abstract
I apply the scattering approach within the framework of macroscopic quantum electrodynamics to derive the variances and mean values of the energy density and intensity for a system of an arbitrary object in an arbitrary environment. To evaluate the temporal bunching character of the energy density and intensity, I determine the ratio of their variances with respect to their mean values. I explicitly evaluate these ratios for the cases of vacuum, a half-space in vacuum, and a sphere in vacuum. Eventually, I extend the applicability of this theory to the case of more than one arbitrary object, independent of the geometrical shapes and materials.
## I Introduction
In most works on near-field thermal radiation, theory and experiment focus on the analysis of coherence properties of the first order, i.e. the heat flux or mean Poynting vector. In contrast, higher order coherence properties of thermal near-field radiation, for instance, the variance of the energy density or the heat flux, are only scarcely investigated. From an experimental point of view, the variance or fluctuations around the mean values can only be monitored with improved ultra fast measurement methods. In a theoretical description, this demands the additional assumption of the Gaussian property for thermal radiation in the near-field regime to be able to evaluate the corresponding correlation functions within fluctuational electrodynamics. This property was used to calculate the variance of the Casimir-Lifshitz force [1; 2; 3] and the vacuum friction [4] in the near-field. In recent years, the fluctuations of thermal quantities have also moved into the focus of interest, especially when evaluating their impact on experiments in which the spectral information is lost while measuring the heat currents [5], for instance. There are also works on the variance of the mean Poynting vector between two planar media [6]. A very important application of these higher order correlation functions are Green-Kubo relations, which connect the linear transport coefficients of a system out of thermal equilibrium with the equilibrium fluctuations of the corresponding quantity [7; 8].
The first order spatial coherence property of thermal radiation is well studied in the far- and near-field regime. For instance, for half-spaces it was shown that the coherence length strongly depends on the chosen material. If it supports surface waves, the coherence length can be much larger than the well-known \(\lambda/2\) of black-body radiation but if it does not support them, the coherence lengths can be much shorter [9]. This was later validated by discussing the contributions of surface waves, skin-layer currents, and small-scale polarization fluctuations to the cross-spectral density tensor [10] as well as by analyzing the energy density with respect to surface waves [11]. For periodically micro-structured SiC and photonic crystals this can be exploited to confine the emission angles to build an infrared antenna [12; 13; 14].
For thermal radiation, the expectation value defined by Glauber [15] has to be evaluated by using the density matrix formalism because it is a mixed state due to the broad range of frequencies involved. Then, these expectation values can be treated by macroscopic quantum electrodynamics (MQED) formalism introduced by Scheel and Buhmann [16]. For some dielectric materials like SiC, the near-field spectrum becomes quasi-monochromatic due to the resonance at the surface phonon polariton (SPhP) frequency. Such a change of the spectrum from broadband in the far-field to quasi-monochromatic in the near-field makes it interesting to investigate the bunching property of the thermal near-field radiation. By employing the scattering approach introduced by Rahi et al. [17] and Kruger et al. [18], the correlation functions necessary to study second order coherence can even be generalized to basis independent expressions.
In the following, I will derive the mean values and variances of the intensity and the energy density for a system of an arbitrary object in an arbitrary environment. Subsequently, I will compute the degree of coherence which I use to investigate the bunching character of three special systems - vacuum, a substrate in vacuum, and a sphere in vacuum. Eventually, I also extend this theory to a system of more than one arbitrary object.
## II Theoretical framework
In classical electrodynamics the energy density \(u\) is given by
\[u(\mathbf{r},t)=\frac{\varepsilon_{0}}{2}\mathbf{E}^{2}(\mathbf{r},t)+\frac{\mu_ {0}}{2}\mathbf{H}^{2}(\mathbf{r},t) \tag{1}\]
with the electric field \(\mathbf{E}\), the magnetic field \(\mathbf{H}\), the vacuum's permittivity \(\varepsilon_{0}\), and vacuum's permeability \(\mu_{0}\) which are connected by \(\mu_{0}\varepsilon_{0}=1/c^{2}\). Since fluctuational electrodynamics treat thermal fluctuations as sources of the electromagnetic fields, the fields and the energy density become fluctuational quantities. Therefore, general mean values are evaluated. Here, to write down the mean value of the energy density, the squared expressions on the right hand side will be replaced by the correlation functions of the considered field
\[\left\langle\!\left\langle u(\mathbf{r},t)\right\rangle\!\right\rangle=\frac{\varepsilon_{0}}{2}\left\langle\!\left\langle\hat{\mathbf{E}}^{2}(\mathbf{r},t)\right\rangle\!\right\rangle+\frac{\mu_{0}}{2}\left\langle\!\left\langle\hat{\mathbf{H}}^{2}(\mathbf{r},t)\right\rangle\!\right\rangle. \tag{2}\]
Note that I replaced the fields by quantum mechanical operators denoted by the \(\hat{\cdot}\) symbol. Here, I will use the symmetrically ordered operators to obtain the energy density and its fluctuations. In addition, let me introduce the positive and negative frequency field operators \(\hat{\mathbf{E}}^{\pm}\) defined by
\[\hat{\mathbf{E}}(\mathbf{r},t)=\hat{\mathbf{E}}^{+}(\mathbf{r},t)+\hat{ \mathbf{E}}^{-}(\mathbf{r},t) \tag{3}\]
which are simply given by
\[\hat{\mathbf{E}}^{\pm}(\mathbf{r},t)=\int_{0}^{\infty}\frac{\mathrm{d}\omega}{2\pi}\tilde{\mathbf{E}}(\mathbf{r},\pm\omega)e^{\mp\mathrm{i}\omega t}. \tag{4}\]
Herein, \(\hat{\mathbf{E}}^{+}\) describes the annihilation of a photon and \(\hat{\mathbf{E}}^{-}\), its Hermitian conjugate, its creation. By using the definition in Eq. (3) and taking advantage of the stationarity of the fields, i.e. \(\langle\!\langle\hat{E}^{+}(\omega)\hat{E}^{-}(\omega^{\prime})\rangle\! \rangle\propto 2\pi\delta(\omega-\omega^{\prime})\), the energy density becomes
\[\left\langle\!\left\langle u(\mathbf{r},t)\right\rangle\!\right\rangle = \int_{0}^{\infty}\frac{\mathrm{d}\omega}{2\pi}\frac{\varepsilon_{0}}{2}\Big[\left\langle\!\left\langle\hat{E}_{i}^{+}(\mathbf{r},\omega)\hat{E}_{i}^{-}(\mathbf{r},\omega)\right\rangle\!\right\rangle+\left\langle\!\left\langle\hat{E}_{i}^{-}(\mathbf{r},\omega)\hat{E}_{i}^{+}(\mathbf{r},\omega)\right\rangle\!\right\rangle\Big] \tag{5}\] \[+\int_{0}^{\infty}\frac{\mathrm{d}\omega}{2\pi}\frac{\mu_{0}}{2}\Big[\left\langle\!\left\langle\hat{H}_{i}^{+}(\mathbf{r},\omega)\hat{H}_{i}^{-}(\mathbf{r},\omega)\right\rangle\!\right\rangle+\left\langle\!\left\langle\hat{H}_{i}^{-}(\mathbf{r},\omega)\hat{H}_{i}^{+}(\mathbf{r},\omega)\right\rangle\!\right\rangle\Big].\]
Note that index \(i\) indicates Einstein summation over the vector components. In quantum mechanics, however, a measuring process, e.g. in a photon interference experiment, is not described by symmetrically ordered operators. That is because a photon is annihilated at the detector during the measuring process. This corresponds to the intensity \(I\) defined by normally ordered operators
\[\left\langle\!\left\langle I(\mathbf{r},t)\right\rangle\!\right\rangle=\left\langle\!\left\langle\hat{E}_{i}^{-}(\mathbf{r},t)\hat{E}_{i}^{+}(\mathbf{r},t)\right\rangle\!\right\rangle+\frac{\mu_{0}}{\varepsilon_{0}}\left\langle\!\left\langle\hat{H}_{i}^{-}(\mathbf{r},t)\hat{H}_{i}^{+}(\mathbf{r},t)\right\rangle\!\right\rangle. \tag{6}\]
Note that \(\langle\!\langle I\rangle\!\rangle\) follows from \(\langle\!\langle u\rangle\!\rangle\) by dropping the first term in each line of Eq. (5) and multiplying by \(2/\varepsilon_{0}\).
For the fluctuations of the energy density, one has to evaluate the correlation function of the energy density. This results in a correlation function of four operators, namely
\[\langle\!\langle u(\mathbf{r},t)u(\mathbf{r}^{\prime},t^{ \prime})\rangle\!\rangle = \frac{\varepsilon_{0}^{2}}{4}\langle\!\langle\hat{E}_{i}(\mathbf{ r},t)\hat{E}_{i}(\mathbf{r},t)\hat{E}_{j}(\mathbf{r}^{\prime},t^{\prime})\hat{E}_{j}( \mathbf{r}^{\prime},t^{\prime})\rangle\!\rangle \tag{7}\] \[+\frac{\mu_{0}^{2}}{4}\langle\!\langle\hat{H}_{i}(\mathbf{r},t) \hat{H}_{i}(\mathbf{r},t)\hat{H}_{j}(\mathbf{r}^{\prime},t^{\prime})\hat{H}_{ j}(\mathbf{r}^{\prime},t^{\prime})\rangle\!\rangle\] \[+\frac{\varepsilon_{0}\mu_{0}}{4}\langle\!\langle\hat{H}_{i}( \mathbf{r},t)\hat{H}_{i}(\mathbf{r},t)\hat{E}_{j}(\mathbf{r}^{\prime},t^{ \prime})\hat{E}_{j}(\mathbf{r}^{\prime},t^{\prime})\rangle\!\rangle.\]
This expression can be rewritten in terms of correlation functions of two operators by exploiting the Gaussian property of thermal radiation yielding
\[\left\langle\!\left\langle u(\mathbf{r},t)u(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle = \left\langle\!\left\langle u(\mathbf{r},t)\right\rangle\!\right\rangle\!\left\langle\!\left\langle u(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle+\frac{\varepsilon_{0}^{2}}{2}\left\langle\!\left\langle\hat{E}_{i}(\mathbf{r},t)\hat{E}_{j}(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle\!\left\langle\!\left\langle\hat{E}_{i}(\mathbf{r},t)\hat{E}_{j}(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle \tag{8}\] \[+\frac{\mu_{0}^{2}}{2}\left\langle\!\left\langle\hat{H}_{i}(\mathbf{r},t)\hat{H}_{j}(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle\!\left\langle\!\left\langle\hat{H}_{i}(\mathbf{r},t)\hat{H}_{j}(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle\] \[+\frac{\varepsilon_{0}\mu_{0}}{2}\Big[\left\langle\!\left\langle\hat{E}_{i}(\mathbf{r},t)\hat{H}_{j}(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle\!\left\langle\!\left\langle\hat{E}_{i}(\mathbf{r},t)\hat{H}_{j}(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle+\left\langle\!\left\langle\hat{H}_{i}(\mathbf{r},t)\hat{E}_{j}(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle\!\left\langle\!\left\langle\hat{H}_{i}(\mathbf{r},t)\hat{E}_{j}(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle\Big].\]
Let me now come back to the variance
\[\mathrm{Var}_{u}=\left\langle\!\left\langle u(\mathbf{r},t)u(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle-\left\langle\!\left\langle u(\mathbf{r},t)\right\rangle\!\right\rangle\!\left\langle\!\left\langle u(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle. \tag{9}\]
This quantity can now be evaluated by inserting Eq. (8) and performing a Fourier transform, giving
\[\mathrm{Var}_{u} =\int_{0}^{\infty}\frac{\mathrm{d}\omega}{2\pi}\int_{0}^{\infty} \frac{\mathrm{d}\omega^{\prime}}{2\pi}\bigg{\{}\frac{\varepsilon_{0}^{2}}{2} \!\left[\mathsf{C}_{\mathrm{EE}}^{*ij}(\omega)e^{-\mathrm{i}\omega\tau}+ \mathsf{C}_{\mathrm{EE}}^{*ij}(\omega)e^{\mathrm{i}\omega\tau}\right]\!\left[ \mathsf{C}_{\mathrm{EE}}^{*ji}(\omega^{\prime})e^{\mathrm{i}\omega^{\prime} \tau}+\mathsf{C}_{\mathrm{EE}}^{*ji}(\omega^{\prime})e^{-\mathrm{i}\omega^{ \prime}\tau}\right]^{*}\] \[\quad+\frac{\mu_{0}^{2}}{2}\!\left[\mathsf{C}_{\mathrm{HH}}^{*ij }(\omega)e^{-\mathrm{i}\omega\tau}+\mathsf{C}_{\mathrm{HH}}^{*ij}(\omega)e^{ \mathrm{i}\omega\tau}\right]\!\left[\mathsf{C}_{\mathrm{HH}}^{*ji}(\omega^{ \prime})e^{\mathrm{i}\omega^{\prime}\tau}+\mathsf{C}_{\mathrm{HH}}^{*ji}( \omega^{\prime})e^{-\mathrm{i}\omega^{\prime}\tau}\right]^{*}\] \[\quad+\frac{\epsilon_{0}\mu_{0}}{2}\!\left(\left[\mathsf{C}_{ \mathrm{EH}}^{*ij}(\omega)e^{-\mathrm{i}\omega\tau}+\mathsf{C}_{\mathrm{EH}}^{* ij}(\omega)e^{\mathrm{i}\omega\tau}\right]\!\left[\mathsf{C}_{\mathrm{HE}}^{*ji}(\omega^{ \prime})e^{\mathrm{i}\omega^{\prime}\tau}+\mathsf{C}_{\mathrm{HE}}^{*ji}( \omega^{\prime})e^{-\mathrm{i}\omega^{\prime}\tau}\right]^{*}\right.\] \[\quad\left.+\left[\mathsf{C}_{\mathrm{HE}}^{*ij}(\omega)e^{- \mathrm{i}\omega\tau}+\mathsf{C}_{\mathrm{HE}}^{*ij}(\omega)e^{\mathrm{i} \omega\tau}\right]\!\left[\mathsf{C}_{\mathrm{EH}}^{*ji}(\omega^{\prime})e^{ \mathrm{i}\omega^{\prime}\tau}+\mathsf{C}_{\mathrm{EH}}^{*ji}(\omega^{\prime}) e^{-\mathrm{i}\omega^{\prime}\tau}\right]^{*}\right\}\bigg{\}} \tag{10}\]
with
\[\mathsf{C}_{\mathrm{AB}}^{*ij}(\omega)=\left\langle\!\!\left\langle\hat{A}_{i} ^{+}(\mathbf{r},\omega)\hat{B}_{j}^{-}(\mathbf{r}^{\prime},\omega)\right\rangle\!\right\rangle\!. \tag{11}\]
Note that indices \(i\) and \(j\) indicate the dependence on the coordinates \(\mathbf{r}\) and \(\mathbf{r}^{\prime}\), respectively. Because of stationarity the variance only depends on the time difference \(\tau=t-t^{\prime}\). Note that the relation
\[\mathsf{C}_{\mathrm{AB}}^{*ij}(\omega)=\mathsf{C}_{\mathrm{BA}}^{*ji}(\omega) ^{*} \tag{12}\]
was used as well. Now, let me do the same calculation for the intensity fluctuations. Keeping in mind that the fields with identical frequency sign are uncorrelated, due to the evaluation of correlation functions with either only creation or annihilation operators, I obtain
\[\mathrm{Var}_{I} = \left\langle\!\left\langle I(\mathbf{r},t)I(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle-\left\langle\!\left\langle I(\mathbf{r},t)\right\rangle\!\right\rangle\!\left\langle\!\left\langle I(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle \tag{13}\] \[= \int_{0}^{\infty}\frac{\mathrm{d}\omega}{2\pi}\int_{0}^{\infty}\frac{\mathrm{d}\omega^{\prime}}{2\pi}\Bigg\{\mathsf{C}_{\mathrm{EE}}^{*ij}(\omega)\mathsf{C}_{\mathrm{EE}}^{*ji}(\omega^{\prime})+\frac{\mu_{0}^{2}}{\varepsilon_{0}^{2}}\mathsf{C}_{\mathrm{HH}}^{*ij}(\omega)\mathsf{C}_{\mathrm{HH}}^{*ji}(\omega^{\prime})\] \[\quad+\frac{\mu_{0}}{\varepsilon_{0}}\Big[\mathsf{C}_{\mathrm{EH}}^{*ij}(\omega)\mathsf{C}_{\mathrm{EH}}^{*ji}(\omega^{\prime})+\mathsf{C}_{\mathrm{HE}}^{*ij}(\omega)\mathsf{C}_{\mathrm{HE}}^{*ji}(\omega^{\prime})\Big]\Bigg\}e^{\mathrm{i}(\omega-\omega^{\prime})\tau}.\]
Note that, in contrast to the mean values, the variance \(\mathrm{Var}_{I}\) cannot be directly read off from \(\mathrm{Var}_{u}\) in Eq. (10), which is due to the different ordering.
Now, I derive the general expressions of the mean energy density and intensity, and of their variances, for one arbitrary object labeled by \(\alpha\) in an arbitrary environment labeled by b. For this, I use the scattering approach outlined in Ref. [8] going back to the formalism introduced in Refs. [17; 18]. The fields and current densities are written in Dirac notation to obtain basis independent formulas. Therein, the current density \(\left|\mathbf{J}\right\rangle\) is defined by
\[\left|\mathbf{J}\right\rangle=\left|\mathbf{J}_{\mathrm{fl}}\right\rangle+ \frac{1}{\mathrm{i}\mu_{0}\omega}\mathbf{T}\left|\mathbf{E}_{\mathrm{b}}\right\rangle \tag{14}\]
where \(|{\bf J}_{\rm fl}\rangle\) reflects the fluctuational part of the current density and the second contribution shows the induced part. Additionally, the T-operator \({\rm T}\) is used. The T-operator contains the scattering behavior of heat radiation due to the material. The fields \(|{\bf F}_{\rm E}\rangle=|{\bf E}\rangle\) and \(|{\bf F}_{\rm H}\rangle=|{\bf H}\rangle\) are given by
\[|{\bf F}_{k}\rangle=|{\bf F}_{k,{\rm b}}\rangle+{\rm i}\mu_{0}\omega{\bf G}_{k {\rm E}}\,|{\bf J}\rangle \tag{15}\]
with \(k\in\{\)E,H\(\}\) and the Green's function \({\bf G}\) containing the scattering behavior of heat radiation due to the environment. Then, the correlation function of the fields is
\[\left\langle\!\left\langle\left|{\bf F}_{k}\right\rangle\otimes\left\langle{\bf F }_{l}\right|\right\rangle\!\right\rangle=2\hbar\mu_{0}\omega^{2}\left[\left(n_ {\rm b}(\omega)+1\right)\frac{{\bf G}_{\rm full,kl}-{\bf G}_{\rm full,lk}^{ \dagger}}{2{\rm i}}+\left[n_{\alpha}(\omega)-n_{\rm b}(\omega)\right]{\bf K} _{kl}\right] \tag{16}\]
with
\[{\bf G}_{\rm full,kl} = {\bf G}_{kl}+{\bf G}_{k{\rm E}}{\bf T}{\bf G}_{{\rm E}l}, \tag{17}\] \[{\bf K}_{kl} = {\bf G}_{k{\rm E}}{\bf X}{\bf G}_{l{\rm E}}^{\dagger}, \tag{18}\]
the Bose-Einstein occupation probability
\[n_{\gamma}(\omega)=\frac{1}{e^{\frac{\hbar\omega}{n_{\rm B}T\gamma}}-1} \tag{19}\]
with the reduced Planck's constant \(\hbar\), the Boltzmann constant \(k_{\rm B}\), and the temperature \(T_{\gamma}\) of object \(\gamma\) as well as the general susceptibility
\[{\bf X}=\frac{{\bf T}-{\bf T}^{\dagger}}{2{\rm i}}-{\bf T}\frac{{\bf G}_{\rm EE }-{\bf G}_{\rm EE}^{\dagger}}{2{\rm i}}{\bf T}^{\dagger}. \tag{20}\]
Keep in mind that \(|{\bf E}\rangle\) is related to \({\bf E}^{+}\) and \(\langle{\bf E}|\) is related to \({\bf E}^{-}\). That means by interchanging kets and bras, the pre-factor \(n_{\rm b}(\omega)+1\) in Eq. (16) simply reduces to \(n_{\rm b}(\omega)\). The same is true for the magnetic field. With the correlation function in Eq. (16), the intensity becomes
\[\left\langle\!\left\langle I({\bf r},t)\right\rangle\!\right\rangle=\frac{2}{ \varepsilon_{0}}\sum_{k\in\{{\rm E},{\rm H}\}}\int_{0}^{\infty}\!\frac{{\rm d} \omega}{2\pi}{\rm Tr}\Big{[}{\rm B}_{kk}({\bf r},{\bf r},\omega)+{\rm Q}_{kk} ({\bf r},{\bf r},\omega)\Big{]}. \tag{21}\]
and its variance is
\[{\rm Var}_{I} = \frac{4}{\varepsilon_{0}^{2}}\sum_{k,l\in\{{\rm E},{\rm H}\}}\int _{0}^{\infty}\!\frac{{\rm d}\omega}{2\pi}\int_{0}^{\infty}\!\frac{{\rm d} \omega^{\prime}}{2\pi}{\rm Tr}\Big{(}\Big{[}{\rm B}_{kl}({\bf r},{\bf r}^{ \prime},\omega)+{\rm Q}_{kl}({\bf r},{\bf r}^{\prime},\omega)\Big{]} \tag{22}\] \[\times\Big{[}{\rm B}_{kl}^{\dagger}({\bf r},{\bf r}^{\prime}, \omega^{\prime})+{\rm Q}_{kl}^{\dagger}({\bf r},{\bf r}^{\prime},\omega^{ \prime})\Big{]}\Big{)}{\rm e}^{{\rm i}(\omega-\omega^{\prime})\tau}.\]
Note, that the energy density always contains vacuum fluctuations whose frequency integrals do not converge, in general, whereas the mean intensity and its variance do not contain vacuum fluctuations. Since I am interested in the evaluation of the thermal contribution of the energy density fluctuations, I will neglect the vacuum contribution in the following. This yields the mean energy density
\[\left\langle\!\left\langle u_{\rm th}({\bf r},t)\right\rangle\!\right\rangle=2 \sum_{k\in\{{\rm E},{\rm H}\}}\int_{0}^{\infty}\!\frac{{\rm d}\omega}{2\pi}{ \rm Tr}\Big{[}{\rm Re}\left({\rm B}_{kk}({\bf r},{\bf r},\omega)\right)+{\rm Q} _{kk}({\bf r},{\bf r},\omega)\Big{]} \tag{23}\]
and its variance
\[{\rm Var}_{u,{\rm th}} = 8\sum_{k,l\in\{{\rm E},{\rm H}\}}\int_{0}^{\infty}\!\frac{{\rm d}\omega}{2\pi}\int_{0}^{\infty}\!\frac{{\rm d}\omega^{\prime}}{2\pi}{\rm Tr}\Big(\Big[{\rm Re}\left({\rm B}_{kl}({\bf r},{\bf r}^{\prime},\omega)e^{-{\rm i}\omega\tau}\right)+{\rm Re}\left({\rm Q}_{kl}({\bf r},{\bf r}^{\prime},\omega)e^{-{\rm i}\omega\tau}\right)\Big] \tag{24}\] \[\quad\times\Big[{\rm Re}\left({\rm B}_{lk}({\bf r}^{\prime},{\bf r},\omega^{\prime})e^{{\rm i}\omega^{\prime}\tau}\right)+{\rm Re}\left({\rm Q}_{lk}({\bf r}^{\prime},{\bf r},\omega^{\prime})e^{{\rm i}\omega^{\prime}\tau}\right)\Big]\Big)\]
using the abbreviations
\[\mathds{B}_{kl}(\mathbf{r},\mathbf{r}^{\prime},\omega) =2a_{kl}\hbar k_{0}^{2}n_{\mathrm{b}}(\omega)\frac{\mathrm{G}_{ \mathrm{full},kl}(\mathbf{r},\mathbf{r}^{\prime},\omega)-\mathds{G}_{\mathrm{ full},lk}^{\dagger}(\mathbf{r}^{\prime},\mathbf{r},\omega)}{2\mathrm{i}}, \tag{25}\] \[\mathds{Q}_{kl}(\mathbf{r},\mathbf{r}^{\prime},\omega) =2a_{kl}\hbar k_{0}^{2}\left[n_{\alpha}(\omega)-n_{\mathrm{b}}( \omega)\right]\mathds{K}_{kl}(\mathbf{r},\mathbf{r}^{\prime},\omega), \tag{26}\]
and
\[a_{kl}=\begin{cases}1&k=l=\mathrm{E}\\ \frac{\mu_{0}}{\varepsilon_{0}}&k=l=\mathrm{H}\\ \sqrt{\frac{\mu_{0}}{\varepsilon_{0}}}&k\neq l\end{cases}. \tag{27}\]
Note that \(\mathrm{Tr}(\mathds{Q}(\mathbf{r},\mathbf{r},\omega))\) has only real components.
Let me first consider the special case of the variances at \(\mathbf{r}^{\prime}=\mathbf{r}\) and \(\tau=0\) for the pure electric case. If one finds a coordinate system in which \(\mathds{M}_{\mathrm{EE}}(\mathbf{r})=\int_{0}^{\infty}\frac{\mathrm{d}\omega }{2\pi}[\mathds{B}_{\mathrm{EE}}(\mathbf{r},\mathbf{r},\omega)+\mathds{Q}_{ \mathrm{EE}}(\mathbf{r},\mathbf{r},\omega)]\) is diagonal, it is possible to decompose this matrix into \(\mathds{M}_{\mathrm{EE}}=\mathds{S}\mathds{D}_{\mathrm{EE}}\mathds{S}^{-1}\). Here, the diagonal matrix \(\mathds{D}_{\mathrm{EE}}\) contains the eigenvalues \(\lambda\) of \(\mathds{M}_{\mathrm{EE}}\) and the matrix \(\mathds{S}\) has the corresponding eigenvectors of \(\mathds{M}_{\mathrm{EE}}\) as its columns. Due to the trace operation, one ends up with
\[\left\langle\!\left\langle u_{\mathrm{th},\mathrm{E},i}(\mathbf{r},0)\right\rangle\!\right\rangle=2\lambda_{\mathrm{EE},i}(\mathbf{r})\quad\text{and}\quad\left\langle\!\left\langle I_{\mathrm{E},i}(\mathbf{r},0)\right\rangle\!\right\rangle=\frac{2}{\varepsilon_{0}}\lambda_{\mathrm{EE},i}(\mathbf{r}) \tag{28}\]
and
\[\mathrm{Var}_{u,\mathrm{th},\mathrm{E}}(\mathbf{r},\mathbf{r},0) =2\sum_{i=1}^{3}\left\langle\!\left\langle u_{\mathrm{th},\mathrm{E},i}(\mathbf{r},0)\right\rangle\!\right\rangle^{2}, \tag{29}\] \[\mathrm{Var}_{I,\mathrm{E}}(\mathbf{r},\mathbf{r},0) =\sum_{i=1}^{3}\left\langle\!\left\langle I_{\mathrm{E},i}( \mathbf{r},0)\right\rangle\!\right\rangle^{2}. \tag{30}\]
When considering isotropic systems like vacuum, all eigenvalues are equal yielding
\[\left\langle\!\left\langle I_{\mathrm{E}}(\mathbf{r},0)\right\rangle\!\right\rangle =\frac{6}{\varepsilon_{0}}\lambda_{\mathrm{EE}}(\mathbf{r})=\frac{\left\langle \!\left\langle u_{\mathrm{th},\mathrm{E}}(\mathbf{r},0)\right\rangle\!\right\rangle }{\varepsilon_{0}} \tag{31}\]
and
\[\mathrm{Var}_{u,\mathrm{th},\mathrm{E}}(\mathbf{r},\mathbf{r},0) =\frac{2}{3}\langle\!\left\langle u_{\mathrm{th},\mathrm{E}}( \mathbf{r},0)\right\rangle\!\rangle^{2}, \tag{32}\] \[\mathrm{Var}_{I,\mathrm{E}}(\mathbf{r},\mathbf{r},0) =\frac{1}{3}\langle\!\left\langle I_{\mathrm{E}}(\mathbf{r},0) \right\rangle\!\rangle^{2}. \tag{33}\]
The latter exactly coincides with the result found in [19] meaning that intensity fluctuations are on the same order of magnitude as their mean values. Eq. (32) now shows that the same is true for the thermal energy density fluctuations but with a different pre-factor.
In general, to investigate the second order coherence properties of thermal radiation, it is reasonable to compare the variance of the considered quantity with its mean value. It allows for classifying the non-classical character of light [15; 20] as measured by the HBT experiment [21]. It is also called "complex degree of coherence of second order" defined by
\[g_{u,\mathrm{th}}^{(2)}(\mathbf{r},\mathbf{r}^{\prime},\tau) =\frac{\left\langle\!\left\langle u_{\mathrm{th}}(\mathbf{r},t)u_{\mathrm{th}}(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle}{\left\langle\!\left\langle u_{\mathrm{th}}(\mathbf{r},t)\right\rangle\!\right\rangle\!\left\langle\!\left\langle u_{\mathrm{th}}(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle}=1+\frac{\mathrm{Var}_{u,\mathrm{th}}(\mathbf{r},\mathbf{r}^{\prime},\tau)}{\left\langle\!\left\langle u_{\mathrm{th}}(\mathbf{r},t)\right\rangle\!\right\rangle\!\left\langle\!\left\langle u_{\mathrm{th}}(\mathbf{r}^{\prime},t^{\prime})\right\rangle\!\right\rangle} \tag{34}\]
for the energy density and
\[g_{I}^{(2)}(\mathbf{r},\mathbf{r}^{\prime},\tau)=1+\frac{\mathrm{Var}_{I}( \mathbf{r},\mathbf{r}^{\prime},\tau)}{\left\langle\!\left\langle I(\mathbf{r},t)\right\rangle\!\right\rangle\!\left\langle\!\left\langle I(\mathbf{r}^{ \prime},t^{\prime})\right\rangle\!\right\rangle} \tag{35}\]
for the intensity. Thermal radiation showing bunching belongs to the class of quasi-classical light. For isotropic objects and environments, one can directly read off, due to Eqs. (32)-(33), that
\[g_{u,\mathrm{th,E}}^{(2)}(\mathbf{r},\mathbf{r},0)=\frac{5}{3}=g_{I,\mathrm{E}}^{(2)}(\mathbf{r},\mathbf{r},0)+\frac{1}{3} \tag{36}\]
when only considering electric contributions. This also defines the maximum value of \(g_{\mathrm{E}}^{(2)}\) for both spatial and temporal bunching.
## III Numerical results
First, let me validate the main results of Eqs. (21)-(24) and Eqs. (34)-(35) by retrieving the well-known result for the case of pure vacuum [22]. Additionally, I want to apply them to two more complex examples of practical interest for experiments, namely a half-space and a sphere, which have not yet been investigated with respect to the variances of the energy density and intensity. In the following, I will use two different types of materials: SiC and gold. SiC as a dielectric material can be modeled by a Lorentz oscillator [23]
\[\varepsilon_{\text{SiC}}(\omega)=\varepsilon_{\infty}\frac{\omega_{l}^{2}-\omega^{2}-\mathrm{i}\Gamma\omega}{\omega_{t}^{2}-\omega^{2}-\mathrm{i}\Gamma\omega} \tag{37}\]
with \(\varepsilon_{\infty}=6.7\), \(\omega_{l}=1.827\times 10^{14}\) rad/s, \(\omega_{t}=1.495\times 10^{14}\) rad/s, and \(\Gamma=0.9\times 10^{12}\) rad/s. For gold I employ the Drude model [24]
\[\varepsilon_{\text{Au}}(\omega)=\varepsilon_{\infty}-\frac{\omega_{p}^{2}}{ \omega^{2}+\mathrm{i}\Gamma\omega} \tag{38}\]
with \(\varepsilon_{\infty}=8.344\), \(\omega_{p}=1.372\times 10^{16}\) rad/s, and \(\Gamma=4.059\times 10^{13}\) rad/s.
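For reference, the two permittivity models of Eqs. (37)-(38) with the parameters quoted above can be evaluated with a few lines of Python. This is only a minimal sketch; the function names are mine and not part of any published code.

```python
import numpy as np

def eps_sic(omega):
    """Lorentz-oscillator permittivity of SiC, Eq. (37), with the parameters quoted above."""
    eps_inf, w_l, w_t, gamma = 6.7, 1.827e14, 1.495e14, 0.9e12  # frequencies in rad/s
    return eps_inf * (w_l**2 - omega**2 - 1j * gamma * omega) / (w_t**2 - omega**2 - 1j * gamma * omega)

def eps_au(omega):
    """Drude permittivity of gold, Eq. (38), with the parameters quoted above."""
    eps_inf, w_p, gamma = 8.344, 1.372e16, 4.059e13  # frequencies in rad/s
    return eps_inf - w_p**2 / (omega**2 + 1j * gamma * omega)

# Example: evaluate both models in the SiC reststrahlen region
omega = np.linspace(1.4e14, 2.0e14, 5)
print(eps_sic(omega))
print(eps_au(omega))
```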
### Energy density and intensity fluctuations of vacuum
In the simplest case of considering black-body radiation in vacuum, one can set the T-operator \(\mathrm{T}=0\) and, thus, obtain \(\mathds{Q}_{kl}=0\) because no object is involved. Then, if one is only interested in the temporal coherence, the spatial arguments become identical. The Green's functions become \(\mathds{G}_{\text{full},kl}(\mathbf{r},\mathbf{r})=\mathds{G}_{kl}(\mathbf{r},\mathbf{r})\), yielding for Eq. (25)
\[\mathds{B}_{kl}(\mathbf{r},\mathbf{r},\omega)=2a_{kl}\hbar k_{0}^{2}n_{\text{b }}(\omega)\times\begin{cases}\frac{\omega}{6\pi c}\mathds{1}&k=l\\ 0&k\neq l\end{cases}. \tag{39}\]
By using this in Eqs. (21)-(24), it is straightforward to derive the desired results for the mean values and variances
\[\left\langle\!\left\langle u(\mathbf{r},t)\right\rangle\!\right\rangle= \varepsilon_{0}\!\left\langle\!\left\langle I(\mathbf{r},t)\right\rangle\! \right\rangle=\frac{\pi^{2}k_{\text{B}}^{4}T_{\text{b}}^{4}}{15\hbar^{3}c^{3}} \tag{40}\]
and
\[\text{Var}_{u,\text{th}}(\mathbf{r},\mathbf{r},\tau) =\frac{12\hbar^{2}}{\pi^{4}c^{6}\tau_{\text{b}}^{8}}\text{Re} \left(\zeta\left(4,1-\mathrm{i}\frac{\tau}{\tau_{\text{b}}}\right)\right)^{2}, \tag{41}\] \[\text{Var}_{I}(\mathbf{r},\mathbf{r},\tau) =\frac{6\hbar^{2}}{\varepsilon_{0}^{2}\pi^{4}c^{6}\tau_{\text{b }}^{8}}\bigg{|}\zeta\left(4,1-\mathrm{i}\frac{\tau}{\tau_{\text{b}}}\right) \bigg{|}^{2} \tag{42}\]
which coincide with the corresponding expressions in [22]. Here,
\[\zeta(x,y)=\sum_{n=0}^{\infty}\frac{1}{(n+y)^{x}} \tag{43}\]
is the Hurwitz zeta function, \(T_{\text{b}}\) is the vacuum background temperature, and I defined the vacuum's coherence time
\[\tau_{\text{b}}=\frac{\hbar}{k_{\text{B}}T_{\text{b}}}. \tag{44}\]
Comparing the full variances with the mean values, I find
\[\mathrm{Var}_{u,\mathrm{th}}(\mathbf{r},\mathbf{r},0) =\frac{1}{3}\langle\!\langle u(\mathbf{r},t)\rangle\!\rangle^{2}, \tag{45}\] \[\mathrm{Var}_{I}(\mathbf{r},\mathbf{r},0) =\frac{1}{6}\langle\!\langle I(\mathbf{r},t)\rangle\!\rangle^{2}. \tag{46}\]
However, the pure electric and magnetic contributions each fulfill the relations of Eqs. (32)-(33), since both contribute equally to Eqs. (45)-(46). Compared to Eqs. (32)-(33), the prefactors in Eqs. (45)-(46) are smaller by a factor of 2 because the full quantities contain both contributions. Thereby, I retrieve the results of Ref. [19] for vacuum.
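As a numerical cross-check of Eqs. (40)-(46), the vacuum \(g^{(2)}\) functions can be evaluated from the Hurwitz zeta function. The short Python sketch below assumes that the square in Eq. (41) acts on the real part of \(\zeta\) and uses the mpmath library for the complex-argument zeta function; variable names are mine.

```python
import numpy as np
from mpmath import zeta, mpc

hbar = 1.054571817e-34   # J s
k_B  = 1.380649e-23      # J/K
c    = 2.99792458e8      # m/s
eps0 = 8.8541878128e-12  # F/m

T_b   = 300.0
tau_b = hbar / (k_B * T_b)   # vacuum coherence time, Eq. (44); about 25.5 fs at 300 K

# Mean energy density and intensity of black-body radiation, Eq. (40)
u_mean = np.pi**2 * k_B**4 * T_b**4 / (15 * hbar**3 * c**3)
I_mean = u_mean / eps0

def g2_vacuum(tau):
    """g^(2) of the thermal energy density and the intensity in vacuum, Eqs. (34)-(35) with (41)-(42)."""
    z = zeta(4, mpc(1, -tau / tau_b))                    # Hurwitz zeta function, Eq. (43)
    var_u = 12 * hbar**2 / (np.pi**4 * c**6 * tau_b**8) * float(z.real)**2
    var_I = 6 * hbar**2 / (eps0**2 * np.pi**4 * c**6 * tau_b**8) * float(abs(z))**2
    return 1 + var_u / u_mean**2, 1 + var_I / I_mean**2

print(g2_vacuum(0.0))   # expected (1 + 1/3, 1 + 1/6), i.e. (1.333..., 1.166...)
```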
The \(g^{(2)}\) functions for the energy density and the intensity are depicted in Fig. 1. For both quantities one can clearly see the bunching character of thermal vacuum radiation and the loss of coherence at a time delay of \(\tau_{\mathrm{b}}=25.5\) fs at \(T_{\mathrm{b}}=300\) K.
### Energy density and intensity fluctuations above a planar substrate
A more sophisticated problem is the calculation of the energy density and intensity fluctuations at a distance \(d\) above a half-space. The half-space is assumed to fill the region \(z<0\), extending infinitely in the x-y plane. I only take non-magnetic, homogeneous, and isotropic materials into account for the half-space. The temperature of the half-space is \(T_{\alpha}\) and the one of the background \(T_{\mathrm{b}}\). The Green's function for the whole system is well known [25] and can be separated into a vacuum part and a scattered contribution due to reflections at the surface of the half-space within the plane-wave basis as
\[\mathds{G}_{\mathrm{full,EE}}(\mathbf{r},\mathbf{r}^{\prime})=\int\frac{\mathrm{d}^{2}k_{\perp}}{(2\pi)^{2}}e^{\mathrm{i}\mathbf{k}_{\perp}\cdot\mathbf{x}_{\perp}}\left[\mathds{G}_{\mathrm{vac,EE}}(\mathbf{k}_{\perp},z,z^{\prime})+\mathds{G}_{\mathrm{scat,EE}}(\mathbf{k}_{\perp},z,z^{\prime})\right]. \tag{47}\]
Note that I assume \(\mathbf{r}\) and \(\mathbf{r}^{\prime}\) being outside of the half-space. The remaining Green's functions \(\mathds{G}_{\mathrm{EH/HE/HH}}\) can be obtained by exploiting the duality relations between the desired Green's function and \(\mathds{G}_{\mathrm{EE}}\). The explicit expressions can be found in appendix A. Then, it is easy to find the B matrices using Eq. (25)
\[\mathds{B}_{\mathrm{EE/HH}}(\mathbf{r},\mathbf{r},\omega) =\hbar k_{0}^{3}n_{\mathrm{b}}(\omega)\sum_{j\in\{\perp,\parallel\}}\left(\frac{1}{6\pi}(1+\delta_{j\perp})+I_{\mathrm{p}/\mathrm{s},j}\right)\mathds{A}_{j}, \tag{48}\] \[\mathds{B}_{\mathrm{EH/HE}}(\mathbf{r},\mathbf{r},\omega) =\mathrm{i}\hbar k_{0}^{3}n_{\mathrm{b}}(\omega)I_{\mathrm{mix}}\mathds{A}_{\mathrm{mix}}, \tag{49}\]
where I introduced the matrices
\[\mathds{A}_{\perp} =\frac{1}{2}\left(\mathbf{e}_{x}\otimes\mathbf{e}_{x}+\mathbf{e}_{y}\otimes\mathbf{e}_{y}\right), \tag{50}\] \[\mathds{A}_{\parallel} =\mathbf{e}_{z}\otimes\mathbf{e}_{z}, \tag{51}\]
Figure 1: \(g^{(2)}\) function of the thermal contribution of the energy density (blue) and of the intensity (red) with respect to the normalized time delay \(\tau/\tau_{\mathrm{b}}\) for \(\tau_{\mathrm{b}}=25.5\) fs at \(T_{\mathrm{b}}=300\) K.
and
\[\mathds{A}_{\rm mix}=\mathbf{e}_{x}\otimes\mathbf{e}_{y}-\mathbf{e}_{y}\otimes \mathbf{e}_{x} \tag{52}\]
as well as \(I_{\rm p/s,j}\) and \(I_{\rm mix}\) defined in Eqs. (A12)-(A14). Note that \(\mathds{B}_{\rm HE}=\mathds{B}_{\rm EH}^{\dagger}\). The first term in (48) corresponds to the vacuum contribution, whereas the second part describes the reflected parts. The mixed Green's functions possess no vacuum part as in the case of pure vacuum.
To obtain the remaining \(\mathds{Q}\) matrices defined in Eq. (26), I expand the Green's function and T-operators in K as defined in Eq. (18) in the plane wave basis as well. These expressions are also well known in literature [25; 26] so that I get with Eq. (26)
\[\mathds{Q}_{\rm EE/HH}(\mathbf{r},\mathbf{r},\omega) =\hbar k_{0}^{3}\left[n_{\alpha}(\omega)-n_{\rm b}(\omega)\right]\sum_{j\in\{\perp,\parallel\}}K_{\rm p/s,j}\mathds{A}_{j}, \tag{53}\] \[\mathds{Q}_{\rm EH/HE}(\mathbf{r},\mathbf{r},\omega) =\mp\hbar k_{0}^{3}\left[n_{\alpha}(\omega)-n_{\rm b}(\omega)\right]\left[K_{\rm mix}^{\rm pr}\pm{\rm i}K_{\rm mix}^{\rm ev}\right]\mathds{A}_{\rm mix} \tag{54}\]
with \(K_{\rm p/s,j}\), \(K_{\rm mix}^{\rm pr}\), and \(K_{\rm mix}^{\rm ev}\) defined in Eqs. (A15)-(A18). Inserting these in Eqs. (21)-(24) yields the mean values
\[\left\langle\!\left\langle u_{\rm th}(\mathbf{r},t)\right\rangle\!\right\rangle=\varepsilon_{0}\sum_{k\in\{{\rm E},{\rm H}\}}\sum_{j\in\{\perp,\parallel\}}\left[\Gamma_{k,j}^{\rm eq}(0)+\Gamma_{k,j}^{\rm leq}(0)\right]=\varepsilon_{0}\!\left\langle\!\left\langle I(\mathbf{r},t)\right\rangle\!\right\rangle \tag{55}\]
and the corresponding variances
\[\mathrm{Var}_{u,{\rm th}}(\mathbf{r},\mathbf{r},\tau) =\varepsilon_{0}^{2}\sum_{k\in\{{\rm E},{\rm H}\}}\sum_{j\in\{\perp,\parallel\}}(1+\delta_{j\parallel})\mathrm{Re}\left(\Gamma_{k,j}^{\rm eq}(\tau)+\Gamma_{k,j}^{\rm leq}(\tau)\right)^{2}\] \[\quad+2\varepsilon_{0}^{2}\mathrm{Im}\!\left(\Gamma_{\rm mix}^{\rm eq}(\tau)-\Gamma_{\rm mix,ev}^{\rm leq}(\tau)\right)^{2}+2\varepsilon_{0}^{2}\mathrm{Re}\!\left(\Gamma_{\rm mix,pr}^{\rm leq}(\tau)\right)^{2}, \tag{56}\] \[\mathrm{Var}_{I}(\mathbf{r},\mathbf{r},\tau) =\frac{1}{2}\sum_{k\in\{{\rm E},{\rm H}\}}\sum_{j\in\{\perp,\parallel\}}(1+\delta_{j\parallel})\Big{|}\Gamma_{k,j}^{\rm eq}(\tau)+\Gamma_{k,j}^{\rm leq}(\tau)\Big{|}^{2}\] \[\quad+\Big{|}\Gamma_{\rm mix}^{\rm eq}(\tau)-\Gamma_{\rm mix,ev}^{\rm leq}(\tau)\Big{|}^{2}+\Big{|}\Gamma_{\rm mix,pr}^{\rm leq}(\tau)\Big{|}^{2}. \tag{57}\]
The \(\Gamma\) integrals are defined in Eqs. (A19)-(A22). These variances fulfill the relations in Eqs. (29)-(30) when taking into account that there are two directions parallel to the half-space's surface which equally contribute to the energy density and the intensity, then
\[\mathrm{Var}_{u,{\rm th},{\rm E}}(\mathbf{r},\mathbf{r},0) =2\sum_{j\in\{x,y,z\}}\left\langle\!\left\langle u_{\rm th,E,j}(\mathbf{r},t)\right\rangle\!\right\rangle^{2}=2\left(2\!\left\langle\!\left\langle u_{\rm th,E,x}(\mathbf{r},t)\right\rangle\!\right\rangle^{2}+\left\langle\!\left\langle u_{\rm th,E,z}(\mathbf{r},t)\right\rangle\!\right\rangle^{2}\right), \tag{58}\] \[\mathrm{Var}_{I,{\rm E}}(\mathbf{r},\mathbf{r},0) =\sum_{j\in\{x,y,z\}}\left\langle\!\left\langle I_{\rm E,j}(\mathbf{r},t)\right\rangle\!\right\rangle^{2}=2\!\left\langle\!\left\langle I_{\rm E,x}(\mathbf{r},t)\right\rangle\!\right\rangle^{2}+\left\langle\!\left\langle I_{\rm E,z}(\mathbf{r},t)\right\rangle\!\right\rangle^{2} \tag{59}\]
with
\[\left\langle\!\left\langle u_{\rm th,E,x/y/z}(\mathbf{r},t)\right\rangle\!\right\rangle=\varepsilon_{0}\left[\Gamma_{\rm E,x/y/z}^{\rm eq}(0)+\Gamma_{\rm E,x/y/z}^{\rm leq}(0)\right]=\varepsilon_{0}\!\left\langle\!\left\langle I_{\rm E,x/y/z}(\mathbf{r},t)\right\rangle\!\right\rangle \tag{60}\]
and \(\Gamma_{\rm E,x}^{\rm(leq)}=\Gamma_{\rm E,y}^{\rm(leq)}=\Gamma_{\rm E,l}^{\rm(leq )}/2\).
The resulting \(g^{(2)}\) functions are plotted in Fig. 2 for SiC employing the temperatures \(T_{\alpha}=350\) K and \(T_{\rm b}=300\) K. As one would expect, the coherence time \(\tau_{c}\), which replaces \(\tau_{\rm b}\) for this geometry now depends on the distance \(d\) between substrate and observation point and exceeds the vacuum value by at least two orders of magnitude, even for distances like \(d=200\) nm in agreement with Ref. [6]. The larger this distance, the shorter becomes the coherence time \(\tau_{c}\). The large values \(\tau_{c}\gg\tau_{\rm b}\) in the near-field regime exist because SiC possesses two resonance frequencies that become very pronounced in the near field spectrum. These are the SPhP resonance frequency and the transverse optical phonon (TOP) resonance frequency. The SPhP mode is more pronounced in the electric contribution compared to the TOP mode that dominates the spectrum of the magnetic contribution [27; 28]. Due to the quasi-monochromatic distribution of the spectrum around these two frequencies the correlation time increases. For larger distances \(d\) the effect of these evanescent waves decreases and the amplitude of the \(g^{(2)}\) functions decreases as well. This is most
pronounced for \(d>50\) nm since below this distance the curves almost overlap. There is also an increasing drop of amplitude at very short time delays for growing distances \(d\) for the same reason. This drop happens for time delays on the order of \(\tau_{\rm b}\). Therefore, this drop can be connected to the degrading of the quasi-monochromatic spectrum to a black-body one like in vacuum for growing distances. Thus, this \(g^{(2)}\) function seems to be an overlap of the one corresponding to the vacuum case that I discussed previously, which explains the drop emerging for larger distances, and the one for the quasi-monochromatic spectrum dominated by the SPhP mode, which explains the large \(\tau_{c}\) values for smaller distances. For the chosen distances \(d\), the values for the global equilibrium situation (\(T_{\alpha}=T_{\rm b}\)) are always smaller than for the local equilibrium case (\(T_{\alpha}\neq T_{\rm b}\)) due to the missing K matrix contribution. Interestingly, the initial values of the global and local equilibrium conditions for \(\tau=0\) approach each other for increasing distances \(d\). Compared to \(g_{I}^{(2)}\), \(g_{u,{\rm th}}^{(2)}\) behaves qualitatively identical regarding the above mentioned points but with a wave-like character. \(g_{I}^{(2)}\), however, behaves like the average of \(g_{u,{\rm th}}^{(2)}\) due to the absolute value.
### Energy density and intensity fluctuations in the vicinity of a sphere
Finally, let me apply the theory developed here to a single sphere of radius \(R\) immersed in vacuum, for which I compute the mean values and variances of the energy density and intensity at a distance \(r\) from the center of the sphere. Again, the purely electric Green's function \(\mathsf{G}_{\rm full,EE}\) can be decomposed into a vacuum part and a scattering part due to reflections at the sphere's surface like in Eq. (47), but for a different geometrical basis. For that I use the notation [18]
\[\mathsf{G}_{\rm vac,EE}(\mathbf{r},\mathbf{r}^{\prime})= \mathrm{i}\sum_{P,l,m}\mathbf{E}_{P,l,m}^{\rm out}(\mathbf{r}) \otimes\mathbf{E}_{P,l,m}^{\rm reg*}(\mathbf{r}^{\prime}), \tag{61}\] \[\mathsf{G}_{\rm scat,EE}(\mathbf{r},\mathbf{r}^{\prime})= \mathrm{i}\sum_{P,l,m}\mathcal{T}_{l}^{P}\mathbf{E}_{P,l,m}^{\rm out }(\mathbf{r})\otimes\mathbf{E}_{P,l,-m}^{\rm reg*}(\mathbf{r}^{\prime}). \tag{62}\]
Here, \(P\in\{\mathrm{M,N}\}\) corresponds to the two different wave vector solutions to the general Helmholtz equation applied to spherical waves, \(l\geq 1\) denotes the multipole order, and \(-l\leq m\leq l\) characterizes the multipole index. Note that the scattering at the sphere's surface manifests in \(\sigma(m)=-m\) in the second vector of \(\mathsf{G}_{\rm scat,EE}\). \(\mathcal{T}\) denotes the T-operator applied to the spherical basis, being diagonal for all indices and independent of \(m\). The expressions for the vector functions \(\mathbf{E}\) and the T-operator \(\mathcal{T}\) are given in appendix B. Inserting the Green's functions in Eqs. (25) and (26) yields the B matrices
\[\mathsf{B}_{\rm EE/HH}(\mathbf{r},\mathbf{r},\omega) =\frac{1}{2}\hbar k_{0}^{3}n_{\rm b}(\omega)\sum_{j,k,P,l,m}\frac {\mathbf{e}_{j}\otimes\mathbf{e}_{k}}{l(l+1)}\Big{[}P_{j,l,m}^{\rm out}P_{k,l,m}^{\rm reg*}+\mathcal{T}_{l}^{P/P}P_{j,l,m}^{\rm out}P_{k,l,-m}^{\rm reg*} \Big{]}+\mathrm{h.c.}, \tag{63}\] \[\mathsf{B}_{\rm EH}(\mathbf{r},\mathbf{r},\omega) =\frac{1}{2\mathrm{i}}\hbar k_{0}^{3}n_{\rm b}(\omega)\sum_{j,k,P, l,m}\frac{\mathbf{e}_{j}\otimes\mathbf{e}_{k}}{l(l+1)}\Big{[}\mathcal{T}_{l}^{P }P_{j,l,m}^{\rm out}\bar{P}_{k,l,-m}^{\rm reg*}-\mathcal{T}_{l}^{P*}P_{j,l,-m }^{\rm reg}\bar{P}_{k,l,m}^{\rm out*}\Big{]}, \tag{64}\]
Figure 2: \(g^{(2)}\) function of the intensity for different distances \(d\) to the substrate’s surface and in global equilibrium conditions (\(T_{\alpha}=T_{\rm b}\), dashed lines) as well as in local equilibrium conditions (\(T_{\alpha}\neq T_{\rm b}\), solid lines) with respect to the normalized time delay \(\tau/\tau_{\rm b}\). Inset: \(g^{(2)}\) function of the thermal contribution of the energy density in local equilibrium for the same distances.
and the \(\mathds{Q}\) matrices
\[\mathds{Q}_{\rm EE/HH}(\mathbf{r},\mathbf{r},\omega) =-\hbar k_{0}^{3}\left[n_{\alpha}(\omega)-n_{\rm b}(\omega)\right] \sum_{j,k,P,l,m}\frac{\mathbf{e}_{j}\otimes\mathbf{e}_{k}}{l(l+1)}\left(\mathrm{ Re}(\mathcal{T}_{l}^{P/P})+|\mathcal{T}_{l}^{P/P}|^{2}\right)P_{j,l,m}^{\rm out }P_{k,l,m}^{\rm out*}, \tag{65}\] \[\mathds{Q}_{\rm EH}(\mathbf{r},\mathbf{r},\omega) =-\mathrm{i}\hbar k_{0}^{3}\left[n_{\alpha}(\omega)-n_{\rm b}( \omega)\right]\sum_{j,k,P,l,m}\frac{\mathbf{e}_{j}\otimes\mathbf{e}_{k}}{l(l+ 1)}\left(\mathrm{Re}(\mathcal{T}_{l}^{P})+|\mathcal{T}_{l}^{P}|^{2}\right)P_{j,l,m}^{\rm out}\tilde{P}_{k,l,m}^{\rm out*}. \tag{66}\]
Note that \(\mathds{B}_{\rm HE}^{\dagger}=\mathds{B}_{\rm EH}\) and \(\mathds{Q}_{\rm HE}^{\dagger}=\mathds{Q}_{\rm EH}\) as well as \(j,k\in\{r,\vartheta,\varphi\}\) and \(\bar{\mathds{M}}=\mathds{N}\) with the definitions in Eqs. (B3)-(B7). This results in the following final expressions for the mean values
\[\left\langle\!\left\langle u_{\rm th}(\mathbf{r},t)\right\rangle\!\right\rangle =\sum_{j}\sum_{\gamma\in\{\rm E,H\}}\left[\Lambda_{jj,\gamma}^{\rm eq}(0)-\Lambda_{jj,\gamma}^{\rm leq}(0)\right] \tag{67}\]
and the variances
\[\mathrm{Var}_{u,\rm th}(\mathbf{r},\mathbf{r},\tau) =2\varepsilon_{0}^{2}\sum_{j,k}\Big\{\sum_{\gamma\in\{\rm E,H\}}\left[\mathrm{Re}\Big(\Lambda_{kj,\gamma}^{\rm eq}(\tau)-\Lambda_{kj,\gamma}^{\rm leq}(\tau)\Big)\right]^{2}-\left[\mathrm{Re}\Big(\Lambda_{kj,\rm mix}^{\rm eq}(\tau)+\Lambda_{kj,\rm mix}^{\rm leq}(\tau)\Big)\right]^{2}\] \[\quad-\left[\mathrm{Re}\Big(\Lambda_{kj,\rm mix}^{\rm eq}(-\tau)+\Lambda_{kj,\rm mix}^{\rm leq}(-\tau)\Big)\right]^{2}\Big\}, \tag{68}\] \[\mathrm{Var}_{I}(\mathbf{r},\mathbf{r},\tau) =2\sum_{j,k}\Big\{\sum_{\gamma\in\{\rm E,H\}}\Big{|}\Lambda_{kj,\gamma}^{\rm eq}(\tau)-\Lambda_{kj,\gamma}^{\rm leq}(\tau)\Big{|}^{2}+\Big{|}\Lambda_{kj,\rm mix}^{\rm eq}(\tau)+\Lambda_{kj,\rm mix}^{\rm leq}(\tau)\Big{|}^{2}\Big\} \tag{69}\]
where I defined the different \(\Lambda\)'s in Eqs. (B8)-(B11). Let me stress that it is, again, possible to rewrite the sum of the integrated matrices \(\mathds{B}\) and \(\mathds{Q}\) in terms of a diagonal matrix and a matrix containing their eigenvectors so that one is left with the sum over squared eigenvalues. But since the obtained expressions are very lengthy compared to the form I decided to show here, I will employ a notation that does not imply a diagonalization regarding the components \(r,\vartheta,\varphi\). Therefore, it is also not obvious by looking at the formulas that the resulting mean values and variances are angle-independent, as one would expect for the intensity and energy density around a sphere. But all numerical results performed for metallic and dipolar materials proved that both mean values and variances only depend on the radial distance \(r\) between the observation point and the sphere's center.
As an illustrative example, I show the functions \(g_{u,\rm th}^{(2)}\) (blue) and \(g_{I}^{(2)}\) (red) for gold and SiC for different distances \(r\) in Fig. 3. The radius is \(R=20\) nm and the temperatures are \(T_{\alpha}=700\) K and \(T_{\rm b}=300\) K. The multipole order is chosen such that the difference to the values of the next order becomes insignificant. The actual meaning of that will be detailed in the last figure of this section. First of all, both \(g^{(2)}\) functions basically show the same qualitative behavior regarding the dependence on the time delay \(\tau\) and the radial distance \(r\). The \(g_{u,\rm th}^{(2)}\) function for SiC shows an additional wave-like character as mentioned in the previous section. For a SiC sphere, the localized surface phonon polariton (LSPhP) resonance frequency causes a quasi-monochromatic spectrum for the dominating electric part, whereas the TOP mode is, again, only present in the magnetic contribution. Gold does not have such resonances in the infrared regime and, therefore, \(\tau_{c}^{\rm gold}\approx\tau_{\rm b}\) holds. For all curves I retrieve the bunching property of heat radiation but with a different distance behavior compared to the one for a half-space. Gold clearly shows that the photons are stronger correlated closer to the surface of the sphere, although the \(g^{(2)}\) function finds a plateau between \(3R\leq r\leq 5R\) (see Fig. 4). However, the correlation time \(\tau_{c}\) of smaller \(r\) is now shorter than the one for larger \(r\).
To make this radial distance behavior more obvious, I show \(g_{I}^{(2)}\) for gold and SiC in Fig. 4 for different \(\tau\) depending on \(r\). I also indicated the distance \(r\) for which \(g_{I}^{(2)}\) becomes maximal. Interestingly, the case \(r=R\) is not the most likely one for finding bunched photons for either material. This is rather the case at \(r_{\rm max,SiC}\approx 4R\) for SiC and at \(r_{\rm max,Au}\approx 3.5R\) for gold. This seems to be a feature of the local equilibrium contribution describing emission by the sphere itself because it vanishes for \(T_{\alpha}=T_{\rm b}\). Let me add that for larger temperature differences the distance \(r\) for the most likely measurement of bunched photons also increases. Both materials reach the same value in the limit of large \(r\), showing that for such distances the coherence properties correspond to those of the vacuum environment at \(T_{\rm b}=300\) K. Then, the \(g_{I}^{(2)}\) values are identical to those in section III.1.
Finally, let me come back to the evaluation with respect to the multipole orders. By that I refer to the highest multipole order used in the summations in Eqs. (63)-(66). In Fig. 5 this is done exemplarily for SiC. There one can see that the dipole moment, i.e. \(l=1\), dominates for distances starting at \(r=5R\). Especially for distances \(r<3R\), the dipole contribution strongly overestimates the overall result. For distances \(r<1.5R\) even \(l=5\) is insufficient to obtain
an accurate result. For the smallest distance \(r=R\) the multipole order \(l=20\) can give accurate results. This also clearly shows that the multipole moments \(l>1\) cause the above-mentioned maximum value of \(g_{I}^{(2)}\) at \(3R<r<5R\). A similar conclusion can be found for gold. Therefore, in the figures shown above, \(l\) was always chosen such that the results can be regarded as exact for the considered distance \(r\). In the case of \(r=R\) this means I chose \(l=38\).
## IV Extension to two objects
To derive the corresponding expressions for \(\left\langle\!\left\langle u\right\rangle\!\right\rangle\), \(\left\langle\!\left\langle I\right\rangle\!\right\rangle\), \(\mathrm{Var}_{u,\mathrm{th}}\), \(\mathrm{Var}_{I}\), \(g_{u,\mathrm{th}}^{(2)}\), and \(g_{I}^{(2)}\) for two objects, one has to adapt the total fields and currents in Eqs. (14)-(15) as it is explained in Ref. [8]. The second object will be labeled by index \(\beta\). Thereby, I obtain the fields
\[\left|\hat{\mathbf{F}}_{k}\right\rangle=\left|\hat{\mathbf{F}}_{k,\mathrm{b} }\right\rangle+\mathrm{i}\mu_{0}\omega\mathbf{G}_{k\mathrm{E}}\left[\left| \hat{\mathbf{J}}_{\alpha}\right\rangle+\left|\hat{\mathbf{J}}_{\beta}\right\rangle\right] \tag{70}\]
and the current density
\[\left|\hat{\mathbf{J}}_{\alpha}\right\rangle=\left|\hat{\mathbf{J}}_{\alpha, \mathrm{fl}}\right\rangle+\frac{1}{\mathrm{i}\mu_{0}\omega}\mathrm{T}_{\alpha }\left|\hat{\mathbf{E}}_{\mathrm{b}}\right\rangle+\mathrm{T}_{\alpha} \mathrm{G}_{\mathrm{EE}}\Big{[}\left|\hat{\mathbf{J}}_{\beta,\mathrm{fl}} \right\rangle+\frac{1}{\mathrm{i}\mu_{0}\omega}\mathrm{T}_{\beta}\left|\hat{ \mathbf{E}}_{\mathrm{b}}\right\rangle+\mathrm{T}_{\beta}\mathrm{G}_{\mathrm{EE }}\left|\hat{\mathbf{J}}_{\alpha}\right\rangle\Big{]}. \tag{71}\]
Using this, I obtain the new correlation function
\[\left\langle\!\left\langle\left|\hat{\mathbf{F}}_{k}\right\rangle\otimes \left\langle\hat{\mathbf{F}}_{l}\right|\right\rangle\!\right\rangle=2\hbar \mu_{0}\omega^{2}\left[\left[n_{\mathrm{b}}(\omega)+1\right]\frac{\mathrm{G} _{t,kl}-\mathrm{G}_{t,lk}^{\dagger}}{2\mathrm{i}}+\sum_{\gamma\in\{\alpha, \beta\}}\left[n_{\gamma}(\omega)-n_{\mathrm{b}}(\omega)\right]\mathrm{K}_{ \gamma}\right] \tag{72}\]
Figure 3: \(g^{(2)}\) function of the thermal contribution of the energy density (blue) and intensity (red) for different distances \(r\) between observation point and sphere’s center for \(T_{\alpha}=700\) K and \(T_{\mathrm{b}}=300\) K with respect to the normalized time delay \(\tau/\tau_{\mathrm{b}}\). The calculations are performed for the sphere’s materials SiC (a) and gold (b) for \(R=20\) nm.
with
\[\mathbf{G}_{\mathrm{f},kl} =\left[\left(\mathds{1}+\mathbf{O}_{\alpha}\mathbf{G}\mathbf{T}_{ \alpha}+\mathbf{O}_{\beta}\mathbf{G}\mathbf{T}_{\beta}\right)\mathbf{G}\right] _{kl}, \tag{73}\] \[\mathbf{K}_{\gamma,kl} =\left[\mathbf{O}_{\gamma}\mathbf{G}\right]_{k\mathrm{E}} \mathbf{X}_{\gamma}\left[\mathbf{G}^{\dagger}\mathbf{O}_{\gamma}^{\dagger} \right]_{l\mathrm{E}},\] (74) \[\mathbf{X}_{\gamma,k} =\frac{\mathbf{T}_{\gamma,k}-\mathbf{T}_{\gamma,k}^{\dagger}}{2 \mathrm{i}}-\mathbf{T}_{\gamma,k}\,\frac{\mathbf{G}_{kk}-\mathbf{G}_{kk}^{ \dagger}}{2\mathrm{i}}\mathbf{T}_{\gamma,k}^{\dagger} \tag{75}\]
as well as
\[\mathbf{O}_{\alpha}=\left(\mathds{1}+\mathbf{G}\mathbf{T}_{\beta}\right) \left[\mathds{1}-\mathbf{G}\mathbf{T}_{\alpha}\mathbf{G}\mathbf{T}_{\beta} \right]^{-1}. \tag{76}\]
The above equations, then, have to be inserted into Eqs. (25)-(26) to get the mean values and variances for the energy density and intensity for two arbitrary objects in an arbitrary environment. This can also be extended to \(N\) particles by regarding one of the two particles as a compound of \(N-1\) particles, while repeating this procedure for that compound.
## V Conclusion
In this work I employed the methods of mQED and the scattering approach to derive expressions for the variances of the energy density and intensity of heat radiation in a system of an arbitrary object immersed in an arbitrary environment. I compared the general solution with the corresponding mean values and retrieved the expressions found in [19] for the ratio of variance and squared mean values for the intensity of isotropic systems. I also extended this to corresponding expressions for the energy density and for systems containing three preferred axes like in Cartesian, cylindrical, or spherical coordinates. With that formalism I computed the \(g^{(2)}\) functions of both, the energy density
Figure 4: \(g^{(2)}\) function of the intensity evaluated at different correlation times \(\tau\) for \(T_{\alpha}=700\) K and \(T_{\mathrm{b}}=300\) K with respect to the normalized distance \(r/R\). The calculations are performed for the sphere’s materials SiC (a) and gold (b). The maximal value of each curve is indicated by the vertical dashed lines.
and the intensity, for vacuum, a half-space, and a sphere. Thereby, I retrieved the results of Ref. [19] for vacuum, showed the expected distance dependence of the \(g^{(2)}\) functions above a half-space, and did the same thing for the \(g^{(2)}_{I}\) of a sphere as well as a multipole order analysis. Interestingly, \(g^{(2)}_{I}\) becomes maximal for distances \(r>R\) also depending on the material and the chosen temperatures, which seems to be due to higher order multipole moments. The effect behind this feature is unclear for the moment and might be interesting for future investigations. Finally, I showed theoretically how the expressions can be generalized for two arbitrary objects and a many-body system.
## VI Acknowledgments
The author acknowledges support from the Studienstiftung des deutschen Volkes (eng. German Academic Scholarship Foundation) as well as fruitful discussions with PD Dr. Svend-Age Biehs from the Carl von Ossietzky Universität Oldenburg, Germany.
## Appendix A Green's functions and integral formulas for a planar geometry
For the planar geometry, I use these expressions for the electric Green's function
\[\mathds{G}_{\text{vac,EE}}(\mathbf{k}_{\perp},z,z^{\prime}) =\frac{\text{i}e^{\text{i}k_{z}(z-z^{\prime})}}{2k_{z}}\left[\mathbf{a}_{\perp}(k_{0})\otimes\mathbf{a}_{\perp}(k_{0})+\mathbf{a}_{\parallel}^{\pm}(k_{0})\otimes\mathbf{a}_{\parallel}^{\pm}(k_{0})\right], \tag{A1}\] \[\mathds{G}_{\text{scat,EE}}(\mathbf{k}_{\perp},z,z^{\prime}) =\frac{\text{i}e^{\text{i}k_{z}(z+z^{\prime})}}{2k_{z}}\left[r_{\text{H}}\mathbf{a}_{\perp}(k_{0})\otimes\mathbf{a}_{\perp}(k_{0})+r_{\text{E}}\mathbf{a}_{\parallel}^{+}(k_{0})\otimes\mathbf{a}_{\parallel}^{-}(k_{0})\right] \tag{A2}\]
and
\[\mathbf{k}_{\perp} =(k_{x},k_{y})^{T}, \tag{A3}\] \[\mathbf{x}_{\perp} =(x,y)^{T},\] (A4) \[\text{d}^{2}k_{\perp} =\text{d}k_{x}\text{d}k_{y},\] (A5) \[k_{z} =\sqrt{k_{0}^{2}-k_{\perp}^{2}}. \tag{A6}\]
Figure 5: \(g^{(2)}\) function of the intensity for SiC evaluated at correlation time \(\tau=0\) for \(T_{\alpha}=700\) K and \(T_{\text{b}}=300\) K with respect to the normalized distance \(r/R\). The different lines correspond to different maximal multipole orders up to which the summation in Eq. (63)-(66) is performed.
\(\text{G}_{\text{vac}}\) describes the vacuum part and \(\text{G}_{\text{scat}}\) the contribution reflected at the substrate's surface. Here, I defined the polarization unit vectors
\[\mathbf{a}_{\perp}(k_{0}) =\frac{1}{k_{\perp}}(k_{y},-k_{x},0)^{T}, \tag{A7}\] \[\mathbf{a}_{\parallel}^{\pm}(k_{0}) =\frac{1}{k_{\perp}k_{0}}(\mp k_{x}k_{z},\mp k_{y}k_{z},k_{\perp}^{2})^{T}, \tag{A8}\]
and used the Fresnel amplitude reflection coefficients
\[r_{\text{H}} =\frac{k_{z}-k_{z,\text{sub}}}{k_{z}+k_{z,\text{sub}}}, \tag{A9}\] \[r_{\text{E}} =\frac{\varepsilon_{\text{sub}}k_{z}-k_{z,\text{sub}}}{\varepsilon_{\text{sub}}k_{z}+k_{z,\text{sub}}} \tag{A10}\]
with
\[k_{z,\text{sub}}=\sqrt{\varepsilon_{\text{sub}}k_{0}^{2}-k_{\perp}^{2}} \tag{A11}\]
where I introduced the substrate's permittivity \(\varepsilon_{\text{sub}}\).
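A minimal numerical sketch of Eqs. (A6) and (A9)-(A11) is given below. The helper names are mine, and the principal branch of the complex square root is assumed, which gives decaying evanescent waves for absorbing substrates.

```python
import numpy as np

def kz(k0, kpar, eps=1.0):
    """z-component of the wave vector, Eqs. (A6) and (A11); evanescent waves get an imaginary part."""
    return np.sqrt(eps * k0**2 - kpar**2 + 0j)

def fresnel(k0, kpar, eps_sub):
    """Amplitude reflection coefficients r_H (s-polarized) and r_E (p-polarized), Eqs. (A9)-(A10)."""
    kz0 = kz(k0, kpar)
    kzs = kz(k0, kpar, eps_sub)
    r_H = (kz0 - kzs) / (kz0 + kzs)
    r_E = (eps_sub * kz0 - kzs) / (eps_sub * kz0 + kzs)
    return r_H, r_E

# Example: SiC half-space close to the SPhP region, evanescent wave with kpar = 3*k0
omega, c = 1.787e14, 2.99792458e8
k0 = omega / c
eps_sub = 6.7 * ((1.827e14)**2 - omega**2 - 1j*0.9e12*omega) / ((1.495e14)**2 - omega**2 - 1j*0.9e12*omega)
print(fresnel(k0, 3*k0, eps_sub))
```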
For the mean values and variances I define the integrals
\[I_{\text{p/s},\perp} =\int_{0}^{k_{0}}\frac{\text{d}k_{\perp}k_{\perp}}{8\pi k_{z}k_{0}}\text{Re}\left(e^{2\text{i}k_{z}d}\left[r_{\text{s/p}}-\frac{k_{z}^{2}}{k_{0}^{2}}r_{\text{p/s}}\right]\right)+\int_{k_{0}}^{\infty}\frac{\text{d}k_{\perp}k_{\perp}}{8\pi|k_{z}|k_{0}}e^{-2|k_{z}|d}\text{Im}\left(r_{\text{s/p}}+\frac{|k_{z}|^{2}}{k_{0}^{2}}r_{\text{p/s}}\right), \tag{A12}\] \[I_{\text{p/s},\parallel} =\int_{0}^{k_{0}}\frac{\text{d}k_{\perp}k_{\perp}^{3}}{8\pi k_{z}k_{0}^{3}}\text{Re}\left(e^{2\text{i}k_{z}d}r_{\text{p/s}}\right)+\int_{k_{0}}^{\infty}\frac{\text{d}k_{\perp}k_{\perp}^{3}}{8\pi|k_{z}|k_{0}^{3}}e^{-2|k_{z}|d}\text{Im}(r_{\text{p/s}}), \tag{A13}\] \[I_{\text{mix}} =\int_{0}^{k_{0}}\frac{\text{d}k_{\perp}k_{\perp}}{16\pi k_{0}^{2}}\text{Im}\left(e^{2\text{i}k_{z}d}[r_{s}(\mathbf{k}_{\perp})-r_{p}(\mathbf{k}_{\perp})]\right)+\int_{k_{0}}^{\infty}\frac{\text{d}k_{\perp}k_{\perp}}{16\pi k_{0}^{2}}e^{-2|k_{z}|d}\text{Im}[r_{s}(\mathbf{k}_{\perp})-r_{p}(\mathbf{k}_{\perp})]. \tag{A14}\]
for the equilibrium contribution (\(T_{\alpha}=T_{\text{b}}\)) and
\[K_{\text{p/s},\perp} =\int_{0}^{k_{0}}\frac{\text{d}k_{\perp}k_{\perp}}{8\pi k_{z}k_{0}}\left([1-|r_{\text{s/p}}(\mathbf{k}_{\perp})|^{2}]+\frac{k_{z}^{2}}{k_{0}^{2}}[1-|r_{\text{p/s}}(\mathbf{k}_{\perp})|^{2}]\right)+\int_{k_{0}}^{\infty}\frac{\text{d}k_{\perp}k_{\perp}}{4\pi|k_{z}|k_{0}}e^{-2|k_{z}|d}\left[\text{Im}(r_{\text{s}}(\mathbf{k}_{\perp}))+\frac{|k_{z}|^{2}}{k_{0}^{2}}\text{Im}(r_{\text{p}}(\mathbf{k}_{\perp}))\right], \tag{A15}\] \[K_{\text{p/s},\parallel} =\int_{0}^{k_{0}}\frac{\text{d}k_{\perp}k_{\perp}^{3}}{8\pi k_{z}k_{0}^{3}}[1-|r_{\text{p/s}}(\mathbf{k}_{\perp})|^{2}]+\int_{k_{0}}^{\infty}\frac{\text{d}k_{\perp}k_{\perp}^{3}}{4\pi|k_{z}|k_{0}^{3}}e^{-2|k_{z}|d}\text{Im}(r_{\text{p}}(\mathbf{k}_{\perp})), \tag{A16}\] \[K_{\text{mix}}^{\text{pr}} =\int_{0}^{k_{0}}\frac{\text{d}k_{\perp}k_{\perp}k_{z}}{16\pi k_{0}^{3}}[2-|r_{\text{s}}(\mathbf{k}_{\perp})|^{2}-|r_{\text{p}}(\mathbf{k}_{\perp})|^{2}], \tag{A17}\] \[K_{\text{mix}}^{\text{ev}} =\int_{k_{0}}^{\infty}\frac{\text{d}k_{\perp}k_{\perp}|k_{z}|}{8\pi k_{0}^{3}}e^{-2|k_{z}|d}\left[\text{Im}(r_{\text{p}}(\mathbf{k}_{\perp}))-\text{Im}(r_{\text{s}}(\mathbf{k}_{\perp}))\right] \tag{A18}\]
for the local equilibrium contribution (\(T_{\alpha}\neq T_{\text{b}}\)). For the frequency integrals, I define the expressions
\[\Gamma_{\text{E/H},\perp/\parallel}^{\text{leq}}(\tau) =\frac{\hbar}{\varepsilon_{0}\pi}\int_{0}^{\infty}\!\text{d}\omega k_{0}^{3}[n_{\alpha}(\omega)-n_{\text{b}}(\omega)]K_{\text{p/s},\perp/\parallel}e^{\text{i}\omega\tau}, \tag{A19}\] \[\Gamma_{\text{mix,pr/ev}}^{\text{leq}}(\tau) =\frac{2\hbar}{\varepsilon_{0}\pi}\int_{0}^{\infty}\!\text{d}\omega k_{0}^{3}[n_{\alpha}(\omega)-n_{\text{b}}(\omega)]K_{\text{mix}}^{\text{pr/ev}}e^{\text{i}\omega\tau}, \tag{A20}\] \[\Gamma_{\text{E/H},\perp/\parallel}^{\text{eq}}(\tau) =\frac{\hbar}{\varepsilon_{0}\pi}\int_{0}^{\infty}\!\text{d}\omega k_{0}^{3}n_{\text{b}}(\omega)\left(\frac{1}{6\pi}\left(1+\delta_{j\perp}\right)+I_{\text{p/s},\perp/\parallel}\right)e^{\text{i}\omega\tau}, \tag{A21}\] \[\Gamma_{\text{mix}}^{\text{eq}}(\tau) =\frac{2\hbar}{\varepsilon_{0}\pi}\int_{0}^{\infty}\!\text{d}\omega k_{0}^{3}n_{\text{b}}(\omega)I_{\text{mix}}e^{\text{i}\omega\tau}. \tag{A22}\]
## Appendix B Green's functions and integral formulas for a spherical geometry
The two general solutions of the electric field for the spherical geometry can be expressed by
\[\mathbf{E}^{\rm reg/out}_{{\rm M},l,m}(\mathbf{r}) =M^{\rm reg/out}_{\varphi,l,m}\mathbf{e}_{\varphi}+M^{\rm reg/out}_{\vartheta,l,m}\mathbf{e}_{\vartheta}, \tag{B1}\] \[\mathbf{E}^{\rm reg/out}_{{\rm N},l,m}(\mathbf{r}) =N^{\rm reg/out}_{r,l,m}\mathbf{e}_{r}+N^{\rm reg/out}_{\varphi,l,m}\mathbf{e}_{\varphi}+N^{\rm reg/out}_{\vartheta,l,m}\mathbf{e}_{\vartheta} \tag{B2}\]
using the abbreviations
\[M^{\rm reg/out}_{\varphi,l,m} =-\begin{cases}j_{l}(k_{0}r)\\ h_{l}(k_{0}r)\end{cases}\frac{\partial Y_{l}^{m}(\vartheta,\varphi)}{\partial\vartheta}, \tag{B3}\] \[M^{\rm reg/out}_{\vartheta,l,m} =\mathrm{i}\begin{cases}j_{l}(k_{0}r)\\ h_{l}(k_{0}r)\end{cases}\frac{mY_{l}^{m}(\vartheta,\varphi)}{\sin(\vartheta)}, \tag{B4}\]
and
\[N^{\rm reg/out}_{r,l,m} =\frac{l(l+1)}{k_{0}r}\begin{cases}j_{l}(k_{0}r)\\ h_{l}(k_{0}r)\end{cases}Y_{l}^{m}(\vartheta,\varphi), \tag{B5}\] \[N^{\rm reg/out}_{\vartheta,l,m} =\frac{1}{k_{0}r}\frac{\partial}{\partial r}\begin{cases}rj_{l}(k_{0}r)\\ rh_{l}(k_{0}r)\end{cases}\frac{\partial Y_{l}^{m}(\vartheta,\varphi)}{\partial\vartheta}, \tag{B6}\] \[N^{\rm reg/out}_{\varphi,l,m} =\frac{\mathrm{i}}{k_{0}r}\frac{\partial}{\partial r}\begin{cases}rj_{l}(k_{0}r)\\ rh_{l}(k_{0}r)\end{cases}\frac{mY_{l}^{m}(\vartheta,\varphi)}{\sin(\vartheta)}. \tag{B7}\]
The unit vectors \(\mathbf{e}_{r/\vartheta/\varphi}\) point in the radial direction or in the angular directions of \(\vartheta\) and \(\varphi\), respectively. \(j_{l}\) and \(h_{l}\) denote the spherical Bessel function and the spherical Hankel function of the \(l\)th order, respectively. \(Y_{l}^{m}\) denotes the spherical harmonics of order \(m\) and degree \(l\).
For the spherical geometry I define the following frequency integral expressions
\[\Lambda^{\rm eq}_{kj,{\rm E}/{\rm H}}(\tau) =\int_{0}^{\infty}\!\frac{\mathrm{d}\omega}{2\pi}\frac{\hbar k_{0}^{3}n_{\rm b}(\omega)e^{-\mathrm{i}\omega\tau}}{2\varepsilon_{0}}\Big[\frac{\delta_{jk}}{3\pi}+\sum_{P,l,m}\frac{1}{l(l+1)}(\mathcal{T}_{l}^{P/P}P_{j,l,m}^{\rm out}P_{k,l,-m}^{\rm reg*}+\mathcal{T}_{l}^{P/P*}P_{j,l,-m}^{\rm reg}P_{k,l,m}^{\rm out*})\Big] \tag{B8}\]
as well as
\[\Lambda^{\rm leq}_{kj,{\rm E}/{\rm H}}(\tau) =\int_{0}^{\infty}\!\frac{\mathrm{d}\omega}{2\pi}\sum_{P,l,m}\frac{\hbar k_{0}^{3}(n_{\alpha}(\omega)-n_{\rm b}(\omega))}{\varepsilon_{0}l(l+1)}e^{-\mathrm{i}\omega\tau}\left(\mathrm{Re}(\mathcal{T}_{l}^{P/P})+|\mathcal{T}_{l}^{P/P}|^{2}\right)P_{j,l,m}^{\rm out}P_{k,l,m}^{\rm out*} \tag{B9}\]
and
\[\Lambda^{\rm eq}_{kj,{\rm mix}}(\tau) =\frac{1}{2}\int_{0}^{\infty}\!\frac{\mathrm{d}\omega}{2\pi}\sum_{P,l,m}\frac{\hbar k_{0}^{3}n_{\rm b}(\omega)e^{-\mathrm{i}\omega\tau}}{\varepsilon_{0}l(l+1)}\Big[\mathcal{T}_{l}^{P}P_{j,l,m}^{\rm out}\bar{P}_{k,l,-m}^{\rm reg*}-\mathcal{T}_{l}^{P*}P_{j,l,-m}^{\rm reg}\bar{P}_{k,l,m}^{\rm out*}\Big] \tag{B10}\]
as well as
\[\Lambda^{\rm leq}_{kj,{\rm mix}}(\tau) =\int_{0}^{\infty}\!\frac{\mathrm{d}\omega}{2\pi}\sum_{P,l,m}\frac{\hbar k_{0}^{3}(n_{\alpha}(\omega)-n_{\rm b}(\omega))}{\varepsilon_{0}l(l+1)}e^{-\mathrm{i}\omega\tau}\left(\mathrm{Re}(\mathcal{T}_{l}^{P})+|\mathcal{T}_{l}^{P}|^{2}\right)P_{j,l,m}^{\rm out}\bar{P}_{k,l,m}^{\rm out*}. \tag{B11}\]
The T-operators are defined by
\[\mathcal{T}_{l}^{M} =-\frac{j_{l}(y)\frac{\partial}{\partial x}[xj_{l}(x)]-j_{l}(x)\frac{\partial}{\partial y}[yj_{l}(y)]}{j_{l}(y)\frac{\partial}{\partial x}[xh_{l}(x)]-h_{l}(x)\frac{\partial}{\partial y}[yj_{l}(y)]} \tag{B12}\]
and
\[\mathcal{T}_{l}^{N} =-\frac{\varepsilon j_{l}(y)\frac{\partial}{\partial x}[xj_{l}(x)]-j_{l}(x)\frac{\partial}{\partial y}[yj_{l}(y)]}{\varepsilon j_{l}(y)\frac{\partial}{\partial x}[xh_{l}(x)]-h_{l}(x)\frac{\partial}{\partial y}[yj_{l}(y)]} \tag{B13}\]
with \(x=k_{0}R\) and \(y=\sqrt{\varepsilon}x\), where \(\varepsilon\) is the permittivity of the sphere's material.
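For completeness, the T-operators of Eqs. (B12)-(B13) can be evaluated numerically as in the Python sketch below, which expresses the spherical Bessel and Hankel functions through their cylindrical counterparts. The helper functions and the example frequency are my own choices, not part of any published code.

```python
import numpy as np
from scipy.special import jv, hankel1

def sph_jn(l, z):
    """Spherical Bessel function j_l(z) for complex argument."""
    return np.sqrt(np.pi / (2 * z)) * jv(l + 0.5, z)

def sph_h1(l, z):
    """Spherical Hankel function of the first kind h_l(z) for complex argument."""
    return np.sqrt(np.pi / (2 * z)) * hankel1(l + 0.5, z)

def d_riccati(f, l, z):
    """Derivative d/dz [z f_l(z)] via the recurrence z f_{l-1}(z) - l f_l(z)."""
    return z * f(l - 1, z) - l * f(l, z)

def t_operators(l, k0R, eps):
    """T-operators T_l^M and T_l^N of Eqs. (B12)-(B13) for a homogeneous sphere with x = k0*R."""
    x = k0R
    y = np.sqrt(eps) * x
    jx, jy, hx = sph_jn(l, x), sph_jn(l, y), sph_h1(l, x)
    djx = d_riccati(sph_jn, l, x)   # d/dx [x j_l(x)]
    djy = d_riccati(sph_jn, l, y)   # d/dy [y j_l(y)]
    dhx = d_riccati(sph_h1, l, x)   # d/dx [x h_l(x)]
    T_M = -(jy * djx - jx * djy) / (jy * dhx - hx * djy)
    T_N = -(eps * jy * djx - jx * djy) / (eps * jy * dhx - hx * djy)
    return T_M, T_N

# Example: R = 20 nm SiC sphere near the localized resonance region
omega, c, R = 1.756e14, 2.99792458e8, 20e-9
eps = 6.7 * ((1.827e14)**2 - omega**2 - 1j*0.9e12*omega) / ((1.495e14)**2 - omega**2 - 1j*0.9e12*omega)
print(t_operators(1, omega / c * R, eps))
```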
2302.08947 | Learning from Label Proportion with Online Pseudo-Label Decision by Regret Minimization | This paper proposes a novel and efficient method for Learning from Label Proportions (LLP), whose goal is to train a classifier only by using the class label proportions of instance sets, called bags. We propose a novel LLP method based on an online pseudo-labeling method with regret minimization. As opposed to the previous LLP methods, the proposed method effectively works even if the bag sizes are large. We demonstrate the effectiveness of the proposed method using some benchmark datasets. | Shinnosuke Matsuo, Ryoma Bise, Seiichi Uchida, Daiki Suehiro | 2023-02-17T15:30:13Z | http://arxiv.org/abs/2302.08947v1 |

# Learning from Label Proportion with Online Pseudo-Label Decision by Regret Minimization
This paper proposes a novel and efficient method for Learning from Label Proportions (LLP), whose goal is to train a classifier only by using the class label proportions of instance sets, called bags. We propose a novel LLP method based on an online pseudo-labeling method with regret minimization. As opposed to the previous LLP methods, the proposed method effectively works even if the bag sizes are large. We demonstrate the effectiveness of the proposed method using some benchmark datasets.
Shinnosuke Matsuo, Ryoma Bise, Seiichi Uchida, Daiki Suehiro Kyushu University, Fukuoka, Japan Learning from label proportion, online decision-making, pseudo-labeling
## 1 Introduction
Learning from Label Proportions (LLP) [1, 2] is a weakly-supervised machine learning task where only the class label proportion of the instances in each _bag_\(B^{i}\) is given. A bag is a set of instances. Formally, for a \(C\)-class classification problem, multiple bags \(B^{1},\ldots,B^{i},\ldots,B^{n}\) and the label proportion \(\mathbf{p}^{i}=(p^{i}_{1},\ldots,p^{i}_{C},\ldots,p^{i}_{C})\) of each bag are given as the training set. For example, if \(B^{i}\) contains 100, 50, and 50 instances of the class 1, 2, and 3, respectively, \(\mathbf{p}^{i}=(0.5,0.25,0.25)\). The goal of LLP is to train an instance classifier, just by the label proportion, that is, without the class label of each instance \(x^{i}_{j}\in B^{i}\) (\(j=1,\ldots,|B^{i}|\)). Therefore, LLP is one of the most difficult weakly-supervised tasks.
Currently, the _proportion loss_ is widely used for realizing LLP [3, 4, 5]. It evaluates the difference between the given proportion \(\mathbf{p}^{i}\) and the proportion of the estimated labels of the \(i\)th bag \(B^{1}\). However, it is known that the accuracy decreases for larger bags [5, 6]. This weakness becomes crucial in many applications with large bags. An application example is a window-wise long-term signal classification with label proportion, where each signal is represented as a large bag with many instances corresponding to individual windows.
This paper proposes a new LLP method based on online pseudo-labeling by a regret minimization approach. In the proposed method, we assume a Deep Neural Network (DNN) as a classification model, and alternately update the model and pseudo labels along epochs. More precisely, at each \(t\)-th epoch, the DNN model is trained by the pseudo labels in a fully-supervised manner. Then the pseudo labels are updated by observing the behavior of the updated model.
One of the advantages of our method is that, by assigning the pseudo labels to the instances over the bags, we can make full use of instances to train a model even if the bag sizes are large. In other words, if we have \(n\) instances, our method can train a model with \(n\) instances with pseudo labels without depending on the bag sizes.
Another advantage of our online pseudo-labeling approach is its strong theoretical support. Different from various heuristics-based pseudo-labeling approaches, ours follows the _regret_ minimization framework, which is one of the theories for online decision-making. The regret is the difference between the actual decision and the best decision; in our case, the actual decision is the pseudo labels at each epoch, and the best decision is the best-performed pseudo labels averaged over the epochs. Our method has a theoretical upper bound of the regret -- this means that the performance of our method is not far away from the best-performed pseudo labels, although the pseudo labels are determined at each epoch in an online manner.
To evaluate the performance of the proposed method, we use CIFAR10 for a synthetic LLP task. We observe how the proportion-loss-based methods perform with different sizes of bags and compare them with the proposed method. In addition, we conduct an ablation study to demonstrate the effectiveness of our pseudo-labeling approach based on regret minimization.
The main contributions of this paper are summarized as follows:
* This paper proposes a novel and efficient LLP method, which can deal with even a very large bag.
* The proposed method is based on online pseudo-labeling and has strong theoretical support in terms of regret minimization.
* The robustness to large bag sizes and the accuracy of the proposed method were validated through multiple comparative experiments using CIFAR-10 and SVHN.
The code is publicly available at [https://github.com/matsuo-shinnosuke/online-pseudo-labeling](https://github.com/matsuo-shinnosuke/online-pseudo-labeling).
## 2 Related Work
**Learning from label proportions (LLP):** The recent trend of LLP is to train a DNN using a proportion loss, originally provided by [3]. The proportion loss is a bag-level cross-entropy between the correct label proportion and the predicted proportion, which is computed by averaging the probability outputs in every bag as the proportion estimation. Many methods extend the proportion loss by introducing regularization terms or pre-training techniques [7, 3, 6, 8, 9, 5]. In these papers, it has been reported that the accuracy decreases as the bag sizes increase.
**Pseudo-labeling:** Pseudo-labeling has often been used for semi-supervised learning [10, 11], in which a model is first pre-trained using a small amount of labeled data. Pseudo-labeling [12] assigns pseudo labels to confident unlabeled data when the maximum prediction probability estimated by the pre-trained model exceeds a threshold, and re-trains the model using the pseudo labels.
This pseudo-labeling is also used for several LLP methods [13, 6, 14, 4]. Yu et al. provided \(\propto\)-SVM, which alternately updates the pseudo labels and the SVM-based classifier. However, it can be used only for linear or kernel-based binary classification. [14] tackled the LLP tasks for medical image recognition. Their proposed
method generates suitable pseudo labels using several supervised instances. [6] and [4] considered hybrid methods of the proportion loss and pseudo-labeling. However, the performance of these methods degrades as the bag sizes increase.
**Online decision-making for combinatorial decision space:** Various online decision-making problems have been investigated (see, e.g., [15]). The task is to give a decision from the decision space sequentially with a small regret. Particularly, the problems for combinatorial decision space are algorithmically challenging due to the computational difficulty, and thus various problems and approaches have been proposed [16, 17, 18, 19, 20]. However, the real applications have not been studied well.
A similar study to ours is [21], where a training scheme of DNN with a noisy-labeled training set is proposed. Its approach alternately updates the decision of whether clean or noisy data and the parameters of the DNN. They utilize the online \(k\)-set decision framework with Follow the Perturbed Leader (FPL) algorithm [22]. However, the task is essentially different from ours, and our provided online pseudo-label decision is a more challenging problem because the decision space is a set of zero-one matrices, and thus it is difficult to utilize FPL due to the computational hardness.
## 3 LLP with online pseudo-label decision
In this section, we propose a pseudo-labeling algorithm for LLP. The overview of the proposed method is shown in Fig. 1.
### LLP and pseudo-labeling
In LLP, a training set contains \(n\) bags, \(B^{1},\ldots,B^{n}\), and each bag \(B^{i}\) is a set of instances, i.e., \(B^{i}=\{x_{j}\}_{j=1}^{|B^{i}|}\). Each \(B^{i}\) has a label proportion \(p_{c}^{i}=\frac{|\{j\mid j\in[|B^{i}|],\,Y^{i}_{c,j}=1\}|}{|B^{i}|}\) for any \(c\in[C]\)1, where \(C\) is the number of target classes and \(Y^{i}\in\{Y\mid Y\in\{0,1\}^{C\times|B^{i}|},\forall j\in[|B^{i}|],\sum_{c=1}^{C}Y_{c,j}=1\}\) indicates the _unknown_ labels of the instances. The goal of the learner is to find \(f\) which predicts the correct labels of the instances. The problem can be considered as the optimization of not only \(f\) but also the instance labels \(\hat{Y}^{1},\ldots,\hat{Y}^{n}\) according to the label proportions. We formulate the problem of LLP as follows:
Footnote 1: For a positive integer \(a\), \([a]\) denotes the set \(\{1,\ldots,a\}\).
\[\min_{\hat{Y}^{1},\ldots,\hat{Y}^{n},f}\sum_{i=1}^{n}\sum_{j=1}^{|B^{i}|}\ell(x_{j}^{i},\hat{Y}_{:,j}^{i},f) \tag{1}\] \[\text{s.t.}\ \ \forall i\in[n],\forall c\in[C],\ \frac{|\{j\mid j\in[|B^{i}|],\,\hat{Y}_{c,j}^{i}=1\}|}{|B^{i}|}=p_{c}^{i},\]
where \(Y_{:,j}\) denotes the \(j\)-th column vector of a matrix \(Y\), and \(\ell\) is a loss function for multi-class classification.
To obtain the optimal solution of the problem (1) is computationally hard. A straightforward way is to solve the following (i) and (ii) alternately [13]; (i) obtain \(f\) for fixed pseudo labels \(\hat{Y}^{1},\ldots,\hat{Y}^{n}\), (ii) obtain pseudo labels \(\hat{Y}^{1},\ldots,\hat{Y}^{n}\) for a fixed \(f\). Then, the final \(f\) and \(\hat{Y}^{1},\ldots,\hat{Y}^{n}\) are the learned model and the estimated labels, respectively. However, when we employ a model with a high representation ability, \(f\) may overfit (possibly incorrect) initial fixed labels, and the labels are not updated.
Then, we consider updating \(\hat{Y}^{1}[t],\ldots,\hat{Y}^{n}[t]\) and \(f[t]\) alternately at epoch \(t\), where \(\hat{Y}^{i}[t]\) denotes the pseudo labels of \(B^{i}\) at epoch \(t\) and \(f[t]\) denotes a trained model at epoch \(t\). That is, at each epoch, we train \(f[t]\) using pseudo labels \(\hat{Y}^{1}[t],\ldots,\hat{Y}^{n}[t]\) and update the pseudo labels. The main questions are as follows: One is how to update the pseudo labels using the information of label proportions and observing the behavior of \(f[t]\) at each epoch. Another is that obtaining good \(\hat{Y}^{1}[t],\ldots,\hat{Y}^{n}[t]\) is computationally hard. While we can efficiently obtain an optimal \(\hat{Y}^{1}[t],\ldots,\hat{Y}^{n}[t]\) by greedy algorithm in binary classification case (see, e.g., [13]), the optimization problem becomes a Mixed Integer Problem (MIP), which is an NP-complete problem in multi-class cases. Therefore, pseudo-labeling for LLP is a simple but challenging approach.
### Proposed procedure
\(\mathcal{Y}^{i}\) denotes the decision space of \(Y^{i}\) for any \(i\in[n]\), i.e., \(\mathcal{Y}^{i}=\{Y\mid Y\in\{0,1\}^{C\times|B^{i}|},\forall j\in[|B^{i}|], \sum_{c=1}^{C}Y_{c,j}=1,\ \text{and}\ \forall c\in[C],\sum_{j=1}^{|B^{i}|}Y_{c,j}=k_{c}^{i}\}\), where \(k_{c}^{i}=|B^{i}|p_{c}^{i}\) (i.e., the number of instances belonging to class \(c\) in a bag \(B^{i}\)). For any \(B^{i}\), we define an "unlikelihood" of pseudo-labeling to the instances as \(L^{i}\in[0,1]^{C\times|B^{i}|}\). For example, if \(c\) is not likely as a pseudo label of \(x_{j}^{i}\), \(L_{c,j}^{i}\) takes a higher value (detailed later).
We provide a 3-step procedure for LLP with pseudo-labeling as follows: Let \(\hat{Y}^{i}[1]\in\mathcal{Y}^{i}\) (\(i\in[n]\)) be initial pseudo labels and \(f[0]\) be an initial DNN. At each epoch \(t=1,\ldots,T\), for any \(i\in[n]\),
1. Obtain \(f[t]\) by training \(f[t-1]\) using \(\ell\) and the pseudo-labeled instances \(((x_{1}^{1},\hat{Y}_{:,1}^{1}[t]),\ldots,(x_{|B^{n}|}^{n},\hat{Y}_{:,|B^{n}|}^{n}[t]))\).
2. Obtain unlikelihood \(L^{i}[t]\).
3. Decide the next pseudo labels \(\hat{Y}^{i}[t+1]\) by observing \(L^{i}[1],\ldots,L^{i}[t]\).
In this paper, we compute the unlikelihood at each epoch \(t\) as
\[L_{c,j}^{i}[t]=\begin{cases}1-\operatorname{conf}(x_{j}^{i},c,f[t])&\hat{Y}_{c,j}^{i}[t]=1\\ \max\limits_{c^{\prime}\in[C]}\operatorname{conf}(x_{j}^{i},c^{\prime},f[t])-\operatorname{conf}(x_{j}^{i},c,f[t])&\text{otherwise},\end{cases} \tag{2}\]
where \(\operatorname{conf}\) returns the confidence of \(f\) when \(x_{j}^{i}\) is assigned to class \(c\) (i.e., the posterior probability). The motivation of this unlikelihood is rather simple. In the first case, if \(f[t]\) learned \(x_{j}^{i}\) with the pseudo label \(c\), and \(x_{j}^{i}\) is assigned to \(c\) by \(f[t]\) with high confidence, \(c\) is likely to be a correct label of \(x_{j}^{i}\). On the other hand, if \(x_{j}^{i}\) is assigned to \(c\) by \(f[t]\) with low confidence, \(c\) is not likely to be a correct label. In the second case, if \(f[t]\) learned \(x_{j}^{i}\) with some pseudo label other than \(c\), and \(x_{j}^{i}\) is assigned to \(c\) by \(f[t]\) with high confidence relative to the maximum confidence, \(c\) is likely to be a correct label of \(x_{j}^{i}\). More specifically, if it is difficult to learn \(x_{j}^{i}\) with the pseudo label \(c\) (i.e., \(\max_{c^{\prime}\in[C]}\operatorname{conf}(x_{j}^{i},c^{\prime},f[t])\) is small), and \(x_{j}^{i}\) is assigned to
Figure 1: Overview of the proposed method, LLP with online pseudo-label decision by regret minimization.
by \(f[t]\) with similar confidence to the maximum confidence, \(L^{i}_{c,j}[t]\) becomes a low value.
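As an illustration, Eq. (2) can be computed for a whole bag with a few lines of NumPy. This is only a sketch under the assumption that the soft-max outputs of \(f[t]\) are available as a \(C\times|B^{i}|\) array; the function name is ours.

```python
import numpy as np

def unlikelihood(conf, Y_hat):
    """Unlikelihood matrix L^i[t] of Eq. (2) for one bag.

    conf:  (C, |B^i|) array, conf[c, j] = posterior probability of class c for instance x_j^i under f[t]
    Y_hat: (C, |B^i|) one-hot array, the pseudo labels Y^i[t] used to train f[t]
    """
    max_conf = conf.max(axis=0, keepdims=True)    # max_{c'} conf(x_j^i, c', f[t]) for each instance
    L = max_conf - conf                           # second case of Eq. (2)
    L[Y_hat == 1] = 1.0 - conf[Y_hat == 1]        # first case: entries where c is the current pseudo label
    return L

rng = np.random.default_rng(0)
conf = rng.dirichlet(np.ones(5), size=32).T                  # fake soft-max outputs, shape (C=5, |B|=32)
Y_hat = np.eye(5, dtype=int)[rng.integers(0, 5, 32)].T       # random one-hot pseudo labels
print(unlikelihood(conf, Y_hat).shape)                       # (5, 32), entries in [0, 1]
```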
### pseudo-labeling with regret minimization approach
We consider deciding the pseudo labels for each bag \(B^{i}\) individually. Since the unlikelihood of the decided pseudo label for \(x_{j}^{i}\) at epoch \(t\) can be formulated as \(\hat{Y}^{i}_{:,j}[t]^{\top}L^{i}_{:,j}[t]\), we can evaluate the performance of the decided pseudo labels \(\hat{Y}^{i}[t]\) by the total unlikelihood over \(B^{i}\): \(\sum_{j=1}^{|B^{i}|}\hat{Y}^{i}_{:,j}[t]^{\top}L^{i}_{:,j}[t]\). Therefore, a straightforward goal is to predict the \(\hat{Y}^{i}[t]\) which minimizes \(\sum_{j=1}^{|B^{i}|}\hat{Y}^{i}_{:,j}[t]^{\top}L^{i}_{:,j}[t]\) at each epoch. However, \(L^{i}[t]\) is revealed only after the training (step 2), and it is difficult to give such a \(\hat{Y}^{i}[t]\) in step 1. Moreover, due to the instability of DNN training (especially in early epochs), \(L^{i}[t]\) may fluctuate, and thus predicting the \(\hat{Y}^{i}[t]\) which minimizes \(\sum_{j=1}^{|B^{i}|}\hat{Y}^{i}_{:,j}[t]^{\top}L^{i}_{:,j}[t]\) is not a reasonable goal. Instead, we aim to give pseudo labels that are suitable on average over the epochs:
\[(\hat{Y}^{i})^{*}=\operatorname*{arg\,min}_{\hat{Y}^{i}\in\mathcal{Y}^{i}} \sum_{t=1}^{T}\sum_{j=1}^{|B^{i}|}\hat{Y}^{i\top}_{:,j}L^{i}_{:,j}[t]. \tag{3}\]
The optimization problem is still difficult because we need to make the decision online, while the best solution \((\hat{Y}^{i})^{*}\) can be determined only after \(T\) epochs. Therefore, we consider deciding \(\hat{Y}^{i}[t]\) online to minimize the _regret_ for each bag \(B^{i}\), which is defined as:
\[R^{i}_{T}=\sum_{t=1}^{T}\sum_{j=1}^{|B^{i}|}\hat{Y}^{i}_{:,j}[t]^{\top}L^{i}_ {:,j}[t]-\sum_{t=1}^{T}\sum_{j=1}^{|B^{i}|}(\hat{Y}^{i}_{:,j})^{*\top}L^{i}_{ :,j}[t]. \tag{4}\]
The regret measure is used in online decision-making, which evaluates the difference in the relative performance between the actual decisions and the best decision in hindsight. That is, to achieve small regret indicates that the performance of the actual decisions is competitive to the best decision.
The most significant advantage of our online pseudo-labeling decision is that we can have theoretical support on the regret under any tough situation. As aforementioned, \(L^{i}[t]\) may fluctuate during DNN training. However, as detailed later, by utilizing a regret-bounded scheme, we can guarantee the performance of the pseudo labels for any sequences of \(L^{i}[1],\ldots,L^{i}[T]\), i.e., we do not need to care about the fluctuation of the DNN. Thus, we can decide on likely pseudo labels online by the regret minimization approach.
### pseudo-label decision using Follow the Perturbed Leader (FPL)
To minimize the regret in Eq. (4) online, we employ FPL [22], a popular regret minimization algorithm. The details of our algorithm using FPL are shown in Algorithm 1. The remarkable feature of FPL is to add the perturbation \(Z^{i}\in\mathbb{R}^{C\times|B^{i}|}\) with the rate \(\eta\) to the original \(L^{i}[t]\), as shown in lines 8 and 9, where \(\eta\) is the hyperparameter which controls the effect of the perturbation. If we naively use the optimal decision without perturbation, the decision is the optimal pseudo labels only at epoch \(t\), and thus it may overfit to the fluctuating \(L^{i}[t]\).
```
1:Inputs: Training bags \((B^{1},\mathbf{p}^{1}),\ldots,(B^{n},\mathbf{p}^{n})\), total epochs \(T\), initial DNN \(f[0]\), loss \(\ell\), \(\eta>0\)
2:Outputs:\(f[T]\): trained DNN
3:Initialize:\(\forall i\in[n],\hat{Y}^{i}[1]\in\mathcal{Y}^{i}\) and \(Z^{i}\in\mathbb{R}^{C\times|B^{i}|}\)
4:for epochs \(t=1,\ldots,T\)do
5: Obtain \(f[t]\) by training \(f[t-1]\) using \(\ell\) and the pseudo-labeled instances \(((x^{1}_{1},\hat{Y}^{1}_{:,1}[t]),\ldots,(x^{n}_{|B^{n}|},\hat{Y}^{n}_{:,|B^{n}|}[t]))\).
6:for\(i=1,\ldots,n\)do
7: Obtain \(L^{i}[t]\) by Eq.(2).
8: Sample the perturbation \(Z^{i}_{c,j}\sim\mathcal{N}(0,1)\) for any \(c\in[C]\) and \(j\in[|B^{i}|]\).
9: Decide pseudo labels by \[\hat{Y}^{i}[t+1]=\arg\min_{\hat{Y}^{i}\in\mathcal{Y}^{i}}\left(\sum_{\tau=1}^{t}\sum_{j=1}^{|B^{i}|}\hat{Y}^{i\top}_{:,j}(L^{i}_{:,j}[\tau]+\eta Z^{i}_{:,j})\right)\]
10:endfor
11:endfor
```
**Algorithm 1** pseudo-label decision by regret minimization.
Theoretically, the perturbation allows us to avoid such overfitting. Using the analysis of FPL [20], for any sequences \(L^{i}[1],\ldots,L^{i}[T]\), we can guarantee the upper bound of the regret as \(\mathbb{E}[R^{i}_{T}]=O(|B^{i}|\sqrt{T\ln|\mathcal{Y}^{i}|})\), where the expectation is taken over the randomness of FPL. This bound indicates that we can logarithmically suppress the complexity of the combinatorially large decision space \(|\mathcal{Y}^{i}|\), and that the average regret per epoch vanishes as the number of epochs increases.
The remaining issue is how to obtain the solution of Eq. (5), which is explicitly formulated as follows.
\[\min_{\hat{Y}^{i}\in\{0,1\}^{C\times|B^{i}|}}\sum_{\tau=1}^{t}\sum_{j=1}^{|B^{i}|}\left(\hat{Y}^{i\top}_{:,j}L^{i}_{:,j}[\tau]+\eta\hat{Y}^{i\top}_{:,j}Z^{i}_{:,j}\right) \tag{6}\]
\[\mathrm{s.t.}\qquad\forall j\in[|B^{i}|],\sum_{c=1}^{C}\hat{Y}^{i}_{c,j}=1,\ \ \forall c \in[C],\sum_{j=1}^{|B^{i}|}\hat{Y}^{i}_{c,j}=k^{i}_{c}.\]
The optimization problem is a mixed integer program (MIP), which is NP-complete [23] in general. However, the constraint matrix is totally unimodular; thus, we can obtain the optimal integral solution in polynomial time by relaxing the problem to a linear program.
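For concreteness, the decision step of Algorithm 1 can be sketched as follows. This is only an illustration, not the authors' implementation: the function and variable names (`decide_pseudo_labels`, `cum_unlikelihood`, `counts`) are ours, the Gaussian perturbation is added once to the accumulated unlikelihoods (the standard FPL form), and `scipy.optimize.linprog` is used for the LP relaxation discussed above.

```python
import numpy as np
from scipy.optimize import linprog

def decide_pseudo_labels(cum_unlikelihood, counts, eta=5.0, rng=None):
    """One FPL decision step (illustrative sketch, not the paper's code).

    cum_unlikelihood: (C, B) array holding the accumulated losses sum_tau L^i[tau].
    counts: length-C integer array k^i with counts.sum() == B.
    Returns a (C, B) one-hot assignment matrix.
    """
    rng = np.random.default_rng() if rng is None else rng
    C, B = cum_unlikelihood.shape
    # FPL: perturb the accumulated unlikelihoods with Gaussian noise of rate eta.
    cost = cum_unlikelihood + eta * rng.standard_normal((C, B))
    # Equality constraints of Eq. (6): each instance gets exactly one label,
    # and class c is assigned to exactly k_c instances.
    A_eq = np.zeros((B + C, C * B))
    for j in range(B):                       # column (instance) sums equal 1
        A_eq[j, j::B] = 1.0
    for c in range(C):                       # row (class) sums equal k_c
        A_eq[B + c, c * B:(c + 1) * B] = 1.0
    b_eq = np.concatenate([np.ones(B), counts])
    # Total unimodularity of the constraint matrix makes the LP optimum integral,
    # so the relaxation solves the MIP exactly.
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
    return np.rint(res.x).reshape(C, B)
```

For the largest bag sizes a sparse constraint matrix or a min-cost-flow solver would be preferable, but the logic is the same.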
## 4 Experiments
As we introduced, our focus is LLP with large bag sizes. Following previous LLP research [7, 6, 8, 9, 5], we consider virtual LLP using SVHN and CIFAR-10 datasets. First, we show the results on LLP with large bag sizes [24] compared with the state-of-the-art methods that use the proportion loss. Second, we show the ablation study of our online pseudo-labeling approach.
### Comparative methods
**Methods using proportion loss:** As a standard baseline, we consider a DNN trained with the proportion loss (we call this method **PL** for short). The standard proportion loss is formulated as:
\[\ell_{\mathrm{prop}}(B^{i},\mathbf{p}^{i},f)=-\sum_{c=1}^{C}p^{i}_{c}\log\frac{1}{ |B^{i}|}\sum_{j=1}^{|B^{i}|}\mathrm{conf}(x^{i}_{j},c,f). \tag{7}\]
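For reference, a minimal PyTorch-style sketch of this loss for a single bag follows; it assumes, as is common, that \(\mathrm{conf}(x,c,f)\) is the softmax confidence of the network, and the function name and clamping constant are ours.

```python
import torch
import torch.nn.functional as F

def proportion_loss(logits, proportions, eps=1e-8):
    """Proportion loss of Eq. (7) for one bag (illustrative sketch).

    logits: (B, C) network outputs for the B instances of the bag.
    proportions: (C,) ground-truth label proportions p^i of the bag.
    """
    conf = F.softmax(logits, dim=1)               # conf(x_j, c, f)
    bag_mean = conf.mean(dim=0).clamp_min(eps)    # (1/|B|) sum_j conf(x_j, c, f)
    return -(proportions * bag_mean.log()).sum()  # cross-entropy with p^i
```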
We also compare with \(\Pi\)**-model**[25] and **LLP-VAT**[8], which are the state-of-the-art LLP methods using the proportion loss, the implementations of which are publicly available.
**Methods for an ablation study:** First, to show the proposed unlikelihood (see Eq. (2)) is effective, we compare it with the simpler likelihood as below:
\[L_{c,j}^{i}[t]=1-\mathrm{conf}(x_{j}^{i},c,f[t]). \tag{8}\]
Second, to evaluate the effectiveness of our proposed pseudo-labeling approach based on regret minimization, we also compare it with the following two methods. One is "Greedy," which does not use the perturbation term to decide the pseudo labels in Eq. (5). The other is "Naive," which naively updates the pseudo labels using only the latest \(L^{i}[t]\), i.e., \(\hat{Y}^{i}[t+1]=\mathrm{arg\,min}_{\hat{Y}^{i}\in\mathcal{Y}^{i}}\sum_{j=1}^{|B^{i}|}\hat{Y}_{:,j}^{i\top}L_{:,j}^{i}[t]\). We used Eq. (2) as the unlikelihood for both Greedy and Naive.
### Implementation details
For all methods, we used ResNet18. The learning rate was set to \(0.0003\), and the model was optimized by Adam [26]. The number of training epochs was fixed at \(400\). The mini-batch size (number of bags) was fixed to \(4\). The hyperparameter \(\eta\) of the proposed method was set to \(5\). The number of original training instances was fixed to \(102400\). Training instances were split \(7:3\) into training and validation sets. We randomly separated the original training instances into bags. The bag sizes (i.e., the numbers of instances in a bag) were \(64,128,256,512,1024,2048,4096\). For example, if the bag size is \(1024\), the number of bags (\(n\)) is \(100\). We randomly created proportions \(n\) times, and then the instances of the bags were chosen based on each proportion. Including the proposed method, the best model for evaluation was chosen based on the mean absolute label-proportion error of the validation set.
### Results of comparative experiments
Tables 1 and 2 show the results on CIFAR-10 and SVHN, respectively. We can see that our method achieves the best accuracy when the bag size is large on both datasets. Fig. 2 plots the accuracy at different bag sizes. The accuracy of the methods using the proportion loss degrades as the bag size grows. In contrast, our method achieves high accuracy stably even when the bag size is large. We can say that the proposed method is robust to increasing bag sizes.
### Ablation study
As shown in Table 3, the performances of Greedy and Naive approaches were significantly worse than our proposed method. Moreover, we can see that the unlikelihood Eq. (2) performed better than Eq. (8). The results indicate that the regret minimization approach with the unlikelihood evaluation by Eq. (2) effectively works for the pseudo-labeling.
In Fig. 3, we can observe the difference in the pseudo-labeling behavior between ours and the others. The left side of Fig. 3 shows the rate of pseudo labels updated at each epoch on CIFAR-10. The Naive and Greedy (no perturbation) approaches fixed most of the pseudo labels within the initial 5 epochs, and their accuracies did not improve much at later epochs, as shown on the right side of Fig. 3. On the other hand, the proposed method updated more pseudo labels than Greedy and Naive. This is because the effect of the perturbation is larger than that of the accumulated unlikelihood \(\sum_{t}\hat{Y}^{i\top}L^{i}[t]\) at early epochs. That is, the proposed method can explore various pseudo labels and achieve better performance.
## 5 Conclusion
In this paper, we propose a novel LLP method based on pseudo-labeling with regret minimization, which is robust to increasing bag sizes compared to previous LLP methods. The key idea of the proposed method is that, by assigning pseudo labels to the instances across the bags, we can make full use of the instances to train a model even if the number of bags is small. We demonstrated the effectiveness of the proposed method through comparative and ablation studies.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{7}{c}{bag size (number of bags)} \\ \cline{2-8} method & 64 & 128 & 256 & 512 & 1024 & 2048 & 4096 \\ & (1600) & (800) & (400) & (200) & (100) & (50) & (25) \\ \hline PL & **61.24** & 57.12 & 55.54 & 55.07 & 51.09 & 50.32 & 42.60 \\ \(\Pi\)-model & 60.68 & 55.97 & 52.10 & 51.56 & 50.03 & 47.96 & 47.15 \\ LLP-VAT & 59.23 & 53.64 & 52.58 & 50.52 & 52.05 & 45.53 & 44.81 \\ \hline ours & 58.59 & **59.34** & **60.76** & **61.13** & **61.24** & **59.83** & **59.86** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Accuracy (%) on CIFAR-10.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{5}{c}{bag size (number of bags)} \\ \cline{2-7} method & 64 & 128 & 256 & 512 & 1024 & 2048 & 4096 \\ & (1600) & (800) & (400) & (200) & (100) & (50) & (25) \\ \hline ours w/ Eq. (2) & **58.59** & **59.34** & **60.76** & **61.13** & **61.24** & **59.83** & **59.86** \\ ours w/ Eq. (8) & 55.01 & 53.18 & 54.11 & 52.80 & 53.05 & 52.48 & 50.75 \\ Greedy & 25.77 & 22.51 & 22.64 & 25.45 & 23.82 & 22.81 & 21.35 \\ Naive & 39.05 & 35.47 & 36.22 & 36.44 & 35.33 & 32.86 & 32.76 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Accuracy (%) on CIFAR-10 (ablation study).
Figure 3: (Left) The rate of the updated pseudo labels compared to the previous epoch on CIFAR-10 with bag size 4096. (Right) The accuracy of the pseudo labels.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{7}{c}{bag size (number of bags)} \\ \cline{2-8} method & 64 & 128 & 256 & 512 & 1024 & 2048 & 4096 \\ & (1600) & (800) & (400) & (200) & (100) & (50) & (25) \\ \hline PL & 90.19 & **87.95** & **87.35** & 84.76 & 81.05 & 78.35 \\ \(\Pi\)-model & **90.94** & 87.08 & 82.97 & 81.87 & 77.27 & 79.05 & 77.58 \\ LLP-VAT & 88.02 & 84.97 & 83.04 & 81.96 & 80.09 & 80.17 & 78.58 \\ \hline ours & 87.36 & 85.42 & 84.79 & 85.87 & **85.99** & **86.08** & **88.37** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Accuracy (%) on SVHN.
Figure 2: The accuracy at different bag sizes. |
2301.11946 | Motion of an electron through vacuum fluctuations | We study the effects of the electromagnetic vacuum on the motion of a
nonrelativistic electron. First, we derive the equation of motion for the
expectation value of the electron's position operator. We show how this
equation has the same form as the classical Abraham-Lorentz equation but, at
the same time, is free of the well known runaway solution. Second, we study
decoherence induced by vacuum fluctuations. We show that decoherence due to
vacuum fluctuations that appears at the level of the reduced density matrix of
the electron, obtained after tracing over the radiation field, does not
correspond to actual irreversible loss of coherence. | Anirudh Gundhi, Angelo Bassi | 2023-01-27T19:00:07Z | http://arxiv.org/abs/2301.11946v2 | # On the motion of an electron through vacuum fluctuations
###### Abstract
We study the effects of the electromagnetic vacuum on the motion of a non-relativistic electron. To this end, the vacuum is treated as the environment and the electron as the system within the framework of open quantum systems. After tracing over the environmental degrees of freedom, we obtain the time evolution of the reduced density matrix of the electron in the position basis. Using the master equation, in the first part of the article we derive the equation of motion for the expectation value of the position operator. In the presence of an external potential, the equation turns out to be the same as its classical counterpart: the Abraham-Lorentz equation. However, in its absence, the dynamics is free of the runaway solution. In the second part of the article we study decoherence induced by vacuum fluctuations. We show that decoherence that appears at the level of the reduced density matrix does not correspond to actual irreversible loss of coherence.
Numerous physical phenomena such as the Casimir effect [1; 2; 3], the Unruh effect [4; 5; 6] and the Lamb shift [7; 8; 9; 10] are attributed to the presence of vacuum fluctuations. The possibility of decoherence due to vacuum fluctuations, as being fundamental and unavoidable, has also been discussed in various works [11; 12; 13; 14; 15; 16; 17; 18] without arriving at a general consensus.
The interaction of an electron with the vacuum fluctuations can be studied within the framework of open quantum systems. We use this formalism to study two specific phenomena. First, we derive the equation of motion (EOM) for the electron in the presence of an external potential that provides a quantum mechanical description of radiation emission by an accelerated electron. Second, we investigate if the interactions with the vacuum fluctuations alone can lead to spatial decoherence of the electron.
The quantum mechanical version of the classical Abraham-Lorentz (AL) equation, which describes the recoil force experienced by an accelerated electron due to the emission of radiation [19; 20; 21; 22], has been previously derived, for example, in [10]. Instead of the electron's position, the equation was obtained for the position operator, and it was then argued why this operator equation is fundamentally different from the classical one. The difficulties in making a direct connection with the classical dynamics were attributed to the presence of the additional transverse electric field operator of the electromagnetic vacuum, which is zero classically. A similar problem persists concerning the interpretation of the quantum Langevin equation obtained in [17] for an electron interacting with vacuum fluctuations.
In our work, we use the path-integral formalism to obtain the explicit expression of the reduced density matrix in the position basis. The formalism used is adopted from [23]. Within this framework, instead of the Langevin equation, we derive the master equation which yields the EOM for the expectation value of the position operator which provides a direct correspondence with the classical dynamics. In the presence of an arbitrary potential, we show that the classical EOM is the same as the one obtained from the reduced quantum dynamics. Moreover, the equation that emerges after a quantum mechanical treatment appears to be free of the problems associated with the AL equation: the existence of the runaway solution which leads to an exponential increase of the electron's acceleration, even in the absence of an external potential [19; 20; 21].
Concerning decoherence, we show that the loss of coherence due to vacuum fluctuations at the level of the reduced density matrix is only apparent and reversible. To this end we show that by 'switching off' the interactions with the EM field, the original coherence is restored at the level of the system. Moreover, the expression for the decoherence factor that we obtain differs from the ones obtained in [17; 18], where the authors argue for a finite loss of coherence for momentum superpositions due to vacuum fluctuations, but with different estimates for the magnitude of decoherence.
_The action._ We work in the Coulomb gauge in which the Lagrangian relevant for the dynamics of a non-relativistic electron in the presence of an external potential and an external radiation field is given by [24]
\[L(t)=\frac{1}{2}m\dot{\mathbf{r}}_{e}^{2}-V_{0}(\mathbf{r}_{e})+\int d^{3}r \mathcal{L}_{\mbox{\tiny EM}}-e\mathbf{r}_{e}\mathbf{E}_{\perp}(\mathbf{r}_{e})\,. \tag{1}\]
Here, \(\mathbf{r}_{e}\) denotes the position of the electron, \(m\) the bare mass, \(e\) the electric charge, \(V_{0}(\mathbf{r}_{e})\) an arbitrary bare external potential (acting only on the electron) and \(\mathcal{L}_{\mbox{\tiny EM}}:=(\epsilon_{0}/2)\left(\mathbf{E}_{\perp}^{2}( \mathbf{r})-c^{2}\mathbf{B}^{2}(\mathbf{r})\right)\) in which \(\mathbf{E}_{\perp}\) denotes the transverse electric field, \(\mathbf{B}\) the magnetic field, \(\epsilon_{0}\) the permittivity of free space and \(c\) the speed of light. As detailed in Appendix A, Eq. (1) is obtained from the general Lagrangian for electrodynamics under the non-relativistic approximation.
Following the standard prescription, the EM field is quantized by quantizing the transverse vector potential
\(\hat{\mathbf{A}}_{\perp}\). In terms of its conjugate momentum \(\hat{\mathbf{\Pi}}\) (which is not proportional to \(\mathbf{E}_{\perp}\) due to the form of the interaction term in Eq. (1), c.f. Appendix A), we define and work with \(\hat{\mathbf{\Pi}}_{\text{\tiny E}}=-\hat{\mathbf{\Pi}}/\epsilon_{0}\), since it appears repeatedly in the calculations. Further, the quantized EM field is initially assumed to be in its vacuum state.
_The master equation via path integral formalism._ The position basis representation of the full density matrix within the path integral formalism is given by [23; 25]
\[\langle x^{\prime}_{\rm f}|\,\hat{\rho}(t)\,|x_{\rm f}\rangle=\int D[x,x^{\prime}]e^{\frac{i}{\hbar}(S^{\prime}_{\text{\tiny T}}-S_{\text{\tiny T}})}\rho(x^{\prime}_{i},x_{i},t_{i})\,. \tag{2}\]
Eq. (2) describes the density matrix at some final time \(t\), starting from an initial time \(t_{i}\), such that \(x_{i}:=x(t_{i})\), \(x^{\prime}_{i}:=x^{\prime}(t_{i})\), with \(S^{\prime}_{\text{\tiny T}}:=S_{\text{\tiny T}}[x^{\prime}]\) (and similarly \(S_{\text{\tiny T}}:=S_{\text{\tiny T}}[x]\)) denoting the full action describing some general dynamics along the \(\mathbf{x}\)-axis. The path integral in Eq. (2) is computed with the boundary conditions \(x(t)=x_{\rm f}\), \(x^{\prime}(t)=x^{\prime}_{\rm f}\), and includes the integral over \(x_{i}\) and \(x^{\prime}_{i}\).
In our case, the quantized radiation field, initially assumed to be in its vacuum state, is treated as the environment and the electron as the system. We are interested in the reduced effective dynamics of the electron, having taken into account its interaction with the environment. This is described by the reduced density matrix \(\hat{\rho}_{r}\), obtained by tracing over the environmental degrees of freedom. After performing the trace, and assuming the initial density matrix to be in the product state \(\hat{\rho}(t_{i})=\hat{\rho}_{\text{\tiny S}}(t_{i})\otimes\hat{\rho}_{\text{\tiny EM}}(t_{i})\), \(\hat{\rho}_{r}\) takes the form [23] (c.f. Appendix B)
\[\rho_{r}(x^{\prime}_{\rm f},x_{\rm f},t)=\int D[x,x^{\prime}]e^{\frac{i}{\hbar}(S^{\prime}_{\text{\tiny S}}-S_{\text{\tiny S}}+S_{\text{\tiny IF}}[x,x^{\prime}])}\rho_{r}(x^{\prime}_{i},x_{i},t_{i})\,,\] \[\text{with }S_{\text{\tiny IF}}=\frac{1}{2}\int_{t_{i}}^{t}dt_{1}dt_{2}x^{a}(t_{1})M_{ab}(t_{1};t_{2})x^{b}(t_{2})\,. \tag{3}\]
Here, \(S_{\text{\tiny S}}\) denotes the action corresponding to the system Hamiltonian (c.f. Appendix A) and, under the Einstein summation convention, we have introduced the vector notation \(x^{a}(t_{1})=x(t_{1})\) for \(a=1\) and \(x^{a}(t_{1})=x^{\prime}(t_{1})\) for \(a=2\), such that the matrix elements \(M_{ab}\) are related to the two-point correlations of the canonical transverse electric field operator \(\hat{\mathbf{\Pi}}_{\text{\tiny E}}\) (c.f. Appendix B). Since the electron's motion is considered to be along the \(\mathbf{x}\)-axis only, the two-point correlations involve only the x-component of \(\hat{\mathbf{\Pi}}_{\text{\tiny E}}\). In terms of the creation and annihilation operators, and the x-component of the unit polarization vector \(\varepsilon^{x}_{\mathbf{k}}\), it is given by [26]
\[\hat{\Pi}_{\text{\tiny E}}(\mathbf{r},t)=iC\int d^{3}k\sqrt{k}\sum_{\varepsilon }\hat{a}_{\varepsilon}(\mathbf{k})e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)} \varepsilon^{x}_{\mathbf{k}}+\text{c.c.}\,, \tag{4}\]
with the constant prefactor \(C:=\left(\hbar c/(2\epsilon_{0}(2\pi)^{3})\right)^{\frac{1}{2}}\). By making a change of basis to \((X(t),u(t))\) with \(X(t)=(x(t)+x^{\prime}(t))/2\) and \(u(t)=x^{\prime}(t)-x(t)\), the so-called influence functional \(S_{\text{\tiny IF}}\) [27] takes the simplified form
\[S_{\text{\tiny IF}}[X,u](t)=\int_{t_{i}}^{t}dt_{1}dt_{2}\left[i\frac{u(t_{1})\mathcal{N}(t_{1};t_{2})u(t_{2})}{2}+u(t_{1})\mathcal{D}(t_{1};t_{2})X(t_{2})\right]\,, \tag{5}\]
where the noise kernel \(\mathcal{N}(t_{1};t_{2})\) and the dissipation kernel \(\mathcal{D}(t_{1};t_{2})\) are defined to be
\[\mathcal{N}(t_{1};t_{2}):= \frac{e^{2}}{2\hbar}\left\langle 0|\left\{\hat{\Pi}_{\text{\tiny E}}(t_{ 1}),\hat{\Pi}_{\text{\tiny E}}(t_{2})\right\}|0\right\rangle\,,\] \[\mathcal{D}(t_{1};t_{2}):= \frac{ie^{2}}{\hbar}\left\langle 0|\left[\hat{\Pi}_{\text{\tiny E}}(t_{ 1}),\hat{\Pi}_{\text{\tiny E}}(t_{2})\right]|0\right\rangle\theta(t_{1}-t_{2})\,. \tag{6}\]
Here, \(|0\rangle\) is the vacuum state of the free radiation field and \(\theta(\tau)\) is the Heaviside step function. As in [17; 18], we have also used the standard non-relativistic dipole approximation in which one ignores the spatial dependence of the EM fields. From the definitions in Eq. (6) and the expression for \(\hat{\Pi}_{\text{\tiny E}}\) in Eq. (4), the explicit expressions for the noise and the dissipation kernels can be obtained.
It is important to note that the evaluation of the kernels necessitates the introduction of a high frequency cut-off in the calculations. This is due to the fact that the expressions for the kernels, which only depend upon the difference \(\tau:=t_{1}-t_{2}\), diverge at \(\tau=0\). A cure is provided by the standard Hadamard finite part prescription [23] which introduces the convergence factor \(e^{-k/k_{\text{\tiny max}}}\) inside the integrals appearing in the vacuum expectation values of the commutator and the anti-commutator. In terms of \(\epsilon=1/\omega_{\text{\tiny max}}\), with \(\omega_{\text{\tiny max}}=k_{\text{\tiny max}}c\) being the high frequency cut-off, the kernels read (c.f. Appendix C)
\[\mathcal{N}(t_{1};t_{2})=\mathcal{N}(\tau) =\frac{e^{2}}{\pi^{2}\epsilon_{0}c^{3}}\frac{\left(\epsilon^{4}-6 \epsilon^{2}\tau^{2}+\tau^{4}\right)}{\left(\epsilon^{2}+\tau^{2}\right)^{4}}\,, \tag{7}\] \[\mathcal{D}(t_{1};t_{2})=\mathcal{D}(\tau) =\frac{e^{2}}{3\pi\epsilon_{0}c^{3}}\theta(\tau)\frac{d^{3}}{d\tau^ {3}}\delta_{\epsilon}(\tau)\,. \tag{8}\]
The function \(\pi\delta_{\epsilon}(\tau):=\epsilon/(\tau^{2}+\epsilon^{2})\) appearing in Eq. (8) behaves like a Dirac delta for \(\tau\gg\epsilon\) but is non-singular at \(\tau=0\) due to the finite cut-off. We refer to Appendix C for more details.
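The \((\epsilon,\tau)\) structure of the noise kernel in Eq. (7) can be checked directly. Assuming, up to overall constants, that the vacuum two-point function of \(\hat{\Pi}_{\text{\tiny E}}\) reduces to the mode integral \(\int_{0}^{\infty}dk\,k^{3}e^{-k/k_{\text{\tiny max}}}\cos(kc\tau)\), as follows from Eq. (4) and the convergence factor above, a short sympy sketch (in units \(c=1\), variable names ours) reproduces the quoted expression:

```python
import sympy as sp

k, tau, eps = sp.symbols('k tau epsilon', positive=True)
# Basic regularized mode integral (c = 1, eps = 1/k_max):
I0 = sp.integrate(sp.exp(-eps * k) * sp.cos(tau * k), (k, 0, sp.oo))  # eps/(eps**2 + tau**2)
# Each power of k comes from -d/d(eps), so the k^3 integral underlying Eq. (7) is
I3 = -sp.diff(I0, eps, 3)
target = 6 * (eps**4 - 6 * eps**2 * tau**2 + tau**4) / (eps**2 + tau**2)**4
print(sp.simplify(I3 - target))   # 0: the (eps, tau) structure of the noise kernel
```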
Following [23], starting from Eq. (3) and using the explicit functional form of \(S_{\text{\tiny IF}}\) in Eq. (5), the master equation for the reduced density matrix can be derived. Up to second order in the interactions, we obtain its expression to be (c.f. Appendix B for a detailed derivation)
\[\partial_{t}\hat{\rho}_{r}(t)= -\frac{i}{\hbar}\left[\hat{\mathrm{H}}_{s},\hat{\rho}_{r}(t)\right]\] \[-\frac{1}{\hbar}\int_{0}^{t-t_{i}}d\tau\mathcal{N}(t;t-\tau) \left[\hat{x},[\hat{x}_{\text{\tiny H}_{s}}(-\tau),\hat{\rho}_{r}(t)]\right]\] \[+\frac{i}{2\hbar}\int_{0}^{t-t_{i}}d\tau\mathcal{D}(t;t-\tau) \left[\hat{x},\{\hat{x}_{\text{\tiny H}_{s}}(-\tau),\hat{\rho}_{r}(t)\}\right]\,. \tag{9}\]
The first line of the master equation is the usual Liouville-von Neumann evolution and involves only the system Hamiltonian \(\hat{\mathrm{H}}_{s}\). In the second and the third lines, which encode the system's interaction with the environment, the operator \(\hat{x}_{\text{\tiny H}_{s}}(-\tau)\) is used as a placeholder for the expression
\[\hat{x}_{\text{\tiny H}_{s}}(-\tau):=\hat{U}_{s}^{-1}(t-\tau;t)\hat{x}\hat{U}_{s}( t-\tau;t)\,, \tag{10}\]
where \(\hat{U}_{s}(t-\tau;t)\) is the unitary operator that evolves the statevector of the system from time \(t\) to \(t-\tau\) via the system Hamiltonian \(\hat{\mathrm{H}}_{s}\) only. The operator \(\hat{x}\) without the subscript is the usual Schrodinger operator such that \(\hat{x}_{\mathrm{n}_{s}}(0)=\hat{x}\).
Note that due to the coupling between the position of the electron and the transverse electric field in Eq. (1), the system Hamiltonian receives an additional contribution such that \(\hat{\mathrm{H}}_{s}=\hat{p}^{2}/(2m)+\hat{V}_{0}(x)+\hat{V}_{\mathrm{EM}}(x)\), where, having introduced a cut-off scale in the calculations and considering the motion of the electron along the \(\mathbf{x}\)-axis only, \(\hat{V}_{\mathrm{EM}}(x)=\frac{e^{2}\omega_{\mathrm{max}}^{3}}{3\pi^{2}\epsilon_{0}c^{3}}\hat{x}^{2}\) (c.f. Appendix A). We point out that since the master equation is valid up to second order in the interactions and since the operator \(\hat{x}_{\text{\tiny H}_{s}}(-\tau)\) appears alongside the dissipation and the noise kernels (which are already second order in \(e\)), the time evolution governed by \(\hat{U}_{s}(t-\tau;t)\) in Eq. (10) is understood to involve only \(\hat{V}_{0}\) and not \(\hat{V}_{\mathrm{EM}}\). Therefore, up to second order in the interactions, \(\hat{V}_{\mathrm{EM}}\) only contributes via the Liouville-von Neumann term.
_The equation of motion._ Using the master equation (9), we obtain the coupled equations for the time evolution of \(\langle\hat{x}\rangle\) and \(\langle\hat{p}\rangle\). It is interesting to compare the quantum mechanical EOM with the one derived classically.
Within classical electrodynamics, a charged spherical shell of radius R which is accelerated by an external force \(\mathrm{F}_{\mathrm{ext}}\), experiences an extra recoil force (radiation reaction) due to the emission of radiation. By taking the limit \(\mathrm{R}\to 0\) in the equation describing its dynamics, one obtains the Abraham-Lorentz formula
\[m_{\mathrm{R}}\ddot{x}=\mathrm{F}_{\mathrm{ext}}+\frac{2\hbar\alpha}{3c^{2}} \dddot{x}\,, \tag{11}\]
where \(m_{\mathrm{R}}\) denotes the observed renormalized mass. See for example [20; 28] and the references therein for the derivation of the AL formula. The triple derivative term appearing in Eq. (11) can be interpreted as the friction term that leads to energy loss due to radiation emission. For instance, when the external potential is taken to be \(V_{0}(x)=(1/2)m\omega_{0}^{2}x^{2}\), one has \(\dddot{x}\approx-\omega_{0}^{2}\dot{x}\) [22]. However, the issue with Eq. (11) is that the same triple derivative term persists even when the external potential is switched off, leading to an exponential increase of the particle's acceleration. A discussion of the AL formula and the problems associated with it can be found in [19; 20; 21; 28] and the references therein.
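To make the runaway problem quantitative: with \(\mathrm{F}_{\text{ext}}=0\), Eq. (11) admits solutions with \(\ddot{x}\propto e^{t/\tau_{0}}\), where \(\tau_{0}=2\alpha\hbar/(3m_{\mathrm{R}}c^{2})\). A small numerical sketch (using standard CODATA constants) gives the familiar value \(\tau_{0}\approx 6\times 10^{-24}\,\mathrm{s}\) for the electron:

```python
import math

hbar = 1.054571817e-34      # J s
alpha = 7.2973525693e-3     # fine-structure constant
me_c2 = 8.1871057769e-14    # electron rest energy in J

tau0 = 2 * alpha * hbar / (3 * me_c2)   # characteristic runaway time of Eq. (11)
print(f"tau0 = {tau0:.3e} s")           # ~6.3e-24 s
```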
In the case that we are considering, the rate of change of the expectation values is calculated from Eq. (9). The coupled differential equations for \(\langle\hat{x}\rangle\) and \(\langle\hat{p}\rangle\) are given by (c.f. Appendix E)
\[\frac{d}{dt}\langle\hat{x}\rangle= \mathrm{Tr}(\hat{x}\dot{\hat{\rho}}_{r})=\frac{\langle\hat{p}\rangle}{m}\,, \tag{12}\] \[\frac{d}{dt}\langle\hat{p}\rangle= -\langle\hat{V}_{0,x}\rangle+\mathrm{Tr}\left(\hat{\rho}_{r}(t)\int_{0}^{t-t_{i}}d\tau\mathcal{D}(\tau)\hat{x}_{\text{\tiny H}_{s}}(-\tau)\right)\] \[-2e^{2}\omega_{\mathrm{max}}^{3}\langle\hat{x}\rangle/(3\pi^{2}\epsilon_{0}c^{3})\,. \tag{13}\]
While it might not be apparent at first glance, Eq. (13) is actually local in time due to the form of the dissipation kernel in Eq. (8). To see this explicitly, the integral involving the dissipation kernel needs to be evaluated. In order to do so, we integrate by parts such that the derivatives acting on \(\delta_{\epsilon}\) (which appear in the expression obtained for the dissipation kernel in Eq. (8)) are shifted onto the adjacent function. The integral is calculated explicitly in Appendix D and the following identity is derived
\[\int_{0}^{t}d\tau\mathcal{D}(\tau)f(\tau)= -\frac{2\alpha\hbar}{3c^{2}}f^{\prime\prime\prime}(0)-\frac{4 \alpha\hbar\omega_{\mathrm{max}}}{3\pi c^{2}}f^{\prime\prime}(0)\] \[+2e^{2}\omega_{\mathrm{max}}^{3}f(0)/(3\pi^{2}\epsilon_{0}c^{3})\,. \tag{14}\]
Here, the prime denotes the derivative taken with respect to \(\tau\) and \(\alpha=e^{2}/(4\pi\epsilon_{0}\hbar c)\) the fine structure constant. Using the identity (14), Eq. (13) becomes
\[\frac{d}{dt}\langle\hat{p}\rangle= -\langle\hat{V}_{0,x}\rangle-\frac{4\alpha\hbar\omega_{\mathrm{max }}}{3\pi c^{2}}\mathrm{Tr}\left(\hat{\rho}_{r}(t)\left.\frac{d^{2}}{d\tau^{2}} \hat{x}_{\mathrm{n}_{s}}(-\tau)\right|_{\tau=0}\right)\] \[-\frac{2\alpha\hbar}{3c^{2}}\mathrm{Tr}\left(\hat{\rho}_{r}(t) \left.\frac{d^{3}}{d\tau^{3}}\hat{x}_{\mathrm{n}_{s}}(-\tau)\right|_{\tau=0} \right)\,. \tag{15}\]
We see that in the EOM (15) only the original bare potential \(\hat{V}_{0}\) remains, because the contribution coming from \(\hat{V}_{\mathrm{EM}}\) in the last line of Eq. (13) is canceled by the term in the last line of the integral (14), after one introduces the cut-off consistently throughout the calculations. For more details we refer to Appendices A and E, or Ref. [17] where the same cancellation was argued for.
The time derivatives of \(\hat{x}_{\text{\tiny H}_{s}}\) in Eq. (15) can be easily computed, since from Eq. (10) we have the relation (up to leading order in the interactions)
\[\frac{d}{d\tau}\hat{x}_{\text{\tiny H}_{s}}(-\tau)=-\frac{i}{\hbar}\left[\hat{V}_{0}(x)+\frac{\hat{p}^{2}}{2m},\hat{x}_{\text{\tiny H}_{s}}(-\tau)\right]\,. \tag{16}\]
First we consider the situation when the external potential is switched off. From Eq. (16), with \(\hat{V}_{0}(x)=0\), taking another time derivative of \(\hat{x}_{\text{\tiny H}_{s}}\) we get
\[\left.\frac{d^{2}}{d\tau^{2}}\hat{x}_{\text{\tiny H}_{s}}(-\tau)\right|_{\tau=0}=\left(\frac{-i}{\hbar}\right)^{2}\left[\frac{\hat{p}^{2}}{2m},\left[\frac{\hat{p}^{2}}{2m},\hat{x}\right]\right]=0\,, \tag{17}\]
where, in Eq. (17), we have also used the relation \(\hat{x}_{\text{\tiny H}_{s}}(0)=\hat{x}\), together with the fact that \([\hat{p}^{2}/(2m),\hat{x}]=-i\hbar\hat{p}/m\) commutes with \(\hat{p}^{2}\). Similarly, the third derivative term appearing in Eq. (15) also vanishes. Therefore, when \(\hat{V}_{0}(x)=0\), Eq. (15) simply reduces to
\[\frac{d}{dt}\langle\hat{p}\rangle=0\,. \tag{18}\]
Unlike the AL formula in Eq. (11), we see that up to second order in the interactions there are no solutions which allow for an exponential increase of the particle's acceleration in the absence of an external potential.
Next we consider the case when the external potential is switched on. When the potential does not depend explicitly on time, the double and triple derivative terms
in Eq. (15) yield double and triple commutators with respect to the system Hamiltonian respectively (discarding \(\hat{V}_{\text{EM}}\) upto second order). Eq. (15) can then be written as
\[\frac{d}{dt}\langle\hat{p}\rangle= \text{F}_{\text{ext}}+\frac{4\alpha\hbar\omega_{\text{\tiny max} }}{3\pi c^{2}}\text{Tr}\left(\frac{1}{\hbar^{2}}\hat{\rho}_{r}(t)\left[\hat{ \text{H}}_{s},\left[\hat{\text{H}}_{s},\hat{x}\right]\right]\right)\] \[-\frac{2\alpha\hbar}{3c^{2}}\text{Tr}\left(\frac{i}{\hbar^{3}} \hat{\rho}_{r}(t)\left[\hat{\text{H}}_{s},\left[\hat{\text{H}}_{s},\left[ \hat{\text{H}}_{s},\hat{x}\right]\right]\right]\right)\,. \tag{19}\]
Here, we have defined \(\text{F}_{\text{ext}}:=-\langle\hat{V_{0}}(x)_{,x}\,\rangle\). Due to the presence of \(\hat{V}_{0}(x)\), the commutators of \(\hat{\text{H}}_{s}\) with \(\hat{x}\) no longer vanish. To simplify the equation further, we shift the commutators onto the density matrix using the cyclic property \(\text{Tr}(\hat{a}\cdot[\hat{b},\hat{c}])=\text{Tr}([\hat{a},\hat{b}]\cdot \hat{c})\) such that
\[\text{Tr}\left(\hat{\rho}_{r}\left[\hat{\text{H}}_{s},\left[\hat{\text{H}}_{s },\hat{x}\right]\right]\right)=\text{Tr}\left(\hat{x}\left[\hat{\text{H}}_{s },\left[\hat{\text{H}}_{s},\hat{\rho}_{r}\right]\right]\right)\,. \tag{20}\]
The same relationship is also obtained for the triple commutator term, with an additional minus sign. Remembering that the master equation is only valid upto second order in the interaction, it is sufficient to evaluate the trace in Eq. (19) at \(0^{\text{th}}\) order. This implies that within the trace, the time dependence of the density matrix can be evaluated only by retaining the Liouville-von Neuman term in Eq. (9). The right hand side of Eq. (20) thus becomes proportional to \(\text{Tr}(\hat{x}\hat{\tilde{\rho}}_{r})\). With these simplifications, Eq. (19) can be written as
\[m_{{}_{\text{R}}}\frac{d^{2}}{dt^{2}}\langle\hat{x}\rangle=\text{F}_{\text{ ext}}+\frac{2\alpha\hbar}{3c^{2}}\frac{d^{3}}{dt^{3}}\langle\hat{x}\rangle\,. \tag{21}\]
After identifying the observed electron mass with the renormalized mass \(m_{{}_{\text{R}}}:=m+(4\alpha\hbar\omega_{\text{\tiny max}})/(3\pi c^{2})\), Eq. (21) reduces to the Abraham-Lorentz formula (11). The same result is also obtained for the general case in which the bare potential \(\hat{V}_{0}(x,t)\) depends explicitly on time, as shown in Appendix E. We remark that the equation of motion derived quantum mechanically only reduces to Eq. (11) in the presence of an external potential. When the external potential is switched off, the EOM reduces to Eq. (18) and is therefore free of the runaway solution. _Decoherence_. In this final part of the article, we are interested in assessing if the spatial superposition of a charged particle at rest can be suppressed via its interaction with the vacuum fluctuations alone. We begin by writing the position space representation of the master equation (9) relevant for decoherence
\[\partial_{t}\rho_{r}=\left[-\frac{(x^{\prime}-x)^{2}\mathcal{N}_{1}(t)}{\hbar }\right]\rho_{r}\,, \tag{22}\]
where \(\mathcal{N}_{1}(\tau)\) is defined to be \(\mathcal{N}_{1}(\tau):=\int_{0}^{\tau}d\tau^{\prime}\mathcal{N}(\tau^{\prime}) =-4\alpha\hbar(\tau^{3}-3\tau\epsilon^{2})(\tau^{2}+\epsilon^{2})^{-3}(3\pi c ^{2})^{-1}\,.\) We have set \(t_{i}=0\) and only retained the second term involving the noise kernel in Eq. (9). This is because the other terms typically give subdominant contributions when the question of interest is to evaluate the rate of decay of the off-diagonal elements of the density matrix at late times [23; 29]. We have also used the expression of the noise kernel in Eq. (7) inside the integral to obtain the expression for \(\mathcal{N}_{1}\). Integrating Eq. (22) we get
\[\rho_{r}(x^{\prime},x,t)=\exp\left(-\frac{(x^{\prime}-x)^{2}}{\hbar}\mathcal{N }_{2}(t)\right)\rho_{r}(x^{\prime},x,0)\,, \tag{23}\]
where \(\mathcal{N}_{2}(t):=\int_{0}^{t}d\tau\mathcal{N}_{1}(\tau)\). The function \(\mathcal{N}_{2}(t)\) is inversely proportional to the coherence length \(l_{x}(t)\) defined by \(l_{x}(t):=(\hbar/\mathcal{N}_{2}(t))^{\frac{1}{2}}\). After performing the integral over \(\mathcal{N}_{1}\) the expression for the coherence length is obtained to be
\[l_{x}(t)=\sqrt{\frac{3\pi c^{2}}{2\alpha\omega_{\text{\tiny max}}^{2}}\cdot \frac{(t^{2}+\epsilon^{2})^{2}}{t^{4}+3t^{2}\epsilon^{2}}}\overset{t\gg\epsilon }{=}\sqrt{\frac{3\pi}{2\alpha}}\frac{1}{k_{\text{\tiny max}}}\,. \tag{24}\]
We see that the coherence length approaches a constant value on time scales much larger than \(\epsilon=1/\omega_{\text{\tiny max}}\) and that its value scales inversely with the UV cut-off. Taken literally, if one sets \(k_{\text{\tiny max}}=1/\lambda_{db}\), where \(\lambda_{db}\) is the de Broglie wavelength of the electron, one would arrive at the conclusion that vacuum fluctuations lead to decoherence with the coherence length of the charged particle asymptotically reducing to \(l_{x}\approx 25\lambda_{db}\) within the time scales \(t\approx\lambda_{db}/c\).
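The expression for \(\mathcal{N}_{1}\) used above and the asymptotic value quoted here can both be checked directly; the following sketch (with our own variable names, and with \(A\) standing for \(e^{2}/(\pi^{2}\epsilon_{0}c^{3})\)) verifies that \(\mathcal{N}_{1}\) is indeed the primitive of the noise kernel of Eq. (7), and evaluates the prefactor \(\sqrt{3\pi/(2\alpha)}\approx 25.4\) entering Eq. (24):

```python
import math
import sympy as sp

# Check that N_1 is the primitive of the noise kernel N of Eq. (7).
tau, eps, A = sp.symbols('tau epsilon A', positive=True)   # A := e^2/(pi^2 eps_0 c^3)
N  = A * (eps**4 - 6 * eps**2 * tau**2 + tau**4) / (eps**2 + tau**2)**4
N1 = -(A / 3) * (tau**3 - 3 * tau * eps**2) / (tau**2 + eps**2)**3
print(sp.simplify(sp.diff(N1, tau) - N))     # 0, and N1 vanishes at tau = 0

# Asymptotic coherence length of Eq. (24): l_x = sqrt(3*pi/(2*alpha)) / k_max.
alpha = 7.2973525693e-3
print(math.sqrt(3 * math.pi / (2 * alpha)))  # ~25.4, i.e. l_x ~ 25 lambda_db
```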
_False Decoherence_. It is clearly unsatisfactory to have an observable effect scale explicitly with the UV cut-off, since the precise numerical value of the cut-off is, strictly speaking, arbitrary. A similar situation was encountered in [30] in a different context of a harmonic oscillator coupled to a massive scalar field. However, it was argued in [30] that the reduced density matrix of the harmonic oscillator described false decoherence. In such a situation, the off-diagonal elements of the density matrix are suppressed simply because the state of the environment goes into different configurations depending upon the spatial location of the system. However, these changes in the environmental states remain locally around the system and are reversible. For the electron interacting with vacuum fluctuations, we therefore take the point of view that if the reduced density matrix describes false decoherence, then after adiabatically switching off the interactions with the environment (after having adiabatically switched it on initially), the original coherence must be fully restored at the level of the system.
To formulate the argument we consider a time dependent coupling \(q(t)=-ef(t)\) such that \(f(t)=1\) for most of the dynamics between the initial time \(t=0\) and the final time \(t=T\), while \(f(0)=f(T)=0\). The quantity relevant for decoherence is the noise kernel which, under the time-dependent coupling, transforms as \(\mathcal{N}\rightarrow\tilde{\mathcal{N}}=f(t_{1})f(t_{2})\mathcal{N}(t_{1};t_{2} )=f(t_{1})f(t_{2})\mathcal{N}(t_{1}-t_{2})\,.\) The decoherence factor in the double commutator in Eq. (9) involves replacing \(t_{2}\) with \(t_{1}-\tau\) and then integrating over \(\tau\). Therefore, the function \(\mathcal{N}_{1}\) transforms as \(\mathcal{N}_{1}\rightarrow\tilde{\mathcal{N}}_{1}\), with \(\tilde{\mathcal{N}}_{1}\) given by
\[\tilde{\mathcal{N}}_{1}(t_{1})=f(t_{1})\int_{0}^{t_{1}}d\tau f(t_{1}-\tau) \mathcal{N}(\tau)\,. \tag{25}\]
From the definitions of \(\mathcal{N}_{1}\) and \(\mathcal{N}_{2}\) we have \(\mathcal{N}=(d/d\tau)\mathcal{N}_{1}\), \(\mathcal{N}_{1}=(d/d\tau)\mathcal{N}_{2}\) and \(\mathcal{N}_{1}(0)=\mathcal{N}_{2}(0)=0\). Using these relations and integrating by parts, Eq. (25) becomes
\[\tilde{\mathcal{N}}_{1}(t_{1})= f(t_{1})\mathcal{N}_{1}(t_{1})f(0)+f(t_{1})\mathcal{N}_{2}(t_{1}) \dot{f}(0)\] \[+f(t_{1})\int_{0}^{t_{1}}d\tau\mathcal{N}_{2}(\tau)\frac{d^{2}}{d \tau^{2}}f(t_{1}-\tau)\,. \tag{26}\]
In the limit \(\epsilon\to 0\) (taking the UV cut-off to infinity), we see from Eq. (24) that \(\mathcal{N}_{2}\) loses any time dependence. We can therefore bring \(\mathcal{N}_{2}\) outside the integral such that \(\tilde{\mathcal{N}}_{1}(t_{1})=f(t_{1})\mathcal{N}_{1}(t_{1})f(0)+f(t_{1})\mathcal{N}_{2}\dot{f}(0)-f(t_{1})\mathcal{N}_{2}(\dot{f}(0)-\dot{f}(t_{1}))\). The terms involving \(\dot{f}(0)\) cancel out and we get
\[\tilde{\mathcal{N}}_{1}(t_{1})=f(t_{1})\mathcal{N}_{1}(t_{1})f(0)+f(t_{1}) \mathcal{N}_{2}\dot{f}(t_{1})\,. \tag{27}\]
After integrating by parts Eq. (27), in order to obtain \(\tilde{\mathcal{N}}_{2}(T)=\int_{0}^{T}dt_{1}\tilde{\mathcal{N}}_{1}(t_{1})\), we get
\[\tilde{\mathcal{N}}_{2}(T)= f(0)\left(f(T)\mathcal{N}_{2}(T)-f(0)\mathcal{N}_{2}(0)\right)\] \[-f(0)\mathcal{N}_{2}\int_{0}^{T}dt_{1}\dot{f}+\frac{\mathcal{N}_{ 2}}{2}\int_{0}^{T}dt_{1}\frac{d}{dt_{1}}f^{2}\,. \tag{28}\]
In the limit \(\epsilon\to 0\), as we noted earlier, \(\mathcal{N}_{2}(t)\) takes a constant value for any time \(t>0\) but is zero at \(t=0\) from the way it is defined. Therefore, after completing the remaining integrals, we get
\[\tilde{\mathcal{N}}_{2}(T)=\frac{\mathcal{N}_{2}}{2}\left(f^{2}(0)+f^{2}(T) \right)\,. \tag{29}\]
Since we assume that the interactions are switched off in the very beginning and at the very end, we see that \(\tilde{\mathcal{N}}_{2}(T)=0\) such that Eq. (23) becomes \(\tilde{\rho}_{r}(x^{\prime},x,T)=\rho_{r}(x^{\prime},x,0)\). Therefore, by adiabatically switching off the interactions we recover the original coherence within the system.
This is different from standard collisional decoherence where, for example, one originally has \(\partial_{t}\rho_{r}(x^{\prime},x,t)=-\Lambda(x^{\prime}-x)^{2}\rho_{r}(x^{ \prime},x,t)\)[29]. When in this case we send \(\Lambda\to\tilde{\Lambda}=f(t)\Lambda\), we get \(\tilde{\rho}_{r}(x^{\prime},x,t)=\exp\Bigl{\{}-\Lambda(x^{\prime}-x)^{2}\int_ {0}^{t}dt^{\prime}f(t^{\prime})\Bigr{\}}\rho_{r}(x^{\prime},x,0)\). The density matrix depends on the integral of \(f(t)\) rather than its end points and we see that coherence is indeed lost irreversibly. We interpret this result to imply that the vacuum fluctuations alone do not lead to irreversible loss of coherence. Moreover, our results imply that the apparent decoherence cannot be due to emission of photons as otherwise one would not be able to retrieve the coherence back into the system simply by switching off the interactions with the environment at late times.
_Discussion._ We formulated the interaction of a non-relativistic electron with the radiation field within the framework of open quantum systems and obtained the master equation for the reduced electron dynamics in the position basis. We showed that the classical limit of the quantum dynamics is free of the problems associated with the purely classical derivation of the Abraham-Lorentz formula. With respect to possible decoherence induced by vacuum fluctuations alone, we showed that the apparent decoherence at the level of the reduced density matrix is reversible and is an artifact of the formalism used. In mathematically tracing over the environment, one traces over the degrees of freedom that physically surround the system being observed. These degrees of freedom must be considered part of the system being observed, rather than the environment [16; 30]. We formulated this interpretation by showing that one restores full initial coherence back into the system after switching off the interactions with the environment adiabatically. The formulation is fairly general and might also be used in other situations to distinguish true decoherence from a false one. The analysis therefore brings together various works in the literature [15; 16; 17; 18; 30] and addresses some of the conflicting results.
_Acknowledgements._ A.G. thanks Davide Bason and Lorenzo Di Pietro for numerous discussions. We thank Oliviero Angeli for cross checking some of the results obtained in the manuscript and Lajos Diosi for discussions concerning false decoherence. A.B. acknowledges financial support from the EIC Pathfinder project QuCoM (GA no. 101046973) and the PNRR PE National Quantum Science and Technology Institute (PE0000023). We thank the University of Trieste and INFN for financial support.
## Appendix A The Lagrangian and the Hamiltonian formulation
In the Coulomb gauge, the standard Lagrangian for electrodynamics is given by [24]
\[L=\frac{1}{2}m\dot{\mathbf{r}}_{e}^{2}-V_{0}(\mathbf{r}_{e})-\int_{1/2}d^{3}k \frac{|\rho|^{2}}{\epsilon_{0}k^{2}}+\frac{\epsilon_{0}}{2}\int d^{3}r\left( \mathbf{E}_{\perp}^{2}(\mathbf{r})-c^{2}\mathbf{B}^{2}(\mathbf{r})\right)+ \int d^{3}r\mathbf{j}(\mathbf{r})\cdot\mathbf{A}_{\perp}(\mathbf{r})\,. \tag{30}\]
In addition to the terms that have been described in the main article, Eq. (30) also includes the Coulomb potential between different particles. It is given by the third term in which \(\rho(\mathbf{r})\) denotes the charge density and the symbol \(\int_{1/2}\) means that the integral is taken over half the volume in the reciprocal space. For a single particle, it reduces to the particle's Coulomb self energy \(E_{\text{Coul}}\). After the introduction of a suitable cut-off it takes a finite value given by \(E_{\text{Coul}}=\alpha\hbar\omega_{\text{\tiny{max}}}/\pi\)[22]. The transverse vector potential is denoted by \(\mathbf{A}_{\perp}(\mathbf{r},t)\) whose negative partial time derivative yields the transverse electric field \(\mathbf{E}_{\perp}(\mathbf{r},t)\) while its curl gives the magnetic field \(\mathbf{B}(\mathbf{r},t)\). For an electron, the current
density is given by \(\mathbf{j}(\mathbf{r})=-e\dot{\mathbf{r}}\delta(\mathbf{r}-\mathbf{r}_{e})\) and the interaction term becomes \(-e\dot{\mathbf{r}}_{e}\mathbf{A}_{\perp}(\mathbf{r}_{e},t)\). For a non-relativistic charged particle, the time derivative can be shifted from the position of the particle onto the transverse vector potential. This is because in addition to a total derivative term, a term of the form \(e\mathbf{r}_{e}v^{i}\partial_{i}\mathbf{A}_{\perp}(\mathbf{r},t)\) appears (where \(v^{i}:=\dot{r}^{i}\)). After the wave expansion of \(\mathbf{A}_{\perp}\), this term is seen to be negligible with respect to \(e\mathbf{r}_{e}\dot{\mathbf{A}}_{\perp}(\mathbf{r}_{e},t)=-e\mathbf{r}_{e}\mathbf{E}_{\perp}(\mathbf{r}_{e},t)\) as long as \(\omega_{k}\gg vk\) or \(v\ll c\). Therefore, for the non-relativistic electron, the Lagrangian relevant for the dynamics reduces to
\[L(t)\approx\frac{1}{2}m\dot{\mathbf{r}}_{e}^{2}-V_{0}(\mathbf{r}_{e})+\frac{ \epsilon_{0}}{2}\int d^{3}r\left(\mathbf{E}_{\perp}^{2}(\mathbf{r})-c^{2} \mathbf{B}^{2}(\mathbf{r})\right)-e\mathbf{r}_{e}\mathbf{E}_{\perp}(\mathbf{ r}_{e})\,. \tag{10}\]
In Eq. (10) the total derivative \(d/dt(\mathbf{r}_{e}\mathbf{A}_{\perp}(\mathbf{r}_{e}))\) and the constant Coulomb self energy term have been omitted as these do not affect the electron's dynamics.
The Hamiltonian corresponding to the Lagrangian (10) can now be obtained. In terms of the canonical variables \(\mathbf{r}_{e},\mathbf{p},\mathbf{A}_{\perp}\) and \(\mathbf{\Pi}_{\mathrm{E}}:=-\frac{1}{\epsilon_{0}}\mathbf{\Pi}\), it takes the form
\[\mathrm{H}=\mathrm{H}_{\mathrm{S}}+\mathrm{H}_{\mathrm{EM}}+\mathrm{H}_{ \mathrm{int}}\,, \tag{11}\]
where \(\mathrm{H}_{\mathrm{EM}}=\frac{\epsilon_{0}}{2}\int d^{3}r(\mathbf{\Pi}_{ \mathrm{E}}^{2}(\mathbf{r})+c^{2}\mathbf{B}^{2}(\mathbf{r}))\) is the free field Hamiltonian of the radiation field, \(\mathrm{H}_{\mathrm{int}}=e\mathbf{r}_{e}\mathbf{\Pi}_{\mathrm{E}}(\mathbf{ r}_{e})\) the interaction term and \(\mathrm{H}_{\mathrm{S}}\) the system Hamiltonian given by
\[\mathrm{H}_{\mathrm{S}}=\frac{\mathbf{p}^{2}}{2m}+V_{0}(\mathbf{r}_{e})+\frac {e^{2}}{2\epsilon_{0}}\int d^{3}rr^{i}\delta_{im}^{\perp}(\mathbf{r}-\mathbf{r }_{e})\delta_{mj}^{\perp}(\mathbf{r}-\mathbf{r}_{e})r^{j}\,. \tag{12}\]
Here, the transverse Dirac delta \(\delta_{ij}^{\perp}(\mathbf{r}-\mathbf{r}_{e})\), which appears due to the coupling of the position of the electron with the transverse electric field, is defined to be [22]
\[\delta_{ij}^{\perp}(\mathbf{r}-\mathbf{r}_{e}):=\frac{1}{(2\pi)^{3}}\int d^{3}k\left(\delta_{ij}-\frac{k_{i}k_{j}}{k^{2}}\right)e^{i\mathbf{k}\cdot(\mathbf{r}-\mathbf{r}_{e})}\,. \tag{13}\]
The form of \(\mathrm{H}_{\mathrm{S}}\) calls for an identification of the full effective potential \(V(\mathbf{r}_{e})\) governing the dynamics of the electron such that
\[V(\mathbf{r}_{e}):=V_{0}(\mathbf{r}_{e})+V_{\mathrm{EM}}(\mathbf{r}_{e})\,, \qquad V_{\mathrm{EM}}(\mathbf{r}_{e})=\frac{e^{2}}{2\epsilon_{0}}\int d^{3}rr ^{i}\delta_{im}^{\perp}(\mathbf{r}-\mathbf{r}_{e})\delta_{mj}^{\perp}( \mathbf{r}-\mathbf{r}_{e})r^{j}\,. \tag{14}\]
Note that the extra term \(V_{\mathrm{EM}}(\mathbf{r}_{e})\) is not added to the bare potential by hand, but arises naturally due to the \(\mathbf{r}_{e}\mathbf{E}_{\perp}\) coupling [17]. Although it gives a divergent contribution \(\frac{e^{2}}{2\epsilon_{0}}\delta_{ij}^{\perp}(\mathbf{0})r_{e}^{i}r_{e}^{j}\), after regularizing the transverse delta function on a minimum length scale \(r_{\mathrm{min}}=1/k_{\mathrm{max}}\), the contribution coming from this term scales as \(\mathcal{O}(\frac{e^{2}}{2\epsilon_{0}}\mathbf{r}_{e}^{2}k_{\mathrm{max}}^{3})\). To be more precise, we impose the cut-off consistently throughout the calculations by introducing the convergence factor \(e^{-k/k_{\mathrm{max}}}\) inside the integral in the reciprocal space (c.f. Appendix C). Using this procedure, the expression for \(\delta_{ij}^{\perp}(\mathbf{0})\) is obtained to be
\[\delta_{ij}^{\perp}(\mathbf{0})=\frac{1}{(2\pi)^{3}}\int dkk^{2}e^{-k/k_{ \mathrm{max}}}\int d\Omega\left(\delta_{ij}-\frac{k_{i}k_{j}}{k^{2}}\right)\,. \tag{15}\]
First evaluating the angular integral, which gives a factor \(\frac{8\pi}{3}\delta_{ij}\), and then the radial integral, we get
\[V_{\mathrm{EM}}(\mathbf{r}_{e})=\frac{e^{2}\omega_{\mathrm{max}}^{3}}{3\pi^{2} \epsilon_{0}c^{3}}\mathbf{r}_{e}^{2}\,. \tag{16}\]
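The radial and angular integrals leading to this result are elementary; a small sympy sketch (the variable names are ours) reproduces the prefactor:

```python
import sympy as sp

k, kmax = sp.symbols('k k_max', positive=True)
# Radial integral in delta^perp_ij(0) with the convergence factor e^{-k/k_max}:
radial = sp.integrate(k**2 * sp.exp(-k / kmax), (k, 0, sp.oo))
print(radial)                                        # 2*k_max**3
# The angular integral gives (8*pi/3) delta_ij; together with 1/(2*pi)^3 this yields
# delta^perp_ij(0) = 2 k_max^3/(3 pi^2) delta_ij, and hence V_EM above.
print(sp.simplify(radial * sp.Rational(8, 3) * sp.pi / (2 * sp.pi)**3))  # 2*k_max**3/(3*pi**2)
```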
Since the contribution of \(V_{\mathrm{EM}}(\mathbf{r}_{e})\) is canceled exactly by another term, as shown in the discussion around Eq. (15) of the main text, for all practical purposes, \(V_{\mathrm{EM}}(\mathbf{r}_{e})\) has no consequences on the dynamics of the electron.
## Appendix B The master equation
The probability amplitude for a particle to be at the position \(x_{\rm f}\) at some final time \(t\), starting from the position \(x_{\rm i}\) at some initial time \(t_{i}\), is given by [25]
\[\left\langle x_{\rm f}\right|\hat{U}(t;t_{i})\left|x_{\rm i}\right\rangle=\int_{\begin{subarray}{c}x(t)=x_{\rm f},\\ x(t_{i})=x_{\rm i}\end{subarray}}D[x,p]e^{-\frac{i}{\hbar}\int_{t_{i}}^{t}dt^{\prime}(H_{\mathrm{T}}[x,p]-p\dot{x})}=\int_{\begin{subarray}{c}x(t)=x_{\rm f},\\ x(t_{i})=x_{\rm i}\end{subarray}}D[x]e^{\frac{i}{\hbar}S_{\mathrm{T}}[x]}\,, \tag{17}\]
where \(H_{\rm T}\) is the full Hamiltonian and \(S_{\rm T}\) is the corresponding action describing some general dynamics. From Eq. (17) the expression for the density matrix at time \(t\) can be written as [23]
\[\left\langle x_{\rm f}^{\prime}\right|\hat{\rho}(t)\left|x_{\rm f}\right\rangle=\int_{\begin{subarray}{c}x(t)=x_{\rm f}\\ x^{\prime}(t)=x_{\rm f}^{\prime}\end{subarray}}D[x,x^{\prime}]e^{\frac{i}{\hbar}(S_{\rm T}[x^{\prime}]-S_{\rm T}[x])}\rho(x_{\rm i}^{\prime},x_{\rm i},t_{\rm i})\,, \tag{18}\]
where the integrals over \(x_{\rm i}\) and \(x_{\rm i}^{\prime}\) are included within the path integral. The expression analogous to Eq. (17) also exists for \(\left\langle p_{\rm f}\right|\hat{U}(t;t_{\rm i})\left|p_{\rm i}\right\rangle\), in which the boundary conditions are fixed on \(p(t)\) and the phase-space weighing function is modified accordingly, such that
\[\left\langle p_{\rm f}\right|\hat{U}(t;t_{\rm i})\left|p_{\rm i}\right\rangle=\int_{\begin{subarray}{c}p(t)=p_{\rm f}\\ p(t_{\rm i})=p_{\rm i}\end{subarray}}D[x,p]e^{-\frac{i}{\hbar}\int_{t_{\rm i}}^{t}dt^{\prime}(H_{\rm T}[x,p]+x\dot{p})}\,. \tag{12}\]
For computing the path integral over the EM field, with a slight abuse of notation, we understand \(\exp\{\frac{i}{\hbar}S_{\rm EM}\}\) to be simply the appropriate phase-space weighing function appearing inside the path integral, with \(S_{\rm EM}:=-\int_{t_{\rm i}}^{t}dt^{\prime}d^{3}r(\mathcal{H}_{\rm EM}-\Pi\dot{\mathrm{A}}_{\perp})\) or \(S_{\rm EM}:=-\int_{t_{\rm i}}^{t}dt^{\prime}d^{3}r(\mathcal{H}_{\rm EM}+\mathrm{A}_{\perp}\dot{\Pi})\) depending upon the basis states between which the transition amplitudes are calculated; here \(\mathcal{H}_{\rm EM}\) denotes the Hamiltonian density of the free radiation field. We are interested in the dynamics of the electron, having taken into account its interaction with the radiation field environment. With this distinction, the total phase-space function can be written as \(S_{\rm T}=S_{\rm S}[x]+S_{\rm EM}[\mu]+S_{\rm int}[x,\Pi_{\rm E}]\), where \(S_{\rm S}\) denotes the system action, \(S_{\rm EM}[\mu]:=S_{\rm EM}[\mathrm{A}_{\perp},\Pi_{\rm E}]\) the phase-space function governing the time evolution of the free radiation field in which \(\mu\) denotes its phase-space degrees of freedom, and \(S_{\rm int}[x,\Pi_{\rm E}]:=-e\int_{t_{\rm i}}^{t}dt^{\prime}x\Pi_{\rm E}\) encodes the interaction between the two.
\[\left\langle x_{\rm f}^{\prime};\Pi_{\rm E}^{f\prime}\right|\hat{\rho}(t)\left|x_{\rm f};\Pi_{\rm E}^{f}\right\rangle =\int_{\begin{subarray}{c}x(t)=x_{\rm f}\\ x^{\prime}(t)=x_{\rm f}^{\prime}\end{subarray}}D[x,x^{\prime}]e^{\frac{i}{\hbar}(S_{\rm S}[x^{\prime}]-S_{\rm S}[x])}\rho_{\rm S}(x_{\rm i}^{\prime},x_{\rm i},t_{\rm i})\times\] \[\times\int_{\begin{subarray}{c}\Pi_{\rm E}(t)=\Pi_{\rm E}^{f}\end{subarray}}D[\mu,\mu^{\prime}]e^{\frac{i}{\hbar}(S_{\rm EM}[\mu^{\prime}]+S_{\rm int}[x^{\prime},\Pi_{\rm E}^{\prime}]-S_{\rm EM}[\mu]-S_{\rm int}[x,\Pi_{\rm E}])}\rho_{\rm EM}(\Pi_{\rm E}^{\prime}(t_{\rm i}),\Pi_{\rm E}(t_{\rm i}),t_{\rm i})\,, \tag{13}\]
where \(\left|\Pi_{\rm E}^{f}\right\rangle\) denotes the basis state of the environment. Note that the precise choice of the environmental basis states is unimportant since the reduced density matrix is obtained by tracing over the environment. In writing Eq. (13) we have also assumed the full density matrix \(\hat{\rho}(t_{\rm i})\) to be in the product state \(\hat{\rho}(t_{\rm i})=\hat{\rho}_{\rm s}(t_{\rm i})\otimes\hat{\rho}_{\rm EM} (t_{\rm i})\) at the initial time \(t_{\rm i}\). We notice that \(S_{\rm EM}[\mu]\) is quadratic in the environmental degrees of freedom while \(S_{\rm int}[x,\Pi_{\rm E}]\) is linear in both \(x\) and \(\Pi_{\rm E}\). After tracing over the environment, that is integrating over \(\Pi_{\rm E}(t)=\Pi_{\rm E}^{\prime}(t)\), the term in the second line of Eq. (13) yields a Gaussian in \(x\) such that [23]
\[\int_{\Pi_{\rm E}(t)=\Pi_{\rm E}^{\prime}(t)}d\Pi_{\rm E}(t)D[\mu,\mu^{\prime}] e^{\frac{i}{\hbar}(S_{\rm EM}[\mu^{\prime}]+S_{\rm int}[x^{\prime},\Pi_{\rm E}^{ \prime}]-S_{\rm EM}[\mu]-S_{\rm int}[x,\Pi_{\rm E}])}\rho_{\rm EM}^{i}=e^{ \frac{i}{\hbar}\int\int dt_{1}dt_{2}M_{ab}(t_{1};t_{2})x^{a}(t_{1})x^{b}(t_{2}) }\,, \tag{14}\]
where \(\rho_{\rm EM}^{i}:=\rho_{\rm EM}(\Pi_{\rm E}^{\prime}(t_{\rm i}),\Pi_{\rm E}(t_ {\rm i}),t_{\rm i})\). We have also introduced the vector notation with the convention \(x^{a}=x\) for \(a=1\), \(x^{a}=x^{\prime}\) for \(a=2\) and \(x_{a}=\eta_{ab}x^{b}\) with \(\eta_{ab}=\mathrm{diag}(-1,1)\). It is the matrix elements \(M_{ab}\) which determine the effective action of the system and contain the information about its interaction with the environment. They can be obtained by acting with \(\frac{\hbar}{i}\frac{\delta}{\delta x^{a}}\frac{\delta}{\delta x^{b}}|_{x^{a}=x ^{b}=0}\) (where \(x^{a}\) and \(x^{b}\) are set to zero after taking the derivatives) on Eq. (14) such that
\[M^{ab}(t_{1};t_{2})=\frac{ie^{2}}{\hbar}\int_{\Pi_{\rm E}(t)=\Pi_{\rm E}^{\prime }(t)}d\Pi_{\rm E}(t)D[\mu,\mu^{\prime}]\Pi_{\rm E}^{a}\left(t_{1}\right)\Pi_{ \rm E}^{b}\left(t_{2}\right)e^{\frac{i}{\hbar}(S_{\rm EM}[\mu^{\prime}]-S_{\rm EM }[\mu])}\rho_{\rm EM}^{i}\,. \tag{15}\]
Here, in the light of the standard non-relativistic dipole approximation, we have ignored the spatial dependence of the canonical fields (c.f. Appendix C). Depending upon the value of the indices \(a\) and \(b\), the matrix elements correspond to the expectation values of the time-ordered or path-ordered correlations in the Heisenberg picture [23]. For the dynamics of the non-relativistic electron that we are considering, the expression for \(M_{ab}\) reads
\[M_{ab}(t_{1};t_{2})=\frac{ie^{2}}{\hbar}\begin{bmatrix}\left\langle\tilde{\mathcal{T}}\{\hat{\Pi}_{\rm E}(t_{1})\hat{\Pi}_{\rm E}(t_{2})\}\right\rangle_{0}&-\left\langle\hat{\Pi}_{\rm E}(t_{1})\hat{\Pi}_{\rm E}(t_{2})\right\rangle_{0}\\ -\left\langle\hat{\Pi}_{\rm E}(t_{2})\hat{\Pi}_{\rm E}(t_{1})\right\rangle_{0}&\left\langle\mathcal{T}\{\hat{\Pi}_{\rm E}(t_{1})\hat{\Pi}_{\rm E}(t_{2})\}\right\rangle_{0}\end{bmatrix}\,. \tag{16}\]
The zero in the subscript denotes that the expectation values are calculated by disregarding the interaction with the system, while \(\mathcal{T}\) and \(\tilde{\mathcal{T}}\) denote the time-ordered and the anti-time ordered products respectively. It is also understood that since the electron's motion is considered to be along the \(\mathbf{x}\)-axis, the canonical field operator that enters \(M_{ab}\) is only the x-component given by [26]
\[\hat{\Pi}_{\text{\tiny E}}(\mathbf{r},t)=i\left(\frac{\hbar c}{2\epsilon_{0}(2\pi)^{3}}\right)^{\frac{1}{2}}\int d^{3}k\sqrt{k}\sum_{\varepsilon}\hat{a}_{\varepsilon}(\mathbf{k})e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\varepsilon_{\mathbf{k}}^{x}+\text{c.c.}\,. \tag{10}\]
In our case, the initial state of the environment is taken to be the vacuum state \(\left|0\right\rangle\) of the radiation field such that \(\left\langle\cdot\right\rangle_{0}=\left\langle 0\right|\cdot\left|0\right\rangle\). After tracing over the environment, the reduced density matrix of the electron is obtained from Eq. (11) to be
\[\left\langle x_{\rm f}^{\prime}\right|\hat{\rho}_{r}(t)\left|x_{\rm f}\right\rangle=\int_{\begin{subarray}{c}x(t)=x_{\rm f}\\ x^{\prime}(t)=x_{\rm f}^{\prime}\end{subarray}}D[x,x^{\prime}]e^{\frac{i}{\hbar}\left(S_{\text{\tiny S}}[x^{\prime}]-S_{\text{\tiny S}}[x]+S_{\text{\tiny F}}[x,x^{\prime}]\right)}\rho_{r}(x_{i}^{\prime},x_{i},t_{i})\,, \tag{11}\]
where
\[S_{\text{\tiny F}}[x,x^{\prime}]=\frac{ie^{2}}{2\hbar}\int_{t_{ i}}^{t}dt_{1}dt_{2}\left[\left\langle\tilde{\mathcal{T}}\{\hat{\Pi}_{\text{\tiny E }}(t_{1})\hat{\Pi}_{\text{\tiny E}}(t_{2})\}\right\rangle_{0}x(t_{1})x(t_{2})- \left\langle\hat{\Pi}_{\text{\tiny E}}(t_{1})\hat{\Pi}_{\text{\tiny E}}(t_{2} )\right\rangle_{0}x(t_{1})x^{\prime}(t_{2})\right.\] \[\left.-\left\langle\hat{\Pi}_{\text{\tiny E}}(t_{2})\hat{\Pi}_{ \text{\tiny E}}(t_{1})\right\rangle_{0}x^{\prime}(t_{1})x(t_{2})+\left\langle \mathcal{T}\{\hat{\Pi}_{\text{\tiny E}}(t_{1})\hat{\Pi}_{\text{\tiny E}}(t_{2} )\}\right\rangle_{0}x^{\prime}(t_{1})x^{\prime}(t_{2})\right]\,. \tag{12}\]
The integral \(\int_{t_{i}}^{t}\) stands for both the \(t_{1}\) and the \(t_{2}\) integrals which run from \(t_{i}\) to \(t\). Alternatively, the influence functional \(S_{\text{\tiny F}}\) can be written in the matrix notation as
\[S_{\text{\tiny F}}[x,x^{\prime}]=\frac{1}{2}\int_{t_{i}}^{t}dt_{1}dt_{2}\left[x (t_{1})\ \ x^{\prime}(t_{1})\right]\cdot\begin{bmatrix}M_{11}&M_{12}\\ M_{21}&M_{22}\end{bmatrix}\cdot\begin{bmatrix}x(t_{2})\\ x^{\prime}(t_{2})\end{bmatrix}\,. \tag{13}\]
As it is more convenient, we make a change of basis to \((X\,,u)\) defined by
\[X(t):= (x^{\prime}(t)+x(t))/2\,,\quad u(t)=x^{\prime}(t)-x(t)\,, \tag{14}\]
in which the influence functional transforms as
\[S_{\text{\tiny F}}[X,u]=\frac{1}{2}\int_{t_{i}}^{t}dt_{1}dt_{2}\left[X(t_{1}) \ \ u(t_{1})\right]\cdot\begin{bmatrix}\tilde{M}_{11}&\tilde{M}_{12}\\ \tilde{M}_{21}&\tilde{M}_{22}\end{bmatrix}\cdot\begin{bmatrix}X(t_{2})\\ u(t_{2})\end{bmatrix}\,, \tag{15}\]
where
\[\begin{bmatrix}\tilde{M}_{11}&\tilde{M}_{12}\\ \tilde{M}_{21}&\tilde{M}_{22}\end{bmatrix}=\begin{bmatrix}M_{11}+M_{12}+M_{21} +M_{22}&\frac{1}{2}\left((M_{12}-M_{21})+(M_{22}-M_{11})\right)\\ \frac{1}{2}\left(-(M_{12}-M_{21})+(M_{22}-M_{11})\right)&\frac{1}{4}((M_{11}+M_ {22})-(M_{12}+M_{21}))\end{bmatrix}\,. \tag{16}\]
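The transformed matrix can be obtained mechanically: with \(x=X-u/2\) and \(x^{\prime}=X+u/2\), i.e. \((x,x^{\prime})^{\top}=P\,(X,u)^{\top}\), one has \(\tilde{M}=P^{\top}MP\). A short symbolic sketch (variable names ours) confirming the entries quoted above:

```python
import sympy as sp

M11, M12, M21, M22 = sp.symbols('M11 M12 M21 M22')
M = sp.Matrix([[M11, M12], [M21, M22]])
# x = X - u/2, x' = X + u/2  =>  (x, x')^T = P (X, u)^T
P = sp.Matrix([[1, -sp.Rational(1, 2)], [1, sp.Rational(1, 2)]])
Mtilde = (P.T * M * P).applyfunc(sp.expand)
print(Mtilde)
# [[M11+M12+M21+M22,            (M12-M21)/2 + (M22-M11)/2],
#  [-(M12-M21)/2 + (M22-M11)/2, (M11+M22)/4 - (M12+M21)/4]]
```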
From Eq. (10) we obtain the following relations
\[M_{11}+M_{22} =-(M_{12}+M_{21})=\frac{ie^{2}}{\hbar}\left\langle\{\hat{\Pi}_{ \text{\tiny E}}(t_{1}),\hat{\Pi}_{\text{\tiny E}}(t_{2})\}\right\rangle_{0}\,, \tag{17}\] \[M_{12}-M_{21} =\frac{ie^{2}}{\hbar}\left\langle\left[\hat{\Pi}_{\text{\tiny E} }(t_{2}),\hat{\Pi}_{\text{\tiny E}}(t_{1})\right]\right\rangle_{0}\,,\] (18) \[M_{22}-M_{11} =\frac{ie^{2}}{\hbar}\left\langle\left[\hat{\Pi}_{\text{\tiny E} }(t_{1}),\hat{\Pi}_{\text{\tiny E}}(t_{2})\right]\right\rangle_{0}\text{sgn}(t _{1}-t_{2})\,. \tag{19}\]
Using these relations, \(\tilde{M}\) takes the simplified form
\[\begin{bmatrix}\tilde{M}_{11}&\tilde{M}_{12}\\ \tilde{M}_{21}&\tilde{M}_{22}\end{bmatrix}=\frac{ie^{2}}{\hbar}\begin{bmatrix}0& \left\langle\left[\hat{\Pi}_{\text{\tiny E}}(t_{2}),\hat{\Pi}_{\text{\tiny E }}(t_{1})\right]\right\rangle_{0}\theta(t_{2}-t_{1})\\ \left\langle\left[\hat{\Pi}_{\text{\tiny E}}(t_{1}),\hat{\Pi}_{\text{\tiny E}}(t_ {2})\right]\right\rangle_{0}\theta(t_{1}-t_{2})&\frac{1}{2}\left\langle\{\hat{ \Pi}_{\text{\tiny E}}(t_{1}),\hat{\Pi}_{\text{\tiny E}}(t_{2})\}\right\rangle_{0 }\end{bmatrix}\,, \tag{20}\]
where \(\theta(t)\) is the Heaviside step function. Thus, in the \((X,u)\) basis, the influence functional in Eq. (12) takes the compact form
\[S_{\text{\tiny IF}}[X,u](t)=\int_{t_{i}}^{t}dt_{1}dt_{2}\left[i \frac{u(t_{1})\mathcal{N}(t_{1};t_{2})u(t_{2})}{2}+u(t_{1})\mathcal{D}(t_{1};t_{2} )X(t_{2})\right]\,, \tag{21}\]
where the noise kernel \(\mathcal{N}\) and the dissipation kernel \(\mathcal{D}\) are defined as
\[\mathcal{N}(t_{1};t_{2}):= \frac{e^{2}}{2\hbar}\left\langle\{\hat{\Pi}_{\text{E}}(t_{1}),\hat{ \Pi}_{\text{E}}(t_{2})\}\right\rangle_{0}\,,\] \[\mathcal{D}(t_{1};t_{2}):= \frac{ie^{2}}{\hbar}\left\langle\left[\hat{\Pi}_{\text{E}}(t_{1} ),\hat{\Pi}_{\text{E}}(t_{2})\right]\right\rangle_{0}\theta(t_{1}-t_{2})\,. \tag{101}\]
Having determined the full effective action for the electron in terms of the influence functional, we can now derive the master equation. From Eq. (100), it can be seen that the time derivative of the reduced density matrix will have, in addition to the standard Liouville-von Neumann term, a contribution coming from the influence functional. In order to compute it, we need to evaluate the rate of change of \(S_{\text{\tiny IF}}\), which is given by
\[\delta_{t}S_{\text{\tiny IF}}[X,u]=u(t)\int_{t_{i}}^{t}dt_{1}\left(i\mathcal{N }(t;t_{1})u(t_{1})+\mathcal{D}(t;t_{1})X(t_{1})\right)\,. \tag{102}\]
In terms of the original \((x,x^{\prime})\) basis, the full expression for the master equation can now be written as
\[\partial_{t}\rho_{r}(x^{\prime}_{i},x_{i},t) =-\frac{i}{\hbar}\left\langle x^{\prime}_{i}\right|\left[\hat{\mathrm{H}}_{s},\hat{\rho}_{r}\right]\left|x_{i}\right\rangle+\frac{i}{\hbar}\int_{\begin{subarray}{c}x(t)=x_{i}\\ x^{\prime}(t)=x^{\prime}_{i}\end{subarray}}D[x,x^{\prime}]\delta_{t}S_{\text{\tiny IF}}[x^{\prime},x]e^{\frac{i}{\hbar}(S_{\text{\tiny S}}[x^{\prime}]-S_{\text{\tiny S}}[x]+S_{\text{\tiny IF}}[x,x^{\prime}])}\rho_{r}(x^{\prime}_{i},x_{i},t_{i})\] \[\approx-\frac{i}{\hbar}\left\langle x^{\prime}_{i}\right|\left[\hat{\mathrm{H}}_{s},\hat{\rho}_{r}\right]\left|x_{i}\right\rangle\] \[\quad-\frac{1}{\hbar}(x^{\prime}_{i}-x_{i})\int_{t_{i}}^{t}dt_{1}\mathcal{N}(t;t_{1})\int_{\begin{subarray}{c}x(t)=x_{i}\\ x^{\prime}(t)=x^{\prime}_{i}\end{subarray}}D[x,x^{\prime}](x^{\prime}(t_{1})-x(t_{1}))e^{\frac{i}{\hbar}(S_{\text{\tiny S}}[x^{\prime}]-S_{\text{\tiny S}}[x])}\rho_{r}(x^{\prime}_{i},x_{i},t_{i})\] \[\quad+\frac{i}{2\hbar}(x^{\prime}_{i}-x_{i})\int_{t_{i}}^{t}dt_{1}\mathcal{D}(t;t_{1})\int_{\begin{subarray}{c}x(t)=x_{i}\\ x^{\prime}(t)=x^{\prime}_{i}\end{subarray}}D[x,x^{\prime}](x^{\prime}(t_{1})+x(t_{1}))e^{\frac{i}{\hbar}(S_{\text{\tiny S}}[x^{\prime}]-S_{\text{\tiny S}}[x])}\rho_{r}(x^{\prime}_{i},x_{i},t_{i})\,. \tag{103}\]
The Liouville-von Neumann evolution is governed by the system Hamiltonian \(\hat{\mathrm{H}}_{s}\) alone. For the second term on the right hand side in the second line of Eq. (103), we have omitted \(S_{\text{\tiny IF}}\) in the exponential. This is because \(S_{\text{\tiny IF}}\) is second order in the coupling constant and already appears as a prefactor of the exponential. Since we limit our calculations to second order in the interactions, \(S_{\text{\tiny IF}}\) can be neglected inside the exponential.
To simplify the master equation further, we note that the last two lines of Eq. (103) can be written much more compactly. This is due to the following identity [23]
\[\int_{\begin{subarray}{c}x(t)=x_{i},\\ x^{\prime}(t)=x^{\prime}_{i}\end{subarray}}D[x,x^{\prime}]x^{\prime}(t_{1})e^{ \frac{i}{\hbar}(S_{\text{\tiny S}}[x^{\prime}]-S_{\text{\tiny S}}[x])}\rho_{r }(x^{\prime}_{i},x_{i},t_{i})=\] \[=\int dx^{\prime}(t_{1})\left\langle x^{\prime}_{i}\right|\hat{U}_{ s}(t;t_{1})\left|x^{\prime}(t_{1})\right\rangle x^{\prime}(t_{1})\left\langle x^{ \prime}(t_{1})\right|\hat{U}_{s}(t_{1};t_{i})\hat{\rho}_{r}(t_{i})\hat{U}_{s}^{ -1}(t;t_{i})\left|x_{i}\right\rangle\] \[=\left\langle x^{\prime}_{i}\right|\hat{U}_{s}(t;t_{1})\hat{x} \hat{U}_{s}(t_{1};t_{i})\hat{\rho}_{r}(t_{i})\hat{U}_{s}^{-1}(t;t_{i})\left|x _{i}\right\rangle=\left\langle x^{\prime}_{i}\right|\hat{U}_{s}(t;t_{1})\hat{x} \hat{U}_{s}(t_{1};t_{i})\hat{U}_{s}^{-1}(t;t_{i})\hat{U}_{s}(t;t_{i})\hat{\rho} _{r}(t_{i})\hat{U}_{s}^{-1}(t;t_{i})\left|x_{i}\right\rangle\] \[=\left\langle x^{\prime}_{i}\right|\hat{U}_{s}(t;t_{1})\hat{x} \hat{U}_{s}^{-1}(t;t_{1})\hat{\rho}_{r}(t)\left|x_{i}\right\rangle=\left\langle x ^{\prime}_{i}\right|\hat{x}_{\text{\tiny H}_{s}}(-\tau)\hat{\rho}_{r}(t)\left|x _{i}\right\rangle\,, \tag{104}\]
where
\[\hat{x}_{\text{\tiny H}_{s}}(-\tau):=\hat{U}_{s}^{-1}(t-\tau;t)\hat{x}\hat{U}_{ s}(t-\tau;t)\,,\qquad\tau:=t-t_{1}\,. \tag{105}\]
Similarly, we also have
\[\int_{\begin{subarray}{c}x(t)=x_{i},\\ x^{\prime}(t)=x^{\prime}_{i}\end{subarray}}D[x,x^{\prime}]x(t_{1})e^{\frac{i}{ \hbar}(S_{\text{\tiny S}}[x^{\prime}]-S_{\text{\tiny S}}[x])}\rho_{r}(x^{ \prime}_{i},x_{i},t_{i})=\left\langle x^{\prime}_{i}\right|\hat{\rho}_{r}(t)\hat{x }_{\text{\tiny H}_{s}}(-\tau)\left|x_{i}\right\rangle\,. \tag{106}\]
The operator \(\hat{x}_{\text{\tiny H}_{s}}(-\tau)\) is understood to be simply the placeholder for the expression that appears on the right hand side of the first equality in Eq. (105) such that
\[\hat{x}_{\text{\tiny H}_{s}}(0)=\hat{x}\,. \tag{107}\]
Here, the operator \(\hat{x}\) without the subscript \(\mathrm{H}_{s}\) is the usual position operator in the Schrodinger picture. Using these relations, and replacing the \(t_{1}\) integral with the \(\tau\) integral (\(t_{1}=t-\tau\)), the master equation takes the compact form
\[\partial_{t}\rho_{r}(x_{t}^{\prime},x_{t},t)= -\frac{i}{\hbar}\left\langle x_{t}^{\prime}\right|\left[\hat{ \mathrm{H}}_{s},\hat{\rho}_{r}(t)\right]\left|x_{t}\right\rangle\] \[-\frac{1}{\hbar}(x_{t}^{\prime}-x_{t})\int_{0}^{t-t_{i}}d\tau \mathcal{N}(t;t-\tau)\left\langle x_{t}^{\prime}\right|\left[\hat{x}_{\mathrm{ H}_{s}}(-\tau),\hat{\rho}_{r}(t)\right]\left|x_{t}\right\rangle\] \[+\frac{i}{2\hbar}(x_{t}^{\prime}-x_{t})\int_{0}^{t-t_{i}}d\tau \mathcal{D}(t;t-\tau)\left\langle x_{t}^{\prime}\right|\left\{\hat{x}_{ \mathrm{H}_{s}}(-\tau),\hat{\rho}_{r}(t)\right\}\left|x_{t}\right\rangle\,. \tag{100}\]
The eigenvalues outside of the integrals in Eq. (100) can be obtained by acting with the position operator \(\hat{x}\) such that
\[\left\langle x_{t}^{\prime}\right|\partial_{t}\hat{\rho}_{r} \left|x_{t}\right\rangle= -\frac{i}{\hbar}\left\langle x_{t}^{\prime}\right|\left[\hat{ \mathrm{H}}_{s},\hat{\rho}_{r}(t)\right]\left|x_{t}\right\rangle\] \[+\frac{i}{2\hbar}\int_{0}^{t-t_{i}}d\tau\mathcal{D}(t;t-\tau) \left\langle x_{t}^{\prime}\right|\left[\hat{x},\{\hat{x}_{\mathrm{H}_{s}}(- \tau),\hat{\rho}_{r}(t)\}\right]\left|x_{t}\right\rangle\,. \tag{101}\]
The master equation in the operator form can therefore be written as
\[\partial_{t}\hat{\rho}_{r}=-\frac{i}{\hbar}\left[\hat{\mathrm{H}}_{s},\hat{ \rho}_{r}\right]-\frac{1}{\hbar}\int_{0}^{t-t_{i}}d\tau\mathcal{N}(t;t-\tau) \left[\hat{x},[\hat{x}_{\mathrm{H}_{s}}(-\tau),\hat{\rho}_{r}(t)]\right]+ \frac{i}{2\hbar}\int_{0}^{t-t_{i}}d\tau\mathcal{D}(t;t-\tau)\left[\hat{x},\{ \hat{x}_{\mathrm{H}_{s}}(-\tau),\hat{\rho}_{r}(t)\}\right]\,. \tag{102}\]
## Appendix C The dissipation and the noise kernels
In order to solve the master equation (102), the kernels need to be evaluated explicitly. To achieve that, we begin with the expression for the vacuum expectation value of the correlator
\[\left\langle 0\right|\hat{\Pi}_{\mathrm{E}}(x(t_{1}),t_{1})\hat{\Pi}_{ \mathrm{E}}(x(t_{2}),t_{2})\left|0\right\rangle=\frac{-i\hbar c}{2\epsilon_{0} 4\pi^{2}}\hat{\square}\left\{\frac{1}{r}\int_{0}^{\infty}dke^{-ikc\tau}\left(e ^{ikr}-e^{-ikr}\right)\right\}\,, \tag{103}\]
where
\[r:=\left|x(t_{1})-x(t_{2})\right|,\qquad\tau:=t_{1}-t_{2}\,,\qquad\hat{ \square}:=-\frac{1}{c^{2}}\partial_{\tau}^{2}+\partial_{r}^{2}\,. \tag{104}\]
Here, the right hand side of Eq. (103) is obtained with the help of the expression of the quantized canonical transverse electric field operator in Eq. (101). The expression in Eq. (103) becomes convergent after resorting to the standard Hadamard finite part prescription [23], in which the convergence factor \(e^{-\omega_{k}/\omega_{\mathrm{max}}}\) is introduced inside the integral (with \(\omega_{k}=kc\)). Physically, this prescription cuts off the contribution coming from the modes \(\omega_{k}\gg\omega_{\mathrm{max}}\) and mathematically it is the same as using the \(i\epsilon\) prescription where one sends \(\tau\to\tau-i\epsilon\), with \(\epsilon=1/\omega_{\mathrm{max}}\). After completing the integral by using this prescription we get
\[\left\langle 0\right|\hat{\Pi}_{\mathrm{E}}(1)\hat{\Pi}_{\mathrm{E}}(2) \left|0\right\rangle=\frac{\hbar c}{4\pi^{2}\epsilon_{0}}\hat{\square}\left\{ \frac{1}{r^{2}-c^{2}(\tau-i\epsilon)^{2}}\right\}=\frac{\hbar c}{\pi^{2} \epsilon_{0}}\frac{1}{\left(r^{2}-c^{2}(\tau-i\epsilon)^{2}\right)^{2}}\,. \tag{105}\]
For the correlator in Eq. (105), we ignore the spatial dependence of the fields in the spirit of the non-relativistic approximation \(r\ll c\tau\). In this limit, the correlator becomes
\[\left\langle 0\right|\hat{\Pi}_{\mathrm{E}}(1)\hat{\Pi}_{\mathrm{E}}(2) \left|0\right\rangle\approx\frac{\hbar}{\pi^{2}\epsilon_{0}c^{3}\left(\tau-i \epsilon\right)^{4}}\,. \tag{106}\]
Using Eq. (106), we obtain the explicit functional form of the noise and the dissipation kernels to be
\[\mathcal{N}(\tau) =\frac{e^{2}}{\pi^{2}\epsilon_{0}c^{3}}\frac{\left(\epsilon^{4}-6 \epsilon^{2}\tau^{2}+\tau^{4}\right)}{\left(\epsilon^{2}+\tau^{2}\right)^{4}}\,, \tag{107}\] \[\mathcal{D}(\tau) =\frac{8e^{2}}{\pi^{2}\epsilon_{0}c^{3}}\frac{\epsilon\tau( \epsilon^{2}-\tau^{2})}{\left(\epsilon^{2}+\tau^{2}\right)^{4}}\theta(\tau)\,. \tag{108}\]
With some algebraic manipulation, the dissipation kernel can be expressed more compactly as
\[\mathcal{D}(\tau)=\frac{e^{2}}{3\pi^{2}\epsilon_{0}c^{3}}\theta(\tau)\frac{d^{3}} {d\tau^{3}}\left(\frac{\epsilon}{\tau^{2}+\epsilon^{2}}\right)\,. \tag{100}\]
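For completeness, the third derivative appearing above can be evaluated explicitly (a short check of the algebraic manipulation, spelled out here for convenience):

\[\frac{d^{3}}{d\tau^{3}}\left(\frac{\epsilon}{\tau^{2}+\epsilon^{2}}\right)=\frac{24\,\epsilon\tau\left(\epsilon^{2}-\tau^{2}\right)}{\left(\epsilon^{2}+\tau^{2}\right)^{4}}\,,\]

so that multiplying by \(e^{2}/(3\pi^{2}\epsilon_{0}c^{3})\) and by \(\theta(\tau)\) indeed reproduces the expression for \(\mathcal{D}(\tau)\) given above.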
Noticing that
\[\frac{\epsilon}{\tau^{2}+\epsilon^{2}}=\frac{d}{d\tau}\tan^{-1}(\tau/\epsilon) =\pi\delta_{\epsilon}(\tau)\,, \tag{101}\]
we arrive at the expression
\[\mathcal{D}(\tau)=\frac{e^{2}}{3\pi\epsilon_{0}c^{3}}\theta(\tau)\frac{d^{3}} {d\tau^{3}}\delta_{\epsilon}(\tau)\,. \tag{102}\]
The last equality in Eq. (101) can be understood in the limit \(\epsilon\to 0\), when the function \(\tan^{-1}(\tau/\epsilon)\) takes the shape of a step function. Taken literally, such an expression for \(\mathcal{D}\) would yield infinite results. To avoid this, we keep in mind that these functions are always well behaved for a finite \(\epsilon\) and that \(\delta_{\epsilon}\) only behaves like a Dirac delta for \(\tau\gg\epsilon\).
## Appendix D Integrals involving the dissipation kernel
In this section we derive an identity involving the integrals of the form \(\int d\tau\mathcal{D}(\tau)f(\tau)\). To proceed, we keep in mind the situation where \(\epsilon\) is small but finite so that all the derivatives of the _smoothed_ Dirac delta are large but finite. However, for times \(\tau\gg\epsilon\), we have \(\delta_{\epsilon}(\tau)=\delta^{\prime}_{\epsilon}(\tau)=\delta^{\prime\prime }_{\epsilon}(\tau)=0\). In addition, since the derivative of the Dirac delta is an odd function of \(\tau\), we also have \(\delta^{\prime}_{\epsilon}(0)=0\). In computing the integral of \(\mathcal{D}(\tau)\) multiplying an arbitrary function \(f(\tau)\), we shift the derivatives acting on \(\delta_{\epsilon}\) one by one onto \(f(\tau)\) by integrating by parts. Since the calculations of interest involve integrating \(\int_{0}^{t}d\tau\mathcal{D}(\tau)f(\tau)\), where \(\tau\) takes only non-negative values from \(0\) to \(t\), the step function \(\theta(\tau)\) can be omitted inside the integral.
The first integration by parts gives (the constant pre-factors appearing in Eq. (102) will be plugged in at the end)
\[\int_{0}^{t}d\tau\delta^{\prime\prime\prime}_{\epsilon}(\tau)f(\tau)=-\int_{0 }^{t}d\tau\delta^{\prime\prime}_{\epsilon}(\tau)f^{\prime}(\tau)+\left.\delta ^{\prime\prime}_{\epsilon}(\tau)f(\tau)\right|_{0}^{t}\,. \tag{103}\]
Since \(\delta^{\prime\prime}_{\epsilon}(t)=0\), only the boundary term \(-\delta^{\prime\prime}_{\epsilon}(0)f(0)\) survives. Further,
\[-\int_{0}^{t}d\tau\delta^{\prime\prime}_{\epsilon}(\tau)f^{\prime}(\tau)= \int_{0}^{t}d\tau\delta^{\prime}_{\epsilon}(\tau)f^{\prime\prime}(\tau)- \delta^{\prime}_{\epsilon}(\tau)\left.f^{\prime}(\tau)\right|_{0}^{t}\,. \tag{104}\]
Since \(\delta^{\prime}_{\epsilon}(t)=\delta^{\prime}_{\epsilon}(0)=0\) (\(\delta^{\prime}_{\epsilon}(\tau)\) being an odd function of \(\tau\)), both the boundary terms vanish. Proceeding further we get
\[\int_{0}^{t}d\tau\delta^{\prime}_{\epsilon}(\tau)f^{\prime\prime}(\tau)=-\int_ {0}^{t}d\tau\delta_{\epsilon}(\tau)f^{\prime\prime\prime}(\tau)+\delta_{ \epsilon}(\tau)\left.f^{\prime\prime}(\tau)\right|_{0}^{t}\,. \tag{105}\]
As before, the boundary term at \(\tau=t\) is zero and only the term \(-\delta_{\epsilon}(0)f^{\prime\prime}(0)\) survives. Finally, since \(\delta_{\epsilon}(\tau)\) goes to zero much faster than a generic function \(f(\tau)\) for a small \(\epsilon\), it can be treated like a Dirac delta such that
\[-\int_{0}^{t}d\tau\delta_{\epsilon}(\tau)f^{\prime\prime\prime}(\tau)=-\frac{f ^{\prime\prime\prime}(0)}{2}\,. \tag{106}\]
The factor of one half arises because the integral is performed only from \(0\) to \(t\), i.e., over half of the (even) peak of \(\delta_{\epsilon}\). Collecting the two boundary terms we get the result
\[\int_{0}^{t}d\tau\delta^{\prime\prime\prime}_{\epsilon}(\tau)f(\tau)=-\frac{f ^{\prime\prime\prime}(0)}{2}-\delta_{\epsilon}(0)f^{\prime\prime}(0)-\delta^ {\prime\prime}_{\epsilon}(0)f(0)\,. \tag{107}\]
From Eq. (101) we have \(\delta_{\epsilon}(0)=1/(\pi\epsilon)=\omega_{\mbox{\tiny max}}/\pi\) and \(\delta^{\prime\prime}_{\epsilon}(0)=-2\omega_{\mbox{\tiny max}}^{3}/\pi\) such that
\[\int_{0}^{t}d\tau\mathcal{D}(\tau)f(\tau)=-\frac{2\alpha\hbar}{3c^{2}}f^{ \prime\prime\prime}(0)-\frac{4\alpha\hbar\omega_{\mbox{\tiny max}}}{3\pi c^{2 }}f^{\prime\prime}(0)+\frac{2e^{2}\omega_{\mbox{\tiny max}}^{3}}{3\pi^{2} \epsilon_{0}c^{3}}f(0)\,. \tag{108}\]
Here, we have now plugged in the constant prefactor appearing in Eq. (102).
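As a quick consistency check of these prefactors, note that they follow from the coefficient \(e^{2}/(3\pi\epsilon_{0}c^{3})\) of Eq. (102) together with the standard SI definition of the fine-structure constant, \(\alpha=e^{2}/(4\pi\epsilon_{0}\hbar c)\) (an assumption made explicit here for the reader's convenience):

\[\frac{e^{2}}{3\pi\epsilon_{0}c^{3}}\cdot\frac{1}{2}=\frac{2\alpha\hbar}{3c^{2}}\,,\qquad\frac{e^{2}}{3\pi\epsilon_{0}c^{3}}\cdot\frac{\omega_{\mbox{\tiny max}}}{\pi}=\frac{4\alpha\hbar\omega_{\mbox{\tiny max}}}{3\pi c^{2}}\,,\qquad\frac{e^{2}}{3\pi\epsilon_{0}c^{3}}\cdot\frac{2\omega_{\mbox{\tiny max}}^{3}}{\pi}=\frac{2e^{2}\omega_{\mbox{\tiny max}}^{3}}{3\pi^{2}\epsilon_{0}c^{3}}\,.\]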
## Appendix E The Abraham-Lorentz equation as a classical limit
The rate of change of the expectation values can be obtained with the help of the master equation (149). For the position operator it is given by
\[\frac{d}{dt}\langle\hat{x}\rangle=\text{Tr}\left(\hat{x}\partial_{t}\hat{\rho}_{r}\right)= -\frac{i}{\hbar}\text{Tr}\left(\hat{x}\cdot\left[\hat{\mathbb{H}}_{s},\hat{\rho}_{r}\right]\right)+\frac{i}{2\hbar}\int_{0}^{t-t_{i}}d\tau\mathcal{D}(t;t-\tau)\text{Tr}\left(\hat{x}\cdot\left[\hat{x},\{\hat{x}_{\mathrm{H}_{s}}(-\tau),\hat{\rho}_{r}(t)\}\right]\right)\] \[-\frac{1}{\hbar}\int_{0}^{t-t_{i}}d\tau\mathcal{N}(t;t-\tau)\text{Tr}\left(\hat{x}\cdot\left[\hat{x},[\hat{x}_{\mathrm{H}_{s}}(-\tau),\hat{\rho}_{r}(t)]\right]\right)\,. \tag{150}\]
Due to the identity
\[\text{Tr}\left(\hat{A}\cdot\left[\hat{B},\hat{C}\right]\right)=\text{Tr} \left(\left[\hat{A},\hat{B}\right]\cdot\hat{C}\right)\,, \tag{151}\]
the terms involving the dissipation and the noise kernels vanish and we get
\[\frac{d}{dt}\langle\hat{x}\rangle=-\frac{i}{\hbar}\text{Tr}\left(\hat{\rho}_{ r}\cdot\left[\hat{x},\hat{\mathbb{H}}_{s}\right]\right)=\frac{\langle\hat{p} \rangle}{m}\,. \tag{152}\]
Here, we remember that the system Hamiltonian \(\hat{\mathbb{H}}_{s}\) receives a contribution from \(\hat{V}_{\text{\tiny EM}}\) in addition to the bare potential \(\hat{V}_{0}\) such that (cf. the discussion between Eqs. (124) and (128))
\[\hat{\mathbb{H}}_{s}(t)=\frac{\hat{p}^{2}}{2m}+\hat{V}_{0}(x,t)+\frac{e^{2} \omega_{\text{\tiny max}}^{3}}{3\pi^{2}\epsilon_{0}c^{3}}\hat{x}^{2}\,. \tag{153}\]
Similarly, for the momentum operator we obtain the relation
\[\frac{d}{dt}\langle\hat{p}\rangle=\text{Tr}\left(\hat{p}\partial_{t}\hat{\rho}_{r}\right)= -\frac{i}{\hbar}\text{Tr}\left(\left[\hat{p},\hat{\mathbb{H}}_{s}\right]\cdot\hat{\rho}_{r}\right)+\frac{i}{2\hbar}\int_{0}^{t-t_{i}}d\tau\mathcal{D}(t;t-\tau)\text{Tr}\left([\hat{p},\hat{x}]\cdot\{\hat{x}_{\mathrm{H}_{s}}(-\tau),\hat{\rho}_{r}(t)\}\right)\] \[-\frac{1}{\hbar}\int_{0}^{t-t_{i}}d\tau\mathcal{N}(t;t-\tau)\text{Tr}\left([\hat{p},\hat{x}]\cdot[\hat{x}_{\mathrm{H}_{s}}(-\tau),\hat{\rho}_{r}(t)]\right)\,. \tag{154}\]
Since \([\hat{x},\hat{p}]=i\hbar\mathds{1}\), the term involving the noise kernel vanishes and Eq. (154) simplifies to
\[\frac{d}{dt}\langle\hat{p}\rangle=-\langle\hat{V}_{0,x}\,\rangle-\frac{2e^{2}\omega_{\text{\tiny max}}^{3}}{3\pi^{2}\epsilon_{0}c^{3}}\langle\hat{x}\rangle+\text{Tr}\left(\hat{\rho}_{r}(t)\int_{0}^{t-t_{i}}d\tau\mathcal{D}(\tau)\hat{x}_{\mathrm{H}_{s}}(-\tau)\right)\,. \tag{155}\]
Evaluating the integral using Eq. (155), we see that the last term in the integral gives the contribution \(\frac{2e^{2}\omega_{\text{\tiny max}}^{3}}{3\pi^{2}\epsilon_{0}c^{3}}\langle \hat{x}\rangle\) to \(\frac{d}{dt}\langle\hat{p}\rangle\) in Eq. (155) and cancels the contribution coming from \(\hat{V}_{\text{\tiny EM}}\). The EOM therefore reduces to
\[\frac{d}{dt}\langle\hat{p}\rangle=-\langle\hat{V}_{0}(x)_{,x}\,\rangle-\frac{2\alpha\hbar}{3c^{2}}\text{Tr}\left(\hat{\rho}_{r}(t)\left.\frac{d^{3}}{d\tau^{3}}\hat{x}_{\mathrm{H}_{s}}(-\tau)\right|_{\tau=0}\right)-\frac{4\alpha\hbar\omega_{\text{\tiny max}}}{3\pi c^{2}}\text{Tr}\left(\hat{\rho}_{r}(t)\left.\frac{d^{2}}{d\tau^{2}}\hat{x}_{\mathrm{H}_{s}}(-\tau)\right|_{\tau=0}\right)\,. \tag{156}\]
As shown in the main article, when \(\hat{V}_{0}(x,t)=0\), the double and the triple derivatives acting on \(\hat{x}_{\mathrm{H}_{s}}(-\tau)\) vanish up to second order in the interactions. Here, we focus only on the general case in which the external (time-dependent) potential is switched on. To simplify the equation further, we begin by evaluating the second order derivative in Eq. (156). From Eq. (143) we have
\[\frac{d^{2}}{d\tau^{2}}\hat{x}_{\mathrm{H}_{s}}(-\tau)=\hat{U}_{s}^{-1}(t-\tau;t)\hat{x}\hat{U}_{s}^{\prime\prime}(t-\tau;t)+2\hat{U}_{s}^{-1^{\prime}}(t-\tau;t)\hat{x}\hat{U}_{s}^{\prime}(t-\tau;t)+\hat{U}_{s}^{-1^{\prime\prime}}(t-\tau;t)\hat{x}\hat{U}_{s}(t-\tau;t)\,, \tag{157}\]
where the prime denotes the derivative with respect to \(\tau\). From the Schrodinger equation
\[\hat{U}_{s}^{\prime}(t-\tau;t)=\frac{i}{\hbar}\hat{\mathbb{H}}_{s}(t-\tau)\hat{ U}_{s}(t-\tau;t)\,, \tag{158}\]
the derivatives acting on the unitary operator can be expressed in terms of the Hamiltonian. It is clear that taking higher derivatives of \(\hat{U}_{s}(t-\tau;t)\) would result in higher powers of the Hamiltonian or the partial derivative of the Hamiltonian with respect to \(\tau\), multiplied with only a single unitary operator on the very right. However, if in the
end \(\tau\) is set to zero, the Hamiltonian and its explicit time derivatives will be evaluated at time t, and the unitary operator on the very right disappears since \(\hat{U}_{s}(t;t)=\mathds{1}\). We therefore have the following identities
\[\hat{U}_{s}^{(n)}(t-\tau;t)\Big{|}_{\tau=0} = (-1)^{n}\left(\frac{d^{n}}{dt^{n}}\hat{U}_{s}(t;t_{i})\right)\hat{U}_{s}^{-1}(t;t_{i})\,, \tag{105}\] \[\hat{U}_{s}^{-1\,(n)}(t-\tau;t)\Big{|}_{\tau=0} = (-1)^{n}\hat{U}_{s}(t;t_{i})\left(\frac{d^{n}}{dt^{n}}\hat{U}_{s}^{-1}(t;t_{i})\right)\,. \tag{106}\]
The dependence on the additional time parameter \(t_{i}\) that appears in Eqs. (105) and (106) is only apparent. As discussed before, evaluating the time derivatives on the right hand side of Eq. (105) results in powers of \(\hat{\mathrm{H}}_{s}(t)\) and its derivatives evaluated at \(t\). The remaining unitary matrix \(\hat{U}_{s}(t;t_{i})\) is canceled by the additional \(\hat{U}_{s}^{-1}(t;t_{i})\) on the very right, such that \(t_{i}\) disappears from the equation. Using Eqs. (105) and (106) in Eq. (100) we get
\[\mathrm{Tr}\left(\hat{\rho}_{r}(t)\left.\frac{d^{2}}{d\tau^{2}} \hat{x}_{\mathrm{H}_{s}}(-\tau)\right|_{\tau=0}\right) = \mathrm{Tr}\left(\left[\left(\frac{d^{2}}{dt^{2}}\hat{U}_{s}(t; t_{i})\right)\hat{U}_{s}^{-1}(t;t_{i})\hat{\rho}_{r}(t)\right.\right. \tag{107}\] \[\left.\left.+2\left(-\frac{d}{dt}\hat{U}_{s}(t;t_{i})\right)\hat {U}_{s}^{-1}(t;t_{i})\hat{\rho}_{r}(t)\hat{U}_{s}(t;t_{i})\left(-\frac{d}{dt} \hat{U}_{s}^{-1}(t;t_{i})\right)\right.\right.\] \[\left.\left.+\hat{\rho}_{r}(t)\hat{U}_{s}(t;t_{i})\left(\frac{d^{ 2}}{dt^{2}}\hat{U}_{s}^{-1}(t;t_{i})\right)\right]\hat{x}\right)\,.\]
Here, we have used the cyclic property of the trace to shift the unitary operators \(\hat{U}_{s}\) and their derivatives on the right of \(\hat{x}\) in Eq. (100) onto the very left within the trace. To proceed further, we note that the terms involving the trace in Eq. (101) are multiplied by \(\alpha\). It is therefore sufficient to evaluate the trace at \(0^{\mathrm{th}}\) order in the interactions, as the master equation is valid only up to second order in the interactions. This implies that within the trace the time dependence of the density matrix can be evaluated by keeping only the Liouville-von Neumann term, such that
\[\hat{\rho}_{r}(t)=\hat{U}_{s}(t;t_{i})\hat{\rho}_{r}(t_{i})\hat{U}_{s}^{-1}(t ;t_{i})\,. \tag{108}\]
Eq. (107) then simplifies to
\[\mathrm{Tr}\left(\hat{\rho}_{r}(t)\left.\frac{d^{2}}{d\tau^{2}} \hat{x}_{\mathrm{H}_{s}}(-\tau)\right|_{\tau=0}\right) = \mathrm{Tr}\left(\left[\left(\frac{d^{2}}{dt^{2}}\hat{U}_{s}(t;t_{ i})\right)\hat{\rho}_{r}(t_{i})\hat{U}_{s}^{-1}(t;t_{i})+2\left(\frac{d}{dt} \hat{U}_{s}(t;t_{i})\right)\hat{\rho}_{r}(t_{i})\left(\frac{d}{dt}\hat{U}_{s} ^{-1}(t;t_{i})\right)\right.\right. \tag{109}\] \[\left.\left.+\hat{U}_{s}(t;t_{i})\hat{\rho}_{r}(t_{i})\left(\frac {d^{2}}{dt^{2}}\hat{U}_{s}^{-1}(t;t_{i})\right)\right]\hat{x}\right)\] \[= \mathrm{Tr}\left(\frac{d^{2}}{dt^{2}}\hat{\rho}_{r}(t)\hat{x} \right)\,.\]
Thus, we have the relation
\[\mathrm{Tr}\left(\hat{\rho}_{r}(t)\left.\frac{d^{2}}{d\tau^{2}}\hat{x}_{ \mathrm{H}_{s}}(-\tau)\right|_{\tau=0}\right)=\mathrm{Tr}\left(\frac{d^{2}}{ dt^{2}}\hat{\rho}_{r}(t)\hat{x}\right)=\frac{d^{2}}{dt^{2}}\langle\hat{x}\rangle\,. \tag{110}\]
A similar line of reasoning also leads to the identity
\[\mathrm{Tr}\left(\left.\hat{\rho}_{r}(t)\left.\frac{d^{3}}{d\tau^{3}}\hat{x}_{ \mathrm{H}_{s}}(-\tau)\right|_{\tau=0}\right)=-\mathrm{Tr}\left(\frac{d^{3}}{ dt^{3}}\hat{\rho}_{r}(t)\hat{x}\right)=-\frac{d^{3}}{dt^{3}}\langle\hat{x}\rangle\,. \tag{111}\]
Using Eqs. (110) and (111) in Eq. (101), the EOM for the expectation value of the position operator in the presence of an external potential is obtained to be
\[m_{\mathrm{R}}\frac{d^{2}}{dt^{2}}\langle\hat{x}\rangle=-\langle\hat{V}_{0}(x )_{,x}\,\rangle+\frac{2\alpha\hbar}{3c^{2}}\frac{d^{3}}{dt^{3}}\langle\hat{x} \rangle\,,\qquad\mathrm{where}\qquad m_{\mathrm{R}}:=m+\frac{4\alpha\hbar \omega_{\mathrm{max}}}{3\pi c^{2}}\,. \tag{112}\]
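For comparison with the textbook form of the Abraham-Lorentz equation, and again assuming the SI relation \(\alpha=e^{2}/(4\pi\epsilon_{0}\hbar c)\), the coefficient of the third derivative can be written as

\[\frac{2\alpha\hbar}{3c^{2}}=m\,\tau_{0}\,,\qquad\tau_{0}:=\frac{2\alpha\hbar}{3mc^{2}}=\frac{e^{2}}{6\pi\epsilon_{0}mc^{3}}\approx 6.3\times 10^{-24}\,\mathrm{s}\quad\text{(for a free electron, quoted only for orientation)}\,,\]

so that the equation above takes the familiar form \(m_{\mathrm{R}}\,\frac{d^{2}}{dt^{2}}\langle\hat{x}\rangle=-\langle\hat{V}_{0}(x)_{,x}\,\rangle+m\,\tau_{0}\,\frac{d^{3}}{dt^{3}}\langle\hat{x}\rangle\).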
|
2303.16688 | Model Checking Access Control Policies: A Case Study using Google Cloud
IAM | Authoring access control policies is challenging and prone to
misconfigurations. Access control policies must be conflict-free. Hence,
administrators should identify discrepancies between policy specifications and
their intended function to avoid violating security principles. This paper aims
to demonstrate how to formally verify access control policies. Model checking
is used to verify access control properties against policies supported by an
access control model. The authors consider Google's Cloud Identity and Access
Management (IAM) as a case study and follow NIST's guidelines to verify access
control policies automatically. Automated verification using model checking can
serve as a valuable tool and assist administrators in assessing the correctness
of access control policies. This enables checking violations against security
principles and performing security assessments of policies for compliance
purposes. The authors demonstrate how to define Google's IAM underlying
role-based access control (RBAC) model, specify its supported policies, and
formally verify a set of properties through three examples. | Antonios Gouglidis, Anna Kagia, Vincent C. Hu | 2023-03-29T13:39:05Z | http://arxiv.org/abs/2303.16688v1 | # Model Checking Access Control Policies: A Case Study using Google Cloud IAM
###### Abstract
Authoring access control policies is challenging and prone to misconfigurations. Access control policies must be conflict-free. Hence, administrators should identify discrepancies between policy specifications and their intended function to avoid violating security principles. This paper aims to demonstrate how to formally verify access control policies. Model checking is used to verify access control properties against policies supported by an access control model. The authors consider Google's Cloud Identity and Access Management (IAM) as a case study and follow NIST's guidelines to verify access control policies automatically. Automated verification using model checking can serve as a valuable tool and assist administrators in assessing the correctness of access control policies. This enables checking violations against security principles and performing security assessments of policies for compliance purposes. The authors demonstrate how to define Google's IAM underlying role-based access control (RBAC) model, specify its supported policies, and formally verify a set of properties through three examples.
keywords: Role-based access control, access control, authorization, policy verification, temporal logic, NuSMV
## 1 Introduction
The objective of an access control system is to control and limit the actions or operations in a system that an authorized user or process can perform on a set of resources [1; 2]. Access control is the process that checks all requests to a system and takes a decision to grant or deny access based on a set
of rules. This makes it an essential component in all computing systems. In recent years, Cloud services have rapidly grown, rendering Cloud computing a popular computing paradigm. It changed the way organizations obtain IT resources and reduced costs significantly. As a result, Cloud computing has received considerable attention from academia as well as industry. Access control in the Cloud poses significant security challenges, e.g., secure inter-operation [3], and supporting security assessment of policies [4].
Access control policies dictate who has what access to which resource and thus it is important that these policies are error-free throughout their life-cycle. However, in practice, policies often do not satisfy the desired security requirements, and flaws in their specification can remain hidden and cause observable harm when exploited. Indeed, [5] states that misconfigurations in access control policies are one of the main reasons for security and privacy breaches due to potential inconsistencies. To eliminate unwanted access control discrepancies, verifying and rigorously testing access control policies before enforcing them in an operational environment is necessary. Nevertheless, the correct specification of access control policies is challenging since it is difficult to identify discrepancies between policy rule specifications and their intended functions for ensuring no violation of access control security principles [6].
Although the integrated tools provided by Cloud providers can check policies for errors, Cloud administrators have little control over the specification of security requirements that can be formally verified in access control policies. We anticipate that having an automated technique to verify the correctness of access control policies against a set of desired security requirements would serve as a valuable tool for Cloud administrators. This may assist in promptly identifying issues in the existing policies and provide information on how to exploit them. In this paper, we use an existing Identity and Access Management (IAM) system (i.e., Google's Cloud IAM) as a case study to elaborate on how policies can be modeled and subsequently verified against a set of user-defined properties.
The main contributions of this paper are:
* Demonstrate how we formally define the RBAC model of IAM based on the limited publicly available information.
* Specify a transition system for the RBAC model and demonstrate how to specify access control policies and properties in temporal logic.
* Verify user-defined properties in policy examples provided by Google through the above methods.
In the rest of this paper, we review some of the related work in Section 2, define Google's Cloud IAM RBAC model in Section 3, specify a transition system for the defined RBAC model and relevant properties in Section 4, verify example policies in Section 5, and present concluding remarks in Section 6.
## 2 Related Work
Zhang et al., [7] described the main Cloud access control models for OpenStack, AWS, and Microsoft Azure Cloud platforms. They provided a formal specification of these access control models and extended them to include the capability of handling information and resource sharing across tenants. Power et al., [8] presented two formal models of the access policy language used within the AWS Cloud computing infrastructure. They followed a hybrid approach by using both the Z specification language and the Alloy modeling language to test multiple policy properties and generate and test candidate policies. Evangelidis et al., [9] proposed a probabilistic verification scheme based on performance modeling and formal verification of Cloud-based auto-scaling policies. To demonstrate the applicability of their method, they used a validation process on Amazon EC2 and Microsoft Azure, considering two different Cloud service models, i.e., IaaS and PaaS. Others focused on the challenges faced by the Cloud computing growth and conducted comparison studies between popular Cloud service providers, e.g., [10] compared Amazon EC2 and Microsoft Azure regarding how they deal with the challenges of availability, resource scaling, data deletion, data lock-in, and data security. Tajadod et al., [11] compared the same platforms looking at the security of architecture and the application levels.
A number of papers address verification of access control policies and several techniques have been reported in [5; 12; 13; 14; 15; 16; 17; 18]. Their objectives are to look at methods that can check the correctness of policies. In this paper, we demonstrate the application of a generic technique, following NIST's guidelines [5], which can verify access control properties against policies supported by an access control model.
In addition to the aforementioned approaches, a few access control verification tools were developed [12; 19; 20; 21; 22] to facilitate policy-testing,
with Access Control Policy Tool (ACPT) [19] and Security Policy Tool (SPT) [23] as representative examples. The NIST Computer Security Division developed ACPT in collaboration with the North Carolina State University and University of Arkansas [24], and it is an implementation of the verification method in [5]. Through a graphical user interface (GUI), it provides templates for composing access control policies and properties and verifying them using a symbolic model verification (SMV) checker, NuSMV [25]. Moreover, it provides a complete test suite generated by NIST's combinatorial testing tool ACTS [26] and generates XACML policy outputs of the verified model. SPT provides the same fundamental functions as ACPT, extended with advanced features as a commercial product [23].
## 3 The \(RBAC_{GCP}\) Model
Cloud IAM is part of the Google Cloud Platform (GCP), allowing Cloud administrators to control users' access to resources. Hence, when enforcing a policy, an organization can meet its regulatory and business objectives [27]. _"Cloud IAM manages access control by defining who (identity) has what access (role) for which resource"_[28]. A high-level description of the RBAC model used in Google's Cloud IAM is available. Although its formal definition is not provided, Google documents its main entities, relations, and main operations. We formally define an access control model for Cloud IAM by following publicly available information and specify it based on the ANSI INCITS 359-2012 RBAC [29], which provides a solid foundation for defining role-based models. The following sections provide formal definitions of the main elements and functionalities of the model. Henceforth, we refer to the GCP RBAC model as \(RBAC_{GCP}\).
### Model Description
The \(RBAC_{GCP}\) model consists of eight elements: MEMBERS, ROLES, PERMISSIONS, RESOURCES, SERVICES, VERBS, POLICIES, and CONDITIONS. It binds MEMBERS to ROLES and ROLES to PERMISSIONS instead of assigning PERMISSIONS directly to MEMBERS [28]. Figure 1 illustrates the relation of \(RBAC_{GCP}\) elements. A MEMBER representing a human user or autonomous entity can access RESOURCES through a ROLE representing a job function described by a collection of PERMISSIONS. PERMISSIONS determine what VERBS (i.e., operations) are allowed on a system's RESOURCE (e.g., Compute Engine instances, Cloud Storage buckets).
A POLICY is a collection of ROLE bindings, which bind one or more MEMBERS to individual ROLES. CONDITIONS are logical expressions based on Google's Common Expression Language (CEL) and are assigned on ROLE bindings.
Figure 1: The \(RBAC_{GCP}\) model.

Typically, in Cloud IAM, MEMBERS can be of the following type: Google account, Service account, Google group, G Suite domain, or Cloud Identity domain [28]. ROLES can be Primitives, Predefined, or Custom. Primitives are the three concentric roles that have always existed in the GCP console: the Owner, Editor, and Viewer ROLES. The Owner ROLE contains the PERMISSIONS of the Editor, and the Editor ROLE includes the Viewer's PERMISSIONS. Google creates and maintains predefined roles, which can provide granular access to specific GCP resources. Each product in the Google Cloud platform has its own predefined roles since different types of operations apply to different resources. A particular kind of role in Cloud IAM is the Custom role, which allows administrators to combine one or more PERMISSIONS and create unique ROLES that satisfy their organizations' needs when predefined ROLES are insufficient. Custom roles can only be granted within the Organization and cannot be used to grant PERMISSIONS on RESOURCES owned by a different Organization. Despite their flexibility, custom roles pose a maintenance challenge for administrators and can create potential security risks. These ROLES are user-defined and, therefore, not maintained by Google. Also, they are not automatically updated when new permissions, features, or services are added to the GCP [30]. Consequently, administrators must always keep up with the changes and ensure that any new functionality is consistent with the existing access control policies so as not to violate the security principles of the Organization. This task is challenging and can be highly complex and time-consuming [31].
PERMISSIONS in Cloud IAM are tuples \(<\)_service\(>\), \(<\)resource\(>\), \(<\)verb\(>\)_ that describe using VERBS what OPERATIONS are allowed on a RESOURCE. A PERMISSION is defined per SERVICE and RESOURCE since every RESOURCE enables different OPERATIONS [28]. For example, the PERMISSION _"storage.buckets.create"_ indicates creating a bucket in Cloud Storage is permitted for the storage service. RESOURCES are the fundamental components that comprise the GCP services, the Compute Engine instances (i.e., virtual machines), the App Engine services, the Cloud Storage buckets, and the Cloud Pub/Sub topics [32]. RESOURCES in Cloud IAM are hierarchical, as shown in Figure 2. Projects are the children of the Folders, which are children of Organization, and the Resources are the descendants of Projects at the lowest level. Folders is an optional grouping mechanism.
Figure 2: The Cloud IAM resource hierarchy (based on [33]).

POLICIES of Cloud IAM manage access to RESOURCES. A POLICY is a collection of statements that define the BINDING of ROLES and MEMBERS, as illustrated in Figure 3 [28]. A BINDING can optionally contain a CONDITION, i.e., an expression comprising one or more logic statements that evaluate various conditional attributes; each ROLE BINDING may have at most one CONDITION. A BINDING without a CONDITION will always grant the ROLE to the specified MEMBERS. A BINDING is valid if its CONDITION is evaluated to TRUE. CONDITIONS provide constraints based either on the availability of a requested RESOURCE or on the situation of the access request. Examples for the former are the RESOURCE type and the RESOURCE name, and for the latter, the date/time of the request, the expected URL path, and the destination IP address. The enforcement of CONDITIONS can support attribute-based access control (ABAC) [34] to enhance the \(RBAC_{GCP}\) model, allowing administrators to create more flexible and efficient access control policies. For instance, they can grant access to MEMBERS only during specified working hours and only for a specific RESOURCE type with the desired access level [35].
Figure 3: The Cloud IAM bindings (based on [28]).

POLICIES are hierarchical and follow the same path as the RESOURCE hierarchy in Figure 2. That means that the administrator can set a policy at any level in the RESOURCE hierarchy (e.g., Organization, Folder, Project, Resource level), and the child resources of that level automatically inherit it. RESOURCES always inherit the POLICIES of the parent RESOURCE, and the inheritance is transitive through the hierarchical path. Therefore, RESOURCES inherit the POLICIES of the Project; Projects inherit the POLICIES of the Folder, and Folders inherit the Organization's POLICIES. At each level, the effective policies (i.e., in the presence of a hierarchy) are equal to the union of policies directly applied at that level and POLICIES inherited from its ancestors. For instance, a POLICY used in a Folder will also apply to Projects and RESOURCES under that Folder. Note that the POLICY hierarchy will change if the RESOURCE hierarchy is changed, such that the PERMISSIONS that a child node inherited from its original parent will be lost and replaced by the PERMISSIONS set at the destination parent. \(RBAC_{GCP}\) has no sessions. Instead, a ROLE remains dormant and not grantable if the respective SERVICE is not enabled. An administrator can use custom ROLES to enforce the principle of least privilege [36].
### Model Definition
Following the notation used in the ANSI INCITS 359-2012 standard, we formally define the core \(RBAC_{GCP}\) model as:
* _MEMBERS_, \(ROLES\), _SERVICES_, _RESOURCES_, _VERBS_, _CONDITIONS_ are sets of members, roles, services, resources, verbs, and conditions, respectively;
* _BINDING_ is a binding, such as _BINDING_\(\subseteq\)_MEMBERS_\(\times\)_ROLES_\(\times\)_CONDITIONS_ is a many-to-many mapping relation of _MEMBERS_, _ROLES_ and _CONDITIONS_ assignment. _CONDITIONS_ are optional;
* _PERMISSIONS_\(=\)_2\({}^{(\text{SERVICES}\times\text{RESOURCES}\times\text{VERBS})}\) is a set of permissions;
* _PA_\(\subseteq\)_PERMISSIONS_\(\times\)_ROLES_ is a many-to-many mapping of _PERMISSIONS_ to _ROLES_ assignment;
* _POLICIES_\(\subseteq\)_2\({}^{\text{BINDING}}\) is the set of policies, i.e., a single policy is a set of bindings.
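As a small illustration of these definitions (the concrete names below are hypothetical and chosen for exposition only, in the spirit of the examples of Section 5), a Cloud IAM permission such as _pubsub.topics.publish_ corresponds to a singleton element of _PERMISSIONS_, a binding pairs a member with a role, and a policy is a set of such bindings:

\[prm=\{(\mathit{pubsub},\mathit{topics},\mathit{publish})\}\in\mathit{PERMISSIONS},\qquad b=(\mathit{[email protected]},\,\mathit{roles/pubsub.editor},\,c)\in\mathit{BINDING},\qquad pl=\{b\}\in 2^{\mathit{BINDING}},\]

where the optional condition \(c\) can be taken as always true. The assignment \((prm,\mathit{roles/pubsub.editor})\in\mathit{PA}\) would then grant that permission to every member bound to the role.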
## 4 Model and Properties Specification
This section elaborates on the model checking technique for verifying \(RBAC_{GCP}\) policies. The process is compliant with NIST's guidelines [5]. Specifically, we define the \(RBAC_{GCP}\) model using a transition system (TS) and verify example policies using temporal logic specifications for demonstration purposes.
### A Transition System for \(RBAC_{GCP}\)
Model checking is a formal verification technique that can be applied to verify the correctness of models and detect faults in model specifications. It takes a finite-state model and checks it against specified properties expressed using temporal modalities, linear temporal operators, and path quantifiers. To achieve this, we define access control rules in a transition system for the \(RBAC_{GCP}\), as follows.
**Definition 1.** An access control rule is an implication of \(c\to d\), where constraint \(c\) is a predicate expression of the form:
\((\bigvee\mathit{MEMBER}=\mathit{mbrs})\wedge(\bigvee\mathit{ROLE}=\mathit{ role})\wedge(\bigvee\mathit{PERMISSION}=\mathit{prms})\)
\(\wedge(\bigvee\mathit{RESOURCE}=\mathit{rscs})\) which when \(true\) implies the access control decision \(d\), i.e., \(\mathit{decision}=\mathit{Grant}\) or \(\mathit{decision}=\mathit{Deny}\), where \(\mathit{mbrs}\in\mathit{MEMBER}\), \(\mathit{role}\in\mathit{ROLES},\ \mathit{prms}\in\mathit{PERMISSION},\ \mathrm{and}\ \mathit{rscs}\in\mathit{RESOURCE}.\) The symbol of \(\bigvee\) denotes that more than one formula may be present, e.g., \(\bigvee\mathit{MEMBER}=\mathit{mbrs}\) could be \(MEMBER=\mathit{mbrs}_{1}\lor MEMBER=\mathit{mbrs}_{2}\vee\ldots\lor MEMBER= \mathit{mbrs}_{n}\), where \(\mathit{mbrs}_{1},\ldots,\mathit{mbrs}_{n}\in MEMBERS\).
**Definition 2.** An \(RBAC_{GCP}\) access control property \(prop\) is an implication formula of \(\forall\square(c\rightarrow\forall\Diamond d)\), where \(c\) is the cause and \(d\) is the effect (response property pattern). Both \(\square\) and \(\Diamond\) are elementary temporal modalities for "always" and "eventually", respectively, and \(\forall\) means "for all paths" (Computation Tree Logic (CTL) semantics) [37].
**Definition 3.** The transition system \(TS\) for the \(RBAC_{GCP}\) model is expressed as a tuple \((S,Act,\delta,i_{0})\) where:
* \(S\) is a set of system states, \(S=\{Grant,Deny\}\);
* \(Act\) is a set of actions, where \(Act=\{((\bigvee\mathit{MEMBER}=\mathit{mbrs})\ \wedge\\ (\bigvee\mathit{ROLE}=\mathit{role})\wedge(\bigvee\mathit{PERMISSION}=\mathit{prms}) \wedge(\bigvee\mathit{RESOURCE}=\mathit{rscs})\to decision=Grant),\ldots\}\)
* \(\delta\) is a transition relation, where \(\delta:S\times Act\to S\);
* \(i_{0}\) is the initial state, \(i_{0}=\{\textit{Deny}\}\).
Access control rules define the system's behavior, which functions as the transition relation \(\delta\) in \(TS\). In other words, a transition system specifies how a system can evolve from one state to another when the transition relation is applied, i.e., an action \(Act\) is performed on a state \(S\) to bring the system to the next state of \(S\). To verify \(RBAC_{GCP}\) access control properties using a temporal logic formula, we say that model \(TS\) satisfies \(prop\) by \(TS\models prop\) i.e., \(TS\models\forall\Box(c\rightarrow\forall\Diamond d)\) from Definition 2.
### Specification of Properties
The transition system describes the system's behavior, which can be used for verifying properties [38]. The verification shows if the access control policy is correctly specified and according to the security requirements. Specifically, model checking performs exhaustive testing of all behaviors of the model. It can verify if the defined properties hold or not throughout the model's behaviors (i.e., system states). In the \(RBAC_{GCP}\) model, properties are expressed as (based on Definition 2; conditions are optional):
\[\forall\Box((MEMBER=m\wedge ROLE=r\land\] \[PERMISSION=prms\land\] \[RESOURCE=rsrc\land\] \[CONDITION=value)\rightarrow\] \[\forall\Diamond(decision=Grant\lor Deny))\]
Different specifications can be expressed depending on the values used in the predicates forming the property above. Consequently, we can define several different logical representations of the response pattern property using the same CTL formula.
## 5 Verification of Example Policies
This section demonstrates using examples from Google's Cloud IAM website [33] how to verify \(RBAC_{GCP}\) policies. The examples show how the POLICY inheritance works in the Cloud IAM platform. We use these examples for their diversity in terms of used RESOURCES, MEMBER types, structural complexity, number of PERMISSIONS per ROLE, and level of a hierarchy of access control policy rules. The NuSMV code of all three examples are available on GitHub [39].
We assign values \(m,r,prms,rsrc,value\) to the parameters \(MEMBER\), \(ROLE\), \(PERMISSION\), \(RESOURCE\), and \(CONDITION\), respectively, following the CTL formula in Section 4.2 to specify properties. \(CONDITION\) is optional and not used in the examples. The ANY value is introduced for all variables as a wild card. The response property is written as: \(AG(c\to AF(d))\), where \(G\) is an equivalent symbol used instead of \(\square\), and \(F\) instead of \(\diamondsuit\). \(A\) represents the universal path quantifier \(\forall\). So, we can rewrite access control properties in NuSMV as:
\[AG((MEMBER=m\&ROLE=r\&\] \[PERMISSION=prms\&\] \[RESOURCE=rsrc\&\] \[CONDITION=value)\rightarrow\] \[AF(decision=Grant\mid Deny)).\]
The model checker creates all system model states and evaluates whether the policy model satisfies the specified properties. If it does, there are no errors from the output of NuSMV. Otherwise, a counterexample is generated, which details why the model fails to satisfy a property.
### Example 1: Cloud Pub/Sub
The first example [33] uses Cloud Pub/Sub RESOURCES, which are topics under a Project. As illustrated in Figure 4, topic_a resides in project_a. The Cloud IAM platform manages two Google accounts, i.e., \([email protected]\) and \([email protected]\). We assume that the POLICY \(pl_{1}\) is set on \(project\_a\) to assign the ROLE of Editor (\(roles/pubsub.editor\)) to \([email protected]\) and POLICY \(pl_{2}\) is set on \(topic\_a\) to assign the ROLE of Publisher (\(roles/pubsub.publisher\)) to \([email protected]\). Hence the two POLICIES that contain the rules are (based on Definition 1):
POLICY for [email protected]_:
\[pl_{1}:MEMBER="[email protected]"\&\] \[ROLE="roles/pubsub.editor"\&\] \[(PERMISSION=prms_{1}\mid\cdots\mid prms_{n})\&\] \[RESOURCE="project\_a"\rightarrow\] \[(decision=Grant)\]
POLICY for [email protected]_:
\[pl_{2}:MEMBER="[email protected]"\&\] \[ROLE="roles/pubsub.publisher"\&\] \[(PERMISSION=prms_{1}\mid\cdots\mid prms_{n})\&\] \[RESOURCE="topic\_a"\to\] \[(decision=Grant)\]
As RESOURCES always inherit the POLICIES of the parent RESOURCE, \(topic\_a\) inherits the POLICY from \(project\_a\). Hence, we introduce an additional POLICY \(pl^{\prime}_{1}\) for \(topic\_a\) to assign the Editor ROLE \(roles/pubsub.editor\) to \([email protected]\), as follows:
\[pl^{\prime}_{1}:MEMBER="[email protected]"\&ROLE="roles/pubsub.editor"\&\] \[(PERMISSION=prms_{1}\mid\cdots\mid prms_{n})\&RESOURCE="topic\_a"\to\] \[(decision=Grant))\]
Ultimately the effective policy for \(topic\_a\) will be the union of the POLICIES directly applied to \(topic\_a\) and the POLICIES inherited from its ancestors.
The respective NuSMV code for the POLICIES and the transition system is available in Example 1 on GitHub [39]. As a result, the ROLE assignments for each MEMBER per RESOURCE are shown in Table 1.
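For illustration, one possible way to encode the effective role bindings of this example (including the binding that \(topic\_a\) inherits from \(project\_a\)) as rules of the transition relation is sketched below, following the ROLE assignments of Table 1. The identifiers are sanitized stand-ins for the member and role names used in the text (NuSMV identifiers cannot contain '@' or '.'), the per-ROLE PERMISSION disjunctions are elided for brevity, and this sketch is not the actual model published on GitHub [39]:

```
MODULE main
VAR
  -- sanitized stand-ins for "[email protected]" and "[email protected]"
  MEMBER   : {raha, alice};
  ROLE     : {pubsub_editor, pubsub_publisher};
  RESOURCE : {project_a, topic_a};
  decision : {Grant, Deny};
ASSIGN
  init(decision) := Deny;
  next(decision) :=
    case
      -- pl1 : the Editor binding on project_a
      MEMBER = raha  & ROLE = pubsub_editor    & RESOURCE = project_a : Grant;
      -- pl1': the binding topic_a inherits from project_a
      MEMBER = raha  & ROLE = pubsub_editor    & RESOURCE = topic_a   : Grant;
      -- pl2 : the Publisher binding on topic_a
      MEMBER = alice & ROLE = pubsub_publisher & RESOURCE = topic_a   : Grant;
      TRUE : Deny;   -- everything else is denied
    esac;
```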
After expressing POLICIES and the \(RBAC_{GCP}\)\(TS\), the policy properties should be specified for verification in the model checker. When a specification is evaluated to be TRUE, there is no error to report, i.e., the specified property is satisfied by the model. On the other hand, when the specified property is not satisfied and evaluated to be FALSE, the model checker provides a counterexample to justify the result. For example, the NuSMV specification to check if \([email protected]\) has the publisher ROLE for \(project\_a\) at the \(Projects\) hierarchy level is:
\begin{table}
\begin{tabular}{|l|c|c|} \hline \multirow{2}{*}{**Resource**} & \multicolumn{2}{c|}{**Authorized role**} \\ \cline{2-3} & **[email protected]** & **[email protected]** \\ \hline \hline project\_a & Editor & No access \\ \hline topic\_a & Editor & Publisher \\ \hline \end{tabular}
\end{table}
Table 1: Example 1 - Authorized roles per member and resource.
SPEC AG ((MEMBER = "[email protected]") & (ROLE = "roles.pubsub.publisher") & (PERMISSION = ANY) & (RESOURCE = "project_a") -> AF decision = Grant)
The above will be evaluated to be FALSE since \([email protected]\) is assigned to ROLE \(Publisher\) on \(topic\_a\), and according to \(RBAC_{GCP}\) POLICY, she cannot access \(project\_a\) because it resides in a higher level.
A NuSMV specification to check if \([email protected]\) has the \(pubsub.topics.publish\) PERMISSION on \(project\_a\) at \(Projects\) hierarchy level can be written:
SPEC AG ((MEMBER = "[email protected]") & (ROLE = ANY) & (PERMISSION = "pubsub.topics.publish") & (RESOURCE = "project_a") -> AF decision = Grant)
The above will be evaluated to be FALSE since \([email protected]\) has the PERMISSION \(pubsub.topics.publish\), for her \(Publisher\) ROLE only on \(topic\_a\), but not on \(project\_a\) that resides on a higher level.
Figure 4: Example 1 - Cloud Pub/Sub (based on [33]).
Lastly, to check if \([email protected]\) has the PERMISSION \(pubsub.topics.delete\) on \(topic\_a\) at \(Resources\) hierarchy level, we write:
SPEC AG ((MEMBER = "[email protected]") & (ROLE = ANY) & (PERMISSION = "pubsub.topics.delete") & (RESOURCE = "topic_a") -> AF decision = Grant)
Although \([email protected]\) has the ROLE \(Publisher\) on \(topic\_a\) she does not have the PERMISSION \(pubsub.topics.delete\) since an assignment is missing between that ROLE and the PERMISSION; hence, it is evaluated to be FALSE.
In all three specifications, the result of the verification is \(RBAC.decision=Deny\) without a next state, which indicates that they can never be satisfied according to the \(RBAC_{GCP}\) POLICIES. The model checker could not find any system state where the property would be evaluated to be TRUE so that the access permission \(Grant\) could eventually happen.
### Example 2: Cloud Storage
The second example [33] uses Cloud Storage RESOURCES called buckets. The bucket \(upload\_here\) belongs to the Project \(project\_a\) of the Organization \(example.com\) and is used to store files uploaded by GCP users (see Figure 5). Many users can use the same bucket to upload files; thus, it is required that no user can delete any of the files uploaded by other users. However, the data processing expert should be able to access or delete anyone's files.
We assume that \([email protected]\) is the Google account of the data processing expert and \(data\[email protected]\) is the group account of users who upload files to the bucket. The group has three MEMBERS: \([email protected]\), \([email protected]\), and \([email protected]\). To achieve the security requirements, a POLICY is set on \(project\_a\) to assign the Storage Object Admin ROLE (\(roles/storage.objectAdmin\)) to \([email protected]\), and a second POLICY is set on \(project\_a\) to assign the Storage Object Creator ROLE (\(roles/storage.objectCreator\)) to \(data\[email protected]\). These ROLES should allow \([email protected]\) to upload or delete any object in any bucket in \(project\_a\), while the MEMBERS of \(data\[email protected]\) should be allowed to upload files. The two POLICIES will look as follows:
POLICY for [email protected]_:
\(pl_{1}:\)\(MEMBER="[email protected]"\&\)
\(ROLE="roles/storage.objectAdmin"\&\)
\((PERMISSION=prms_{1}\mid\cdots\mid prms_{n})\&RESOURCE="project\_a"\to\)
\((decision=Grant)\)
POLICY for data\_uploaders@example.com:
\(pl_{2}:\)\((MEMBER="data\[email protected]"\mid"[email protected]"\mid\)
\("[email protected]"\mid"[email protected]")\&\)
\(ROLE="roles/storage.objectCreator"\&\)
\((PERMISSION=prms_{1}\mid\cdots\mid prms_{n})\&RESOURCE="project\_a"\to\)
\((decision=Grant)\)
POLICY \(pl_{2}\) applies to \(project\_a\) for every group MEMBER, which assigns the Storage Object Creator ROLE to \([email protected]\), \([email protected]\) and \([email protected]\), as well.
Bucket \(upload\_here\) inherits POLICIES from its parent RESOURCE \(project\_a\). POLICIES \(pl_{1}\) and \(pl_{2}\) will then be defined and populated to the transition system of the \(RBAC_{GCP}\) model. Although the bucket has no defined POLICIES, these two POLICIES will apply on \(upload\_here\) (due to hierarchy) such that the Storage Object Admin ROLE is assigned to \([email protected]\) on \(upload\_here\), and the Storage Object Creator ROLE is assigned to \(data\[email protected]\) for \(upload\_here\), as follows:
\(pl^{\prime}_{1}:\)\(MEMBER="[email protected]"\&\)
\(ROLE="roles/storage.objectAdmin"\&\)
\((PERMISSION=prms_{1}\mid\cdots\mid prms_{n})\&\)
\(RESOURCE="upload\_here"\to\)
\((decision=Grant)\)
and
\(pl_{2}^{\prime}\) :\((MEMBER="data\[email protected]"\mid"[email protected]"\mid"[email protected]" \mid"[email protected]")\&\)
\(ROLE="roles/storage.objectCreator"\&\)
\((PERMISSION=prms_{1}\mid\cdots\mid prms_{n})\&\)
\(RESOURCE="upload\_here"\to\)
\((decision=Grant)\)
Ultimately, the effective POLICIES at \(project\_a\) and \(upload\_here\) will be the union of the POLICIES directly applied to them and the POLICIES inherited from their ancestors.
Table 2 shows the ROLES assigned to each MEMBER per RESOURCE.
Figure 5: Example 2 - Cloud Storage (based on [33]).

The respective NuSMV code for the POLICIES and the transition system is available on GitHub [39], under Example 2. After expressing POLICIES and the \(TS\) of the \(RBAC_{GCP}\) in NuSMV, we specify the policy properties to be verified by the model checker. The following explains the evaluation of specifications.
Four of the example properties will be evaluated to be FALSE as follows.
SPEC AG ((MEMBER = "[email protected]") & (ROLE = ANY) & (PERMISSION = "storage.objects.delete") & (RESOURCE = ANY) -> AF decision = Grant)
This property is FALSE since the group of \(data\[email protected]\) does not have the permission \(storage.objects.delete\) on any RESOURCE.
SPEC AG ((MEMBER = "[email protected]") & (ROLE = ANY) & (PERMISSION = ANY) & (RESOURCE = "example.com") -> AF decision = Grant)
This property was also evaluated to be FALSE since we assigned \([email protected]\) to the Storage Object Admin ROLE on \(project\_a\), and from the RESOURCE hierarchy, it has no access on \(example.com\) in a higher level.
SPEC AG ((MEMBER = ANY) & (ROLE = ANY) & (PERMISSION = "storage.objects.delete" | PERMISSION = "storage.objects.update" ) & (RESOURCE = "example.com") -> AF decision = Grant)
The above property is evaluated to FALSE since, according to \(RBAC_{GCP}\) RESOURCE hierarchy, none of the MEMBERS have the PERMISSION \(storage.objects.delete\) or \(storage.objects.update\) on \(example.com\) because we assigned them to \(project\_a\) that resides in a lower level.
\begin{table}
\begin{tabular}{|l|c|c|} \hline \multirow{2}{*}{**Resource**} & \multicolumn{2}{c|}{**Authorized role**} \\ \cline{2-3} & **[email protected]** & **data\_uploaders@example.com** (jane, harry, bob) \\ \hline \hline example.com & No access & No access \\ \hline project\_a & Storage Object Admin & Storage Object Creator \\ \hline upload\_here & Storage Object Admin & Storage Object Creator \\ \hline \end{tabular}
\end{table}
Table 2: Example 2 - Authorized roles per member and resource.
SPEC AG ((MEMBER != "[email protected]") & (ROLE = ANY) & (PERMISSION = "storage.objects.create") & (RESOURCE = ANY) -> AF decision = Deny)
This property is also FALSE since MEMBERS other than \([email protected]\) do have the PERMISSION \(storage.objects.create\) on a RESOURCE in the RESOURCE hierarchy. The MEMBERS of the group \(data\_uploaders@example.com\) obtain this PERMISSION through their assignment to the Storage Object Creator ROLE on \(project\_a\), which resides at a higher level.
The verification of the first three specifications results in \(RBAC.decision=Deny\) without a next state, which indicates that these properties can never be satisfied in the \(RBAC_{GCP}\) model. The model checker NuSMV could not find any system state where the property would be evaluated to be TRUE so that it could eventually cause the access permission \(Grant\) to happen. Similarly, the verification of the fourth specification results in \(RBAC.decision=Grant\); hence, it is invalidated too.
### Example 3: Compute Engine
The third example [33] uses Compute Engine RESOURCES, which are virtual machines (VMs) hosted on Google's infrastructure. For this example, the organization \(example.com\) owns two projects, \(project\_1\) and \(project\_2\), and the RESOURCES \(instance\_a\) and \(instance\_b\) belong to each project, respectively, as illustrated in Figure 6. We assume that \([email protected]\) is a MEMBER of the administrator's team that manages the network and security RESOURCES of the Organization, and \([email protected]\) is a MEMBER of the development team. \([email protected]\) is capable of making changes to all network RESOURCES and any project under them, and \([email protected]\) should be allowed to launch instances and carry out other instance-related actions within her project. Such security requirements are implemented by the POLICY on \(example.com\) that assigns the Compute Network Admin ROLE (\(roles/compute.networkAdmin\)) to \([email protected]\) and a second POLICY on \(project\_2\) that assigns the Compute Instance Admin ROLE (\(roles/compute.instanceAdmin\)) to \([email protected]\). The two POLICIES are:
POLICY for [email protected]_:
\(\begin{array}{l}pl_{1}:&MEMBER="[email protected]"\&ROLE="roles/compute.networkAdmin"\&\\ &(PERMISSION=prms_{1}\mid\cdots\mid prms_{n})\&RESOURCE="example.com"\to \\ &(decision=Grant)\end{array}\)
POLICY for [email protected]_:
\[\begin{array}{l}pl_{2}:&MEMBER="[email protected]"\&\\ &ROLE="roles/compute.instanceAdmin"\&\\ &(PERMISSION=prms_{1}\mid\cdots\mid prms_{n})\&\\ &RESOURCE="project\_2"\to\\ &(decision=Grant)\end{array}\]
Since \(project\_1\) and \(project\_2\) inherit the POLICIES of \(example.com\), once we define POLICY \(pl_{1}\), we introduce the following POLICIES for the Compute Network Admin ROLE (\(roles/compute.networkAdmin\)) to be assigned to \([email protected]\) on \(project\_1\) and \(project\_2\), as follows:
POLICY for [email protected]_ on \(project\_1\):
Figure 6: Example 3 - Compute Engine (based on [33]).
\(pl_{1.1}:MEMBER="[email protected]"\&\)
\(ROLE="roles/compute.networkAdmin"\&\)
\((PERMISSION=prms_{1}|\dots|prms_{n})\&\)
\(RESOURCE="project\_1"\to\)
\((decision=Grant)\)
POLICY for [email protected]_ on \(project\_2\):
\(pl_{1.2}:MEMBER="[email protected]"\&\)
\(ROLE="roles/compute.networkAdmin"\&\)
\((PERMISSION=prms_{1}|\dots|prms_{n})\&\)
\(RESOURCE="project\_2"\to\)
\((decision=Grant)\)
RESOURCES \(instance\_a\) and \(instance\_b\) also inherit the POLICIES of their parent RESOURCES, \(project\_1\) and \(project\_2\), respectively. The Compute Network Admin ROLE \((roles/compute.networkAdmin)\) is thus assigned to \([email protected]\) on \(instance\_a\) and \(instance\_b\), and the Compute Instance Admin ROLE \((roles/compute.instanceAdmin)\) is assigned to \([email protected]\) only on \(instance\_b\). The introduced POLICIES are:
POLICY for [email protected]_ on \(instance\_a\):
\(pl^{\prime}_{1.1}:MEMBER="[email protected]"\&\)
\(ROLE="roles/compute.networkAdmin"\&\)
\((PERMISSION=prms_{1}|\dots|prms_{n})\&\)
\(RESOURCE="instance\_a"\to\)
\((decision=Grant)\)
POLICY for [email protected]_ on \(instance\_b\):
\(pl^{\prime}_{1.2}:MEMBER="[email protected]"\&\)
\(ROLE="roles/compute.networkAdmin"\&\)
\((PERMISSION=prms_{1}|\dots|prms_{n})\&\)
\(RESOURCE="instance\_b"\to\)
\((decision=Grant)\)
POLICY for [email protected]_ on \(instance\_b\):
\[pl_{2}^{\prime}:MEMBER="[email protected]"\&\] \[ROLE="roles/compute.instanceAdmin"\&\] \[(PERMISSION=prms_{1}\mid\cdots\mid prms_{n})\&\] \[RESOURCE="instance\_b"\rightarrow\] \[(decision=Grant)\]
Ultimately, the effective POLICIES on every RESOURCE are the union of the POLICIES directly applied to the RESOURCE and the POLICIES inherited from its ancestors.
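This union rule can also be mirrored outside the model checker. The following Python sketch (all identifiers are hypothetical and are not part of NuSMV or the Cloud IAM API) walks the RESOURCE hierarchy of Example 3 and evaluates a request against the accumulated bindings; under these assumptions it reproduces, for instance, the expected Deny for \([email protected]\) on \(project\_1\) and Grant on \(instance\_b\).

```python
# Hypothetical sketch (not NuSMV, not the Cloud IAM API): effective bindings
# on a RESOURCE as the union of its own bindings and those of its ancestors.
from typing import Dict, Optional, Set, Tuple

# parent of each RESOURCE in the Example 3 hierarchy (None marks the root)
PARENT: Dict[str, Optional[str]] = {
    "example.com": None,
    "project_1": "example.com",
    "project_2": "example.com",
    "instance_a": "project_1",
    "instance_b": "project_2",
}

# bindings attached directly to a RESOURCE: (MEMBER, ROLE)
DIRECT: Dict[str, Set[Tuple[str, str]]] = {
    "example.com": {("[email protected]", "roles/compute.networkAdmin")},
    "project_2": {("[email protected]", "roles/compute.instanceAdmin")},
}

# toy PERMISSION sets standing in for prms_1 ... prms_n (illustrative only)
ROLE_PERMS: Dict[str, Set[str]] = {
    "roles/compute.networkAdmin": {"compute.networks.update"},
    "roles/compute.instanceAdmin": {"compute.instances.create"},
}

def effective_bindings(resource: str) -> Set[Tuple[str, str]]:
    """Union of the direct bindings and all bindings inherited from ancestors."""
    bindings: Set[Tuple[str, str]] = set()
    node: Optional[str] = resource
    while node is not None:
        bindings |= DIRECT.get(node, set())
        node = PARENT[node]
    return bindings

def decision(member: str, permission: str, resource: str) -> str:
    """Grant iff some effective binding gives `member` a ROLE containing `permission`."""
    for m, role in effective_bindings(resource):
        if m == member and permission in ROLE_PERMS.get(role, set()):
            return "Grant"
    return "Deny"

print(decision("[email protected]", "compute.instances.create", "project_1"))   # Deny
print(decision("[email protected]", "compute.instances.create", "instance_b"))  # Grant
```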
Table 3 shows the ROLES assigned to each MEMBER per RESOURCE.
The NuSMV code for the properties specification of this example is available on GitHub [39], under Example 3.
SPEC AG ((MEMBER = "[email protected]") & (ROLE = ANY) & (PERMISSION = "compute.instances.create") & (RESOURCE = "project_1") -> AF decision = Grant)
This property will be evaluated to FALSE since \([email protected]\) has the PERMISSION \(compute.instances.create\) through the Compute Instance Admin ROLE, but not on \(project\_1\), which resides in a different branch of the RESOURCE hierarchy; ROLE assignments do not affect peer RESOURCES.
SPEC AG ((MEMBER = ANY) & (ROLE = "roles/compute.instanceAdmin") & (PERMISSION = ANY) & (RESOURCE = "instance_a") -> AF decision = Grant)
This property is FALSE since the Compute Instance Admin ROLE is assigned to \(instance\_b\).
\begin{table}
\begin{tabular}{|l|c|c|} \hline & \multicolumn{2}{c|}{Authorized role} \\ \cline{2-3} Resource & [email protected] & [email protected] \\ \hline \hline example.com & Compute Network Admin & No access \\ \hline \(project\_1\) & Compute Network Admin & No access \\ \hline \(project\_2\) & Compute Network Admin & Compute Instance Admin \\ \hline \(instance\_a\) & Compute Network Admin & No access \\ \hline \(instance\_b\) & Compute Network Admin & Compute Instance Admin \\ \hline \end{tabular}
\end{table}
Table 3: Example 3 - Authorized roles per member and resource
SPEC AG ((MEMBER = ANY) & (ROLE = ANY) & (PERMISSION = "compute.instances.create") & (RESOURCE = "project_1") -> AF decision = Grant)
Here, \([email protected]\) has access to \(project\_1\), but his ROLE (Compute Network Admin) does not contain that specific PERMISSION, while \([email protected]\) holds this PERMISSION through her Compute Instance Admin ROLE assigned on \(project\_2\), which lies in a different branch of the RESOURCE hierarchy. Hence, the property is FALSE since no MEMBER has the PERMISSION \(compute.instances.create\) on \(project\_1\).
In all three specifications, the result of the verification is \(RBAC.decision=Deny\) without a next state, since the NuSMV model checker could not find any system state in which the properties hold TRUE.
### Summary of Examples
The first example used Cloud Pub/Sub RESOURCES and presented a case of RESOURCE hierarchy between a project and a topic. We considered two different POLICIES for two MEMBERS, one on each RESOURCE. This example demonstrates how the \(TS\) operates, and how properties are specified to check whether the hierarchy was implemented correctly. The second example used Cloud Storage RESOURCES to demonstrate the enforcement of two different POLICIES for two MEMBERS on the same RESOURCE. One of the MEMBERS is a Google group account, which allowed us to investigate how the applied technique handles this type of MEMBER; Google groups are a convenient way to apply organization access control policies and a best practice for role distribution [33]. The third example used Compute Engine RESOURCES, which allowed us to evaluate the security policies in a more complex configuration where the resource structure contains more branches and nodes. Various properties in each example were checked to satisfy specific security requirements in compliance with Google's proposed best practices [33]. Overall, the applied technique successfully verified the properties in all three examples, hence offering administrators a tool to specify policies/properties and verify their correctness.
## 6 Conclusion
When defining policies in Cloud systems, it is imperative to understand the underlying access control model and supported policies to avoid configuration errors or even inconsistencies. Towards achieving this aim, we defined
\(RBAC_{GCP}\) to provide a better understanding of the RBAC model and policies supported by the Google Cloud IAM platform. The RBAC access control model of Cloud IAM has a few differences compared to the ANSI standard model; specifically, the former supports permission inheritance through RESOURCE hierarchies but not ROLE hierarchies. We applied model checking to formally verify the supported access control policies and demonstrated the technique's applicability through three examples described on the official Google Cloud IAM website. We anticipate this work will assist system administrators in ensuring the correctness of policy specification, checking violations against security requirements [6] and, even further, performing a security assessment of policies for compliance purposes [40].
## Acknowledgement
The authors would like to thank Dr Andrew Sogokon at Lancaster University for his feedback. This research is supported in part by the Security Lancaster VERIFI Mini-Project under grant number IRL1025.
|
2304.12449 | Feigenbaum scenario without parameters | Typically, the period-doubling bifurcations exhibited by nonlinear
dissipative systems are observed when varying systems' parameters. In contrast,
the period-doubling bifurcations considered in the current research are induced
by changing the initial conditions whereas parameter values are fixed. Thus,
the studied bifurcations can be classified as the period-doubling bifurcations
without parameters. Moreover, we show a cascade of the period-doubling
bifurcations without parameters resulting in transition to deterministic chaos.
The explored effects are demonstrated by means of numerical modelling on an
example of a modified Anishchenko-Astakhov self-oscillator where the ability to
exhibit bifurcations without parameters is associated with the properties of a
memristor. Finally, we compare the dynamics of the ideal-memristor-based
oscillator with the behaviour of a model taking into account the memristor
forgetting effect. | Ivan A. Korneev, Ibadulla R. Ramazanov, Andrei V. Slepnev, Tatiana E. Vadivasova, Vladimir V. Semenov | 2023-04-24T21:02:56Z | http://arxiv.org/abs/2304.12449v2 | # Feigenbaum scenario without parameters
###### Abstract
Typically, the period-doubling bifurcations exhibited by nonlinear dissipative systems are observed when varying system parameters. In contrast to the classical case, the period-doubling bifurcations illustrated in the current research are induced by changing the initial conditions when parameter values are fixed, so they can be classified as the period-doubling bifurcations without parameters. Moreover, we show a cascade of period-doubling bifurcations without parameters resulting in transition to deterministic chaos. The explored effects are demonstrated by means of numerical modelling on an example of a modified Anishchenko-Astakhov self-oscillator where the ability to exhibit bifurcations without parameters is associated with the properties of a memristor included into the model circuit. Finally, we compare the dynamics of the ideal-memristor-based oscillator with the behaviour of a model taking into account the memristor forgetting effect.
Memristor; Memristor-based oscillators; Line of equilibria; Bifurcation without parameter; Period-doubling bifurcation; Chaos pacs: 05.10.-a, 05.45.-a, 84.30.-r
**Bifurcations without parameters are characterized by a continuous dependence of the system dynamics on initial conditions at fixed parameter values. Such bifurcations are typical for oscillators with manifolds of non-isolated limit sets such as lines or surfaces of equilibria, attractive manifolds of non-isolated closed curves, etc. The oscillatory regimes associated with bifurcations without parameters are extremely sensitive to inaccuracies, internal dynamic noise, external perturbations and hence are mostly exhibited by idealized models and often difficult to experimentally implement. Nevertheless, one can reveal the manifestations of bifurcations without parameters in the non-stationary oscillations and transients of real systems. For this reason, understanding bifurcations without parameters and the presence of the manifolds of non-isolated limit sets has transformed from mathematical exotic to the fundamental property of dynamical systems. Nowadays, certain bifurcations without parameters are well-studied in the context of theory and experiments. In the current paper, we extend this list by taking into consideration a new bifurcation, a period-doubling bifurcation without parameters, and realize a cascade of these bifurcations as a route to chaos.**
## I Introduction
The manifold of dynamical systems exhibiting the effect of deterministic chaos is incredibly broad. Among them are famous examples of chaotic oscillators such as a model for atmospheric convection developed by Edward Lorenz more than fifty years ago [1], the Rossler oscillator proposed as a prototype equation to the Lorenz model [2], Chua's model [3; 4] and the Anishchenko-Astakhov self-oscillator [5; 6] describing electronic circuits, a model of a ring cavity containing a nonlinear dielectric medium proposed by Kensuke Ikeda [7; 8], and the Mackey-Glass equations [9] used to model the variation in the relative quantity of mature cells in the blood. Nowadays, one can find a huge number of models developed for chaotic processes in neuroscience [10; 11; 12], chemistry and biochemistry [13; 14], ecology [15; 16; 17], population dynamics [18; 19], optics [20; 21; 22; 23], electronics [24; 25; 26; 27; 28], plasma physics [29; 30], geology [31], economics [32; 33; 34; 35], to name only a few.
Besides the well-studied exponential instability realized in deterministic dissipative nonlinear systems through major routes demonstrating universality properties (the period-doubling cascade route, the crisis and the intermittency route, and the route to chaos through quasiperiodicity destruction), chaotic behaviour can be exhibited in more complicated forms. In particular, delay-induced transitions to chaos depend on the particular form of the model equations and occur in various forms. This is due to the fact that oscillators with time delay are in fact infinite-dimensional dynamical systems [36; 37; 28]. In addition, one can distinguish noise-induced chaos [38; 39], the dynamics of chaotic Hamiltonian systems [40; 41] and systems characterised by the presence of hidden chaotic attractors in the phase space [42; 43; 44], as well as spatio-temporal chaos accompanied by pattern formation processes [17; 26; 45].
A new distinguishable example of complex chaotic behaviour characterised by unique properties and intrinsic peculiarities of bifurcation mechanisms is proposed in the current paper. The studied transition to chaos is marked by a simultaneous and continuous dependence of the oscillatory dynamics both on parameter values and on initial conditions, even though it is observed in a nonlinear dissipative system. This dependence is revealed in a modified model of the Anishchenko-Astakhov self-oscillator and results from the properties of a memristor included into the model circuit. Such behaviour is typical for systems with manifolds of equilibria and in particular for memristor-based oscillators with a line of equilibria. The significant feature of such systems is the occurrence of so-called bifurcations without parameters [46; 47; 48; 49; 50; 51], i.e., bifurcations observed at fixed parameters and varying initial conditions. A variety of bifurcations of steady states without parameters in memristor-based oscillators with lines of equilibria is represented by the transcritical bifurcation [52; 53] as well as the pitchfork and the saddle-node bifurcations [53]. Bifurcation mechanisms of the periodic solution appearance in such systems, associated with the oscillation excitation through the Andronov-Hopf bifurcation, have been explored numerically and analytically for different kinds of nonlinearity, both for supercritical [54; 55; 56; 57; 58] and subcritical [59] scenarios. It is important to note that the hard oscillation excitation through the subcritical Andronov-Hopf scenario is accompanied by a bifurcation without parameters analogous to the saddle-node bifurcation of limit cycles observed in systems with a finite number of isolated limit sets [59].
In the present study, we complement the manifold of bifurcations without parameters by a period-doubling bifurcation without parameters. Using the methods of numerical simulation, we demonstrate that a cascade of such bifurcations caused by continuously varying the initial conditions results in chaotic dynamics and represents a particular kind of the Feigenbaum scenario. In addition, we analyse how the memristor properties can affect the observed phenomena.
## II Model and methods
### Memristor
The idea proposed by Leon Chua in 1971 implies a linear relationship between the transferred electrical charge, \(q(t)\), and the magnetic flux linkage, \(\varphi(t)\) in a two-terminal element called a memristor [60]. Mathematically, it is written as \(dq=Wd\varphi\), whence it follows that \(W=W(\varphi)=\dfrac{dq}{d\varphi}\). By this way, using the relationships \(d\varphi=V_{\mathrm{m}}dt\) and \(dq=I_{\mathrm{m}}dt\) (\(V_{\mathrm{m}}\) is the voltage across the memristor, \(I_{\mathrm{m}}\) is the current passing through the memristor), the memristor current-voltage characteristic can be derived: \(I_{\mathrm{m}}=W(\varphi)V_{\mathrm{m}}\). That means the quantity \(W\) plays a role of the flux-controlled conductance (memductance) and depends on the entire past history of \(V_{\mathrm{m}}(t)\):
\[W(\varphi)=\dfrac{dq}{d\varphi}=q^{\prime}\left(\int\limits_{-\infty}^{t}V_{ \mathrm{m}}(t)dt\right). \tag{1}\]
In the current paper, we exclude from consideration the issues concerning the physical realizability of the postulated relationship \(dq=Wd\varphi\) and use the term 'flux' (or 'flux linkage') only for denoting the memristor state variable being proportional to the integral \(\int\limits_{-\infty}^{t}V_{\mathrm{m}}(t)dt\). Thus, the memristor is considered as a resistive element whose conductance is dictated by a state variable which is not necessarily associated with magnetic phenomena. This approach reflects the conception of a 'memristive system', also introduced by Leon Chua [61], which relies on mathematical definitions that do not concern the physical sense of the dynamical variables and their functional dependence. It allows one to group a broad variety of elements of different nature identified by a continuous functional dependence of characteristics on previous states. For instance, memristors can be implemented as oxide-based [62; 63; 64; 65; 66; 67; 68; 69; 70] and polymer-based devices [71; 72; 73; 74], and spintronic systems [75; 76; 77; 78]. Moreover, memristive one-ports can be implemented as electronic circuits with variable characteristics and nonlinearities [79; 80; 81].
Chua's memristor is one of the simplest models used for the description of memristive properties and implies the presence of a piecewise-linear dependence \(q(\varphi)\). For the flux-controlled memristor, the relationship takes the following form:
\[q(\varphi)=\begin{cases}(a-b)\varphi_{*}+b\varphi,&\varphi\geq\varphi_{*},\\ a\varphi,&|\varphi|<\varphi_{*},\\ -(a-b)\varphi_{*}+b\varphi,&\varphi\leq-\varphi_{*},\end{cases} \tag{2}\]
where \(\varphi\) is the memristor state variable. Then the memristor conductance \(W(\varphi)\) is derived as
\[W(\varphi)=\begin{cases}a,&|\varphi|<\varphi_{*},\\ b,&|\varphi|\geq\varphi_{*}.\end{cases} \tag{3}\]
One can approximate piecewise-smooth nonlinearity (3) by the hyperbolic tangent function:
\[W(\varphi)=\dfrac{b-a}{2}\tanh\left(k(\varphi^{2}-\varphi_{*})\right)+\dfrac{ b+a}{2}, \tag{4}\]
where a parameter \(k\) is responsible for the sharpness of the transitions between two memristor's states. As shown in Ref. [59], changing the piecewise-smooth memristor conductance function (3) to the smooth one (4) does not qualitatively modify the memristor properties. In particular, one observes the classical loop in the current-voltage characteristic of memristor (4) driven by an external periodic influence (see Fig. 2 in Ref. [59]). The
memristor model including tanh-nonlinearity is not the only smooth memristor model. There is a set of one-dimensional and multi-dimensional smooth models describing various memristor properties [82; 83; 84; 85; 86; 87; 88].
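As a quick numerical illustration of Eq. (4), one may drive the flux-controlled memristor with a sinusoidal voltage, integrate \(d\varphi/dt=V_{\rm m}\) and evaluate \(I_{\rm m}=W(\varphi)V_{\rm m}\); plotting \(I_{\rm m}\) versus \(V_{\rm m}\) then traces the pinched hysteresis loop mentioned above. The sketch below is only illustrative: the parameter values and the drive are placeholders, not values taken from the cited works.

```python
import numpy as np

def W(phi, a=0.02, b=1.2, k=5.0, phi_star=1.0):
    """Smooth memductance of Eq. (4): interpolates between the two states a and b."""
    return 0.5 * (b - a) * np.tanh(k * (phi**2 - phi_star)) + 0.5 * (b + a)

# sinusoidal drive (illustrative amplitude and frequency)
t = np.linspace(0.0, 4.0 * np.pi, 20000)
dt = t[1] - t[0]
Vm = 2.0 * np.sin(t)

# ideal memristor: the state variable is the time integral of the voltage
phi = np.cumsum(Vm) * dt
Im = W(phi) * Vm          # current-voltage characteristic I_m = W(phi) * V_m

# plotting Im against Vm (e.g. with matplotlib) traces the pinched hysteresis loop
```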
Real memristors can be characterised by a finite correlation between the present and previous states. In such a case, memristors 'forget' the state history over time, i.e. the impact of the past states weakens with increase of the time distance between present and past states. Moreover, for sufficiently long time distances one can assume that a memristive system has forgotten the past states. Such phenomena in metal-oxide-based memristors are associated with the diffusion of charged particles [86; 89; 90] (however, the 'forgetting' can happen very slowly). One of the simplest forms of the memristor state equation that incorporates the forgetting effect is the following:
\[\frac{ds}{dt}=g(x,s)=x-\delta s, \tag{5}\]
where \(s\) is the memristor state variable, \(x\) is an input signal, and the parameter \(\delta\) characterizes the strength of the forgetting effect.
### Model under study
The circuit in Fig. 1 (a) represents a negative impedance converter where \(R_{1}=R_{2}\) and the third resistor is replaced by a memristor. Then the resulting circuit conductance is \(-W(\varphi)\). This block is introduced into the circuit in Fig. 1 (b), where a parallel LCR-circuit is forced by the feedback signal \(V_{\rm f}\). The four-terminal network \({\rm A_{z}}\) is responsible for inertial processing of the input voltage signal \(V\) such that its output voltage \(V_{z}\) obeys \(\frac{dV_{z}}{dt}=f(V_{z},V)\). The four-terminal network \({\rm A_{f}}\) produces the feedback output signal \(V_{\rm f}=\beta VV_{z}+V\). The system presented in Fig. 1 (b) is described by the following dynamical variables: \(V\) is the voltage across the capacitor \(C\), \(I\) is the current through the inductor \(L\) and \(\varphi\) is the magnetic flux linkage controlling the memristor. Using Kirchhoff's laws, one obtains differential equations for the considered system evolving in physical time \(t^{\prime}\):
\[\left\{\begin{array}{l}C\frac{dV}{dt^{\prime}}+I-W(\varphi)V+\frac{1}{R_{\rm f}}(V_{\rm f}-V)=0,\\ L\frac{dI}{dt^{\prime}}=V,\\ \frac{dV_{z}}{dt^{\prime}}=f(V_{z},V),\\ \frac{d\varphi}{dt^{\prime}}=V-\alpha\varphi,\\ V_{\rm f}=\beta VV_{z}+V.\end{array}\right. \tag{6}\]
Using the substitution \(t^{\prime}=t\sqrt{LC}\), \(V=V_{0}\sqrt{L/C}x\), \(I=-I_{0}y\), \(V_{z}=V_{0}\frac{R_{\rm f}}{\alpha}\sqrt{C/L}z\), \(\varphi=V_{0}t_{0}Ls\) with \(I_{0}=1\) [A], \(V_{0}=1\) [V] and \(t_{0}=1\) [s] one obtains the equations in the dimensionless form:
\[\left\{\begin{array}{l}\frac{dx}{dt}=m(s)x+y-zx,\\ \frac{dy}{dt}=-x,\\ \frac{dz}{dt}=gF(z,x),\\ \frac{ds}{dt}=x-\delta s,\end{array}\right. \tag{7}\]
where \(m(s)=W(\varphi)\sqrt{\frac{L}{C}}\), \(g=\frac{\alpha L}{R_{\rm f}}\), a function \(F(z,x)\) is the dimensionless analog of the dependence \(f(V_{z},V)\), \(\delta=\alpha\sqrt{LC}\). In case \(m(s)=\) const, the first three equations of model (7) represent the Anishchenko-Astakhov self-oscillator [6]. To obtain the chaotic dynamics exhibited by the Anishchenko-Astakhov self-oscillator, one can involve the function \(F(z,x)\) in the form \(F(z,x)=-z+I(x)x^{2}\), where \(I(x)=0\) for negative values of \(x\) and \(I(x)=1\) for positive ones. Since the parameter \(m\) is proportional to \(W(\varphi)\) taken in form (4), the function \(m(s)\) is considered below as \(m(s)=\frac{m_{2}-m_{1}}{2}\tanh\left(k(s^{2}-1)\right)+\frac{m_{2}+m_{1}}{2}\). Finally, the explored chaotic system takes the following
Figure 1: (a) Memristive operation-amplifier-based negative impedance converter; (b) Schematic circuit diagram of the studied model (Eqs.(6)).
form:
\[\left\{\begin{array}{l}\frac{dx}{dt}=m(s)x+y-zx,\\ \frac{dy}{dt}=-x,\\ \frac{dz}{dt}=-g\left(z-I(x)x^{2}\right),\\ \frac{ds}{dt}=x-\delta s,\\ m(s)=\frac{m_{2}-m_{1}}{2}\tanh\left(k(s^{2}-1)\right)+\frac{m_{2}+m_{1}}{2}, \\ I(x)=\begin{cases}0,&x<0,\\ 1,&x\geq 0.\end{cases}\end{array}\right. \tag{8}\]
Model (8) is explored numerically: simulations are carried out by integrating the system under study with the fourth-order Runge-Kutta method with the time step \(\Delta t=0.001\), starting from varying initial conditions.
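A minimal reproduction of this integration scheme (written independently of the authors' code, with \(s_{0}\) chosen arbitrarily within the studied range) could read as follows; it advances Eqs. (8) with a fixed-step fourth-order Runge-Kutta method and also averages the analytic divergence \(m(s)-z-g-\delta\) of the vector field along the trajectory, a quantity used in the next section to confirm that the system is dissipative.

```python
import numpy as np

# Parameter values of Sec. III (delta = 0: ideal memristor without forgetting)
M1, M2, G, K, DELTA = 0.02, 1.2, 0.25, 5.0, 0.0

def m_of_s(s):
    """Memductance-like nonlinearity m(s) of Eqs. (8)."""
    return 0.5 * (M2 - M1) * np.tanh(K * (s**2 - 1.0)) + 0.5 * (M2 + M1)

def rhs(state):
    """Right-hand side of Eqs. (8)."""
    x, y, z, s = state
    Ix = 1.0 if x >= 0.0 else 0.0
    return np.array([
        m_of_s(s) * x + y - z * x,   # dx/dt
        -x,                          # dy/dt
        -G * (z - Ix * x**2),        # dz/dt
        x - DELTA * s,               # ds/dt
    ])

def rk4_trajectory(s0, x0=0.01, y0=0.01, z0=0.01, dt=1e-3, n_steps=300_000):
    """Fixed-step RK4 integration of Eqs. (8) from the point (x0, y0, z0, s0)."""
    state = np.array([x0, y0, z0, s0])
    traj = np.empty((n_steps + 1, 4))
    traj[0] = state
    for i in range(n_steps):
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * dt * k1)
        k3 = rhs(state + 0.5 * dt * k2)
        k4 = rhs(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        traj[i + 1] = state
    return traj

traj = rk4_trajectory(s0=3.2)        # vary s0 at fixed parameters
x, y, z, s = traj.T

# analytic divergence of the vector field of Eqs. (8), averaged along the trajectory
mean_div = np.mean(m_of_s(s) - z - G - DELTA)
print("mean divergence:", mean_div)  # expected to be negative (cf. Fig. 2 (d))
```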
## III System with idealized memristor without forgetting
Consider Eqs. (8) for neglected memristor forgetting effect, \(\delta=0\). In such a case, the system has a continuum of equilibrium points with the coordinates \(x_{*}=y_{*}=z_{*}=0\), \(s\in(-\infty;+\infty)\) forming a line of equilibria in the four-dimensional phase space. The period-doubling bifurcations without parameters considered in the current research do not involve the steady states. For this reason, detailed analysis of the steady states is excluded from the consideration. Further study of Eqs. (8) is carried out at fixed parameters: \(m_{1}=0.02\), \(m_{2}=1.2\), \(g=0.25\), \(k=5\). To visualize bifurcations without parameters, we fix the initial conditions at \(t=0\) for the first three dynamical variables, \(x_{0}=y_{0}=z_{0}=0.01\), and vary the initial state \(s_{0}=s(t=0)\). Then increasing the initial condition for \(s_{0}\) in the range \(s_{0}\in[2.5:6.0]\) gives rise to the transformations illustrated in Fig. 2 (a1)-(a5) as the phase portraits on the plane (\(x\),\(y\)). The observed bifurcation transitions caused by varying \(s_{0}\) are similar to the period-doubling bifurcations exhibited by the Anishchenko-Astakhov self-oscillator written in the classical form when varying the system parameters [6]. A cascade of the period-doubling bifurcations in model (8) finally results in the transition to chaos (Fig. 2 (a6)).
The observation of the phase portraits is a useful approach for studying dynamical systems, but a rigorous study of the bifurcations requires more detailed analysis. To prove that changing \(s_{0}\) at fixed parameter values induces the period-doubling bifurcations, we take into consideration the evolution of the dynamical variable \(x\) in the Poincare section (Fig. 2 (b)) and a spectrum of the Lyapunov exponents (Fig. 2 (c)) when varying \(s_{0}\). The Poincare section is chosen such that it crosses the phase trajectories at \(y=0\) with negative slope. To calculate the Lyapunov spectrum, we use the Benettin algorithm [91; 92]. Panels (b,c) in Fig. 2 are typical illustrations of the route to chaos through a cascade of period-doubling bifurcations. However, there are certain intrinsic peculiarities.
Figure 2: Evolution of the dynamics exhibited by model (8) when varying initial condition \(s_{0}\) and keeping the parameter values and other initial conditions to be fixed: \(m_{1}=0.02\), \(m_{2}=1.2\), \(g=0.25\), \(k=5\), \(x_{0}=y_{0}=z_{0}=0.01\). Panels (a1)-(a6) illustrate the phase portraits on the plane (\(x\), \(y\)), while panels (b) and (c) demonstrate the transformations of the Poincare section and the Lyapunov spectrum.
In particular, two of the four Lyapunov exponents in Fig. 2 (c) are equal to zero at any \(s_{0}\) (one of them is marked by the yellow dashed line in Fig. 2 (c)). As shown in Fig. 2 (b,c), the first three period-doubling bifurcations without parameters occur at \(s_{01}=3.081\), \(s_{02}=5.006\), \(s_{03}=5.415\). Considering \(s_{0}\) as a system parameter, one can calculate the corresponding Feigenbaum constant introduced as \(\delta_{\rm F}=\frac{s_{02}-s_{01}}{s_{03}-s_{02}}=4.701\). This value is close to the universal Feigenbaum constant.
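The Poincare section used here (crossings of \(y=0\) with negative slope, i.e. with \(x>0\) since \(\dot{y}=-x\)) can be extracted from a stored trajectory by linear interpolation. A possible helper, assuming a trajectory array of the kind produced by the integration sketch above, is given below; it is only a sketch, not the authors' implementation.

```python
import numpy as np

def poincare_x(traj, dt, t_transient=100.0):
    """x-values at downward crossings of y = 0 (x > 0), after discarding transients."""
    i0 = int(t_transient / dt)
    x, y = traj[i0:, 0], traj[i0:, 1]
    hits = []
    for i in range(len(y) - 1):
        if y[i] > 0.0 >= y[i + 1] and x[i] > 0.0:   # crossing of y = 0 with negative slope
            w = y[i] / (y[i] - y[i + 1])            # linear interpolation weight
            hits.append((1.0 - w) * x[i] + w * x[i + 1])
    return np.array(hits)

# a period-2^n orbit yields 2^n distinct section values; chaos yields a scattered set
# section = poincare_x(traj, dt=1e-3)
```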
It is important to note that the limit sets in Fig. 2 (a1-a6) are not limit cycles and a chaotic attractor in themselves. They represent parts of a continuous set of non-isolated closed (Fig. 2 (a1-a5)) and non-closed (Fig. 2 (a6)) curves belonging to the same attractor (the continuous dependence on the initial conditions depicted in Fig. 2 results from this fact). In contrast to such complex attractors considered in 3D phase space [56; 57; 58; 59], it is difficult to visualise the attractor existing in a four-dimensional phase space and including a manifold of non-isolated chaotic trajectories. Nevertheless, one can prove that system (8) is a nonlinear dissipative dynamical system and possesses an attractor in its phase space by using the dependence of the mean divergence of the phase velocity vector along the trajectory on the initial value \(s_{0}\) for the fixed parameters (see Fig. 2 (d)). The calculation results for the mean divergence \(<{\rm div}F(x,y,z,s)>=\left\langle\frac{d\dot{x}}{dx}+\frac{d\dot{y}}{dy}+ \frac{d\dot{z}}{dz}+\frac{d\dot{s}}{ds}\right\rangle\), where the brackets mean the averaging along the trajectory, indicate the existence of the attractor: as follows from Fig. 2 (d), for the whole set of trajectories the divergence is negative.
## IV Impact of the memristor forgetting effect
When the parameter \(\delta\) of system (8) is positive, the continuous dependence of the bifurcation phenomena on the initial conditions disappears and the system dynamics is determined solely by the values of the parameters \(m_{1}\) and \(g\), similarly to the classical Anishchenko-Astakhov self-oscillator without a memristor. However, the dependence on initial conditions is reflected in transient processes. As an example, starting from the initial conditions \(x_{0}=y_{0}=z_{0}=0.01\), \(s_{0}=5.65\) (corresponding to the chaotic dynamics at \(\delta=0\)), the phase trajectory traces a transition from a chaotic-like temporary regime (Fig. 3 (a)) to temporary periodic oscillations of different periods (Fig. 3 (b,c)). Then the trajectory finally reaches the system attractor whose properties are dictated by \(m_{1}\) (for \(m_{1}=0.02\) the attractor is the limit cycle corresponding to period-one self-oscillations, see Fig. 3 (d)).
Analysing Lyapunov exponents in Fig. 4 (a), one can conclude that varying the parameter \(m_{1}\) provides for the observation of transitions between chaotic and regular dynamics. In addition, it has been found that system (8) exhibits the quasi-periodic dynamics in certain ranges of the parameter \(m_{1}\), where two of four Lyapunov exponents are zero (Fig. 4 (b)). In such a case, the system attractor is a torus visualised in Fig. 4 (c) in the reduced phase space (\(x\), \(y\), \(z\)).
## V Conclusions
In this study, we demonstrate a new bifurcation without parameters, a period-doubling bifurcation. The occurrence of the studied bifurcation is associated with the properties of the ideal memristor, where the instantaneous memristor state depends on the entire past history of functioning. Moreover, a cascade of such bifurcations caused by continuously varying the initial conditions represents a route to chaos, the Feigenbaum scenario without parameters. It should be noted that these phenomena, characterised by the continuous dependence of the oscillation characteristics on the initial conditions, are observed in a nonlinear dissipative dynamical system.
If the memristor model takes the forgetting effect into consideration, then the dependence on the initial conditions is eliminated and the studied model exhibits transitions to chaos caused by varying the system parameters. In this sense, the studied system becomes qualitatively identical to the Anishchenko-Astakhov self-oscillator. Nevertheless, a principal difference has been surprisingly revealed. The classical Anishchenko-Astakhov self-oscillator does not demonstrate stable quasi-periodic oscillatory regimes. In contrast, the modified
Figure 3: Dynamics of model (8) with memristor forgetting effect: panels (a)-(d) illustrate fragments of the same phase trajectory on the plane (\(x\),\(y\)) corresponding to evolution in different time ranges. Initial conditions are \(x_{0}=y_{0}=z_{0}=0.01\), \(s_{0}=5.65\). Parameters are: \(m_{1}=0.02\), \(m_{2}=1.2\), \(g=0.25\), \(k=5\), \(\delta=0.001\).
memristor-based model taking the forgetting effect into consideration can exhibit quasi-periodic dynamics.
The Andronov-Hopf bifurcation, the saddle-node and the period-doubling bifurcations are basic bifurcations of limit cycles. Summarising the present results with the ones described in Refs. [56; 57; 58; 59], one can track how these bifurcations transform into bifurcations without parameters in memristor-based oscillators. Once a first understanding of these bifurcations is achieved, one can continue studying them in dynamical systems of any nature. That is the subject of our further investigations.
## Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
## Acknowledgements
V.S. acknowledges support by the Russian Science Foundation (project No. 22-72-00038).
|
2307.13207 | Comparative Study of alpha-alpha interaction potentials constructed
using various phenomenological models | In this paper, we have made a comparative study of alpha-alpha scattering
using different phenomenological models like Morse, double Gaussian, double
Hulthen, Malfliet-Tjon and double exponential for the nuclear interaction and
atomic Hulthen as screened coulomb potential. The phase equations for S, D and
G channels have been numerically solved using 5th order Runge-Kutta Method to
compute scattering phase shifts for elastic scattering region consisting of
energies up to 25.5 MeV. The model parameters in each of the chosen potentials
were varied in an iterative fashion to minimize the mean absolute percentage
error between simulated and expected scattering phase shifts. A comparative
analysis revealed that, all the phenomenological models result in exactly
similar inverse potentials with closely matching mean absolute percentage error
values for S, D and G state. One can conclude that any mathematical function
that can capture the basic features of two body interaction would always guide
correctly in construction of inverse potentials. | Ayushi Awasthi, O. S. K. S. Sastri | 2023-07-25T02:19:49Z | http://arxiv.org/abs/2307.13207v1 | Comparative Study of \(\alpha\) - \(\alpha\) interaction potentials constructed using various phenomenological models
###### Abstract
In this paper, we have made a comparative study of \(\alpha-\alpha\) scattering using different phenomenological models like Morse, double gaussian, double Hulthen, Malfliet-Tjon and double exponential for the nuclear interaction and atomic Hulthen as screened coulomb potential. The phase equations for S, D and G channels have been numerically solved using \(5^{th}\) order Runge-Kutta Method to compute scattering phase shifts (SPS) for elastic scattering region consisting of energies up to 25.5 MeV. The model parameters in each of the chosen potentials were varied in an iterative fashion to minimize the mean absolute percentage error (MAPE) between simulated and expected SPS. A comparative analysis revealed that, all the phenomenological models result in exactly similar inverse potentials with closely matching MAPE values for S, D and G state. One can conclude that any mathematical function that can capture the basic features of two body interaction would always guide correctly in construction of inverse potentials.
**Keywords:**\(\alpha\) -\(\alpha\) Scattering, Phenomenological Models, Screened Atomic Hulthen, Scattering Phase Shifts, Resonance Energies.
## 1 Introduction
Scattering studies of \(\alpha\) particles with \({}^{4}_{2}He\) nuclei are of importance for understanding the nature of the nuclear force and for gaining insights into few-body [1] and cluster models [2, 3]. Rutherford & Chadwick were the first to study \(\alpha\)-\(\alpha\) scattering in 1927 [4] and, since then, numerous experiments have been performed at various energy levels to deepen our understanding. In 1956, Heydenburg and Temmer presented experimental scattering phase shifts (SPS) for the low-energy range of 0.6 MeV to 3 MeV [5]. Tombrello and Senhouse, in 1963, provided experimental SPS covering the energy range of 3.84 MeV to 11.88 MeV [6]. Then, SPS for energies between 12.3 MeV and 22.9 MeV were given by Nilson et al. in 1958 [7]. Subsequently, Chien and Brown, in 1974, contributed experimental SPS for the energy range of 18 MeV to 29.50 MeV [8].
The SPS data obtained from these experiments were compiled by Afzal et al. [9], and this compilation is generally used by theoretical physicists for studying \(\alpha-\alpha\) scattering. However, it is worth noting that their compilation included data only up until 1969. Recognizing the significance of incorporating the Chien and Brown data from 1974, Anil et al. took the initiative to update the database for \(\alpha\)-\(\alpha\) scattering in 2022 [10].
In the realm of theoretical physics, numerous phenomenological models have emerged and evolved over the past six decades. Notably, in 1964, Darriulat et al. [11] embarked upon a significant endeavor by employing the Woods-Saxon potential within an optical model. Their
objective was to extract SPS for various angular momentum states, specifically \(\ell\) = 0, 2, 4, 6 and 8, spanning an energy range between 53 MeV and 120 MeV.
Almost at the same time, Ali and Bodmer ventured into the study of \(\alpha\)-\(\alpha\) scattering [12]. In their investigation, they employed a Double Gaussian potential with four adjustable parameters. Their approach involved an initial determination of the attractive component of the nuclear force by fitting the available scattering data in the \(\ell\) = 4 channel. Then constraining the shape of potential for large distances, they obtained the repulsive nature exhibited in the \(\ell\) = 0 and \(\ell\) = 2 channels, at short distances.
In 1977, Buck et al. [13] put forth a compelling argument, emphasizing that a local potential is sufficient to model the interaction between \(\alpha\) particles, based on a meticulous examination of two notable models: the Resonating Group Method (RGM) [14] and the Orthogonality Condition Model (OCM) [15]. They employed a single Gaussian function characterized by two parameters, which were obtained by selecting the experimental energy of the pseudo-bound scattering state, E = 0.0198 MeV, and the phase shift for \(\ell\) = 2 at 3 MeV. They were able to provide a reasonable explanation of the observed SPS for \(\ell\) = 0, 2, 4 and 6 for energy values up to \(E_{\ell ab}\) = 80 MeV.
In 2003, M. Odsuren et al. combined two approaches, the Complex Scaling Method (CSM) and the Orthogonality Condition Model (OCM), into the so-called CSOCM [16] to compute resonance states in two-body systems, including the influence of the Pauli exclusion principle between clusters. They applied two different potentials, Gaussian and harmonic oscillator, and obtained wave functions for the \(\alpha\)-\(\alpha\) system to calculate resonance energies with their decay widths. During their calculations, they considered SPS for partial waves \(\ell\) = 0, 2, 4, 6 and 8, with energies up to 50 MeV.
Recently, Anil et al. [17] revisited the local Gaussian potential with an innovative algorithm that is a combination of the Matrix Method [18] and the Variational Monte Carlo (VMC) technique [19]. In this approach, they considered the bound state energies as given in Buck et al. [13] to optimize the model parameters and then, utilising the determined interaction potential in the phase function method (PFM), obtained SPS for \(\ell\) = 0, 2 & 4 channels for energies up to 25.5 MeV. They then proposed the Morse potential as the nuclear interaction and directly utilized all available experimental SPS for optimising the model parameters. This is akin to constructing the model from the data, as in the machine-learning paradigm, which is fundamentally the approach of inverse scattering theory [20]. All the above procedures [10, 12, 13, 16, 17] utilised an _erf()_-function-based Coulomb interaction.
Alternatively, Laha et al.[21, 22, 23] have utilized PFM to calculate SPS and obtain interaction potentials. They employed the double Hulthen potential to describe the nuclear interaction, while adopting the atomic Hulthen ansatz to account for the screened Coulomb interaction [24]. Their noteworthy study focused on investigating \(\alpha\)-\(\alpha\) scattering up to an energy range of \(E_{\ell ab}\) = 100 MeV. The motivation behind this study was based on the following observations:
Firstly, we have observed that for Morse + _erf()_ ansatz of Anil et.al.[10], the depth of the potential for \(\ell\) = 2 is not shallower than that of \(\ell\) = 0. Therefore, we became intrigued to consider the performance of the atomic Hulthen screening potential as a replacement for _erf()_. This, in turn, led us to include a similar study for the double Gaussian potential [12].
Secondly, we observed that there were three studies[20, 21, 22] on \(\alpha\)-\(\alpha\) scattering using the Double Hulthen potential as the nuclear interaction, with different screening radii. However, with those potential parameters, the height of the Coulomb barrier for \(\ell\) = 2 and 4 was not observed to be near their corresponding resonance energies [25]. Therefore, we have opted to re-optimize the model parameters using our innovative algorithm within the elastic region, specifically up to 25.5 MeV.
Thirdly, the Malfliet-Tjon (MT) potential [26], which is a combination of attractive and repulsive forms of the Yukawa potential [27], has been able to reasonably explain the SPS for
n-p, n-d, and p-d systems [28, 29]. Therefore, we have incorporated this interaction potential for the first time in order to study \(\alpha\)-\(\alpha\) scattering.
Finally, an observation that Morse potential is a composite of exponential functions has led us to incorporate the double exponential function into our analysis for the purpose of comparison.
So, in this paper, our aim is to perform a comprehensive comparative analysis of various phenomenological potential models as local potentials for the nuclear interaction between two alpha particles, namely Morse, double Gaussian, double Hulthen, Malfliet-Tjon (MT), and double exponential. Our study focuses on investigating the elastic scattering of alpha particles (\(\alpha\)-\(\alpha\)) in the S, D and G channels, utilizing the atomic Hulthen potential as the screened Coulomb potential for energies up to 25.5 MeV.
## 2 Methodology
The interaction between two alpha particles is written as a combination of nuclear and Coulomb parts as
\[V(r)=V_{N}(r)+V_{C} \tag{1}\]
The nuclear part is modeled by various phenomenological potentials as follows:
* Morse Potential [30] \[V_{N}(r)=D_{0}\left(e^{-2(r-r_{m})/a_{m}}-2e^{-(r-r_{m})/a_{m}}\right)\] (2) where \(D_{0}\), \(r_{m}\) and \(a_{m}\) represent the depth of potential (in \(fm^{-2}\)), equilibrium distance (in \(fm\)) and shape of potential (in \(fm\)) respectively. It is a three parameter potential.
* Double Gaussian Potential [12] \[V_{N}(r)=V_{r}e^{-\mu_{r}^{2}r^{2}}-V_{a}e^{-\mu_{a}^{2}r^{2}}\] (3) where \(V_{r}\) and \(V_{a}\) represents the strength of repulsive and attractive parts in \(fm^{-2}\), respectively, \(\mu_{r}\) and \(\mu_{a}\) are their corresponding inverse ranges in \(fm^{-1}\). It is a four parameter potential.
* Double Hulthen Potential [21] \[V_{N}(r)=-S_{\ell_{1}}\frac{e^{-\beta r}}{(e^{-\alpha r}-e^{-\beta r})}+S_{ \ell_{2}}\frac{e^{-(\beta+\alpha)r}}{(e^{-\alpha r}-e^{-\beta r})^{2}}\] (4) where \(S_{\ell_{1}}\), \(S_{\ell_{2}}\), \(\alpha\) and \(\beta\) are four parameters. The first two represent depth of potential (in \(fm^{-2}\)) and the rest two its range (in \(fm^{-1}\)) of potential.
* Malfliet-Tjon (MT) Potential [26] \[V_{N}(r)=\frac{V_{R}e^{-2\mu r}-V_{A}e^{-\mu r}}{r}\] (5) where \(V_{R}\) and \(V_{A}\) represent depths of repulsive and attractive part of the potential in \(fm^{-2}\) and \(\mu\) is inverse range parameter in \(fm^{-1}\).
* Double Exponential \[V_{N}(r)=Ae^{-\alpha_{1}r}-Be^{-\alpha_{2}r}\] (6) where \(A\) and \(B\) represent depths of repulsive and attractive part of the potential in \(fm^{-2}\) and \(\alpha_{1}\) and \(\alpha_{2}\) are inverse range parameters in \(fm^{-1}\).
To account for Coulomb interaction, we consider the atomic Hulthen potential [24] which is given as
\[V_{AH}(r)=V_{o}\frac{e^{-r/a}}{(1-e^{-r/a})} \tag{7}\]
where \(V_{o}\) is the strength of the potential and \(a\) is the screening radius. The two parameters \(V_{o}\) and \(a\) are related by [31]
\[V_{o}a=2K\eta\]
where K is the momentum in the lab frame and \(\eta\) is the Sommerfeld parameter defined as
\[\eta=\frac{\alpha}{\hbar v}\]
Here, \(v\) is the relative velocity of the reactants at large separation and \(\alpha=Z_{1}Z_{2}e^{2}\). So,
\[V_{o}a=\frac{Z_{1}Z_{2}e^{2}\mu}{\hbar^{2}}\]
For \(\alpha-\alpha\), \(Z_{1}=Z_{2}=2\), \(\mu=\frac{m_{\alpha}}{2}=1864.38525\)\(\frac{MeV}{c^{2}}\), \(e^{2}=1.44MeVfm\) and therefore \(V_{o}a=0.2758fm^{-1}\).
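The quoted number follows directly from the constants listed above; a short sanity check (not part of the original computation) is:

```python
# V_0 * a = Z1 * Z2 * e^2 * mu / hbar^2, using hbar^2/(2*mu) = 10.44217 MeV fm^2
Z1, Z2 = 2, 2
e2 = 1.44                    # MeV fm
h2_over_2mu = 10.44217       # MeV fm^2, so hbar^2/mu = 2 * 10.44217
V0_times_a = Z1 * Z2 * e2 / (2.0 * h2_over_2mu)
print(round(V0_times_a, 4))  # 0.2758 (fm^-1)
```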
### Phase Function Method
The time independent Schr\(\ddot{o}\)dinger equation (TISE) can be written as
\[\frac{d^{2}u_{\ell}(r)}{dr^{2}}+\bigg{[}k^{2}-\frac{\ell(\ell+1)}{r^{2}}-U(r) \bigg{]}u_{\ell}(r)=0 \tag{8}\]
where \(U(r)=V(r)/(\hbar^{2}/2\mu)\) & \(k_{c.m}=\sqrt{E_{c.m}/(\hbar^{2}/2\mu)}\) and \(E_{c.m}=0.5E_{\ell ab}\).
For \(\alpha-\alpha\) system, the value of \(\hbar^{2}/2\mu\) = 10.44217 MeVfm\({}^{2}\).
The Phase Function Method is one of the important tools in scattering studies for both local [32] and non-local interactions [33, 34]. The TISE in Eq. 8 can be transformed into a non-linear Riccati equation of first order [32, 36], which deals directly with the SPS information, given by:
\[\delta^{\prime}_{\ell}(k,r)=-\frac{U(r)}{k}\bigg{[}\cos(\delta_{\ell}(k,r)) \hat{j}_{\ell}(kr)-\sin(\delta_{\ell}(k,r))\hat{\eta}_{\ell}(kr)\bigg{]}^{2} \tag{9}\]
The Riccati-Hankel function of the first kind is given by \(\hat{h}_{\ell}(r)=-\hat{\eta}_{\ell}(r)+i\ \hat{j}_{\ell}(r)\), where \(\hat{j}_{\ell}(kr)\) is the Riccati-Bessel and \(\hat{\eta}_{\ell}(kr)\) the Riccati-Neumann function. By substituting the expressions of these two latter functions for different \(\ell\)-values, we obtain the respective phase equations as:
1. \(\ell=0\): \[\delta^{\prime}_{0}(k,r)=-\frac{U(r)}{k}\sin^{2}[\delta_{0}+\kappa]\] (10) where \(\kappa=kr\).
2. \(\ell\) = 2: \[\delta^{\prime}_{2}(k,r)=-\frac{U(r)}{k}\bigg{[}-\sin\left(\delta_{2}+\kappa \right)-\frac{3\cos\left(\delta_{2}+\kappa\right)}{\kappa}+\frac{3\sin\left( \delta_{2}+\kappa\right)}{\kappa^{2}}\bigg{]}^{2}\] (11)
3. \(\ell\) = 4 \[\delta^{\prime}_{4}(k,r)=-\frac{U(r)}{k}\bigg{[}\sin\left(\delta_{ 4}+\kappa\right)+\frac{10\cos\left(\delta_{4}+\kappa\right)}{\kappa}-\frac{45 \sin\left(\delta_{4}+\kappa\right)}{\kappa^{2}}\] (12) \[-\frac{105\cos\left(\delta_{4}+\kappa\right)}{\kappa^{3}}+\frac{ 105\sin\left(\delta_{4}+\kappa\right)}{\kappa^{4}}\bigg{]}^{2}\]
These equations are solved using 5th order Runge-Kutta methods by choosing the initial condition as \(\delta_{\ell}(0,k)=0\) and integrating to a large distance.
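A minimal sketch of how Eq. (10) can be integrated is given below. It uses a fixed-step fourth-order Runge-Kutta scheme (the paper employs a fifth-order one) together with a Morse nuclear term and the atomic Hulthen screening term with \(V_{o}a=0.2758\) fm\(^{-1}\). All parameter values in the sketch are illustrative placeholders rather than the optimised ones, and the Morse depth is assumed here to be in MeV and converted through \(\hbar^{2}/2\mu\).

```python
import numpy as np

H2_OVER_2MU = 10.44217          # MeV fm^2 for the alpha-alpha system

def U(r, D0=10.0, rm=3.4, am=1.6, a_screen=15.0):
    """Morse nuclear term + atomic Hulthen screening term, divided by hbar^2/2mu.
    All parameter values are illustrative placeholders."""
    morse = D0 * (np.exp(-2.0 * (r - rm) / am) - 2.0 * np.exp(-(r - rm) / am))  # MeV
    v0 = 0.2758 / a_screen                                    # fm^-2, from V0*a = 0.2758 fm^-1
    hulthen = v0 * np.exp(-r / a_screen) / (1.0 - np.exp(-r / a_screen))        # fm^-2
    return morse / H2_OVER_2MU + hulthen

def delta0(E_lab, r_max=40.0, dr=1e-3):
    """S-wave phase shift from Eq. (10), integrated out to r_max (fm) with RK4."""
    k = np.sqrt(0.5 * E_lab / H2_OVER_2MU)        # E_cm = E_lab / 2
    def f(r, d):
        return -U(r) / k * np.sin(d + k * r) ** 2
    r, d = dr, 0.0                                # start just off r = 0 where delta(0) = 0
    while r < r_max:
        k1 = f(r, d)
        k2 = f(r + 0.5 * dr, d + 0.5 * dr * k1)
        k3 = f(r + 0.5 * dr, d + 0.5 * dr * k2)
        k4 = f(r + dr, d + dr * k3)
        d += (dr / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        r += dr
    return np.degrees(d)

print(delta0(E_lab=10.0))   # S-wave phase shift in degrees at E_lab = 10 MeV
```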
## 3 Results and Discussion
The observed resonances in \(\alpha-\alpha\) scattering experiments, occurring at 0.09184 MeV, 3.03 MeV and 11.35 MeV [37] and corresponding to the \(\ell=0\), 2 and 4 channels respectively, provide an understanding of the \({}^{8}Be\) nuclear structure. These are named the S, D and G states. The extremely strong resonance due to the S-state arises from the repulsive Coulomb interaction, which introduces a barrier and thus creates a pseudo-bound state. Considering each of the potential models in the RK-5 algorithm for each of the \(\ell\)-channels, we have obtained the corresponding best model parameters by minimizing the mean absolute percentage error (MAPE), given by
\[MAPE=\frac{1}{N}\sum_{i=1}^{N}\Big{|}\frac{\delta_{i}^{expected}-\delta_{i}^{ simulated}}{\delta_{i}^{expected}}\Big{|}\times 100 \tag{13}\]
where \(\delta_{i}^{expected}\) and \(\delta_{i}^{simulated}\) are the expected and simulated scattering phase shifts respectively. This process of utilising all the available experimental SPS to determine the underlying interaction potential is akin to the procedure of inverse scattering theory. Thus, each of the phenomenological models in effect proposes a different mathematical function that guides the construction of the inverse potential for the various \(\ell\)-channels [37].
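The objective function of Eq. (13) is straightforward to code; a possible implementation (function and variable names are ours, not the authors') is:

```python
import numpy as np

def mape(delta_expected, delta_simulated):
    """Mean absolute percentage error of Eq. (13) between expected and simulated SPS."""
    expected = np.asarray(delta_expected, dtype=float)
    simulated = np.asarray(delta_simulated, dtype=float)
    return 100.0 * np.mean(np.abs((expected - simulated) / expected))

# e.g. mape(experimental_sps, [delta0(E) for E in lab_energies])
```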
Initially, we have treated the screening radius \(a\) in the atomic Hulthen potential as a free parameter and obtained the optimised parameters by integrating the phase equation to a large distance of about 40 fm. This is not the case in ref. [10], wherein the erf() function was cut off at about 6 fm to obtain the optimised parameters. The optimised parameters for \(\ell=0,2\), along with the respective MAPE values, are shown in Table 1. We do not show the optimised parameters for \(\ell=4\), as the number of experimental data points available for this channel is only 4, whereas the number of parameters in the case of double Hulthen, double Gaussian and double exponential is 5. This implies that the number of equations to be solved is less than the number of unknowns and the system is under-determined.
From Table 1, it is evident that the screening radius for \(\ell=0\) is greater than that for \(\ell=2\). Therefore, we conclude that as the angular momentum (\(\ell\)) increases, the value of 'a' should decrease. One can also observe that the MAPE converges to values between 1 and 2 % for \(\ell=0\) and between 2 and 4 % for \(\ell=2\).
\begin{table}
\begin{tabular}{c c c c c} \hline Mathematical Function & \(\ell\) & Optimized & Screening & MAPE \\ Model Parameters & & Parameters & radius (a) & \\ \hline Morse & 0 & (10.90, 3.31, 1.52) & 4.77 & 1.5 \\ (\(D_{0}\), \(r_{m}\), \(a_{m}\)) & 2 & (40.26, 2.02, 0.41) & 3.31 & 2.0 \\ \hline Double Gaussian & 0 & (28.81, 97.46, 0.23, 0.51) & 7.01 & 0.9 \\ (\(V_{a}\), \(V_{r}\), \(\mu_{a}\), \(\mu_{r}\)) & 2 & (193.58, 499.6, 0.58, 0.86) & 3.55 & 2.4 \\ \hline Double Hulthen & 0 & (58.48, 44.35, 0.99, 0.36) & 4.82 & 1.7 \\ (\(S_{\ell 1}\), \(S_{\ell 2}\), \(\beta\), \(\alpha\)) & 2 & (1623.35, 1494.35, 3.74, 2.07) & 4.81 & 3.7 \\ \hline MT & 0 & (1335.69, 443.49, 0.50) & 4.54 & 1.7 \\ (\(V_{R}\), \(V_{A}\), \(\mu\)) & 2 & (28771.21, 264.31, 1.54) & 4.39 & 3.3 \\ \hline Double Exponential & 0 & (78.65, 423.76, 1.22, 0.68) & 4.69 & 1.4 \\ (\(A\), \(B\), \(\alpha_{1}\), \(\alpha_{2}\))\({}^{i}\) & 2 & (1994.15, 375.95, 3.07, 1.99) & 4.13 & 3.1 \\ \hline \end{tabular}
\end{table}
Table 1: Model parameters of different mathematical functions for \(\ell=0\), 2 and 4 with screening radius ‘a’ as free parameter.
The potential plots for these two channels, without and with the centrifugal term added, are shown in Fig. 1 (a) and (b) respectively. The inset of Fig. 1(a) shows the barrier height for \(\ell=0\) for the various model potentials, and it is seen that none of them is high enough for the S-state to be a pseudo-bound state. In Fig. 1(b), the inset shows the barrier heights for \(\ell=2\), all of which vary from 2.5 to 3.5 MeV. Even though the barrier heights are close to 3 MeV, as one would expect from the observed resonance of \(\ell=2\), the depths of the potential after adding the centrifugal term are not shallower than that of \(\ell=0\). All these observations made us realise that the optimised potentials are not physically realistic interactions. Hence, we have reoptimised the parameters to ensure that the following conditions are met:
1. The height of the Coulomb barrier for the S state is equal to or near the pseudo-bound state energy.
2. When the centrifugal term is added, the potential depth for the D state is lower than that for the S state.
3. The heights of the Coulomb barrier for the D and G states are near their observed resonance energies, respectively.
In the second iteration, we have chosen various values of the screening radius 'a' and examined its impact in elucidating \(\alpha-\alpha\) scattering. To achieve this, we started increasing the value of \(a\) and observed that the Coulomb barrier height kept increasing, and so did the MAPE values. The obtained potential depth, barrier height and corresponding MAPE values have been compiled for values of \(a\) from 10 to 25 fm in steps of 5 fm for the \(\ell=0\) S-state in Table 2. Overall, the trend is that while the depth of the potential decreases with increasing \(a\) (except for the double Gaussian), the barrier height increases towards the expected 0.1 MeV. Similarly, for the D-state, the \(a\) values were increased in steps of 1 fm from 4 fm onwards. It was found that up to 6 fm the potential depth remained higher than that of the S-state, and only at 8 to 9 fm did the depths become shallower, except for the double exponential function. The barrier height keeps decreasing with increasing \(a\), and the MAPE steadily increases as well. Finally, now that the screening parameter \(a\) is fixed for a particular optimisation run, we could obtain the parameters for \(\ell=4\) as well. It was observed that for higher \(\ell\) values the screening parameter reduces. Hence, the values of \(a\) were started at an even smaller value and fine-tuned by varying them in steps of only 0.5 fm,
Figure 1: Interaction Potential without and with centrifugal term \(\ell=0\) and 2.
this time from 3 to 4.5 fm. The barrier height keeps decreasing with increasing screening radius, as in the case of the D-state. On the other hand, the MAPE values tend to decrease for MT and double Hulthen, increase in the case of the double Gaussian, and reach a minimum in the cases of Morse and double exponential for some intermediate value of \(a\). While the double exponential gives the best MAPE of 0.1 at a = 4 fm, Morse has its best value of 0.5 at 3.5 fm. One can choose different sets of \(a\) values for each of the 5 model potentials, as given in Table 3, and compare the respective interaction potentials for the S, D and G states.
The interaction potentials, with and without the centrifugal term, are depicted in Fig. 2. From the inset of Fig. 2(a), it is evident that pseudo-bound states are obtained for all phenomenological models. Additionally, Fig. 2(b) reveals that the inclusion of the centrifugal term causes the depth of the potential to be lower for \(\ell\) = 2 and 4 compared to \(\ell\) = 0, for all models except the double exponential model. Based on all these comparative observations, one can
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline \multicolumn{4}{|c|}{\(\ell=0\)} \\ \hline MF/a & 10fm & 15fm & 20fm & 25fm \\ \hline Morse & [-10.85, 0.07, 1.6] & [-10.43, 0.09, 1.6] & [-10.37, 0.11, 1.6] & [-10.28, 0.11, 1.6] \\ Double Gaussian & [-11.32, 0.11, 0.8] & [-11.84, 0.14, 0.9] & [-12.07, 0.16, 0.9] & [-12.17, 0.17, 0.9] \\ Double Hulthen & [-11.23, 0.06, 1.8] & [-11.33, 0.09, 1.8] & [-11.52, 0.11, 2.1] & [-11.50, 0.12, 2.1] \\ MT & [-10.95, 0.06, 1.7] & [-10.23, 0.08, 1.7] & [-10.09, 0.09, 1.7] & [-9.95, 0.10, 1.6] \\ Double Exponential & [-10.73, 0.07, 1.4] & [-10.46, 0.09, 1.5] & [-10.40, 0.11, 1.5] & [-10.51, 0.13, 1.5] \\ \hline \multicolumn{4}{|c|}{\(\ell=2\)} \\ \hline MF/a & 6fm & 7fm & 8fm & 9fm \\ \hline Morse & [-10.82, 2.81, 3.8] & [-9.56, 2.69, 4.3] & [-8.82, 2.63, 4.5] & [-8.51, 2.56, 4.3] \\ Double Gaussian & [-12.98, 2.96, 4.2] & [-11.28, 2.87, 4.6] & [-10.23, 2.82, 4.9] & [-10.08, 2.78, 5.2] \\ Double Hulthen & [-11.77, 2.74, 3.8] & [-10.75, 2.63, 4.3] & [-10.04, 2.55, 4.6] & [-9.59, 2.49, 4.8] \\ MT & [-11.85, 2.85, 3.8] & [-10.56, 2.69, 4.2] & [-9.80, 2.61, 4.5] & [-9.34, 2.55, 4.7] \\ Double Exponential & [-14.29, 2.81, 3.7] & [-13.53, 2.68, 4.2] & [-11.77, 2.60,4.6] & [-11.57, 2.55, 4.7] \\ \hline \multicolumn{4}{|c|}{\(\ell=4\)} \\ \hline MF/a & 3fm & 3.5fm & 4fm & 5fm \\ \hline Morse & [-0.02, 10.98, 1.3] & [0.06, 10.80, 0.5] & [0.06, 10.67, 0.7] & [0.06, 10.44, 1.2] \\ Double Gaussian & [0.13, 9.79, 2.7] & [0.13, 9.61, 3.3] & [0.13, 9.53, 3.8] & [0.13, 9.32, 4.6] \\ Double Hulthen & [-2.67, 11.56, 3.7] & [-1.83, 11.37, 2.9] & [-1.11, 11.22, 2.3] & [0.13, 10.95, 1.2] \\ MT & [-3.13, 11.59, 3.4] & [-2.312, 11.40, 2.6] & [-1.61, 11.24, 2.0] & [-0.27, 10.95, 1.1] \\ Double Exponential & [0.03, 10.74, 0.7] & [0.03, 10.67, 0.4] & [0.03, 10.71, 0.1] & [0.03, 10.56, 1.1] \\ \hline \end{tabular}
\end{table}
Table 2: Potential depth, Barrier height and Mape at different screening radius (’a’) for \(\ell\) = 0, 2 and 4.
\begin{table}
\begin{tabular}{|l l l l l|} \hline Mathematical Function & Model Parameters & \(\ell=0\) & \(\ell\)=2 & \(\ell\)=4 \\ \hline Morse & (\(D_{b}\), \(r_{m}\), \(a_{m}\), \(\mathbf{a}\)) & (11.18, 3.42, 1.63, **15.0**) & (27.06, 1.89, 0.63, **7.0**) & (241.66, 0.37, 0.74, **3.5**) \\ & MAPE & 1.6 & 4.3 & 0.5 \\ \hline Double Gaussian & (\(V_{a}\), \(V_{r}\), \(\mu_{r}\), \(\mu_{r}\), \(\mathbf{a}\)) & (42.79, 98.36, 0.24, 0.44, **20.0**) & (68.93, 36102.08, 0.48, 1.78, **8.0**) & (128.68, 1.24, 0.49, 4.46, **0.5**) \\ & MAPE & 0.9 & 4.6 & 0.5 \\ \hline Double Hulthen & (\(S_{m}\), \(S_{m}\), \(S_{m}\), \(\beta\), \(\alpha\), \(\mathbf{a}\)) & (48.54, 35.67, 1.49, 0.90, **15.0**) & (1065.54, 963.88, 2.10, 0.54, **7.0**) & (48.17, 1.33, 5.36, 4.14, **5.0**) \\ & MAPE & 1.9 & 4.3 & 1.2 \\ \hline MT & (\(V_{b}\), \(V_{A}\), \(\mu\), \(\mathbf{a}\)) & (1080.63, 408.32, 0.43, **20.0**) & (8284,89, 1330.72, 1.27, **8.0**) & (958.32, 857.18, 1.01, **5.0**) \\ & MAPE & 1.7 & 4.5 & 1.1 \\ \hline Double Exponential & (\(A\), \(B\), \(\alpha_{1}\), \(\alpha_{2}\), \(\mathbf{a}\)) & (117.14, 80.59, 0.89, 0.73, **25.0**) & (8597.48, 52.55, 4.86, 1.39, **9.0**) & (55.51, 68.62, 3.08, 1.32, **4.0**) \\ & MAPE & 1.5 & 4.7 & 0.1 \\ \hline \end{tabular}
\end{table}
Table 3: Optimised Model parameters of different mathematical functions for \(\ell\) = 0, 2 and 4.
easily see that the inverse potentials obtained using any of the chosen mathematical models are exactly the same, with negligible variations, as they all converge to give mean absolute percentage errors within about 1%. So, even though many different mathematical functions have been proposed over the years, they all guide the process of constructing inverse potentials in exactly the same manner.
One might think that this could be due to the global optimisation algorithm, which seems to always converge to a similar shape for the inverse potentials. So, to test this, we have considered only as many experimental data points as the number of model parameters, so that the equations are neither under-determined nor over-determined. That is, we have obtained the interaction potentials for the three-parameter Morse and MT potentials by considering the following energies for each of the partial waves during optimisation:
1. \(\ell=0\): E=[0.85 MeV, 9.88 MeV, 25.55 MeV]
2. \(\ell=2\): E=[3.84 MeV, 7.47 MeV, 25.55 MeV]
3. \(\ell=4\): E=[18 MeV, 21.13 MeV, 25.55 MeV]
Similarly, for the mathematical functions with four parameters, such as the double Gaussian, double Hulthen, and double exponential potentials, we have extended the analysis by adding one additional energy point for each \(\ell\) value: \(2.5\,MeV\), \(18\,MeV\), and \(24.11\,MeV\) for \(\ell=0\), 2, and 4, respectively. The obtained parameters for the S, D, and G states for each of the model potentials are compiled in Table 4. It is evident that the results are comparable to those obtained from the global optimization algorithm. Even the mean absolute percentage errors are only slightly higher than those obtained using the GOA.
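For concreteness, a minimal sketch of the mean absolute percentage error used as the optimisation objective is given below; the function name, container types, and sample values are illustrative and not taken from the actual optimisation code.

```cpp
#include <cassert>
#include <cmath>
#include <cstdio>
#include <vector>

// Mean absolute percentage error between simulated and experimental
// scattering phase shifts; this quantity is minimised during optimisation.
double MeanAbsolutePercentageError(const std::vector<double>& simulated,
                                   const std::vector<double>& experimental) {
  assert(simulated.size() == experimental.size() && !simulated.empty());
  double sum = 0.0;
  for (std::size_t i = 0; i < simulated.size(); ++i) {
    sum += std::abs((experimental[i] - simulated[i]) / experimental[i]);
  }
  return 100.0 * sum / static_cast<double>(simulated.size());
}

int main() {
  // Hypothetical phase shifts (degrees) at three lab energies.
  std::vector<double> experimental = {147.0, 99.6, 65.4};
  std::vector<double> simulated = {145.2, 101.1, 66.0};
  std::printf("MAPE = %.2f %%\n",
              MeanAbsolutePercentageError(simulated, experimental));
  return 0;
}
```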
The obtained SPS for the S, D, and G states are shown in Figure 3. The scattering phase shifts follow the same trend as the expected ones [10] for \(\ell=0\) and 4. However, there are slight discrepancies from a lab energy of 7.88 MeV to 11.88 MeV for \(\ell=2\). Therefore, we can conclude that the atomic Hulthen as a screened Coulomb potential works well for the S and G states, but it is not as effective in capturing the peak that appears in the SPS of the D state when the phase function method is used to calculate the SPS. One can also conclude that all the mathematical functions are more or less equally effective in guiding the construction of inverse potentials for all the \(\ell\)-channels.
## 4 Conclusions
The inverse potentials for alpha-alpha scattering have been constructed by considering various successful models proposed for nuclear interactions, such as Morse, double Gaussian, double
\begin{table}
\begin{tabular}{|c c c c c|} \hline Mathematical Function & Model Parameters & \(\ell=0\) & \(\ell=2\) & \(\ell=4\) \\ \hline Morse & (\(D_{0}\), \(r_{a}\), \(a_{a}\), \(\mathbf{a}\)) & (11.46, 335, 1.58, 15.0) & (27.04, 1.92, 0.62, 7.0) & (214.16, 0.47, 0.73, 3.5) \\ & MAPE & 2.3 & 4.5 & 1.0 \\ \hline Double Gaussian & (\(V_{\nu}\), \(V_{\mu}\), \(\mu_{e}\), \(\mu\), \(\mathbf{a}\)) & (50.77, 100.42, 0.24, 0.41, 20.0) & (53.97, 15558.85, 0.45, 1.80, 8.0) & (128.68, 1.24, 0.49, 4.46, 0.5) \\ & MAPE & 1.4 & 5.2 & 0.5 \\ \hline Double Hulthen & (\(S_{n}\), \(S_{\ell n}\), \(S_{\ell n}\), \(\beta\), \(\alpha\), \(\mathbf{a}\)) & (49.74, 36.61, 3.42, 2.82, 15.0) & (824.72, 737.18, 1.56, 0.12, 7.0) & (48.17, 1.33, 5.36, 4.14, 5.0) \\ & MAPE & 3.2 & 5.4 & 1.2 \\ \hline MT & (\(V_{\mu}\), \(V_{\mu}\), \(\mu\), \(\mathbf{a}\)) & (1189.46, 432.15, 0.46, 20.0) & (10737.19, 1527.78, 1.31, 8.0) & (952.66, 851.72, 1.01, 5.0) \\ & MAPE & 2.7 & 4.7 & 1.3 \\ \hline Double Exponential & (\(A\), \(B\), \(\alpha_{1}\), \(\alpha_{2}\), \(\mathbf{a}\)) & (73.73, 14.53, 1.31, 0.59, 25.0) & (6381.90, 62.53, 4.50, 1.44, 9.0) & (55.51, 68.62, 3.08, 1.32, 4.0) \\ & MAPE & 2.5 & 4.9 & 0.1 \\ \hline \end{tabular}
\end{table}
Table 4: Optimised model parameters of the interaction potential for \(\ell=0\), 2, and 4, obtained by taking the number of data points equal to the number of model parameters.
Hulthen, Malfliet-Tjon, and double exponential functions, with the atomic Hulthen as an ansatz for the screened Coulomb interaction. The model parameters have been optimised using a global optimisation algorithm [38] which minimises the mean absolute percentage error between the scattering phase shifts obtained from the phase function method and the experimental data. On comparing the resultant inverse potentials, one can conclude that all the mathematical models agree with each other to within small variations and yield almost identical mean absolute percentage errors. Since the inverse potential approach utilises all available experimental data, it provides a globally optimal solution which might become data dependent. Hence, we have also performed the optimisation by considering only as many experimental data points as there are model parameters. This procedure also led to interaction potentials similar to those obtained using global optimisation. So, it is reasonable to conclude that the mathematical functions considered only serve to guide the process of obtaining the interaction potential and are not unique. This holds for any potential, as long as it has the basic features required of any two-body interaction: repulsion at short distances, attraction at intermediate distances, and an exponentially decaying tail at large distances.
## Acknowledgments
A. Awasthi acknowledges financial support provided by Department of Science and Technology (DST), Government of India vide Grant No. DST/INSPIRE Fellowship/2020/IF200538.
**Author Declaration** The authors declare that they have no conflict of interest.
|
2306.05994 | Bridging Scales: a Hybrid Model to Simulate Vascular Tumor Growth and
Treatment Response | Cancer is a disease driven by random DNA mutations and the interaction of
many complex phenomena. To improve the understanding and ultimately find more
effective treatments, researchers leverage computer simulations mimicking the
tumor growth in silico. The challenge here is to account for the many phenomena
influencing the disease progression and treatment protocols. This work
introduces a computational model to simulate vascular tumor growth and the
response to drug treatments in 3D. It consists of two agent-based models for
the tumor cells and the vasculature. Moreover, partial differential equations
govern the diffusive dynamics of the nutrients, the vascular endothelial growth
factor, and two cancer drugs. The model focuses explicitly on breast cancer
cells over-expressing HER2 receptors and a treatment combining standard
chemotherapy (Doxorubicin) and monoclonal antibodies with anti-angiogenic
properties (Trastuzumab). However, large parts of the model generalize to other
scenarios. We show that the model qualitatively captures the effects of the
combination therapy by comparing our simulation results with previously
published pre-clinical data. Furthermore, we demonstrate the scalability of the
model and the associated C++ code by simulating a vascular tumor occupying a
volume of 400mm3 using a total of 92.5 million agents. | Tobias Duswald, Ernesto A. B. F. Lima, J. Tinsley Oden, Barbara Wohlmuth | 2023-06-09T16:06:25Z | http://arxiv.org/abs/2306.05994v1 | # Bridging Scales: a Hybrid Model to Simulate Vascular Tumor Growth and Treatment Response
###### Abstract
_Cancer_ is a disease driven by random DNA mutations and the interaction of many complex phenomena. To improve the understanding and ultimately find more effective treatments, researchers leverage computer simulations mimicking the tumor growth _in silico_. The challenge here is to account for the many phenomena influencing the disease progression and treatment protocols. This work introduces a computational model to simulate vascular tumor growth and the response to drug treatments in 3D. It consists of two agent-based models for the tumor cells and the vasculature. Moreover, partial differential equations govern the diffusive dynamics of the nutrients, the vascular endothelial growth factor, and two cancer drugs. The model focuses explicitly on breast cancer cells over-expressing HER2 receptors and a treatment combining standard chemotherapy (Doxorubicin) and monoclonal antibodies with anti-angiogenic properties (Trastuzumab). However, large parts of the model generalize to other scenarios. We show that the model qualitatively captures the effects of the combination therapy by comparing our simulation results with previously published pre-clinical data. Furthermore, we demonstrate the scalability of the model and the associated C++ code by simulating a vascular tumor occupying a volume of \(400mm^{3}\) using a total of \(92.5\) million agents.
keywords: Vascular tumor growth model, Angiogenesis, Combination therapy, Agent-based model, Hybrid model, 3D tumor simulation +
Footnote †: journal: Elsevier
## 1 Introduction
According to the WHO [1], cancer is one of the deadliest diseases worldwide and was responsible for one out of six fatalities in 2020. In the same year, officials registered 18 million new cases, roughly matching the entire population of the Netherlands. The sheer number of people suffering from cancer and the accompanying protracted fight against the disease drew many scientists into cancer research. Experimentalists and theoreticians alike strive to foster the understanding of tumor growth, disease progression, and different treatment protocols. United in their goal to battle cancer and improve the life quality of patients, experimentalists gather quantitative data on cancerous systems, while at the same time, theoreticians explore mathematical models for the disease, i.e., attempting to predict its evolution and reaction to treatment.
Mathematical cancer models usually belong to one of the following three categories: (1) models based on ordinary differential equations (ODEs), (2) models based on partial differential equations (PDEs), and (3) models based on discrete cell representations, which we refer to as agent-based models (ABMs). In 2016, Oden and co-workers [2] reviewed these approaches and embedded them into the wider context of the predictive, computational sciences, and the associated data-generating experiments. Earlier work from Byrne [3] and Beerenwinkel [4] documented the progression of the field and offer a great introduction to the topic.
Each of these modeling approaches involves strengths and weaknesses. For instance, ODE models are comparatively cheap to compute but fail to resolve spatial structures. PDEs incorporate spatial information but are significantly
more expensive to implement computationally. Further, they employ homogenized tumor properties (i.e., from tumor cells to cell densities), which may benefit more extensive tumor simulations but limits their ability to resolve effects on a cellular scale. This scale is best described with ABMs resolving the individual tumor cells and allowing a natural way to include cellular information. Unfortunately, the computational costs can quickly get out of hand. In 2019, Metzcar and coworkers [5] reviewed ABMs and their application in theoretical cancer research. Their work offers an excellent overview of the state of the art and the wide range of models leveraged by scientists.
Mathematical tumor models tend to consider simplified scenarios and, ABMs in particular, often focus on small simulations because of the computational costs. While simple models should generally be preferred [6], cancer thrives from complex interactions. To better understand them, the complexity must, of course, be captured by the mathematical models. In the experimental literature, researchers working on _in vitro_ drug screening have long realized that, for instance, 3D cell cultures and tumor spheroids better match _in vivo_ studies than flat, 2D cultures [7] and that the complex tumor microenvironment has a strong influence on the tumor development [8; 9; 10; 11]. Thus, it is of great interest to replicate the system's complexity _in silico_ to test and improve the current understanding with computational models [12]. However, the complexity poses new challenges as it requires significant software development effort upfront before a specific problem in a complex environment can be studied.
In this work, we present a novel hybrid model (ABMs + PDEs) simulating vascular tumor growth and the response to therapy combining Doxorubicin and Trastuzumab in 3D. Our model consists of three major components: (1) a set of PDEs governing the diffusive dynamics of the nutrients, the vascular endothelial growth factor (VEGF), and cancer drug compounds, (2) an off-lattice, center-based ABM with spherical agents for the tumor cells governed by a cell cycle, and (3) another off-lattice, center-based ABM with cylindrical agents describing the vasculature and sprouting angiogenesis. The PDEs and ABMs are coupled in both ways, i.e., the continua influence the agents and vice versa. The vasculature supplies nutrients and treatment drugs, which are consumed by the tumor cells; in contrast, tumor cells secrete VEGF triggering vascular growth via sprouting angiogenesis. Moreover, the tumor cells interact via two-particle forces. To overcome the previously outlined limitations, we base our implementation on the highly-efficient ABM simulation platform BioDynaMo [13; 14], which enables our C++ application code to scale seamlessly from single-core machines to modern compute nodes with hundreds of threads. The entire source code of the project is available, together with a Docker container and bash scripts to reproduce the findings. 1
Footnote 1: Available after final publication on GitHub: TobiasDuswald/angiogenesis; TobiasDuswald/bdm-angiogenesis-reproducer. Currently upon request.
While adopting ideas from previous research, most notably [15; 16; 17; 18; 13], this work represents several significant advances. First, we extend the previous ABMs [16; 17; 18] to a treatment scenario accounting for two different cancer drugs and move from 2D to 3D. Second, we show a novel way to model the vasculature and angiogenesis in a general ABM context and couple it with PDE models. Third, we show that the model qualitatively describes several aspects of tumor dynamics and captures the expected characteristics of the combination therapy as hypothesized by Jain [19] and experimentally investigated in [20]. Lastly, we demonstrate that the code can handle tissue-relevant sizes by simulating a \(9\times 9\times 9mm^{3}\) large volume hosting up to 92.5 million agents over 27 days. Computationally, this significantly exceeds that in previous work.
We first introduce the biological mechanisms of vascular tumor growth, the considered cancer drug treatment, and the associated preclinical study [20] in Section 2. We proceed by detailing the mathematical model in Section 3 and devote Section 4 to discussing our parameter choices. In Section 5, we run the fully coupled model demonstrating the model's ability to simulate vascular tumor growth and treatment by comparing the simulation results to the preclinical study. We scale our simulation to tissue-relevant scales in Section 6. We critically review our approach and address shortcomings in Section 7. Additionally, A displays the data used in Section 6, B gives an overview of all model parameter, and C explains our approach for statistically mimicking the initial vasculature for the tissue scale.
## 2 Preliminaries and Model Framework
In this work, we present a hybrid model simulating the vascular growth of a tumor and its decline under treatment. This section establishes the biological and medical background to understand the model's components and reviews
the literature. We begin with summarizing the most important biological concepts of vascular tumor growth and point the reader to related mathematical literature. In Section 2.2, we sketch the mechanism of action of the two cancer drugs considered by our model, Doxorubicin and Trastuzumab, and outline why a treatment combining both may excel in efficacy in contrast to current practice [19]. The preclinical study supporting this hypothesis [20] is presented in Section 2.3 and provides the most relevant data for this study.
### Vascular Tumor Growth and Mathematical Models
Cancer is a disease evolving on a cellular scale; on the most fundamental level, seemingly random mutations of the cell DNA occur during the regular cell cycle. These DNA changes trigger abnormal behavior, mainly affecting cell proliferation and mobility. Typically, cancerous cells replicate quicker than healthy cells enabling them to locally out-compete the normal cells for resources. However, increased proliferation and mobility are only two among many phenomena that differentiate tumor cells from regular cells. In a seminal series of papers [21; 22; 23], Hanahan and Weinberg identified the _hallmarks of cancer_, i.e., specific properties that either tumor cells or populations thereof show in contrast to normal tissue due to the altered DNA. They describe ten hallmarks and four further candidates as of 2022 [23, Fig. 1]. While these hallmarks characterize the tumor cells on small scales, Nia et al. [24] linked the hallmarks to macroscopic properties, which they called the _physical traits of cancer_. These traits encompass stress, pressure, stiffness, and the complexity of the tumor microenvironment. The hallmarks and physical traits of cancer form a solid basis for the theoretical investigation of cancerous systems using mathematical tools [25].
Among the ten hallmarks, _inducing and accessing vasculature_ is particularly important for the present study. If a local population of tumor cells grows, a commonly observed pattern is that it drains the locally available energy resources, e.g., oxygen and glucose, and creates a deadly, hypoxic environment for all cell types. The tumor cells enter a hypoxic state and secrete signaling substances such as VEGF to attract new vasculature, an observation usually attributed to Folkman [26]. The existing vasculature reacts by forming new sprouts that grow towards the hypoxic region to supply oxygen and rescue the dying cells. This process, called sprouting angiogenesis, forms a central component of this study. We note that there are alternative mechanisms to increase the tumor's vascular density. However, sprouting angiogenesis is usually dominant, and consequently, we focus on it in the present work (see [27; 1.2.1]).
As explained in the introduction, cancer dynamics are typically modeled with ODEs, PDEs, ABMs, or combinations thereof. An excellent summary of ODE methods can be found in Benzekry's work [28]. PDE-based models leverage diffusive terms [29; 30] or a phase field description [31; 32; 33]. More recently, models involving fractional diffusion dynamics have been considered [34], i.e., diffusion processes that deviate from the traditional Fick's law. ABMs have recently been reviewed by Metzcar [5]. It is common practice to combine the three approaches, e.g., using an ABM with cell internals modeled with an ODE system while diffusing substances are modelled with PDEs [16; 35].
Similar modeling paradigms have also been used to model the phenomena of (sprouting) angiogenesis. Villanova [27] presents an introduction to the topic in his Ph.D. thesis by summarizing different modeling approaches and explaining the biological background. In his research [36; 37], he combines a discrete model for the tip cells with a phase field model following them and classifying regions as being vasculature or not, an approach similar to [38]. Fritz and co-workers [39] describe a complex, coupled PDE model with a network growth algorithm considering the vasculature's statistical features. For more general reviews of angiogenesis models, we refer the reader to [40; 41], but for the present work, purely agent-based angiogenesis models are at the center of attention. Arguably one of the most important works in this regard has been carried out by Bentley et al. [42; 43]. They describe an initial blood vessel by points located on a cylinder connected via mechanical springs and reacting to external substances. Each point resembles an agent acting independently, forming sprouts and predecessors to vessels. Perfahl and co-workers [44] modelled the vasculature as a chain of spherical agents connected via springs showing similarities to our approach. In contrast, Phillips et al. [18] model angiogenesis in a 2D setting resolving the individual cells of the vessels modelled as tip and stalk cells. Their cancer model shares significant features with ours, but the angiogenesis module is conceptually different. Furthermore, we use the evolving vasculature to model the supply of Doxorubicin and Trastuzumab, two drugs discussed in the next section.
### Doxorubicin and Trastuzumab
We consider a treatment protocol involving two well-known cancer drugs: Doxorubicin (DOX) and Trastuzumab (TRA). The U.S. Food and Drug Administration approved these drugs in 1974 and 1998, respectively, and they routinely find use in clinical applications. DOX is an _anthracycline_ frequently used in chemotherapy, popular because of its high efficacy in fighting many different types of cancer. In typical treatment scenarios, DOX is injected into the patient's veins, from where it spreads through the body and, ultimately, begins interacting with the cells. Effectively, DOX interrupts the DNA duplication by a process referred to as _intercalation_ [45; 46]. Once cells fail to duplicate their DNA, they trigger safety mechanisms, often leading to the cell's death [47; 48]. For more information on DOX and its effects on cells, we refer the reader to [49; 50; 51; 52] and references therein.
TRA is a _monoclonal antibody_ and, thus, is more specific in its therapeutic action than DOX. In general, a monoclonal antibody is an antibody that only binds to a specific molecular structure (e.g., a protein). After binding, the antibody induces an immune reaction targeting its binding partner, which may depend on the monoclonal antibody and the binding partner. Historically, monoclonal antibodies had much success in cancer therapy [53]. TRA specifically binds to the so-called _human epidermal growth factor receptor type-2_ (HER2) located at the surface of some tumor cells. HER2 is often over-expressed in dangerous breast cancer variations (20-30%). The associated pathways lead to increased proliferation and, thus, tumor formation. When TRA binds to HER2, it inhibits proliferation and reduces survival. Moreover, there is evidence that TRA shows anti-angiogenic properties; i.e., it stops the formation of new blood vessels and prunes and regularizes the exiting tumor vasculature [54; 55]. For an in-depth literature review, see [56].
While both drugs have proven effective in fighting cancer, they may also have severe side effects. For instance, DOX has been linked to cardiotoxicity, neurological disturbances, and many other maladies (see references in [50; Section 3]). TRA is less harmful, but side effects still occur [56; Toxicity]. _Combination therapy_ strives to combine different drugs into one therapy strategy such that the drugs enhance each other's anti-tumor properties while minimizing their toxicity, i.e., damage to the patient. In 2001, Jain [19] suggested a new paradigm for combining anti-angiogenic therapies with regular tumor treatment. He argued that anti-angiogenic drugs could be used to regularize the tumor vasculature, allowing it to deliver other anti-cancer drugs more effectively. For the case at hand, TRA would regularize the vasculature improving its supply properties. Afterward, lower doses of DOX may be sufficient to eradicate the tumor cell population. In the present work, we provide a computational model designed to illustrate this effect and to compare it to preclinical data introduced in the next section.
### In vivo Experiments for Combination Therapy
Sorace et al. [20] tested Jain's hypothesis [19] in a pre-clinical _in vivo_ study. They injected HER2+ breast cancer cells (BT474, ATCC) into the murine subjects and observed the tumor evolution over 70 days. They split the 42 murine subjects into six different treatment groups:
* Group 1: control group, treated with saline,
* Group 2: treated with DOX only,
* Group 3: treated with TRA only,
* Group 4: first treated with DOX, subsequently with TRA,
* Group 5: first treated with TRA, subsequently treated with DOX,
* Group 6: simultaneously treated with DOX and TRA.
All 42 animals remained untreated for 35 days and showed similar disease progression. Once the treatment started, Sorace and coworkers observed significant differences in tumor volume over time between the groups. The observations are displayed in Fig. 1, which shows that the tumor volume grows exponentially before the treatment begins. Furthermore, the treatments of groups 2 and 4 are observed to be ineffective. For group 3, we observe stagnation, and for groups 5 and 6 a significant decline in tumor volume. The data of these experiments were published in [57; Tab. 1 and 5] together with a calibrated ODE model. We merged the pre-treatment stages of the six groups into a separate dataset given in Tab. A.2 in A. These data, specifically the pre-treatment stage and the groups 2, 3, 4, and 5, play a fundamental role in assessing the quality of our hybrid model later in the results and discussion sections. We now shift our attention to the core of this work: the hybrid model.
## 3 The Hybrid Model
Our model incorporates the evolution of the tumor mass, its nutrient and blood supply, and the effects of the therapy in one comprehensive hybrid model. The tumor mass is described with an agent-based model composed of individual, spherical tumor cells independently progressing in their cell cycle but dependent on the concentration of external substances. The cells interact via two-particle forces. The blood vessels are modeled with individual, cylindrical agents managed in a tree-like structure, i.e., each agent has precisely one predecessor and either one or two successors. The developing vasculature delivers the nutrients and drug compounds to the cells but also reacts to the local VEGF gradient. Four substances are modeled as scalar fields: nutrients, VEGF, DOX, and TRA. All four substances obey reaction-diffusion equations and are coupled to the ABM via source and sink terms proportional to regularized \(\delta\)-distributions marking the agents' locations. Hence, the coupling between PDEs and ABMs is bi-directional. We implemented the model in C++ based on the highly efficient BioDynaMo framework [13; 14].
In the following subsections, different parts of the model and their interactions are described. We begin with the tumor cells, their cell cycle, and their interaction forces in Section 3.1. We continue with the blood vessels and explain the rules governing the dynamics of angiogenesis in Section 3.2. The equations governing the scalar fields are detailed in Section 3.3. After discussing the separate components of the model, we couple them in Section 3.4.
### Tumor cell
Our model describes the tumor on the cellular scale; i.e., each tumor cell is explicitly modeled as a spherical agent with stochastic behaviors. In the simulation, a tumor cell is a C++ object with various attributes. Focusing on the most relevant examples, a tumor cell is characterized by (1) a unique ID, (2) its position \(\vec{x}\) in the three dimensional space, (3) its nuclear, physical, and action radii \((r_{n},r_{p},r_{a})\), (4) its cell state \(s\), and (5) an internal clock tracking the time since the last state transition \(\Delta t_{s}\) to model the time-dependent phases of the cell cycle. All but the cell state \(s\) are real, possibly vector-valued numbers.
Figure 1: Mean and standard deviation of the tumor volume, measured over 70 days, of six different treatment groups: (a) group 1, (b) group 2, (c) group 3, (d) group 4, (e) group 5, and (f) group 6. The vertical lines indicate the day when each treatment was delivered (Doxorubicin (DOX), Trastuzumab (TRA), Saline (SAL)). Data taken from [20] and [57, Tab. 1 and 5].
The cell state \(s\) is a categorical variable taking values \(s\in\{Q,SG2,G1,H,D\}\). \(Q\) is the quiescent state in which the cell is idle and no special events occur. \(SG2\) and \(G1\) denote the proliferative cell states. In \(SG2\), the cell duplicates its inner components and prepares for cell division. The volume-preserving cell division marks the transition from \(SG2\) to \(G1\). In \(G1\), the cells grow until they reach their natural size. The states \(H\) and \(D\) denote the hypoxic and dead states, respectively.
The different cell states form the core of the tumor growth model. The transitions between them depend on the values of the four continua - the nutrients, VEGF, DOX, and TRA. We denote them as \(u_{n}\), \(u_{v}\), \(u_{d}\), and \(u_{t}\), respectively. The second basic component of the tumor model is the force model consisting of the repulsive and adhesive forces governing the cell-cell interaction. In what follows, we describe the stochastic model underlying the state transitions and detail the forces and their algorithmic computation afterward. The model of the cell cycle and the forces are, in part, based on previous work [15; 16; 18; 17].
#### 3.1.1 Cell cycle
The progression of a tumor cell in its cell cycle depends on the local substance concentrations but not on the surrounding cells. The state transitions are governed by stochastic as well as deterministic rules. Given the five states \((Q,SG2,G1,H,D)\), our cell cycle allows transitions \(Q\to SG2/H/D\), \(SG2\to G1/D\), \(G1\to Q\), and \(H\to Q/D\). In its entirety, the cell cycle is best represented graphically as depicted in Fig. 2.
We first focus on the three deterministic transitions: \(SG2\to G1\), \(G1\to Q\), and \(Q\to H\). The first two transitions simply require the cell to spend a given time in the respective state, assuming that the times for the cells to duplicate their internals and to grow are fixed. For \(SG2\to G1\) and \(G1\to Q\), the times are \(T_{SG2}\) and \(T_{G1}\), respectively. The transition from \(Q\) to \(H\) depends on the nutrient concentration, i.e., if the concentration falls below the hypoxic threshold, \(u_{n}^{H}\), the cells transition from \(Q\) to \(H\); if the concentration rises above \(u_{n}^{H}\), the cells transition from \(H\) to \(Q\). The deterministic transitions are indicated by solid lines in Fig. 2. We use the Iverson brackets \([\cdot]\) to denote an if-statement: if the condition inside the brackets evaluates to true, the cell moves from one state to the other. With this
Figure 2: The cell cycle for the tumor cells. Deterministic and stochastic transitions are indicated by solid and dashed lines, respectively. The arrows indicate the direction of the transition. Transitions depending on the concentration of the nutrients (N), DOX, or TRA are labeled accordingly. Transitions solely depending on an internal clock are labeled with a stopwatch. Cell representation: _Cancer cell_ from the _Database Center for Life Science (DBCLS)_, distributed under _Creative Commons Attribution 4.0 International license_ (modified). Stopwatch: _Stop Watch_ from _SimpleIcon_, distributed under _Creative Commons Attribution 3.0 Unported_.
notation, we describe the deterministic transitions as
\[SG2\to G1: \ \left[\Delta t_{s}\geq T_{SG2}\right]\,, \tag{1}\] \[G1\to Q: \ \left[\Delta t_{s}\geq T_{G1}\right]\,,\] (2) \[Q\to H: \ \left[u_{n}<u_{n}^{H}\right]\,,\text{ and}\] (3) \[H\to Q: \ \left[u_{n}\geq u_{n}^{H}\right]\,. \tag{4}\]
To characterize the stochastic transitions, we first introduce two functions \(\varsigma\) and \(\varrho\)[17; 15] appearing repeatedly in the transition probabilities describing a smoothed Heaviside function and linear increase, respectively. The functions are given by the following equations:
\[\varsigma(x,a,b,\bar{x}) =1-\exp\left(-\left(a+\frac{1}{1+\exp(2\cdot b\cdot(x-\bar{x}))} \right)\Delta t\right)\,,\text{ and} \tag{5}\] \[\varrho(x,c,\bar{x}) =1-\exp\left(-\max\left(c\cdot\frac{x-\bar{x}}{1-\bar{x}},0 \right)\Delta t\right)\,. \tag{6}\]
The dependent variable \(x\) and its parameters are separated by a semi-colon. The construction of the functions implicitly assumes bounded values \(x,\bar{x}\in[0,1]\subset\mathbb{R}\). Moreover, \(\Delta t\) denotes the simulation time step. For \(\varsigma\), the parameter \(a\) offsets the function along the y-axis, the parameter \(b\) models the sharpness of the transition, and the parameter \(\bar{x}\) defines the transition point. For \(\varrho\), the parameter \(c\) describes the slope, and \(\bar{x}\) defines the starting point of the linear increase.
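To make the two auxiliary functions concrete, the following minimal C++ sketch implements Eq. (5) and (6) directly; the function and variable names are ours and do not correspond to the released application code.

```cpp
#include <algorithm>
#include <cmath>

// Smoothed Heaviside-type transition probability, Eq. (5):
// varsigma(x; a, b, xbar) = 1 - exp(-(a + 1/(1 + exp(2 b (x - xbar)))) * dt).
double SmoothedStep(double x, double a, double b, double xbar, double dt) {
  double logistic = 1.0 / (1.0 + std::exp(2.0 * b * (x - xbar)));
  return 1.0 - std::exp(-(a + logistic) * dt);
}

// Linear-increase transition probability, Eq. (6):
// varrho(x; c, xbar) = 1 - exp(-max(c (x - xbar)/(1 - xbar), 0) * dt).
double LinearRamp(double x, double c, double xbar, double dt) {
  double rate = std::max(c * (x - xbar) / (1.0 - xbar), 0.0);
  return 1.0 - std::exp(-rate * dt);
}
```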
The stochastic transitions in the cell cycle are indicated by the dashed lines in Fig. 2. We extend the work of [15; 16; 18; 17] to account for the concentration of TRA and DOX. The transition probability for a tumor cell at position \(\vec{x}\) from \(Q\to SG2\) is modeled as
\[P_{Q\to SG2}=\varrho\left(u_{n}(\vec{x});c_{Q\to SG2},u_{n}^{Q\to SG2} \right)\cdot\exp\left(-\lambda_{Q\to SG2}u_{t}(\vec{x})\right)\,, \tag{7}\]
where we introduce three parameters characterizing the \(Q\to SG2\) transition indicated by a sub- or superscript. Note that \(\lambda_{Q\to SG2}\geq 0\). The exponential suppression is introduced because TRA leads to cell cycle arrest [58]. Introducing more parameters, we express the remaining stochastic transitions as
\[P_{Q\to D} =\varsigma\left(\,u_{n}(\vec{x});a_{Q\to d},b_{Q\to d},u_{n}^{Q \to d}\right)\cdot\left(1+\xi_{d}^{Q\to D}u_{d}+\xi_{t}^{Q\to D}u_{t}+\xi_{dt} ^{Q\to D}u_{d}u_{t}\right), \tag{8}\] \[P_{SG2\to SG2} =\varrho\left(u_{d}(\vec{x});c_{SG2\to SG2},u_{d}^{SG2\to SG2} \right)\,,\] (9) \[P_{SG2\to D} =\varrho\left(u_{n}(\vec{x});c_{SG2\to D},u_{n}^{SG2\to D} \right)\,,\] (10) \[P_{H\to D} =\left(r_{H\to D}\cdot\Delta t\right)\cdot\left(1+\xi_{d}^{H\to D }u_{d}+\xi_{t}^{H\to D}u_{t}+\xi_{dt}^{H\to D}u_{d}u_{t}\right)\,. \tag{11}\]
Here, linear and cross terms are added to parametrize the treatment effect. These terms appear in the \(Q\to D\) and \(H\to D\) transitions. We further introduce the \(SG2\to SG2\) transition triggering a reset of the internal clock. This transition models DOX's ability to interfere with the DNA duplication process via intercalation [45; 46]. If the DNA duplication process fails, cells may die, which we capture with the added \(SG2\to D\) transition. Theoretically, the probabilities for \(Q\to D\) and \(H\to D\) may exceed 1 for certain parameter choices; in practice, however, this does not harm the implementation. One may formally rewrite the transitions as \(\min(\max(0,\bullet),1)\) to ensure a proper probability interpretation. We note that the cell cycle is parametrized by a total of 20 parameters (see Tab. B.6 for all parameter values).
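As an illustration of how such a probability enters the time stepping, the sketch below evaluates Eq. (7) for the \(Q\to SG2\) transition and draws a Bernoulli sample for a single cell in a single time step; all parameter names are placeholders for the values listed in Tab. B.6, and the code is a simplified stand-in for the actual behavior implementation.

```cpp
#include <algorithm>
#include <cmath>
#include <random>

// Probability of the Q -> SG2 transition within one time step, Eq. (7):
// varrho(u_n; c, u_bar) * exp(-lambda * u_t). Parameter names are placeholders.
double ProbabilityQToSG2(double u_n, double u_t, double c, double u_bar,
                         double lambda, double dt) {
  double rate = std::max(c * (u_n - u_bar) / (1.0 - u_bar), 0.0);
  double rho = 1.0 - std::exp(-rate * dt);
  return rho * std::exp(-lambda * u_t);
}

// Draw the stochastic transition for one cell in one time step.
bool SampleQToSG2(double u_n, double u_t, double c, double u_bar,
                  double lambda, double dt, std::mt19937& rng) {
  std::uniform_real_distribution<double> uniform(0.0, 1.0);
  return uniform(rng) < ProbabilityQToSG2(u_n, u_t, c, u_bar, lambda, dt);
}
```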
#### 3.1.2 Cell-cell forces
The cell-cell forces are taken from Rocha et al. [16]. In summary, the cells are represented as spheres with three radii (action, regular, and nuclear). The cell-to-cell force is a two-particle force depending on the distance \(d\) between two cells. It has adhesive and repulsive components that depend on the overlap of the different radii.
To compute the displacement of cell (\(i\)), all contributions from all other cells are summed; e.g.,
\[\vec{F}_{i}=\sum_{j\neq i}\vec{F}_{ij}\,. \tag{12}\]
We note that the force is zero if \(d\geq R_{A}\), where \(R_{A}\) is the sum of the two action radii of the cells involved in the interaction. In other words, the force is zero if cells do not overlap. Thus, we can define an index set
\[\mathcal{N}_{i}=\{j\neq i\mid d(\vec{x}_{i},\vec{x}_{j})\leq 2\cdot\max_{k}(r_{a} ^{(k)})\} \tag{13}\]
containing all neighbor cells whose action radii (\(r_{a}\)) overlap with the one of cell-\(i\), and compute the force as
\[\vec{F}_{i}=\sum_{j\in\mathcal{N}_{i}}\vec{F}_{ij}. \tag{14}\]
This is significantly cheaper to compute because one iterates only over a small index set rather than over all other cells. Computationally, we determine the index set \(\mathcal{N}_{i}\) through an efficient neighbor query based on an artificial, uniform grid with a discretization length \(h=2\cdot\max_{k}(r_{a}^{(k)})\), i.e., twice the largest action radius observable in the simulation at this time [13; 14]. After computing \(\vec{F}_{i}\), we update the position of the cell (\(i\)) as
\[\vec{x}_{i}(t+\Delta t)=\vec{x}_{i}(t)+\eta\vec{F}_{i}\cdot\Delta t\, \tag{15}\]
where \(\eta\) is a viscosity parameter describing the linear relationship between displacement and force. We note that we do not consider forces between the tumor cells and the vessels, which we discuss next.
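Before moving on, a minimal sketch of Eq. (14) and (15) is given below; the pairwise force law is abstracted into a callable, and all identifiers are illustrative rather than taken from the BioDynaMo-based implementation.

```cpp
#include <array>
#include <functional>
#include <vector>

using Vec3 = std::array<double, 3>;

// Sum the pairwise forces over the neighbor index set N_i (Eq. (14)) and
// update the cell position with an explicit Euler step (Eq. (15)).
Vec3 UpdatePosition(const Vec3& x_i, const std::vector<Vec3>& neighbors,
                    const std::function<Vec3(const Vec3&, const Vec3&)>& force,
                    double eta, double dt) {
  Vec3 total{0.0, 0.0, 0.0};
  for (const auto& x_j : neighbors) {
    Vec3 f = force(x_i, x_j);  // adhesive + repulsive two-particle force
    for (int d = 0; d < 3; ++d) total[d] += f[d];
  }
  Vec3 x_new;
  for (int d = 0; d < 3; ++d) x_new[d] = x_i[d] + eta * total[d] * dt;
  return x_new;
}
```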
### Blood Vessels and Angiogenesis
To model the nutrient supply, the vasculature is decomposed into small cylindrical compartments. Each compartment is computationally represented by a cylindrical agent. We base our implementation on BioDynaMo's neurite class [13] whose equations are detailed in [59].
More broadly viewed, the vasculature is represented as a linked, tree-like data structure. The structure is realized by assigning three references to each cylindrical vessel-agent containing the address of its unique predecessor and the optional addresses of up to two successors. We refer hereafter to these as mother and daughters. If a vessel-agent has no daughter, it is called the terminal end or tip cell; if it has one daughter, it is part of a regular, longer vessel segment; and if it has two daughters, we call it a branching point.
Assuming a given, initial vasculature, the individual vessel-agents evolve independently from each other based on locally available information. In the language of agent-based modeling, each agent executes the same stochastic behaviors (rules) describing how the object changes in a time step \(\Delta t\) depending on the local information. In this work, we focus on VEGF-triggered sprouting angiogenesis. Our stochastic rules differentiate between tip cells, the cells embedded in a regular vessel segment, and the branching points. The latter remain unchanged because they can neither form any further branches nor can they extend into any direction.
Any vessel agent that is part of the normal vasculature with a single mother and a single daughter is a candidate for branching and, therefore, can create a new tip cell. Whether the agent is allowed to branch depends on three criteria. First, we require the concentration of VEGF at the agent's position to surpass a threshold \(u_{v}(\vec{x})\geq u_{v}^{\text{thres}}\). Second, it has been observed that new tip cells only form if there are no other tip cells in their vicinity. Hence, we require a minimum Euclidean distance \(d_{\text{tip}}\) to the closest tip cell (see discussions and references in [18; 42]). In other words, denoting the set of all tip cells as \(\mathcal{T}\), we demand that
\[\min_{k\in\mathcal{T}}(\|\vec{x}-\vec{x}_{k}\|)>d_{\text{tip}}. \tag{16}\]
Thirdly, to ensure the mechanical stability of the vessel, branching points must be separated by a minimal distance \(d_{\text{branch}}\) measured along the vessel. Denoting the curve defined by the vessel from the agent to the preceding and succeeding branching point as \(\Gamma_{p}\) and \(\Gamma_{s}\), respectively, a valid point for branching satisfies
\[\int_{\Gamma_{p}}ds>d_{\text{branch}}\ \ \text{and}\ \ \int_{\Gamma_{s}}ds>d_{\text{branch }}. \tag{17}\]
To evaluate the tip cell distance criteria, we leverage an octree implementation [60] updated after each simulation time step. To evaluate the distance to the branches, we iterate over the tree structures.
If all three criteria are satisfied, the agent evaluates a stochastic branching rule: if the generated random uniform number \(X\sim U(0,1)\) is smaller than the sprouting probability \(p_{s}=p_{s,\text{rate}}\cdot\Delta t\), the agent creates a second daughter
whose cylinder axis lies on a (random) cone around the VEGF gradient \(\nabla u_{v}\). We remark that we choose the parameter \(p_{s,\text{rate}}\) to be very small, e.g., it satisfies \(p_{s,\text{rate}}\cdot\Delta t\ll 1\) for reasonable choices of \(\Delta t\).
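A compact sketch of the three branching criteria and the subsequent stochastic sprouting decision is given below; the distance queries are written as plain loops instead of the octree and tree traversals used in the actual code, and all identifiers are illustrative.

```cpp
#include <array>
#include <cmath>
#include <random>
#include <vector>

using Vec3 = std::array<double, 3>;

double Distance(const Vec3& a, const Vec3& b) {
  double s = 0.0;
  for (int d = 0; d < 3; ++d) s += (a[d] - b[d]) * (a[d] - b[d]);
  return std::sqrt(s);
}

// Decide whether a vessel agent at position x sprouts a new tip cell.
// Criteria: (1) VEGF above threshold, (2) minimum distance to existing tip
// cells, (3) minimum arc length to the neighboring branch points, followed by
// a Bernoulli trial with probability p_rate * dt.
bool Sprouts(const Vec3& x, double vegf, double vegf_threshold,
             const std::vector<Vec3>& tip_cells, double d_tip,
             double arc_to_prev_branch, double arc_to_next_branch,
             double d_branch, double p_rate, double dt, std::mt19937& rng) {
  if (vegf < vegf_threshold) return false;                       // criterion 1
  for (const auto& tip : tip_cells) {                            // criterion 2
    if (Distance(x, tip) <= d_tip) return false;
  }
  if (arc_to_prev_branch <= d_branch || arc_to_next_branch <= d_branch)
    return false;                                                // criterion 3
  std::uniform_real_distribution<double> uniform(0.0, 1.0);
  return uniform(rng) < p_rate * dt;                             // sprouting
}
```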
Tip cells, on the contrary, are never candidates for branching, i.e., we do not split vessels at the terminal end. Tip cells follow the VEGF gradient \(\nabla u_{v}\) to establish the vasculature in undersupplied regions characterized by high VEGF concentrations. We allow growth if the magnitude of the gradient at the tip cell's position, \(\|\nabla u_{v}(\vec{x})\|\), surpasses a threshold.
When growing, we elongate the tip cells in the direction of the vector
\[w_{1}\nabla u_{v}(\vec{x})+w_{2}\vec{d}+w_{3}X_{3}\, \tag{18}\]
where \(w_{1},w_{2},w_{3}\in\mathbb{R}^{+}\) are the modeling weights, \(\vec{d}\) denotes the axis of the cylinder, and \(X_{3}\sim U(-1,1)^{3}\). The modeling weights determine the direction of the growth, i.e., increasing \(w_{1}\) leads to vessels that follow the gradient more closely, increasing \(w_{2}\) creates inert vessels barely changing their directions, and increasing \(w_{3}\) allows more and more randomness in the growth.
To avoid unlimited growth after tip cell selection, a criterion to determine when to stop the growth is needed. Multiple criteria may be used; e.g., large VEGF concentrations, strong gradients, or some engineered criteria such as the quotient of the gradient magnitude and the concentration. In practice, stopping the growth once vessels reach high gradients proved effective.
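The elongation direction of Eq. (18) can be sketched as a weighted sum of the VEGF gradient, the current cylinder axis, and a random perturbation; as before, the names below are illustrative and not taken from the released code.

```cpp
#include <array>
#include <random>

using Vec3 = std::array<double, 3>;

// Growth direction of a tip cell, Eq. (18):
// w1 * grad(u_v) + w2 * axis + w3 * X, with X ~ U(-1, 1)^3.
Vec3 GrowthDirection(const Vec3& vegf_gradient, const Vec3& axis, double w1,
                     double w2, double w3, std::mt19937& rng) {
  std::uniform_real_distribution<double> uniform(-1.0, 1.0);
  Vec3 dir;
  for (int d = 0; d < 3; ++d) {
    dir[d] = w1 * vegf_gradient[d] + w2 * axis[d] + w3 * uniform(rng);
  }
  return dir;
}
```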
The description of the initial vasculature is taken up in Section 4. To connect the new, evolving vasculature to the initial structure, we first branch from a cylinder with diameter \(d_{0}\). The diameter of the new branch is computed as
\[d_{1}=\max\big{(}5\mu m,\min\left(0.8\cdot d_{0},\ 20\mu m\right)\big{)}. \tag{19}\]
Furthermore, the diameter is decreased along the vessels. Typically, we elongate the cylinders until they reach a length of \(10\mu m\). We then split them into two cylinders of length \(9\mu m\) and \(1\mu m\). Denoting the initial diameter as \(d_{0}\), the diameter of the second agent is computed as
\[d_{1}=\max\big{(}5\mu m,\min\left(0.98\cdot d_{0},\ 20\mu m\right)\big{)}. \tag{20}\]
These heuristic criteria allow us to connect the microvasculature to larger initial vessels smoothly.
Lastly, we model the effect of TRA treatment on the vessel. TRA has been shown to regularize the vasculature and improve the supply properties [54; 55]. We capture this effect by modulating the DOX supply, \(\varphi(t)\), with a time-dependent supply factor, \(\chi(t)\),
\[\varphi(t)=1+\chi(t)\, \tag{21}\]
where a capacitor-like ODE describes the time dependence
\[\frac{d\chi}{dt}=\begin{cases}\frac{1}{\tau_{1}}(\chi_{\max}-\chi)&\text{ during TRA treatment,}\\ -\frac{1}{\tau_{1}}\chi&\text{else.}\end{cases} \tag{22}\]
An example of the time dependence of the supply factor is given in Fig. 3.
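A forward-Euler sketch of the supply-factor dynamics of Eq. (21) and (22) reads as follows; the time constants, the maximum value, and the treatment schedule in the example are placeholders, not calibrated values.

```cpp
#include <cstdio>

// One explicit Euler step for the supply factor chi of Eq. (22); the DOX
// supply modulation is then phi = 1 + chi (Eq. (21)). Separate charge and
// discharge time constants are used here as placeholders.
double StepSupplyFactor(double chi, bool tra_active, double chi_max,
                        double tau_on, double tau_off, double dt) {
  double dchi = tra_active ? (chi_max - chi) / tau_on : -chi / tau_off;
  return chi + dchi * dt;
}

int main() {
  double chi = 0.0, dt = 0.1;  // time step in hours (illustrative)
  for (int step = 0; step < 1000; ++step) {
    bool tra_active = (step * dt < 24.0);  // hypothetical 24 h TRA treatment
    chi = StepSupplyFactor(chi, tra_active, /*chi_max=*/1.0, /*tau_on=*/12.0,
                           /*tau_off=*/24.0, dt);
  }
  std::printf("phi(t_end) = %.3f\n", 1.0 + chi);
  return 0;
}
```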
### Continuum models
Our model involves four continua: the nutrients, VEGF, and the drugs DOX and TRA. Recall that we denote their concentration as \(u_{n}\), \(u_{r}\), \(u_{d}\), and \(u_{t}\), respectively. In brief, the four continua have the following roles. The nutrients model the energy supply of the system. The more nutrients, the more likely the tumor cell transition into proliferative states and multiply. In the absence of nutrients, cells become hypoxic and eventually die (necrosis). VEGF is the signaling pathway allowing dying cells to trigger angiogenesis and attract blood vessels improving the nutrient supply. TRA and DOX disturb the regular cell cycle, prohibiting proliferation and favoring transitions into hypoxic or dead states.
Assuming no chemical interactions between the different substances, each of the four substances diffuses independently in the simulation domain and may naturally decay over time. The parameters depend on the substance under consideration. The tumor cells and blood vessels act as source and sink terms in the continuum models, and we denote them as \(c(\vec{x},\vec{\alpha})\) and \(v(\vec{x},\vec{\beta})\), respectively, where \(\vec{\alpha}\) and \(\vec{\beta}\) are parameter vectors. Both functions are effectively a sum
of \(\delta\)-distributions with strictly positive coefficients, e.g., \(c(\vec{x},\vec{a})=\sum_{j}\alpha_{j}\delta(\vec{x}-\vec{x_{j}})\) and \(v(\vec{x},\vec{\beta})=\sum_{j}\beta_{j}\delta(\vec{x}-\vec{x_{j}})\) with \(\alpha_{j},\beta_{j}>0\) (more details for the coupling in Section 3.4). The equations governing the continuum model are
\[\left(\frac{\partial}{\partial t}-\nabla\cdot D_{n}\nabla+\lambda _{n}\right)u_{n} =-u_{n}c_{\vec{a}_{n}}+(1-u_{n})v_{\vec{\beta}_{n}} \text{with}\quad\vec{n}\cdot D_{n}\nabla u_{n} =0\ \ \text{on}\ \ \partial\Omega\, \tag{23}\] \[\left(\frac{\partial}{\partial t}-\nabla\cdot D_{v}\nabla+ \lambda_{v}\right)u_{v} =+(1-u_{v})c_{\vec{a}_{v}}-u_{v}v_{\vec{\beta}_{v}} \text{with}\quad\vec{n}\cdot D_{v}\nabla u_{v} =0\ \ \text{on}\ \ \partial\Omega\,\] (24) \[\left(\frac{\partial}{\partial t}-\nabla\cdot D_{d}\nabla+ \lambda_{d}\right)u_{d} =-u_{d}c_{\vec{a}_{d}}+\varphi(t)\ (1-u_{d})v_{\vec{\beta}_{d}} \text{with}\quad\vec{n}\cdot D_{d}\nabla u_{d} =0\ \ \text{on}\ \ \partial\Omega\,\] (25) \[\left(\frac{\partial}{\partial t}-\nabla\cdot D_{t}\nabla+ \lambda_{t}\right)u_{t} =-u_{t}c_{\vec{a}_{t}}+(1-u_{t})v_{\vec{\beta}_{t}} \text{with}\quad\vec{n}\cdot D_{t}\nabla u_{t} =0\ \ \text{on}\ \ \partial\Omega. \tag{26}\]
Here, the diffusion and decay constants are labeled as \(D_{i}\) and \(\lambda_{i}\), respectively. The terms \((u_{i})\) and \((1-u_{i})\) ensure that the concentration remains bounded by zero and one. They also indicate whether the tumor cells and the blood vessel act as a source or a sink. For instance, tumor cells secrete VEGF but consume nutrients, DOX, and TRA. For the blood vessels, the opposite is true, i.e., they consume VEGF but provide nutrients, DOX, and TRA. The factor \(\varphi(t)\) was defined in Eq. (21) and (22).
The equations are solved with a finite difference scheme on a cube domain. We use the forward difference in time and the central difference in space, a scheme commonly abbreviated as FTCS scheme. Labeling the points in the (isotropic) lattice with the triple \((i,j,k)\) and denoting the concentration at this point at time step \(n\) as \(u_{i,j,k}^{n}\), the discrete stencil computation to evolve the continuum models over time is given by
\[u_{i,j,k}^{n+1}= (1-\lambda\Delta t)u_{i,j,k}^{n}\] \[+\frac{D\Delta t}{h^{2}}\cdot\left(u_{i+1,j,k}^{n}+u_{i-1,j,k}^{n }+u_{i,j+1,k}^{n}+u_{i,j-1,k}^{n}+u_{i,j,k+1}^{n}+u_{i,j,k-1}^{n}-6u_{i,j,k}^{ n}\right) \tag{27}\] \[+\Delta t(1-u_{i,j,k}^{n})\cdot A_{+}(i,j,k)-\Delta t(u_{i,j,k}^{ n})\cdot A_{-}(i,j,k)\,\]
where \(h\) is the grid size, i.e., the distance between neighboring points. \(A_{+}\) and \(A_{-}\) characterize all agent source and sink terms, respectively. For the discrete setting, the distribution \(\delta(\vec{x})\) equals one if the grid point \((i,j,k)\) is the closest one, and is zero otherwise. In other words, if the point \(\vec{y}\) is labeled as \((l,m,n)\), the distribution \(\delta(\vec{x}-\vec{y})\) reduces to a product of Kronecker deltas \(\delta_{il}\delta_{jm}\delta_{ln}\). For stability, the time step \(\Delta t\) is bounded from above by
\[\left(\lambda+\frac{12D}{h^{2}}\right)\Delta t\leq 2\, \tag{28}\]
which follows from a standard stability analysis of the finite difference scheme.
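To illustrate the scheme, the following sketch performs one FTCS update, Eq. (27), for an interior grid node and computes the largest stable time step according to Eq. (28); grid handling and the agent terms \(A_{\pm}\) are reduced to plain arguments, and the names are ours.

```cpp
// One FTCS update, Eq. (27), for an interior grid node (i, j, k). The six
// neighbor values are passed explicitly; A_plus and A_minus collect the agent
// source and sink contributions at this node.
double FtcsUpdate(double u, double u_xp, double u_xm, double u_yp, double u_ym,
                  double u_zp, double u_zm, double D, double lambda, double h,
                  double dt, double A_plus, double A_minus) {
  double laplace =
      (u_xp + u_xm + u_yp + u_ym + u_zp + u_zm - 6.0 * u) / (h * h);
  return (1.0 - lambda * dt) * u + D * dt * laplace +
         dt * (1.0 - u) * A_plus - dt * u * A_minus;
}

// Largest time step satisfying the stability bound
// (lambda + 12 D / h^2) * dt <= 2 of Eq. (28).
double MaxStableTimeStep(double D, double lambda, double h) {
  return 2.0 / (lambda + 12.0 * D / (h * h));
}
```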
Figure 3: Exemplary evolution of the DOX supply \(\varphi(t)\) during a treatment protocol. In this illustration, one may expect roughly 75% more DOX to be delivered via the vasculature compared to no TRA treatment. Note that \(\varphi(t)\) remains unaffected by DOX.
### Coupling of ABM and continuum
The continuum and the agent-based model are coupled: agents influence the evolution of the continuum via the source and sink terms, and the continuum values determine how the tumor cells progress in the cell cycle and drive the growth of the vasculature. The latter interactions have been detailed in Sections 3.1 and 3.2. For both agent types, the location of the agent's center \(\vec{x}\) is identified and the closest grid value \(u_{i,j,k}\) is retrieved whenever we evaluate probabilities that depend on the concentration of nutrients, VEGF, DOX, or TRA.
A detailed explanation of the source and sink terms is warranted. We denote the set of all tumor cells as \(\mathcal{T}\) and the set of all blood vessel agents as \(\mathcal{V}\). For the tumor cells, an agent \(i\) is identified by its center coordinate \(\vec{x}_{i}\). The function \(c(\vec{x},\vec{a_{i}})\) (\(i=n,v,d,t\)) takes on the general form
\[c(\vec{x},\vec{a_{i}})=\sum_{k\in\mathcal{T}}\alpha_{i,k}\delta(\vec{x}-\vec{x }_{k})\, \tag{29}\]
where the coefficients \(\alpha_{i,k}\) are functions modeling a dependence on the agents' attributes. For instance, the consumption of nutrients could be chosen to be proportional to the cell size (surface or volume). Ultimately, only the VEGF model leverages the function character of the coefficients because only hypoxic cells secrete VEGF. Thus, the coefficients read
\[\alpha_{v,k}=\alpha_{v}\cdot\begin{cases}0\text{ if }s_{k}\neq H\\ 1\text{ if }s_{k}=H\end{cases}\, \tag{30}\]
with the positive, scalar parameter \(\alpha_{v}\). For all other continua (\(i=n,d,t\)), we reduce the coefficients to a scalar dependence
\[c(\vec{x},\vec{a_{i}})=\alpha_{i}\sum_{k\in\mathcal{T}}\delta(\vec{x}-\vec{x}_{k})\, \tag{31}\]
indicating that the source and sink terms only depend on the presence of the agent but not on any further attributes. We note that dead cells are exempt from the interactions because they neither consume nutrients nor secrete VEGF.
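A minimal sketch of how an agent's source or sink contribution is deposited on the closest grid node (the discrete \(\delta\)-distribution described above) could look as follows; the flat array layout, the clamping, and all names are ours and only serve as an illustration.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

// Deposit an agent's source/sink coefficient on the grid node closest to its
// center, i.e., the discrete delta reduces to a product of Kronecker deltas.
// 'origin' is the corner of the cube domain, 'h' the grid spacing, and 'n'
// the number of nodes per dimension (illustrative layout).
void DepositOnClosestNode(const Vec3& x, double coefficient,
                          const Vec3& origin, double h, int n,
                          std::vector<double>& field) {
  std::array<int, 3> idx;
  for (int d = 0; d < 3; ++d) {
    int i = static_cast<int>(std::lround((x[d] - origin[d]) / h));
    idx[d] = std::min(std::max(i, 0), n - 1);  // clamp to the domain
  }
  field[(idx[2] * n + idx[1]) * n + idx[0]] += coefficient;
}
```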
For the blood vessel, we reduce an agent to a number of points along the cylinder's center line. For a vessel-agent \(i\), the number of points \(m_{i}\) is determined at each time step based on its length \(l_{i}\) and the smallest grid constant \(h_{j}\) (\(j=n,v,d,t\)) as
\[m_{i}=\max\left(3,\text{ceil}\left(\frac{2l_{i}}{\min_{j}h_{j}}+1\right)\right)\, \tag{32}\]
where \(\text{ceil}:\mathbb{R}\rightarrow\mathbb{N}\) maps a real number to the next largest integer. For convenience, we define a line-\(\delta\) (\(\delta_{l}\)) for the vessel-agent \(i\) as
\[\delta_{i,l}(\vec{x})=\frac{2\pi r_{i}l_{i}}{m_{i}}\sum_{k=1}^{m_{i}}\delta( \vec{x}-\vec{x}_{k,m_{i}})\, \tag{33}\]
where the discretization points are given by
\[\vec{x}_{k,m_{i}}=\vec{x}_{i}^{\,s}+(k+1)\frac{\vec{x}_{i}^{\,e}-\vec{x}_{i}^{\,s}}{m_{i}}. \tag{34}\]
Here, \(\vec{x}_{i}^{\,s}\) and \(\vec{x}_{i}^{\,e}\) denote the start and end point of the cylinder's center line. Note that the line-\(\delta\) automatically scales the source and sink terms with the agent's surface and distributes the total contribution evenly between the points. The terms are eventually given by
\[v(\vec{x},\vec{\beta_{j}})=\sum_{i\in\mathcal{V}}\beta_{j,i}\cdot\delta_{i,l}( \vec{x})=\beta_{j}\sum_{i\in\mathcal{V}}\delta_{i,l}(\vec{x})\, \tag{35}\]
where we again simplify \(\beta_{j}\) to have no dependence on the agent's attribute. However, the coefficients \(\beta_{d}\) and \(\beta_{t}\) involve a time dependence; they are only non-zero during the DOX and TRA treatment. The coupling concludes our model description. While the individual modules are fairly simple, the number of modules and their interactions form a complex system parameterized by many parameters. We discuss the parameter choices and experimental setup in the next section.
## 4 Experimental Setup, Parameter Choice, and Reproducibility
In this section, we discuss the initialization of the simulation, the choice of parameters, and how to reproduce the findings of the present work. We begin by detailing the initial vasculature and tumor setup. We continue with different parameter sets. First, we discuss the parameters for the cell geometries and the forces between them. We proceed with the parameters related to the stochastic cell cycle and angiogenesis. Lastly, we discuss the continuum and coupling parameters. All parameters are given in B. We conclude the section by explaining how to reproduce the computational experiments presented in Sections 5 and 6.
### Initial Simulation State
At the beginning of the simulation, the tumor cells, the vessels, and the continua are initialized to specific configurations. For the tumor cells, we randomly place cells in a sphere centered around the origin. The radius of the sphere is determined from the number of cells such that the resulting spheroid is randomly but densely packed:
\[r_{spheroid}^{3}=\frac{N}{0.64}r_{cell}^{3}. \tag{36}\]
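As an illustration, the spheroid radius of Eq. (36) and a random placement of \(N\) cells inside that sphere can be sketched as follows (rejection sampling; the factor 0.64 corresponds to random close packing). The sketch ignores the overlap resolution that the force model provides later, and all names are illustrative.

```cpp
#include <array>
#include <cmath>
#include <random>
#include <vector>

using Vec3 = std::array<double, 3>;

// Place N cells uniformly inside a sphere whose radius follows Eq. (36),
// r_spheroid^3 = N / 0.64 * r_cell^3 (random close packing).
std::vector<Vec3> InitialSpheroid(int N, double r_cell, std::mt19937& rng) {
  double r_spheroid = std::cbrt(static_cast<double>(N) / 0.64) * r_cell;
  std::uniform_real_distribution<double> uniform(-r_spheroid, r_spheroid);
  std::vector<Vec3> positions;
  positions.reserve(N);
  while (static_cast<int>(positions.size()) < N) {
    Vec3 x{uniform(rng), uniform(rng), uniform(rng)};
    double r2 = x[0] * x[0] + x[1] * x[1] + x[2] * x[2];
    if (r2 <= r_spheroid * r_spheroid) positions.push_back(x);  // rejection
  }
  return positions;
}
```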
In application, we use data from previous work [61; 62] to define the initial structure of the vessels. The data covers a volume of roughly \(1\times 1\times 2\ mm\) and serves as the starting point of our angiogenesis simulation. For the continua, we enforce homogeneous Neumann boundaries and initialize the substances to a constant value (usually 0).
The initial state of the simulation is depicted in Fig. 4. The figure shows a tumor spheroid that is surrounded by the data-realistic vasculature. While the size of the tumor cells as well as the size and structure of the vasculature are motivated by data, the relative positioning of tumor and vasculature is unspecified. Here, the tumor spheroid is placed in the empty center between the vessels; however, positioning the tumor at any other place within the simulation space is a valid choice as well.
Figure 4: Initial configuration of the simulation. The tumor cells are shown in green independent of their state. The color of the vessel indicates the diameter; segments with larger diameters are shown in dark red, small diameters are shown in light red.
### Cell and Force Parameters
The tumor cells in our model describe BT-474 breast cancer cells as used in previous ABM studies [16; 18; 17; 15] and the related pre-clinical study [20]. The nuclear, regular, and action radii of the cells are chosen as \(5.296\mu m\), \(9.953\mu m\), and \(12.083\mu m\), respectively [15]. See Tab. B.5. The parameters determining the strength of the repulsive and adhesive force components are taken from [17], including the viscosity parameter found in the related source code. See Tab. B.8.
### Cell Cycle and Angiogenesis Parameter
Lima et al. [17] calibrated their BT-474 breast cancer cell ABM with experimental data. Our cell cycle shares the deterministic and some of the stochastic transitions with their model. Here, we use their calibrated parameters as a starting point. We additionally obtained effective parameters from exponential fits of Sorace's data [20] to further guide our parameter choices. These parameters describe the effective behavior of a simple exponential surrogate, i.e., they merge the effects of all transitions on the evolution of the number of tumor cells into two numbers. The cell-cycle parameters are summarized in Tab. B.6.
While adopting common ideas from earlier work (tip cells following the gradient), the angiogenesis model is novel and, thus, parameters can hardly be derived from the literature. Phillips and co-workers [63] recently developed an ABM for angiogenesis and calibrated it with data from an experiment specifically designed for this scenario. They found that the sprouts extend with a speed of roughly \(2\frac{\mu m}{h}\), which we adapt for this work. All other parameter choices for vessels and the angiogenesis algorithm are summarized in Tab. B.4.
### Continuum Parameter
In this work, we consider four substances: the nutrients, VEGF, DOX, and TRA. According to Eq. (23), they obey reaction-diffusion equations, and their dynamics are determined by their respective diffusion coefficients \(D_{x}\) and the decay constants \(\lambda_{x}\), where \(x\) labels the substances. None of these constants are directly available.
#### 4.4.1 Diffusion Coefficients
To overcome this limitation, we first adopt Lima and coworkers' [17] estimate for the value of the nutrient diffusion \(D_{n}=50\frac{\mu m^{2}}{h}\). Intuitively, diffusion is a passive transport phenomenon and larger, heavier objects should diffuse more slowly. Einstein [64] showed that the diffusion coefficient is inversely proportional to the radius of the diffusing particles, i.e., \(D\sim r^{-1}\). Since it is non-trivial and beyond the scope of this work to define an effective radius for complex protein structures, we work with the simple hypothesis that the mass of a molecule or a protein structure scales cubically with the radius, e.g., \(m\sim r^{3}\). We conclude that \(D\sim m^{-1/3}\). Thus, if the masses of two particles are related by the relationship \(m_{2}=\alpha m_{1}\), the respective diffusion coefficients behave as \(D_{2}=\alpha^{-1/3}D_{1}\). The masses of all four diffusing molecules are known and, together with Lima's estimate for \(D_{n}\), we compute estimates for the remaining diffusion constants. The masses, values for \(\alpha\), and the diffusion constants are given in Tab. (1).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & molecular mass [\(\frac{g}{mol}\)] & \(\alpha\) & \(\alpha^{-1/3}\) & \(D_{x}\) in [\(\frac{\mu m^{2}}{h}\)] \\ \hline Glucose (nutrients) & 180 & 1 & 1 & 50.0 \\ VEGF (monomer) & \(19.3\cdot 10^{3}\) & 107 & 0.21 & 10.5 \\ VEGF (dimer) & \(38.6\cdot 10^{3}\) & 214 & 0.16 & 8.0 \\ DOX & 543 & 3 & 0.69 & 42.5 \\ TRA & \(145\cdot 10^{3}\) & 806 & 0.11 & 5.5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The table shows 1) the molecular mass of the diffusing molecules and proteins, 2) the factor \(\alpha\) expressing the mass in terms of the glucose mass, and 3) the scale factor \(\alpha^{-1/3}\) for the diffusion coefficients.
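The scaling argument can be reproduced with a few lines; the sketch below evaluates \(D_{x}=(m_{x}/m_{\text{glucose}})^{-1/3}D_{n}\) for the molecular masses listed in Tab. 1, with the reference value \(D_{n}=50\,\mu m^{2}/h\), and is purely illustrative.

```cpp
#include <cmath>
#include <cstdio>

// Diffusion coefficients from the D ~ m^{-1/3} scaling relative to glucose.
int main() {
  const double D_n = 50.0;         // nutrient (glucose) diffusion, um^2/h
  const double m_glucose = 180.0;  // g/mol
  struct Species { const char* name; double mass; };
  const Species species[] = {{"VEGF (monomer)", 19.3e3},
                             {"VEGF (dimer)", 38.6e3},
                             {"DOX", 543.0},
                             {"TRA", 145.0e3}};
  for (const auto& s : species) {
    double scale = std::pow(s.mass / m_glucose, -1.0 / 3.0);  // alpha^{-1/3}
    std::printf("%-16s alpha^{-1/3} = %.2f  D = %5.1f um^2/h\n", s.name, scale,
                D_n * scale);
  }
  return 0;
}
```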
#### 4.4.2 Decay, Source, and Sink Terms
For the continua, the decay parameter and the tumor cell sink terms both lead to an exponential decay. Ignoring the diffusion and sink terms, the update rule for the substance concentration at a given position reads
\[u^{n+1}=(1-\lambda^{\prime}\cdot dt)u^{n}. \tag{37}\]
If we add the sink terms of \(N\) tumor cells, we obtain the relationship
\[u^{n+1}=\left(1-\lambda\cdot dt-\left(\sum_{i=1}^{N}\bar{r}_{i}\right)dt \right)u^{n}=\left(1-\lambda\cdot dt-\left(\sum_{i=1}^{N}\frac{r_{i}}{dx\cdot dy \cdot dz}\right)dt\right)u^{n}\, \tag{38}\]
where we rewrite the consumed concentration \(\bar{r}\) in terms of the consumed amount \(r\). Acknowledging that \(r_{i}\) is independent of the tumor cell under consideration, we use a homogenization approach to rewrite the previous equation in terms of the tumor cell density \(\rho\):
\[u^{n+1}=\left(1-\lambda\cdot dt-\frac{rN}{dx\cdot dy\cdot dz}dt \right)u^{n}=\left(1-\lambda\cdot dt-r\cdot\rho(x,y,z)dt\right)u^{n}. \tag{39}\]
Comparing the previous equations yields
\[\lambda^{\prime}=\lambda+N\bar{r}=\lambda+r\rho(x,y,z)\, \tag{40}\]
showing that our model effectively exhibits a global decay of the substances as well as a local decay depending on the density distribution of the tumor cells. Furthermore, (40) allows us to compare the parameters \(\lambda\) and \(r\) with data and with simple exponential surrogates \(\lambda^{\prime}\), separating effects that depend on the presence of tumor cells from those that do not.
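A minimal sketch of the resulting single-voxel update (diffusion between voxels omitted); the numerical values are illustrative only and are not calibrated parameters.

```python
def update_concentration(u, lam, sink_rate, dt):
    # One explicit Euler step of Eq. (39) in a single voxel: global decay
    # (lam) plus a local sink term sink_rate = r * rho(x, y, z).
    return (1.0 - (lam + sink_rate) * dt) * u

# Illustrative numbers only (time unit: minutes).
dt, lam, sink_rate = 1.0, 1e-5, 2e-4
u = 0.5
for _ in range(60):                 # one hour of decay and consumption
    u = update_concentration(u, lam, sink_rate, dt)

# The same hour expressed through the effective constant of Eq. (40).
lam_eff = lam + sink_rate
print(u, 0.5 * (1.0 - lam_eff * dt) ** 60)   # both expressions agree
```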
DOX engages in different chemical reactions whose effect we model with the decay constant and the sink terms. According to the FDA [65], DOX shows a terminal half-life of 20 to 48 hours. Data for TRA suggest a dose-dependent half-life of 1.7 to 28 days [66; 67]. These data imply decay constants of \(\lambda^{\prime}_{d}\in[14.4\cdot 10^{-3},34.7\cdot 10^{-3}]h^{-1}\) and \(\lambda^{\prime}_{t}\in[1.0\cdot 10^{-3},17.0\cdot 10^{-3}]h^{-1}\), respectively. However, the data stems from generic experiments and is not necessarily representative. Lima and coworkers [57] recently calibrated their ODE model to fit the same data that we consider here and obtained the decay constants \(\lambda^{\prime}_{d}=12.0\cdot 10^{-3}h^{-1}\) and \(\lambda^{\prime}_{t}=20.0\cdot 10^{-3}h^{-1}\). For DOX, their estimate is slightly below the data range, and for TRA slightly above.
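For a simple exponential surrogate, the reported half-lives translate into decay constants via \(\lambda^{\prime}=\ln 2/t_{1/2}\); the short sketch below reproduces the ranges quoted above.

```python
import math

def decay_constant_per_hour(half_life_hours):
    # lambda' = ln(2) / t_half for a simple exponential surrogate.
    return math.log(2.0) / half_life_hours

# DOX: terminal half-life of 20 to 48 hours (FDA label).
print([round(decay_constant_per_hour(t), 4) for t in (48.0, 20.0)])
# -> [0.0144, 0.0347] 1/h, i.e. [14.4e-3, 34.7e-3] 1/h

# TRA: dose-dependent half-life of 1.7 to 28 days.
print([round(decay_constant_per_hour(24.0 * d), 4) for d in (28.0, 1.7)])
# -> [0.001, 0.017] 1/h, i.e. [1.0e-3, 17.0e-3] 1/h
```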
We note that DOX is unspecific by nature, while TRA only interacts with the HER2 receptor of the tumor cells. We further consider the initial vasculature and the regular tissue, which is not explicitly modeled, to be in equilibrium. Consequently, the excess nutrients supplied via the new vasculature should primarily be consumed by the tumor cells (see also the Warburg effect [68; 69]). Using these findings and further values from [17], we summarize all our parameter choices regarding the continua in Tab. 7 and 9.
### Reproducibility
To ensure transparency and reproducibility, we share all source code and data used in the project2. To reproduce the results of the following sections (i.e., Sections 5 and 6), we need to fix four key components:
Footnote 2: available after final publication, currently upon request
* the version of the BioDynaMo source code,
* the version of the application source code including all possible changes,
* the parameters used for the simulation, and
* the postprocessing pipeline.
We share two repositories on GitHub: (1) the repository TobiasDuswald/angiogenesis contains the application code, and (2) the repository TobiasDuswald/bdm-angiogenesis-reproducer contains the parameters, possible patches to the source code, and postprocessing routines. We structured the latter repository such that it contains one folder in "experiments/*" for each result shown in the main text. The respective folders have the information to adequately initialize the code and parameters for the simulation runs. For convenience, the initialization, build process, execution, and post-processing of each computational experiment is wrapped in bash scripts. Thus, to reproduce any figure in the main text, the reader only needs to run a single bash script. Note, however, that BioDynaMo simulations are not bit-reproducible at the time of writing.
## 5 Results
In this section, we present the results of our computational experiments. They are ordered such that their complexity gradually increases. We begin with simulating the growth of tumor spheroids in the absence of any vasculature and treatment in Section 5.1. Next, we investigate different vascular patterns arising from the angiogenesis algorithm in Section 5.2. Afterwards, we demonstrate the fully coupled model by simulating the vascular growth and treatment. We first establish a clear and visual understanding of the different phases of the simulation by showing a conceptual simulation in Section 5.3. In Section 5.4, we then focus on the key quantity of interest, the tumor volume, and show how it evolves over time for different treatment scenarios. We qualitatively compare these results to Sorace and co-workers' observations [20; 57]. For all simulations involving the vasculature, we consider the initial setup described in the previous section and summarized in Fig. 4.
### Tumor Spheroids and the Hypoxic Threshold
In this subsection, we consider the growth of a tumor spheroid ignoring vasculature and treatment protocols (\(u_{v}=u_{d}=u_{t}=0\)). To initialize the simulation, we set the nutrients to a constant value and employ Dirichlet boundaries with the same, constant value; i.e., \(u_{n}(t=0)=0.5\) and \(u_{n}(t)|_{\partial\Omega}=0.5\). In the absence of vasculature, the Dirichlet boundary conditions effectively act as nutrient supply. Here, we restrict ourselves to investigating the hypoxic threshold \(u_{n}^{H}\) because earlier 2D studies showed that this parameter has significant influence on the number of proliferative cells driving the tumor growth (17, Fig. 6B). The parameter marks the transition from the quiescent to the hypoxic state and, thus, prohibits cell proliferation in regions with insufficient nutrient availability (\(u_{n}<u_{n}^{H}\)). In other words, one may say that the hypoxic threshold defines hypoxic and proliferative regions via a level-set function on \(u_{n}\). For this experiment, we place 500 tumor cells in the center of our cubical simulation domain and simulate the growth of the tumor for 200 days with a time step of one minute for different hypoxic thresholds \(u_{n}^{H}\in\{0.15,0.13,0.12,0.11\}\).
Figure 5 shows the spheroid over time for the largest hypoxic threshold \(u_{n}^{H}=0.15\). The cells are initially in proliferative states, consume nutrients, and the spheroid grows. Over time, a hypoxic region in the center of the spheroid begins to form. The border between hypoxic and proliferative regions is implicitly visualized as the sharp transition zone between proliferative (yellow, green) and hypoxic (grey, dark blue) tumor cells. The time-dependent border may be defined as a hypersurface satisfying \(u_{n}(x,t)=u_{n}^{H}\). Its dynamics together with the spatial structure of the tumor effectively determine the evolution of the spheroid. The more tumor cells lie outside the hypersurface, the more cells participate in the exponential cell proliferation. Parameters such as the nutrient consumption and the hypoxic threshold define the shape and dynamics of this surface. The tumor growth dynamics effectively come down to a race between the spatial tumor extent and the hypersurface, i.e., whether the hypersurface can spread faster than the tumor and eventually enclose the entire spheroid to stop the growth. This is the case for \(u_{n}^{H}=0.15\) where the tumor eventually stops growing and dies off, see Fig. 5 on the right.
With decreasing \(u_{n}^{H}\), we observe vastly different growth patterns in Fig. 6. We display the spheroids for the remaining hypoxic thresholds (\(u_{n}^{H}\in\{0.13,0.12,0.11\}\)) at specific time points in the simulation (i.e., 200, 165, and 145 days). We further show the dynamics with a graph of the cell numbers in different states over time. In Fig. 6, each row corresponds to a hypoxic threshold. In contrast to Fig. 5, the hypersurface cannot cover the entire spheroid and the growth never stops completely. For instance, for \(u_{n}^{H}=0.13\) (Fig. 6, top row), almost all cells transition into the hypoxic state after roughly 50 days. However, some cells on the surface still lie outside the hypoxic regions and continue to proliferate (similar to the fourth spheroid in Fig. 5). These few cells eventually move on to form satellite tumors on the surface of the original spheroid. Further lowering the threshold to \(u_{n}^{H}=0.12\) (Fig. 6, middle row), more and more cells on the surface remain in the proliferative states and larger parts of the spheroids are covered with proliferative cell populations. Once we reach \(u_{n}^{H}=0.11\) (Fig. 6, bottom row), the entire surface remains proliferative throughout the simulation and the tumor shows the characteristic proliferative hull around a necrotic core. Overall, the graphs clearly show that a lower hypoxic threshold leads to stronger proliferation. It is interesting to note that the stochasticity of the system breaks its symmetry for \(u_{n}^{H}\in\{0.13,0.12\}\). After investigating the dynamics of the tumor growth, we now shift our attention to the development of the vasculature via sprouting angiogenesis.
sparsity pattern of the network. The weights combining the gradient information, randomness, and previous growth direction allow us to interpolate between a random walk and smooth curves along the gradient. The VEGF and gradient thresholds put hard limits on the signal strength that a vessel agent has to sense to form a sprout. Here, we want to focus on the parameter that is easily overlooked.
The term coupling the vessels to the VEGF concentration is the most influential parameter on the structure of the vasculature. While this effect is hard to quantify, we depict the final vasculature of the simulation with and without the coupling term in Fig. 7 (all other parameters are kept identical). It is evident that the vasculature differs significantly for the two scenarios. Without the coupling, the vessels simply grow towards the tumor center in relatively straight lines. We also note that even if the vessels branch, they follow almost the same path. In contrast, the coupling term removes VEGF from the vessel's vicinity and, thus, it locally changes the gradient field, encouraging new sprouts to grow away from their parent vessel, avoid other vessels growing towards the tumor, and search for alternative paths to revive the hypoxic regions. The coupled simulation produces a significantly more diffuse network in which vessels surround the tumor rather than growing towards its center. In experimental work [70], researchers showed that typically the blood flows on the outskirts of the tumor spheroid, indicating that the tumor-surrounding, diffuse network is more realistic than the one ignoring the coupling. It is worth noting that the diffuse growth process also produces vessels that, over time, grow into VEGF-rich regions other than the tumor core. Our model does not prune such growth; however, pruning mechanisms such as those suggested in [62] are beyond the scope of the present work.
### Vascular Tumor Growth and Treatment
Combining the previous sections, we now aim to simulate vascular tumor growth and treatment. We initialize the simulation with 1000 tumor cells in the center of the vasculature, see Fig. 4. We start the simulation with no nutrients, i.e., all cells take the hypoxic state and secrete VEGF. We choose to couple the vessels to the VEGF field to achieve a more realistic, tumor-surrounding micro-vasculature. The vasculature develops around the tumor over time and we subsequently expect the tumor to grow into the surrounding, nutrient-rich regions. At day 70, we turn off the vessel growth algorithm to avoid growth resulting from sprouts that may not hit the stopping criteria. We let the tumor proliferate until day 102 on which we trigger the treatment. We simulate the treatment by turning on the TRA and DOX source terms between days 102-104 and 106-108, respectively. We remark that the time scales are somewhat arbitrary; we choose them such that the tumor covers the newly vascularized region before the treatment begins.
In Fig. 8, we show the evolution of the simulation over time. While we start with only 1000 tumor cells, the number increases by a factor of 5 until the treatment begins. Figure 8 (a) shows the simulation briefly after the initialization. All tumor cells are in the hypoxic state and secrete VEGF displayed in blue. The dynamics of VEGF are encapsulated in the PDE (Eq. (24)) and, consequently, it diffuses from the tumor into the surrounding regions. After some time that primarily depends on the secreted amount and the diffusion coefficient, VEGF reaches the vasculature and, in reaction, sprouts form and move along the gradient towards the tumor in the center. The new vasculature supplies nutrients, and once the nutrients and vessels reach the spheroid, cells on the surface begin transitioning into proliferative states (Fig. 8 (b)). Afterward, the vasculature keeps developing until terminated at day 70 (Fig. 8 (c)). The tumor evolves in the final vasculature with its proliferative ring until it reaches its largest size in Fig. 8 (d).
Figure 5: Evolution of the tumor spheroid for \(u_{n}^{H}=0.15\). Time progresses from left to right. The colors indicate the state: yellow (Q), bright green (G1), dark green (SG2), gray (H), and black (D).
Figure 6: Evolution of tumor spheroids. The rows correspond to different hypoxic thresholds \(u_{n}^{H}\in\{0.13,0.12,0.11\}\) (top to bottom). The first column shows all cells, the second column adds a cut-out revealing the inner structure, and the last column shows the number of cells in different states over time. The spheroids for the hypoxic thresholds 0.13, 0.12, and 0.11 are shown after different simulation times, i.e., after 200, 165, and 145 days, respectively. The boundaries start to affect the simulation afterwards. The colors indicate the state: yellow (Q), bright green (G1), dark green (SG2), gray (H), and dark blue (D).
It is interesting to note that the model favors growth along the vessels because these regions provide more nutrients and the cells are more likely to transition into the proliferative state SG2. Thus, more tumor forms in the well-vascularized regions. Admittedly, this is not immediately evident in Fig. 8; however, the bottom region in (d) is poorly vascularized and one can see that the hypoxic cells reach all the way to the surface of the tumor spheroid (d/e). Furthermore, considering (b), we observe a fairly dense network on the right side of the tumor resulting in an outgrowth to the right in (c). Overall, the model shows expected characteristics and significant similarities to images of 3D _in vitro_ studies [71].
After the tumor has formed, we trigger the treatment with TRA and DOX. Figure 8 (e) shows the tumor and the concentration of TRA (purple low, blue high) created by the vessel source terms. After the treatment, tumor cells stop proliferation and begin transitioning into the dead state inhibiting further tumor growth. The effects of the treatment are investigated in the following section.
### Treatment Comparison
Recall that a principal goal of this investigation is to build a model to better understand the combination of DOX and TRA for breast cancer treatment. In this section, we initialize the simulation as in the previous two experiments and Fig. 8 depicts the different simulation stages. Here, we selected the treatment parameters such that our model accurately describes Jain's hypothesis [19] and matches the trends in the data [20]. For the treatment, we allocate three slots between the days 108-109, 110-111, and 112-112. During these intervals, the vasculature acts as source terms for the cancer drugs. We consider four treatment scenarios relating to the treatment groups 2, 3, 4, and 5 of Fig. 1, i.e., DOX only, TRA only, TRA followed by DOX, and DOX followed by TRA. For each treatment scenario, we run 10 simulation runs to account for the inherent stochasticity and plot the mean and standard deviation of the number of cells in different states over time. The results are depicted in Fig. 9. All simulations use the identical set of parameters and only differ in the treatment schedule.
Figure 9 (a) shows the treatment effects when we only apply DOX. In our simulations, this protocol is ineffective and the tumor growth is barely disturbed. After the application, the quiescent cells show a slight decline but they quickly recover. Recall that DOX has a short half-life and, thus, long-term changes are not readily observed. The unaffected tumor growth agrees qualitatively with the data in Fig. 1 (b).
TRA has a significantly longer half-life and we expect to see long-term effects. In Fig. 1 (c), the data shows that the TRA treatment stalls the tumor growth. Our simulations show the same pattern, e.g., in Fig. 9 (b), the tumor stops growing. This effect is modeled with the \(Q\to SG2\) suppression through TRA.
Next, we consider the scenario in which we first apply TRA and subsequently supply DOX - the test case for Jain's hypothesis. In Fig. 9 (c), right after DOX is applied, we observe a sharp decline in quiescent cells and a strong increase in the number of dead cells. Here, the dose is significantly more effective than if only DOX is applied. This is
Figure 7: Simulated vasculature (a) with and (b) without vessels acting as sink terms. The tumor cells are shown in green independent of their state. The color of the vessel indicates the diameter; segments with larger diameters are shown in dark red, small diameters are shown in light red.
Figure 8: Visualization of a full simulation run with tumor cells colored by their cell state: Q (yellow), SG2 (dark green), G1 (light green), H (gray), and D (black). (a) Initial hypoxic population secreting VEGF (blue) to trigger angiogenesis. (b) First vessels reach the tumor surface and supply nutrients (green) leading to cells taking proliferative states on the surface. (c) Final state of the vasculature, i.e., we deactivated the vessel growth algorithm at this point. (d) Final tumor before treatment initialization. (e) Early stage of the treatment (TRA in purple).
due to the improved supply properties of the vasculature caused by the preceding TRA treatment allowing more DOX to enter the system. Among all the simulated scenarios, the TRA-TRA-DOX treatment shows the strongest treatment effect, i.e., the number of (living) tumor cells at the end is the lowest. This result agrees with Fig. 1 (e) and Jain's hypothesis.
Lastly, we want to consider the inverse case, e.g., DOX treatment followed by TRA (Fig. 9 (d)). Our simulation results are, in this case, hard to distinguish from the case in which we solely use TRA. Unfortunately, they disagree with the data (Fig. 1 (d)) which suggests that the first dose of DOX prohibits TRA from being effective. The present model does not capture this feature. Terms involving both drugs cannot explain the observation because they do not differentiate between the order in which drugs arrive. We hypothesize that DOX either affects the vessels and therefore the supply, or effects of DOX in the cell's internals disturb the pathways used by TRA. Both effects have not been considered in the model. While there are some hints that DOX may in fact damage the microvasculature [72], this would also harm the nutrient supply and contradict the strong growth in Fig. 1 (d). Thus, we lean towards the latter and hypothesize that DOX negatively influences the way TRA can work inside the cells.
## 6 Towards Large Scale Simulations
Criticism of some ABMs has arisen because of their high computational costs and lack of scalability. Typical ABM simulations focus on small-scale systems and cannot simulate medically relevant sizes. However, recent advances in ABM software [13; 14] address many of the computational bottlenecks and provide the foundation for scaling up simulations. Leveraging these optimizations through the BioDynaMo API, we here demonstrate that our model and the associated C++ code can handle medically relevant system sizes by reproducing the pre-treatment data of Sorace's pre-clinical study [20].
We choose a \(9\times 9\times 9mm\) simulation volume. For lack of data availability, we stochastically mimic the vascular density across the simulation volume; for details, see Appendix C. In the pre-clinical study [20], the researchers injected 10 million tumor cells into rodents. To agree with the average tumor volume observed on day seven, we
Figure 9: Evolution of the number of tumor cells in the different states for different treatment protocols. The line of the _living_ cells add up all states but the dead cells. The line of the _total_ number of cells further includes the dead cells.
initialize our simulation with 6 million tumor cells in a spheroid. We focus on the pretreatment stage and simulate from day 7 to 34 with a timestep of 10 minutes. In other words, we simulate 27 days with 3888 time steps. We discretize the continua with \(22.5\times 22.5\times 22.5\mu m\) voxels (roughly the size of a tumor cell).
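The bookkeeping behind these numbers is simple arithmetic; the sketch below recomputes the step count and grid size and, under the assumption that a cell's volume is that of a sphere with the regular cell radius from Tab. B.5, also the tumor-volume estimate from the caption of Fig. 10.

```python
import math

# Simulation horizon: day 7 to day 34 with a 10-minute time step.
days = 34 - 7
print(days * 24 * 60 // 10)            # 3888 time steps

# Continuum grid: (9 mm)^3 domain with (22.5 µm)^3 voxels.
voxels_per_axis = 9000.0 / 22.5
print(int(voxels_per_axis) ** 3)       # 400^3 = 64 million voxels

# Tumor volume from the cell count (Fig. 10): V = N * V_cell / 0.64,
# assuming spherical cells with the regular radius of 9.953 µm and a
# sphere-packing correction factor of 0.64.
v_cell = 4.0 / 3.0 * math.pi * 9.953 ** 3          # µm³ per cell
print(70.6e6 * v_cell / 0.64 * 1e-9)               # final tumor volume in mm³
```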
Figure 10 shows the tumor volume over time. In the beginning, all cells are in a hypoxic state. They then begin to stochastically transition into the dead state and no longer consume any nutrients. At the same time, the vasculature starts growing, and the available nutrients increase. From day three, we observe cells transitioning into proliferative states, and the tumor grows in some regions. Between days 10 and 15, we observe that almost as many cells are in the necrotic state as we initialized, indicating that most of the spheroid died, and the initial spheroid now forms a necrotic core. From day ten on, parts of the tumor are well supplied with nutrients after attracting the vasculature via VEGF, and we observe exponential growth. Between days 15 and 20, the number of hypoxic cells increases again. This trend suggests that the exponential growth of the tumor mass depleted newly vascularized regions, and parts of the tumor begin to die. The overall tumor dynamics seem reasonable and agree with the pre-treatment data (Tab A.2) as can be seen in the lower part of Fig. 10.
While starting with 6 million tumor cells and roughly 4.5 million vessel segments, the simulation concludes with 70.6 million tumor cells and 21 million vessel agents, respectively. The vessel volume, and therefore the vascular density, increased by a factor of roughly 5. Overall, the simulation took approximately 7.9 days on a 72-core server with 1 TB of RAM and hyper-threading (4 x Intel(R) Xeon(R) E7-8890 v3 clocking at 2.50 GHz with four NUMA domains) and the memory usage peaked around 50 GB. It appears that the force computation is responsible for most of the run time. We expect that the runtime can still be significantly reduced by optimizing the model code.
## 7 Discussion
The computational and mathematical models and algorithms described in this work appear to be capable of simulating very complex growth of vascular structures and variations in tumor volume in environments in which drug protocols are designed and orchestrated to control, minimize, or eliminate tumor growth.
Figure 10: Large scale simulation: tumor volume and cell count over time. Top: volume and number of cells according to the five cell states. Bottom: Aggregated numbers proliferating (\(SG2+G1\)), living (\(SG2+G1+Q+H\)), and total (all). Tumor volume computed as \(V_{tumor}=N\cdot V_{cell}/0.64\), with the measured number of cells \(N\) and a correction factor accounting for sphere packing. The displayed _growth data_ combines the pre-treatment stage of all treatment groups in Fig. 1.
A limitation of the model is that the healthy tissue surrounding the tumor is not considered. As long as the _in silico_ tumor floats in a vacuum, it is difficult to mimic the physical traits [24] such as stress, pressure, and stiffness. Effects, such as vasculature being damaged by forces, can hardly be modelled when the cells can escape into the empty space. Ignoring the forces between tumor cells and the vasculature is another limiting factor linked to the previous point. Without the healthy tissue surrounding the vasculature and tumor, the vasculature would be pushed away rather than forming a supply network. Future work should also model the cell death in more detail to free space for the proliferative tumor mass and the healthy tissue (see [73]). Moreover, our model neglects cell migration which has proven to have a significant impact on the tumor dynamics in theoretical studies [74; 75].
While the generated vascular networks appear to be realistic and organic, the growth does not entirely stop and vessels can begin growing in unexpected directions. In fact, vasculature grows less structured in the presence of a tumor; however, our model lacks pruning mechanisms for the vessels growing in random directions. Extending the model with a flow simulation seems a promising direction for further research and would give additional information based on which one may prune and optimize the vasculature (see, e.g., [62]).
Moving away from the vasculature, we note that the model has many parameters, including some that have not been properly calibrated with data. The number of parameters is, however, a by-product of the complexity that was targeted in this work. Adding more mechanisms to the model inevitably adds more parameters. Nonetheless, by building on similar models and their (partially calibrated) parameters [15; 16; 17; 18], we were able to find reasonable parameter choices for the model, as demonstrated by the model's ability to simulate vascular tumor growth and treatment. We were able to reproduce the qualitative reaction of HER2+ breast cancer to the combination treatment with DOX and TRA. The parameter choices and modeling approach are further justified by the fact that the model produces realistic tumor volumes in agreement with the pre-clinical study [20]. However, our model ignores effects on the tissue scale (e.g., cell death and migration) and is therefore not yet adequate for predicting quantities of interest at this scale.
The models described in this work offer many interesting opportunities. They have great potential for describing small _in vitro_ experiments (see, for instance, [71]). Such data would further help calibrate our model. For these small _in vitro_ scales, parameters may be inferred using Bayesian frameworks because the model runs fast and shared-memory parallel such that frequent model evaluation may be possible. However, even though we emphasized computational efficiency during the development, the calibration of stochastic models remains a challenging subject because the repeated evaluation of the forward model may amount to substantial run times. Recent advances in Bayesian computation [76] suggest the construction of surrogates based on Gaussian processes to reduce the number of forward simulations. The method has proven efficient for other stochastic cancer models and may help to calibrate ours. Once those parameters have been calibrated, one may run large-scale simulations and compare the results to macroscopic data, as demonstrated in this work. The model and its implementation should enable researchers to bridge scales, i.e., hypothesize phenomena on microscopic, cellular scales and compare the simulation results to macroscopic data.
## 8 Conclusion
In this work, a complex hybrid model is presented together with a performant C++ implementation. The model is shown to capture many characteristics of vascular tumor growth and, qualitatively, to describe the treatment effects of Doxorubicin and Trastuzumab on HER2+ breast cancer cells. Furthermore, the model and code can scale to tissue-relevant sizes and may therefore help future research to bridge scales, i.e., hypothesize cellular effects and test how they affect macroscopic quantities.
## Acknowledgements
TD would like to thank Lukas Breitwieser, Fons Rademakers, and Ahmad Hesam for the many technical discussions related to BioDynaMo that greatly helped him progress in the present work. The authors would like to further thank Lukas Breitwieser for sharing his Docker infrastructure, helping to set it up, and testing the reproducer. The authors also thank Tobias Koppel for sharing code, data, and knowledge that helped set up the initial vessel structure. The work of TD has been sponsored by the Wolfgang Gentner Programme of the German Federal Ministry of Education and Research (grant no. 13E18CHA). The work of EABFL is supported by the National Institute of Health
via grant R01CA240589. The work of JTO is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Award DE-960009286. The work of BW was funded by the German Research Foundation through grant WO671/11-1.
## Appendix A Data
The data in [57, Tab. 1 and 5] contains six groups for the tumor growth before the treatment begins. Generally, two sets \(S_{i}=\{x_{1},\ldots,x_{n_{i}}\}\) of real numbers with their respective cardinality \((n_{1},n_{2})\), mean \((m_{1},m_{2})\), and standard deviation \((\sigma_{1},\sigma_{2})\) can be combined in one set \(S\) with its characteristics given by
\[n =n_{1}+n_{2}\, \tag{11}\] \[m =\frac{n_{1}m_{1}+n_{2}m_{2}}{n_{1}+n_{2}}\,\text{ and}\] (12) \[\sigma =\sqrt{\frac{(n_{1}-1)\sigma_{1}^{2}+(n_{2}-1)\sigma_{2}^{2}+\frac{n_{1}n_{2}}{n_{1}+n_{2}}(m_{1}-m_{2})^{2}}{n_{1}+n_{2}-1}}. \tag{13}\]
We apply these formulas iteratively to the dataset in [57, Tab. 5] and obtain a dataset characterizing the tumor growth before treatment in a statistically more sound way. The dataset is given in Tab. 2.
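The iterative pooling can be implemented directly from Eqs. (11) to (13); the group statistics in the example below are hypothetical placeholders, not the values from [57].

```python
import math

def pool(n1, m1, s1, n2, m2, s2):
    # Combine two samples given only size, mean, and standard deviation.
    n = n1 + n2
    m = (n1 * m1 + n2 * m2) / n
    var = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2
           + n1 * n2 / n * (m1 - m2) ** 2) / (n - 1)
    return n, m, math.sqrt(var)

# Hypothetical per-group statistics (size, mean, std), folded in iteratively.
groups = [(8, 120.0, 30.0), (10, 135.0, 25.0), (9, 128.0, 28.0)]
n, m, s = groups[0]
for g in groups[1:]:
    n, m, s = pool(n, m, s, *g)
print(n, round(m, 1), round(s, 1))
```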
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Parameter** & **Notation** & **Value** & **Unit** & **Source** \\ \hline \hline cell radius & \(r_{c,\mu}\) & 9.953 & \(\mu m\) & [15; 77] \\ nuclear cell radius & \(r_{c,\mu}\) & 5.296 & \(\mu m\) & [15] \\ action cell radius & \(r_{c,\mu}\) & 12.083 & \(\mu m\) & [15] \\ \hline \hline \end{tabular}
\end{table}
Table 5: Overview of the parameters affecting the tumor cell.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Parameter** & **Notation** & **Value** & **Unit** & **Source** \\ \hline Bifurcation distance & \(d_{b}\) & 80 & \(\mu m\) & - \\ Tip-cell distance & \(d_{t}\) & 150 & \(\mu m\) & - \\ Sprouting rate & \(p_{s}\) & 0.001 & \(1/min\) & - \\ Growth weight random & \(w_{r}\) & 0.2 & - & - \\ Growth weight old & \(w_{o}\) & 0.5 & - & - \\ Growth weight gradient & \(w_{\Psi}\) & 0.3 & - & - \\ Growth speed & \(s\) & 0.033 & \(\mu m/min\) & [63] \\ Supply increase & \(\tau_{\uparrow}\) & 0.4 & days & - \\ Supply decrease & \(\tau_{\downarrow}\) & 10 & days & - \\ Supply maximum & \(\chi_{\text{max}}\) & 9 & days & - \\ \hline \hline \end{tabular}
\end{table}
Table 4: Overview of the parameters affecting the vessel.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Transition** & **Parameter** & **Notation** & **Value** & **Unit** & **Source** \\ \hline \hline \(SG2\to G1\) & duration cell cycle & \(\tau_{p}=\tau_{G1}+\tau_{SG2}\) & 18 & hour & [15] \\ \hline \(G1\to Q\) & duration growth phase & \(\tau_{G1}\) & 9 & hour & [15] \\ \hline \(D\rightarrow\) gone & duration apoptosis & \(\tau_{D}\) & 8.6 & hour & [15] \\ \hline \(Q\to H\) & nutrient threshold & \(u_{n}^{Q\to H}\) & 0.09 & - & \\ \hline \(Q\to D\) & nutrient threshold & \(u_{n}^{Q\to D}\) & 0.000538 & - & - \\ & nutrient gamma & \(\gamma_{n}^{Q\to D}\) & 0.000408 & - & - \\ & nutrient alpha & \(\alpha_{n}^{Q\to D}\) & \(6.8\cdot 10^{-6}\) & - & - \\ & nutrient k & \(k_{n}^{Q\to D}\) & 50.0 & - & - \\ & DOX zeta & \(\zeta_{d}^{Q\to D}\) & 100 & - & - \\ & TRA zeta & \(\zeta_{t}^{Q\to D}\) & 0 & - & - \\ \hline & Cross term & \(\zeta_{d}^{Q\to D}\) & 0 & - & - \\ \hline \(Q\to SG2\) & nutrient threshold & \(u_{n}^{Q\to SG2}\) & 0.0538 & - & [17] \\ & nutrient alpha & \(\alpha_{n}^{Q\to SG2}\) & 0.000821 & - & [17] \\ & decay with TRA & \(a_{t}^{Q\to SG2}\) & 30 & - & - \\ \hline \(SG2\to SG2\) & DOX threshold & \(u_{d}^{SG2\to SG2}\) & 0.001 & - & - \\ & DOX alpha & \(\alpha_{d}^{SG2\to D}\) & 0.1 & - & - \\ \hline \(SG2\to D\) & DOX threshold & \(u_{d}^{SG2\to D}\) & 0.001 & - & - \\ & DOX alpha & \(\alpha_{d}^{SG2\to D}\) & 0.001 & - & - \\ \hline \(H\to D\) & Base rate & \(r\) & 0.00001 & - & - \\ & DOX zeta & \(\zeta_{d}^{H\to D}\) & 100 & - & - \\ & TRA zeta & \(\zeta_{t}^{H\to D}\) & 0 & - & - \\ & Cross term & \(\zeta_{d}^{H\to D}\) & 0 & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 6: Overview of the parameters affecting the cell cycle.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Parameter** & **Notation** & **Value** & **Unit** & **Source** \\ \hline Cell-cell adhesion & \(f_{a}\) & 0.0489 & \(\mu m/min\) & [17] \\ Cell-cell repulsion & \(f_{r}\) & 10.0 & \(\mu m/min\) & [17] \\ Maximal speed & \(v_{max}\) & 10.0 & \(\mu m/min\) & [17] \\ Viscosity & \(\nu\) & 2.0 & - & [17] \\ \hline \hline \end{tabular}
\end{table}
Table 8: Overview of the parameters affecting the forces.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Continuum** & **Parameter** & **Notation** & **Value** & **Unit** & **Source** \\ \hline Nutrients & Diffusion & \(D_{n}\) & 0.833 & \(\mu m^{2}/min\) & [17] \\ Nutrients & Decay & \(\lambda_{n}\) & 0.00001 & \(min^{-1}\) & [17] \\ \hline VEGF & Diffusion & \(D_{v}\) & 0.175 & \(\mu m^{2}/min\) & - \\ VEGF & Decay & \(\lambda_{v}\) & 0.00001 & \(min^{-1}\) & - \\ \hline DOX & Diffusion & \(D_{d}\) & 0.708 & \(\mu m^{2}/min\) & - \\ DOX & Decay & \(\lambda_{d}\) & 0.0002 & \(min^{-1}\) & - \\ \hline TRA & Diffusion & \(D_{t}\) & 0.09 & \(\mu m^{2}/min\) & - \\ TRA & Decay & \(\lambda_{t}\) & 0.0 & \(min^{-1}\) & - \\ \hline \hline \end{tabular}
\end{table}
Table 9: Overview of the parameters affecting the continua. Explanation of the estimation may be found in the main text.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Agent** & **Parameter** & **Notation** & **Value** & **Unit** & **Source** \\ \hline Tumor Cell & nutrient consumption & \(c_{n}\) & 0.0483 & \(h^{-1}\) & [17] \\ Tumor Cell & VEGF supply & \(c_{v}\) & 2.73 & \(h^{-1}\) & - \\ Tumor Cell & DOX consumption & \(c_{d}\) & 0.0 & \(h^{-1}\) & - \\ Tumor Cell & TRA consumption & \(c_{t}\) & 0.000983 & \(h^{-1}\) & - \\ \hline Vessel & Nutrient supply & \(v_{n}\) & 0.0000819 & \(1/(\mu m^{2}\cdot h)\) & - \\ Vessel & VEGF consumption & \(v_{v}\) & 0.109239 & \(1/(\mu m^{2}\cdot h)\) & - \\ Vessel & DOX supply & \(v_{d}\) & 0.000136 & \(1/(\mu m^{2}\cdot h)\) & - \\ Vessel & TRA supply & \(v_{t}\) & 0.002730 & \(1/(\mu m^{2}\cdot h)\) & - \\ \hline Vessel & VEGF threshold & \(v_{v}^{sprout}\) & 0.001 & 1 & - \\ Vessel & VEGF gradient threshold & \(v_{v}^{sprout}\) & 0.00001 & 1 & - \\ \hline \hline \end{tabular}
\end{table}
Table 7: Overview of the parameters affecting the agent - continuum interaction. Estimation explained in the main text. Note that \(c_{v}\) and \(v_{v}\) may be chosen significantly smaller without affecting the results but we report the parameters used for the numerical experiments.
## Appendix C Large Scale Simulation - Mimicking the micro-vasculature in the tumor environment
The initial structure of the micro-vasculature plays an important role. A denser vasculature may supply more nutrients and accelerate tumor evolution. However, data for the initialization is scarcely available. This section describes our approach to statistically mimic a realistic tumor micro-vasculature.
We first note that we do not consider a flow model; thus, connections between different segments of the vascular network do not affect the simulation outcome. Here, we place different vessel segments in the simulation space without explicitly considering their connections. Each segment consists of several agents that connect a start and end point and is further characterized by its diameter and length. Under these assumptions, we strive to find meaningful probability distributions to sample the diameter and length of the segments, as well as a reasonable number for the total number of segments.
In 1998, Secomb and coworkers [78] investigated the blood flow through the vasculature in the tumor region. The region's volume is \(V_{S}=(550\times 550\times 230)\mu m^{3}\), and they describe the different vessel segments by start and end point, diameter, and length. The data is publicly available and forms the basis for our statistical arguments.
As we ignore the connections, we neglect the start and end point information and focus on the data for diameter and length. We first compute the Pearson, Spearman, and Kendall-Tau correlation coefficients and obtain the values 0.35, 0.10, and 0.06, respectively. We conclude that treating the diameter and vessel length as independent random variables is justified here. To find a suitable probability density function (PDF), we fitted 95 PDFs to the data and chose the one yielding the largest p-value for the Kolmogorov-Smirnov test. We find that the diameter is well described by a generalized extreme value continuous random variable, while a Wald continuous random variable better describes the length. The data and the fitted PDFs are displayed in Fig. 11.
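A sketch of this analysis using standard SciPy routines is shown below; the synthetic arrays merely stand in for the 104 diameter/length pairs of the public dataset, and the distribution parameters used to generate them are placeholders.

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for the 104 segment diameters/lengths of the dataset.
rng = np.random.default_rng(0)
diameters = stats.genextreme.rvs(-0.1, loc=8.0, scale=3.0, size=104, random_state=rng)
lengths = stats.wald.rvs(loc=0.0, scale=150.0, size=104, random_state=rng)

# Correlation between diameter and length (Pearson, Spearman, Kendall-Tau).
print(stats.pearsonr(diameters, lengths)[0],
      stats.spearmanr(diameters, lengths)[0],
      stats.kendalltau(diameters, lengths)[0])

# Fit candidate distributions and rate them with the Kolmogorov-Smirnov test;
# the full analysis repeats this for 95 candidate PDFs.
theta_gev = stats.genextreme.fit(diameters)
theta_wald = stats.wald.fit(lengths)
print(stats.kstest(diameters, "genextreme", args=theta_gev))
print(stats.kstest(lengths, "wald", args=theta_wald))
```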
Before sampling from the PDFs, we truncate the Wald distribution, i.e. we require a minimum segment length of \(l_{\text{min}}=100\mu m\). By computing the integral
\[\zeta=\int_{-\infty}^{l_{\text{min}}}p_{w}\left(x,\Theta_{w}\right)dx\, \tag{12}\]
where \(\Theta_{w}\) denotes the fitted parameters of the distribution, we obtain the ratio of samples that we ignore. Numerically, we evaluate the integral with an adaptive integration rule (Gauss-Kronrod 21-point) and obtain \(\zeta\approx 0.78\). In total, Secomb's data contains \(N_{S}=104\) vessel segments. According to our previous considerations, we initialize our simulation volume \(V\) with
\[N=N_{S}\cdot(1-\zeta)\cdot\frac{V}{V_{S}} \tag{13}\]
vessel segments.
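A sketch of this computation follows; the Wald parameters below are hypothetical stand-ins for the fitted values \(\Theta_{w}\).

```python
from scipy import integrate, stats

theta_w = (0.0, 150.0)   # hypothetical (loc, scale) of the fitted Wald distribution
l_min = 100.0            # minimum segment length in µm

# Eq. (12): fraction of sampled lengths below l_min, evaluated with an adaptive
# Gauss-Kronrod rule (the PDF vanishes below loc, so 0 is a valid lower bound).
zeta, _ = integrate.quad(lambda x: stats.wald.pdf(x, *theta_w), 0.0, l_min)
# equivalently: zeta = stats.wald.cdf(l_min, *theta_w)

# Eq. (13): number of segments to place in the simulation volume V.
N_S, V_S = 104, 550.0 * 550.0 * 230.0     # Secomb's segment count and volume [µm³]
V = 9000.0 ** 3                           # our simulation volume [µm³]
N = N_S * (1.0 - zeta) * V / V_S
print(round(zeta, 2), round(N))
```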
After evaluating Eq. (12) and (13), we perform the following steps to initialize the vessel segments. First, we sample the diameter \(d_{v}\) from the generalized extreme value distribution and the length \(l_{v}\) from the fitted, truncated Wald distribution. Second, we sample a random point marking the segment's start in the simulation space. Third, we sample a random point on a sphere of radius \(l_{v}\) defining the segment's endpoint. If the endpoint ends up outside the simulation volume, we resample. Once the start point, the endpoint, and the diameter are defined, we fill the line with cylindrical agents of appropriate diameter. If the length \(l_{v}\) is larger than \(200\mu m\), we add smooth fluctuations from the center line, realizing that longer micro-vessels are unlikely to be straight.
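The sampling procedure itself can be sketched as follows; the distribution parameters passed in the usage line are hypothetical placeholders for the fitted values, and the smoothing of long segments is omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def sample_segment(box, theta_gev, theta_wald, l_min=100.0):
    # Diameter from the generalized extreme value fit, length from the
    # truncated Wald fit (redraw until l >= l_min).
    d = stats.genextreme.rvs(*theta_gev, random_state=rng)
    l = 0.0
    while l < l_min:
        l = stats.wald.rvs(*theta_wald, random_state=rng)
    # Random start point in the box; end point uniformly on the sphere of
    # radius l around it, resampled until it lies inside the box.
    start = rng.uniform(0.0, box, size=3)
    while True:
        direction = rng.normal(size=3)
        direction /= np.linalg.norm(direction)
        end = start + l * direction
        if np.all((end >= 0.0) & (end <= box)):
            return start, end, d

# Usage with hypothetical fit parameters for a (9 mm)^3 box.
start, end, d = sample_segment(9000.0, (-0.1, 8.0, 3.0), (0.0, 150.0))
print(start, end, d)
```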
To verify the approach, we compute the volume fraction occupied by the vasculature as
\[\rho_{S}=\frac{1}{V_{S}}\cdot\sum_{i=1}^{N_{S}}V_{i}\text{ and }\rho=\frac{1}{V} \cdot\sum_{i=1}^{N}V_{i}\, \tag{14}\]
where \(V_{i}\) denotes the volume of a vessel. For Secomb's data, \(V_{i}\) is computed from the length and diameter; for the simulation, \(V_{i}\) is the sum of the agent volumes. We find the values \(\rho_{S}=0.014308\) and \(\rho=0.004559\) showing that the simulated vascular density is roughly 30% of the one given in the data. Since the vasculature density increases during the simulation due to angiogenesis, it is reasonable to begin with a lower vascular density. |
2310.09690 | Configuration Validation with Large Language Models | Misconfigurations are major causes of software failures. Existing practices
rely on developer-written rules or test cases to validate configurations, which
are expensive. Machine learning (ML) for configuration validation is considered
a promising direction, but has been facing challenges such as the need of
large-scale field data and system-specific models. Recent advances in Large
Language Models (LLMs) show promise in addressing some of the long-lasting
limitations of ML-based configuration validation. We present a first analysis
on the feasibility and effectiveness of using LLMs for configuration
validation. We empirically evaluate LLMs as configuration validators by
developing a generic LLM-based configuration validation framework, named Ciri.
Ciri employs effective prompt engineering with few-shot learning based on both
valid configuration and misconfiguration data. Ciri checks outputs from LLMs
when producing results, addressing hallucination and nondeterminism of LLMs. We
evaluate Ciri's validation effectiveness on eight popular LLMs using
configuration data of ten widely deployed open-source systems. Our analysis (1)
confirms the potential of using LLMs for configuration validation, (2) explores
design space of LLMbased validators like Ciri, and (3) reveals open challenges
such as ineffectiveness in detecting certain types of misconfigurations and
biases towards popular configuration parameters. | Xinyu Lian, Yinfang Chen, Runxiang Cheng, Jie Huang, Parth Thakkar, Minjia Zhang, Tianyin Xu | 2023-10-15T00:50:27Z | http://arxiv.org/abs/2310.09690v2 | # Configuration Validation with Large Language Models
###### Abstract.
Misconfigurations are the major causes of software failures. Existing configuration validation techniques rely on manually written rules or test cases, which are expensive to implement and maintain, and are hard to be comprehensive. Leveraging machine learning (ML) and natural language processing (NLP) for configuration validation is considered a promising direction, but has been facing challenges such as the need of not only large-scale configuration data, but also system-specific features and models which are hard to generalize. Recent advances in Large Language Models (LLMs) show the promises to address some of the long-lasting limitations of ML/NLP-based configuration validation techniques. In this paper, we present an exploratory analysis on the feasibility and effectiveness of using LLMs like GPT and Codex for configuration validation. Specifically, we take a first step to empirically evaluate LLMs as configuration validators without additional fine-tuning or code generation. We develop a generic LLM-based validation framework, named Ciri, which integrates different LLMs. Ciri devises effective prompt engineering with few-shot learning based on both valid configuration and misconfiguration data. Ciri also validates and aggregates the outputs of LLMs to generate validation results, coping with known hallucination and nondeterminism of LLMs. We evaluate the validation effectiveness of Ciri on five popular LLMs using configuration data of six mature, widely deployed open-source systems. Our analysis (1) confirms the potential of using LLMs for configuration validation, (2) understands the design space of LLM-based validators like Ciri, especially in terms of prompt engineering with few-shot learning, and (3) reveals open challenges such as ineffectiveness in detecting certain types of misconfigurations and biases to popular configuration parameters.
## 1. Introduction
Modern cloud and web services evolve rapidly and deploy hundreds to thousands of configuration changes to production systems on a daily basis (Han et al., 2014; Goh et al., 2015; Goh et al., 2015; Goh et al., 2015; Goh et al., 2015; Goh et al., 2015; Goh et al., 2015). For example, at Meta/Facebook, thousands of configuration changes are committed daily, outpacing the frequency of source-code changes (Goh et al., 2015; Goh et al., 2015). Other cloud services such as Google and Microsoft also frequently deploy configuration changes (Goh et al., 2015; Goh et al., 2015; Goh et al., 2015). Such rapid configuration changes inevitably lead to misconfigurations, resulting in system failures. Today, misconfigurations are among the dominating causes of production incidents (Goh et al., 2015; Goh et al., 2015; Goh et al., 2015; Goh et al., 2015; Goh et al., 2015; Goh et al., 2015). For example, misconfiguration is the second largest root-cause category of service disruptions at a main Google production service (Goh et al., 2015).
To detect misconfigurations before deployment, today's configuration management systems typically employ the "configuration-as-code" paradigm and enforce continuous configuration validation, ranging from static validation, to configuration testing, and to manual review and approval. The configuration is first checked by validation code (aka _validators_) based on predefined correctness rules (Goh et al., 2015; Goh et al., 2015; Goh et al., 2015; Goh et al., 2015; Goh et al., 2015; Goh et al., 2015); in practice, validators are written by engineers when introducing configuration parameters. After passing validators, configuration changes are then tested together with the code to ensure expected program behavior under the changed configuration (Goh et al., 2015; Goh et al., 2015). Lastly, the configuration changes go through the same process as source-code review, where the change, commonly in the form of a configuration file "diff", is reviewed before production deployment.
The aforementioned configuration validation pipeline either relies on manual inspection to spot misconfigurations in the configuration file diffs, or requires significant engineering efforts to implement and maintain validators or test cases. However, these efforts are known to be costly and incomprehensive. For example, despite the fact that mature projects all include extensive configuration checks, recent work (Goh et al., 2015; Goh et al., 2015; Goh et al., 2015; Goh et al., 2015; Goh et al., 2015) repeatedly shows that existing checks are far from sufficient to catch misconfigurations. The reasons are twofold. First, with large-scale systems exposing hundreds to thousands of configuration parameters (Goh et al., 2015), implementing validators (or test cases) for every parameter becomes a significant overhead. Recent studies (Goh et al., 2015; Goh et al., 2015) report that many parameters are not covered by existing checks, even in mature software projects with many years of development history. Second, it is nontrivial to validate each individual parameter, which could have many different correctness properties, such as type, range, semantic meaning, dependencies with other parameters, etc.; encoding each of them as validators could be laborious and error-prone, not to mention the high maintenance cost due to configuration-related software evolution (Goh et al., 2015; Goh et al., 2015). These limitations also apply to the configuration tests (Goh et al., 2015). Compared with static validation, configuration testing is also more time-consuming to run and more expensive in terms of computing resources (Goh et al., 2015).
Using machine learning (ML) and natural language processing (NLP) to detect misconfigurations has been considered a promising approach to addressing the above challenges of configuration validation. Compared with manually written static validators and test cases, ML or NLP-based approaches are automatic, easy to scale to
a large number of parameters, and applicable to different systems and environments. A number of ML/NLP-based misconfiguration detection techniques are proposed in the literature [(17; 41; 63; 64; 80; 85; 92; 103)]. The key idea is to first learn correctness rules from field configuration data [(17; 38; 41; 63; 71; 72; 80; 85; 103)] or from documents [(64; 92)], and then use the learned rules to detect misconfigurations in new configuration files. ML/NLP-based approaches have achieved good success. For example, Microsoft adopted Peer-Pressure [(80; 34)] as a part of Microsoft Product Support Services (PSS) toolkits. It collects configuration data in Windows Registry from a large number of Windows users to learn statistical golden states of system configurations.
However, ML/NLP-based misconfiguration detection has also revealed significant limitations. First, the need for large volumes of system-specific configuration data makes it hard to apply those techniques outside corporations that collect user configurations (e.g., Windows Registry [(85)]) or maintain knowledge bases or customer tickets [(64)]. For example, in cloud/datacenter systems where the same set of configurations is maintained by a small DevOps team, there is not enough information for learning [(99)]. Moreover, prior ML/NLP-based detection techniques all target specific systems/projects, and rely on predefined learning features [(63)], templates [(103)], or models [(64)]. As a result, it is hard to generalize them to different systems and their configurations.
Recent advances on Large Language Models (LLMs), such as GPT [(2)] and Codex [(3)], show promises to address some of the long-lasting limitations of traditional ML/NLP-based misconfiguration detection techniques. Specifically, LLMs are trained on massive amounts of internet data, including configuration data--configuration files in software repositories, configuration documents, knowledge-based articles, Q&A websites for resolving configuration issues, etc. Consequently, LLMs encode the extensive knowledge of both _common_ and even _project-specific_ configuration for well-known projects. Such knowledge can be utilized for configuration validation without the need for manual rule engineering. Furthermore, LLMs show the capability of _generalization_ and _reasoning_[(31; 87)] unlike the traditional ML approaches, and can potentially "understand" the configuration semantics. For example, they can not only generalize that values of a port must be in the range of [0, 65535], but also reason that a specific configuration value represents a port (e.g., based on the parameter name and description) and thus has to be within the range.
Certainly, LLMs have limitations. Notably, they are known to hallucinate responses and can be nondeterministic [(106; 13)]. Additionally, LLMs have limited input context, which can pose challenges when encoding extensive contexts like configuration-related code and execution environments. Moreover, they are reported to be biased to popular content in the training dataset. However, there are active efforts [(83; 10; 48; 50; 57)] addressing these limitations, making them promising tools.
In this paper, we present an exploratory analysis on the feasibility and effectiveness of using LLMs like GPT and Codex for configuration validation. Our goal is to empirically evaluate the promises of leveraging LLMs to develop effective configuration validators and to understand the challenges. As a first step, we empirically evaluate LLMs as configuration validators, without additional fine-tuning or code generation. We focus on basic misconfigurations (those violating explicit correctness constraints) which can potentially be detected by LLMs directly. We do not target misconfigurations specific to the execution environments or correct configuration changes triggering bugs in the code. We discuss how to further build on this work to detect those in SS7.
To this end, we develop Ciri, an LLM-empowered configuration validation framework. Ciri takes a configuration file or a file diff as the input; it outputs a list of detected misconfigurations along with the reasons that explain the misconfigurations. Ciri integrates different LLMs such as Code-Davinci-002, GPT-3.5-turbo, and GPT-4. Ciri devises effective prompt engineering with few-shot learning based on existing configuration data. Additionally, Ciri validates and aggregates the outputs of LLMs to generate validation results, coping with known hallucination and nondeterminism of LLMs. A key design principle of Ciri is separation of policy and mechanism--it implements different mechanisms to support various policies. Ciri can serve as an open framework for experimenting with different prompt engineering approaches, training datasets, and validation and aggregation methods.
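To make the described workflow concrete, the sketch below shows the kind of few-shot prompting and voting loop such a validator can use; the prompt wording, the JSON answer schema, and the `query_llm` stub are illustrative assumptions and do not reproduce Ciri's actual prompts or implementation.

```python
import json
from collections import Counter

def build_prompt(config_text, shots):
    # Few-shot prompt: each shot pairs a configuration snippet (valid or
    # misconfigured) with the expected JSON verdict, followed by the file
    # under validation.
    parts = ['You are a configuration validator. Answer in JSON as '
             '{"misconfigurations": [{"parameter": ..., "reason": ...}]}.']
    for example_cfg, example_answer in shots:
        parts += [f"Configuration:\n{example_cfg}", f"Answer:\n{example_answer}"]
    parts += [f"Configuration:\n{config_text}", "Answer:"]
    return "\n\n".join(parts)

def validate(config_text, shots, query_llm, n_queries=5):
    # Query the model several times and keep only parameters flagged by a
    # majority of well-formed responses, mitigating hallucination and
    # nondeterminism. query_llm(prompt) -> str stands in for any LLM API.
    prompt = build_prompt(config_text, shots)
    votes, valid_responses = Counter(), 0
    for _ in range(n_queries):
        try:
            answer = json.loads(query_llm(prompt))
            flagged = {m["parameter"] for m in answer["misconfigurations"]}
        except (json.JSONDecodeError, KeyError, TypeError):
            continue                      # discard malformed output
        valid_responses += 1
        votes.update(flagged)
    return [p for p, c in votes.items() if c > valid_responses / 2]
```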
We study the validation effectiveness of Ciri backed by five popular LLMs including advanced models (Code-davinci-002, GPT-3.5 Turbo, and GPT-4), and basic models (Babbage-002, and Davinci-002). We use misconfiguration datasets of six mature, widely deployed open-source systems (HCommon, HBase, Alluxio, HDFS, YARN, and ZooKeeper). Our study confirms the potential of using LLMs for configuration validation, e.g., Ciri with GPT-4 shows promising results at both file- and parameter-level, achieving up to 0.75 and 0.56 F1-scores, respectively. Our study also helps understand the design space of LLM-based validators like Ciri, especially in terms of prompt engineering with few-shot learning and voting. We find that few-shot learning using both valid configuration and misconfiguration data as shots (examples) can significantly enhance validation effectiveness. Specifically, labeled misconfigurations are pivotal to validation effectiveness. Our results also reveal open challenges such as ineffectiveness in detecting certain types of misconfigurations and biases to popular configuration parameters. While Ciri excels in identifying misconfigurations, it struggles with specific misconfiguration types such as dependency violations and version-specific misconfigurations. We also observe that the popularity of configuration parameters creates biases in validation results, causing both false positives and false negatives. In summary, this paper makes the following main contributions:
* We confirm the potential of using LLMs for configuration validation: Ciri with GPT-4 achieves up to 0.75 and 0.56 F1-scores at the file and parameter level, respectively.
* We explore the design space of LLM-based validators like Ciri, in particular prompt engineering with few-shot learning based on both valid configurations and labeled misconfigurations, and voting over repeated queries.
* We reveal open challenges of LLM-based configuration validation, such as ineffectiveness in detecting certain types of misconfigurations (e.g., dependency violations and version-specific misconfigurations) and biases toward popular configuration parameters.
## 2. Exploratory Examples
We explore the capability of utilizing LLMs to validate configuration files off-the-shelf. We argue that vanilla LLMs are capable of detecting sophisticated misconfigurations. However, they are prone to both false negatives and false positives that require further attention and handling. Figure 1 presents four examples: in two of them the LLM correctly detects the misconfiguration, while in the other two the LLM misses a misconfiguration or reports a false alarm. These examples were generated using the Codex LLM (Code-Davinci-002) developed by OpenAI (Dezal et al., 2016; Chen et al., 2017).
**Detecting violation in configuration dependency.** Extracting and validating against dependency relationships between configuration parameters has been a challenging task in highly-configurable systems (Krizhevsky et al., 2016; Zhang et al., 2017). LLMs have the ability to infer relations between entities from text at the level of human experts (Krizhevsky et al., 2016). This ability allows an LLM to infer dependencies between parameters in a configuration file based on their corresponding names and descriptions. By simply asking whether any violation exists in the configuration, an off-the-shelf LLM can check if the configured values satisfy the extracted relations. This allows better applicability of LLMs for validating configuration dependencies when compared to prior techniques that require manually codified rules (Krizhevsky et al., 2016; Zhang et al., 2017), program analysis (Krizhevsky et al., 2016; Zhang et al., 2017), or specialized learning (Zhu et al., 2017; Zhang et al., 2017).
Figure 1 (Example 1) presents a case where values of two dependent parameters were changed (i.e., buffer.size and bytes.per.checksum). After understanding the value relationship dependency between these two parameters, the model determines that the change in bytes.per.checksum has violated the enforced dependency, and provides the correct reason for the misconfiguration.
**Detecting violation with domain knowledge.** Written configuration validation rules often require significant manual effort to produce and maintain. They are difficult to scale due to the diverse types and functionalities of configuration parameters. A state-of-the-art LLM is trained on a massive amount of textual data collected from the Internet, and possesses basic knowledge across a wide range of professional domains. An LLM thus could be capable of understanding the definition of a configuration parameter and reasoning about its semantics. When the LLM encounters configuration parameters such as IP addresses, permissions, or masks, it can invoke domain knowledge specific to the properties of those parameters to carry out the user's instructions, such as validation. Figure 1 (Example 2) presents a case where an HTTP address has been misconfigured to a semantically invalid value. The model identifies the misconfiguration, reasons that its configured value is out of range, and further suggests a potential direction for fixing it.
**Missed misconfiguration and false alarm.** Although LLMs have demonstrated impressive performance across many tasks since their recent emergence, at the current stage of development LLMs as configuration validators are not without errors. Examples 3 and 4 in Figure 1 show two cases where the LLM makes mistakes in configuration validation.
In Example 3, the configuration file has provided a description of the changed parameter hostname.verifier, and explicitly listed the valid value options for the parameter. However, the model is
Figure 1. Examples 1 and 2 show the LLM correctly catching and reasoning about the misconfigurations. Examples 3 and 4 show the LLM missing a misconfiguration or reporting a valid configuration as erroneous.
unable to identify that the parameter has been misconfigured to an invalid, non-existent option (STRICT_I8). Example 4 is interesting -- the description suggests that the parameter bloom.error.rate ranges from 0 to 100 (percentage), whereas the actual scale is 0 to 1 (fraction). This inconsistency supposedly confuses the model, making it mark 0.01 as invalid even though it is valid (1%) -- a fairly reasonable mistake for a human to make as well.
Both examples demonstrate that employing off-the-shelf LLMs as configuration validators can result in false negatives and false positives, thereby making the predictions less trustworthy. Incorrect validation outcomes could be attributed to a phenomenon in LLMs termed hallucination, which is being actively investigated (Kumar et al., 2017). A simple explanation is that LLMs are exposed to potentially contradictory data during training, which can confuse the model at inference time.
To account for these factors, our study applies and evaluates several mechanisms that can mitigate the impact of wrongful predictions made by LLMs in the context of configuration validation, including few-shot learning and reaching validation consensus through majority voting (§3).
## 3. Ciri: An LLM-Empowered Configuration Validation Framework
We develop Ciri, an LLM-empowered configuration validation framework. Ciri takes a configuration file or a file diff as the input. It outputs a list of detected misconfigurations along with reasons that explain the misconfigurations. If no misconfiguration is detected, Ciri outputs an empty list.
Ciri now supports five LLMs (Code-Davinci-002, GPT-3.5-turbo, GPT-4, Babbage-002, and Davinci-002). Adding a new LLM to Ciri takes a few lines of code to adopt the LLM's query APIs. Figure 2 gives an overview of Ciri. Ciri turns a configuration validation request into a prompt to the LLMs (§3.1). The prompt includes 1) the input configuration file or diff, 2) a few examples (referred to as _shots_) to demonstrate the task of configuration validation, and 3) a directive question and metadata. To generate shots, Ciri uses its database that contains labeled configuration data, including both valid configurations and misconfigurations. To validate a configuration file, Ciri sends the same query to the LLMs multiple times and aggregates the responses into the final validation result (§3.2).
Ciri can be applied to any software project, even if Ciri has no configuration data for that project. Ciri can directly query advanced LLMs like GPT-4 with zero shots and achieve considerable effectiveness (Finding 2). Ciri exhibits the ability to transfer configuration-related knowledge across projects when using configurations from different projects as shots (Finding 4). Ciri's configuration validation effectiveness can also be further improved with high-quality generated shots (Finding 3).
### Prompt Engineering
**Prompt Structure.** Ciri generates a prompt that includes three elements: 1) the content of input configuration file or file diff, 2) the shots as _ValidConfig_ or misconfiguration files with example questions and ground truth responses for few-shot learning, and 3) a directive question for LLM to respond in formatted output. Figure 3 shows an illustrative example of the prompt generated by Ciri. It contains \(N\) shots, content of the validating configuration file, followed by the directive question.
Ciri phrases the prompting question as _"Are there any mistakes in the above configuration file for [PROJECT] version [VERSION]? Respond in a JSON format similar to the following:..."_. The [PROJECT] and [VERSION] are required inputs of Ciri because the validity of a configuration file can change with the project and project version. This prompt format instructs the LLM to respond in a unified JSON format for automated result aggregation (§3.2). However, responses from LLMs may sometimes still deviate from the anticipated format (Friedman et al., 2017; Kumar et al., 2017). In such cases, Ciri retries with a new query to the LLM.
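For illustration, assembling such a prompt could look like the following sketch; the helper names and the shot data layout are illustrative assumptions rather than Ciri's actual implementation, and only the question wording follows the description above.

```python
# Sketch of Ciri-style prompt assembly (helper names and data layout are illustrative).
QUESTION = ("Are there any mistakes in the above configuration file for "
            "{project} version {version}? Respond in a JSON format similar to the "
            'following: {{"hasError": ..., "errParameter": [...], "reason": [...]}}')

def render_shot(shot):
    """A shot bundles an example configuration file, the question, and its ground truth."""
    return (shot["config_text"] + "\n"
            + QUESTION.format(project=shot["project"], version=shot["version"]) + "\n"
            + shot["ground_truth_json"] + "\n")

def build_prompt(shots, config_text, project, version):
    parts = [render_shot(s) for s in shots]              # few-shot examples come first
    parts.append(config_text)                            # the file under validation
    parts.append(QUESTION.format(project=project, version=version))
    return "\n".join(parts)
```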
**Few-Shot Learning.** Ciri leverages the LLM's ability to learn from examples at inference time (referred to as few-shot learning) to improve configuration validation effectiveness. To do so, Ciri simply inserts shots at the beginning of each prompt. Each shot contains a configuration file, the prompting question, and its corresponding ground truth. Figure 3 provides an example. In Figure 3, there are \(N\) shots. "Configuration File Shot #1" is the first shot, in which the parameter file.bytes-per-checksum in the configuration file is misconfigured. This shot also contains the prompting question (orange box) and the corresponding ground truth (blue box).
**Shot Generation and Selection.** Ciri maintains a database of labeled valid configurations and misconfigurations. It is used for generating valid configuration shots (_ValidConfig_) and misconfiguration shots (_Misconfig_). A _ValidConfig_ shot specifies a set of configuration parameters and their valid values. A valid value of a parameter can be its default value, or other valid values used in practice. A _Misconfig_ shot specifies a set of parameters and their values, where only one of the parameters' values is invalid. We provide more details on how valid and invalid configuration values are generated in §4.
For a configuration file or diff of a specific project, Ciri by default generates shots using the configuration data of the same project. If Ciri's database does not contain configuration data for the target project, Ciri would use available data (from other projects) to generate shots. As we will show (§5.2), LLMs possess transferable knowledge of configuration across different projects.
Ciri supports various strategies to select the shot data, including randomized selection, selecting from different configuration/misconfiguration types, and selecting from configuration parameters similar to the validating configuration (using cosine similarity). We did not observe major differences in these selection strategies so Ciri uses randomized selection by default.
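The selection strategies above admit straightforward implementations; a sketch of the randomized and similarity-based variants is shown below (the field names are assumptions, and the TF-IDF representation stands in for whatever text representation is actually used).

```python
# Sketch: randomized vs. similarity-based shot selection (field names are illustrative).
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_random(shot_pool, k):
    return random.sample(shot_pool, min(k, len(shot_pool)))

def select_similar(shot_pool, target_config_text, k):
    texts = [s["config_text"] for s in shot_pool] + [target_config_text]
    n = len(shot_pool)
    tfidf = TfidfVectorizer().fit_transform(texts)
    sims = cosine_similarity(tfidf[n], tfidf[:n]).ravel()   # similarity to each shot
    ranked = sorted(range(n), key=lambda i: sims[i], reverse=True)
    return [shot_pool[i] for i in ranked[:k]]
```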
Figure 2. System overview of Ciri.
**Addressing Token Limits.** LLMs limit input size per query by the number of input tokens. For example, the token limits for GPT-4 and GPT-3.5-turbo (the 16K variant) are 8,192 and 16,385, respectively. To navigate these constraints, Ciri compresses the prompt if its size cannot fit the token limit. Ciri prioritizes putting the validating configuration file and the directive question in the prompt, then applies a number of strategies to maximize the number of shots that can fit into the remaining token limit. If the validating configuration file itself cannot fit into the token limit, Ciri transforms the original file into a more compact format, e.g., transforming an XML file into INI format. If the compressed input still cannot fit, Ciri aborts and returns errors. In practice, real-world configuration files are small (Bartos et al., 2016), which lends enough space to include shots. For example, a prior study inspects configuration files collected from Docker, where each file contains 1 to 18 parameters, with 8 parameters on average (Bartos et al., 2016). For extremely large configuration files, Ciri can split them into smaller snippets, which can be validated separately and reasoned about together.
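A simplified sketch of this budget-fitting logic is given below; the four-characters-per-token estimate and the function names are assumptions, and a real implementation would use the model's tokenizer.

```python
# Sketch: fitting shots and the validating file under a token limit
# (len(text)//4 is a rough token estimate, not the model's tokenizer).
def estimate_tokens(text):
    return len(text) // 4

def fit_prompt(shots, config_text, question, limit, to_compact_format=None):
    base = estimate_tokens(config_text) + estimate_tokens(question)
    if base > limit and to_compact_format is not None:
        config_text = to_compact_format(config_text)      # e.g., XML -> INI
        base = estimate_tokens(config_text) + estimate_tokens(question)
    if base > limit:
        raise ValueError("configuration file does not fit the token limit")
    kept = []
    for shot in shots:                                     # keep as many shots as fit
        cost = estimate_tokens(shot)
        if base + cost > limit:
            break
        kept.append(shot)
        base += cost
    return kept, config_text
```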
### Result Generation
The JSON response from LLMs contains three primary fields: 1) hasError: a boolean value indicating if a misconfiguration is detected, 2) errParameter: an array of strings specifying the misconfigured parameter, and 3) reason: an array of strings explaining the detected misconfiguration.
**Validation against Hallucination.** We use a few rules to counter the hallucination of LLMs. For example, if hasError is False, both errParameter and reason must be empty. Conversely, if hasError returns True, errParameter and reason should not be empty and have the same array size. The answer to errParameter also should not contain repeated values. If a response fails these rules, Ciri discards it and retries until the LLM returns a valid response.
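These rules amount to a simple well-formedness check on each parsed response, along the lines of the following sketch (the function name is our own):

```python
# Sketch: sanity rules applied to each parsed JSON response.
def is_well_formed(resp):
    has_error = resp.get("hasError")
    params = resp.get("errParameter", [])
    reasons = resp.get("reason", [])
    if has_error is False:
        return params == [] and reasons == []        # nothing should be reported
    if has_error is True:
        return (len(params) > 0
                and len(params) == len(reasons)       # one reason per parameter
                and len(params) == len(set(params)))  # no repeated parameters
    return False
```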
**Voting against Inconsistency.** LLMs can produce inconsistent outputs in conversation (Ciri, 2017), explanation (Ciri, 2017), and knowledge extraction tasks (Li et al., 2018). To mitigate inconsistency, Ciri uses a multi-query strategy--querying the LLM multiple times using the same prompt and aggregating the responses. When aggregated with a voting mechanism, these responses converge towards a solution that is both representative of the model's understanding and more consistent than a single query. Ciri uses a frequency-based voting strategy: the output that recurs most often among the responses is selected as the final output (Zhou et al., 2018).
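A minimal sketch of this frequency-based aggregation over well-formed responses could look as follows (tie-breaking and error handling are omitted):

```python
# Sketch: frequency-based voting over repeated queries with the same prompt.
from collections import Counter

def vote(responses):
    """responses: list of well-formed JSON dicts; returns the most frequent verdict."""
    key = lambda r: (r["hasError"], tuple(sorted(r["errParameter"])))
    (has_error, err_params), _ = Counter(key(r) for r in responses).most_common(1)[0]
    reasons = [reason for r in responses if key(r) == (has_error, err_params)
               for reason in r["reason"]]
    return {"hasError": has_error, "errParameter": list(err_params), "reason": reasons}
```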
Note that the reason field is not considered during voting due to the diverse nature of the response. After voting, Ciri collects the reason from all responses that are associated with the selected errParameter. The reason field is important as it provides users with insights into the misconfiguration, which is different from the traditional ML approaches that only provide a binary answer with a confidence score. However, the content of reason may not always be useful due to hallucination. Ciri clusters reasons based on TF-IDF similarity, and picks a reason from the dominant cluster. We found that hallucinated reasons were often avoided this way as they tended to be very different from each other.
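One way to approximate this step is to pick the reason most similar on average to the others under a TF-IDF representation, since hallucinated reasons tend to be outliers; this centroid heuristic is a simplification of the clustering described above.

```python
# Sketch: picking a representative reason via TF-IDF similarity
# (a centroid heuristic standing in for the clustering step).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def pick_reason(reasons):
    if len(reasons) <= 1:
        return reasons[0] if reasons else ""
    tfidf = TfidfVectorizer().fit_transform(reasons)
    centrality = cosine_similarity(tfidf).mean(axis=1)   # outlier reasons score low
    return reasons[centrality.argmax()]
```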
### Ciri Configuration
Ciri is highly customizable, with a basic principle of separating policy and mechanism. Users can customize Ciri via its own configurations. Table 1 shows several important Ciri configurations and default values. The values are chosen by pilot studies using a subset of our dataset (SS4).
## 4. Benchmarks and Metrics
**Software Systems** We evaluate six popular, open-source systems: Hadoop Common, HBase, Alluxio, HDFS, YARN, and ZooKeeper. They all are mature, widely deployed systems. These systems are highly configurable with a large number of configuration parameters. Table 3 lists the version (SHA), and the total number of parameters at that version for each system.
\begin{table}
\begin{tabular}{l l c} \hline \hline
**Parameter** & **Description** & **Default Value** \\ \hline Model & Backend LLM. Also allows users to add other LLMs. & GPT-4 \\ Temperature & Tradeoff between creativity and determinism. & 0.2 \\ \# Shots & The number of shots included in a prompt. & Dynamic \\ \# Queries & The number of queries with the same prompt. & 10 \\ \hline \hline \end{tabular}
\end{table}
Table 1. System config of Ciri and their default values.
Figure 3. An example prompt generated by Ciri.
**Configuration Dataset** To evaluate the effectiveness of configuration validation, we create new datasets for each system. First, we collect valid configuration values based on the default configuration file from each system, as well as configuration files from the Ctest dataset (Bartos et al., 2021). The configuration files in the Ctest dataset were collected from public Docker images that deploy the target systems (Sandes et al., 2020; Sandes et al., 2020; Sandes et al., 2020). We then generate misconfigurations of different types. The generation is based on prior studies on misconfigurations (Sandes et al., 2020; Sandes et al., 2020; Sandes et al., 2020), and violates the constraints of configuration parameters as shown in Table 2.
For each project, we build two distinct sets of configuration files. First, we build a dataset of configuration files with no misconfiguration (denoted as _ValidConfig_) to measure true negatives and false positives (Table 4). Concurrently, we also build a dataset of configuration files in which each file has one misconfiguration (denoted as _Misconfig_) to measure true positives and false negatives (Table 4). A misconfiguration can be a dependency violation between values of multiple parameters.
To build _Misconfig_ for a project, we first check if a configuration parameter fits the specification of any sub-category in Table 2, and assign it to all sub-categories that fit. For example, an IP-address parameter can be assigned to "Syntax: IP Address" and "Range: IP Address". And we do so for all parameters in the project. Then, we randomly sample at most 5 parameters in each sub-category that has a non-empty set of assigned parameters, and generate invalid value(s) per sampled parameter using the corresponding generation rules. For each non-empty sub-category, we further randomly select one parameter and its generated invalid value(s) from the 5 previously-sampled parameters. We use that one parameter to create a faulty configuration file as a _Misconfig_ shot (SS3) for that sub-category, and add this shot to the project's shot pool. For the remaining 4 parameters, we use them to create 4 faulty configuration files for that sub-category, and add them to the evaluation set. If a sub-category does not have enough parameters for the aforementioned samplings, we use all its assigned parameters to create files for the evaluation set. We separate the evaluation set and shot pool to follow the practice that the learning shot data does not overlap with the testing data (Sandes et al., 2020).
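The sampling procedure above can be summarized by the following sketch; `generate_invalid_value` stands for the generation rules in Table 2, and the data layout is an assumption.

```python
# Sketch of the per-sub-category sampling used to build the Misconfig shot pool
# and evaluation set (generate_invalid_value stands for the rules in Table 2).
import random

def build_misconfig_dataset(params_by_subcategory, generate_invalid_value, n=5):
    shot_pool, eval_set = [], []
    for subcat, params in params_by_subcategory.items():
        if not params:
            continue
        sampled = random.sample(params, min(n, len(params)))
        faulty = [(subcat, p, generate_invalid_value(p, subcat)) for p in sampled]
        if len(faulty) == n:
            shot_pool.append(faulty[0])     # one faulty file becomes a shot
            eval_set.extend(faulty[1:])     # the rest go to the evaluation set
        else:                               # not enough parameters: all to evaluation
            eval_set.extend(faulty)
    return shot_pool, eval_set
```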
We build _ValidConfig_ for a project following the same methodology we used to build the _Misconfig_ mentioned above, except that we generate valid values for the sampled parameters. Table 3 shows the size for both the _ValidConfig_ and _Misconfig_ datasets per project. It's worth noting that our datasets cover 72%-100% of the total number of parameters in each evaluated system.
**Models** We evaluate Ciri with five LLMs: GPT-4, GPT-3.5-turbo (16k), Code-Davinci-002, Babbage-002, and Davinci-002. These models are among the most widely used LLMs, each of which differs in training procedures and/or training data. They are trained on a large amount of code data, and show promising capability in handling a number of software engineering tasks (Sandes et al., 2020; Sandes et al., 2020; Sandes et al., 2020).
* Code-Davinci-002 is optimized for code completion tasks based on GPT-3.5, and is capable of translating natural language to code. Code-Davinci-002 has 175 billion parameters, and was trained on data collected until June 2021. Its token limit per query is 8,001.
* GPT-3.5-turbo (16k) is a successor to Code-Davinci-002. Compared with Code-Davinci-002, GPT-3.5-turbo further uses an effective optimization technique called RLHF to follow instructions (Sandes et al., 2020). GPT-3.5-turbo has an unknown number of parameters and was trained on data available up to September 2021. Since the base turbo model has a token limit of only 4,097 per query, we use its variant that extends the token limit to 16,385.
* GPT-4 is claimed to be the most advanced and widely used LLM to date (Sandes et al., 2020). Compared with GPT-3.5-turbo, GPT-4 is larger in size. GPT-4 is trained on data prior to September 2021; its token limit per query is 8,192.
| Category | Sub-Category | Specification | Generation Rules |
| --- | --- | --- | --- |
| Syntax | Data type | Value set = {Integer, Float, Long, ...}; numbers with units | Use a value that doesn't belong to the set; use a non-existent unit (e.g., "nounit") |
| Syntax | Path | Path name pattern | Use a value that violates the pattern (e.g., /hello//world) |
| Syntax | URL | `[a-z]*://.*` | Use a value that violates the pattern (e.g., file//) |
| Syntax | IP Address | Dotted-quad pattern | Use a value that violates the pattern (e.g., 127.x8.0.1) |
| Syntax | Port | Data type constraint | Use a value that violates the constraint |
| Syntax | Permission | Data type constraint | Use a value that violates the constraint |
| Range | Basic numeric | Valid range constrained by data type | Use a value beyond the valid range (e.g., Integer.MAX_VALUE+1) |
| Range | Bool | Value set = {true, false} | Use a value that doesn't belong to the set |
| Range | Enum | Value set = {"enum1", "enum2", ...} | Use a value that doesn't belong to the set |
| Range | IP Address | Range of each octet = [0, 255] | Use a value beyond the valid range (e.g., 256.123.45.6) |
| Range | Port | Range = [0, 65535] | Use a value beyond the valid range |
| Range | Permission | Range = [000, 777] | Use a value beyond the valid range |
| Dependency | Control | (P1, V, o) -> P2, o in {>, >=, =, !=, <, <=} | Use an invalid control condition |
| Dependency | Value Relationship | (P1, P2, o), o in {>, >=, =, !=, <, <=} | Use an invalid value relationship |
| Version | Parameter change | Parameters added or removed between versions V1 and V2 | Use a removed parameter in V2 or an added parameter in V1 |

Table 2. Misconfiguration generation rules (extended from prior work (Sandes et al., 2020)). "(Sub-)Category" lists the different sets of violations that can be applied to a configuration parameter to generate its misconfigured values.
| Software | Version (SHA) | #Params | ValidConfig Shot Pool | ValidConfig Eval. Set | Misconfig Shot Pool | Misconfig Eval. Set |
| --- | --- | --- | --- | --- | --- | --- |
| HCommon | a986f18 | 395 | 16 | 64 | 16 | 64 |
| HBase | @fc18a9 | 221 | 12 | 50 | 12 | 50 |
| Alluxio | 7656b5b | 494 | 13 | 54 | 13 | 54 |
| HDFS | a986f18 | 566 | 16 | 64 | 16 | 64 |
| YARN | a986f18 | 525 | 10 | 40 | 10 | 40 |
| ZooKeeper | e370k93 | 32 | 8 | 32 | 8 | 32 |

Table 3. Software systems and configuration datasets (including both ValidConfig and Misconfig datasets).
* Babbage-002 and Davinci-002, successors to the legacy GPT-3, are two base models (Babbage and Davinci, 2002). They were not fine-tuned with instruction-following techniques (Babbage and Davinci, 2002), which align models with specific prompts and desired outputs. Both models have a 16,384 token limit per query.
**Metrics** We evaluate the effectiveness of LLMs on configuration validation at both the _configuration file_ and _configuration parameter_ levels. At the file level, we want to know whether the model can determine if a configuration file is fully valid or misconfigured. At the parameter level, we want to know whether the model can determine if each parameter in the configuration file is valid or erroneous. We describe the definitions of the terms used in the confusion matrix in Table 4. We then compute the Precision, Recall, and F1-score at both levels to assess the LLM's effectiveness. If not specified, we default to macro averaging since each project is regarded equally. We prioritize studying parameter-level effectiveness because it provides more fine-grained measurements. We default to discussing parameter-level metrics in §5 unless noted otherwise.
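For concreteness, the parameter-level counting and the macro-averaged F1-score can be computed as in the following sketch (the data layout is an assumption):

```python
# Sketch: parameter-level confusion-matrix counts and macro-averaged F1.
def prf(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def param_level_counts(reported, misconfigured, all_params):
    reported, misconfigured = set(reported), set(misconfigured)
    tp = len(reported & misconfigured)
    fp = len(reported - misconfigured)
    fn = len(misconfigured - reported)
    tn = len(set(all_params) - reported - misconfigured)
    return tp, fp, fn, tn

def macro_f1(per_project_counts):
    """per_project_counts: list of (tp, fp, fn, tn); every project is weighted equally."""
    scores = [prf(tp, fp, fn)[2] for tp, fp, fn, _ in per_project_counts]
    return sum(scores) / len(scores)
```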
## 5. Evaluation and Findings
In this section, we first present results on evaluating the effectiveness of LLMs as configuration validators with Ciri (SS5.1). We then analyze how validation effectiveness changes with regard to shots in few-shot learning (SS5.2). We also present our understanding on when Ciri produces wrongful validation results (SS5.3) and observed biases from LLMs' training (SS5.4).
### Validation Effectiveness
**Finding 1**.: _Ciri demonstrates the effectiveness of using state-of-the-art LLMs as configuration validators. It achieves file-level and parameter-level F1-scores of up to 0.75 and 0.56, respectively._
Ciri exhibits remarkable capability in configuration validation. Table 5 shows the F1-score, precision, and recall for each project and LLM under the four-shot setting (§3)--the most effective few-shot learning setting obtained from our later experiments (§5.2). Table 5 shows that, beyond merely identifying misconfigured files with an average F1-score ranging from 0.62 to 0.75, LLMs are also adept at pinpointing erroneous parameters and discerning the causes of the misconfigurations. Among the top-three LLMs, the parameter-level F1-scores are approximately 25% lower than their file-level counterparts, which shows that identifying misconfigured parameters is currently a more challenging task for LLMs than classifying whether a configuration change is erroneous.
When using legacy models, we observe that they lack the ability to validate configurations effectively. Specifically, for Babbage-002, the F1-score drops sharply from the file level (0.62) to the parameter level (0.09), indicating that it is not able to accurately localize the actual misconfigured parameter. One reason is that Babbage-002 lacks optimization for instruction-following (Babbage and Davinci, 2002), leading it to produce inappropriate results. Furthermore, we also evaluate Davinci-002, and it cannot detect any misconfiguration within our dataset (omitted from Table 5).
**Finding 2**.: _Providing configuration file examples (shots) for the validation query can effectively improve LLMs' configuration validation effectiveness. Without shots, LLMs often report false alarms or miss misconfigurations, e.g., Code-Davinci-002's F1-score at the parameter level is as low as 0.08--with shots, its file-level and parameter-level F1-scores can be improved by 0.56 and 0.48, respectively._
Validation examples (shots) play an important role in improving the effectiveness of LLMs for configuration validation. Table 6 shows the performance of LLMs when the configuration validation query does not include shots from Ciri. In particular, comparing Table 6 to Table 5, the average F1-score of the top-three LLMs decreases by 0.08-0.56 at the file level, and by 0.09-0.48 at the parameter level. Without any shots, neither Davinci-002 nor Babbage-002 can detect any misconfiguration in our dataset; we thus omit them from Table 6.
**Implication.** Our results suggest that state-of-the-art LLMs (e.g., GPT-4, GPT-3.5-turbo) can be applied to configuration validation in a properly designed framework like Ciri to achieve promising effectiveness. Specifically, generating and providing configuration validation examples along with the validation query can improve off-the-shelf LLMs' misconfiguration detection effectiveness. However, legacy LLMs (e.g., Davinci-002, Babbage-002) are often incapable of configuration validation due to insufficient training data and/or outdated training mechanisms (Babbage and Davinci, 2002; Babbage and Davinci, 2002).
### Impacts of Few-shot Learning
Following the implication in Section 5.1, we conduct a series of experiments to study how the configuration validation effectiveness of LLMs can be improved over different shot combinations.
We evaluate six \(N\)-shot learning settings, where \(N\) ranges from 0 to 5. For each of these settings, we use Ciri to generate different combinations of shots drawn from the _ValidConfig_ and _Misconfig_ datasets (§4). For example, to evaluate GPT-3.5-turbo on HCommon with a two-shot setting, three experiments will be performed: (1) two _ValidConfig_ shots; (2) one _ValidConfig_ shot plus one _Misconfig_ shot; (3) two _Misconfig_ shots, drawn from HCommon's shot database. In total, we experiment with 21 shot combinations for each project and LLM. To control cost, we limit the experiment to two LLMs (GPT-3.5-turbo and Code-Davinci-002) on three systems (HCommon, HBase, Alluxio).
**Finding 3**.: _Including both ValidConfig and Misconfig shots for LLMs delivers the optimal configuration validation effectiveness. Meanwhile, Misconfig shots are more crucial to validation effectiveness than ValidConfig shots. For example, both GPT-3.5-turbo and Code-Davinci-002 achieve their highest F1-score with three Misconfig shots and one ValidConfig shot._
| Level | CM | Definition |
| --- | --- | --- |
| File | TP | A misconfigured configuration file correctly identified |
| File | FP | A correct configuration file wrongly flagged as misconfigured |
| File | TN | A correct configuration file rightly identified as valid |
| File | FN | A misconfigured configuration file overlooked or deemed correct |
| Param. | TP | A misconfigured parameter correctly identified |
| Param. | FP | A correct parameter wrongly flagged as misconfigured |
| Param. | TN | A correct parameter rightly identified as valid |
| Param. | FN | A misconfigured parameter overlooked or deemed correct |

Table 4. Definitions for confusion matrix (CM) terms.
Figure 4 shows the average F1-score, precision, and recall across projects for GPT-3.5-turbo and Code-Davinci-002 under different shot combinations. Without _Misconfig_ shots, LLMs' performance is limited. For example, when only using _ValidConfig_ shots in the prompt, Code-Davinci-002 only achieves an F1-score of around 0.2 (i.e., the first column of the heat map in Figure 4(b)). Compared with _ValidConfig_ shots, _Misconfig_ shots allow Ciri to more effectively identify patterns and attributes of misconfigurations at inference time. In both Figures 4(a) and 4(b), the F1-score increases as more _Misconfig_ shots are used in the prompt.
On the other hand, providing only _Misconfig_ shots can introduce bias to the LLM and lead to a performance decrease. This is because the text distribution in the input query can significantly impact the performance of LLMs (Wang et al., 2018). In our evaluation, after providing a sufficient number of _Misconfig_ shots, we indeed see that providing more _ValidConfig_ shots can sometimes reduce the number of false positives for Code-Davinci-002 and false negatives for GPT-3.5-turbo. Overall, using both _Misconfig_ and _ValidConfig_ in few-shot learning settings mitigates the biases of LLMs and delivers the optimal configuration validation performance.
**Finding 4**.: _Using configuration files from the same system as shots for LLMs delivers the optimal configuration validation effectiveness. When same-system shots are unavailable, using configuration files from a different system could also improve validation effectiveness over zero-shot. For example, on HCommon, the parameter-level F1-score improved by 0.39 averaged across the top three LLMs._
In situations where configuration data of the target system is unavailable, we evaluate whether using configuration files from other systems as shots for LLMs can improve configuration validation effectiveness on the target system. Table 7 shows our evaluation results of using shots from other systems for LLMs to do configuration validation on HCommon. By comparing the HC columns with other columns in Table 7, we can see that using shots from other systems is not as effective as using shots from the target system. However, by comparing the average F1-score in Table 7 with the HC columns in Table 6, we find that using shots from other systems is generally more effective than not using any shots. Our observations highlight that Ciri with the underlying LLMs can transfer configuration-related knowledge across different systems for effective configuration validation, which traditional validation approaches cannot do (§1).
In contrast to the other LLMs, using cross-system shots actually decreases the F1-score compared to the zero-shot setting on GPT-4. One possible explanation is that examples from other systems also introduce noise that is contradictory to the target system's configuration data. LLMs can pick up such noise and produce an inaccurate validation outcome.
**Finding 5**.: _The validation effectiveness is higher if the misconfiguration in the validating configuration file belongs to the same violation (sub-)categories as the misconfigurations in the shots, compared to when it does not._
Table 8 shows that when the validating configuration file contains misconfiguration belonging to the same (sub-)categories of violation as the misconfigurations in the shots, the LLMs' F1-score improves significantly. When the misconfigurations between the shot and the evaluated file are caused by the same category (e.g., syntax or range error from Table 2), parameter-level F1-score improves by up to 30% compared to when they are not caused by the same category of violations. When they are caused by the same sub-category of violation (i.e., they are violated similarly and have the same parameter type), F1-score improves by up to 69.5%.
**Implication.** To improve LLM's performance as a configuration validator with few-shot learning, developers can leverage frameworks like Ciri to collect and generate high-quality, comprehensive configuration validation examples as shots. The provided shots in the prompt should be composed of misconfiguration files in which
the misconfigured parameter(s) have been identified and reasoned, as well as configuration files that are entirely valid. Our experience suggests that prioritizing the provision of misconfiguration shots is more crucial than supplying valid configuration shots.
When configuration data of target systems is unavailable for few-shot learning, using configuration data from other systems as shots could improve configuration validation effectiveness of LLMs, compared with zero shot. Moreover, misconfigurations that may fall into the same possible (sub-)category of violation are particularly useful as shots. However, shots from other systems may introduce bias, and affect the validation effectiveness.
### Ineffectiveness and Difficulties
**Finding 6**.: _Under Ciri, LLMs excel at pinpointing misconfigurations caused by syntax and range violations (i.e., 12 out of all 15 sub-categories of violations), with an average F1-score of 0.8 across corresponding sub-categories. However, LLMs show limited effectiveness in pinpointing misconfigurations caused by dependency and version violations (i.e., 3 out of all 15 sub-categories), with an average F1-score of 0.2 across corresponding sub-categories._
Table 9 shows the validation effectiveness of Ciri broken down by the types of misconfigurations. The average F1-score across systems on detecting misconfigurations due to Syntax and Range violations is consistently above 0.5 and often reaches 0.8 for all three state-of-the-art LLMs, with one exception in Code-Davinci-002 on "Range: Permission" misconfigurations (an F1-score of 0.44). Meanwhile, however, F1-score rarely exceeds 0.3 when detecting misconfigurations due to violations in Dependency and Version. Only GPT-4 achieves a slightly better F1-score of 0.46 in the Value Relationship sub-category, which is still much lower than its F1-scores in other sub-categories.
The performance difference can be attributed to the inherent nature of the misconfigurations. Misconfigurations due to violations in the Syntax and Range categories are more common in practice (Syntax, 2011), from which LLMs have learned extensive knowledge. In such cases, domain-specific knowledge from the LLM is sufficient to spot Syntax or Range violations. On the other hand, misconfiguration data from the Dependency and Version categories is often project-specific, e.g., the example shown in Figure 5. They are tied to the detailed history and features of individual projects, and thus harder for LLMs to capture or memorize unless the LLMs have been heavily re-trained or fine-tuned on project-specific data. This performance discrepancy across different misconfiguration types exposes existing LLMs' limitations in detecting misconfigurations that require highly project-specific knowledge.
\begin{table}
\begin{tabular}{l|c|c c c c c|c c c c c|c c c c|c c c c c|c c c c} \hline \multirow{3}{*}{**Models**} & \multicolumn{11}{c|}{\multirow{3}{*}{**File-Level (F.L)**}} & \multicolumn{11}{c|}{\multirow{3}{*}{**Parameter-Level (F.L)**}} & \multicolumn{11}{c|}{\multirow{3}{*}{**F.L**}} & \multicolumn{11}{c|}{\multirow{3}{*}{**P.L**}} & \multicolumn{11}{c|}{\multirow{3}{*}{**F.L**}} & \multicolumn{11}{c|}{\multirow{3}{*}{**P.L**}} \\ & & \multicolumn{3}{c}{**HC**} & & \multicolumn{3}{c}{**HB. AL} & & \multicolumn{3}{c}{**HL**} & \multicolumn{3}{c}{**YAG**} & \multicolumn{3}{c|}{**YAG**} & \multicolumn{3}{c|}{**HCE**} & \multicolumn{3}{c|}{**HDE**} & \multicolumn{3}{c|}{**HL**} & \multicolumn{3}{c|}{**HL**} & \multicolumn{3}{c|}{**HL**} & \multicolumn{3}{c|}{**HL**} & \multicolumn{3}{c|}{**HL**} & \multicolumn{3}{c|}{**YAG**} & \multicolumn{3}{c|}{**HDE**} & \multicolumn{3}{c|}{**HDE**} & \multicolumn{3}{c}{**HDE**} & \multicolumn{3}{c}{**Avg**} \\ \hline GPT-4 & 0.80 & 0.72 & 0.77 & 0.73 & 0.74 & 0.71 & 0.73 & 0.84 & 0.42 & 0.44 & 0.47 & 0.46 & 0.33 & 0.43 & 0.67 & 0.58 & 0.50 & 0.30 & 0.98 & 0.99 & 0.87 & 0.77 \\ GPT-3-5-turbo & 0.74 & 0.72 & 0.66 & 0.71 & 0.64 & 0.68 & 0.68 & **0.52** & 0.30 & 0.22 & 0.47 & 0.34 & 0.21 & 0.31 & 0.71 & 0.62 & 0.42 & 0.21 & 0.77 & 0.78 & 0.67 & 0.61 \\ Code-Davinci-002 & 0.74 & 0.73 & 0.70 & 0.71 & 0.71 & 0.62 & 0.69 & 0.59 & 0.47 & 0.41 & 0.42 & 0.41 & 0.48 & 0.44 & 0.65 & 0.64 & 0.51 & 0.38 & 0.85 & 0.79 & 0.69 & 0.53 \\ \hline \end{tabular}
\end{table}
Table 7. Results on HCommon using shots from other systems, e.g., the HB. columns show results of using HBase shots for HCommon. The HC. columns show results of using HCommon shots for HCommon.
Figure 4. Evaluation results under different shot combinations.
**Finding 7**.: _LLMs show a trend of decrease in configuration validation effectiveness as the number of parameters increases in the to-be-validated configuration file._
We evaluate how the F1-score from GPT-3.5-turbo relates to the number of configuration parameters in the validating configuration file. We present our results in Figure 6, which indicates that configuration validation becomes more challenging as the number of parameters in the validating configuration file grows. For instance, when the number of parameters jumps from 8 to 16, the performance of Ciri begins to deteriorate severely. This performance decline can be attributed to a potential information overload for LLMs at inference time. As the complexity of the configuration file grows, the difficulty of validating it also increases. The model may struggle to process all the validating configuration parameters and their relationships as more parameters are included.
**Finding 8**.: _Among the correctly identified misconfigurations, 93.6% of the reasons from LLMs directly address the root causes of the misconfigurations. Meanwhile, 6.4% of the reasons are misleading._
When an LLM identifies a misconfiguration in the validating configuration file, Ciri also requests the LLM to provide explanations for its judgment to aid developers in debugging the root cause and fixing the misconfiguration (§3.1). To assess the clarity and accuracy of these explanations, we randomly select one answer in which the misconfiguration is correctly identified per (sub-category, system, LLM) tuple, and collect a total of 204 answers (resulting from 2,040 queries). Upon careful manual review, we determined that 93.6% of the reasons given by the LLMs are clear and directly address the root cause of the misconfigurations. This indicates that LLMs can both detect misconfigurations and provide meaningful explanations for them. 1.9% of the answers contain a mix of correct and incorrect explanations across their queries. However, these incorrect reasons were filtered out by the text clustering method outlined in §3.1 because the correct reasons dominate. Figure 7 presents an example of mixed reasons, with the second reason being an instance of hallucination.
**Implication.** With frameworks like Ciri, state-of-the-art LLMs can effectively validate configurations for syntax or range violations. However, they are less effective for configurations that involve dependencies between parameters and software versions, showing the challenges for LLMs to reason about the interactions between configurations, and between configuration and code [53]. To improve the effectiveness for those misconfigurations, one can re-train or fine-tune LLMs with data related to dependencies and versions. Our results also show that small configuration snippets are much easier for LLMs to validate, supporting the incremental, continuous configuration change practice that is already widely adopted [16, 76]. Lastly, while LLMs often provide correct explanations of misconfigurations that can aid debugging, it is crucial for developers to use these explanations with discretion, as they may not be consistently accurate.
\begin{table}
\begin{tabular}{l l|c c c c c c|c c c c c c c c c c c c c c} \hline \multirow{2}{*}{**Category**} & \multirow{2}{*}{**Sub-category**} & \multicolumn{5}{c|}{**GPT-4**} & \multicolumn{5}{c|}{**GPT-3-5-turbo**} & \multicolumn{5}{c}{**Code-Dayni-002**} \\ \cline{3-14} & & HC. & HB. & AL. & HD. & YA. & ZK. & **avg** & HC. & HB. & AL. & HD. & YA. & ZK. & **avg** & HC. & HB. & AL. & HD. & YA. & ZK. & **avg** \\ \hline \multirow{9}{*}{**Syntax**} & Data type & 1.00 & 0.89 & 1.00 & 0.89 & 1.00 & 0.73 & 0.92 & 0.61 & 0.89 & 0.70 & 0.94 & 0.89 & 0.80 & 0.77 & 0.94 & 0.86 & 0.67 & 0.80 & 1.00 & 1.00 & 0.86 \\ & Path & 1.00 & 0.80 & 0.80 & 1.00 & 0.89 & 0.67 & 0.85 & 1.00 & 0.86 & 0.43 & 0.55 & 1.00 & 0.89 & 0.77 & 0.50 & 1.00 & 0.75 & 0.79 \\ & URL & 1.00 & N.A. & 0.00 & 0.80 & N.A. & N.A. & 0.82 & 1.00 & N.A. & 0.00 & 1.00 & N.A. & N.A. & 0.94 & 1.00 & N.A. & 0.00 & 1.00 & N.A. & N.A. & 0.89 \\ & IP Address & 1.00 & 0.89 & 1.00 & 1.00 & 1.00 & 0.73 & 0.92 & 0.86 & 0.80 & 0.89 & 1.00 & 0.89 & 0.89 & 1.00 & 0.80 & 0.50 & 0.89 & 0.75 & 0.89 & 0.81 \\ & Port & 0.89 & 0.89 & 0.89 & 1.00 & N.A. & 0.67 & 0.85 & 0.80 & 0.89 & 0.80 & 1.00 & 0.89 & 0.86 & 1.00 & 1.00 & 0.89 & 0.89 & N.A. & 1.00 & 0.95 \\ & Permission & 1.00 & 1.00 & 0.57 & 1.00 & N.A. & 0.88 & 0.86 & 1.00 & 1.00 & 1.00 & N.A. & N.A. & 0.95 & 1.00 & 1.00 & 0.67 & 0.86 & N.A. & 0.90 \\ \hline \multirow{9}{*}{**Range**} & Basic numeric & 0.67 & 1.00 & 0.80 & 0.57 & 0.75 & 0.73 & 0.75 & 0.67 & 0.55 & 0.67 & 0.75 & 0.67 & 0.53 & 0.62 & 0.57 & 0.57 & 0.60 & 0.67 & 0.86 & 1.00 & 0.70 \\ & Boul & 0.89 & 0.55 & 1.00 & 0.80 & 0.80 & 0.62 & 0.75 & 0.57 & 0.57 & 0.47 & 0.00 & 0.44 & 0.80 & 0.50 & 0.57 & 0.75 & 1.00 & 0.67 & 0.75 & 0.80 & 0.77 \\ & Enum & 0.67 & 0.80 & 0.57 & 1.00 & 0.89 & N.A. & 0.78 & 0.75 & 0.67 & 0.40 & 0.86 & 0.67 & N.A. & 0.63 & 0.75 & 0.86 & 0.86 & 0.57 & 1.00 & 0.84 & 0.81 \\ & IP Address & 0.89 & 0.89 & 0.89 & 1.00 & 0.89 & 0.57 & 0.83 & 0.89 & 1.00 & 0.57 & 0.86 & 0.67 & 0.77 & 0.89 & 0.89 & 0.50 & 0.57 & 0.48 & 0.59 & 0.57 \\ & Port & 1.00 & 0.86 & 0.86 & 0.67 & N.A. & 0.80 & 0.83 & 0.75 & 0.73 & 0.67 & N.A. & 1.00 & 0.76 & 0.86 & 0.86 & 0.86 & 1.00 & N.A. & 1.00 & 0.91 \\ & Permission & 0.75 & 1.00 & 0.50 & 1.00 & N.A. & N.A. & 0.82 & 0.50 & 0.00 & 0.57 & 0.86 & N.A. & N.A. & 0.63 & 0.57 & 1.00 & 0.50 & 0.00 & N.A. & N.A. & 0.44 \\ \hline \multirow{2}{*}{**Dependency**} & Control & 0.00 & 0.57 & 0.00 & 0.00 & 0.00 & N.A. & 0.13 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & N.A. & 0.00 & 0.00 & 0.57 & 0.00 & 0.00 & 0.05 & N.A. & 0.18 \\ & Value Relationship & 0.57 & 0.57 & 0.67 & 0.00 & 0.57 & N.A. & 0.46 & 0.00 & 0.44 & 0.00 & 0.29 & N.A. & 0.25 & 0.00 & 0.29 & 0.00 & 0.40 & 0.25 & N.A. & 0.21 \\ \hline \multirow{2}{*}{**Version**} & Parameter Change & 0.75 & 0.00 & 0.00 & 0.57 & 0.00 & N.A. & 0.27 & 0.20 & 0.00 & 0.00 & 0.00 & 0.00 & N.A. & 0.06 & 0.50 & 0.00 & 0.00 & 0.00 & 0.00 & 0.40 & N.A. & 0.17 \\ \hline \end{tabular}
\end{table}
Table 9. Parameter-level F1-score by misconfiguration types from Table 2. N.A. means no evaluation samples.
Figure 5. Misconfiguration of Control Dependency that LLMs cannot detect. The update interval for authentication is set but the secure authentication is disabled.
Figure 7. Correct and incorrect misconfig. reasons by LLMs.
### Biases in Validation Results
**Finding 9**.: _During configuration validation, LLMs more frequently pinpoint parameters that are more popular on the Internet. When a configuration file is entirely valid, LLMs more frequently report false alarms on the more popular parameters. When a parameter is misconfigured, LLMs demonstrate higher accuracy in identifying misconfigurations on the more popular parameters._
To quantify the popularity of a configuration parameter on the Internet, we measure the number of exact-match search results returned by Google for the parameter name as the keyword, and term it G-hits.
We first study whether there is a correlation between a parameter's popularity and the frequency with which it is reported as a false alarm by LLMs in valid configuration files. For the configuration files in the _ValidConfig_ dataset, we obtain the G-hits of each parameter in each file. We then track the frequency of LLMs pinpointing the parameter with the \(i^{th}\) highest G-hits in each file, where \(i=1...8\). Figure 8 shows the overall frequencies of the parameter with the \(i^{th}\) highest G-hits in the file being pinpointed. The smaller figures show the overall frequencies under different shot combinations (§5.2); the larger figure sums up the frequencies from the smaller figures. Overall, the frequency distributions of being pinpointed reveal a clear skewness towards parameters with higher G-hits.
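The tallying behind Figure 8 can be sketched as follows, assuming G-hits have already been collected for every parameter (the data layout is an assumption):

```python
# Sketch of the rank-frequency analysis behind Figure 8: for each file, record the
# G-hits rank of every parameter the LLM pinpointed (rank 1 = highest G-hits).
from collections import Counter

def rank_frequencies(files):
    """files: list of dicts {"ghits": {param: count}, "pinpointed": [param, ...]}."""
    freq = Counter()
    for f in files:
        ranked = sorted(f["ghits"], key=f["ghits"].get, reverse=True)
        for param in f["pinpointed"]:
            freq[ranked.index(param) + 1] += 1
    return freq
```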
We then study the correlation between a parameter's popularity and its validation accuracy. We perform similar calculations for parameters in configuration files from the _Misconfig_ dataset, and further separate the cases where the misconfigured parameter is identified versus when it is missed. We observe that the median G-hits of the misconfigured parameters that are correctly identified is higher than the median G-hits of the misconfigured parameters that are missed.
The correlation between parameter popularity and validation behavior across both datasets can be attributed to the nature of the training data of LLMs. Training data of LLMs is often sourced from publicly accessible domains (e.g., Common Crawl (Crawl, 2018)), which are easily accessible by search engines like Google. Topics or parameters that are popularly discussed are more likely to be memorized by the LLMs than the less popular ones, due to their more frequent presence in the training data.
**Implication.** LLMs are predisposed to prioritize configuration parameters that are more frequently discussed on the Internet during configuration validation. As a result, when employed as configuration validators, LLMs can effectively detect misconfigurations in parameters that are commonly referenced online, while having limited capacity to validate those that are not.
## 6. Threats to Validity
**External.** The external threats come from the evaluated LLMs and datasets. To mitigate threats related to the evaluated models, we evaluate Ciri with five widely used, state-of-the-art LLMs. To mitigate threats from the evaluated projects, we select six mature, widely used software systems of different types. These systems are commonly used in prior studies (K
comments, descriptions, change logs, and specifications. Such information can provide valuable context, which has been proven by the prior work [22, 58, 108].
Moreover, we plan to investigate advanced prompting techniques, such as Chain-of-Thoughts (CoT) prompting [83, 88, 107]. In configuration validation context, CoT prompting can mimic the reasoning process a system expert might follow during the validation process. By eliciting LLMs to generate intermediate reasoning steps leading to the validation answer, it not only makes the validation process more transparent but also potentially more accurate. This step-by-step reasoning may also help in identifying and rectifying biases in the model's validation process.
Lastly, integrating user feedback loops can be valuable. With user feedback on validation results, the iterative procedure can refine LLMs over time, leading to more accurate results.
**Detecting environment-related misconfigurations.** While our study primarily targets basic misconfigurations, such as syntax and semantic violations, the validity of a configuration file can vary across environments. For instance, a system's configuration might specify a file location, but the file's existence, readability, and format can determine its actual validity. To address these, LLMs can generate environment-specific scripts that can be run in the context of the environment. For example, given the configuration file as input, the LLM can generate a Python script like the following to validate the specified file path.
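A plausible example of such a generated check, verifying that a configured file path exists, is readable, and parses as JSON, might look like the following (the concrete path and the expected format are assumptions for illustration):

```python
# Illustrative stand-in for an LLM-generated environment check: the path and the
# expected JSON format are assumptions for the example, not values from the paper.
import json
import os
import sys

path = "/etc/myapp/acl.json"   # value taken from the configuration file under validation

if not os.path.isfile(path):
    sys.exit(f"misconfiguration: {path} does not exist")
if not os.access(path, os.R_OK):
    sys.exit(f"misconfiguration: {path} is not readable")
try:
    with open(path) as f:
        json.load(f)
except json.JSONDecodeError as err:
    sys.exit(f"misconfiguration: {path} is not valid JSON ({err})")
print(f"{path} looks valid")
```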
Such an approach can help identify issues like misconfigured paths, unreachable network addresses, missing packages, or invalid permissions. Notably, these scripts offer a lightweight alternative to more intensive configuration tests [23, 97].
For security reasons, running the checks generated by LLMs needs to be sandboxed. Moreover, the scripts can be reviewed by humans and transformed into lightweight validators. Given the recent success of code generation tools such as GitHub Copilot, we believe this is a promising direction to explore. In fact, our preliminary experiments show that, given appropriate examples, LLMs can generate such scripts quite well.
**Detecting source-code related misconfigurations.** In addition to the deployment environment, the system's source code can also affect the validity of a configuration. Implicit assumptions or latent software bugs can create ambiguities in understanding the true requirements of a configuration. To further illustrate this point, continuing the above example, if the documentation does not mention that the file needs to be in JSON format, but the code expects such a format, neither an LLM nor a human could infer this constraint based solely on the documentation.
To detect such issues, we can leverage LLMs' ability to reason about code. The strategy involves presenting both the configuration file and the relevant source code that exercises this configuration to
Figure 8. Frequency of the identified parameter with \(i^{th}\) highest G-hits in a configuration file. In different shot settings (subfigures), VC stands for a _ValidConfig_ shot, and MC stands for a _Misconfig_ shot.
Figure 9. The G-hits distribution of the correctly detected misconfigurations (orange), and the G-hits distribution of the missed misconfigurations (blue). The bars in box plots indicate medians.
the LLM. Techniques like static or dynamic program slicing (Rang et al., 2019; Wang et al., 2020) can help pinpoint the relevant code blocks. The LLM can then be tasked with distilling this code into a validator script. While this poses a challenge, the code reasoning capability of LLMs (Rang et al., 2019; Wang et al., 2020; Wang et al., 2020) suggests that this is promising and worth further exploration.
**Fine-tuning LLMs for configuration validation.** Apart from these, there is also the problem of tackling very system-specific parameters, which cannot be reasoned about based on common-sense knowledge. This problem is further exacerbated by the fact that software evolves constantly (Wang et al., 2020; Wang et al., 2020) by introducing new parameters and changing existing parameters to take different meanings and constraints. This is an important problem to tackle to unleash the full potential of LLMs for configuration validation. The most obvious approach for tackling this is to fine-tune models on new data to keep LLMs updated, but this is non-trivial, especially due to lack of data on the newly introduced parameters. The LLM community has found promising results in using synthetic data (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020) for fine-tuning these models, reducing the need for large amounts of real data. We believe that this is a promising direction to explore for configuration validation as well.
## 8. Related Work
**Configuration Validation.** Prior studies developed frameworks for developers to implement validators (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020) and test cases (Wang et al., 2020; Wang et al., 2020), as well as techniques to extract configuration constraints (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020). However, manually writing validators and tests requires extensive engineering efforts, and is hard to cover various properties of different configurations (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020). ML/NLP-based configuration validation techniques have been investigated. Traditional ML/NLP-based approaches learn correctness rules from configuration data (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020) and documents (Wang et al., 2020; Wang et al., 2020) and then use the learned rules for validation. These techniques face data challenges and rely on predefined learning features and models, making them hard to generalize to different projects and deployment scenarios. Complementary to prior work, we explore using LLMs for configuration validation, which can potentially address the limitations of traditional ML/NLP techniques towards automatic, effective validation solutions.
**Large Language Models for Software Engineering.** LLMs have become an exciting utility in the past few years, achieving impressive performance across various tasks such as text classification, text summarization, and logical reasoning (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020). Recently, they are being actively adopted to the software engineering domain, where they have demonstrated abilities in generating, summarizing, and translating code (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020), failure diagnosis (Wang et al., 2020; Wang et al., 2020), fault localization and program repair (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020). Large pre-trained models of coding data (LLMs for code) are also increasingly prominent (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020; Wang et al., 2020), and have been used for the aforementioned code-specific tasks. We take the first step to comprehensively evaluate LLMs for configuration validation. Our proposed framework for adopting LLMs as configuration validators, Ciri, is general to different LLMs.
## 9. Conclusion
As a first step towards harnessing recent advances in LLMs such as GPT and Codex for configuration validation, we developed Ciri as an open platform to experiment with LLMs as configuration validators and to analyze the promises and challenges of LLM-based validators. In this paper, we presented our analysis of Ciri's validation effectiveness on five popular LLMs using configuration data of six mature, widely deployed open-source systems. Our findings showed the potential of using LLMs for configuration validation--Ciri demonstrates the effectiveness of state-of-the-art LLMs as configuration validators, achieving file-level and parameter-level F1-scores of up to 0.75 and 0.56, respectively. We also explored the design space of LLM-based validators in terms of prompt engineering with few-shot learning. Despite the encouraging results, our study revealed that directly using LLMs as configuration validators is ineffective in detecting certain types of misconfigurations, such as dependency violations and version-related misconfigurations, and induces biases towards popular parameters. We discuss the open challenges, which shed light on new, exciting research directions for LLM-empowered validation techniques.
|
2306.14641 | Canonical equivalence of a charge in a time dependent,
spatially-homogeneous electromagnetic field to a time-dependent perturbed
oscillator | Here we prove that the classical (respectively, quantum) system, consisting
of a particle moving in a static electromagnetic field, is canonically
(respectively, unitarily) equivalent to a harmonic oscillator perturbed by a
spatially homogeneous force field. This system is canonically and unitarily
equivalent to a standard oscillator. Therefore, by composing the two
transformations we can integrate the initial problem. Actually, the eigenstates
of the initial problem turn out to be entangled states of the harmonic
oscillator. When the magnetic field is spatially homogeneous but
time-dependent, the equivalent harmonic oscillator has a time-varying
frequency. This system can be exactly integrated only for some particular cases
of the time dependence of the magnetic field. The unitary transformations
between the quantum systems are a representation of the canonical
transformations by unitary transformations of the corresponding Hilbert spaces. | Henryk Gzyl | 2023-06-26T12:29:36Z | http://arxiv.org/abs/2306.14641v1 | Canonical equivalence of a charge in a time dependent, spatially-homogeneous electromagnetic field to a time-dependent perturbed oscillator
###### Abstract
Here we prove that the classical (respectively, quantum) system, consisting of a particle moving in a static electromagnetic field, is canonically (respectively, unitarily) equivalent to a harmonic oscillator perturbed by a spatially homogeneous force field. This system is canonically and unitarily equivalent to a standard oscillator. Therefore, by composing the two transformations we can integrate the initial problem. Actually, the eigenstates of the initial problem turn out to be entangled states of the harmonic oscillator.
When the magnetic field is spatially homogeneous but time-dependent, the equivalent harmonic oscillator has a time-varying frequency. This system can be exactly integrated only for some particular cases of the time dependence of the magnetic field.
The unitary transformations between the quantum systems are a representation of the canonical transformations by unitary transformations of the corresponding Hilbert spaces.
**Keywords**: Canonical transformation, Particles in electromagnetic fields, Harmonic Oscillators
**MSC2020**: 70H15, 81Qxx, 81S99.
## 1 Introduction and Preliminaries
We first establish the canonical equivalence of the Hamiltonian describing the motion of a particle in a static electromagnetic field to that of a classical harmonic
oscillator. Then we show how to implement the canonical transformations by unitary transformations between the corresponding Hilbert spaces. This is done in two steps, first, we map the particle moving under the action of the electromagnetic fields into a harmonic oscillator subject to a spatially constant, but time-dependent force, and then we show that this system is equivalent to a simple harmonic oscillator.
To do this, we put together two separate lines of work, in which each of the two steps was carried out separately; see [12] and [13]. The second step was considered only in one dimension, but it contains two subcases that are useful here, as well as many references to previous work that we do not repeat here.
Let us establish some notational conventions. Vectors will be either 3-dimensional or 6-dimensional column vectors written in boldface. To refer to the components of \(\mathbf{v},\) we write \(\mathbf{v}=(v_{1},v_{2},v_{3})^{\dagger},\) where the superscript "\(\dagger\)" will always mean transpose of the corresponding object. We also need \(\mathbf{v}=(\bar{\mathbf{v}},v_{3})^{\dagger}\) where \(\bar{\mathbf{v}}=(v_{1},v_{2})^{\dagger}\) will stand for the 2-vector consisting of the first two components of \(\mathbf{v}.\) By \(\langle\mathbf{v},\mathbf{w}\rangle\) (resp. \(\mathbf{v}\times\mathbf{w}\)) we denote the usual scalar (resp. vector) product of the two vectors.
The classical dynamics of the three systems are described by:
\[H_{1}(\mathbf{x},\mathbf{p})=\frac{1}{2m}\bigl{(}\mathbf{p}-\mathbf{A}(\mathbf{x})\bigr{)}^{2}-q\langle\mathbf{x},\mathbf{E}\rangle. \tag{1.1}\] \[H_{2}(\mathbf{Q},\mathbf{P})=\frac{1}{2m}\|\bar{\mathbf{P}}\|^{2}+\frac{1}{2m}P_{3}^{2}+\frac{1}{2}\omega^{2}\|\bar{\mathbf{Q}}\|^{2}-q\langle\mathbf{Q},\mathbf{E}(t)\rangle.\] (1.2) \[H_{3}(\mathbf{\xi},\mathbf{\eta})=\frac{1}{2m}\|\bar{\mathbf{\eta}}\|^{2}+\frac{1}{2m}\eta_{3}^{2}+\frac{1}{2}\omega^{2}\|\bar{\mathbf{\xi}}\|^{2}. \tag{1.3}\]
The time evolution of the corresponding quantum systems is determined by the
operators
\[\boldsymbol{H}_{1}=\frac{1}{2m}(-i\hbar\nabla-\boldsymbol{A}(\boldsymbol{x}))^{2}-q\langle\boldsymbol{x},\boldsymbol{E}\rangle.\] \[\boldsymbol{H}_{1}=-\frac{\hbar^{2}}{2m}\Delta_{\boldsymbol{x}}-\frac{i\hbar}{2}\langle\boldsymbol{\Omega}\bar{\boldsymbol{x}},\nabla_{\boldsymbol{x}}\rangle+\frac{m\omega^{2}}{2}\langle\bar{\boldsymbol{x}},\bar{\boldsymbol{x}}\rangle-q\langle\boldsymbol{x},\boldsymbol{E}\rangle \tag{1.4}\] \[\boldsymbol{H}_{2}=-\frac{\hbar^{2}}{2m}\Delta_{\boldsymbol{Q}}+\frac{1}{2}\omega^{2}\|\bar{\boldsymbol{Q}}\|^{2}-q\langle\boldsymbol{Q},\boldsymbol{E}(t)\rangle.\] (1.5) \[\boldsymbol{H}_{3}=-\frac{\hbar^{2}}{2m}\Delta_{\boldsymbol{\xi}}+\frac{1}{2}\omega^{2}\|\bar{\boldsymbol{\xi}}\|^{2}. \tag{1.6}\]
The explanations of the notations come up a few lines below. In each set, the first Hamiltonian describes the motion of a particle of charge \(q\) and mass \(m\) in a static electromagnetic field. The second describes the motion of a particle under the action of a linear restoring force plus a spatially constant but time-dependent force, whereas the third describes a particle under the action of a planar restoring force. Note as well that if \(\boldsymbol{E}\) were absent, then \(H_{2}\) and \(H_{3}\) coincide. Ditto for their quantized versions. To obtain the version of \(\boldsymbol{H}_{1}\) in the second line from the first, note that when acting on functions of \(\boldsymbol{x}\), we have:
\[\langle\boldsymbol{\Omega}\boldsymbol{x},\nabla_{\boldsymbol{x}}\rangle+\langle\nabla_{\boldsymbol{x}},\boldsymbol{\Omega}\boldsymbol{x}\rangle=2\langle\boldsymbol{\Omega}\boldsymbol{x},\nabla_{\boldsymbol{x}}\rangle+\mathrm{div}(\boldsymbol{\Omega}\boldsymbol{x})=2\langle\boldsymbol{\Omega}\boldsymbol{x},\nabla_{\boldsymbol{x}}\rangle\]
because \(\mathrm{div}(\boldsymbol{\Omega}\boldsymbol{x})=\mathrm{tr}(\boldsymbol{\Omega})=0.\) Above we used the symbol \(\boldsymbol{\Omega}\) to denote the cross product matrix for \(\boldsymbol{B}\), that is \(\boldsymbol{\Omega}\boldsymbol{x}=\boldsymbol{B}\times\boldsymbol{x}\) for any \(\boldsymbol{x}.\) Except for the presence of the electric field, the classical (and quantum) equivalence of (1.1) and (1.2) (resp. (1.4) and (1.5)) is, essentially, the subject matter of [12]. Here we extend the result to the current setup. The thrust of [13] is to provide a one-dimensional equivalence of (1.2) to (1.3) (resp. (1.5) to (1.6)), and to examine the possible global phase that appears when the particle under a restoring force is perturbed by a time-dependent, but spatially homogeneous force. The extension considered here implies that the presence of a magnetic field does not induce a global change of phase unless there is also an electric field present.
We also consider the case in which the magnetic field is spatially homogeneous, but time-dependent. We consider two cases: First, the magnetic field is time-dependent,
but its direction is fixed, and second, the magnetic field rotates about a fixed axis. In the first case, the system is equivalent to a harmonic oscillator subject to an external, spatially constant, but time-dependent force. As we shall see below, the two cases are equivalent to a harmonic oscillator with a time-varying frequency. In the first case, the frequency is time-dependent, but is the same for all components of the oscillator. See [18] for example. The equation of motion is of the Hill type. For a discussion in the applied mathematics literature, not overlapping much with the physics literature, see [18] for example. In the physical literature see [14], [17], [27], [8], [15]. For a group theoretical study of quadratic Hamiltonians with time-dependent coefficients, see [22]. Both [SK] and [19] use canonical transformations in a way totally unrelated to ours. For a review of the Ermakov invariant, which is an approach used in several of the works just cited, see [15]. This invariant is an ingenious way to deal with the oscillator with a time-dependent frequency, but it works for particular time dependencies. For other approaches to the problem discussed here see [7], [9], [28]. The description of the motion of a single particle in a static electromagnetic field is important for the computation of the magnetic moment of the electron. See [4] for example.
We devote the remainder of this section to introducing more notations and establishing some preliminary results. Then, in Section 2, we establish the classical equivalences between the Hamiltonians. We add that the equivalence has been noted before and is already a textbook matter, but the contribution here is to present the equivalence as a canonical transformation. In Section 3, we establish the equivalences of the quantized versions of the systems by making use of the generating functions to define the unitary operators that realize the equivalence between these systems. Once the equivalence of the original system to a harmonic oscillator in an external field has been established, we relate the energy eigenstates of the original system to those of the harmonic oscillator. The result is that the eigenstates of a charged particle in a static electromagnetic field are entangled states of the simple harmonic oscillator.
In Section 5 we consider two variations on the theme of a particle moving in
a spatially constant but time-dependent electromagnetic field. There we establish that the techniques in Section 2 lead to a harmonic oscillator with a time-dependent frequency. This system has been studied extensively. As mentioned above, in the general case it can only be dealt with using approximations.
### Notations and preliminary results
When \(\mathbf{B}\) is constant, \(\mathbf{A}(\mathbf{x})=\mathbf{B}\times\mathbf{x}/2=\mathbf{\Omega}\mathbf{x}/2.\) When \(\mathbf{B}=B\hat{\mathbf{k}},\) then:
\[\mathbf{\Omega}=\begin{pmatrix}0&-\omega&0\\ \omega&0&0\\ 0&0&0\end{pmatrix}=\begin{pmatrix}\mathbf{\Omega}_{0}&\mathbf{0}\\ \mathbf{0}^{t}&0\end{pmatrix}\quad\text{with}\quad\mathbf{\Omega}_{0}=\begin{pmatrix} 0&-\omega\\ \omega&0\end{pmatrix} \tag{1.7}\]
where we put \(\omega=\frac{qB}{mc}\) for the standard cyclotron frequency. The vector \(\mathbf{0}\) is the two-dimensional zero vector, and the superscript "t" stands for the transpose of the indicated object. Also, keep in mind that \(\mathbf{\Omega}_{0}^{t}\mathbf{\Omega}_{0}=\omega^{2}\mathbb{I}.\) Below we make extensive use of the fact that
\[R(t)=e^{t\mathbf{\Omega}}=\begin{pmatrix}e^{t\mathbf{\Omega}_{0}}&\mathbf{0}\\ \mathbf{0}^{t}&1\end{pmatrix}\quad\text{with}\quad e^{t\mathbf{\Omega}_{0}}=\begin{pmatrix}\cos(\omega t)&-\sin(\omega t)\\ \sin(\omega t)&\cos(\omega t)\end{pmatrix} \tag{1.8}\]
stands for a rotation matrix about the \(z-\)axis with constant angular speed \(\omega.\)
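For the reader's convenience, formula (1.8) can be checked numerically. The following minimal Python sketch (the values of \(\omega\) and \(t\) are arbitrary test values, not taken from the text) compares the matrix exponential of \(\mathbf{\Omega}_{0}\) with the explicit planar rotation matrix.

```python
# Minimal numerical check of (1.7)-(1.8); omega and t are arbitrary test values.
import numpy as np
from scipy.linalg import expm

omega, t = 1.3, 0.7
Omega0 = np.array([[0.0, -omega],
                   [omega, 0.0]])
R_expm = expm(t * Omega0)                          # e^{t Omega_0}
R_explicit = np.array([[np.cos(omega * t), -np.sin(omega * t)],
                       [np.sin(omega * t),  np.cos(omega * t)]])
print(np.allclose(R_expm, R_explicit))             # expect True, as in (1.8)
```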
To establish the canonical equivalence between (1.2) and (1.3), we need to compute the trajectories of the motion described by the Hamiltonian (1.2). Notice that the three degrees of freedom are separated. We have
\[\frac{1}{2m}P_{i}^{2}+\frac{1}{2}\omega^{2}Q_{i}^{2}-qQ_{i}E_{i}(t),\ \ i=1,2\]
and
\[\frac{1}{2m}P_{3}^{2}-qQ_{3}E_{3}(t).\]
The last one is obtained from the former by setting \(\omega=0.\) Temporarily dropping the reference to the label of the coordinates, the Hamilton equations of motion are:
\[\begin{split}&\frac{d}{dt}\begin{pmatrix}Q\\ P\end{pmatrix}=\begin{pmatrix}\frac{\partial H}{\partial P}\\ -\frac{\partial H}{\partial Q}\end{pmatrix}=\begin{pmatrix}P\\ -\omega^{2}Q+k(t)\end{pmatrix}=\begin{pmatrix}0&1\\ -\omega^{2}&0\end{pmatrix}\begin{pmatrix}Q\\ P\end{pmatrix}+\begin{pmatrix}0\\ k(t)\end{pmatrix}\\ &=\mathbb{H}_{0}\begin{pmatrix}Q\\ P\end{pmatrix}+\begin{pmatrix}0\\ k(t)\end{pmatrix}.\end{split} \tag{1.9}\]
The initial conditions are \(Q(0)=Q_{0},P(0)=P_{0}.\) We put \(k(t)=qE(t)\) for short. The equations of motion of the unperturbed oscillator are obtained by setting \(k(t)=0,\) and the solution for \((Q_{3},P_{3})\) is obtained by setting \(\omega=0.\) Write \(Z(t)=(Q(t),P(t))^{\dagger}.\) The solution of the system (1.9) is:
\[Z(t)=U(t)Z(0)+\int_{0}^{t}U(t-s)\boldsymbol{k}(s)ds=Z_{h}(t)+Z_{nh}(t). \tag{1.10}\]
where the matrix \(U(t)\) is given by:
\[U(t)=\left(\begin{array}{cc}\cos(\omega t)&\frac{1}{\omega}\sin(\omega t)\\ -\omega\sin(\omega t)&\cos(\omega t)\end{array}\right), \tag{1.11}\]
Clearly \(U(t)\) satisfies \(U(t+s)=U(t)U(s)\) or \(U(t-s)=U(t)U(-s)\) for all \(s,t.\) In the last expression of (1.10), \(Z_{h}(t)\) denotes the first term in the middle; the subscript \(h\) stands for _homogeneous_ and \(nh\) for _non-homogeneous_.
Note that \(Z_{nh}(t)\) is just the particular solution to (1.9) with zero initial conditions, and it describes the motion of the origin of the coordinate system. Therefore, we might think of (1.10) as the position of the particle with respect to a system whose origin of coordinates moves according to \(Z_{nh}.\) Also, \(Z_{h}(t)=Z(t)-Z_{nh}(t)\) describes the motion of a simple harmonic oscillator, which is consistent with the fact that
\[\langle(Z(t)-Z_{nh}(t)),\mathbb{H}_{0}(Z(t)-Z_{nh}(t))\rangle=\text{constant} =\langle Z(0),\mathbb{H}_{0}Z(0)\rangle.\]
This follows readily from the fact that
\[U^{\dagger}(t)\mathbb{H}_{0}U(t)=\mathbb{H}_{0}. \tag{1.12}\]
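The propagator formula (1.10)-(1.11) also lends itself to a quick numerical sanity check. The sketch below (the frequency, the drive \(k(t)\), the initial data, and the final time are illustrative assumptions, with mass \(m=1\) as in (1.9)) integrates (1.9) directly and compares the result with the right-hand side of (1.10).

```python
# Minimal numerical check of (1.10)-(1.11); k(t), omega, Z0 and T are illustrative.
import numpy as np
from scipy.integrate import solve_ivp, quad

omega = 1.5
k = lambda t: 0.3 * np.cos(0.7 * t)                # assumed drive q E(t)

def U(t):                                          # the propagator (1.11)
    c, s = np.cos(omega * t), np.sin(omega * t)
    return np.array([[c, s / omega], [-omega * s, c]])

H0 = np.array([[0.0, 1.0], [-omega ** 2, 0.0]])
Z0 = np.array([1.0, 0.0])
T = 2.0

# direct integration of (1.9): dZ/dt = H0 Z + (0, k(t))^T
sol = solve_ivp(lambda t, z: H0 @ z + np.array([0.0, k(t)]),
                (0.0, T), Z0, rtol=1e-10, atol=1e-12)
Z_direct = sol.y[:, -1]

# propagator formula (1.10): Z(T) = U(T) Z0 + int_0^T U(T-s) (0, k(s))^T ds
Z_h = U(T) @ Z0
Z_nh = np.array([quad(lambda s: (U(T - s) @ np.array([0.0, k(s)]))[i], 0.0, T)[0]
                 for i in range(2)])
print(Z_direct, Z_h + Z_nh)                        # the two results should agree
```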
To go from this to the 3-dimensional case, let us introduce the following more compact notations
\[\bar{Z}_{1}=\begin{pmatrix}Q_{1}\\ P_{1}\end{pmatrix},\ \ \bar{Z}_{2}=\begin{pmatrix}Q_{2}\\ P_{2}\end{pmatrix},\ \ \bar{Z}_{3}=\begin{pmatrix}Q_{3}\\ P_{3}\end{pmatrix}, \tag{1.13}\] \[\boldsymbol{Z}=(\bar{Z}_{1},\bar{Z}_{2},\bar{Z}_{3})^{\dagger}. \tag{1.14}\]
As at the beginning, \(\mathbf{Q}=(\bar{\mathbf{Q}}^{\dagger},Q_{3})^{\dagger}=(Q_{1},Q_{2},Q_{3})^{\dagger}\) and \(\mathbf{P}=(\bar{\mathbf{P}}^{\dagger},P_{3})^{\dagger}=(P_{1},P_{2},P_{3})^{\dagger}\). To describe the solution to the full equation of motion, we introduce the following notations:
\[\mathbf{U}(t)=\begin{pmatrix}U(t)&0&0&0\\ 0&U(t)&0&0\\ 0&0&1&t\\ 0&0&0&1\end{pmatrix} \tag{1.15}\]
Here \(U(t)\) is the \(2\times 2\) matrix introduced in (1.11). With that, we have:
\[\mathbf{Z}=\mathbf{U}(t)\mathbf{Z}(0)+\int_{0}^{t}\mathbf{U}(t-s)\mathbf{K}(s)ds=\mathbf{Z}_{h}(t)+\bm {Z}_{nh}(t). \tag{1.16}\]
Here we put \(\mathbf{K}(t)=\big((0,qE_{1}(t)),(0,qE_{2}(t)),(0,qE_{3}(t))\big)^{\dagger}.\) For the record, let us write explicitly what \(Z_{3}(t)\) looks like. According to (1.10) we have:
\[Z_{3}(t)=\begin{pmatrix}Q_{3}(0)+P_{3}(0)t\\ P_{3}(0)\end{pmatrix}+\begin{pmatrix}\int_{0}^{t}\frac{\sin(\omega(t-s))}{\omega}qE_{3}(s)ds\\ \int_{0}^{t}\cos(\omega(t-s))qE_{3}(s)ds\end{pmatrix}=Z_{3,h}(t)+Z_{3,nh}(t). \tag{1.17}\]
It is up to the reader to verify that this reduces correctly to the solution when \(\omega=0\) and/or \(E_{3}(t)=\) constant.
## 2 The classical equivalence of the Hamiltonians
The canonical transformation relating (1.1) to (1.2) is determined from the following generating function (see [1] or [11]):
\[F(\mathbf{x},\mathbf{P},t)=\langle\mathbf{x},R(-t/2)\mathbf{P}\rangle+A(t)=\langle\bar{\mathbf{x}},e^{-t\mathbf{\Omega}_{0}/2}\bar{\mathbf{P}}\rangle+x_{3}P_{3}. \tag{2.1}\]
The transformation equations (change of variables) that (2.1) induces are:
\[\bar{\mathbf{Q}}=e^{t\mathbf{\Omega}_{0}/2}\bar{\mathbf{x}};\ \ \bar{\mathbf{P}}=e^{t\mathbf{ \Omega}_{0}/2}\bar{\mathbf{p}},\ Q_{3}=x_{3},\ P_{3}=p_{3},\ \mathbf{E}(t)=R(t/2)\mathbf{E}. \tag{2.2}\] \[H_{2}(\mathbf{Q},\mathbf{P})=H_{1}(\mathbf{x},\mathbf{p})+\frac{\partial F}{ \partial t}=\frac{1}{2}\langle\bar{\mathbf{P}},\bar{\mathbf{P}}\rangle+\frac{m\omega ^{2}}{2}\langle\bar{\mathbf{Q}},\bar{\mathbf{Q}}\rangle+\frac{1}{2}P_{3}^{2}. \tag{2.3}\]
We used the fact that \(\langle\bar{\mathbf{P}},\bar{\mathbf{P}}\rangle=\langle\bar{\mathbf{p}},\bar{\mathbf{p}}\rangle\), \(\langle\bar{\mathbf{Q}},\bar{\mathbf{Q}}\rangle=\langle\bar{\mathbf{x}},\bar{\mathbf{x}}\rangle\), and that
\[\partial F/\partial t=\frac{1}{2}\langle\bar{\mathbf{p}},\mathbf{\Omega}_{0}\bar{\mathbf{x}}\rangle,\]
using (2.2) after differentiating. In the new coordinates, we have a two-dimensional harmonic oscillator plus a free motion along the \(Q_{3}\)-axis.
To establish the equivalence of (1.2) to (1.3) we use the transformation generated by:
\[F(\mathbf{Q},\mathbf{\eta},t)=\langle(\mathbf{Q}- \mathbf{Q}_{nh}(t)),(\mathbf{\eta}+m\dot{\mathbf{Q} }_{nh}(t))\rangle+A(t) \tag{2.4}\]
To begin with, this leads to the following change of variables:
\[\mathbf{\xi}=\mathbf{Q}-\mathbf{Q}_{nh},\ \ \ \mathbf{\eta}=\mathbf{P}-m\dot{\mathbf{Q}}_{nh}. \tag{2.5}\]
To obtain \(H_{3}(\mathbf{\xi},\mathbf{\eta}),\) use \(H_{3}=H_{2}+\partial F/\partial t,\) use (2.5) and the fact that \(m\ddot{\mathbf{Q}}_{nh}=-m\omega^{2}\mathbf{Q}_{nh}+q\mathbf{E}(t),\) and require that \(A(t)\) satisfies
\[\dot{A}(t)-\frac{m}{2}\langle\dot{\mathbf{Q}}_{nh},\dot{\mathbf{Q}}_{nh}\rangle+\frac{m\omega^{2}}{2}\langle\mathbf{Q}_{ nh},\mathbf{Q}_{nh}\rangle-q\langle\mathbf{Q}_{nh},\mathbf{E}(t)\rangle=0. \tag{2.6}\]
This leads to (1.3). What is perhaps interesting is that from (2.6) we obtain:
\[A(t)=\int_{0}^{t}L(\mathbf{Q}_{nh}(s),\dot{\mathbf{Q}}_{nh }(s))ds, \tag{2.7}\]
which happens to be the action along the curve \(t\rightarrow\mathbf{Q}_{nh}(t).\) Of course, \(L\) is the Lagrangian function dual to \(H_{2}.\)
## 3 Unitary representation of the canonical transformations
As underlying state space for any of the three systems, we consider the space \({\cal H}\) of square-integrable functions, and do not worry too much about matters related to the domain of the differential operators that come up.
Given a state vector \(|\psi\rangle,\) we denote its representation in coordinates or in momenta by \(\psi(\mathbf{x})\) and \(\psi(\mathbf{p}).\) These two are related as usual, that is:
\[\psi(\mathbf{p})=\frac{1}{(2\pi)^{3/2}}\int e^{-i\langle\mathbf{p},\mathbf{x}\rangle}\psi(\mathbf{x})d\mathbf{x},\ \ \psi(\mathbf{x})=\frac{1}{(2\pi)^{3/2}}\int e^{i\langle\mathbf{p},\mathbf{x}\rangle}\psi(\mathbf{p})d\mathbf{p}. \tag{3.1}\]
Similarly for the other two pairs of conjugate variables: \((\mathbf{Q},\mathbf{P})\) and \((\mathbf{\xi},\mathbf{\eta}).\)
Since all canonical transformations reduce to the identity at \(t=0,\) we suppose that
the state at \(t=0\) is the same in all three cases regardless of the time evolution operator chosen. Since the simplest Hamiltonian is \(H_{3},\) we suppose that we know how to solve
\[i\hbar\frac{\partial\psi(t)}{\partial t}=\mathbf{H}_{3}\psi(t)\ \ \ \mbox{with}\ \ \psi(0)\;\mbox{given}. \tag{3.2}\]
The idea is to define a representation of (2.4) by means of a time-dependent unitary transformation \(U_{t},\) and prove that \(\phi(t)=U_{t}\psi(t)\) satisfies
\[i\hbar\frac{\partial\phi(t)}{\partial t}=\mathbf{H}_{2}\phi(t)\ \ \ \mbox{with}\ \ \phi(0)=\psi(0). \tag{3.3}\]
Similarly, we implement (2.1) by a unitary transformation \(U_{t}\) -we use the same symbol and use the tags for the coordinates to tell each case apart, and prove that if \(\phi(t)\) solves (3.3), then \(\varphi(t)=U_{t}\phi(t)\) solves
\[i\hbar\frac{\partial\varphi(t)}{\partial t}=\mathbf{H}_{1}\varphi(t )\ \ \mbox{with}\ \ \varphi(0)=\phi(0)=\psi(0). \tag{3.4}\]
As the easiest equation to solve is (3.2), we first show how to obtain the solution to (3.3) from the solution to (3.2), and then how to obtain the solution to (3.4) from that of (3.3).
So, let \(\phi(t,\mathbf{Q})\) be a solution to (3.3) with initial condition \(\psi_{0}(\mathbf{Q}).\) The unitary version of (2.1) is defined by
\[\psi(t,\mathbf{x})=U_{t}\phi(t,\mathbf{x})=\frac{1}{(2\pi)^ {3/2}}\int e^{iF(\mathbf{x},\mathbf{P},t)/\hbar}\phi(t, \mathbf{P})d\mathbf{P}. \tag{3.5}\]
Here \(F(\mathbf{x},\mathbf{P},t)\) is given by (2.1). Invoking (3.1), this reduces to the identity transformation at \(t=0.\) A simple computation using (3.1) yields
\[\psi(t,\mathbf{x})=\phi(t,R(t/2)\bar{\mathbf{x}},x_{3}). \tag{3.6}\]
The essential computation here is the following.
\[\frac{\partial}{\partial t}\psi(t,R(t/2)\bar{\mathbf{x}},x_{3})=\Big(\frac{\partial\psi}{\partial t}\Big)(t,R(t/2)\bar{\mathbf{x}},x_{3})+\frac{1}{2}\big\langle\mathbf{\Omega}_{0}R(t/2)\bar{\mathbf{x}},(\nabla_{\bar{\mathbf{x}}}\psi)(t,R(t/2)\bar{\mathbf{x}},x_{3})\big\rangle.\]
We used the fact that \(\nabla_{\bar{\mathbf{x}}}=R(t/2)\nabla_{\bar{\mathbf{Q}}}\) as follows from the change of variables. From this, it also follows that
\[\Delta_{\bar{\mathbf{Q}}}=\langle\nabla_{\bar{\mathbf{Q}}},\nabla_{\bar{\mathbf{Q}}}\rangle=\langle\nabla_{\bar{\mathbf{x}}},\nabla_{\bar{\mathbf{x}}}\rangle=\Delta_{\bar{\mathbf{x}}}.\]
These remarks establish the correspondence between solutions to (3.3) and (3.4). To establish the correspondence between solutions to (3.2) and (3.3), we implement (2.4) as a unitary transformation. This case was dealt with in [13]; here we quote from that work. Beware of the changes in notation.
Again we use the proposal (3.5), but with (2.4) in the exponent. So, suppose that \(\varphi(t,\mathbf{\xi})\) is a solution to (3.2) with initial data \(\psi_{0}(\mathbf{\xi})\), and put
\[\phi(t,\mathbf{Q})=(U_{t}\varphi)(t,\mathbf{Q})=\frac{1}{(2\pi)^{3/2}}\int e^{iF(\mathbf{Q},\mathbf{\eta},t)/\hbar}\varphi(t,\mathbf{\eta})d\mathbf{\eta}. \tag{3.7}\]
Again, the transform can be explicitly computed. The result is:
\[\phi(t,Q)=e^{i(f(\mathbf{Q},t)+A(t))/\hbar}\varphi(t,\mathbf{ Q}-\mathbf{Q}_{nh}(t)). \tag{3.8}\]
This transformation is unitary and, besides the shift relative to \(\mathbf{Q}_{nh}(t)\), the new wave function acquires a global phase, which does not affect the normalization, but it does affect the computation of transition rates due to the perturbation, as shown in [13]. Here \(A(t)\) was introduced in (2.7), and we put \(f(t,\mathbf{Q})=\langle(\mathbf{Q}-\mathbf{Q}_{nh}(t)),\mathbf{P}_{nh}(t)\rangle.\) It takes but a simple computation to verify that under \(U_{t}\)
\[\mathbf{\xi}\varphi(t,\mathbf{\xi})\rightarrow(\mathbf{Q}-\mathbf{Q}_{nh}(t))\phi(t,\mathbf{Q})\]
\[-i\hbar\nabla_{\mathbf{\xi}}\varphi(t,\mathbf{\xi})\rightarrow(-i\hbar\nabla_{\mathbf{Q}}+m\dot{\mathbf{Q}}_{nh}(t))\phi(t,\mathbf{Q}).\]
This is the quantized version of (2.5). Observe that in the Hamiltonians (1.3) and (1.6), the degrees of freedom are separated, and note as well that the generating function is additive (a sum of generating functions for each degree of freedom). Therefore, to verify that \(\phi(t,\mathbf{Q})\) satisfies (3.3) if \(\varphi(t,\mathbf{\xi})\) satisfies (3.2), it suffices to do so for each degree of freedom. This is carried out explicitly and in detail in [13].
To sum up, we have proved that, for a given initial state \(\psi(0)\), we have:
\[i\hbar\frac{\partial\varphi}{\partial t}=H_{3}\varphi\Longrightarrow i\hbar\frac{\partial\phi}{\partial t}=H_{2}\phi\Longrightarrow i\hbar\frac{\partial\psi}{\partial t}=H_{1}\psi,\ \ \ \varphi(0)=\phi(0)=\psi(0) \tag{3.9}\]
whenever \(\psi\), \(\phi\) and \(\varphi\) are related by (3.7) and (3.5) respectively.
## 4 The transformation of eigenstates
Since the degrees of freedom are separated in \(\boldsymbol{H}_{3},\) the eigenstates of definite energy are products of the eigenstates of each degree of freedom, and the total energy is the sum of the corresponding energies. In our case, we have:
\[\varphi_{\boldsymbol{n},k}(\bar{\boldsymbol{\xi}},\xi_{3})=\varphi_{n_{1}}(\xi_{1})\varphi_{n_{2}}(\xi_{2})e^{ik\xi_{3}/\hbar} \tag{4.1}\] \[E_{\boldsymbol{n},k}=\hbar\omega(n_{1}+\frac{1}{2})+\hbar\omega(n_{2}+\frac{1}{2})+\frac{\hbar^{2}k^{2}}{2m}. \tag{4.2}\]
We put \(\boldsymbol{n}=(n_{1},n_{2})\) as labels of the eigenstates (eigenvalues) of the 2-dimensional oscillator embedded in \(\boldsymbol{H}_{3}.\) Thus, the spectrum of \(\boldsymbol{H}_{3}\) has a discrete part embedded in a continuous part. Also, as usual, \(\varphi_{n}\) is:
\[\varphi_{n}(x)=\left(\frac{\alpha}{\pi^{1/2}2^{n}n!}\right)^{1/2}H_{n}(\alpha x)e^{-\frac{1}{2}\alpha^{2}x^{2}}, \tag{4.3}\]
where \(H_{n}(x)\) is the Hermite polynomial of degree \(n,\) and \(\alpha=(m\omega/\hbar)^{1/2}.\)
The passage from (3.2) to (3.3) using (3.8) involves each degree of freedom separately. And as the transformation only acts on the spatial part of the wave function, after applying (3.8), the transform of (4.1) is:
\[\begin{array}{l}\phi_{\boldsymbol{n},k}(\boldsymbol{Q},t)\\ =e^{-itE_{\boldsymbol{n},k}/\hbar}e^{i(f(\boldsymbol{Q},t)+A(t))/\hbar}\varphi_{n_{1}}(Q_{1}-Q_{1,nh}(t))\varphi_{n_{2}}(Q_{2}-Q_{2,nh}(t))e^{-i\frac{k}{\hbar}(Q_{3}-Q_{3,nh}(t))}.\end{array} \tag{4.4}\]
The passage from the above solution of (3.3) to (3.4) is as in (3.6), that is:
\[\begin{array}{l}\psi_{\boldsymbol{n},k}(\boldsymbol{x},t)=\phi_{\boldsymbol{n},k}(R(t/2)\boldsymbol{x},t)\\ =e^{-itE_{\boldsymbol{n},k}/\hbar}e^{i(f(R(t/2)\boldsymbol{x},t)+A(t))/\hbar}\varphi_{n_{1}}(R_{1}(t)-R_{1,nh}(t))\varphi_{n_{2}}(R_{2}(t)-R_{2,nh}(t))e^{-i\frac{k}{\hbar}(x_{3}-x_{3,nh}(t))}.\end{array} \tag{4.5}\]
To simplify the notation, we introduce:
\[\begin{array}{l}R_{i}(t)=\overline{(R(t/2)\boldsymbol{x})_{i}},\ \ \ \mbox{for}\ \ i=1,2.\\ x_{3,nh}=Q_{3,nh}.\end{array} \tag{4.6}\]
The matrix \(R(t)\) was introduced in (1.8). To get rid of the shifted arguments in (4.6), we make use of the summation formula:
\[H_{n}(u+v)=\sum_{k=0}^{n}{n\choose k}(2v)^{n-k}H_{k}(u). \tag{4.7}\]
Invoke (4.3) to obtain:
\[\varphi_{n}(u+v)=\sum_{k=0}^{n}A_{n,k}(2\alpha v)^{n-k}\varphi_{k}(u),\ \ \ \mbox{with}\ \ \ A_{n,k}=\left(\frac{2^{k}k!}{2^{n}n!}\right)^{1/2}{n\choose k}. \tag{4.8}\]
With this, we obtain \(\phi_{\boldsymbol{n},k}(\bar{\boldsymbol{x}})\) as a linear combination of products of the type
\[\varphi_{k_{1}}((R(t/2)\boldsymbol{x})_{1})\varphi_{k_{2}}((R(t/2)\boldsymbol {x})_{2}),\ \ \ 0\leq k_{1}\leq n_{1},\ 0\leq k_{2}\leq n_{2},\]
The last step in the chain consists of writing each of these products as linear combinations of products like \(\varphi_{m_{1}}(x_{1})\varphi_{m_{2}}(x_{2})\) where \(m_{1}+m_{2}=k_{1}+k_{2}.\) This will render \(\psi_{\boldsymbol{n},k}\) as a global phase multiplying a wave function which is a linear combination of eigenstates of energies less than or equal to \(E_{\boldsymbol{n},k}.\) The computation is carried out in considerable detail in [12]. We just quote the result.
\[\begin{split}\varphi_{k_{1}}&((R(t/2)\boldsymbol{x})_{1})\varphi_{k_{2}}((R(t/2)\boldsymbol{x})_{2})\\ &=\sum_{l_{1}=0}^{k_{1}}\sum_{l_{2}=0}^{k_{2}}D(k_{1},k_{2},l_{1},l_{2},t){k_{1}\choose l_{1}}{k_{2}\choose l_{2}}s_{1}^{k_{1}}s_{2}^{k_{2}}\psi_{l_{1}+l_{2}}(x_{1})\psi_{(k_{1}+k_{2})-(l_{1}+l_{2})}(x_{2}).\end{split} \tag{4.9}\]
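As a quick sanity check on the summation formula (4.7), the following SymPy sketch verifies the identity symbolically for the first few degrees; `sympy.hermite` gives the physicists' Hermite polynomials used in (4.3), and the range of \(n\) is an arbitrary choice.

```python
# Symbolic check of the Hermite summation formula (4.7) for small degrees.
import sympy as sp

u, v = sp.symbols('u v')
for n in range(6):
    lhs = sp.hermite(n, u + v)
    rhs = sum(sp.binomial(n, k) * (2 * v) ** (n - k) * sp.hermite(k, u)
              for k in range(n + 1))
    assert sp.expand(lhs - rhs) == 0
print("summation formula (4.7) verified for n = 0,...,5")
```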
## 5 A particle under the action of oscillating magnetic fields and time-varying electric fields
Here we consider two variations on the theme of [7], with notations somewhat different from theirs to make it consistent with our notations.
### Case I: Magnetic field has fixed direction but is time dependent
This case is very similar to the case considered in Section 1, except that now \(\boldsymbol{B}=(0,0,B_{3}(t))^{\dagger}.\) The classical and quantum Hamiltonians differ only in the fact that
\(B_{3}(t)\) is time-dependent; therefore, the cross-product matrix \(\mathbf{\Omega}(t)\) is time-dependent, and the rotations that it generates are a bit more elaborate. The rotation matrix \(R(t)\), introduced in (1.8), is now defined by
\[\frac{d}{dt}R(t)=\mathbf{\Omega}(t)R(t). \tag{5.1}\]
It is easy to verify that in this case
\[R(t)=\begin{pmatrix}\cos(A(t))&-\sin(A(t))&0\\ \sin(A(t))&\cos(A(t))&0\\ 0&0&1\end{pmatrix}\quad\text{where}\quad A(t)=\int_{0}^{t}\omega(s)ds. \tag{5.2}\]
This is an easy consequence of the fact that the axis of rotation is kept fixed and that the structure of \(\mathbf{\Omega}(t)\) is such that it commutes with itself at different times. This time, the fact that \(R(t)\) commutes with \(\mathbf{\Omega}(t)\) makes it easy to translate the arguments developed above to this case as well. The difference is that now the cyclotron frequency in (1.3) and (1.6) is time-dependent, and we are left with a dynamics described by an equation of the Hill type. See [18] for example.
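The closed form (5.2) is easy to test numerically. The following sketch (the profile \(\omega(t)\) and the final time are illustrative assumptions) integrates (5.1) for the matrix \(R(t)\) and compares the result with (5.2).

```python
# Minimal numerical check of (5.1)-(5.2); omega(t) and T are illustrative.
import numpy as np
from scipy.integrate import solve_ivp, quad

omega = lambda t: 1.0 + 0.4 * np.sin(2.0 * t)

def Omega(t):                                      # time-dependent generator, as in (1.7)
    w = omega(t)
    return np.array([[0.0, -w, 0.0], [w, 0.0, 0.0], [0.0, 0.0, 0.0]])

def R_closed(t):                                   # formula (5.2)
    A = quad(omega, 0.0, t)[0]
    c, s = np.cos(A), np.sin(A)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

T = 3.0
sol = solve_ivp(lambda t, r: (Omega(t) @ r.reshape(3, 3)).ravel(),
                (0.0, T), np.eye(3).ravel(), rtol=1e-10, atol=1e-12)
print(np.allclose(sol.y[:, -1].reshape(3, 3), R_closed(T)))   # expect True
```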
### Case II: Magnetic field rotates about a fixed axis in space
The classical Hamiltonian of the system is
\[H_{4}(\boldsymbol{x},\boldsymbol{p})=\frac{1}{2m}\bigg{(}\boldsymbol{p}- \boldsymbol{A}(\boldsymbol{x},t)\bigg{)}^{2}-q\langle\boldsymbol{x}, \boldsymbol{E}_{0}(t)\rangle. \tag{5.3}\]
In [7] the last term is absent, and an electric field eventually comes up when rearranging their Hamiltonian. This last term is the analog of the electric field that appears in the passage from (1.1) to (1.2). But we might as well include \(\boldsymbol{E}_{0}(t)\), because we already know how to solve the resulting problem, as shown in [13] and extended above. Note that in this case we might as well consider the electric field \(-\partial\boldsymbol{A}/\partial t,\) which is linear in \(\boldsymbol{x},\) to be subsumed as part of \(\boldsymbol{E}_{0}(t).\) Here we prove that the Hamiltonian (5.3) is also equivalent to (1.2).
To further specify this system, we again set \(\boldsymbol{A}=\frac{1}{2}\boldsymbol{B}(t)\times\boldsymbol{x},\) where \(\boldsymbol{B}(t)=R(t)\boldsymbol{B}(0),\) and \(R(t)\) describes a rotation about the \(\hat{\boldsymbol{k}}\)-axis whose phase has been adjusted so that \(\boldsymbol{B}(0)=(B_{1},0,B_{3})^{\dagger}.\) Explicitly:
\[\mathbf{B}(t)=\begin{pmatrix}\cos(\alpha t)&-\sin(\alpha t)&0\\ \sin(\alpha t)&\cos(\alpha t)&0\\ 0&0&1\end{pmatrix}\begin{pmatrix}B_{1}\\ 0\\ B_{3}\end{pmatrix}. \tag{5.4}\]
The infinitesimal generator of this rotation group is the matrix \(\mathbf{\Lambda}\) given by
\[\mathbf{\Lambda}=\begin{pmatrix}0&-\alpha&0\\ \alpha&0&0\\ 0&0&0\end{pmatrix}. \tag{5.5}\]
The analog of the matrix \(\mathbf{\Omega}\) introduced in (1.7) is now
\[\mathbf{\Omega}_{1}(t)=\begin{pmatrix}0&-B_{3}&B_{1}\sin(\alpha t)\\ B_{3}&0&-B_{1}\cos(\alpha t)\\ -B_{1}\sin(\alpha t)&B_{1}\cos(\alpha t)&0\end{pmatrix},\ \ \ \ \mathbf{\Omega}_{1}(0)=\begin{pmatrix}0&-B_{3}&0\\ B_{3}&0&-B_{1}\\ 0&B_{1}&0\end{pmatrix} \tag{5.6}\]
We have introduced the cyclotron frequencies
\[\omega_{i}=\frac{qB_{i}}{mc}. \tag{5.7}\]
We leave it to the reader to verify that \(R(-t)\mathbf{\Omega}_{1}(t)R(t)=\mathbf{\Omega}_{1}(0).\) This means that \(\mathbf{\Omega}_{1}(t)\) satisfies the Euler equation \(\dot{\mathbf{\Omega}}_{1}-[\mathbf{\Lambda},\mathbf{\Omega}_{1}]=0\) with initial condition \(\mathbf{\Omega}_{1}(0).\) We pass to a coordinate system in which the horizontal component of the magnetic field is constant using a time-dependent canonical transformation generated by
\[F(\mathbf{x},\mathbf{P})=\langle R(-t)\mathbf{x},\mathbf{P}\rangle. \tag{5.8}\]
The new canonical variables are:
\[\mathbf{Q}=\nabla_{\mathbf{P}}F=R(-t)\mathbf{x},\ \ \text{and}\ \ \mathbf{p}=\nabla_{\mathbf{x}}F \Rightarrow\mathbf{P}=R(-t)\mathbf{p}. \tag{5.9}\]
Similarly, invoking the invariance of the scalar products under rotations, and after some simple arithmetic, the new Hamiltonian function is
\[H_{5}(\mathbf{Q},\mathbf{P})=H_{4}(R(t)\mathbf{Q},R(t)\mathbf{P})+\bigg{(}\frac{\partial F}{ \partial t}\bigg{)}(R(t)\mathbf{Q},\mathbf{P}).\]
Doing the substitutions we obtain:
\[H_{5}(\mathbf{Q},\mathbf{P})=\frac{1}{2m}\mathbf{P}^{2}-\langle\mathbf{P},(\frac{1}{2}\mathbf{\Omega} _{1}(0)+\mathbf{\Lambda})\mathbf{Q}\rangle+\frac{m}{2}\langle\mathbf{Q},\mathbf{\Omega}_{1}^{t }(0)\mathbf{\Omega}_{1}(0)\mathbf{Q}\rangle-q\langle\mathbf{Q},\mathbf{E}_{1}(t)\rangle. \tag{5.10}\]
We put \(\mathbf{E}_{1}(t)=R(t)\mathbf{E}_{0}(t).\) Let us write \(\mathbf{M}=\frac{1}{2}\mathbf{\Omega}_{1}(0)+\mathbf{\Lambda}.\) Note that \(\mathbf{M}^{t}=-\mathbf{M}\); therefore \(G(t)=\exp(\mathbf{M}t)\) is a rotation group about the axis
\[\mathbf{n}=\left(B_{1}/2,0,(\alpha+B_{3}/2)\right)^{\dagger}/\left((B_{1}/2)^{2}+(\alpha+B_{3}/2)^{2}\right)^{1/2}\]
with speed \(\theta=\left((B_{1}/2)^{2}+(\alpha+B_{3}/2)^{2}\right)^{1/2}.\) Now we repeat the procedure to eliminate the second term on the right-hand side of (5.10). Considering the transformation generated by:
\[F(\mathbf{Q},\mathbf{p}^{\prime})=\langle e^{t\mathbf{M}}\mathbf{Q},\mathbf{p}^{\prime}\rangle. \tag{5.11}\]
we obtain that, in the coordinates \((\mathbf{x}^{\prime},\mathbf{p}^{\prime})\), the new Hamiltonian looks like
\[H_{5}(\mathbf{x}^{\prime},\mathbf{p}^{\prime})=\frac{1}{2m}(\mathbf{p}^{\prime})^{2}+ \frac{m}{2}\langle\mathbf{Q},\mathbf{\Omega}_{1}^{t}(0)\mathbf{\Omega}_{1}(0)\mathbf{Q}\rangle -q\langle\mathbf{Q},\mathbf{E}_{1}(t)\rangle. \tag{5.12}\]
As in the previous case, this Hamiltonian describes an oscillator with a time-dependent frequency. We end this section with the following remark: one might be tempted to carry out a time-dependent rotation that aligns the magnetic field with the \(\hat{\mathbf{k}}\)-axis and then proceed as in Section 2 to compensate the magnetic field away; but, as shown in the first case, this will again lead to an oscillator with a time-dependent frequency.
## 6 Final remarks
To sum up, only when the magnetic field is static can it be compensated away by a rotating coordinate system, in which the motion, or the time evolution, is canonically equivalent to that of a harmonic oscillator. When there is also a static electric field, the resulting motion is equivalent to that of a harmonic oscillator subject to a spatially
homogeneous time-dependent force. In this case, the solution of the Schrodinger equation acquires a global phase, and as mentioned, the eigenstates of the quantum system happen to be entangled states of simple harmonic oscillators.
When the magnetic field is time-dependent, the system is canonically (or unitarily) equivalent to an oscillator with time-varying frequency. As there does not exist a general solution for this case, one must resort to different types of approximations. See [14] for early work in this direction.
**Declaration of competing interests** I have no competing interests to declare, no funding to report, and this work complies with the highest standards of ethical conduct.
|
2310.16968 | Understanding Social Structures from Contemporary Literary Fiction using
Character Interaction Graph -- Half Century Chronology of Influential Bengali
Writers | Social structures and real-world incidents often influence contemporary
literary fiction. Existing research in literary fiction analysis explains these
real-world phenomena through the manual critical analysis of stories.
Conventional Natural Language Processing (NLP) methodologies, including
sentiment analysis, narrative summarization, and topic modeling, have
demonstrated substantial efficacy in analyzing and identifying similarities
within fictional works. However, the intricate dynamics of character
interactions within fiction necessitate a more nuanced approach that
incorporates visualization techniques. Character interaction graphs (or
networks) emerge as a highly suitable means for visualization and information
retrieval from the realm of fiction. Therefore, we leverage character
interaction graphs with NLP-derived features to explore a diverse spectrum of
societal inquiries about contemporary culture's impact on the landscape of
literary fiction. Our study involves constructing character interaction graphs
from fiction, extracting relevant graph features, and exploiting these features
to resolve various real-life queries. Experimental evaluation of influential
Bengali fiction over half a century demonstrates that character interaction
graphs can be highly effective in specific assessments and information
retrieval from literary fiction. Our data and codebase are available at
https://cutt.ly/fbMgGEM | Nafis Irtiza Tripto, Mohammed Eunus Ali | 2023-10-25T20:09:14Z | http://arxiv.org/abs/2310.16968v1 | Understanding Social Structures from Contemporary Literary Fiction using Character Interaction Graph - Half Century Chronology of Influential Bengali Writers
###### Abstract
Social structures and real-world incidents often influence contemporary literary fiction. Existing research in literary fiction analysis explains these real-world phenomena through the manual critical analysis of stories. Conventional Natural Language Processing (NLP) methodologies, including sentiment analysis, narrative summarization, and topic modeling, have demonstrated substantial efficacy in analyzing and identifying similarities within fictional works. However, the intricate dynamics of character interactions within fiction necessitate a more nuanced approach that incorporates visualization techniques. Character interaction graphs (or networks) emerge as a highly suitable means for visualization and information retrieval from the realm of fiction. Therefore, we leverage character interaction graphs with NLP-derived features to explore a diverse spectrum of societal inquiries about contemporary culture's impact on the landscape of literary fiction. Our study involves constructing character interaction graphs from fiction, extracting relevant graph features, and exploiting these features to resolve various real-life queries. Experimental evaluation of influential Bengali fiction over half a century demonstrates that character interaction graphs can be highly effective in specific assessments and information retrieval from literary fiction. Our data and codebase are available1.
Footnote 1: [https://cutt.ly/fbdgGEM](https://cutt.ly/fbdgGEM)
## 1 Introduction
Literary fiction, a reflection of societal values and culture (Sadraddinova and Nasirli, 2019), often employs narrative form to convey its tales. Within these narratives, character interactions drive the plot, constructing personas through their various engagements (Min and Park, 2016; Truby, 2008). Character interaction graphs visually represent these interactions and offer a versatile tool for exploring literary theories and depicting social structures (Labatut and Bost, 2019). Beyond this, they prove instrumental in solving diverse literary challenges, from role detection (Jung et al., 2013) to genre classification (Gil et al., 2011; Ardanuy and Sporleder, 2015; Agarwal et al., 2021) and storyline analysis (Weng et al., 2007). This paper introduces a data-driven approach that leverages character interaction graphs to elucidate the impact of social structures and contemporary events on literary fiction. Our study delves into the works of influential Bengali writers spanning over half a century, establishing a compelling connection between these real-world influences and the realm of literary fiction.
In our pursuit of understanding the intricate connection between language, history, and literature, we have chosen to focus on Bengali literature. This decision stems from our specialized domain knowledge and the absence of prior quantitative assessments in this area. While a previous study by Muhuri et al. (2018) visualized character networks in two plays by Rabindranath Tagore, it delved into only limited aspects of character interaction. However, literature mirrors writers' societal perspectives on historical events, gender roles, and more (Reynolds, 1990; Jarrott and McCann, 2013; White, 2002). Past research has explored these facets within Bengali literature, such as the role of women (Sen, 2002; Chatterjee, 2009; Banerjee, 1989), the influence of nationalist movements (Majumder, 2016), particular views on religion (Quayum, 2015; Das and Das, 2012), and the social changes reflected (Chaudhuri, 1971). However, these approaches are performed primarily through manual, non-technical analysis. Thus, they tend to overlook significant details in lengthy narratives, leaving the writer's portrayal of their viewpoint through plots and characters unverified. In contrast, we adopt a computational approach, harnessing the dynamics of character interaction graphs to unveil the influence of social structures and contemporary events in modern Bengali literature.

Figure 1: Character interaction graph on two novels of Rabindranath Tagore (_(a) The last poem, (b) European_). A bigger node or thicker edge indicates more weight for the corresponding character or relation.
Character interaction graphs, or character networks, are graphical representations derived from a story's narrative, where nodes represent characters, and edges signify their interactions. To illustrate, consider Figure 1, which visualizes character interaction graphs from two novels by the renowned Bengali author Rabindranath Tagore. In the first novel, characterized as a romance, a prominent and meaningful connection between the central male and female characters is evident. Conversely, the second novel, with a political theme, introduces a larger ensemble of characters and interactions. Notably, the higher graph density and an increased number of nodes with greater weight highlight the intricate nature of character relationships within political contexts.
Therefore, the primary objective of this paper is to investigate whether character interaction in fiction can depict real-world social structure and perspective from writers. Specifically, we aim to answer the following research questions (RQs).
* RQ 1: How have various historical events impacted character development and the prominence of characters in Bengali literature?
* RQ 2: To what extent can the impact of various age & gender groups in Bengali society be inferred from contemporary novels?
* RQ 3: Can the presence of different characters and character interaction graph structure be interpreted by the story's context or genre?
To answer these questions, we rely on the novels of the three most prominent writers at the beginning of modern Bengali literature (Rabindranath Tagore, Bankim Chandra, Sarat Chandra Chattopadhyay) whose combined literary career span more than a half-century (1865-1935). Additionally, we consider the novels of contemporary Bengali writers Sunil Gangopadhyay and Humayun Ahmed to validate our findings in the modern literary context. Figure 2 provides an overview of our approach. First, we construct character interaction graphs based on character co-occurrence in the narrative. We enrich our analysis by extracting various attributes from nodes and edges, incorporating sentiment & other NLP features from the story text, and annotating characters with age, gender, role, and other relevant information. Finally, we explore these features for each writer, employ statistical significance tests to affirm our findings and provide a multifaceted evaluation of the results.
Our study reveals that historical events like the widow remarriage law in Hindu society (1872) and nationalist movements such as the partition of Bengal (1906) & Gandhi's non-cooperation movement (1920) substantially impacted character interactions in contemporary literature. Moreover, despite a lower presence of female characters, their collective influence equaled or exceeded that of male characters. Also, the influence of older age groups diminished among writers who had experienced various nationalist movements. Therefore, analyzing character interaction graphs in fiction can provide valuable insights into the social dynamics of specific historical periods.

Figure 2: An overview of using character interaction graphs (character networks) for contemporary literary analysis.
## 2 Related Work
Character interaction graphs: Character interaction representations are extensively used in digital humanities to visualize relationships between literary characters. Numerous variations and approaches to character networks exist. For instance, Elson et al. (2010) created a network from dialogue interactions in nineteenth-century British novels, where vertices represented characters and edges indicated the frequency and length of their conversations. Elsner (2012) introduced a kernel to measure novel similarity based on characters and their relationships. Ardanuy and Sporleder (2015) built social networks of characters to represent narrative structures in novels, using EM clustering to group novels by genres and authorship. The primary distinction between different character interaction works lies in character identification, interaction detection, graph creation, and scope of application (Labatut and Bost, 2019). Apart from literature, character interaction graphs have also gained popularity in other media such as film (Cipresso and Riva, 2016), drama (Moretti, 2011), TV series (Weng et al., 2007b), and pop culture.
Social network analysis from character interaction: Character interaction graphs have also been utilized to answer various social science questions of contemporary times. Lauzen and Dozier (2005) discuss the portrayals of different age groups and gender roles in top-grossing Hollywood films. They observe that both older men and older women are dramatically underrepresented compared to their representation in real life. Recently, Kagan et al. (2020) investigated gender bias in on-screen female characters over the past century using a huge corpus of movie social networks. They discovered a trend of improvement in all aspects of women's roles in movies, including a constant rise in central characters.
The only prior study that focuses on character interaction in Bengali literature is Muhuri et al. (2018). They extracted character networks from two plays of Rabindranath Tagore and proposed a novel idea to analyze the characteristics of the protagonist and antagonist from the influential nodes of the graph. However, their study does not explain the role of the contemporary social setup or the effects of gender/age groups on character interaction. Therefore, our study aims to fill this gap by performing a quantitative assessment of Bengali literature that exploits the character interaction graph to answer these questions.
## 3 Methodology and Experiments
We resort to the character interaction graph model to answer our research questions in the context of Bengali literature. Our novel contributions for these tasks are as follows.
* We adapt the procedure as discussed below to construct the character interaction graph from story text and character list for Bengali fiction.
* We analyze these graphs from various perspectives and draw the connection to answer our RQs.
* We create a novel dataset containing 63 works of fiction by five prominent Bengali writers and visualize their character interaction graphs.
### Character Interaction Graph Generation
Extracting character interaction graphs from literary text mostly consists of three primary steps: 1) identification of characters, 2) detection of their interactions, and 3) extraction of the interaction graph (Labatut and Bost, 2019). We have to modify these steps that would apply to our analysis in Bengali fiction. Since stories are collected in chapters, we perform these tasks and create character interaction graphs for each chapter like previous researches (Ardanuy and Sporleder, 2015; Agarwal et al., 2014). Finally, we combine these chapter-wise graphs to construct the overall story graph.
Character identification: Character identification consists of detecting which characters appear in the story and precisely when they appear in the narrative. Current Named Entity Recognition (NER) methods (Chowdhury et al., 2018; Alam and Islam, 2020; Mandal et al., 2022) in the Bengali language are not adequate to find correct character names in the context of literary fiction. Hence, our study employs a meticulous approach to character identification that combines automated detection with manual annotation.
For each story, we leverage the BNLP toolkit (Sarker, 2021) for NER recognition of individuals and cross-verify the character list by briefly reviewing the narrative. We add any missing characters as needed and remove any person names
not integral to the narratives. In cases where a character assumes multiple aliases, we include all the names they adopt, accompanied by using pronouns, particularly for stories narrated in the first person. To identify a character's presence in the story text, we append relevant suffixes and inflections to each name. We assume that if a character's name appears anywhere within the story, they are considered present in that section.
Interaction detection: Our approach aligns with prior research arguing that the simple co-occurrence of two characters indicates an interaction (Labatut and Bost, 2019). In our study, we adopt the sentence as the fundamental narrative unit and posit that two characters interact when they emerge within the same or nearby sentences. To facilitate this, we employ character occurrence data to partition chapters into smaller segments, subsequently identifying their intersections. We present the detailed methodology in the Appendix.
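A minimal sketch of this co-occurrence rule is given below; the sentences, character aliases, and one-sentence window are illustrative assumptions, and pronoun resolution (relevant for first-person narration) is omitted for brevity.

```python
# Minimal sketch of sentence-level co-occurrence detection; all data are illustrative.
from itertools import combinations

sentences = [
    "Amit met Labanya near the hill station.",
    "She smiled at him.",
    "Jogmaya welcomed Amit warmly.",
]
aliases = {"Amit": ["Amit"], "Labanya": ["Labanya"], "Jogmaya": ["Jogmaya"]}
WINDOW = 1    # characters appearing in the same or adjacent sentences interact

# sentence indices in which each character (under any alias) appears
occurs = {c: {i for i, s in enumerate(sentences) if any(a in s for a in names)}
          for c, names in aliases.items()}

interactions = set()
for (c1, s1), (c2, s2) in combinations(occurs.items(), 2):
    if any(abs(i - j) <= WINDOW for i in s1 for j in s2):
        interactions.add(tuple(sorted((c1, c2))))
print(sorted(interactions))
```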
Graph Generation: Our methodology begins by constructing character interaction graphs at the chapter level, subsequently integrating these into a comprehensive story graph (Figure 8 in Appendix). Each chapter's influence on the total story graph is proportionate to the number of sentences it contains. Nodes within the graph represent characters appearing in at least one segment of a given chapter, and edges connect nodes corresponding to characters interacting in at least one segment. We calculate node and edge weights based on various factors, including segment lengths, character appearances, and additional characteristics. Moreover, we incorporate sentiment scores, topic distributions, and supplementary data for both nodes and edges. While we offer a brief overview of node and edge weighting here, we provide comprehensive details on other significant attributes and methodologies in the Appendix.
_Node weight:_ A character's weight depends on the segment length and the number of times the character is addressed (Wolyn and Simske, 2023). Also, a character appearing in more segments of a chapter should receive a higher weight than characters present in fewer segments. Therefore, we apply a scaling factor \(\alpha=0.1\) as the number of segments increases for a character, similar to Seo et al. (2013). Suppose a character \(C\) is present in \(s_{C}\) segments of a chapter, the length of segment \(i\) is \(l_{i}\), and \(C\) is addressed in \(l^{\prime}_{i}\) sentences in that segment. If the total chapter length is \(L\) and \(\beta=0.1\) is the extra weight for the sentences that contain character \(C\) (Wolyn and Simske, 2023), the weight of the corresponding node is defined as
\[\omega_{C}=\frac{1}{L}\sum_{i=1}^{s_{C}}(1+i\times\alpha)(l_{i}+\beta\times l^ {\prime}_{i})\]
_Link weight:_ We adopt a frequency-based method (Elson et al., 2010) to calculate edge weight. The interaction weight between two characters \(C_{1},C_{2}\) depends on the number of segments in which they interact, \(s_{\langle C_{1},C_{2}\rangle}\), the segment length \(l_{i}\), the number of sentences in which they are present individually, \(l^{\prime}_{i}\), with scaling weight \(\beta\), and the number of sentences in which they are both present, \(l^{\prime\prime}_{i}\), with scaling weight \(\gamma=2\times\beta\). The corresponding weight of the edge is defined as
\[\omega_{\langle C_{1},C_{2}\rangle}=\frac{1}{L}\sum_{i=1}^{s_{\langle C_{1},C_{ 2}\rangle}}(1+i\times\alpha)(l_{i}+\beta\times l^{\prime}_{i}+\gamma\times l^{ \prime\prime}_{i})\]
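The following is a minimal sketch of the chapter-level weighting scheme above using networkx; it is not the authors' released implementation, and the segment data, character names, and the proxy used for the jointly shared sentences (`both`) are illustrative assumptions.

```python
# Minimal sketch (not the authors' released code) of the weighting scheme with networkx.
import networkx as nx
from itertools import combinations

ALPHA, BETA = 0.1, 0.1
GAMMA = 2 * BETA

# one chapter: each segment is (length in sentences, {character: sentences addressing it})
segments = [
    (40, {"Amit": 12, "Labanya": 9}),
    (25, {"Amit": 7}),
    (35, {"Amit": 10, "Labanya": 11}),
]
L = sum(length for length, _ in segments)

node_w, edge_w, node_seen, edge_seen = {}, {}, {}, {}
for length, counts in segments:
    for c, addressed in counts.items():                    # node weights
        i = node_seen[c] = node_seen.get(c, 0) + 1
        node_w[c] = node_w.get(c, 0.0) + (1 + i * ALPHA) * (length + BETA * addressed) / L
    for a, b in combinations(sorted(counts), 2):           # edge weights for co-occurring pairs
        i = edge_seen[(a, b)] = edge_seen.get((a, b), 0) + 1
        both = min(counts[a], counts[b])                   # proxy for joint sentences (assumption)
        edge_w[(a, b)] = edge_w.get((a, b), 0.0) + (1 + i * ALPHA) * (
            length + BETA * (counts[a] + counts[b]) + GAMMA * both) / L

G = nx.Graph()
G.add_nodes_from((c, {"weight": w}) for c, w in node_w.items())
G.add_edges_from((a, b, {"weight": w}) for (a, b), w in edge_w.items())
strength = {c: sum(d["weight"] for _, _, d in G.edges(c, data=True)) for c in G}
print(nx.density(G), strength)                             # graph density and node strength
```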
### Graph Features Extraction
Following the methodology of previous works Elson et al. (2010); Muhuri et al. (2018), our analysis encompasses various attributes, including weight, degree, strength (sum of weights over the edges attached to the node), chapter presence, graph density, and other structural characteristics related to nodes, edges, and the entire graph. Additionally, we measure the sentiment scores and topic distributions associated with each node. For character nodes, we consider a range of manual attributes, including protagonist status (protagonist/antagonist/regular), gender (male/female), age group, family status (father/mother/uncle/aunt/brother to central characters), religion, social status (poor/wealthy/landlord), all aimed at elucidating connections in line with our research questions. Recognizing the limited availability of age information for most characters, we estimate three distinct age groups, mirroring real-life demographics as closely as possible.
* Age group A1: <20 years: This group mostly consists of children and adolescents.
* Age group A2: 20-40 years: Young adults and early middle-aged persons who serve as the current generation in the story.
* Age group A3: >40 years: Older people, who usually play the role of the previous generation relative to the young people (A2 group).
### Dataset
Our primary focus centers on three eminent writers, namely, Bankim Chandra (BC), Rabindranath Tagore (RT), and Sarat Chandra Chattopadhyay (SC), who belong to the early period of modern Bengali literature. Our study concentrates on their fiction works, particularly novels, as they possess distinctive attributes that set them apart from non-fictional writings (Labatut and Bost, 2019). We analyze a selection of their novels, spanning various genres such as historical, romantic, social, and political. Additionally, for comparative purposes with contemporary literature, we delve into the works of two renowned modern writers: Sunil Gangopadhyay (SG) and Humayun Ahmed (HM). Throughout the remainder of this paper, we will employ the authors' first names or abbreviated forms to represent them.
Table 1 presents a comprehensive overview of our dataset. The novels were purchased in ebook format, and the text underwent the procedures detailed earlier to create and extract character interaction graphs. To validate the character lists and assign attributes like age, gender, and other status to the characters, we engaged two annotators well-versed in the novels' contents. As a token of appreciation for their contributions, the annotators received gift cards equivalent to $20.00 each. We cannot release the original text due to copyright constraints. However, we have made our generated character interaction graphs, extracted features, and code-base publicly accessible 2.
Footnote 2: [https://cutt.ly/fBMgGEM](https://cutt.ly/fBMgGEM)
## 4 Results and Findings
Character interaction graphs, enriched with various attributes, including descriptive details, sentiment scores, and topic information, offer a unique perspective on character presence and interactions within a story. This approach outperforms traditional manual analysis in terms of efficiency and effectiveness. For instance, by examining factors like character count, weight, degree, sentiment, and protagonist status, especially in the context of different age or gender groups, we can determine which groups exert the most influence in fictional works. Furthermore, graph topology analysis from stories of distinct contexts/genres allows us to validate the representation of social structures in fiction and assess the impact of contemporary events on these narratives.
This section showcases our primary discoveries from various angles, paving the way for exploring their links to our research questions in the subsequent section. We provide concise insights into the roles played by different age and gender groups, protagonist attributes, and variations in graph structures. To assess the significance of these distinctions based on gender or age, we employ the independent two-sample t-test (Keselman et al., 2004). This statistical test is chosen for its suitability in cases where the two samples are independent and originate from populations with roughly normal distributions (Manfei et al., 2017).
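This test is straightforward to run with SciPy; in the sketch below the two samples of aggregated character weights are illustrative values, not figures from the paper's dataset.

```python
# Minimal sketch of the independent two-sample t-test; sample values are illustrative.
from scipy.stats import ttest_ind

male_weights = [0.42, 0.31, 0.55, 0.47, 0.38, 0.50]
female_weights = [0.58, 0.61, 0.44, 0.66, 0.52, 0.49]
t_stat, p_value = ttest_ind(male_weights, female_weights)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")   # p < 0.05 would indicate a significant difference
```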
Age and gender distribution: How age and gender are depicted in popular media is an interesting area of study (Lauzen and Dozier, 2005; Kagan et al., 2020) and can portray writers' perspective on social structure (Bilston, 2004). Table 2 demonstrates the proportion of different age & gender groups and the mean (over stories) of aggregated weight \(\omega\) across various groups in all writers' works. Notably, male characters are more prevalent than female characters across all writers, aligning with prior research findings in different media contexts (Lauzen and Dozier, 2005; Kagan et al., 2020). Additionally, age group A2 appears more frequently than A1 and A3 for all writers.
Bankim and Humayun's fiction tends to feature more aged characters from the A3 group. Bankim's novels evoke a feudalistic societal structure (Chaudhuri, 1971), which is reflected in the prevalence of older characters. Humayun, on the other hand, focuses on middle-class family struggles in contemporary settings, hence the higher representation of older characters (Mamun et al., 2014). Sarat's fiction resembles Bankim's in terms of the presence of A1 characters, often in central roles due to the prevalence of early marriages in their context (Chaudhuri, 1971). However, Rabindranath's
\begin{table}
\begin{tabular}{l c c} \hline \hline Author & Career & \# novel \\ \hline Bankim Chandra (BC) & 1865-1885 & 12 \\ Rabindranath Tagore (RT) & 1883-1935 & 11 \\ Sarat Chandra Chattopadhyay (SC) & 1907-1940 & 16 \\ Sunil Gangopadhyay (SG) & 1970-2011 & 15 \\ Humayun Ahmed (HM) & 1965-2012 & 14 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overview of dataset
contemporary fiction, characterized by characters from the upper-middle class with liberal education (Park et al., 2012), sees fewer A1 characters and a relatively higher presence of A2 characters than Bankim and Sarat's narratives. An intriguing observation emerges as the female character percentage does not increase for the fiction of modern-day writers, which is an exception to findings from existing studies in other media (Kagan et al., 2020). Notably, the weight attributed to female characters in modern Bengali literature surpasses their relatively lower representation, with statistical significance observed for Bankim and Sarat; this phenomenon is in line with the historical trend of early Bengali literature where women played central roles in plot development (Chatterjee, 2009; Sen, 2002), but contemporary writers exhibit gender-neutral character weight distribution, reflecting their distinct narrative priorities.
In Figure 3, we present the evolving proportions of different age groups in the works of three previous writers. A notable surge in the A1 age group occurs in Bankim's writings between 1873 and 1877, coinciding with the enactment of the widow remarriage law in 1872 (Mukherjee, 1985) (as listed in Table 9 in Appendix). In 1916, during the economic turmoil resulting from World War I, Sarat's fiction prominently featured the A3 age group revolving around rural society struggles (Dutt and Dhussa, 1981). Subsequently, from approximately 1918 onwards, Rabindranath and Sarat witnessed a significant rise in the A2 age group's presence, alongside a decline in the A3 age group. This shift aligns with the influence of nationalist movements and non-cooperation activities (Gupta, 2016), prompting their fiction to transition from romantic and conventional social issues to more politically and socially crisis-oriented narratives, characterized by an increased presence of A2 characters and the near absence of A1 characters in their writings during this period.
Female protagonists also become more prominent with the participation of women in various nationalist movements in the 1920s [14, 15].
Despite having lower connectivity, female protagonists exhibit relatively higher weight in the narratives of all writers. They carry a slightly negative emotional sentiment, often signaling tragic endings in the stories [14]. Additionally, their topic distribution tends to be concentrated on specific themes, such as social and family matters, while male protagonists feature a more diverse range of topics in their narratives.
Variation in graph structure:First, we present the average node count, edge count, and graph density for all writers in Table 4. Rabindranath's portrayal of the higher middle-class, educated urban society [13, 15, 16] is characterized by a compact social structure with fewer nodes, in contrast to Sarat's depiction of rural society [12], which involves more characters but with a smaller density. Furthermore, Figure 5 illustrates the relationship between graph density and node count across different genres. Romantic novels exhibit either small, dense networks (fewer nodes but higher density) or large, sparse networks (more nodes but lower density). Historical fiction typically features many characters, while political novels maintain a high graph density even as the character count increases.
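A minimal sketch of the kind of per-genre comparison behind Figure 5 and Table 4, using placeholder per-novel statistics rather than our actual measurements:

```python
import pandas as pd

# Placeholder per-novel graph statistics; in our pipeline these come from the
# annotated character interaction graphs and the genre labels.
df = pd.DataFrame({
    "genre": ["Romantic", "Romantic", "Political", "Political", "Historical"],
    "nodes": [8, 21, 12, 18, 24],
    "density": [0.64, 0.18, 0.51, 0.46, 0.22],
})
print(df.groupby("genre")[["nodes", "density"]].mean())
```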
## 5 Discussion
Based on our key findings in the previous section, we answer our research questions and validate our assumptions in this section.
### Influence of Real-life Events
To investigate the influence of historical and social events on contemporary fiction, we compile a list of noteworthy national events during our study period (see Table 9 in the Appendix). Specifically, the widow remarriage law in Hindu society (1872) substantially impacts Bankim's contemporary novels, as elaborated below. Additionally, during the nineteenth century, various nationalist movements inspired Rabindranath and Sarat, leading them to produce several social and political novels, further detailed in the Appendix.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Author & \# Node & Density & \# Edge \\ \hline BC & 11.6923 & 0.4952 & 32.2308 \\ RT & 10 & 0.4565 & 19 \\ SC & 14.4375 & 0.363 & 37.25 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Graph structure properties for different authors
Figure 4: Protagonist and genre information in chronological order (S:Social, P:Political, R:Romantic, H:Historical).
Figure 5: Node count & graph density
Impact of widow remarriage law:The Brahma Marriage Act of 1872, which lifted the ban on widow remarriage, notably influenced Bankim's works. Bankim had previously addressed this issue in various non-fiction writings [11]. Following the passage of the Remarriage Act, there was a significant increase in the presence of female A1 characters in Bankim's writings. This shift is particularly evident in his novels, _"Bisabrksa"_ (The Poison Tree, 1872) and _"Krishnakanter Will"_ (The Will of Krishnakanta, 1878).
Bankim's opposition to the widow remarriage law is evident in his novels, where he incorporates widows into complex relationship triangles with married men and their lawful wives (1-0-2 in Figure 6a and 2-1-8 in Figure 6b). These relationship triangles are visually represented in the character interaction graphs, accompanied by an overall negative sentiment. Moreover, previous studies have confirmed the predominantly negative outcomes of these stories [11]. However, two of Bankim's novels during this period, _"Yugalanguriya"_ (1874) and _"Radharani"_ (1876), feature female A1 protagonists who are not widows, and these graphs do not exhibit such relationship triangles.
### Influence of Age and Gender Group
In earlier times, despite a lower ratio of female characters, their weight in the narrative was significant due to stories centered around women and their societal roles. However, contemporary writers no longer consistently emphasize increased weight for female characters. While political novels typically exclude female characters from influential roles, the situation evolves with the involvement of women in nationalist movements. The prevalence of A2 age group characters aligns with societal norms, while the appearance of A1 female characters as central figures mirrors early marriage practices influenced by real-life events. Post-1916, social and nationalist movements reshaped novels, shifting them towards socio-political themes and significantly increasing A2 group representation while other groups diminished in importance.
### Interpretation of Graph from Context
Finally, we assess whether the graph's topological structure and character presence can be inferred from the context or genre of the fiction. Bankim's novels reflect a feudalistic social structure with landlords and kings, thus incorporating more aged characters than other writers. Rabindranath's urban-centric plots feature upper-middle-class educated characters, predominantly young, with a greater emphasis on female protagonists but fewer noticeable female A1 group characters compared to Bankim or Sarat. Sarat's rural settings include fewer female characters, yet their presence, connectivity, and weight are more pronounced. Minor characters in Rabindranath's fiction have significantly lower node counts and edge weights than Sarat's, reflecting the urban setting's fewer characters and interactions than the rural context.
Similarly, genre shapes the character presence and graph structure in fiction. Romantic novels feature female and male protagonists, leading to densely or sparsely connected networks. Political novels exhibit higher graph density regardless of node count. Historical novels tend to have more nodes, reflecting their expansive nature.
## 6 Conclusion
This paper presents an exploration of social structures within contemporary Bengali literature. We employ character interaction graphs to model the works of prominent Bengali writers spanning over half a century, extracting pertinent features. Our analysis rigorously addresses three pivotal research questions regarding the influence of social structures in Bengali literary fiction. Our findings substantiate the profound impact of historical events, such as the widow remarriage act and nationalist movements, on contemporary literary works. Notably, our study unveils the substantial significance accorded to female characters despite their relatively lower prevalence. By providing visualization and quantitative assessment tools for analyzing influential fiction, our research empowers modern researchers to engage in critical literary analysis.
Figure 6: Character interaction graph for two novels of Bankim. The protagonists of both stories are widows.
### Limitations
Our study has certain limitations that warrant acknowledgment. Given the challenges of working with a low-resource language, our dataset is limited to five writers. The manual annotation of characters and attributes requires enormous effort and detailed knowledge of these novels. Some characters could be missing from our character interaction graphs due to annotation errors, although these are predominantly minor characters with minimal impact on the quality of our analysis. We have opted for static graphs (story-wise) in our analysis to specifically examine the influence of contemporary events and character group presence in fiction. Future research avenues could explore the dynamics of character interaction, sentiment, and weight changes throughout the narrative, which would require a separate study. We also plan to expand our dataset to encompass more writers and diverse chronological periods. We intend to incorporate previously unexplored character attributes, such as religion and economic status, to offer multifaceted insights into our analysis.
## Ethics Statement
While the ultimate goal of this study is to investigate social structures represented in contemporary Bengali literature through character interaction graphs, we acknowledge the potential for these graphs to unveil sensitive connections such as gender or religious issues that may not have been the writers' original intent. While we support our findings with analyses validated by prior research on the writers' works, it is essential to recognize the possibility of some conclusions being subject to interpretation. Nonetheless, our research contributes valuable visualization and quantitative assessment tools, which can facilitate researchers in conducting rigorous literary analysis with greater ease.
|
2308.03849 | One-loop Effective Action up to Dimension Eight: Integrating out Heavy
Fermion(s) | We present the universal one-loop effective action up to dimension eight
after integrating out heavy fermion(s) using the Heat-Kernel method. We have
discussed how the Dirac operator being a weak elliptic operator, the fermionic
operator still can be written in the form of a strong elliptic one such that
the Heat-Kernel coefficients can be used to compute the fermionic effective
action. This action captures the footprint of both the CP conserving as well as
violating UV interactions. As it does not rely on the specific forms of either
UV or low energy theories, can be applicable for a very generic action. Our
result encapsulates the effects of heavy fermion loops only. | Joydeep Chakrabortty, Shakeel Ur Rahaman, Kaanapuli Ramkumar | 2023-08-07T18:01:21Z | http://arxiv.org/abs/2308.03849v1 | # One-loop Effective Action up to Dimension Eight: Integrating out Heavy Fermion(s)
###### Abstract
We present the universal one-loop effective action up to dimension eight after integrating out heavy fermion(s) using the Heat-Kernel method. We discuss how, although the Dirac operator is only a weak elliptic operator, the fermionic operator can still be written in the form of a strong elliptic one, such that the Heat-Kernel coefficients can be used to compute the fermionic effective action. This action captures the footprint of both the CP-conserving and the CP-violating UV interactions. As it does not rely on the specific form of either the UV or the low-energy theory, it is applicable to a very generic action. Our result encapsulates the effects of heavy fermion loops only.
## 1 Introduction
Physics at different length scales, i.e., energy scales, is expected to unfold gradually. In that case, the most influential technique is Effective Field Theory (EFT) [1; 2; 3; 4], as it has the inherent ability to capture new-physics effects even without knowing them exactly. This is the bottom-up approach of EFT. On the other side, employing the Wilsonian method, we can express a full theory as an _effective_ one after integrating out heavy modes whose mass decides the cut-off scale. In the process of integrating out, one needs to be careful about the decoupling limit and to validate the truncation of the effective Lagrangian [5; 6].
The Standard Model (SM) of particle physics appears to have prevailed after the discovery of the Higgs boson, a decade of collider searches, and many experimental advancements. However, the neutrino mass, the Baryon asymmetry of the universe (BAU), and some other observations, e.g., the flavor puzzle, may point to the existence of beyond the Standard Model (BSM) physics around an energy scale that may not be directly accessible by current experiments. In summary, at the present juncture of particle physics, we have only passive,
tantalizing hints of new physics whose exact nature is yet to be unveiled. This is when EFT can be one of our best tools to take on board in search of BSM physics. In the absence of any direct evidence 1, we can still capture the indirect effects of heavy new particles, if any, by extending the renormalisable SM Lagrangian by including the higher mass dimensional effective operators a.k.a. considering the Standard Model Effective Theory (SMEFT) [7; 8].
Footnote 1: No experiment has yet confirmed the on-shell existence of any new particle beyond the SM ones.
The method of integrating out the heavy particles and then performing the matching between the two theories at an energy scale is the core of Wilsonian EFT [9; 2; 10]. The resulting low-energy effective Lagrangian captures the footprint of the integrated-out UV interactions through higher mass dimensional effective operators that are accompanied by definite coefficients, known as Wilson coefficients (WCs). In the bottom-up approach [11; 12; 13; 14; 15; 16; 17; 18; 19], we have no clue about the origin of these WCs and thus they are independent of each other. But, in the top-down approach [20; 21], the WCs are functions of the UV parameters. In the Wilsonian EFT, the challenging task is to compute the effective action after integrating out the heavy fields, especially beyond the tree level. Till now, many attempts have been made to compute the one-loop effective action consisting of effective operators up to mass dimension six, employing different methods [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. A gauge-covariant approach to perform the matching has resulted in a comprehensive formula known as the Universal One-Loop Effective Action (UOLEA) [23; 24; 25; 26; 27; 28; 29; 30]. Compared to the traditional approach of matching via Feynman diagrams, it is straightforward and algorithmic in nature. Based on the UOLEA, some automated tools have been developed in aid of the matching procedure [31; 32; 33; 34; 35]. Until recently, this formula was restricted to mass dimension six; in our previous paper [36] we extended the UOLEA up to dimension eight (\(D8\)) 2. It is worth mentioning that in recent times the dimension eight effective action has drawn much interest and the related phenomenology has been explored [5; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54]. In that previous paper [36] we introduced the Heat-Kernel (HK) method [55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65] to compute the one-loop effective action and provide the UOLEA. We also employed the covariant diagram [66] method to verify part of our UOLEA result.
Footnote 2: We will use the convention \(Di\) to signify operators of mass dimension \(i\).
The one-loop effective action computed in our previous paper using the HK method is truly universal in the sense that it does not require assuming any specific UV or low-energy theory. On top of that, though we have emphasized that our UOLEA is the aftermath of integrating out heavy scalars, this result is equally applicable for any action written in the form of a strong elliptic operator (\(D^{2}+U+M^{2}\)) [64; 65; 58]. In the case of the scalar field, the action contains the Klein-Gordon operator, which is already in that desired form. In this paper, we want to compute the effective action when heavy fermions are integrated out, again using the HK method. Unfortunately, the Dirac operator is a weak elliptic operator [67; 68; 69]. Thus, we cannot directly compute the HK coefficients until we manage to construct a strong elliptic operator in terms of this weak one. In the absence of a non-trivial background, where we can use the mass-evenness property of the Dirac operator,
we can perform bosonization and rewrite the Dirac operator in terms of a Klein-Gordon-like operator [70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80]. Once we express the fermionic action in the form of a strong elliptic operator, i.e., \((D^{2}+U+M^{2})\), we can use the previously computed UOLEA up to dimension eight. In the process, the covariant derivative and the potential term (\(U\)) are modified and contain Clifford generators. Thus, in addition to the method adopted in the case of heavy scalar, we have to further perform the traces over Clifford matrices (\(\gamma_{\mu},\gamma_{5}\)) to achieve the desired and usable form of fermionic UOLEA. We will discuss this in detail in the following sections.
Charge(C)-Parity(P) violation (CPV) is one of the most intriguing possible signatures of new physics that goes beyond the SM. ATLAS has recently carried out a comprehensive study of \(Zjj\) production and also interpreted the data in terms of CPV SMEFT operators of dimension six [81; 82; 83]. CP violation also plays a pivotal role in explaining the BAU, as it is one of the necessary conditions [84]. Although the SM has a source of CPV, it is not sufficient to explain the observed matter-antimatter asymmetry. A BSM scenario is necessary to enhance the CPV. Along this line of thought, scenarios where the SM is extended by vector-like fermions (VLFs) draw much attention due to the easy incorporation of CPV in the Yukawa sector [82; 83; 85]. In VLF models one can directly invoke the heavy fermion mass term without employing any spontaneous symmetry breaking. For example, even within the SM gauge symmetry framework, we can write down the Dirac mass term for the VLFs and, as it is not an outcome of the electroweak symmetry-breaking vacuum expectation value (\(v_{ew}\)), it can be sufficiently larger than \(v_{ew}\) such that the EFT is validated once we integrate them out. It is important to stress at this point that our results are not restricted to the SM and(or) its VLF extension. A similar fermionic UOLEA is applicable even for heavy Majorana fermions, with suitable modification up to a numerical factor. We must make the reader aware of the special role of interactions involving \(\gamma_{5}\): in the process of integrating out fermions, the presence of \(\gamma_{5}\) requires special attention, which we discuss later. The one-loop effective action after integrating out fermions up to dimension six has been discussed in [29; 30].
This paper is structured as follows. In Sec. 2, we briefly introduce the HK method and its connection with the one-loop effective action. We emphasize the subtleties associated with fermion fields. Especially, when bosonization is performed to rewrite the Dirac operator in terms of a strong second-order elliptic operator such that the HK coefficients can be used to compute the effective operators in this case as well. We present the fermionic universal one-loop effective action in Sec. 3. We catalogue our results for different mass dimensions, up to dimension eight, separately. In the following Sec. 4, we discuss some features of the fermionic effective action - how the flavor dependence emerges, unlike the scalar case. We discuss how the CP violation in the UV theory leaves its footprint in the effective operators at low energy. We also highlight the interplay between CP-conserving and violating operators based on the nature of UV interaction, and also the hierarchical nature of CP violation among equivalent effective operators of different mass dimensions. Then we conclude with a note on possible future directions that need to be explored.
## 2 Heat-Kernel and Universal One-Loop Effective Action
Starting from a UV complete action \(S\), functional of heavy (\(\Phi\)) and light (\(\phi\)) fields, the one-loop effective action obtained by integrating out the heavy field is given by,
\[e^{i\,S_{\rm eff}[\phi]}=\int[\mathcal{D}\Phi]e^{i\,S[\Phi,\phi]} =\int[\mathcal{D}\eta]\,\exp\left[i\,\left(S[\Phi_{c},\phi]\,+\, \frac{1}{2}\frac{\delta^{2}S}{\delta\Phi^{2}}\bigg{|}_{\Phi=\Phi_{c}}\eta^{2} \,+\,\mathcal{O}(\eta^{3})\right)\right],\] \[S_{\rm eff} \approx S[\Phi_{c}]\,+\,\frac{i}{2}\ln\left(\text{Det}\,\frac{\delta^{2}S }{\delta\Phi^{2}}\bigg{|}_{\Phi=\Phi_{c}}\right). \tag{1}\]
Here, the heavy field is expanded around its classical value (\(\Phi_{c}\)), and (\(\eta\)) is the fluctuation. For a generic Lagrangian of the form \(\mathcal{L}=\Phi^{\dagger}\Delta\Phi\), the one-loop contribution to the effective action (\(S_{\rm eff}^{(1)}\)) is given by the spectral function,
\[S_{\rm eff}^{(1)}=\frac{i}{2}\text{Tr}\ln\Delta=-\frac{i}{2}\text{Tr}\int_{0} ^{\infty}\frac{dt}{t}e^{-t\Delta}. \tag{2}\]
The '\(\ln\)'-function, as in the above equation, is written in the integral form using the following identity
\[\ln\lambda=-\int_{0}^{\infty}\frac{dt}{t}e^{-t\lambda}, \tag{3}\]
with \(\lambda>0\). In the case of scalar fields, the Lagrangian operator \(\Delta\) takes the generic form as
\[\Delta=-P^{2}+U_{s}+M_{s}^{2}, \tag{4}\]
where \(P\) is the covariant derivative given by \(P_{\mu}=i\,D_{\mu}=i\,(\partial_{\mu}-iA_{\mu})\)3 and \(M_{s}\) is the mass of the heavy field. Here, \(U_{s}=\delta^{2}S/\delta\Phi^{\dagger}\delta\Phi\) is a functional of light-fields of mass dimension \(+2\), and contains the information about the UV interactions. We can convert this operator \(\Delta\) into a second-order strong elliptic operator invoking Wick rotation and rewriting the same in Euclidean space as \(\Delta_{E}\)4. This allows us to identify the exponent in Eq. (2), i.e., \(e^{-t\Delta}\) with the Heat-Kernel \(K(t,x,x,\Delta)\)[61, 64]. Hence, the one-loop contribution to the effective Lagrangian after integrating out heavy scalars can be written in terms of the Heat-Kernel in the Euclidean space as
Footnote 3: Here, the coupling constant is absorbed in the definition of the gauge field \(A_{\mu}\).
Footnote 4: We will drop the subscript \(E\) from \(\Delta_{E}\) from now on wards.
\[\mathcal{L}_{\rm eff}^{\Phi}=\frac{1}{2}\,\text{tr}\int_{0}^{\infty}\frac{dt} {t}K(t,x,x,\Delta). \tag{5}\]
For a general second-order elliptic operator of the form, \(\Delta=D^{2}+M^{2}+U\), the Heat-Kernel can be written as a power law expansion in the parameter \(t\) as [58, 59]
\[K(t,x,y,\Delta)=(4\pi t)^{-d/2}\,\operatorname{Exp}\left[\frac{z^{2}}{4t}-t\, M^{2}\right]\sum_{k}\frac{(-t)^{k}}{k\,!}b_{k}(x,y), \tag{6}\]
where \(z_{\mu}=(x-y)_{\mu}\), \(d=4\) is flat-Euclidean space-time dimension, and \(b_{k}\) are the Heat-Kernel coefficients (HKCs). In the coincidence limit \(x\to y\), the HKCs are denoted by square brackets (\([b_{k}]=b_{k}(x,x)\)).
In Ref. [36], we have used the Heat-Kernel expansion to derive the universal one-loop effective action (UOLEA) expanded up to dimension eight after integrating out scalar fields. In the current paper, we extend the results presented in [36] to encompass the effect of heavy fermion field integration out at the one-loop leading to _Fermionic_ UOLEA, i.e., FUOLEA. The massive free fermionic Lagrangian, in terms of the Dirac operator, is given as
\[\mathcal{L}^{\Psi}=\overline{\Psi}(\not{P}-M_{f})\Psi, \tag{7}\]
with \(M_{f}\) being the mass of the fermion. Here, we use the Feynman slash notation to denote contractions with gamma matrices, i.e., \(\not{P}=\gamma^{\mu}P_{\mu}\). Following a prescription similar to that in Eq. (1), we can compute the effective action after integrating out the fluctuations \((\overline{\eta},\eta)\) of the heavy fermions over the classical backgrounds \((\overline{\Psi_{c}},\Psi_{c})\) as
\[e^{i\,S_{\text{eff}}[\psi]}=\int[\mathcal{D}\overline{\Psi}][ \mathcal{D}\Psi]e^{i\,S[\Psi,\psi]} =\int[\mathcal{D}\overline{\eta}][\mathcal{D}\eta]\,\exp\left[i \,\left(S[\Psi_{c},\psi]\,+\,\overline{\eta}\,\frac{\delta^{2}S}{\delta \overline{\Psi}\delta\Psi}\right|_{\Psi=\Psi_{c}}\eta\,+\,\mathcal{O}(\eta^{ 3})\right)\right],\] \[\text{where}\,\,\,S_{\text{eff}} \approx S[\Psi_{c}]\,-\,i\,\ln\left(\text{Det}\,\frac{\delta^{2}S}{\delta \overline{\Psi}\delta\Psi}\bigg{|}_{\Psi=\Psi_{c}}\right). \tag{8}\]
It is worth noting that, apart from a different constant pre-factor and different signs, the one-loop fermionic effective action possesses a mathematical structure similar to that computed for the scalar case, see Eq. (2). But in the case of fermions, the one-loop effective action comprising the Dirac operator \((\not{P}-M_{f})\) is given as
\[S_{\text{eff}}^{(1)}\big{|}_{\text{fermion}}=-i\,\text{Tr}\ln[\not{P}-M_{f}]. \tag{9}\]
The difficulty that arises in working with the Dirac operator is that it is a weakly elliptic first-order differential operator whose spectrum is unbounded, unlike the Klein-Gordon operator in the scalar Lagrangian, which is a strong elliptic operator [67, 68, 69]. The identity in Eq. (3), which allows rewriting the effective action in terms of the Heat-Kernel, holds only if \(\lambda>0\). If we identify \(\lambda\) as an eigenvalue of the Dirac operator, then so is \(-\lambda\) [80]. These negative eigenvalues of the Dirac operator therefore prevent one from directly employing the HK method to compute the fermionic effective action.
To proceed further we will employ the mass evenness property of \((\not{P}-M_{f})\) operator [70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80] that reads as \(\text{Det}[\not{P}-M_{f}]=\text{Det}[\not{P}+M_{f}]\) up to a diverging physically irrelevant constant, which is a direct consequence of the symmetry of the spectrum of Dirac operator. This allows one to recast the one-loop effective action in terms of a second-order elliptic operator \((\not{P}^{2})\) as,
\[S_{\text{eff}}^{(1)}\big{|}_{\text{fermion}} =-i\,\ln\text{Det}[\not{P}-M_{f}]=-\frac{i}{2}\big{\{}\ln\text{ Det}[\not{P}-M_{f}]+\ln\text{Det}[-\not{P}-M_{f}]\big{\}}\] \[=-\frac{i}{2}\big{\{}\ln\text{Det}\big{[}-\not{P}^{2}+M_{f}^{2} \big{]}-\mathcal{A}\big{\}}, \tag{10}\]
where \(\mathcal{A}\) is the multiplicative anomaly defined as
\[\mathcal{A}=\ln\text{Det}\big{[}-\not{P}^{2}+M_{f}^{2}\big{]}-\ln\text{Det}[ \not{P}-M_{f}]-\ln\text{Det}[-\not{P}-M_{f}]. \tag{11}\]
Note that, though \(\ln[\lambda_{i}\lambda_{j}]=\ln\lambda_{i}+\ln\lambda_{j}\) works for the individual eigenvalues of the spectrum, it does not hold for the spectral determinant as in general, \(\text{Det}[AB]\neq\text{Det}[A]\text{Det}[B]\). Hence, the multiplicative anomaly is in general non-zero. In Ref. [77] the multiplicative anomaly for the massive Dirac operator has been discussed for even dimensions (\(d\)) in terms of the HKC as
\[\mathcal{A}=2\sum_{j=1}^{d/2}\frac{(-1)^{j}\,M_{f}^{2j}\,Q_{j}}{j!}\,[b_{d/2-j}], \tag{12}\]
where \([b_{k}]\) are the HKCs of \((\not{P}^{2})\) operator, and
\[Q_{j}=\sum_{l=1}^{j}\frac{1}{2l-1}.\]
It is worthwhile to note that for the massless case, the multiplicative anomaly is identically zero. For the massive case at \(d=4\), the multiplicative anomaly is given by
\[\mathcal{A}=-2M_{f}^{2}\,\,[b_{1}]+\frac{4}{3}M_{f}^{4}\,\,[b_{0}], \tag{13}\]
where the Heat-Kernel initial condition sets the zeroth HKC to the identity, i.e., \([b_{0}]=I\), and \([b_{1}]\) is the first non-trivial HKC. Thus, the multiplicative anomaly only gives corrections to the renormalised part of the one-loop effective action for operators of mass dimension \(\leq 2\), and for the computation of higher dimensional operators we can focus only on the \((-\not{P}^{2}+M_{f}^{2})\) operator. This procedure of expressing the Dirac operator in terms of a strongly elliptic second-order operator is called bosonization and is equally valid for an interacting theory. In the present work, we consider an interaction of the form \(\overline{\Psi}\Sigma\Psi\) in the fermionic Lagrangian 5.
Footnote 5: \(\Sigma\) contains scalar as well as pseudo-scalar interactions.
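As a quick cross-check of Eqs. (12) and (13), the short sympy sketch below evaluates the sum in Eq. (12) at \(d=4\) with symbolic placeholders for \(M_{f}\), \([b_{0}]\) and \([b_{1}]\):

```python
import sympy as sp

Mf, b0, b1 = sp.symbols('M_f b_0 b_1')

def Q(j):
    # Q_j = sum_{l=1}^{j} 1/(2l-1), as defined below Eq. (12)
    return sum(sp.Rational(1, 2*l - 1) for l in range(1, j + 1))

d = 4
hkc = {1: b1, 2: b0}  # [b_{d/2 - j}] for j = 1, 2
A = 2*sum((-1)**j * Mf**(2*j) * Q(j) / sp.factorial(j) * hkc[j]
          for j in range(1, d//2 + 1))
print(sp.expand(A))  # -> 4*M_f**4*b_0/3 - 2*M_f**2*b_1, i.e. Eq. (13)
```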
For the rest of the paper, we will not pay attention to this multiplicative anomaly as it is not of much relevance here. Employing the properties of the strongly elliptic second-order operator, the fermionic one-loop effective action can be derived from Eq. (2). This allows us to write a unified one-loop effective Lagrangian in the Euclidean space as
\[\mathcal{L}_{\text{eff}}=c_{s}\text{tr}\ln[-\mathcal{P}^{2}+M^{2}+U]=c_{s}\, \text{tr}\int_{0}^{\infty}\frac{dt}{t}K(t,x,x,\Delta), \tag{14}\]
where \(c_{s}=+1/2\), \(+1\), and \(-\frac{1}{2}\) for real scalar, complex scalar, and fermionic backgrounds, respectively. Here, the second-order elliptic operator (\(\Delta\)) has the generic form \(\Delta=-\mathcal{P}^{2}+M^{2}+U\), where \(\mathcal{P}\) and \(U\) are, respectively, the generalised covariant derivative and the field-dependent functional whose structures depend on the background fields being integrated out. This leads to the universal one-loop effective action (UOLEA) in terms of the HKCs [36]
\[\mathcal{L}_{\text{eff}}=\frac{c_{s}}{(4\pi)^{d/2}}\sum_{k=0}^{\infty}M^{d-2k }\frac{(-1)^{k}}{k!}\,\,\Gamma[k-d/2]\,\,\text{tr}[b_{k}]. \tag{15}\]
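For orientation, we note that each term of Eq. (15) arises from inserting the coincidence-limit expansion (6) into Eq. (14) and carrying out the elementary proper-time integral,
\[\int_{0}^{\infty}\frac{dt}{t}\,(4\pi t)^{-d/2}\,e^{-t\,M^{2}}\,\frac{(-t)^{k}}{k!}\,[b_{k}]=\frac{(-1)^{k}}{(4\pi)^{d/2}\,k!}\,M^{d-2k}\,\Gamma\left[k-\frac{d}{2}\right][b_{k}],\]
which converges for \(k>d/2\) and is otherwise understood in the sense of dimensional regularisation, as discussed below.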
In appendix C, we provide the explicit form of the UOLEA in terms of the generalised operators \(\mathcal{P}\) and \(U\), mimicking the one given in Ref. [36]. Substituting the field-specific form of the generalised operators \(\mathcal{P}\) and \(U\), one can obtain the one-loop effective action operators up to dimension eight for both scalar and fermionic background fields. For the fermion case, we must perform additional "traces (tr)" over the Clifford matrices to bring the effective action into a usable form.
It is important to note that for \(k\leq 2\) in Eq. (15), the effective Lagrangian is divergent due to the occurrence of simple poles of the gamma function at zero and negative arguments. Thus, one needs to regularise and renormalise the Lagrangian simultaneously. In the case of integrating out scalar fields, dimensional regularisation and the \(\overline{MS}\) renormalisation scheme have been used, see Ref. [36]. In the case of fermionic fields, as mentioned earlier, one has to be careful about the additional contributions from the multiplicative anomaly, see Eq. (12). While employing the HK method, we used dimensional regularisation, which is equivalent to zeta function regularisation [86]. As we are aiming at the computation of higher mass dimensional effective operators, the loop contributions are finite and consistent with the renormalisation theorem. Hence, we do not need to invoke any analytic continuation to \(d\neq 4\) dimensions. This ensures that the renormalised results remain equally valid for the fermionic case even in the presence of \(\gamma^{5}\). In the following subsections, we show how the scalar and fermion cases can be brought onto the same footing at the operator level, such that we can propose the UOLEA for both of them. We further mention why the fermion case requires special attention.
### Generic Lagrangian for Heavy Scalar
While integrating out a heavy scalar we start with a generic Lagrangian of the following form
\[\mathcal{L}^{\Phi}=\Phi^{\dagger}(-P^{2}+M_{s}^{2}+U_{s})\Phi, \tag{16}\]
the generalised covariant derivative is \(\mathcal{P}_{\mu}\equiv P_{\mu}=iD_{\mu}\), and the stress tensor, given by the commutator of the generalised covariant derivatives, reads
\[G_{\mu\nu}=[\mathcal{P}_{\mu},\mathcal{P}_{\nu}]\equiv[P_{\mu},P_{\nu}]=F_{\mu \nu}.\]
We do not require the specific form of the field-dependent functional \(U\), which contains information about the interaction between the heavy scalar and other light fields, i.e., infrared (IR) ones.
### Generic Lagrangian for Heavy Fermion
The heavy fermion field Lagrangian of our interest is written as
\[\mathcal{L}^{\Psi}=\overline{\Psi}(\not{P}-M_{f}-\Sigma)\Psi, \tag{17}\]
with \(M_{f}\) being the mass of the heavy fermion (\(\Psi\)) and \(\Sigma\) capturing all its interaction with the light (IR) fields. Note that, our method of fermionic UOLEA (FUOLEA) computation is independent of the inner structure of \(\Sigma\). In this paper, we emphasize the interaction between heavy fermion and light (pseudo)-scalars. From that perspective, we assume the following form of interaction:
\[\Sigma=S+iR\gamma^{5},\]
where, \(S\) and \(R\) are scalar and pseudo-scalar respectively.
As we have discussed earlier, this form of action will not allow us to use the HK method. Thus, our first aim is to perform a consistent bosonization procedure to recast the first-order operator in terms of the strong elliptic second-order operator. The bosonized fermionic Lagrangian reads as
\[\begin{split}\mathcal{L}^{\Psi}_{\text{eff}}&=-i\, \text{tr}\ln[\not{P}-M_{f}-S-i\,\gamma^{5}R]\\ &=\frac{-i}{2}\text{tr}\Big{\{}\ln[\not{P}-M_{f}-S-i\,\gamma^{5} R]+\ln[-\not{P}-M_{f}-S-i\,\gamma^{5}R]\Big{\}}\\ &=\frac{-i}{2}\text{tr}\,\ln\Big{[}-P^{2}-\frac{1}{2}\sigma_{ \mu\nu}F_{\mu\nu}+2i\gamma^{5}R\not{P}+M_{f}^{2}+S^{2}-R^{2}-(\not{P}S)\\ &\hskip 113.811024pt+2M(S+i\gamma^{5}R)+i\gamma^{5}(RS+SR)+i \gamma^{5}(\not{P}R)\Big{]},\end{split} \tag{18}\]
where,
\[F_{\mu\nu}=[P_{\mu},P_{\nu}],\ \sigma_{\mu\nu}=\frac{1}{2}[\gamma_{\mu}, \gamma_{\nu}],\ (P_{\mu}S)=[P_{\mu},S],\]
and we have ignored the contribution from the multiplicative anomaly. Note that in the above equation, Eq. (18), the presence of a first-order derivative operator (\(\not{P}\)) refrains us from directly identifying it with the unified one-loop effective action, see Eq. (14). This issue can be resolved by a redefinition of the covariant derivative
\[\tilde{P}_{\mu}=P_{\mu}-i\gamma^{5}\gamma_{\mu}R. \tag{19}\]
Now we can write the fermionic one-loop effective Lagrangian in terms of a Laplace-type operator as
\[\mathcal{L}^{\Psi}_{\text{eff}}=\frac{-i}{2}\text{tr}\,\ln[-\tilde{P}^{2}+M_{f }^{2}+U_{f}], \tag{20}\]
where,
\[U_{f}=Y+2M_{f}\Sigma,\] \[Y=-\frac{1}{2}\sigma_{\mu\nu}F_{\mu\nu}+S^{2}+3R^{2}-(\not{P}S)+ i\gamma^{5}(RS+SR). \tag{21}\]
At this point, we can identify the operator in Eq. (20) with same depicted in Eq. (14), with the generalised covariant derivative \(\mathcal{P}\equiv\tilde{P}_{\mu}=P_{\mu}-i\gamma^{5}\gamma_{\mu}R\), and the \(U\) equivalent term is given by \(U_{f}\) in Eq. (21). The commutator of the generalised covariant derivative \(G_{\mu\nu}\) leads to
\[G_{\mu\nu}=[\mathcal{P}_{\mu},\mathcal{P}_{\nu}]=[P_{\mu}-i\gamma^{5}\gamma_{ \mu}R,P_{\nu}-i\gamma^{5}\gamma_{\nu}R]=F_{\mu\nu}+\Gamma_{\mu\nu}, \tag{22}\]
where,
\[\Gamma_{\mu\nu}=i\gamma^{5}\gamma_{\mu}(P_{\nu}R)-i\gamma^{5}\gamma_{\nu}(P_{ \mu}R)+2\sigma_{\mu\nu}R^{2}. \tag{23}\]
We summarise how the generic Laplacian operator can be mapped for scalar and fermion cases in Table 1.
Note that, in the case of fermions, \(U_{f}\) can be decomposed into two parts, see Eq. (21). Among them, \(Y\) is a mass dimension two operator while \(\Sigma\) is an operator of mass dimension one. Hence, when one expands \(\frac{1}{M^{n}}U_{f}^{m}\), it contributes to operators of different dimensions starting from \(\mathcal{O}(1/M^{n})\). Here, the contributions to operators of different mass dimensions from \(Y\) and \(\Sigma\) are non-universal due to the involvement of \(M\) with \(\Sigma\). For example, let us consider the operator \(\frac{1}{M^{4}}U^{4}\) from the FUOLEA 6. The contributions from \(Y\) and \(\Sigma\) can be collected in the following form
Footnote 6: In the following sections, we will use \(U\) instead of \(U_{f}\) equivalently.
\[\frac{1}{M^{4}}\text{tr}[U^{4}]=\text{tr}\bigg{[}16\,\Sigma^{4}+ \frac{32}{M}\,Y\,\Sigma^{3}+\frac{8}{M^{2}}\,\{2Y^{2}\,\Sigma^{2}+(Y\,\Sigma)^ {2}\}+\frac{8}{M^{3}}\,Y^{3}\,\Sigma+\frac{1}{M^{4}}\,Y^{4}\bigg{]}. \tag{24}\]
It is evident that the \(\frac{1}{M^{4}}U^{4}\) operator in the FUOLEA contributes to dimension four (\(\Sigma^{4}\)), dimension five (\(Y\,\Sigma^{3}\)), dimension six (\(Y^{2}\,\Sigma^{2}\), \((Y\,\Sigma)^{2}\)), dimension seven (\(Y^{3}\,\Sigma\)), and dimension eight (\(Y^{4}\)) operators.
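The bookkeeping behind Eq. (24) can be made explicit with a small sympy sketch that treats \(Y\) and \(\Sigma\) as non-commuting placeholder symbols and sorts the expansion of \(U^{4}\) by powers of \(M\); the cyclic regrouping of the ordered products under the trace, which produces the compact form of Eq. (24), is not performed here.

```python
import sympy as sp
from collections import defaultdict

M = sp.symbols('M', positive=True)
Y, Sig = sp.symbols('Y Sigma', commutative=False)  # placeholders: Y ~ dim 2, Sigma ~ dim 1

expansion = sp.expand((Y + 2*M*Sig)**4)  # 16 ordered (non-commuting) products

pieces = defaultdict(int)
for term in expansion.as_ordered_terms():
    c_part, nc_part = term.args_cnc()      # commutative factors / ordered word in Y, Sigma
    coeff = sp.Mul(*c_part)
    n = sp.Poly(coeff, M).degree()         # power of M = number of Sigma insertions
    pieces[n] += (coeff / M**n) * sp.Mul(*nc_part)

for n in sorted(pieces):
    print(f"M**{n} part (-> 1/M**{4 - n} after the overall 1/M**4):", pieces[n])
```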
## 3 FUOLEA: UOLEA after Integrating Out Heavy Fermions
This section presents the fermionic one-loop effective Lagrangian operators in terms of the fermion Lagrangian operators. We present the explicit calculation for obtaining the dimension five operators from the general one-loop effective Lagrangian given in Appendix C. Then we provide the results for higher dimension operators up to dimension eight.
The initial Lagrangian Eq. (17) is written in the Minkowski space with (\(+---\)) metric signature. As the HK method is defined for the Euclidean metric signature, we use the following convention for the gamma matrices in Euclidean space (\(\gamma_{E}^{\mu}\)).
\[\gamma_{E}^{i} =\gamma^{i}, \gamma_{E}^{4} =i\gamma^{0}, \gamma_{E}^{5} =-(\gamma^{1}\gamma^{2}\gamma^{3}\gamma^{4})_{E}=\gamma^{5},\] \[\{\gamma_{E}^{\mu},\gamma_{E}^{\nu}\} =2g^{\mu\nu}, \gamma_{E}^{5}{}^{\dagger} =\gamma_{E}^{5}, (\gamma_{E}^{5})^{2} =1. \tag{25}\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline & Scalar & Fermion \\ \hline \(c_{s}\) & 1 or 1/2 & -1/2 \\ \hline \(\mathcal{P}_{\mu}\) & \(P_{\mu}\) & \(P_{\mu}-i\gamma^{5}\gamma_{\mu}R\) \\ \hline \(U\) & \(U_{s}\) & \(U_{f}=Y+2M\Sigma\), \\ & & \(Y=-\frac{1}{2}\sigma_{\mu\nu}G_{\mu\nu}+S^{2}+3R^{2}-(\not{P}S)+i\gamma^{5}( RS+SR),\ \Sigma=S+i\gamma^{5}R\) \\ \hline \(G_{\mu\nu}\) & \(F_{\mu\nu}\) & \(F_{\mu\nu}+\Gamma_{\mu\nu}\), \\ & & \(\Gamma_{\mu\nu}=i\gamma^{5}\gamma_{\mu}(P_{\nu}R)-i\gamma^{5}\gamma_{\nu}(P_{ \mu}R)+2\sigma_{\mu\nu}R^{2}\) \\ \hline \end{tabular}
\end{table}
Table 1: Generalisation of one-loop effective Lagrangian.
Here the gamma matrices without subscripts are in the Minkowski metric. After Wick rotation, the metric tensor is given by \(g^{\mu\nu}=-\delta^{\mu\nu}\). We drop the subscript \(E\) from the Euclidean gamma matrices in future sections.
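The Euclidean Clifford algebra of Eq. (25) can be verified numerically; the sketch below uses the Dirac representation (a convenience choice on our part, any representation works) and checks the anticommutation relations together with the properties of \(\gamma^{5}\).

```python
import numpy as np

# Minkowski gammas in the Dirac representation (a convenience choice; the
# checks below are representation independent).
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.kron(sz, I2)
gamma = [np.kron(np.array([[0, 1], [-1, 0]], dtype=complex), s) for s in (sx, sy, sz)]

# Euclidean gammas of Eq. (25): gamma_E^i = gamma^i, gamma_E^4 = i gamma^0
gE = gamma + [1j * g0]
g5 = -gE[0] @ gE[1] @ gE[2] @ gE[3]  # gamma^5 = -(gamma^1 gamma^2 gamma^3 gamma^4)_E

for mu in range(4):
    for nu in range(4):
        anti = gE[mu] @ gE[nu] + gE[nu] @ gE[mu]
        # {gamma_E^mu, gamma_E^nu} = 2 g^{mu nu} with g^{mu nu} = -delta^{mu nu}
        assert np.allclose(anti, -2 * (mu == nu) * np.eye(4))
    assert np.allclose(g5 @ gE[mu] + gE[mu] @ g5, 0)  # gamma^5 anticommutes with gamma_E^mu

assert np.allclose(g5, g5.conj().T) and np.allclose(g5 @ g5, np.eye(4))
print("Euclidean Clifford algebra and gamma^5 properties of Eq. (25) verified")
```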
### Dimension Five operators in FUOLEA
Dimension five operators are \(\mathcal{O}(1/M)\) in the inverse mass expansion of the one-loop effective Lagrangian. Following the above-mentioned power counting procedure, \(1/M^{2}\) operators of UOLEA containing at least one \(U\) operator, \(1/M^{4}\) operators containing at least three \(U\) operators, and \(1/M^{6}\) operators containing at least five \(U\) operators contribute to the D5 fermionic one-loop effective Lagrangian operator. These include the following operators from the UOLEA mentioned in Appendix C,
\[\mathcal{L}_{\rm eff}=\frac{c_{s}}{(4\pi)^{2}}{\rm tr}\,\bigg{\{} \frac{1}{M^{2}}\frac{1}{6}\,\Big{[}-U^{3}-\frac{1}{2}(\mathcal{P}_{\mu}U)^{2} -\frac{1}{2}U\left(G_{\mu\nu}\right)^{2}\Big{]}+\frac{1}{M^{4}}\frac{1}{24} \Big{[}U^{4}-U^{2}(\mathcal{P}^{2}U)\Big{]}\] \[\qquad\qquad\qquad+\frac{1}{M^{6}}\frac{1}{60}\,\Big{[}-U^{5} \Big{]}\bigg{\}}. \tag{10}\]
Here the trace is over all internal indices including the spinor indices. Below, we systematically show how the fermionic effective Lagrangian operators can be obtained from Eq. (10).
**Contributions from \(1/M^{2}\) terms in Eq. (10)**
Considering definitions of interaction function \(U\) and covariant derivative \(\mathcal{P}\) from Table 1, terms proportional to \(1/M^{2}\) in Eq. (10) can be expanded as,
\[\mathcal{L}_{\rm eff}^{\Psi}[\![1/M^{2}]\!]=\frac{c_{s}}{(4\pi)^{ 2}}{\rm tr}\,\frac{1}{M^{2}}\frac{1}{6}\,\Big{[} -(Y+2M\Sigma)^{3}-\frac{1}{2}[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,(Y+ 2M\Sigma)]^{2}\] \[-\frac{1}{2}(Y+2M\Sigma)(F_{\mu\nu}+\Gamma_{\mu\nu})^{2}\Big{]}. \tag{11}\]
Here we have used the fact that the closed derivatives in Eq. (10) are commutators, i.e. \((\mathcal{P}U)=[\mathcal{P},U]\). Since we are considering \(1/M^{2}\) terms of the UOLEA, only the terms of \(\mathcal{O}(M)\) contribute to \(D5\) and hence we would neglect terms of other orders in mass. When expanding the operators in Eq. (10), we encounter terms such as \(Y^{2}\) and \((\Gamma_{\mu\nu})^{2}\). Hence we give their expansion below.
\[Y^{2}= \frac{1}{4}(\sigma_{\mu\nu}F_{\mu\nu})^{2}-\frac{1}{2}\sigma_{\mu \nu}F_{\mu\nu}S^{2}-\frac{3}{2}\sigma_{\mu\nu}F_{\mu\nu}R^{2}-\frac{i}{2} \sigma_{\mu\nu}\gamma^{5}F_{\mu\nu}(RS+SR)-\frac{1}{2}\sigma_{\mu\nu}S^{2}F_{ \mu\nu}\] \[+\frac{1}{2}\sigma_{\mu\nu}F_{\mu\nu}(\not{P}S)+\frac{1}{2}(\not{ P}S)\sigma_{\mu\nu}F_{\mu\nu}-\frac{3}{2}\sigma_{\mu\nu}R^{2}F_{\mu\nu}-\frac{i}{2} \gamma^{5}\sigma_{\mu\nu}(RS+SR)F_{\mu\nu}+S^{4}\] \[+9R^{4}+3S^{2}R^{2}-RSRS-SRSR-RS^{2}R-SR^{2}S+(\not{P}S)^{2}+3R^{ 2}S^{2}+i\gamma^{5}RS^{3}\] \[+i\gamma^{5}S^{2}RS+i\gamma^{5}S^{3}R+3i\gamma^{5}R^{3}S+3i \gamma^{5}R^{2}SR+3i\gamma^{5}RSR^{2}+i\gamma^{5}SRS^{2}+3i\gamma^{5}SR^{3}\] \[-i\gamma^{5}(RS+SR)(\not{P}S)-i(\not{P}S)\gamma^{5}(RS+RS)-(\not {P}S)S^{2}-3(\not{P}S)R^{2}-S^{2}(\not{P}S)\] \[-3R^{2}(\not{P}S). \tag{12}\]
\[(\Gamma_{\mu\nu})^{2}=8(P_{\nu}R)^{2}-2(P\!\!\!/R)^{2}+4i\gamma^{5}\gamma_{\mu} \sigma_{\mu\nu}(P_{\nu}R)R^{2}+4i\gamma^{5}\sigma_{\mu\nu}\gamma_{\mu}R^{2}(P_{ \nu}R)-48R^{4}. \tag{3.5}\]
The trace over the spinor indices has been performed when providing the final expressions. Therefore, we first provide the results of some frequently occurring spinor traces that will be helpful in the calculation of the \(D5\) operators.
\[\mathrm{tr}^{s}\,Y=4S^{2}+12R^{2},\quad\mathrm{tr}^{s}\,\gamma^{5}Y=4i(RS+SR),\quad\mathrm{tr}^{s}\,\gamma_{\mu}Y=-4(P_{\mu}S),\quad\mathrm{tr}^{s}\,\gamma^{5}\gamma_{\mu}Y=0,\] \[\mathrm{tr}^{s}\,\Gamma_{\mu\nu}=0,\quad\mathrm{tr}^{s}\,\gamma^{5}\Gamma_{\mu\nu}=0,\quad\mathrm{tr}^{s}\,\gamma^{5}(\Gamma_{\mu\nu})^{2}=0,\quad\mathrm{tr}^{s}\,(\Gamma_{\mu\nu})^{2}=24(P_{\mu}R)^{2}-192R^{4},\] \[\mathrm{tr}^{s}\,Y^{2}=4\Big{\{}3S^{2}R^{2}+S^{4}+3R^{2}S^{2}+9R^{4}-RSRS-SRSR-RS^{2}R-SR^{2}S+(P_{\mu}S)^{2}-\frac{1}{2}(F_{\mu\nu})^{2}\Big{\}},\] \[\mathrm{tr}^{s}\,\gamma^{5}Y^{2}=4i\Big{\{}\frac{1}{2}\tilde{F}_{\mu\nu}F_{\mu\nu}+S^{2}RS+S^{3}R+3R^{3}S+3R^{2}SR+RS^{3}+SRS^{2}+3RSR^{2}+3SR^{3}\Big{\}}.\]
Here we have used,
\[\tilde{F}_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}F_{\rho\sigma}. \tag{3.6}\]
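Two of the listed traces can be cross-checked numerically with explicit \(4\times 4\) gamma matrices; in the sketch below \(S\), \(R\), \((P_{\mu}S)\) and \(F_{\mu\nu}\) are replaced by commuting c-numbers, so only the representation-independent spinor-trace structure (the pieces proportional to the identity and to \(\gamma^{5}\)) is probed, not the internal-space orderings.

```python
import numpy as np

# Dirac-representation gammas (convenience choice) and the Euclidean set of Eq. (25)
s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
I2, I4 = np.eye(2, dtype=complex), np.eye(4, dtype=complex)
gE = [np.kron(np.array([[0, 1], [-1, 0]], dtype=complex), si) for si in s] + [1j * np.kron(s[2], I2)]
g5 = -gE[0] @ gE[1] @ gE[2] @ gE[3]
sig = [[0.5 * (gE[m] @ gE[n] - gE[n] @ gE[m]) for n in range(4)] for m in range(4)]

rng = np.random.default_rng(1)
S, R = rng.normal(size=2)              # commuting c-number stand-ins for S and R
dS = rng.normal(size=4)                # stand-in for (P_mu S)
F = rng.normal(size=(4, 4))
F = F - F.T                            # antisymmetric stand-in for F_{mu nu}

# Y of Eq. (21) with the internal (flavour/gauge) structure stripped off
Y = (S**2 + 3*R**2) * I4 + 1j * (R*S + S*R) * g5
Y -= 0.5 * sum(F[m, n] * sig[m][n] for m in range(4) for n in range(4))
Y -= sum(dS[m] * gE[m] for m in range(4))

# only the identity and gamma^5 pieces of Y survive these spinor traces
assert np.isclose(np.trace(Y), 4 * (S**2 + 3*R**2))
assert np.isclose(np.trace(g5 @ Y), 4j * (R*S + S*R))
print("tr^s Y and tr^s gamma^5 Y agree with the listed results")
```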
The trace over all internal gauge and field indices is represented by \(\mathrm{tr}^{i}\), the trace over spinor indices is denoted by \(\mathrm{tr}^{s}\), and \(\mathrm{tr}\equiv\mathrm{tr}^{i}\cdot\mathrm{tr}^{s}\) represents the trace over all internal indices. Since \(\mathrm{tr}^{i}\) and \(\mathrm{tr}^{s}\) act on different spaces, the internal index space and the Clifford space respectively, they can be performed in either order. Now getting back to Eq. (3.3), we expand and simplify the individual terms.
\[\mathrm{tr}\,\,(Y+2M\Sigma)^{3}\llbracket M\rrbracket =\mathrm{tr}\big{[}Y^{2}(2M\Sigma)+Y(2M\Sigma)Y+(2M\Sigma)Y^{2} \big{]}\] \[=\mathrm{tr}\big{[}6MY^{2}\Sigma\big{]}=\mathrm{tr}\big{[}6M\,Y^ {2}(S+i\gamma^{5}R)\big{]}\] \[=\mathrm{tr}^{i}\Big{[}24M\Big{\{}-\frac{1}{2}S(F_{\mu\nu})^{2}+ \frac{1}{2}R\tilde{F}_{\mu\nu}F_{\mu\nu}+S^{5}-3SR^{4}\] \[\quad+3S^{3}R^{2}-5S^{2}RSR+S(P_{\mu}S)^{2}\Big{\}}\Big{]}. \tag{3.7}\]
\[\mathrm{tr}\,\,[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,(Y+2M\Sigma)]^{2} \llbracket M\rrbracket =\mathrm{tr}\big{[}2[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,Y][P_{\mu} -i\gamma^{5}\gamma_{\mu}R,2M\Sigma]\big{]}\] \[=\mathrm{tr}\big{[}4M\big{\{}(P_{\mu}Y)(P_{\mu}\Sigma)-i\gamma^{5 }\gamma_{\mu}RY(P_{\mu}\Sigma)+iYR\gamma^{5}\gamma_{\mu}(P_{\mu}\Sigma)\] \[\quad-i(P_{\mu}Y)\gamma^{5}\gamma_{\mu}R\Sigma+i(P_{\mu}Y)\Sigma R \gamma^{5}\gamma_{\mu}-\gamma^{5}\gamma_{\mu}RY\gamma^{5}\gamma_{\mu}R\Sigma\] \[\quad-4R^{2}Y\Sigma-4YR^{2}\Sigma-YR\gamma^{5}\gamma_{\mu}\Sigma R \gamma^{5}\gamma_{\mu}\big{\}}\big{]}\] \[=\mathrm{tr}^{i}\big{[}16M\big{\{}-2S(P_{\mu}R)^{2}+2S(P_{\mu}S)^ {2}+8S^{2}RSR+32SR^{4}\] \[\quad-8S^{3}R^{2}-R(P_{\mu}R)(P_{\mu}S)-R(P_{\mu}S)(P_{\mu}R)\big{\}} \big{]}. \tag{3.8}\]
\[\mathrm{tr}\,\,(Y+2M\Sigma)(F_{\mu\nu}+\Gamma_{\mu\nu})^{2} \llbracket M\rrbracket =\mathrm{tr}\big{[}2M\Sigma(F_{\mu\nu}+\Gamma_{\mu\nu})^{2}\big{]}\] \[=\mathrm{tr}\big{[}2M\Sigma\big{\{}(F_{\mu\nu})^{2}+(\Gamma_{ \mu\nu})^{2}+F_{\mu\nu}\Gamma_{\mu\nu}+\Gamma_{\mu\nu}F_{\mu\nu}\big{\}}\big{]}\] \[=\mathrm{tr}^{i}\big{[}8M\big{\{}S(F_{\mu\nu})^{2}+6S(P_{\mu}R)^ {2}-48SR^{4}\big{\}}\big{]}. \tag{3.9}\]
Here we use double brackets \(\llbracket M\rrbracket\) to represent terms of \(\mathcal{O}(M)\). In the above calculation, we have performed trace over spinor indices and have used trace properties over other internal indices to shuffle the structures in order to simplify.
**Contributions from \(1/M^{4}\) terms in Eq. (3.2)**
Terms proportional to \(1/M^{4}\) in Eq. (3.2) are expanded as,
\[\mathcal{L}_{\text{eff}}^{\Psi}[\![1/M^{4}]\!]= \frac{c_{s}}{(4\pi)^{2}}\text{tr}\,\frac{1}{M^{4}}\frac{1}{24} \left[(Y+2M\Sigma)^{4}-(Y+2M\Sigma)^{2}\right.\] \[\left.[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,[P_{\mu}-i\gamma^{5} \gamma_{\mu}R,(Y+2M\Sigma)]]\right]\!\right]\!. \tag{3.10}\]
Since we are considering \(1/M^{4}\) terms of the UOLEA, only the terms of \(\mathcal{O}(M^{3})\) contribute to \(D5\) and hence we would neglect terms of other orders in mass. Expanding the individual terms in Eq. (3.10) and performing trace over the spinor indices (\(\text{tr}^{s}\)),
\[\text{tr}\,\,(Y+2M\Sigma)^{4}[\![M^{3}]\!] =\text{tr}\big{[}4(2M)^{3}Y\Sigma^{3}\big{]}\] \[=\text{tr}\big{\{}32M^{3}Y\{S^{3}-SR^{2}-RSR-R^{S}+i\gamma^{5}(S^ {2}R+SRS+RS^{2}-R^{3})\}\big{\}}\] \[=\text{tr}^{i}\big{[}128M^{3}\{S^{5}-S^{3}R^{2}-5S^{2}RSR-7SR^{4} \}\big{]},\]
\[\text{tr}\,\,(Y+2M\Sigma)^{2}[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,[P _{\mu}-i\gamma^{5}\gamma_{\mu}R,(Y+2M\Sigma)]][\![M^{3}]\!]\] \[=\text{tr}\big{\{}(2M)^{3}\Sigma^{2}[P_{\mu}-i\gamma^{5}\gamma_{ \mu}R,[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,\Sigma]]\big{\}}\] \[=\text{tr}\big{\{}(2M)^{3}\Sigma^{2}[P_{\mu}-i\gamma^{5}\gamma_{ \mu}R,(P_{\mu}\Sigma)-i\gamma^{5}\gamma_{\mu}R\Sigma+i\Sigma R\gamma^{5}\gamma _{\mu}]\big{\}}\] \[=\text{tr}\big{[}(2M)^{3}\Sigma^{2}\{(P^{2}\Sigma)-i\gamma^{5} \gamma_{\mu}(P_{\mu}R)\Sigma-i\gamma^{5}\gamma_{\mu}R(P_{\mu}\Sigma)\] \[\quad+i(P_{\mu}\Sigma)R\gamma^{5}\gamma_{\mu}+i\Sigma(P_{\mu}R) \gamma^{5}\gamma_{\mu}+4R^{2}\Sigma+2\gamma^{5}\gamma_{\mu}R\Sigma R\gamma^{5 }\gamma_{\mu}\] \[\quad-i\gamma^{5}\gamma_{\mu}R(P_{\mu}\Sigma)+i(P_{\mu}\Sigma)R \gamma^{5}\gamma_{\mu}+4\Sigma R^{2}\}\big{]}\] \[=\text{tr}^{i}\big{[}32M^{3}\big{\{}-32SR^{4}+8S^{3}R^{2}-8S^{2} RSR-2S(P_{\mu}S)^{2}+2(P_{\mu}R)^{2}\] \[\qquad\qquad\qquad+2R(P_{\mu}S)(P_{\mu}R)+2R(P_{\mu}R)(P_{\mu}S) \big{\}}\big{]}. \tag{3.11}\]
**Contributions from \(1/M^{6}\) terms in Eq. (3.2)**
The term proportional to \(1/M^{6}\) in Eq. (3.2) is expanded as,
\[\mathcal{L}_{\text{eff}}^{\Psi}[\![1/M^{6}]\!]=-\frac{c_{s}}{(4\pi)^{2}}\text {tr}\,\frac{1}{M^{6}}\frac{1}{60}\left[U^{5}\right]\!. \tag{3.12}\]
Here, only terms of \(\mathcal{O}(M^{5})\) contribute to \(D5\) operators. Hence we consider only the \(2M\Sigma\) term in \(U\). Expanding and simplifying spinor traces, we get,
\[U^{5}[\![M^{5}]\!]=\text{tr}\big{[}(2M)^{5}\Sigma^{5}\big{]}=\text {tr}^{i}\big{[}4(2M)^{5}\big{\{}S^{5}+5SR^{4}-5S^{2}RSR-5S^{3}R^{2}\big{\}} \big{]}.\]
Combining all the contributions, the fermionic one-loop effective Lagrangian can be expressed in terms of operators of dimension five as
\[\mathcal{L}_{\text{eff}}^{\Psi(D5)}=\frac{c_{s}}{(4\pi)^{2}}\text {tr}^{i}\,\frac{1}{M}\Big{\{} -2RF_{\mu\nu}\tilde{F}_{\mu\nu}+\frac{4}{3}S(F_{\mu\nu})^{2}-\frac{ 4}{5}S^{5}-4S(P_{\mu}S)^{2}-4S(P_{\mu}R)^{2}-4SR^{4}\] \[\quad+4S^{2}RSR-\frac{20}{3}S^{3}R^{2}-\frac{4}{3}R(P_{\mu}S)(P_ {\mu}R)-\frac{4}{3}R(P_{\mu}R)(P_{\mu}S)\Big{\}}. \tag{3.13}\]
### Dimension Six operators in FUOLEA
Following the power counting procedure mentioned in Sec. 2, terms that contribute to \(D6\) operators from the UOLEA given in Appendix C are,
\[\mathcal{L}_{\text{eff}}=\frac{c_{s}}{(4\pi)^{2}} \text{tr}\bigg{\{}\frac{1}{M^{2}}\frac{1}{6}\bigg{[}-U^{3}-\frac{1 }{2}(\mathcal{P}_{\mu}U)^{2}-\frac{1}{2}U\,(G_{\mu\nu})^{2}-\frac{1}{10}(J_{\nu })^{2}+\frac{1}{15}\,G_{\mu\nu}\,G_{\nu\rho}\,G_{\rho\mu}\bigg{]}\] \[+\frac{1}{M^{4}}\frac{1}{24}\bigg{[}U^{4}-U^{2}(\mathcal{P}^{2}U) +\frac{4}{5}U^{2}(G_{\mu\nu})^{2}+\frac{1}{5}(U\,G_{\mu\nu})^{2}-\frac{2}{5}U\, (\mathcal{P}_{\mu}U)\,J_{\mu}\] \[+\frac{1}{5}(\mathcal{P}^{2}U)^{2}\bigg{]}+\frac{1}{M^{6}}\frac{1 }{60}\bigg{[}-U^{5}+2\,U^{3}(\mathcal{P}^{2}U)+U^{2}(\mathcal{P}_{\mu}U)^{2} \bigg{]}+\frac{1}{M^{8}}\frac{1}{120}\,\bigg{[}U^{6}\bigg{]}\bigg{\}}. \tag{3.14}\]
Expanding the generalised operators and collecting the \(\mathcal{O}(1/M^{2})\) terms, we get,
\[\mathcal{L}_{\text{eff}}^{\Psi(D6)}=\frac{c_{s}}{(4\pi)^{2}} \text{tr}\frac{1}{M^{2}}\bigg{\{} -\frac{1}{6}Y^{3}-\frac{4}{3}Y\,\Sigma^{4}-\frac{1}{12}Y\,(F_{ \mu\nu})^{2}-\frac{1}{12}Y\,(\Gamma_{\mu\nu})^{2}+\frac{2}{3}Y^{2}\,\Sigma^{2}\] \[+\frac{2}{15}\Sigma^{2}\,(\Gamma_{\mu\nu})^{2}+\frac{4}{15}\Sigma ^{2}\,[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,\Sigma]^{2}+\frac{2}{15}\Sigma^{2}\,( F_{\mu\nu})^{2}\] \[-\frac{1}{6}\Sigma^{2}\,[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,[P_{\mu }-i\gamma^{5}\gamma_{\mu}R,Y]]+\frac{1}{3}(Y\,\Sigma)^{2}\] \[+\frac{8}{15}\Sigma^{3}\,[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,[P_{\mu }-i\gamma^{5}\gamma_{\mu}R,\Sigma]]+\frac{1}{30}(\Sigma\,F_{\mu\nu})^{2}\] \[-\frac{1}{60}[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,F_{\mu\nu}][P_{\rho }-i\gamma^{5}\gamma_{\rho}R,F_{\rho\nu}]+\frac{1}{15}\Sigma\,F_{\mu\nu}\Sigma \,\Gamma_{\mu\nu}\] \[-\frac{1}{60}[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,F_{\mu\nu}][P_{ \rho}-i\gamma^{5}\gamma_{\rho}R,\Gamma_{\rho\nu}]+\frac{1}{30}(\Sigma\, \Gamma_{\mu\nu})^{2}\] \[-\frac{1}{30}[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,\Gamma_{\mu\nu}][P _{\rho}-i\gamma^{5}\gamma_{\rho}R,\Gamma_{\rho\nu}]+\frac{1}{90}F_{\mu\nu}F_{ \nu\rho}F_{\rho\mu}\] \[-\frac{1}{6}Y\Sigma\,[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,[P_{\mu}-i \gamma^{5}\gamma_{\mu}R,\Sigma]]-\frac{1}{12}Y\,F_{\mu\nu}\,\Gamma_{\mu\nu}\] \[-\frac{1}{15}\Sigma\,[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,\Sigma][P_{ \nu}-i\gamma^{5}\gamma_{\nu}R,F_{\nu\mu}]+\frac{2}{15}\Sigma^{2}F_{\mu\nu} \Gamma_{\mu\nu}\] \[-\frac{1}{15}\Sigma\,[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,\Sigma][P_ {\nu}-i\gamma^{5}\gamma_{\nu}R,F_{\nu\mu}]+\frac{2}{15}\Sigma^{2}\Gamma_{\mu \nu}F_{\mu\nu}\] \[-\frac{1}{12}[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,Y]^{2}+\frac{1}{30}[ P_{\mu}-i\gamma^{5}\gamma_{\mu}R,[P_{\mu}-i\gamma^{5}\gamma_{\mu}R,\Sigma]^{2}\] \[+\frac{2}{15}\Sigma^{2}\,F_{\mu\nu}\,\Gamma_{\mu\nu}+\frac{2}{15} \Sigma^{2}\,\Gamma_{\mu\nu}\,F_{\mu\nu}+\frac{1}{90}F_{\mu\nu}\,F_{\nu\rho}\,F _{\rho\mu}\] \[+\frac{1}{30}F_{\mu\nu}\,F_{\nu\rho}\,\Gamma_{\rho\mu}+\frac{1}{30 }F_{\mu\nu}\,\Gamma_{\nu\rho}\,\Gamma_{\rho\mu}+\frac{1}{90}\Gamma_{\mu\nu}\, \Gamma_{\nu\rho}\,\Gamma_{\rho\mu}\bigg{\}}. \tag{3.15}\]
Following the steps explained in the previous section, the dimension six fermionic one-loop effective operators are given by,
\[\mathcal{L}_{\text{eff}}^{\Psi(D6)}=\frac{c_{s}}{(4\pi)^{2}}\text{tr}^{i}\, \frac{1}{M^{2}}\Big{\{}\frac{2}{15}S^{6}+\frac{4}{3}S^{4}R^{2}+\frac{4}{3}S^{3} RSR-2(S^{2}R)^{2}-\frac{4}{3}S^{2}R^{4}+4R^{3}SRS\]
\[-\frac{2}{3}(R^{2}S)^{2}-\frac{2}{3}R^{6}+\frac{4}{5}S^{2}(P_{\mu}S)^ {2}-\frac{4}{3}SR(P_{\mu}R)(P_{\mu}S)-\frac{4}{3}R^{2}(P_{\mu}S)^{2}\] \[-\frac{4}{3}RS(P_{\mu}S)(P_{\mu}R)+\frac{8}{3}SR(P_{\mu}S)(P_{\mu} R)+\frac{8}{3}RS(P_{\mu}R)(P_{\mu}S)\] \[-\frac{8}{3}R^{2}(P_{\mu}R)^{2}+\frac{6}{5}(S(P_{\mu}S))^{2}+2(S( P_{\mu}R))^{2}+\frac{2}{3}(R(P_{\mu}S))^{2}\] \[-\frac{2}{3}(R(P_{\mu}R))^{2}-\frac{1}{5}(P^{2}S)^{2}-\frac{1}{3} (P^{2}R)^{2}-\frac{7}{15}S^{2}(F_{\mu\nu})^{2}-\frac{1}{3}R^{2}(F_{\mu\nu})^{2}\] \[-\frac{1}{5}(SF_{\mu\nu})^{2}+\frac{1}{3}(RF_{\mu\nu})^{2}+\frac{ 16}{15}S(P_{\mu}S)(P_{\nu}F_{\nu\mu})+\frac{4}{3}R(P_{\mu}R)(P_{\nu}F_{\nu\mu})\] \[+\frac{2}{3}(SR+RS)\tilde{F}_{\mu\nu}F_{\mu\nu}+\frac{2}{3}S \tilde{F}_{\mu\nu}RF_{\mu\nu}+\frac{4}{15}(P_{\nu}F_{\nu\mu})^{2}+\frac{2}{45 }F_{\mu\nu}F_{\nu\rho}F_{\rho\mu}\Big{\}}. \tag{3.16}\]
The above-given dimension six operators can be classified into different classes in the Green's basis [87]
\[\Phi^{6},\;\Phi^{4}D^{2},\;\Phi^{2}D^{4},\;F\Phi^{2}D^{2},\;F^{2} \Phi^{2},\;F^{2}D^{2},\;F^{3}, \tag{3.17}\]
where, \([S,R]\in\Phi\), \(D\) represents the covariant derivative \(P_{\mu}\), and \(F\) represents the field tensor \(F_{\mu\nu}\).
### Dimension Seven operators in FUOLEA
Following the power counting procedure mentioned in Sec. 2, the dimension seven fermionic one-loop effective Lagrangian operators get contributions from the following terms of the UOLEA in Appendix C.
\[\mathcal{L}_{\rm eff}= \frac{c_{s}}{(4\pi)^{2}}\,{\rm tr}\bigg{\{}\frac{1}{M^{4}}\frac{ 1}{24}\bigg{[}U^{4}-U^{2}(\mathcal{P}^{2}U)+\frac{4}{5}U^{2}(G_{\mu\nu})^{2}+ \frac{1}{5}(U\,G_{\mu\nu})^{2}-\frac{2}{5}U\,(\mathcal{P}_{\mu}U)\,J_{\mu}\] \[\qquad\qquad\qquad+\frac{1}{5}(\mathcal{P}^{2}U)^{2}+\frac{2}{5} U(J_{\mu})^{2}-\frac{2}{15}(\mathcal{P}^{2}U)(G_{\rho\sigma})^{2}-\frac{4}{15}U \,G_{\mu\nu}G_{\nu\rho}G_{\rho\mu}\] \[\qquad\qquad\qquad-\frac{8}{15}(\mathcal{P}_{\mu}\mathcal{P}_{ \nu}U)\,G_{\rho\mu}G_{\rho\nu}\bigg{]}\] \[+\frac{1}{M^{6}}\frac{1}{60}\bigg{[}-U^{5}+2\,U^{3}(\mathcal{P}^ {2}U)+U^{2}(\mathcal{P}_{\mu}U)^{2}-\frac{2}{3}U^{2}G_{\mu\nu}U\,G_{\mu\nu}-U^ {3}(G_{\mu\nu})^{2}\] \[\qquad\qquad\qquad+\frac{1}{3}U^{2}(\mathcal{P}_{\mu}U)J_{\mu}- \frac{1}{3}U\,(\mathcal{P}_{\mu}U)(\mathcal{P}_{\nu}U)\,G_{\mu\nu}-\frac{1}{3} U^{2}J_{\mu}(\mathcal{P}_{\mu}U)\] \[\qquad\qquad\qquad-\frac{1}{3}U\,G_{\mu\nu}(\mathcal{P}_{\mu}U)( \mathcal{P}_{\nu}U)-U\,(\mathcal{P}^{2}U)^{2}-\frac{2}{3}(\mathcal{P}^{2}U)( \mathcal{P}_{\nu}U)^{2}\bigg{]}\] \[+\frac{1}{M^{8}}\frac{1}{120}\left[U^{6}-3\,U^{4}(\mathcal{P}^{2}U )-2\,U^{3}(\mathcal{P}_{\nu}U)^{2}\right]+\frac{1}{M^{10}}\frac{1}{210}\, \bigg{[}-U^{7}\bigg{]}\bigg{\}}. \tag{3.18}\]
Expanding the functional \(U\) and collecting the terms of \(\mathcal{O}(1/M^{3})\), the one-loop effective Lagrangian operators in terms of \(\Sigma\), \(Y\) and the generalised covariant derivative (\(\mathcal{P}\)) are given in Appendix A. These are more generalised forms of the effective Lagrangian operators where \(\mathcal{P}\) contains any gauge fields and additional operators that arise due to the bosonization of the Dirac operator, \(\Sigma\) contains any mass dimension one interaction and \(Y\) is the corresponding
mass dimension two operator produced during the bosonization. Due to the large number of operators in dimension seven, we present the effective Lagrangian in terms of Green's operator classes [87]:
\[\Phi^{7},\;\Phi^{5}D^{2},\;\Phi^{3}D^{4},\;F\Phi^{3}D^{2},\;F^{2}\Phi^{3},\;F^{2} \Phi D^{2},\;F^{3}\Phi. \tag{3.19}\]
Operators of a particular class are represented by the double brackets \(\mathcal{L}\llbracket class\rrbracket\). The individual classes are further classified based on the CP-conserving nature of the operators and the presence of the Levi-Civita tensor (\(\varepsilon_{\alpha\beta\mu\nu}\)). CP-violating operators are represented by the superscript \(CPV\) and the CP-conserving operators of the same class are represented by the superscript \(CPC\). Classes containing operators with the Levi-Civita tensor are split into two categories, \(I\) and \(II\), where \(II\) collects the operators involving the Levi-Civita tensor. Hence the dimension seven one-loop effective Lagrangian operators are given by,
\[\mathcal{L}_{\text{eff}}^{\Phi(D7)}= \frac{c_{s}}{(4\pi)^{2}}\,\text{tr}\bigg{[}\mathcal{L}\llbracket \Phi^{7}\rrbracket+\mathcal{L}\llbracket\Phi^{5}D^{2}\rrbracket+\mathcal{L} \llbracket\Phi^{3}D^{4}\rrbracket+\mathcal{L}_{I}\llbracket F\Phi^{3}D^{2} \rrbracket+\mathcal{L}_{II}\llbracket F\Phi^{3}D^{2}\rrbracket\] \[+\mathcal{L}^{CPC}\llbracket F^{2}\Phi^{3}\rrbracket+\mathcal{L }^{CPV}\llbracket F^{2}\Phi^{3}\rrbracket+\mathcal{L}_{I}\llbracket F^{2}\Phi D ^{2}\rrbracket+\mathcal{L}_{II}\llbracket F^{2}\Phi D^{2}\rrbracket\] \[+\mathcal{L}^{CPC}\llbracket F^{3}\Phi\rrbracket+\mathcal{L}^{ CPV}\llbracket F^{3}\Phi\rrbracket\bigg{]}. \tag{3.20}\]
For the sake of clarity in the text, we use the following notation hereafter.
\[S_{\mu_{1}\mu_{2}...\mu_{n}} \equiv[P_{\mu_{n}}...,[P_{\mu_{2}},[P_{\mu_{1}},S]]],\] \[R_{\mu_{1}\mu_{2}...\mu_{n}} \equiv[P_{\mu_{n}}...,[P_{\mu_{2}},[P_{\mu_{1}},R]]],\] \[F_{\alpha\beta\mu_{1}...\mu_{n}} \equiv[P_{\mu_{n}}...,[P_{\mu_{1}},F_{\alpha\beta}]].\]
Dimension seven operators of different classes are given below.
\[\mathcal{L}\llbracket\Phi^{7}\rrbracket= \,\text{tr}^{i}\bigg{[}\frac{1}{M^{3}}\bigg{\{}\frac{4}{3}\,SR^{ 6}-\frac{4}{3}\,SRSRSR^{2}-\frac{4}{3}\,S^{2}R^{3}SR+\frac{4}{3}\,S^{2}R^{2} SR^{2}-\frac{4}{3}\,S^{2}RSR^{3}\] \[+\frac{4}{3}\,S^{3}R^{4}+\frac{4}{3}\,S^{3}RS^{2}R-\frac{4}{3}\, S^{4}RSR-\frac{4}{15}\,S^{5}R^{2}-\frac{4}{105}\,S^{7}\bigg{\}}\bigg{]}. \tag{3.21}\]
\[\mathcal{L}\llbracket\Phi^{5}D^{2}\rrbracket= \,\text{tr}^{i}\bigg{[}\frac{1}{M^{3}}\bigg{\{}-\frac{4}{3}\,S^{ 2}R_{\mu}S_{\mu}R+\frac{16}{15}\,S^{2}S_{\mu}R_{\mu}R+\frac{2}{3}\,SR_{\mu}RR_ {\mu}R-2\,SR_{\mu}SS_{\mu}R\] \[+\frac{4}{3}\,SR_{\mu}R_{\mu}R^{2}-\frac{2}{3}\,SR_{\mu}S_{\mu}SR- \frac{2}{3}\,SS_{\mu}RS_{\mu}R+\frac{6}{5}\,SS_{\mu}SR_{\mu}R-\frac{2}{3}\,SS_ {\mu}R_{\mu}SR\] \[+\frac{16}{15}\,SS_{\mu}S_{\mu}R^{2}+\frac{2}{3}\,R_{\mu}RS_{\mu} R^{2}+\frac{2}{3}\,R_{\mu}SRR_{\mu}R+\frac{2}{5}\,R_{\mu}S^{2}S_{\mu}R+2\,R_{\mu}SR_ {\mu}R^{2}\] \[-2\,R_{\mu}SR_{\mu}S^{2}+\frac{6}{5}\,R_{\mu}SS_{\mu}SR+\frac{2}{ 3}\,R_{\mu}R_{\mu}RSR+\frac{4}{3}\,R_{\mu}R_{\mu}SR^{2}+\frac{2}{3}\,R_{\mu}R _{\mu}S^{3}\] \[+\frac{2}{3}\,R_{\mu}S_{\mu}R^{3}+\frac{16}{15}\,R_{\mu}S_{\mu}S^ {2}R+\frac{2}{3}\,S_{\mu}RR_{\mu}R^{2}-\frac{2}{3}\,S_{\mu}SRS_{\mu}R+\frac{2} {5}\,S_{\mu}S^{2}R_{\mu}R\] \[-2\,S_{\mu}SR_{\mu}SRR+\frac{6}{5}\,S_{\mu}SS_{\mu}R^{2}+\frac{2} {3}\,S_{\mu}R_{\mu}R^{3}-\frac{4}{3}\,S_{\mu}R_{\mu}S^{2}R-\frac{2}{3}\,S_{ \mu}S_{\mu}RSR\] \[+\frac{16}{15}\,S_{\mu}S_{\mu}SR^{2}-\frac{6}{5}\,S_{\mu}SS_{\mu} S^{2}-\frac{2}{15}\,S_{\mu}S_{\mu}S^{3}\bigg{\}}\bigg{]}. \tag{3.22}\]
\[\mathcal{L}[\![F\Phi^{3}D^{2}]\!] = \mathrm{tr}^{i}\bigg{[}\frac{1}{M^{3}}\bigg{\{}\frac{2}{5}\,R_{\mu} S_{\mu\nu\nu}R-\frac{2}{45}\,S_{\mu}R_{\nu\nu}R_{\mu}+\frac{8}{15}\,R_{\mu\mu}R_{ \nu\nu}S+\frac{8}{15}\,R_{\mu\mu}S_{\nu\nu}R \tag{3.24}\] \[-\frac{2}{3}\,\tilde{F}_{\mu\nu}SF_{\mu\nu}SR-\frac{1}{3}\,\tilde {F}_{\mu\nu}F_{\mu\nu}S^{2}R-\frac{2}{3}\,\tilde{F}_{\mu\nu}RF_{\mu\nu}R^{2}+ \frac{4}{3}\,\tilde{F}_{\mu\nu}F_{\mu\nu}R^{3}\bigg{\}}\bigg{]}.\]
\[\mathcal{L}_{I}[\![F^{2}\Phi D^{2}]\!] = \mathrm{tr}^{i}\bigg{[}\frac{1}{M^{3}}\bigg{\{}-\frac{1}{6}\, \tilde{F}_{\mu\nu\alpha}F_{\mu\nu}R-\frac{1}{6}\,\tilde{F}_{\mu\nu}F_{\mu\nu \kappa}R-\frac{1}{6}\,\tilde{F}_{\mu\nu}F_{\mu\nu}R_{\kappa\kappa}\bigg{\}} \bigg{]}. \tag{3.25}\]
\[\mathcal{L}_{II}[\![F^{3}D^{2}]\!] = \mathrm{tr}^{i}\bigg{[}\frac{1}{M^{3}}\bigg{\{}-\frac{1}{6}\, \tilde{F}_{\mu\nu\alpha}F_{\mu\nu}R-\frac{1}{6}\,\tilde{F}_{\mu\nu}F_{\mu\nu \kappa}R-\frac{1}{6}\,\tilde{F}_{\mu\nu}F_{\mu\nu}R_{\kappa\kappa}\bigg{\}} \bigg{]}. \tag{3.26}\]
\[\mathcal{L}^{CPC}[\![F^{2}\Phi^{3}]\!] = \mathrm{tr}^{i}\bigg{[}\frac{1}{M^{3}}\bigg{\{}\frac{4}{15}\,SF_{ \mu\nu}RF_{\mu\nu}R-\frac{2}{3}\,SF_{\mu\nu}F_{\mu\nu}R^{2}+\frac{4}{15}\,F_{ \mu\nu}SRF_{\mu\nu}R-\frac{4}{15}\,F_{\mu\nu}SF_{\mu\nu}R^{2} \tag{3.27}\] \[-\frac{4}{15}\,F_{\mu\nu}F_{\mu\nu}RSR-\frac{2}{3}\,F_{\mu\nu}F_ {\mu\nu}SR^{2}+\frac{2}{5}\,F_{\mu\nu}SF_{\mu\nu}S^{2}+\frac{2}{45}\,F_{\mu \nu}F_{\mu\nu}S^{3}\bigg{\}}\bigg{]}.\]
\[\mathcal{L}^{CPV}[\![F^{2}\Phi^{3}]\!] = \mathrm{tr}^{i}\bigg{[}\frac{1}{M^{3}}\bigg{\{}-\frac{1}{3}\,S^{ 2}\,\tilde{F}_{\mu\nu}F_{\mu\nu}R-\frac{2}{3}\,S\tilde{F}_{\mu\nu}SF_{\mu\nu} R-\frac{2}{3}\,S\tilde{F}_{\mu\nu}F_{\mu\nu}SR+\frac{2}{3}\,\tilde{F}_{\mu\nu}S^{2}F_{ \mu\nu}R\] \[-\frac{2}{3}\,\tilde{F}_{\mu\nu}SF_{\mu\nu}SR-\frac{1}{3}\,\tilde {F}_{\mu\nu}F_{\mu\nu}S^{2}R-\frac{2}{3}\,\tilde{F}_{\mu\nu}RF_{\mu\nu}R^{2}+ \frac{4}{3}\,\tilde{F}_{\mu\nu}F_{\mu\nu}R^{3}\bigg{\}}\bigg{]}.\]
\[\mathcal{L}_{I}[\![F^{2}\Phi D^{2}]\!] = \mathrm{tr}^{i}\bigg{[}\frac{1}{M^{3}}\bigg{\{}-\frac{1}{6}\, \tilde{F}_{\mu\nu\alpha}F_{\mu\nu}R-\frac{1}{6}\,\tilde{F}_{\mu\nu}F_{\mu\nu \kappa}R-\frac{1}{6}\,\tilde{F}_{\mu\nu}F_{\mu\nu\kappa}R-\frac{1}{6}\,\tilde {F}_{\mu\nu}F_{\mu\nu}R_{\kappa\kappa}\bigg{\}}\bigg{]}. \tag{3.28}\]
\[\mathcal{L}_{II}[\![F^{2}\Phi D^{2}]\!] = \mathrm{tr}^{i}\bigg{[}\frac{1}{M^{3}}\bigg{\{}-\frac{1}{6}\, \tilde{F}_{\mu\nu\alpha\alpha}F_{\mu\nu}R-\frac{1}{6}\,\tilde{F}_{\mu\nu}F_{ \mu\nu\kappa\kappa}R-\frac{1}{6}\,\tilde{F}_{\mu\nu}F_{\mu\nu}R_{\kappa\kappa} \bigg{\}}\bigg{]}. \tag{3.29}\]
\[{\cal L}^{CPC}[\![F^{3}\Phi]\!]= \,{\rm tr}^{i}\biggl{[}\frac{1}{M^{3}}\biggl{\{}-\frac{64}{45}\,F_{ \mu\nu}F_{\nu\alpha}F_{\alpha\mu}S\biggr{\}}\biggr{]}. \tag{3.30}\]
\[{\cal L}^{CPV}[\![F^{3}\Phi]\!]= \,{\rm tr}^{i}\biggl{[}\frac{1}{M^{3}}\biggl{\{}\frac{2}{3}\, \tilde{F}_{\mu\nu}F_{\nu\kappa}F_{\kappa\mu}R+\frac{1}{6}\,F_{\mu\nu}\tilde{F} _{\nu\kappa}F_{\kappa\mu}R+\frac{1}{2}\,F_{\mu\nu}F_{\nu\kappa}\tilde{F}_{ \kappa\mu}R\biggr{\}}\biggr{]}. \tag{3.31}\]
### Dimension Eight operators in FUOLEA
Following the power counting procedure mentioned in Sec. 2, the dimension eight fermionic one-loop effective Lagrangian operators receive contributions from the following terms of the UOLEA in Appendix C.
\[{\cal L}_{\rm eff}= \frac{c_{s}}{(4\pi)^{2}}\,{\rm tr}\biggl{\{}\frac{1}{M^{4}}\frac{ 1}{24}\biggl{[}U^{4}-U^{2}({\cal P}^{2}U)+\frac{4}{5}U^{2}(G_{\mu\nu})^{2}+ \frac{1}{5}(U\,G_{\mu\nu})^{2}+\frac{1}{5}({\cal P}^{2}U)^{2}\] \[\qquad\qquad\qquad-\frac{2}{5}U\,({\cal P}_{\mu}U)\,J_{\mu}+\frac {2}{5}U(J_{\mu})^{2}-\frac{2}{15}({\cal P}^{2}U)(G_{\rho\sigma})^{2}+\frac{1} {35}({\cal P}_{\nu}J_{\mu})^{2}\] \[\qquad\qquad\qquad-\frac{4}{15}U\,G_{\mu\nu}G_{\nu\rho}G_{\rho \mu}-\frac{8}{15}({\cal P}_{\mu}{\cal P}_{\nu}U)\,G_{\rho\mu}G_{\rho\nu}+ \frac{16}{105}G_{\mu\nu}J_{\mu}J_{\nu}\] \[\qquad\qquad\qquad+\frac{1}{420}(G_{\mu\nu}G_{\rho\sigma})^{2}+ \frac{17}{210}(G_{\mu\nu})^{2}(G_{\rho\sigma})^{2}+\frac{2}{35}(G_{\mu\nu}G_ {\nu\rho})^{2}\] \[\qquad\qquad\qquad+\frac{1}{105}G_{\mu\nu}G_{\nu\rho}G_{\rho \sigma}G_{\sigma\mu}+\frac{16}{105}({\cal P}_{\mu}J_{\nu})G_{\nu\sigma}G_{ \sigma\mu}\biggr{]}\] \[\qquad\qquad+\frac{1}{M^{6}}\frac{1}{60}\,\biggl{[} -U^{5}+2\,U^{3}({\cal P}^{2}U)+U^{2}({\cal P}_{\mu}U)^{2}-\frac{2}{3}U^{2}G_{ \mu\nu}U\,G_{\mu\nu}-U^{3}(G_{\mu\nu})^{2}\] \[\qquad\qquad\qquad+\frac{1}{3}U^{2}({\cal P}_{\mu}U)J_{\mu}-\frac {1}{3}U\,({\cal P}_{\mu}U)({\cal P}_{\nu}U)\,G_{\mu\nu}-\frac{1}{3}U^{2}J_{\mu }({\cal P}_{\mu}U)\] \[\qquad\qquad\qquad-\frac{1}{3}U\,G_{\mu\nu}({\cal P}_{\mu}U)({ \cal P}_{\nu}U)-U\,({\cal P}^{2}U)^{2}-\frac{2}{3}({\cal P}^{2}U)({\cal P}_{ \nu}U)^{2}-\frac{1}{7}(({\cal P}_{\mu}U)G_{\mu\alpha})^{2}\] \[\qquad\qquad\qquad+\frac{2}{7}U^{2}G_{\mu\nu}G_{\nu\alpha}G_{ \alpha\mu}+\frac{8}{21}U\,G_{\mu\nu}U\,G_{\nu\alpha}G_{\alpha\mu}-\frac{4}{7}U ^{2}(J_{\mu})^{2}-\frac{3}{7}(U\,J_{\mu})^{2}\] \[\qquad\qquad\qquad+\frac{4}{7}U\,({\cal P}^{2}U)(G_{\mu\nu})^{2}+ \frac{4}{7}({\cal P}^{2}U)U(G_{\mu\nu})^{2}-\frac{2}{7}U\,({\cal P}_{\mu}U)J_{ \nu}G_{\mu\nu}\] \[\qquad\qquad\qquad-\frac{2}{7}({\cal P}_{\mu}U)U\,G_{\mu\nu}J_{ \nu}-\frac{4}{7}U\,({\cal P}_{\mu}U)G_{\mu\nu}J_{\nu}-\frac{4}{7}({\cal P}_{ \mu}U)U\,J_{\nu}G_{\mu\nu}\] \[\qquad\qquad\qquad+\frac{4}{21}U\,G_{\mu\nu}({\cal P}^{2}U)G_{ \mu\nu}+\frac{11}{21}({\cal P}_{\alpha}U)^{2}(G_{\mu\nu})^{2}-\frac{10}{21}({ \cal P}_{\mu}U)J_{\nu}U\,G_{\mu\nu}\] \[\qquad\qquad\qquad-\frac{10}{21}({\cal P}_{\mu}U)G_{\mu\nu}U\,J_{ \nu}-\frac{2}{21}({\cal P}_{\mu}U)({\cal P}_{\nu}U)G_{\mu\alpha}G_{\alpha\nu}+ \frac{10}{21}({\cal P}_{\nu}U)({\cal P}_{\mu}U)G_{\mu\alpha}G_{\alpha\nu}\] \[\qquad\qquad\qquad-\frac{1}{7}(G_{\alpha\mu}({\cal P}_{\mu}U))^{2}- \frac{1}{42}(({\cal P}_{\alpha}U)G_{\mu\nu})^{2}-\frac{1}{14}({\cal P}_{\mu}{ \cal P}^{2}U)^{2}-\frac{4}{21}({\cal P}^{2}U)({\cal P}_{\mu}U)J_{\mu}\] \[\qquad\qquad\qquad+\frac{4}{21}({\cal P}_{\mu}U)({\cal P}^{2}U)J_{ \mu}+\frac{2}{21}({\cal P}_{\mu}U)({\cal P}_{\nu}U)({\cal P}_{\mu}J_{\nu})- \frac{2}{21}({\cal P}_{\nu}U)({\cal P}_{\mu}U)({\cal P}_{\mu}J_{\nu})\biggr{]}\] \[\qquad\qquad+\frac{1}{M^{8}}\frac{1}{120}\,\biggl{[}U^{6}-3\,U^{4} ({\cal P}^{2}U)-2\,U^{3}({\cal P}_{\nu}U)^{2}+\frac{12}{7}U^{2}({\cal P}_{\mu}{ \cal P}_{\nu}U)({\cal P}_{\nu}{\cal P}_{\mu}U)\] \[\qquad\qquad\qquad+\frac{26}{7}({\cal P}_{\mu}{\cal P}_{\nu}U)U \,({\cal P}_{\mu}U)({\cal P}_{\nu}U)+\frac{26}{7}({\cal P}_{\mu}{\cal P}_{\nu}U)( {\cal P}_{\mu}U)({\cal P}_{\nu}U)U+\frac{9}{7}({\cal P}_{\mu}U)^{2}({\cal P}_{ \nu}U)^{2}\] \[\qquad\qquad\qquad+\frac{9}{7}U\,({\cal P}_{\mu}{\cal P}_{\nu}U)U \,({\cal P}_{\nu}{\cal P}_{\mu}U)+\frac{17}{14}(({\cal P}_{\mu}U)({\cal P}_{ \nu}U))^{2}+\frac{8}{7}U^{3}G_{\mu\nu}U\,G_{\mu\nu}\]
\[+\frac{5}{7}U^{4}(G_{\mu\nu})^{2}+\frac{18}{7}G_{\mu\nu}({\cal P}_{ \mu}U)U^{2}({\cal P}_{\nu}U)+\frac{9}{14}(U^{2}G_{\mu\nu})^{2}\] \[+\frac{18}{7}G_{\mu\nu}U\,({\cal P}_{\mu}U)({\cal P}_{\nu}U)U+\frac {18}{7}({\cal P}_{\mu}{\cal P}_{\nu}U)({\cal P}_{\mu}U)U\,({\cal P}_{\nu}U)\] \[+\left(\frac{8}{7}G_{\mu\nu}U\,({\cal P}_{\mu}U)U\,({\cal P}_{\nu }U)+\frac{26}{7}G_{\mu\nu}({\cal P}_{\mu}U)U\,({\cal P}_{\nu}U)U\right)\] \[+\left(\frac{24}{7}G_{\mu\nu}({\cal P}_{\mu}U)({\cal P}_{\nu}U)U^ {2}-\frac{2}{7}G_{\mu\nu}U^{2}({\cal P}_{\mu}U)({\cal P}_{\nu}U)\right)\Biggr{]}\] \[+\frac{1}{M^{10}}\frac{1}{210}\,\bigg{[}-U^{7}-5\,U^{4}({\cal P}_ {\nu}U)^{2}-8\,U^{3}({\cal P}_{\mu}U)U({\cal P}_{\mu}U)-\frac{9}{2}(U^{2}({ \cal P}_{\mu}U))^{2}\bigg{]}\] \[+\frac{1}{M^{12}}\frac{1}{336}\,\bigg{[}U^{8}\bigg{]}\bigg{\}}. \tag{3.32}\]
Expanding the functional \(U\) and collecting the terms of \({\cal O}(1/M^{4})\), the one-loop effective Lagrangian operators in terms of \(\Sigma\), \(Y\) and the generalised covariant derivative (\({\cal P}\)) are given in Appendix B.
We will present the effective Lagrangian in terms of the dimension eight Green's basis operator classes [87]
\[\Phi^{8},\;\Phi^{6}D^{2},\;\Phi^{4}D^{4},\;\Phi^{2}D^{6},\;F\Phi^{4}D^{2},\;F\Phi^{2}D^{4},\;F^{2}\Phi^{4},\;F^{2}\Phi^{2}D^{2},\;F^{2}D^{4},\;F^{3}\Phi^{2},\;F^{3}D^{2},\;F^{4}.\]
Hence the dimension eight one-loop effective Lagrangian operators are given by,
\[\mathcal{L}_{\rm eff}^{\Psi(D8)}= \frac{c_{s}}{(4\pi)^{2}}\bigg{[}{\cal L}[\![\Phi^{8}]\!]+{\cal L}[\![\Phi^{6}D^{2}]\!]+{\cal L}_{I}[\![\Phi^{4}D^{4}]\!]+{\cal L}_{II}[\![\Phi^{4}D^{4}]\!]+{\cal L}[\![\Phi^{2}D^{6}]\!]+{\cal L}_{I}[\![F\Phi^{4}D^{2}]\!]\] \[+{\cal L}_{II}[\![F\Phi^{4}D^{2}]\!]+{\cal L}_{I}[\![F\Phi^{2}D^{4}]\!]+{\cal L}_{II}[\![F\Phi^{2}D^{4}]\!]+{\cal L}^{CPC}[\![F^{2}\Phi^{4}]\!]+{\cal L}^{CPV}[\![F^{2}\Phi^{4}]\!]\] \[+{\cal L}_{I}[\![F^{2}\Phi^{2}D^{2}]\!]+{\cal L}_{II}[\![F^{2}\Phi^{2}D^{2}]\!]+{\cal L}[\![F^{2}D^{4}]\!]+{\cal L}^{CPV}[\![F^{3}\Phi^{2}]\!]\] \[+{\cal L}[\![F^{3}D^{2}]\!]+{\cal L}[\![F^{4}]\!]\bigg{]}. \tag{3.33}\]
Here we have followed the notation for \(CPC\), \(CPV\), \(I\), and \(II\) described in the previous section. The dimension eight operators of the different classes are given below.
\[{\cal L}[\![\Phi^{8}]\!]= {\rm tr}^{i}\bigg{[}\frac{1}{M^{4}}\bigg{\{}-\frac{2754}{35}\,(SR^ {3})^{2}-\frac{2048}{105}\,SR^{2}SR^{4}-\frac{6284}{105}\,SRSR^{5}-\frac{339}{ 7}\,(SR)^{4}\] \[-\frac{10240}{21}\,S^{2}R^{6}-\frac{3072}{35}\,S^{2}R^{2}SRSR+ \frac{6614}{105}\,S^{2}R^{2}S^{2}R^{2}-\frac{8564}{105}\,S^{2}RSR^{2}SR\] \[-\frac{3072}{35}\,S^{2}SRSR^{2}+\frac{3100}{21}\,S^{2}RS^{2}R^{3 }+\frac{512}{21}\,S^{3}R^{3}SR-\frac{5204}{105}\,S^{3}R^{2}SR^{2}\] \[+\frac{512}{21}\,S^{3}RSR^{3}-\frac{2}{3}\,S^{3}RS^{3}R+\frac{203 96}{105}\,S^{4}R^{4}-\frac{512}{21}\,S^{4}RS^{2}R+\frac{5204}{105}\,S^{5}RSR\] \[-\frac{512}{21}\,S^{6}R^{2}+\frac{24611}{210}\,R^{8}+\frac{1}{70} \,S^{8}\bigg{\}}\bigg{]}. \tag{3.34}\]
\[{\cal L}[\![\Phi^{6}D^{2}]\!]= {\rm tr}^{i}\bigg{[}\frac{1}{M^{4}}\bigg{\{}-\frac{121}{21}\,S^{3}R_{\mu}S_{\mu}R-\frac{703}{105}\,S^{3}S_{\mu}R_{\mu}R+\frac{2}{15}\,S^{2}S_{\mu}RS_{\mu}R-\frac{6}{5}\,S^{2}S_{\mu}SR_{\mu}R\]
\[-\frac{206}{105}\,S^{2}R_{\mu}RR_{\mu}R+\frac{134}{35}\,S^{2}R_{\mu}SS_ {\mu}R+\frac{121}{21}\,S^{2}R_{\mu}R_{\mu}R^{2}-\frac{107}{21}\,S^{2}R_{\mu}S_{ \mu}SR\] \[-\frac{661}{105}\,S^{2}S_{\mu}R_{\mu}SR-\frac{703}{105}\,S^{2}S_{ \mu}S_{\mu}R^{2}-\frac{4}{3}\,SR_{\mu}R^{2}S_{\mu}R+\frac{2}{15}\,SR_{\mu}RSR_{ \mu}R\] \[-\frac{2}{15}\,SR_{\mu}RS_{\mu}R^{2}-\frac{4}{3}\,SR_{\mu}SRR_{\mu }R+\frac{4}{5}\,SR_{\mu}S^{2}S_{\mu}R-\frac{82}{105}\,SR_{\mu}SR_{\mu}R^{2}\] \[-\frac{10}{7}\,SR_{\mu}SS_{\mu}SR+\frac{107}{21}\,SR_{\mu}R_{\mu} RSR+\frac{159}{35}\,SR_{\mu}R_{\mu}SR^{2}+\frac{221}{35}\,SR_{\mu}S_{\mu}R^{3}\] \[-\frac{661}{105}\,SR_{\mu}S_{\mu}S^{2}R+\frac{4}{15}\,SS_{\mu}R^{ 2}R_{\mu}R+\frac{2}{3}\,SS_{\mu}RSS_{\mu}R-\frac{206}{105}\,SS_{\mu}RR_{\mu}R^ {2}\] \[-\frac{4}{15}\,SS_{\mu}SRS_{\mu}R-\frac{4}{5}\,SS_{\mu}S^{2}R_{ \mu}R-\frac{10}{7}\,SS_{\mu}SR_{\mu}SR-\frac{6}{5}\,SS_{\mu}RS_{\mu}R^{2}\] \[+\frac{661}{105}\,SS_{\mu}R_{\mu}R^{3}-\frac{107}{21}\,SS_{\mu}R_ {\mu}S^{2}R-\frac{661}{105}\,SS_{\mu}S_{\mu}RSR-\frac{703}{105}\,SS_{\mu}S_{ \mu}SR^{2}\] \[+\frac{326}{105}\,R_{\mu}R^{2}R_{\mu}R^{2}+\frac{1222}{105}\,R_{ \mu}RR_{\mu}R^{3}+\frac{122}{105}\,R_{\mu}RS_{\mu}RSR-\frac{2}{15}\,R_{\mu}SR^ {2}S_{\mu}R\] \[+\frac{314}{105}\,R_{\mu}SRSR_{\mu}R+\frac{2}{15}\,R_{\mu}SRR_{ \mu}SR-\frac{4}{3}\,R_{\mu}SRS_{\mu}R^{2}-\frac{206}{105}\,R_{\mu}S^{2}RR_{ \mu}R\] \[+\frac{598}{105}\,R_{\mu}SR_{\mu}RSR-\frac{82}{105}\,R_{\mu}SR_{ \mu}SR^{2}+\frac{2}{5}\,R_{\mu}SR_{\mu}S^{3}-\frac{142}{35}\,R_{\mu}SS_{\mu}R ^{3}\] \[-\frac{6}{5}\,R_{\mu}SS_{\mu}S^{2}R+\frac{361}{105}\,R_{\mu}R_{ \mu}R^{4}+\frac{661}{105}\,R_{\mu}R_{\mu}RS^{2}R+\frac{107}{21}\,R_{\mu}R_{ \mu}SRSR\] \[+\frac{121}{21}\,R_{\mu}R_{\mu}S^{2}R^{2}-\frac{703}{105}\,R_{\mu }R_{\mu}S^{4}+\frac{27}{7}\,R_{\mu}S_{\mu}R^{2}SR+\frac{107}{21}\,R_{\mu}S_{ \mu}RSR^{2}\] \[+\frac{661}{105}\,R_{\mu}SR_{\mu}SR^{3}-\frac{703}{105}\,R_{\mu}S _{\mu}S^{3}R-\frac{242}{105}\,S_{\mu}R^{2}S_{\mu}R^{2}+\frac{122}{105}\,S_{ \mu}RR_{\mu}RSR\] \[-\frac{2}{3}\,S_{\mu}RS_{\mu}R^{3}-\frac{206}{105}\,S_{\mu}SR^{2} R_{\mu}R+\frac{2}{3}\,S_{\mu}SRSS_{\mu}R+\frac{4}{15}\,S_{\mu}SRR_{\mu}R^{2}\] \[+\frac{2}{3}\,S_{\mu}SRS_{\mu}SR+\frac{2}{15}\,S_{\mu}S^{2}RS_{ \mu}R-\frac{2}{5}\,S_{\mu}S^{3}R_{\mu}R+\frac{4}{5}\,S_{\mu}S^{2}R_{\mu}SR\] \[-\frac{4}{5}\,S_{\mu}S^{2}S_{\mu}R^{2}+\frac{18}{35}\,S_{\mu}S^{2 }S_{\mu}S^{2}-\frac{142}{35}\,S_{\mu}SR_{\mu}R^{3}+\frac{134}{35}\,S_{\mu}SR_{ \mu}S^{2}R\] \[+\frac{2}{5}\,S_{\mu}SS_{\mu}RSR-\frac{6}{5}\,S_{\mu}SS_{\mu}SR^{2 }+\frac{18}{35}\,S_{\mu}SS_{\mu}S^{3}+\frac{107}{21}\,S_{\mu}R_{\mu}R^{2}SR\] \[+\frac{27}{7}\,S_{\mu}R_{\mu}RSR^{2}+\frac{221}{35}\,S_{\mu}R_{ \mu}SR^{3}-\frac{121}{21}\,S_{\mu}R_{\mu}S^{3}R+\frac{661}{105}\,S_{\mu}S_{ \mu}R^{4}\] \[-\frac{107}{21}\,S_{\mu}S_{\mu}RS^{2}R-\frac{661}{105}\,S_{\mu}S_{ \mu}SRSR-\frac{703}{105}\,S_{\mu}S_{\mu}S^{2}R^{2}+\frac{91}{15}\,S_{\mu}S_{ \mu}S^{4}\] \[-\frac{2}{5}\,R_{\mu}S^{3}S_{\mu}R-\frac{4}{5}\,R_{\mu}S^{2}R_{ \mu}R^{2}+\frac{6}{5}\,R_{\mu}S^{2}R_{\mu}S^{2}-\frac{4}{5}\,R_{\mu}S^{2}S_{ \mu}SR\bigg{\}}\bigg{]}. \tag{3.35}\]
\[\mathcal{L}_{I}[\![\Phi^{4}D^{4}]\!]= \mathrm{tr}^{i}\bigg{[}\frac{1}{M^{4}}\bigg{\{}\frac{29}{63}\,SR_{ \mu\mu}S_{\nu\nu}R-\frac{271}{315}\,SR_{\mu\nu}S_{\mu\nu}R+\frac{22}{45}\,SS_{ \mu\mu}R_{\nu\nu}R-\frac{16}{45}\,SS_{\mu\nu}R_{\mu\nu}R\] \[+\frac{83}{315}\,R_{\mu}R_{\mu}R_{\nu}R_{\nu}+\frac{89}{315}\,R_{ \mu}R_{\mu}R_{\nu\nu}R+\frac{32}{315}\,R_{\mu}R_{\mu}S_{\nu\nu}S-\frac{313}{630} \,R_{\mu}R_{\nu}R_{\mu}R_{\nu}\] \[-\frac{22}{63}\,R_{\mu}R_{\nu}R_{\mu\nu}R-\frac{59}{63}\,R_{\mu} R_{\nu}S_{\mu\nu}S+\frac{137}{315}\,R_{\mu}S_{\mu}R_{\nu\nu}S+\frac{22}{315}\,R_{\mu}S_{ \mu}S_{\nu\nu}R\] \[-\frac{111}{70}\,R_{\mu}S_{\nu}S_{\mu}R_{\nu}-\frac{724}{315}\,R_{ \mu}S_{\nu}R_{\mu\nu}S-\frac{319}{210}\,R_{\mu}S_{\nu}S_{\mu\nu}R+\frac{52}{105} \,R_{\mu}R_{\mu\nu}R_{\nu}R\]
\[\mathcal{L}_{II}[\Phi^{4}D^{4}]= {\rm tr}^{i}\bigg{[}\frac{1}{M^{4}}\bigg{\{}\frac{7}{45}\varepsilon_{ \alpha\beta\mu\nu}\,R_{\mu}R_{\nu\alpha}S_{\alpha}R_{\beta}+\frac{4}{45} \varepsilon_{\alpha\beta\mu\nu}\,R_{\mu}R_{\nu}S_{\alpha\beta}R-\frac{1}{5} \varepsilon_{\alpha\beta\mu\nu}\,R_{\mu}S_{\nu}R_{\alpha\beta}R\] \[-\frac{1}{5}\varepsilon_{\alpha\beta\mu\nu}\,R_{\mu}R_{\nu\alpha} S_{\beta}R+\frac{8}{45}\varepsilon_{\alpha\beta\mu\nu}\,R_{\mu}S_{\nu\alpha}R ^{2}-\frac{7}{45}\varepsilon_{\alpha\beta\mu\nu}\,S_{\mu}R_{\nu}R_{\alpha}R_{\beta}\] \[-\frac{1}{3}\varepsilon_{\alpha\beta\mu\nu}\,S_{\mu}S_{\nu}S_{ \alpha\beta}R+\frac{1}{5}\varepsilon_{\alpha\beta\mu\nu}\,S_{\mu}R_{\nu\alpha} R_{\beta}R+\frac{1}{5}\varepsilon_{\alpha\beta\mu\nu}\,R_{\mu\nu}S_{\alpha}R_{ \beta}R\] \[-\frac{4}{45}\varepsilon_{\alpha\beta\mu\nu}\,S_{\mu\nu}R_{\alpha }R_{\beta}R+\frac{1}{3}\varepsilon_{\alpha\beta\mu\nu}\,S_{\mu\nu}S_{\alpha}S _{\beta}R+\frac{8}{45}\varepsilon_{\alpha\beta\mu\nu}\,S_{\mu\nu\alpha}R_{ \beta}R^{2}\bigg{\}}\bigg{]}. \tag{3.37}\]
\[\mathcal{L}[\![\Phi^{2}D^{6}]\!]= \,{\rm tr}^{i}\Bigg{[}\frac{1}{M^{4}}\bigg{\{}\frac{1}{35}\,R_{\mu\mu\nu}R_{\alpha\alpha\nu}+\frac{1}{210}\,R_{\mu\nu\alpha}R_{\mu\nu\alpha}-\frac{2}{105}\,S_{\mu\mu\nu}S_{\alpha\alpha\nu}+\frac{1}{30}\,S_{\mu\nu\nu}S_{\mu\alpha\alpha}\bigg{\}}\Bigg{]}. \tag{3.38}\]
\[\mathcal{L}_{I}[\![F\Phi^{4}D^{2}]\!]= \,{\rm tr}^{i}\Bigg{[}\frac{1}{M^{4}}\bigg{\{}\frac{59}{63}\,S^{2 }F_{\mu\nu}R_{\mu\nu}R-\frac{473}{315}\,SR_{\mu}S_{\nu}F_{\mu\nu}R+\frac{4}{7} \,R_{\mu\nu}RF_{\mu\nu}R^{2}+\frac{1}{5}\,R_{\mu}F_{\mu\nu}S_{\nu}SR\] \[+\frac{2}{15}\,SS_{\mu}F_{\mu\nu}R_{\nu}R-\frac{24}{35}\,SR_{\mu \nu}SF_{\mu\nu}R-\frac{48}{35}\,SS_{\mu\nu}RF_{\mu\nu}R-\frac{16}{315}\,SF_{ \mu\nu}RS_{\mu\nu}R\] \[+\frac{137}{315}\,SF_{\mu\nu}SR_{\mu\nu}R-\frac{97}{315}\,SF_{\mu \nu}R_{\mu}S_{\nu}R+\frac{23}{21}\,SF_{\mu\nu}S_{\mu}R_{\nu}R+\frac{59}{63}\,SF _{\mu\nu}R_{\mu\nu}SR\] \[+\frac{68}{105}\,SF_{\mu\nu}S_{\mu\nu}R^{2}+\frac{5}{9}\,R_{\mu \nu}SF_{\mu\nu}R+\frac{4}{315}\,R_{\mu}SS_{\nu}F_{\nu\mu}R+\frac{647}{630}\,R _{\mu}SF_{\mu\nu}S_{\nu}R\] \[+\frac{317}{315}\,R_{\mu}R_{\nu}RF_{\mu\nu}R-\frac{142}{63}\,R_{ \mu}R_{\nu}SF_{\mu\nu}S+\frac{68}{105}\,R_{\mu}R_{\nu}F_{\mu\nu}R^{2}-\frac{1 36}{105}\,R_{\mu}R_{\nu}F_{\mu\nu}S^{2}\] \[-\frac{73}{90}\,R_{\mu}S_{\nu}SF_{\mu\nu}R-\frac{47}{210}\,R_{\mu }S_{\nu}F_{\mu\nu}SR-\frac{7}{15}\,R_{\mu}F_{\mu\nu}RR_{\nu}R+\frac{338}{315} \,R_{\mu}F_{\mu\nu}SR_{\nu}S\] \[+\frac{37}{126}\,R_{\mu}F_{\mu\nu}SS_{\nu}R-\frac{241}{315}\,R_{ \mu}F_{\mu\nu}R_{\nu}R^{2}+\frac{23}{21}\,R_{\mu}F_{\mu\nu}R_{\nu}S^{2}-\frac {362}{315}\,SS_{\mu}R_{\nu}F_{\mu\nu}R\] \[-\frac{13}{105}\,S_{\mu}SR_{\nu}F_{\mu\nu}R+\frac{8}{105}\,S_{\mu }SR_{\nu}F_{\mu\nu}R+\frac{127}{105}\,S_{\mu}SF_{\mu\nu}R_{\nu}R-\frac{431}{315 }\,S_{\mu}R_{\nu}SF_{\mu\nu}R\] \[-\frac{37}{21}\,S_{\mu}R_{\nu}F_{\mu\nu}SR-\frac{317}{315}\,S_{ \mu}S_{\nu}RF_{\mu\nu}R-\frac{19}{126}\,S_{\mu}S_{\nu}SF_{\mu\nu}S-\frac{64}{1 05}\,S_{\mu}S_{\nu}F_{\mu\nu}R^{2}\] \[+\frac{277}{630}\,S_{\mu}S_{\nu}F_{\mu\nu}S^{2}-\frac{44}{105}\,S _{\mu}F_{\mu\nu}RS_{\nu}R+\frac{1}{9}\,S_{\mu}F_{\mu\nu}SR_{\nu}R+\frac{229}{ 210}\,S_{\mu}F_{\mu\nu}SS_{\nu}S\] \[+\frac{376}{315}\,S_{\mu}F_{\mu\nu}R_{\nu}SR-\frac{11}{105}\,S_{ \mu}F_{\mu\nu}S_{\nu}R^{2}-\frac{5}{21}\,S_{\mu}F_{\mu\nu}S_{\nu}S^{2}+\frac{3 97}{315}\,SR_{\mu}F_{\mu\nu}S_{\nu}R\] \[-\frac{32}{35}\,R_{\mu\nu}S^{2}F_{\mu\nu}R-\frac{316}{315}\,S_{ \mu\nu}SRF_{\mu\nu}R+\frac{24}{35}\,S_{\mu\nu}SF_{\mu\nu}S^{2}+\frac{88}{315} \,F_{\mu\nu}RR_{\mu\nu}R^{2}\] \[-\frac{20}{21}\,F_{\mu\nu}SR_{\mu\nu}R-\frac{48}{35}\,F_{\mu\nu}S ^{2}R_{\mu\nu}R-\frac{11}{21}\,F_{\mu\nu}SR_{\mu}S_{\nu}R-\frac{1}{5}\,F_{\mu \nu}SS_{\mu}R_{\nu}R\] \[-\frac{79}{315}\,F_{\mu\nu}SR_{\mu\nu}SR+\frac{76}{105}\,F_{\mu\nu} SS_{\mu\nu}R^{2}+\frac{8}{35}\,F_{\mu\nu}SS_{\mu\nu}S^{2}+\frac{83}{105}\,F_{\mu\nu} R_{\mu}RR_{\nu}R\] \[-\frac{103}{45}\,F_{\mu\nu}R_{\mu}SR_{\nu}S-\frac{157}{105}\,F_{\mu \nu}R_{\mu}SS_{\nu}R+\frac{314}{315}\,F_{\mu\nu}R_{\mu}R_{\nu}R^{2}-\frac{1 13}{315}\,F_{\mu\nu}R_{\mu}R_{\nu}S^{2}\] \[+\frac{5}{7}\,F_{\mu\nu}R_{\mu}S_{\nu}SR+\frac{46}{105}\,F_{\mu \nu}R_{\nu}RR_{\mu}R-\frac{7}{45}\,F_{\mu\nu}R_{\nu}SR_{\mu}S-\frac{4}{315}\,F_ {\mu\nu}R_{\nu}SS_{\mu}R\] \[-\frac{37}{35}\,F_{\mu\nu}S_{\mu}RS_{\nu}R-\frac{257}{315}\,F_{\mu \nu}S_{\mu}SR_{\nu}R+\frac{19}{105}\,F_{\mu\nu}S_{\mu}SS_{\nu}S+\frac{272}{315} \,F_{\mu\nu}S_{\mu}R_{\nu}SR\] \[+\frac{8}{5}\,F_{\mu\nu}S_{\mu}S_{\nu}R^{2}-\frac{286}{315}\,F_{\mu \nu}S_{\mu}S_{\nu}S^{2}+\frac{11}{105}\,F_{\mu\nu}S_{\nu}RS_{\mu}R-\frac{8}{10 5}\,F_{\mu\nu}S_{\nu}SR_{\mu}R\] \[+\frac{1}{10}\,F_{\mu\nu}S_{\nu}SS_{\mu}S-\frac{268}{315}\,F_{\mu \nu}R_{\mu\nu}R^{3}+\frac{32}{35}\,F_{\mu\nu}R_{\mu\nu}S^{2}R+\frac{316}{315} \,F_{\mu\nu}S_{\mu\nu}RSR\] \[+\frac{316}{315}\,F_{\mu\nu}S_{\mu\nu}SR^{2}-\frac{32}{35}\,F_{\mu 
\nu}S_{\mu\nu}S^{3}\bigg{\}}\Bigg{]}. \tag{3.39}\]
\[\mathcal{L}_{II}[\![F\Phi^{4}D^{2}]\!]= \,{\rm tr}^{i}\Bigg{[}\frac{1}{M^{4}}\bigg{\{}SR_{\mu}R_{\nu}\tilde{F }_{\mu\nu}R+SR_{\mu}F_{\beta\mu}R_{\beta}R+S\tilde{F}_{\mu\nu}R_{\mu}R_{\nu}R+R _{\mu}SR_{\nu}\tilde{F}_{\mu\nu}R\]
\[\mathcal{L}_{II}[F\Phi^{2}D^{4}]= \,{\rm tr}^{i}\bigg{[}\frac{1}{M^{4}}\bigg{\{}-\frac{1}{15}\,S_{\mu} \tilde{F}_{\kappa\mu\beta\beta}R_{\kappa}+\frac{2}{15}\,S_{\mu\nu}\tilde{F}_{\mu \nu\kappa\kappa}R+\frac{1}{15}\,S_{\mu\nu\nu}\tilde{F}_{\kappa\mu}R_{\kappa}- \frac{1}{15}\,\tilde{F}_{\mu\nu\alpha\alpha}S_{\mu}R_{\nu}\] \[-\frac{2}{15}\,\tilde{F}_{\mu\nu\alpha\alpha}S_{\mu\nu}R-\frac{1 }{15}\,\tilde{F}_{\mu\nu\alpha}R_{\alpha\mu}S_{\nu}+\frac{1}{15}\,\tilde{F}_{ \mu\nu}S_{\mu\beta}R_{\beta\nu}-\frac{1}{15}\,\tilde{F}_{\mu\nu}S_{\mu\nu}R_{ \kappa\kappa}\]
\[+\frac{1}{15}\,\tilde{F}_{\mu\nu}S_{\mu\beta\beta}R_{\nu}\bigg{\}} \bigg{]}. \tag{3.42}\]
\[\mathcal{L}^{CPC}[\![F^{2}\phi^{4}]\!]= \,\mathrm{tr}^{i}\bigg{[}\frac{1}{M^{4}}\bigg{\{}-\frac{2}{21}\,S ^{2}F_{\mu\nu}RF_{\mu\nu}R+\frac{239}{630}\,S^{2}F_{\mu\nu}F_{\mu\nu}R^{2}- \frac{37}{210}\,SF_{\mu\nu}RSF_{\mu\nu}R\] \[-\frac{3}{7}\,SF_{\mu\nu}SRF_{\mu\nu}R+\frac{67}{315}\,SF_{\mu\nu} SF_{\mu\nu}R^{2}+\frac{33}{70}\,SF_{\mu\nu}F_{\mu\nu}RSR+\frac{223}{630}\,SF_{ \mu\nu}F_{\mu\nu}SR^{2}\] \[+\frac{6}{35}\,F_{\mu\nu}SRSF_{\mu\nu}R-\frac{37}{210}\,F_{\mu\nu }SRF_{\mu\nu}SR-\frac{2}{21}\,F_{\mu\nu}S^{2}RF_{\mu\nu}R+\frac{2}{63}\,F_{\mu \nu}S^{2}F_{\mu\nu}R^{2}\] \[+\frac{14}{45}\,F_{\mu\nu}SF_{\mu\nu}RSR+\frac{67}{315}\,F_{\mu \nu}SF_{\mu\nu}SR^{2}-\frac{1}{42}\,F_{\mu\nu}F_{\mu\nu}RS^{2}R+\frac{33}{70} \,F_{\mu\nu}F_{\mu\nu}SRSR\] \[+\frac{239}{630}\,F_{\mu\nu}F_{\mu\nu}S^{2}R^{2}-\frac{9}{14}\,F _{\mu\nu}F_{\mu\nu}R^{4}+\frac{1}{42}\,F_{\mu\nu}R^{2}F_{\mu\nu}R^{2}+\frac{3 46}{315}\,F_{\mu\nu}RF_{\mu\nu}R^{3}\] \[-\frac{11}{70}\,F_{\mu\nu}S^{2}F_{\mu\nu}S^{2}-\frac{4}{21}\,F_{ \mu\nu}SF_{\mu\nu}S^{3}+\frac{1}{70}\,F_{\mu\nu}F_{\mu\nu}S^{4}\bigg{\}} \bigg{]}. \tag{3.43}\]
\[\mathcal{L}^{CPV}[\![F^{2}\phi^{4}]\!]= \,\mathrm{tr}^{i}\bigg{[}\frac{1}{M^{4}}\bigg{\{}\frac{1}{30}\,S ^{3}\tilde{F}_{\mu\nu}F_{\mu\nu}R+\frac{3}{5}\,S^{2}\tilde{F}_{\mu\nu}SF_{\mu \nu}R+\frac{3}{10}\,S^{2}\tilde{F}_{\mu\nu}F_{\mu\nu}SR-\frac{4}{45}\,S\tilde {F}_{\mu\nu}R^{2}F_{\mu\nu}R\] \[+\frac{34}{45}\,S\tilde{F}_{\mu\nu}RF_{\mu\nu}R^{2}+\frac{4}{5}\, S\tilde{F}_{\mu\nu}SF_{\mu\nu}SR-\frac{83}{90}\,S\tilde{F}_{\mu\nu}F_{\mu\nu}R^{3}+ \frac{3}{10}\,S\tilde{F}_{\mu\nu}F_{\mu\nu}S^{2}R\] \[+\frac{2}{3}\,\tilde{F}_{\mu\nu}RF_{\mu\nu}RSR+\frac{34}{45}\, \tilde{F}_{\mu\nu}SR^{2}F_{\mu\nu}R-\frac{4}{45}\,\tilde{F}_{\mu\nu}SRF_{\mu \nu}R^{2}-\frac{2}{3}\,\tilde{F}_{\mu\nu}S^{3}F_{\mu\nu}R\] \[-\frac{22}{45}\,\tilde{F}_{\mu\nu}SF_{\mu\nu}R^{3}+\frac{3}{5}\, \tilde{F}_{\mu\nu}SF_{\mu\nu}S^{2}R-\frac{5}{6}\,\tilde{F}_{\mu\nu}F_{\mu\nu} R^{2}SR-\frac{5}{6}\,\tilde{F}_{\mu\nu}F_{\mu\nu}RSR^{2}\] \[-\frac{83}{90}\,\tilde{F}_{\mu\nu}F_{\mu\nu}SR^{3}+\frac{1}{30}\, \tilde{F}_{\mu\nu}F_{\mu\nu}S^{3}R\bigg{\}}\bigg{]}. \tag{3.44}\]
\[\mathcal{L}_{I}[\![F^{2}\phi^{2}D^{2}]\!]= \,\mathrm{tr}^{i}\bigg{[}\frac{1}{M^{4}}\bigg{\{}\frac{4}{15}\,F _{\mu\nu}F_{\alpha\mu}R-\frac{2}{15}\,F_{\mu\nu}F_{\mu\nu\alpha}S_{\alpha}S- \frac{1}{9}\,F_{\mu\nu}F_{\alpha\mu}R_{\nu}R_{\alpha}-\frac{1}{12}\,F_{\mu\nu }F_{\alpha\mu}S_{\nu}S_{\alpha}\] \[+\frac{17}{45}\,R_{\mu}F_{\mu\nu}F_{\alpha\nu\alpha}R+\frac{23}{ 126}\,R_{\mu}F_{\mu\nu}F_{\alpha\nu}R_{\alpha}+\frac{61}{630}\,R_{\mu}F_{ \nu\alpha}F_{\mu\alpha\nu}R-\frac{11}{126}\,R_{\mu}F_{\nu\alpha}F_{\nu\alpha \mu}R\] \[-\frac{89}{630}\,R_{\mu}F_{\nu\alpha}F_{\nu\mu\alpha}R-\frac{107}{ 630}\,R_{\mu}F_{\nu\alpha}F_{\mu\alpha}R_{\nu}+\frac{17}{70}\,R_{\mu}F_{\nu \mu}F_{\alpha\nu}R_{\alpha}-\frac{8}{105}\,S_{\mu}F_{\nu\alpha\alpha}F_{\nu \mu}S\] \[-\frac{2}{15}\,S_{\mu}F_{\nu\alpha\mu}F_{\nu\alpha}S-\frac{16}{105} \,S_{\mu}F_{\mu\nu}F_{\alpha\nu\alpha}S-\frac{179}{252}\,S_{\mu}F_{\mu\nu}F_{ \alpha\nu}S_{\alpha}+\frac{29}{180}\,S_{\mu}F_{\nu\alpha}F_{\mu\alpha}S_{\nu}\] \[+\frac{31}{180}\,S_{\mu}F_{\nu\mu}F_{\alpha\nu}S_{\alpha}-\frac{1} {420}\,R_{\mu\mu}F_{\nu\alpha}F_{\nu\alpha}R+\frac{1}{105}\,R_{\mu\nu}F_{\alpha \mu}F_{\alpha\nu}R-\frac{97}{315}\,R_{\mu\nu}F_{\alpha\nu}F_{\alpha\mu}R\] \[-\frac{4}{45}\,S_{\mu\nu}F_{\alpha\nu}F_{\alpha\mu}S+\frac{61}{180 }\,F_{\mu\nu\alpha\alpha}F_{\mu\nu}R^{2}-\frac{11}{60}\,F_{\mu\nu\alpha\alpha}F_{ \mu\nu}S^{2}-\frac{5}{63}\,F_{\mu\nu\mu\alpha}F_{\alpha\nu}R^{2}\] \[+\frac{4}{45}\,F_{\mu\nu\nu\alpha}F_{\alpha\mu}R^{2}-\frac{2}{15}\, F_{\mu\nu\alpha}RF_{\alpha\mu\nu}R+\frac{2}{15}\,F_{\mu\nu\alpha}RF_{\alpha\nu}R-\frac{13}{105} \,F_{\mu\nu\alpha}RF_{\mu\nu\alpha}R\] \[+\frac{52}{315}\,F_{\mu\nu\alpha}R_{\alpha}F_{\mu\nu}R+\frac{1}{9}\, F_{\mu\nu\alpha}R_{\mu}F_{\alpha\nu}R-\frac{43}{315}\,F_{\mu\nu\alpha}R_{\nu}F_{ \alpha\mu}R-\frac{2}{15}\,F_{\mu\nu\alpha}S_{\alpha}F_{\mu\nu}S\] \[+\frac{2}{15}\,F_{\mu\nu\alpha}F_{\alpha\mu\nu}R^{2}-\frac{2}{15}\, F_{\mu\nu\alpha}F_{\alpha\nu\mu}R^{2}+\frac{9}{35}\,F_{\mu\nu\alpha}F_{\mu\nu \alpha}R^{2}-\frac{2}{15}\,F_{\mu\nu\alpha}F_{\mu\nu\alpha}S^{2}\] \[+\frac{1}{45}\,F_{\mu\nu\alpha}F_{\alpha\mu}R_{\nu}R-\frac{1}{45}\, F_{\mu\nu\alpha}F_{\alpha\nu}R_{\mu}R+\frac{6}{35}\,F_{\mu\nu\alpha}F_{\mu\nu}R_{ \alpha}R+\frac{32}{105}\,F_{\mu\nu\mu}RF_{\alpha\alpha}R\]
\[-\frac{4}{35}\,F_{\mu\nu\mu}SF_{\alpha\nu\alpha}S+\frac{64}{315}\,F_{ \mu\nu\mu}R_{\alpha}F_{\alpha\nu}R-\frac{8}{63}\,F_{\mu\nu\mu}S_{\alpha}F_{ \alpha\nu}S+\frac{3}{7}\,F_{\mu\nu\mu}F_{\alpha\nu\alpha}R^{2}\] \[-\frac{3}{35}\,F_{\mu\nu\mu}F_{\alpha\nu\alpha}S^{2}+\frac{121}{3 15}\,F_{\mu\nu\mu}F_{\alpha\nu}R_{\alpha}R-\frac{16}{105}\,F_{\mu\nu\mu}F_{ \alpha\nu}S_{\alpha}S-\frac{2}{15}\,F_{\mu\nu\nu}RF_{\alpha\mu\alpha}R\] \[-\frac{1}{15}\,F_{\mu\nu\nu}R_{\alpha}F_{\alpha\mu}R-\frac{2}{15} \,F_{\mu\nu\nu}F_{\alpha\mu\alpha}R^{2}-\frac{1}{15}\,F_{\mu\nu\nu}F_{\alpha \mu}R_{\alpha}R-\frac{2}{105}\,F_{\mu\nu}RF_{\alpha\nu\alpha\mu}R\] \[+\frac{4}{15}\,F_{\mu\nu}RF_{\mu\nu\alpha\alpha}R-\frac{4}{15}\,F _{\mu\nu}SF_{\mu\nu\alpha\alpha}S-\frac{7}{45}\,F_{\mu\nu}R_{\alpha}F_{\alpha \mu\nu}R+\frac{1}{15}\,F_{\mu\nu}R_{\alpha}F_{\alpha\nu\mu}R\] \[+\frac{44}{315}\,F_{\mu\nu}R_{\alpha}F_{\mu\nu\alpha}R+\frac{23}{ 1260}\,F_{\mu\nu}R_{\alpha}F_{\alpha\nu}R_{\mu}+\frac{31}{210}\,F_{\mu\nu}R_{ \alpha}F_{\mu\nu}R_{\alpha}+\frac{10}{63}\,F_{\mu\nu}R_{\mu}F_{\alpha\nu \alpha}R\] \[+\frac{25}{252}\,F_{\mu\nu}R_{\mu}F_{\alpha\nu}R_{\alpha}-\frac{ 5}{21}\,F_{\mu\nu}R_{\nu}F_{\alpha\mu\alpha}R-\frac{2}{15}\,F_{\mu\nu}S_{ \alpha}F_{\mu\nu\alpha}S+\frac{9}{70}\,F_{\mu\nu}S_{\alpha}F_{\alpha\nu}S_{\mu}\] \[-\frac{44}{315}\,F_{\mu\nu}S_{\alpha}F_{\mu\nu}S_{\alpha}-\frac{ 1}{6}\,F_{\mu\nu}S_{\alpha}F_{\nu\alpha}S_{\mu}-\frac{8}{63}\,F_{\mu\nu}S_{ \mu}F_{\alpha\nu\alpha}S+\frac{1}{6}\,F_{\mu\nu}S_{\mu}F_{\alpha\nu}S_{\alpha}\] \[-\frac{9}{70}\,F_{\mu\nu}S_{\nu}F_{\alpha\mu}S_{\alpha}+\frac{199 }{630}\,F_{\mu\nu}R_{\alpha\alpha}F_{\mu\nu}R+\frac{8}{315}\,F_{\mu\nu}R_{ \alpha\mu}F_{\alpha\nu}R+\frac{37}{630}\,F_{\mu\nu}R_{\mu\alpha}F_{\alpha\nu}R\] \[+\frac{1}{30}\,F_{\mu\nu}R_{\nu\alpha}F_{\alpha\mu}R-\frac{4}{15} \,F_{\mu\nu}S_{\alpha\alpha}F_{\mu\nu}S+\frac{8}{45}\,F_{\mu\nu}F_{\alpha\mu \nu\alpha}R^{2}+\frac{1}{105}\,F_{\mu\nu}F_{\alpha\nu\alpha}R^{2}\] \[+\frac{61}{180}\,F_{\mu\nu}F_{\mu\nu\alpha\alpha}R^{2}-\frac{11}{6 0}\,F_{\mu\nu}F_{\mu\nu\alpha\alpha}S^{2}+\frac{32}{315}\,F_{\mu\nu}F_{\alpha \mu\alpha}R_{\nu}R-\frac{1}{10}\,F_{\mu\nu}F_{\alpha\mu\nu}R_{\alpha}R\] \[+\frac{8}{105}\,F_{\mu\nu}F_{\alpha\nu}R_{\mu}R-\frac{8}{105}\,F_ {\mu\nu}F_{\alpha\nu\alpha}S_{\mu}S+\frac{1}{10}\,F_{\mu\nu}F_{\alpha\nu}R_{ \alpha}R+\frac{7}{30}\,F_{\mu\nu}F_{\mu\nu\alpha}R_{\alpha}R\] \[-\frac{1}{15}\,F_{\mu\nu}F_{\alpha\mu}R_{\nu\alpha}R+\frac{4}{45} \,F_{\mu\nu}F_{\alpha\mu}S_{\alpha\nu}S+\frac{23}{210}\,F_{\mu\nu}F_{\alpha \nu}R_{\mu}R_{\alpha}+\frac{1}{12}\,F_{\mu\nu}F_{\alpha\nu}S_{\mu}S_{\alpha}\] \[-\frac{8}{315}\,F_{\mu\nu}F_{\alpha\nu}R_{\alpha\mu}R-\frac{32}{31 5}\,F_{\mu\nu}F_{\alpha\nu}R_{\mu\alpha}R-\frac{8}{45}\,F_{\mu\nu}F_{\mu \alpha}R_{\nu}R_{\alpha}+\frac{347}{1260}\,F_{\mu\nu}F_{\mu\alpha}S_{\nu}S_{\alpha}\] \[+\frac{4}{63}\,F_{\mu\nu}F_{\mu\nu}R_{\alpha}R_{\alpha}-\frac{1 4}\,F_{\mu\nu}F_{\mu\nu}S_{\alpha}S_{\alpha}+\frac{71}{252}\,F_{\mu\nu}F_{ \mu\nu}R_{\alpha\alpha}R-\frac{67}{1260}\,S_{\mu\mu}F_{\nu\alpha}F_{\nu\alpha}S\] \[-\frac{121}{1260}\,F_{\mu\nu}R_{\alpha}F_{\nu\alpha}R_{\mu}-\frac{ 83}{1260}\,F_{\mu\nu}R_{\nu}F_{\alpha\mu}R_{\alpha}-\frac{67}{1260}\,F_{\mu \nu}F_{\mu\nu}S_{\alpha\alpha}S\biggr{\}}\biggr{]}. \tag{3.45}\]
\[\mathcal{L}_{II}[\![F^{2}\phi^{2}D^{2}]\!] = \mathrm{tr}^{i}\biggl{[}\frac{1}{M^{4}}\biggl{\{}\frac{11}{60}S \tilde{F}_{\mu\nu\alpha\alpha}F_{\mu\nu}R+\frac{2}{15}S\tilde{F}_{\mu\nu \alpha}F_{\mu\nu\alpha}R+\frac{11}{60}S\tilde{F}_{\mu\nu}F_{\mu\nu\kappa \kappa}R\] \[+\frac{2}{15}R_{\mu}\tilde{F}_{\alpha\nu\mu}F_{\alpha\nu}S+\frac{ 1}{6}S_{\beta}\tilde{F}_{\kappa\mu}F_{\beta\mu}R_{\kappa}+\frac{1}{60}S_{\mu}F_{ \alpha\nu\alpha}\tilde{F}_{\mu\nu}R-\frac{1}{15}\tilde{F}_{\beta\mu\kappa}F_{ \beta\kappa}R\] \[+\frac{2}{15}S_{\mu}F_{\alpha\nu\mu}\tilde{F}_{\alpha\nu}R+\frac{1 2}{12}F_{\mu\nu}\tilde{F}_{\mu\nu\alpha}R+\frac{1}{3}S_{\mu}\tilde{F}_{\beta \mu}F_{\beta\kappa\kappa}R+\frac{1}{6}S_{\mu}F_{\alpha\nu}\tilde{F}_{\alpha \mu\nu}R\] \[+\frac{3}{20}S_{\mu}F_{\alpha\nu}\tilde{F}_{\mu\nu}R_{\alpha}+\frac{ 4}{15}S_{\mu}\tilde{F}_{\beta\mu}F_{\beta\kappa}R_{\kappa}-\frac{7}{60}S_{ \mu}\tilde{F}_{\alpha\nu}F_{\alpha\nu}R_{\mu}+\frac{1}{15}S_{\mu}F_{\alpha\nu} \tilde{F}_{\alpha\mu}R_{\nu}\] \[+\frac{1}{12}S_{\mu}F_{\mu\nu}\tilde{F}_{\kappa\nu}R_{\kappa}+\frac{ 11}{60}R_{\mu\mu}\tilde{F}_{\alpha\nu}F_{\alpha\nu}S-\frac{1}{6}S_{\alpha\nu}F_{ \alpha\mu}\tilde{F}_{\mu\nu}R-\frac{1}{3}S_{\kappa\nu}\tilde{F}_{\mu\nu}F_{ \kappa\mu}R\] \[-\frac{1}{6}S_{\mu\alpha}F_{\alpha\nu}\tilde{F}_{\mu\nu}R-\frac{1}{3 }S_{\mu\kappa}\tilde{F}_{\mu\nu}F_{\kappa\nu}R+\frac{1}{60}S_{\mu\mu}\tilde{F}_{ \alpha\nu}F_{\alpha\nu}R+\frac{4}{15}\tilde{F}_{\mu\nu\alpha}SF_{\mu\nu}R\] \[+\frac{11}{60}\tilde{F}_{\mu\nu\alpha\alpha}F_{\mu\nu}SR+\frac{2}{15} \tilde{F}_{\mu\nu\alpha}R_{\alpha}F_{\mu\nu}S+\frac{2}{15}\tilde{F}_{\mu\nu \alpha}S_{\alpha}F_{\mu\nu}R-\frac{7}{12}\tilde{F}_{\mu\nu\alpha}S_{\mu}F_{ \alpha\nu}R\] \[+\frac{2}{15}\tilde{F}_{\mu\nu\alpha}F_{\mu\nu\alpha}SR+\
\[\mathcal{L}[\![F^{4}]\!]= \,{\rm tr}^{i}\bigg{[}\frac{1}{M^{4}}\bigg{\{}\frac{19}{630}\,(F_{\mu\nu})^{2}(F_{\alpha\beta})^{2}+\frac{8}{315}\,(F_{\mu\nu}F_{\alpha\beta})^{2}+\frac{53}{315}\,F_{\mu\nu}F_{\nu\alpha}F_{\alpha\beta}F_{\beta\mu}\] \[-\frac{34}{105}\,(F_{\mu\alpha}F_{\alpha\mu})^{2}\bigg{\}}\bigg{]}. \tag{3.51}\]
### Dimension One-Four operators in FUOLEA
Using the results obtained with dimensional regularisation and the \(\overline{MS}\) renormalisation scheme, the finite part of the renormalisable Lagrangian, i.e., the operators of dimension \(\leq 4\), is calculated for the fermionic case. Here we have neglected contributions due to the multiplicative anomaly discussed in Sec. 2.
**Dimension One operators in FUOLEA**
The operators from the UOLEA given in Appendix C that contribute to the dimension one operators are,
\[\mathcal{L}_{\text{eff}}=\frac{c_{s}}{(4\pi)^{2}}\text{tr}\bigg{\{}-M^{2}\ \left(\ln\left[\frac{M^{2}}{\mu^{2}}\right]-1\right)\,U\bigg{\}}. \tag{3.52}\]
Expanding the generalised operators and retaining only the terms \(\mathcal{O}(M^{3})\), we get,
\[\mathcal{L}_{\text{eff}}^{\Psi(D1)}=\frac{c_{s}}{(4\pi)^{2}}\text{tr}\bigg{\{} -2M^{3}\left(\ln\left[\frac{M^{2}}{\mu^{2}}\right]-1\right)\,\Sigma\bigg{\}}. \tag{3.53}\]
Performing the trace over the spinor indices, the dimension one operator in terms of the functionals in the fermionic Lagrangian is,
\[\mathcal{L}_{\text{eff}}^{\Psi(D1)}=\frac{c_{s}}{(4\pi)^{2}}\text{tr}^{\text {i}}\bigg{\{}8M^{3}\left(1-\ln\left[\frac{M^{2}}{\mu^{2}}\right]\right)\,S \bigg{\}}. \tag{3.54}\]
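As a quick cross-check of the step from Eq. (3.53) to Eq. (3.54), note that (a sketch, assuming the bosonized structure \(\Sigma=S+i\gamma^{5}R\) together with the standard spin traces \(\text{tr}\,\mathbb{1}_{4}=4\) and \(\text{tr}\,\gamma^{5}=0\)) the spinor trace of \(\Sigma\) reduces to \(4S\), so that
\[\text{tr}\bigg{\{}-2M^{3}\left(\ln\left[\frac{M^{2}}{\mu^{2}}\right]-1\right)\,\Sigma\bigg{\}}=\text{tr}^{\text{i}}\bigg{\{}8M^{3}\left(1-\ln\left[\frac{M^{2}}{\mu^{2}}\right]\right)\,S\bigg{\}},\]
the pseudo-scalar piece \(i\gamma^{5}R\) being traceless over the spinor indices.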
**Dimension Two operators in FUOLEA**
The dimension two operators get contribution from the following operators of the UOLEA given in Appendix C.
\[\mathcal{L}_{\text{eff}}=\frac{c_{s}}{(4\pi)^{2}}\text{tr}\bigg{\{}-M^{2}\, \left(\ln\left[\frac{M^{2}}{\mu^{2}}\right]-1\right)\,U-\frac{1}{2}\ln\left[ \frac{M^{2}}{\mu^{2}}\right]\,U^{2}\bigg{\}}. \tag{3.55}\]
Expanding the generalised operators and retaining only the terms of \(\mathcal{O}(M^{2})\), we get,
\[\mathcal{L}_{\text{eff}}^{\Psi(D2)}=\frac{c_{s}}{(4\pi)^{2}}\text{tr}\bigg{\{} -M^{2}\bigg{[}\left(\ln\left[\frac{M^{2}}{\mu^{2}}\right]-1\right)\,Y+2\ln \left[\frac{M^{2}}{\mu^{2}}\right]\,\Sigma^{2}\bigg{]}\bigg{\}}. \tag{3.56}\]
Performing the spin trace, the dimension two operators are given by,
\[\mathcal{L}_{\text{eff}}^{\Psi(D2)}=\frac{c_{s}}{(4\pi)^{2}}\text{tr}^{\text {i}}\bigg{\{}4M^{2}\bigg{[}\left(1-3\ln\left[\frac{M^{2}}{\mu^{2}}\right] \right)\,S^{2}+\left(3-\ln\left[\frac{M^{2}}{\mu^{2}}\right]\right)\,R^{2} \bigg{]}\bigg{\}}. \tag{3.57}\]
**Dimension Three operators in FUOLEA**
Following the power counting mentioned in Sec. 2, the operators from the UOLEA that contribute to the dimension three operators are,
\[\mathcal{L}_{\text{eff}}= \frac{c_{s}}{(4\pi)^{2}}\text{tr}\bigg{\{}-\frac{1}{2}\ln\left[ \frac{M^{2}}{\mu^{2}}\right]\,U^{2}-\frac{1}{M^{2}}\frac{U^{3}}{6}\bigg{\}}. \tag{3.58}\]
Expanding the generalised operators and collecting terms of \(\mathcal{O}(M)\), we get,
\[\mathcal{L}_{\text{eff}}= \frac{c_{s}}{(4\pi)^{2}}\text{tr}\bigg{\{}-M\ln\left[\frac{M^{2}} {\mu^{2}}\right]\,\Sigma Y-\frac{4}{3}M\Sigma^{3}\bigg{\}}. \tag{3.59}\]
Writing them in terms of the functionals in the Lagrangian, the dimension three operators, after taking the trace over spinor indices, are given by,
\[\mathcal{L}_{\rm eff}^{\Psi(D3)}= \frac{c_{s}}{(4\pi)^{2}}{\rm tr}^{i}\bigg{\{}8M\bigg{[}\bigg{(}2-\ln\left[\frac{M^{2}}{\mu^{2}}\right]\bigg{)}\,SR^{2}-\bigg{(}\ln\left[\frac{M^{2}}{\mu^{2}}\right]+\frac{2}{3}\bigg{)}\,S^{3}\bigg{]}\bigg{\}}. \tag{3.60}\]
**Dimension Four operators in FUOLEA**
The operators from the UOLEA given in Appendix C that contribute to the dimension four operators are,
\[\mathcal{L}_{\rm eff}= \frac{c_{s}}{(4\pi)^{2}}{\rm tr}\bigg{\{}\frac{1}{2}\bigg{[}-\ln \left[\frac{M^{2}}{\mu^{2}}\right]\,U^{2}-\frac{1}{6}\ln\left[\frac{M^{2}}{\mu ^{2}}\right]\,(G_{\mu\nu})^{2}\bigg{]}\] \[+\frac{1}{M^{2}}\frac{1}{6}\bigg{[}-U^{3}-\frac{1}{2}(\mathcal{P }_{\mu}U)^{2}\bigg{]}+\frac{1}{M^{4}}\frac{U^{4}}{24}\bigg{\}}. \tag{3.61}\]
Terms of \(\mathcal{O}(M^{0})\) which contribute to the dimension four operators are given by,
\[\mathcal{L}_{\rm eff}= \frac{c_{s}}{(4\pi)^{2}}{\rm tr}\bigg{\{}\frac{1}{2}\bigg{[}-\ln \left[\frac{M^{2}}{\mu^{2}}\right]\,Y^{2}-\frac{1}{6}\ln\left[\frac{M^{2}}{\mu ^{2}}\right]\,(F_{\mu\nu}+\Gamma_{\mu\nu})^{2}\bigg{]}\] \[+\frac{1}{3}\bigg{[}-6Y\Sigma^{2}-(\mathcal{P}_{\mu}\Sigma)^{2} \bigg{]}+\frac{2}{3}\Sigma^{4}\bigg{\}}. \tag{3.62}\]
Simplifying the dimension four operators by performing the spinor trace and writing them in terms of the interaction functionals gives,
\[\mathcal{L}_{\rm eff}^{\Psi(D4)}= \frac{c_{s}}{(4\pi)^{2}}{\rm tr}^{i}\bigg{\{}\frac{2}{3}\ln\left[ \frac{M^{2}}{\mu^{2}}\right](F_{\mu\nu})^{2}-8\ln\left[\frac{M^{2}}{\mu^{2}} \right]S^{2}R^{2}-\bigg{(}\frac{16}{3}+2\ln\left[\frac{M^{2}}{\mu^{2}}\right] \bigg{)}S^{4}\] \[+\bigg{(}\frac{16}{3}-2\ln\left[\frac{M^{2}}{\mu^{2}}\right] \bigg{)}R^{4}+4\ln\left[\frac{M^{2}}{\mu^{2}}\right]SRSR-\bigg{(}\frac{4}{3}+ 2\ln\left[\frac{M^{2}}{\mu^{2}}\right]\bigg{)}(P_{\mu}S)^{2}\] \[+\bigg{(}\frac{4}{3}-2\ln\left[\frac{M^{2}}{\mu^{2}}\right]\bigg{)} (\mathcal{P}_{\mu}R)^{2}\bigg{\}}. \tag{3.63}\]
The results for operators of dimensions one to six derived in Secs. 3.1, 3.2 and 3.5 have been verified against the results provided in Refs. [29; 30]7.
Footnote 7: In these references, the authors have employed the BMHV regularisation scheme [88; 89; 90; 91] to compute the finite contribution to the renormalisable IR Lagrangian, i.e., the D1 to D4 operators. Though we have used dimensional regularisation, which is equivalent to zeta function regularisation [86], our finite parts are in good agreement with the results given in Refs. [29; 30].
## 4 Flavors, CP conservation, and violation in FUOLEA
In this section, we discuss some of the salient features of the results stated in the previous section. The one-loop effective action up to dimension eight computed in our previous paper [36] holds for any strongly elliptic operator of the form (\(D^{2}+M^{2}+U\)). In the case of scalars, this is automatically satisfied. But for fermions, the Dirac operator, which is only weakly elliptic, needs to be brought into that form through bosonization. In a mathematical sense, our effective action is therefore universal. While computing it for fermions in the presence of scalar and pseudo-scalar Yukawa interactions, we find some specific characteristics of the effective action that are absent when a heavy scalar is integrated out.
In the case of integrating out a heavy scalar, the term \(U_{s}\) in the elliptic operator is computed from
\[\frac{\delta^{2}\mathcal{L}^{s}}{\delta\Phi^{\dagger}\delta\Phi}\supset U_{s}\]
which has mass dimension \(+2\) and is a functional of the light fields (IR DOFs). At this point, we must recall that the operator dimension is not directly related to the mass dimension of \(U_{s}\), as it may contain mass-dimensionful couplings. In the case of fermion integration-out, on the other hand, the term \(U_{f}=Y+2M_{f}\Sigma\), see Eq. (21), is an artefact of the bosonization. It also has mass dimension \(+2\), and it contains (pseudo-)scalar fields along with mass-dimensionless couplings in a rather convoluted manner, see Eq. (21). Thus, the dimension of the effective operators cannot be read off directly from \(U_{f}\), see Eq. (24); for instance, the \(U^{4}/(24M^{4})\) term of the UOLEA, through the \(2M_{f}S\) piece of \(U_{f}\), contributes to the dimension-four \(S^{4}\) operator rather than to a dimension-eight one.
In the case of fermions which possess generation (flavor) indices, we have
\[[U_{f}]_{ij}=[Y+M_{f}(S+i\gamma^{5}R)]_{ij};\qquad\text{where, $i,j$ are the flavor indices.}\]
The interaction terms \(S\) and \(R\) stem from Yukawa-like interactions. This is further reflected in the Wilson coefficients (WCs), which now carry the flavor indices as well. From a phenomenological perspective, this suggests that low-energy experiments, such as flavor physics observations, must have a significant impact on these WCs.
Our computed effective action contains information about both scalar (\(S\)) and pseudo-scalar (\(R\)) interactions. We have noted that if we switch off the pseudo-scalar part, i.e., set \(R=0\), we only generate the CP-conserving (CPC) effective operators. This is in line with our expectation, since CP violation (CPV) arises through the non-universal Yukawa couplings between different chiralities of fermions. This serves as a simple consistency test of our result.
Here, we discuss in some detail a few interesting features of the CPV operators:
* The CPV nature, if any, can be identified directly only for operator classes that do not involve the covariant derivative \(D\). For such definite cases, we have delineated the operator classes consisting only of \(\Phi\) and \(F\) with the (CPC) and (CPV) labels. The remaining classes we have simply classified based on the presence of the Levi-Civita tensor. For example, the operators in Eq. (3.37) are not always CP-violating despite the fact that each operator is associated with a Levi-Civita tensor. A careful inspection reveals that, by IBP, these operators can be combined with the \(F\Phi^{4}D^{2}\) class. Let us consider the operator \((\varepsilon_{\alpha\beta\mu\nu}\,R_{\mu}R_{\nu}S_{\alpha}R_{\beta})\) in the \(\Phi^{4}D^{4}\) class. By IBP, this operator can be rewritten as, \[\varepsilon_{\alpha\beta\mu\nu}\,R_{\mu}R_{\nu}S_{\alpha}R_{\beta}=-\varepsilon_{\alpha\beta\mu\nu}\,R_{\mu}R_{\nu}S_{\alpha\beta}R-\varepsilon_{\alpha\beta\mu\nu}\,R_{\mu}R_{\nu\beta}S_{\alpha}R-\varepsilon_{\alpha\beta\mu\nu}\,R_{\mu\beta}R_{\nu}S_{\alpha}R.\] Using the anti-symmetric property of the Levi-Civita tensor, operators of the form \(\varepsilon_{\alpha\beta\mu\nu}\Phi_{\alpha\beta}\) can be written as, \[\varepsilon_{\alpha\beta\mu\nu}\Phi_{\alpha\beta}=\frac{1}{2}\varepsilon_{\alpha\beta\mu\nu}F_{\beta\alpha}\Phi-\frac{1}{2}\varepsilon_{\alpha\beta\mu\nu}\Phi F_{\beta\alpha}.\]
Here, \(\Phi\) is a scalar, and a tensor of any order can be constructed through repeated action of covariant derivative on it, e.g., a rank two tensor is constructed as \(P_{\beta}P_{\alpha}\Phi=\Phi_{\alpha\beta}\). Using the above identity, we can transform \(\Phi^{4}D^{4}\) class to \(F\Phi^{4}D^{2}\) class: \[\varepsilon_{\alpha\beta\mu\nu}\,R_{\mu}R_{\nu}S_{\alpha}R_{\beta} =-\varepsilon_{\alpha\beta\mu\nu}\,R_{\mu}R_{\nu}S_{\alpha\beta}R -\varepsilon_{\alpha\beta\mu\nu}\,R_{\mu}R_{\nu\beta}S_{\alpha}R- \varepsilon_{\alpha\beta\mu\nu}\,R_{\mu\beta}R_{\nu}S_{\alpha}R\] \[=\frac{1}{2}\big{\{}\varepsilon_{\alpha\beta\mu\nu}\,R_{\mu}R_{ \nu}SRF_{\beta\alpha}-\varepsilon_{\alpha\beta\mu\nu}\,R_{\mu}R_{\nu}F_{\beta \alpha}SR-\varepsilon_{\alpha\beta\mu\nu}\,R_{\mu}F_{\beta\nu}RS_{\alpha}R\] \[\quad+\varepsilon_{\alpha\beta\mu\nu}\,R_{\mu}RF_{\beta\nu}S_{ \alpha}R-\varepsilon_{\alpha\beta\mu\nu}\,F_{\beta\mu}RR_{\nu}S_{\alpha}R+ \varepsilon_{\alpha\beta\mu\nu}\,RF_{\beta\mu}R_{\nu}S_{\alpha}R\big{\}}\] \[=R_{\mu}R_{\nu}\tilde{F}_{\mu\nu}SR-R_{\mu}R_{\nu}SR\tilde{F}_{ \mu\nu}+R_{\mu}\tilde{F}_{\alpha\mu}RS_{\alpha}R-R_{\mu}R\tilde{F}_{\alpha\mu }S_{\alpha}R\] \[\quad-\tilde{F}_{\alpha\nu}RR_{\nu}S_{\alpha}R+R\tilde{F}_{\alpha \nu}R_{\nu}S_{\alpha}R.\]
* We note that in the process of integrating out the heavy fermions at the one-loop level, only the CP-conserving operator class \(F^{4}\) emerges. This was expected, since even at the dimension six level only the CP-conserving operator class emerges up to one loop. We expect CPV \(F^{4}\) operators to be generated for the first time at the two-loop level, as per the lesson noted in Refs. [82; 92; 93] for the dimension six CPV class \(F^{3}\).
* The CPV operators in the \(F^{3}\) class first appear at two loops [82; 92; 93], whereas the CPV operators in the \(\Phi^{2}F^{3}\) operator class at dimension eight appear at the one-loop level. In the case of SMEFT, the \(F^{3}\) operator class may receive a contribution from the \(\Phi^{2}F^{3}\) class after electroweak symmetry breaking, and thus the respective WC will be suppressed by a factor of \(v_{ew}^{2}/\Lambda^{2}\), where \(v_{ew}\) is the vacuum expectation value (vev) of the SM Higgs and \(\Lambda\) is the scale of new physics. However, since the \(F^{3}\) operator class will have an additional \((4\pi)^{-2}\) loop factor suppression, it is possible that \(\Phi^{2}F^{3}\) offers a dominant contribution to the \(F^{3}\) class compared to the two-loop generated contributions to the same, as the rough estimate below illustrates.
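To make the last point semi-quantitative, one can compare the two suppression factors numerically (a rough estimate, taking the well-known value \(v_{ew}\simeq 246\) GeV and, purely for illustration, \(\Lambda=2\) TeV):
\[\frac{v_{ew}^{2}}{\Lambda^{2}}\simeq\left(\frac{246\;\text{GeV}}{2\;\text{TeV}}\right)^{2}\simeq 1.5\times 10^{-2},\qquad\frac{1}{(4\pi)^{2}}\simeq 6.3\times 10^{-3},\]
so for \(\Lambda\lesssim 4\pi\,v_{ew}\simeq 3\) TeV the vev suppression of the \(\Phi^{2}F^{3}\)-induced contribution is indeed milder than the loop suppression accompanying the directly generated two-loop \(F^{3}\) contribution.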
## 5 Conclusion and Outlook
With the aim of achieving more precision and extracting more information from a theory, we have attempted to compute the one-loop effective action up to dimension eight for the first time. As a follow-up to our previous paper [36], we have now computed the universal one-loop effective action (UOLEA) up to dimension eight, obtained after integrating out heavy fermions carrying any allowed SM gauge quantum numbers. We have emphasized that, although the Dirac operator is only a weakly elliptic operator, it is still possible to use the HK method after a successful bosonization of the fermionic operator. This enables us to use the UOLEA computed in our previous work [36] and highlights its truly universal features. This also displays the robustness of the HK method for a model-independent computation of the one-loop effective action. Our results for the lower-dimensional effective action agree with those computed in the existing literature.
We have explicitly computed the fermionic UOLEA using the HK method, which captures the footprint of both CP-conserving and CP-violating interactions in UV theories. Our result is equally applicable to any UV as well as low-energy theory. We have discussed many features of CP violation that can be captured in our generic effective action. For example, this will be very useful to compute the CPV effects very precisely at low energy after integrating out the top quark. At this point, our result captures only the effect of heavy fermion loops and the CPV arising through Yukawa interactions. As a future endeavour, we are in the process of adding the contributions from loops containing mixed spin propagators, along with the light-heavy ones as well. But those are beyond the scope of this article.
## Acknowledgements
We acknowledge the useful discussions with Shamik Banerjee, Diptarka Das, and Nilay Kundu. SR would like to thank the Institute of Physics Bhubaneswar for the hospitality where part of this work was done.
## Appendix A D7 operators in terms of \(\Sigma\) and \(Y\)
Expanding the generalised functional \(U\) in UOLEA and collecting the \(\mathcal{O}(1/M^{3})\) terms, we get,
\[\mathcal{L}_{\rm eff}^{\Psi(D7)}= \frac{c_{s}}{(4\pi)^{2}}\,{\rm tr}\bigg{[}\frac{1}{M^{3}}\,\bigg{\{} -\frac{64}{105}\Sigma^{7}+\frac{8}{5}Y\,\Sigma^{5}-\frac{2}{3}Y^{2}\,\Sigma^{3 }-\frac{1}{12}Y^{2}\,(\mathcal{P}^{2}\Sigma)+\frac{1}{3}Y^{3}\,\Sigma-\frac{2 }{15}\Sigma^{3}\,F_{\mu\nu}^{2}\] \[-\frac{2}{15}\Sigma^{3}\,\Gamma_{\mu\nu}^{2}+\frac{4}{15}\Sigma^ {3}\,(\mathcal{P}^{2}Y)-\frac{8}{15}\Sigma^{3}\,(\mathcal{P}_{\nu}\Sigma)^{2 }-\frac{4}{5}\Sigma^{4}\,(\mathcal{P}^{2}\Sigma)-\frac{1}{90}F_{\rho\sigma}^{ 2}\,(\mathcal{P}^{2}\Sigma)\] \[-\frac{1}{90}\Gamma_{\rho\sigma}^{2}\,(\mathcal{P}^{2}\Sigma)+ \frac{1}{60}(\mathcal{P}_{\mu}\,\mathcal{P}_{\mu}Y)(\mathcal{P}^{2}\Sigma)- \frac{4}{45}(\mathcal{P}_{\mu}\,\mathcal{P}_{\mu}\Sigma)(\mathcal{P}_{\nu} \Sigma)^{2}+\frac{1}{60}(\mathcal{P}_{\mu}\,\mathcal{P}_{\mu}\Sigma)( \mathcal{P}^{2}Y)\] \[+\frac{1}{15}Y\,\Sigma\,F_{\mu\nu}^{2}+\frac{1}{15}Y\,\Sigma\, \Gamma_{\mu\nu}^{2}+\frac{2}{15}Y\,\Sigma\,(\mathcal{P}_{\mu}\Sigma)^{2}- \frac{1}{12}Y\,\Sigma\,(\mathcal{P}^{2}Y)+\frac{4}{15}Y\,\Sigma^{2}\,( \mathcal{P}^{2}\Sigma)\] \[+\frac{1}{15}Y\,F_{\mu\nu}^{2}\,\Sigma+\frac{1}{15}Y\,\Gamma_{ \mu\nu}^{2}\,\Sigma-\frac{1}{30}Y\,(\mathcal{P}_{\mu}\Sigma)\,(\mathcal{P}_{ \nu}\,F_{\nu\mu})-\frac{1}{30}Y\,(\mathcal{P}_{\mu}\Sigma)\,(\mathcal{P}_{\nu }\,\Gamma_{\nu\mu})\] \[+\frac{2}{15}Y\,(\mathcal{P}_{\mu}\Sigma)^{2}\,\Sigma-\frac{1}{1 2}Y\,(\mathcal{P}^{2}Y)\Sigma+\frac{4}{15}Y\,(\mathcal{P}^{2}\Sigma)\Sigma^{ 2}-\frac{1}{30}\Sigma\,(\mathcal{P}_{\mu}Y)\,(\mathcal{P}_{\nu}\,F_{\nu\mu})\] \[-\frac{1}{30}\Sigma\,(\mathcal{P}_{\mu}Y)\,(\mathcal{P}_{\nu}\, \Gamma_{\nu\mu})-\frac{2}{15}\Sigma\,(\mathcal{P}^{2}\Sigma)(\mathcal{P}^{2} \Sigma)+\frac{1}{30}\Sigma\,(\mathcal{P}_{\nu}\,F_{\nu\mu})(\mathcal{P}_{\rho }\,F_{\rho\mu})\] \[+\frac{1}{30}\Sigma\,(\mathcal{P}_{\nu}\,F_{\nu\mu})\mathcal{P}_{ \rho}\,\Gamma_{\rho\mu}+\frac{1}{30}\Sigma\,(\mathcal{P}_{\nu}\,\Gamma_{\nu \mu})(\mathcal{P}_{\rho}\,F_{\rho\mu})+\frac{1}{30}\Sigma\,(\mathcal{P}_{\nu} \,\Gamma_{\nu\mu})(\mathcal{P}_{\rho}\,\Gamma_{\rho\mu})\] \[+\frac{2}{15}\Sigma^{2}\,(\mathcal{P}_{\mu}Y)\,(\mathcal{P}_{\mu} \Sigma)+\frac{2}{15}\Sigma^{2}\,(\mathcal{P}_{\mu}\Sigma)\,(\mathcal{P}_{\mu}Y )+\frac{2}{45}\Sigma^{2}\,(\mathcal{P}_{\mu}\Sigma)\,(\mathcal{P}_{\nu}\,F_{ \nu\mu})\] \[+\frac{2}{45}\Sigma^{2}\,(\mathcal{P}_{\mu}\Sigma)\,(\mathcal{P}_{ \nu}\,\Gamma_{\nu\mu})-\frac{2}{45}\Sigma^{2}\,\mathcal{P}_{\mu}\,F_{\mu\nu}( \mathcal{P}_{\nu}\Sigma)-\frac{2}{45}\Sigma^{2}\,\mathcal{P}_{\mu}\,\Gamma_{\mu \nu}(\mathcal{P}_{\nu}\Sigma)-\frac{2}{15}\Sigma^{3}\,F_{\mu\nu}\,\Gamma_{ \mu\nu}\] \[-\frac{2}{15}\Sigma^{3}\,\Gamma_{\mu\nu}\,F_{\mu\nu}-\frac{2}{45}F _{\rho\mu}\,F_{\rho\nu}\,(\mathcal{P}_{\mu}\,\mathcal{P}_{\nu}\Sigma)-\frac{2 }{45}F_{\rho\mu}\,\Gamma_{\rho\nu}\,(\mathcal{P}_{\mu}\,\mathcal{P}_{\nu} \Sigma)-\frac{2}{45}F_{\rho\nu}\,(\mathcal{P}_{\mu}\,\mathcal{P}_{\nu}\Sigma) \Gamma_{\rho\mu}\] \[-\frac{1}{90}F_{\rho\sigma}\,\Gamma_{\rho\sigma}\,(\mathcal{P}^{2} \Sigma)-\frac{1}{90}F_{\rho\sigma}\,(\mathcal{P}^{2}\Sigma)\Gamma_{\rho\sigma}- \frac{2}{45}\Gamma_{\rho\mu}\,\Gamma_{\rho\nu}\,(\mathcal{P}_{\mu}\, \mathcal{P}_{\nu}\Sigma)-\frac{2}{3}Y\,\Sigma\,Y\,\Sigma^{2}\] \[+\frac{1}{15}Y\,\Sigma\,F_{\mu\nu}\,\Gamma_{\mu\nu}+\frac{1}{15}Y \,\Sigma\,\Gamma_{\mu\nu}\,F_{\mu\nu}+\frac{4}{15}Y\,\Sigma\,(\mathcal{P}^{2} \Sigma)\Sigma+\frac{1}{30}Y\,F_{\mu\nu}\,\Sigma\,F_{\mu\nu}\] \[+\frac{1}{30}Y\,F_{\mu\nu}\,\Sigma\,\Gamma_{\mu\nu}\,\frac{1}{15}Y 
\,F_{\mu\nu}\,\Gamma_{\mu\nu}\,\Sigma+\frac{1}{30}Y\,\Gamma_{\mu\nu}\,\Sigma\,F_ {\mu\nu}+\frac{1}{30}Y\,\Gamma_{\mu\nu}\,\Sigma\,\Gamma_{\mu\nu}\] \[+\frac{1}{15}Y\,\Gamma_{\mu\nu}\,F_{\mu\nu}\,\Sigma-\frac{1}{45} \Sigma\,F_{\mu\nu}\,F_{\nu\rho}\,F_{\rho\mu}-\frac{1}{45}\Sigma\,F_{\mu\nu} \,F_{\nu\rho}\,\Gamma_{\rho\mu}-\frac{1}{45}\Sigma\,F_{\mu\nu}\,\Gamma_{\nu\rho }\,F_{\rho\mu}\]
\[-\frac{2}{45}\Sigma\,F_{\mu\nu}\,\Gamma_{\nu\rho}\,\Gamma_{\rho\mu}- \frac{2}{45}\Sigma\,F_{\mu\nu}\,(\mathcal{P}_{\mu}\Sigma)\,(\mathcal{P}_{\nu} \Sigma)-\frac{1}{45}\Sigma\,\Gamma_{\mu\nu}\,F_{\nu\rho}\,F_{\rho\mu}-\frac{1} {45}\Sigma\,\Gamma_{\mu\nu}\,F_{\nu\rho}\,\Gamma_{\rho\mu}\] \[-\frac{1}{45}\Sigma\,\Gamma_{\mu\nu}\,\Gamma_{\nu\rho}\,F_{\rho \mu}-\frac{1}{45}\Sigma\,\Gamma_{\mu\nu}\,\Gamma_{\nu\rho}\,\Gamma_{\rho\mu}- \frac{2}{45}\Sigma\,\Gamma_{\mu\nu}\,(\mathcal{P}_{\mu}\Sigma)\,(\mathcal{P}_{ \nu}\Sigma)-\frac{4}{45}\Sigma^{2}\,F_{\mu\nu}\,\Sigma\,F_{\mu\nu}\] \[-\frac{2}{45}\Sigma\,(\mathcal{P}_{\mu}\Sigma)\,(\mathcal{P}_{\nu }\Sigma)\,F_{\mu\nu}-\frac{4}{45}\Sigma^{2}\,F_{\mu\nu}\,\Sigma\,\Gamma_{\mu \nu}-\frac{4}{45}\Sigma^{2}\,\Gamma_{\mu\nu}\,\Sigma\,F_{\mu\nu}-\frac{4}{45} \Sigma^{2}\,\Gamma_{\mu\nu}\,\Sigma\,\Gamma_{\mu\nu}\] \[-\frac{2}{45}\Sigma\,(\mathcal{P}_{\mu}\Sigma)\,(\mathcal{P}_{\nu }\Sigma)\,\Gamma_{\mu\nu}\bigg{\}}\bigg{]}.\] (A.1)
## Appendix B D8 operators in terms of \(\Sigma\) and \(Y\)
Expanding the generalised functional \(U\) in UOLEA and collecting the \(\mathcal{O}(1/M^{4})\) terms, we get,
\[\mathcal{L}_{\rm eff}^{\Psi(D8)}= \frac{c_{s}}{(4\pi)^{2}}\,{\rm tr}\bigg{[}\frac{1}{M^{4}}\bigg{\{} \frac{1}{24}Y^{4}+\frac{16}{21}\Sigma^{8}-\frac{32}{15}Y\,\Sigma^{6}+\frac{4} {5}Y^{2}\,\Sigma^{4}+\frac{1}{30}Y^{2}\,F_{\mu\nu}^{2}+\frac{1}{30}Y^{2}\, \Gamma_{\mu\nu}^{2}\] \[+\frac{1}{15}Y^{2}\,(\mathcal{P}_{\mu}\Sigma)^{2}-\frac{1}{24}Y^ {2}\,(\mathcal{P}^{2}Y)-\frac{1}{3}Y^{3}\,\Sigma^{2}+\frac{1}{15}\Sigma^{2}\, (\mathcal{P}_{\mu}Y)^{2}+\frac{2}{21}\Sigma^{4}\,F_{\mu\nu}^{2}+\frac{2}{21} \Sigma^{4}\,\Gamma_{\mu\nu}^{2}\] \[-\frac{32}{21}\Sigma^{4}\,(\mathcal{P}_{\mu}\Sigma)^{2}-\frac{2} {5}\Sigma^{4}\,(\mathcal{P}^{2}Y)+\frac{17}{5040}F_{\mu\nu}^{2}\,F_{\rho\sigma }^{2}+\frac{17}{5040}F_{\mu\nu}^{2}\,\Gamma_{\rho\sigma}^{2}+\frac{11}{315}F_ {\mu\nu}^{2}\,(\mathcal{P}_{\rho}\Sigma)^{2}\] \[+\frac{17}{5040}F_{\rho\sigma}^{2}\,\Gamma_{\mu\nu}^{2}-\frac{1} {180}F_{\rho\sigma}^{2}\,(\mathcal{P}^{2}Y)+\frac{17}{5040}\Gamma_{\mu\nu}^{2 }\,\Gamma_{\rho\sigma}^{2}+\frac{11}{315}\Gamma_{\mu\nu}^{2}\,(\mathcal{P}_{ \rho}\Sigma)^{2}-\frac{1}{180}\Gamma_{\rho\sigma}^{2}\,(\mathcal{P}^{2}Y)\] \[+\frac{6}{35}(\mathcal{P}_{\mu}\Sigma)^{2}\,(\mathcal{P}_{\nu} \Sigma)^{2}-\frac{2}{45}(\mathcal{P}^{2}Y)(\mathcal{P}_{\nu}\Sigma)^{2}+ \frac{1}{120}(\mathcal{P}^{2}Y)(\mathcal{P}^{2}Y)-\frac{1}{210}(\mathcal{P}_ {\mu}\,\mathcal{P}^{2}\Sigma)\,(\mathcal{P}_{\mu}\,\mathcal{P}^{2}\Sigma)\] \[+\frac{1}{840}(\mathcal{P}_{\rho}\,\mathcal{P}_{\alpha}\,F_{\alpha \nu})^{2}+\frac{1}{840}(\mathcal{P}_{\rho}\,\mathcal{P}_{\alpha}\,F_{\alpha \nu})\,(\mathcal{P}_{\rho}\,(\mathcal{P}_{\mu}\,\Gamma_{\mu\nu}))+\frac{1}{84 0}(\mathcal{P}_{\rho}\,\mathcal{P}_{\alpha}\,\Gamma_{\alpha\nu})\,(\mathcal{P} _{\rho}\,(\mathcal{P}_{\mu}\,F_{\mu\nu}))\] \[+\frac{1}{840}(\mathcal{P}_{\rho}\,\mathcal{P}_{\alpha}\,\Gamma_{ \alpha\nu})^{2}+\frac{4}{315}\Sigma\,F_{\mu\nu}(\mathcal{P}^{2}\Sigma)F_{\mu \nu}+\frac{4}{315}\Sigma\,F_{\mu\nu}(\mathcal{P}^{2}\Sigma)F_{\mu\nu}+\frac{4} {315}\Sigma\,F_{\mu\nu}(\mathcal{P}^{2}\Sigma)\Gamma_{\mu\nu}\] \[+\frac{4}{315}\Sigma\,\Gamma_{\mu\nu}(\mathcal{P}^{2}\Sigma)\Gamma _{\mu\nu}-\frac{1}{15}Y\,\Sigma^{2}\,F_{\mu\nu}^{2}-\frac{1}{15}Y\,\Sigma^{2} \,\Gamma_{\mu\nu}^{2}+\frac{2}{15}Y\,\Sigma^{2}\,(\mathcal{P}^{2}Y)-\frac{4}{15 }Y\,\Sigma^{2}\,(\mathcal{P}_{\nu}\Sigma)^{2}\] \[-\frac{2}{5}Y\,\Sigma^{3}\,(\mathcal{P}^{2}\Sigma)-\frac{1}{15}Y\,F _{\mu\nu}^{2}\,\Sigma^{2}-\frac{1}{15}Y\,\Gamma_{\mu\nu}^{2}\,\Sigma^{2}-\frac{ 1}{60}Y\,(\mathcal{P}_{\mu}Y)\,(\mathcal{P}_{\nu}\,F_{\nu\mu})-\frac{2}{5}Y\,( \mathcal{P}^{2}\Sigma)\Sigma^{3}\] \[-\frac{1}{60}Y\,(\mathcal{P}_{\mu}Y)\,(\mathcal{P}_{\nu}\,\Gamma_{ \nu\mu})+\frac{2}{15}Y\,(\mathcal{P}^{2}Y)\Sigma^{2}-\frac{1}{15}Y\,( \mathcal{P}^{2}\Sigma)(\mathcal{P}^{2}\Sigma)-\frac{4}{15}Y\,(\mathcal{P}_{ \nu}\Sigma)^{2}\,\Sigma^{2}\] \[+\frac{1}{60}Y\,(\mathcal{P}_{\nu}\,F_{\nu\mu})((\mathcal{P}_{\rho }\,F_{\rho\mu}))+\frac{1}{60}Y\,(\mathcal{P}_{\nu}\,F_{\nu\mu})(\mathcal{P}_{ \rho}\,\Gamma_{\rho\mu})+\frac{1}{60}Y\,(\mathcal{P}_{\nu}\,\Gamma_{\nu\mu})(( \mathcal{P}_{\rho}\,F_{\rho\mu}))\] \[+\frac{1}{60}Y\,(\mathcal{P}_{\nu}\,\Gamma_{\nu\mu})(\mathcal{P}_{ \rho}\,\Gamma_{\rho\mu})+\frac{2}{15}Y^{2}\,\Sigma\,(\mathcal{P}^{2}\Sigma)+ \frac{1}{30}Y^{2}\,F_{\mu\nu}\,\Gamma_{\mu\nu}+\frac{1}{30}Y^{2}\,\Gamma_{ \mu\nu}\,F_{\mu\nu}\] \[+\frac{2}{15}Y^{2}\,(\mathcal{P}^{2}\Sigma)\Sigma+\frac{4}{105}\Sigma \,F_{\nu\mu}^{2}\,(\mathcal{P}^{2}\Sigma)+\frac{4}{105}\Sigma\,\Gamma_{\nu \mu}^{2}\,(\mathcal{P}^{2}\Sigma)-\frac{1}{15}\Sigma\,(\mathcal{P}^{2}Y)( 
\mathcal{P}^{2}\Sigma)\] \[+\frac{4}{105}\Sigma\,(\mathcal{P}^{2}\Sigma)F_{\mu\nu}^{2}+\frac{4} {105}\Sigma\,(\mathcal{P}^{2}\Sigma)\Gamma_{\mu\nu}^{2}-\frac{1}{15}\Sigma\,( \mathcal{P}^{2}\Sigma)(\mathcal{P}^{2}Y)+\frac{1}{45}\Sigma^{2}\,(\mathcal{P}_{ \mu}Y)\,(\mathcal{P}_{\nu}\,F_{\nu\mu})\] \[+\frac{1}{45}\Sigma^{2}\,(\mathcal{P}_{\mu}Y)\,(\mathcal{P}_{\nu} \,\Gamma_{\nu\mu})-\frac{1}{45}\Sigma^{2}\,(\mathcal{P}_{\mu}\,F_{\mu\nu})( \mathcal{P}_{\nu}Y)+\frac{2}{21}\Sigma^{4}\,\Gamma_{\mu\nu}\,F_{\mu\nu}+\frac{1 7}{5040}F_{\mu\nu}\,F_{\rho\sigma}^{2}\
\[+\frac{17}{5040}F_{\mu\nu}\,\Gamma_{\mu\nu}\,\Gamma_{\rho\sigma}^{2}+ \frac{11}{315}F_{\mu\nu}\,\Gamma_{\mu\nu}\left(\mathcal{P}_{\rho}\Sigma\right)^{ 2}+\frac{17}{5040}F_{\mu\nu}\,\Gamma_{\rho\sigma}^{2}\,\Gamma_{\mu\nu}+\frac{1 7}{5040}F_{\mu\nu}^{2}\,F_{\rho\sigma}\,\Gamma_{\rho\sigma}\] \[+\frac{2}{315}F_{\mu\nu}\left(\mathcal{P}_{\alpha}\,F_{\alpha\mu} \right)(\mathcal{P}_{\rho}\,F_{\rho\nu})+\frac{2}{315}F_{\mu\nu}\left(\mathcal{ P}_{\alpha}\,F_{\alpha\mu}\right)(\mathcal{P}_{\rho}\,\Gamma_{\rho\nu})+\frac{17}{5040 }F_{\mu\nu}^{2}\,\Gamma_{\rho\sigma}\,F_{\rho\sigma}\] \[+\frac{2}{315}F_{\mu\nu}\left(\mathcal{P}_{\alpha}\,\Gamma_{ \alpha\mu}\right)(\mathcal{P}_{\rho}\,F_{\rho\nu})+\frac{2}{315}F_{\mu\nu} \left(\mathcal{P}_{\alpha}\,\Gamma_{\alpha\mu}\right)(\mathcal{P}_{\rho}\, \Gamma_{\rho\nu})+\frac{11}{315}F_{\mu\nu}\left(\mathcal{P}_{\rho}\Sigma \right)^{2}\Gamma_{\mu\nu}\] \[+\frac{2}{315}F_{\nu\sigma}\,F_{\sigma\rho}\left(\mathcal{P}_{ \rho}\,\mathcal{P}_{\mu}\,F_{\mu\nu}\right)+\frac{2}{315}F_{\nu\sigma}\,F_{ \sigma\rho}\left(\mathcal{P}_{\rho}\,\mathcal{P}_{\mu}\,\Gamma_{\mu\nu}\right) +\frac{2}{315}F_{\nu\sigma}\,\Gamma_{\sigma\rho}\left(\mathcal{P}_{\rho}\, \mathcal{P}_{\mu}\,F_{\mu\nu}\right)\] \[+\frac{2}{315}F_{\nu\sigma}\,\Gamma_{\sigma\rho}\left(\mathcal{P} _{\rho}\,\mathcal{P}_{\mu}\,\Gamma_{\mu\nu}\right)-\frac{1}{45}F_{\rho\mu}\,F_ {\rho\nu}\left(\mathcal{P}_{\mu}\,\mathcal{P}_{\nu}Y\right)-\frac{1}{45}F_{ \rho\mu}\,\Gamma_{\rho\nu}\left(\mathcal{P}_{\mu}\,\mathcal{P}_{\nu}Y\right)\] \[+\frac{17}{5040}F_{\rho\sigma}\,\Gamma_{\mu\nu}^{2}\,\Gamma_{ \rho\sigma}+\frac{17}{5040}F_{\rho\sigma}\,\Gamma_{\rho\sigma}^{2}\,\Gamma_{ \mu\nu}^{2}-\frac{1}{180}F_{\rho\sigma}\,\Gamma_{\rho\sigma}\,(\mathcal{P}^{ 2}Y)-\frac{1}{180}F_{\rho\sigma}\,(\mathcal{P}^{2}Y)\Gamma_{\rho\sigma}\] \[+\frac{2}{315}F_{\sigma\rho}\left(\mathcal{P}_{\rho}\,\mathcal{P} _{\mu}\,F_{\mu\nu}\right)\Gamma_{\nu\sigma}+\frac{2}{315}F_{\sigma\rho}\left( \mathcal{P}_{\rho}\,\mathcal{P}_{\mu}\,\Gamma_{\mu\nu}\right)\Gamma_{\nu\sigma }+\frac{2}{315}\Gamma_{\mu\nu}\left(\mathcal{P}_{\alpha}\,F_{\alpha\mu} \right)(\mathcal{P}_{\rho}\,F_{\rho\nu})\] \[+\frac{2}{315}\Gamma_{\mu\nu}\left(\mathcal{P}_{\alpha}\,F_{ \alpha\mu}\right)(\mathcal{P}_{\rho}\,\Gamma_{\rho\nu})+\frac{2}{315}\Gamma_{ \mu\nu}\left(\mathcal{P}_{\alpha}\,\Gamma_{\alpha\mu}\right)(\mathcal{P}_{\rho }\,F_{\rho\nu})+\frac{2}{315}\Gamma_{\mu\nu}\left(\mathcal{P}_{\alpha}\, \Gamma_{\alpha\mu}\right)(\mathcal{P}_{\rho}\,\Gamma_{\rho\nu})\] \[+\frac{2}{315}\Gamma_{\nu\sigma}\,\Gamma_{\sigma\rho}\left( \mathcal{P}_{\rho}\,\mathcal{P}_{\mu}\,F_{\mu\nu}\right)+\frac{2}{315}\Gamma_ {\nu\sigma}\,\Gamma_{\sigma\rho}\left(\mathcal{P}_{\rho}\,\mathcal{P}_{\mu}\, \Gamma_{\mu\nu}\right)-\frac{1}{45}\Gamma_{\rho\mu}\,\Gamma_{\rho\nu}\left( \mathcal{P}_{\mu}\,\mathcal{P}_{\nu}Y\right)\] \[-\frac{1}{45}F_{\rho\nu}\left(\mathcal{P}_{\mu}\,\mathcal{P}_{\nu }Y\right)\Gamma_{\rho\mu}-\frac{2}{315}(\mathcal{P}_{\mu}\Sigma)\left(\mathcal{ P}_{\mu}\,\mathcal{P}_{\rho}\,F_{\rho\nu}\right)(\mathcal{P}_{\nu}\Sigma)+\frac{1}{15}Y\, \Sigma\left(\mathcal{P}_{\mu}\Sigma\right)(\mathcal{P}_{\mu}Y)\] \[+\frac{2}{315}(\mathcal{P}_{\mu}\Sigma)\left(\mathcal{P}_{\nu} \Sigma\right)\left(\mathcal{P}_{\mu}\,\mathcal{P}_{\rho}\,\Gamma_{\rho\nu} \right)-\frac{4}{315}(\mathcal{P}_{\mu}\Sigma)\left(\mathcal{P}_{\nu}\,F_{\nu \mu}\right)(\mathcal{P}^{2}\Sigma)-\frac{1}{15}Y\,\Sigma\,\Gamma_{\mu\nu}^{2}\,\Sigma\] \[+\frac{4}{315}(\mathcal{P}_{\mu}\Sigma)\left(\mathcal{P}^{2} 
\Sigma\right)(\mathcal{P}_{\nu}\,F_{\nu\mu})+\frac{4}{315}(\mathcal{P}_{\mu} \Sigma)\left(\mathcal{P}^{2}\Sigma\right)(\mathcal{P}_{\nu}\,\Gamma_{\nu\mu})+ \frac{2}{15}Y\,\Sigma\,(\mathcal{P}^{2}Y)\Sigma\] \[-\frac{2}{45}(\mathcal{P}^{2}\Sigma)(\mathcal{P}_{\nu}\Sigma) \left(\mathcal{P}_{\nu}Y\right)+\frac{4}{5}Y\,\Sigma\,Y\,\Sigma^{3}+\frac{2}{15 }Y\,\Sigma\,Y\,(\mathcal{P}^{2}\Sigma)-\frac{1}{15}Y\,\Sigma\,F_{\mu\nu}^{2}\,\Sigma\] \[-\frac{2}{315}(\mathcal{P}_{\mu}\Sigma)\left(\mathcal{P}_{\mu}\, \mathcal{P}_{\rho}\,\Gamma_{\rho\nu}\right)(\mathcal{P}_{\nu}\Sigma)-\frac{2}{45 }(\mathcal{P}^{2}\Sigma)(\mathcal{P}_{\nu}Y)\left(\mathcal{P}_{\nu}\Sigma\right) +\frac{1}{15}Y\,\Sigma\left(\mathcal{P}_{\mu}Y\right)(\mathcal{P}_{\mu}\Sigma)\] \[+\frac{1}{45}Y\,\Sigma\left(\mathcal{P}_{\mu}\Sigma\right)( \mathcal{P}_{\nu}\,F_{\nu\mu})+\frac{1}{45}Y\,\Sigma\left(\mathcal{P}_{\mu} \Sigma\right)(\mathcal{P}_{\nu}\,\Gamma_{\nu\mu})-\frac{1}{45}Y\,\Sigma\left( \mathcal{P}_{\mu}\,F_{\mu\nu}\right)(\mathcal{P}_{\nu}\Sigma)\] \[-\frac{4}{315}(\mathcal{P}_{\mu}\Sigma)\left(\mathcal{P}_{\nu}\, \Gamma_{\nu\mu}\right)(\mathcal{P}^{2}\Sigma)+\frac{2}{315}(\mathcal{P}_{\mu} \Sigma)\left(\mathcal{P}_{\nu}\Sigma\right)(\mathcal{P}_{\mu}\,\mathcal{P}_{ \rho}\,F_{\rho\nu})-\frac{2}{5}Y\,\Sigma^{2}\,(\mathcal{P}^{2}\Sigma)\Sigma\] \[-\frac{2}{5}Y\,\Sigma\,(\mathcal{P}^{2}\Sigma)\Sigma^{2}-\frac{4}{ 15}Y\,\Sigma\left(\mathcal{P}_{\nu}\Sigma\right)^{2}\Sigma+\frac{1}{120}Y\,F_{ \mu\nu}\,Y\,F_{\mu\nu}+\frac{1}{15}Y\left(\mathcal{P}_{\mu}\Sigma\right)( \mathcal{P}_{\mu}Y)\,\Sigma\] \[-\frac{1}{45}Y\,\Sigma\left(\mathcal{P}_{\mu}\,\Gamma_{\mu\nu} \right)(\mathcal{P}_{\nu}\Sigma)+\frac{2}{5}Y\,\Sigma^{2}\,Y\,\Sigma^{2}- \frac{1}{15}Y\,\Sigma^{2}\,F_{\mu\nu}\,\Gamma_{\mu\nu}-\frac{1}{15}Y\,\Sigma^{2} \,\Gamma_{\mu\nu}\,F_{\mu\nu}\] \[+\frac{1}{60}Y\,F_{\mu\nu}\,Y\,\Gamma_{\mu\nu}-\frac{2}{45}Y\,F_{ \mu\nu}\,\Sigma^{2}\,F_{\mu\nu}-\frac{2}{45}Y\,F_{\mu\nu}\,\Sigma^{2}\, \Gamma_{\mu\nu}-\frac{1}{90}Y\,F_{\mu\nu}\,F_{\nu\rho}\,F_{\rho\mu}\] \[-\frac{1}{90}Y\,F_{\mu\nu}\,F_{\
\[+\frac{1}{45}Y\left(\mathcal{P}_{\mu}\Sigma\right)\left(\mathcal{P}_{ \nu}\,\Gamma_{\nu\mu}\right)\Sigma-\frac{1}{45}Y\left(\mathcal{P}_{\mu}\,F_{ \mu\nu}\right)\left(\mathcal{P}_{\nu}\Sigma\right)\Sigma-\frac{1}{45}Y\left( \mathcal{P}_{\mu}\,\Gamma_{\mu\nu}\right)\left(\mathcal{P}_{\nu}\Sigma\right)\Sigma\] \[-\frac{1}{45}\Sigma\,F_{\mu\nu}\left(\mathcal{P}_{\mu}Y\right) \left(\mathcal{P}_{\nu}\Sigma\right)-\frac{1}{45}\Sigma\,F_{\mu\nu}\left( \mathcal{P}_{\mu}\Sigma\right)\left(\mathcal{P}_{\nu}Y\right)-\frac{2}{63} \Sigma\,F_{\mu\nu}\left(\mathcal{P}_{\mu}\Sigma\right)\left(\mathcal{P}_{\rho }\,F_{\rho\nu}\right)\] \[-\frac{2}{63}\Sigma\,F_{\mu\nu}\left(\mathcal{P}_{\mu}\Sigma \right)\left(\mathcal{P}_{\rho}\,\Gamma_{\rho\nu}\right)-\frac{2}{105}\Sigma \,F_{\mu\nu}\left(\mathcal{P}_{\rho}\,F_{\rho\nu}\right)\left(\mathcal{P}_{ \mu}\Sigma\right)-\frac{2}{105}\Sigma\,F_{\mu\nu}\left(\mathcal{P}_{\rho}\, \Gamma_{\rho\nu}\right)\left(\mathcal{P}_{\mu}\Sigma\right)\] \[+\frac{4}{105}\Sigma\,F_{\nu\mu}\,\Gamma_{\nu\mu}\left(\mathcal{P }^{2}\Sigma\right)-\frac{1}{45}\Sigma\,\Gamma_{\mu\nu}\left(\mathcal{P}_{\mu}Y \right)\left(\mathcal{P}_{\nu}\Sigma\right)-\frac{1}{45}\Sigma\,\Gamma_{\mu \nu}\left(\mathcal{P}_{\mu}\Sigma\right)\left(\mathcal{P}_{\nu}Y\right)\] \[-\frac{2}{105}\Sigma\,\Gamma_{\mu\nu}\left(\mathcal{P}_{\rho}\,F_ {\rho\nu}\right)\left(\mathcal{P}_{\mu}\Sigma\right)-\frac{2}{105}\Sigma\, \Gamma_{\mu\nu}\left(\mathcal{P}_{\rho}\,\Gamma_{\rho\nu}\right)-\frac{1}{45} \Sigma\left(\mathcal{P}_{\mu}Y\right)\left(\mathcal{P}_{\nu}\Sigma\right)F_{ \mu\nu}\] \[+\frac{4}{105}\Sigma\,\Gamma_{\mu\nu}\,F_{\nu\mu}\left(\mathcal{P }^{2}\Sigma\right)-\frac{1}{45}\Sigma\left(\mathcal{P}_{\mu}Y\right)\left( \mathcal{P}_{\nu}\Sigma\right)\Gamma_{\mu\nu}-\frac{4}{105}\Sigma\left( \mathcal{P}_{\mu}\Sigma\right)F_{\mu\nu}\left(\mathcal{P}_{\rho}\,F_{\rho\nu}\right)\] \[-\frac{4}{105}\Sigma\left(\mathcal{P}_{\mu}\Sigma\right)F_{\mu\nu }\left(\mathcal{P}_{\rho}\,\Gamma_{\rho\nu}\right)-\frac{4}{105}\Sigma\left( \mathcal{P}_{\mu}\Sigma\right)\Gamma_{\mu\nu}\left(\mathcal{P}_{\rho}\,F_{\rho \nu}\right)-\frac{1}{45}\Sigma\left(\mathcal{P}_{\mu}\Sigma\right)\left( \mathcal{P}_{\nu}Y\right)\Gamma_{\mu\nu}\] \[+\frac{52}{105}\Sigma\left(\mathcal{P}_{\mu}\Sigma\right)\left( \mathcal{P}_{\nu}\Sigma\right)\left(\mathcal{P}_{\mu}\,\mathcal{P}_{\nu} \Sigma\right)-\frac{2}{105}\Sigma\left(\mathcal{P}_{\mu}\Sigma\right)\left( \mathcal{P}_{\rho}\,F_{\rho\nu}\right)F_{\mu\nu}+\frac{2}{105}\Sigma^{2}\,F_ {\mu\nu}\,\Gamma_{\nu\rho}\,F_{\rho\mu}\] \[-\frac{2}{105}\Sigma\left(\mathcal{P}_{\mu}\Sigma\right)\left( \mathcal{P}_{\rho}\,\Gamma_{\rho\nu}\right)F_{\mu\nu}-\frac{2}{105}\Sigma\left( \mathcal{P}_{\mu}\Sigma\right)\left(\mathcal{P}_{\rho}\,\Gamma_{\rho\nu}\right) \Gamma_{\mu\nu}+\frac{4}{105}\Sigma\left(\mathcal{P}^{2}\Sigma\right)F_{\mu\nu }\,\Gamma_{\mu\nu}\] \[+\frac{4}{105}\Sigma\left(\mathcal{P}^{2}\Sigma\right)\Gamma_{\mu \nu}\,F_{\mu\nu}+\frac{6}{35}\Sigma\left(\mathcal{P}_{\mu}\,\mathcal{P}_{\nu} \Sigma\right)\Sigma\left(\mathcal{P}_{\nu}\,\mathcal{P}_{\mu}\Sigma\right)+\frac{5 2}{105}\Sigma\left(\mathcal{P}_{\mu}\,\mathcal{P}_{\nu}\Sigma\right)\left( \mathcal{P}_{\mu}\Sigma\right)\left(\mathcal{P}_{\nu}\Sigma\right)\] \[+\frac{12}{35}\Sigma\left(\mathcal{P}_{\nu}\Sigma\right)\left( \mathcal{P}_{\mu}\,\mathcal{P}_{\nu}\Sigma\right)\left(\mathcal{P}_{\mu}\Sigma \right)-\frac{1}{35}\Sigma\left(\mathcal{P}_{\nu}\,F_{\nu\mu}\right)\Sigma 
\left(\mathcal{P}_{\rho}\,F_{\rho\mu}\right)-\frac{1}{35}\Sigma\left(\mathcal{ P}_{\nu}\,F_{\nu\mu}\right)\Sigma\left(\mathcal{P}_{\rho}\,\Gamma_{\rho\mu}\right)\] \[-\frac{1}{35}\Sigma\left(\mathcal{P}_{\nu}\,\Gamma_{\nu\mu}\right) \Sigma\left(\mathcal{P}_{\rho}\,F_{\rho\mu}\right)-\frac{1}{35}\Sigma\left( \mathcal{P}_{\nu}\,\Gamma_{\nu\mu}\right)\Sigma\left(\mathcal{P}_{\rho}\,\Gamma_{ \rho\mu}\right)-\frac{4}{105}\Sigma\left(\mathcal{P}_{\rho}\,F_{\rho\nu}\right) F_{\mu\nu}\left(\mathcal{P}_{\mu}\Sigma\right)\] \[-\frac{4}{105}\Sigma\left(\mathcal{P}_{\rho}\,F_{\rho\nu}\right) \Gamma_{\mu\nu}\left(\mathcal{P}_{\mu}\Sigma\right)-\frac{2}{63}\Sigma\left( \mathcal{P}_{\rho}\,F_{\rho\nu}\right)\left(\mathcal{P}_{\mu}\Sigma\right)F_{ \mu\nu}-\frac{2}{63}\Sigma\left(\mathcal{P}_{\rho}\,F_{\rho\nu}\right)\left( \mathcal{P}_{\mu}\Sigma\right)\Gamma_{\mu\nu}\] \[-\frac{4}{105}\Sigma\left(\mathcal{P}_{\rho}\,\Gamma_{\rho\nu} \right)F_{\mu\nu}\left(\mathcal{P}_{\mu}\Sigma\right)-\frac{4}{105}\Sigma\left( \mathcal{P}_{\rho}\,\Gamma_{\rho\nu}\right)\Gamma_{\mu\nu}\left(\mathcal{P}_{ \mu}\Sigma\right)-\frac{2}{63}\Sigma\left(\mathcal{P}_{\rho}\,\Gamma_{\rho\nu} \right)\left(\mathcal{P}_{\mu}\Sigma\right)F_{\mu\nu}\] \[-\frac{2}{63}\Sigma\left(\mathcal{P}_{\rho}\,\Gamma_{\rho\nu}\right) \left(\mathcal{P}_{\mu}\Sigma\right)\Gamma_{\mu\nu}+\frac{2}{105}\Sigma^{2}\,F_ {\mu\nu}\,F_{\nu\rho}\,F_{\rho\mu}+\frac{2}{105}\Sigma^{2}\,F_{\mu\nu}\,F_{ \nu\rho}\,\Gamma_{\rho\mu}\] \[+\frac{2}{105}\Sigma^{2}\,F_{\mu\nu}\,\Gamma_{\nu\rho}\,\Gamma_{ \rho\mu}+\frac{16}{35}\Sigma^{2}\,F_{\mu\nu}\left(\mathcal{P}_{\mu}\Sigma\right) \left(\mathcal{P}_{\nu}\Sigma\right)+\frac{2}{105}\Sigma^{2}\,\Gamma_{\mu \nu}\,F_{\nu\rho}\,F_{\rho\mu}\] \[+\frac{2}{105}\Sigma^{2}\,\Gamma_{\mu\nu}\,F_{\nu\rho}\,\Gamma_{ \rho\mu}+\frac{2}{105}\Sigma^{2}\,\Gamma_{\mu\nu}\,\Gamma_{\nu\rho}\,F_{ \rho\mu}+\frac{2}{105}\Sigma^{2}\,\Gamma_{\mu\nu}\,\Gamma_{\nu\rho}\,\Gamma_{ \rho\mu}+\frac{6}{35}\Sigma^{2}\,\Gamma_{\mu\nu}\,\Sigma^{2}\,F_{\mu\nu}\] \[+\frac{16}{35}\Sigma^{2}\,\Gamma_{\mu\nu}\left(\mathcal{P}_{\mu} \Sigma\right)\left(\mathcal{P}_{\nu}\Sigma\right)-\frac{48}{35}\Sigma^{2}\left( \mathcal{P}_{\mu}\Sigma\right)\Sigma^{2}\left(\mathcal{P}_{\mu}\Sigma\right)+ \frac{3}{35}\Sigma^{2}\,\Gamma_{\mu\nu}\,\Sigma^{2}\,\Gamma_{\mu\nu}\] \[+\frac{3}{35}\Sigma^{2}\,F_{\mu\nu}\,\Sigma^{2}\,F_{\mu\nu}+\frac{1 2}{35}\Sigma^{2}(\mathcal{P}_{\nu}\Sigma)\,F_{\mu\nu}\left(\mathcal{P}_{\mu} \Sigma\right)
\[+\frac{1}{2520}F_{\alpha\mu}\,F_{\mu\nu}\,\Gamma_{\nu\rho}\,\Gamma_{ \rho\alpha}+\frac{1}{2520}F_{\alpha\mu}\,\Gamma_{\mu\nu}\,F_{\nu\rho}\,F_{\rho \alpha}+\frac{1}{2520}F_{\alpha\mu}\,\Gamma_{\mu\nu}\,F_{\nu\rho}\,\Gamma_{\rho\alpha}\] \[+\frac{1}{2520}F_{\alpha\mu}\,\Gamma_{\mu\nu}\,\Gamma_{\nu\rho}\,F _{\rho\alpha}+\frac{1}{2520}F_{\alpha\mu}\,\Gamma_{\mu\nu}\,\Gamma_{\nu\rho}\, \Gamma_{\rho\alpha}-\frac{1}{105}F_{\alpha\nu}\,(\mathcal{P}_{\mu}\Sigma)\,F_{ \mu\nu}\,(\mathcal{P}_{\alpha}\Sigma)\] \[-\frac{1}{105}F_{\alpha\nu}\,(\mathcal{P}_{\mu}\Sigma)\,\Gamma_{ \mu\nu}\,(\mathcal{P}_{\alpha}\Sigma)+\frac{1}{420}F_{\mu\nu}\,F_{\nu\rho}\,F_ {\mu\sigma}\,F_{\sigma\rho}+\frac{1}{420}F_{\mu\nu}\,F_{\nu\rho}\,F_{\mu \sigma}\,\Gamma_{\sigma\rho}\] \[+\frac{1}{2520}F_{\mu\nu}\,F_{\nu\rho}\,F_{\rho\alpha}\,\Gamma_{ \alpha\mu}+\frac{1}{420}F_{\mu\nu}\,F_{\nu\rho}\,\Gamma_{\mu\sigma}\,F_{\sigma \rho}+\frac{1}{420}F_{\mu\nu}\,F_{\nu\rho}\,\Gamma_{\mu\sigma}\,\Gamma_{\sigma\rho}\] \[+\frac{1}{2520}F_{\mu\nu}\,F_{\nu\rho}\,\Gamma_{\rho\alpha}\, \Gamma_{\alpha\mu}+\frac{1}{10080}F_{\mu\nu}\,F_{\rho\sigma}\,F_{\mu\nu}\,F_ {\rho\sigma}+\frac{1}{5040}F_{\mu\nu}\,F_{\rho\sigma}\,F_{\mu\nu}\,\Gamma_{ \rho\sigma}\] \[+\frac{1}{5040}F_{\mu\nu}\,F_{\rho\sigma}\,\Gamma_{\mu\nu}\,F_{ \rho\sigma}+\frac{1}{5040}F_{\mu\nu}\,F_{\rho\sigma}\,\Gamma_{\mu\nu}\,\Gamma_{ \rho\sigma}+\frac{17}{5040}F_{\mu\nu}\,F_{\rho\sigma}\,\Gamma_{\rho\sigma}\, \Gamma_{\mu\nu}\] \[+\frac{17}{5040}F_{\mu\nu}\,F_{\mu\nu}\,F_{\rho\sigma}\,\Gamma_{ \rho\sigma}+\frac{17}{5040}F_{\mu\nu}\,\Gamma_{\mu\nu}\,\Gamma_{\rho\sigma}\, F_{\rho\sigma}+\frac{1}{420}F_{\mu\nu}\,\Gamma_{\nu\rho}\,F_{\mu\sigma}\,F_{ \sigma\rho}\] \[+\frac{1}{420}F_{\mu\nu}\,\Gamma_{\nu\rho}\,F_{\mu\sigma}\,\Gamma _{\sigma\rho}+\frac{1}{2520}F_{\mu\nu}\,\Gamma_{\nu\rho}\,F_{\rho\alpha}\, \Gamma_{\alpha\mu}+\frac{1}{420}F_{\mu\nu}\,\Gamma_{\nu\rho}\,\Gamma_{\mu \sigma}\,F_{\sigma\rho}\] \[+\frac{1}{420}F_{\mu\nu}\,\Gamma_{\nu\rho}\,\Gamma_{\mu\sigma}\, \Gamma_{\sigma\rho}+\frac{1}{2520}F_{\mu\nu}\,\Gamma_{\nu\rho}\,\Gamma_{\rho \alpha}\,\Gamma_{\alpha\mu}+\frac{1}{10080}F_{\mu\nu}\,\Gamma_{\rho\sigma}\,F _{\mu\nu}\,\Gamma_{\rho\sigma}\] \[+\frac{17}{5040}F_{\mu\nu}\,\Gamma_{\rho\sigma}\,F_{\rho\sigma}\, \Gamma_{\mu\nu}+\frac{1}{5040}F_{\mu\nu}\,\Gamma_{\rho\sigma}\,\Gamma_{\mu\nu }\,F_{\rho\sigma}+\frac{1}{5040}F_{\mu\nu}\,\Gamma_{\rho\sigma}\,\Gamma_{\mu \nu}\,\Gamma_{\rho\sigma}\] \[-\frac{1}{105}F_{\mu\nu}\,(\mathcal{P}_{\alpha}\Sigma)\,\Gamma_{ \alpha\nu}\,(\mathcal{P}_{\mu}\Sigma)-\frac{1}{630}F_{\mu\nu}\,(\mathcal{P}_{ \rho}\Sigma)\,F_{\mu\nu}\,(\mathcal{P}_{\rho}\Sigma)-\frac{1}{315}F_{\mu\nu}\,( \mathcal{P}_{\rho}\Sigma)\,\Gamma_{\mu\nu}\,(\mathcal{P}_{\rho}\Sigma)\] \[-\frac{2}{315}F_{\mu\rho}\,F_{\rho\nu}\,(\mathcal{P}_{\mu}\Sigma) \,(\mathcal{P}_{\nu}\Sigma)+\frac{2}{63}F_{\mu\rho}\,F_{\rho\nu}\,(\mathcal{P}_ {\nu}\Sigma)\,(\mathcal{P}_{\mu}\Sigma)-\frac{2}{315}F_{\mu\rho}\,\Gamma_{\rho \nu}\,(\mathcal{P}_{\mu}\Sigma)\,(\mathcal{P}_{\nu}\Sigma)\] \[+\frac{2}{63}F_{\mu\rho}\,\Gamma_{\rho\nu}\,(\mathcal{P}_{\nu} \Sigma)\,(\mathcal{P}_{\mu}\Sigma)+\frac{1}{420}F_{\mu\sigma}\,F_{\sigma\rho}\, \Gamma_{\mu\nu}\,F_{\nu\rho}+\frac{1}{420}F_{\mu\sigma}\,F_{\sigma\rho}\, \Gamma_{\mu\nu}\,\Gamma_{\nu\rho}\] \[+\frac{1}{420}F_{\mu\sigma}\,\Gamma_{\sigma\rho}\,\Gamma_{\mu\nu}\,F _{\nu\rho}+\frac{1}{420}F_{\mu\sigma}\,\Gamma_{\sigma\rho}\,\Gamma_{\mu\nu}\, \Gamma_{\nu\rho}+\frac{1}{2520}F_{\nu\rho}\,F_{\rho\alpha}\,\Gamma_{\alpha\mu} \,\Gamma_{\mu\nu}\] 
\[+\frac{1}{420}F_{\nu\rho}\,\Gamma_{\mu\sigma}\,F_{\sigma\rho}\, \Gamma_{\mu\nu}+\frac{1}{420}F_{\nu\rho}\,\Gamma_{\mu\sigma}\,\Gamma_{\sigma \rho}\,\Gamma_{\mu\nu}\,\Gamma_{\rho\sigma}\,\Gamma_{\mu\nu}+\frac{1}{2520}F_{ \nu\rho}\,\Gamma_{\rho\alpha}\,\Gamma_{\alpha\mu}\,\Gamma_{\mu\nu}\] \[+\frac{1}{2520}F_{\rho\alpha}\,\Gamma_{\alpha\mu}\,\Gamma_{\mu\nu}\, \Gamma_{\nu\rho}-\frac{1}{105}F_{\rho\mu}\,(\mathcal{P}_{\mu}\Sigma)\,F_{\rho \nu}\,(\mathcal{P}_{\nu}\Sigma)-\frac{1}{105}F_{\rho\mu}\,(\mathcal{P}_{\mu} \Sigma)\,\Gamma_{\rho\nu}\,(\mathcal{P}_{\nu}\Sigma)\] \[-\frac{2}{315}F_{\rho\nu}\,(\mathcal{P}_{\mu}\Sigma)\,(\mathcal{P}_{ \nu}\Sigma)\,\Gamma_{\mu\rho}-\frac{1}{105}F_{\rho\nu}\,(\mathcal{P}_{\nu} \Sigma)\,\Gamma_{\rho\mu}\,(\mathcal{P}_{\mu}\Sigma)+\frac{2}{63}F_{\rho\nu} \,(\mathcal{P}_{\nu}\Sigma)\,(\mathcal{P}_{\mu}\Sigma)\,\Gamma_{\mu\rho}\] \[+\frac{1}{10080}F_{\rho\sigma}\,\Gamma_{\mu\nu}\,F_{\rho\sigma}\, \Gamma_{\mu\nu}+\frac{1}{5040}F_{\rho\sigma}\,\Gamma_{\mu\nu}\,\Gamma_{\rho \sigma}\,\Gamma_{\mu\nu}+\frac{1}{420}F_{\sigma\rho}\,\Gamma_{\mu\nu}\, \Gamma_{\nu\rho}\,\Gamma_{\mu\sigma}\] \[+\frac{1}{2520}\Gamma_{\alpha\mu}\,\Gamma_{\mu\nu}\,\Gamma_{\nu \rho}\,\Gamma_{\rho\alpha}-\frac{1}{105}\Gamma_{\alpha\nu}\,(\mathcal{P}_{\mu} \Sigma)\,\Gamma_{\mu\nu}\,(\mathcal{P}_{\alpha}\Sigma)+\frac{1}{420}\Gamma_{ \mu\nu}\,\Gamma_{\nu\rho}\,\Gamma_{\mu\sigma}\,\Gamma_{\sigma\rho}\] \[+\frac{1}{10080}\Gamma_{\mu\nu}\,\Gamma_{\rho\sigma}\,\Gamma_{\mu \nu}\,\Gamma_{\rho\sigma}-\frac{1}{630}\Gamma_{\mu\nu}\,(\mathcal{P}_{\rho} \Sigma)\,\Gamma_{\mu\nu}\,(\mathcal{P}_{\rho}\Sigma)-\frac{2}{315}\Gamma_{ \mu\rho}\,\Gamma_{\rho\nu}\,(\mathcal{P}_{\mu}\Sigma)\,(\mathcal{P}_{\nu} \Sigma)\] \[+\frac{2}{63}\Gamma_{\mu\rho}\,\Gamma_{\rho\nu}\,(\mathcal{P}_{\nu} \Sigma)\,(\mathcal{P}_{\mu}\Sigma)-\frac{1}{105}\Gamma_{\rho\mu}\,(\mathcal{P}_{ \mu}\Sigma)\,\Gamma_{\rho\nu}\,(\mathcal{P}_
\[+\frac{8}{315}\Sigma\,F_{\mu\nu}\,\Sigma\,F_{\nu\rho}\,\Gamma_{\rho \mu}+\frac{8}{315}\Sigma\,F_{\mu\nu}\,\Sigma\,\Gamma_{\nu\rho}\,F_{\rho\mu}+ \frac{8}{315}\Sigma\,F_{\mu\nu}\,\Sigma\,\Gamma_{\nu\rho}\,\Gamma_{\rho\mu}\] \[+\frac{52}{105}\Sigma\,F_{\mu\nu}\,(\mathcal{P}_{\mu}\Sigma)\, \Sigma\,(\mathcal{P}_{\nu}\Sigma)+\frac{8}{315}\Sigma\,F_{\nu\rho}\,F_{\rho\mu} \,\Sigma\,\Gamma_{\mu\nu}+\frac{8}{315}\Sigma\,F_{\nu\rho}\,\Gamma_{\rho\mu} \,\Sigma\,\Gamma_{\mu\nu}\] \[+\frac{8}{315}\Sigma\,\Gamma_{\mu\nu}\,\Sigma\,\Gamma_{\nu\rho}\, \Gamma_{\rho\mu}+\frac{12}{35}\Sigma\,\Gamma_{\mu\nu}\,\Sigma\,(\mathcal{P}_{ \mu}\Sigma)\,(\mathcal{P}_{\nu}\Sigma)+\frac{52}{105}\Sigma\,\Gamma_{\mu\nu} \,(\mathcal{P}_{\mu}\Sigma)\,\Sigma\,(\mathcal{P}_{\nu}\Sigma)\] \[+\frac{12}{35}\Sigma\,F_{\mu\nu}\,\Sigma\,(\mathcal{P}_{\mu} \Sigma)\,(\mathcal{P}_{\nu}\Sigma)+\frac{8}{315}\Sigma\,\Gamma_{\mu\nu}\, \Sigma\,\Gamma_{\nu\rho}\,F_{\rho\mu}+\frac{16}{105}\Sigma\,(\mathcal{P}_{\mu }\Sigma)\,\Sigma\,(\mathcal{P}_{\nu}\Sigma)\,F_{\mu\nu}\] \[+\frac{16}{105}\Sigma\,(\mathcal{P}_{\mu}\Sigma)\,\Sigma\,( \mathcal{P}_{\nu}\Sigma)\,\Gamma_{\mu\nu}\bigg{\}}\bigg{]}. \tag{113}\]
## Appendix C Universal One-Loop Effective Lagrangian up to D8
The UOLEA derived in Ref. [36], written in terms of the generalised covariant derivative and the interaction functional defined in Eq. (14), is given below.
\[\mathcal{L}_{\rm eff}^{d\leq 8}= \frac{c_{s}}{(4\pi)^{2}}\,M^{4}\left[-\frac{1}{2}\,\left(\ln\left[ \frac{M^{2}}{\mu^{2}}\right]-\frac{3}{2}\right)\right]+\frac{c_{s}}{(4\pi)^{2 }}{\rm tr}\bigg{\{}M^{2}\,\left[\,-\left(\ln\left[\frac{M^{2}}{\mu^{2}}\right] -1\right)\,U\right]\] \[+M^{0}\,\,\frac{1}{2}\bigg{[}-\ln\left[\frac{M^{2}}{\mu^{2}} \right]\,U^{2}-\frac{1}{6}\ln\left[\frac{M^{2}}{\mu^{2}}\right]\,(G_{\mu\nu}) ^{2}\bigg{]}\] \[+\frac{1}{M^{2}}\frac{1}{6}\bigg{[}-U^{3}-\frac{1}{2}(\mathcal{P }_{\mu}U)^{2}-\frac{1}{2}U\,(G_{\mu\nu})^{2}-\frac{1}{10}(J_{\nu})^{2}+\frac{1 }{15}\,G_{\mu\nu}\,G_{\nu\rho}\,G_{\rho\mu}\bigg{]}\] \[+\frac{1}{M^{4}}\frac{1}{24}\bigg{[}U^{4}-U^{2}(\mathcal{P}^{2}U) +\frac{4}{5}U^{2}(G_{\mu\nu})^{2}+\frac{1}{5}(U\,G_{\mu\nu})^{2}+\frac{1}{5}( \mathcal{P}^{2}U)^{2}\] \[\qquad\qquad\qquad\qquad-\frac{2}{5}U\,(\mathcal{P}_{\mu}U)\,J_ {\mu}+\frac{2}{5}U(J_{\mu})^{2}-\frac{2}{15}(\mathcal{P}^{2}U)(G_{\rho\sigma} )^{2}+\frac{1}{35}(\mathcal{P}_{\nu}J_{\mu})^{2}\] \[\qquad\qquad\qquad\qquad-\frac{4}{15}U\,G_{\mu\nu}G_{\nu\rho}G_{ \rho\mu}-\frac{8}{15}(\mathcal{P}_{\mu}\mathcal{P}_{\nu}U)\,G_{\rho\mu}G_{ \rho\nu}+\frac{16}{105}G_{\mu\nu}J_{\mu}J_{\nu}\] \[\qquad\qquad\qquad\qquad+\frac{1}{420}(G_{\mu\nu}G_{\rho\sigma}) ^{2}+\frac{17}{210}(G_{\mu\nu})^{2}(G_{\rho\sigma})^{2}+\frac{2}{35}(G_{\mu \nu}G_{\nu\rho})^{2}\] \[\qquad\qquad\qquad\qquad+\frac{1}{105}G_{\mu\nu}G_{\nu\rho}G_{ \rho\sigma}G_{\sigma\mu}+\frac{16}{105}(\mathcal{P}_{\mu}J_{\nu})G_{\nu\sigma}G_{ \sigma\mu}\bigg{]}\] \[+\frac{1}{M^{6}}\frac{1}{60}\bigg{[}-U^{5}+2\,U^{3}(\mathcal{P}^{ 2}U)+U^{2}(\mathcal{P}_{\mu}U)^{2}-\frac{2}{3}U^{2}G_{\mu\nu}U\,G_{\mu\nu}-U^{ 3}(G_{\mu\nu})^{2}\] \[\qquad\qquad\qquad\qquad+\frac{1}{3}U^{2}(\mathcal{P}_{\mu}U)J_ {\mu}-\frac{1}{3}U\,(\mathcal{P}_{\mu}U)(\mathcal{P}_{\nu}U)\,G_{\mu\nu}- \frac{1}{3}U^{2}J_{\mu}(\mathcal{P}_{\mu}U)\] \[\qquad\qquad\qquad\qquad-\frac{1}{3}U\,G_{\mu\nu}(\mathcal{P}_{ \mu}U)(\mathcal{P}_{\nu}U)-U\,(\mathcal{P}^{2}U)^{2}-\frac{2}{3}(\mathcal{P}^ {2}U)(\mathcal{P}_{\nu}U)^{2}-\frac{1}{7}((\mathcal{P}_{\mu}U)G_{\mu\alpha})^{2}\] \[\qquad\qquad\qquad\qquad+\frac{2}{7}U^{2}G_{\mu\nu}G_{\nu\alpha}G_ {\alpha\mu}+\frac{8}{21}U\,G_{\mu\nu}U\,G_{\nu\alpha}G_{\alpha\mu}-\frac{4}{7}U ^{2}(J_{\mu})^{2}-\frac{3}{7}(U\,J_{\mu})^{2}\] \[\qquad\qquad\qquad\qquad+\frac{4}{7}U\,(\mathcal{P}^{2}U)(G_{\mu \nu})^{2}+\frac{4}{7}(\mathcal{P}^{2}U)U(G_{\mu\nu})^{2}-\frac{2}{7}U\,( \mathcal{P}_{\mu}U)J_{\nu}G_{\mu\nu}\] \[\qquad\qquad\qquad\qquad-\frac{2}{7}(\mathcal{P}_{\mu}U)U\,G_{\mu \nu}J_{\nu}-\frac{4}{7}U\,(\mathcal{P}_{\mu}U)G_{\mu\nu}J_{\nu}-\frac{4}{7}( \mathcal{P}_{\mu}U)U\,J_{\nu}G_{\mu\nu}\] \[\qquad\qquad\qquad+\frac{4}{21}U\,G_{\mu\nu}(\mathcal{P}^{2}U)G_{ \mu\nu}+\frac{11}{21}(\mathcal{P}_{\alpha}U)^{2}(G_{\mu\nu})^{2}-\frac{10}{21 }(\mathcal{P}_{\mu}U)J_{\nu}U\,G_{\mu\nu}\] \[\qquad\qquad\qquad\qquad-\frac{10}{21}(\mathcal{P}_{\mu}U)G_{\mu \nu}U\,J_{\nu}-\frac{2}{21}(\mathcal{P}_{\mu}U)(\mathcal{P}_{\nu}U)G_{\mu \alpha}G_{\alpha\nu}+\frac{10}{21}(\mathcal{P}_{\nu}U)(\mathcal{P}_{\mu}U)G_{ \mu\alpha}G_{\alpha\nu}\]
\[-\frac{1}{7}(G_{\alpha\mu}(\mathcal{P}_{\mu}U))^{2}-\frac{1}{42}( \left(\mathcal{P}_{\alpha}U\right)G_{\mu\nu})^{2}-\frac{1}{14}(\mathcal{P}_{\mu} \mathcal{P}^{2}U)^{2}-\frac{4}{21}(\mathcal{P}^{2}U)(\mathcal{P}_{\mu}U)J_{\mu}\] \[+\frac{4}{21}(\mathcal{P}_{\mu}U)(\mathcal{P}^{2}U)J_{\mu}+\frac{ 2}{21}(\mathcal{P}_{\mu}U)(\mathcal{P}_{\nu}U)(\mathcal{P}_{\mu}J_{\nu})-\frac {2}{21}(\mathcal{P}_{\nu}U)(\mathcal{P}_{\mu}U)(\mathcal{P}_{\mu}J_{\nu})\bigg{]}\] \[+\frac{1}{M^{8}}\frac{1}{120}\left[U^{6}-3\,U^{4}(\mathcal{P}^{2 }U)-2\,U^{3}(\mathcal{P}_{\nu}U)^{2}+\frac{12}{7}U^{2}(\mathcal{P}_{\mu} \mathcal{P}_{\nu}U)(\mathcal{P}_{\nu}\mathcal{P}_{\mu}U)\right.\] \[+\frac{26}{7}(\mathcal{P}_{\mu}\mathcal{P}_{\nu}U)U\left( \mathcal{P}_{\mu}U\right)(\mathcal{P}_{\nu}U)+\frac{26}{7}(\mathcal{P}_{\mu} \mathcal{P}_{\nu}U)(\mathcal{P}_{\mu}U)(\mathcal{P}_{\nu}U)U+\frac{9}{7}( \mathcal{P}_{\mu}U)^{2}(\mathcal{P}_{\nu}U)^{2}\] \[+\frac{9}{7}U\left(\mathcal{P}_{\mu}\mathcal{P}_{\nu}U\right)U \left(\mathcal{P}_{\nu}\mathcal{P}_{\mu}U\right)+\frac{17}{14}((\mathcal{P}_{ \mu}U)(\mathcal{P}_{\nu}U))^{2}+\frac{8}{7}U^{3}G_{\mu\nu}U\,G_{\mu\nu}\] \[+\frac{5}{7}U^{4}(G_{\mu\nu})^{2}+\frac{18}{7}G_{\mu\nu}( \mathcal{P}_{\mu}U)U^{2}(\mathcal{P}_{\nu}U)+\frac{9}{14}(U^{2}G_{\mu\nu})^{2}\] \[+\frac{18}{7}G_{\mu\nu}U\left(\mathcal{P}_{\mu}U\right)(\mathcal{ P}_{\nu}U)U+\frac{18}{7}(\mathcal{P}_{\mu}\mathcal{P}_{\nu}U)(\mathcal{P}_{\mu}U)U \left(\mathcal{P}_{\nu}U\right)\] \[+\left(\frac{8}{7}G_{\mu\nu}U\left(\mathcal{P}_{\mu}U\right)U \left(\mathcal{P}_{\nu}U\right)+\frac{26}{7}G_{\mu\nu}(\mathcal{P}_{\mu}U)U \left(\mathcal{P}_{\nu}U\right)U\right)\] \[+\left(\frac{24}{7}G_{\mu\nu}(\mathcal{P}_{\mu}U)(\mathcal{P}_{ \nu}U)U^{2}-\frac{2}{7}G_{\mu\nu}U^{2}(\mathcal{P}_{\mu}U)(\mathcal{P}_{\nu}U )\right)\bigg{]}\] \[+\frac{1}{M^{10}}\frac{1}{210}\left[\,-U^{7}-5\,U^{4}(\mathcal{P }_{\nu}U)^{2}-8\,U^{3}(\mathcal{P}_{\mu}U)U(\mathcal{P}_{\mu}U)-\frac{9}{2}(U^ {2}(\mathcal{P}_{\mu}U))^{2}\right]\] \[+\frac{1}{M^{12}}\frac{1}{336}\left[U^{8}\right]\bigg{\}}\bigg{]}.\] (C.1)
Here, the tensors \(G_{\mu\nu}\) and \(J_{\mu}\) are built from the generalised covariant derivative \(\mathcal{P}\): \(G_{\mu\nu}=[\mathcal{P}_{\mu},\mathcal{P}_{\nu}]\), and \(J_{\mu}=\mathcal{P}_{\nu}G_{\nu\mu}=[\mathcal{P}_{\nu},[\mathcal{P}_{\nu},\mathcal{P}_{\mu}]]\). Note that the hermitian conjugates are already included in the above expression, so that the effective Lagrangian is self-hermitian.
|
2305.14311 | Statistical Indistinguishability of Learning Algorithms | When two different parties use the same learning rule on their own data, how
can we test whether the distributions of the two outcomes are similar? In this
paper, we study the similarity of outcomes of learning rules through the lens
of the Total Variation (TV) distance of distributions. We say that a learning
rule is TV indistinguishable if the expected TV distance between the posterior
distributions of its outputs, executed on two training data sets drawn
independently from the same distribution, is small. We first investigate the
learnability of hypothesis classes using TV indistinguishable learners. Our
main results are information-theoretic equivalences between TV
indistinguishability and existing algorithmic stability notions such as
replicability and approximate differential privacy. Then, we provide
statistical amplification and boosting algorithms for TV indistinguishable
learners. | Alkis Kalavasis, Amin Karbasi, Shay Moran, Grigoris Velegkas | 2023-05-23T17:49:56Z | http://arxiv.org/abs/2305.14311v1 | # Statistical Indistinguishability of Learning Algorithms
###### Abstract
When two different parties use the same learning rule on their own data, how can we test whether the distributions of the two outcomes are similar? In this paper, we study the similarity of outcomes of learning rules through the lens of the Total Variation (TV) distance of distributions. We say that a learning rule is TV indistinguishable if the expected TV distance between the posterior distributions of its outputs, executed on two training data sets drawn independently from the same distribution, is small. We first investigate the learnability of hypothesis classes using TV indistinguishable learners. Our main results are information-theoretic equivalences between TV indistinguishability and existing algorithmic stability notions such as replicability and approximate differential privacy. Then, we provide statistical amplification and boosting algorithms for TV indistinguishable learners.
## 1 Introduction
Lack of replicability in experiments has been a major issue, usually referred to as the _reproducibility crisis_, in many scientific areas such as biology and chemistry. Indeed, the results of a survey that appeared in Nature [1] are very worrisome: more than \(70\%\) of the researchers that participated in it could not replicate other researchers' experimental findings while over half of them were not able to even replicate their own conclusions. In the past few years the number of scientific publications in the Machine Learning (ML) community has increased exponentially. Significant concerns and questions regarding replicability have also recently been raised in the area of ML. This can be witnessed by the establishment of various reproducibility challenges in major ML conferences such as the ICLR 2019 Reproducibility Challenge [10] and the NeurIPS 2019 Reproducibility Program [11].
Reproducibility of outcomes in scientific research is a necessary condition to ensure that the conclusions of a study reflect inherent properties of the underlying population and are not an artifact of the methods the scientists used or of the random sample of the population on which the study was conducted. In its simplest form, it requires that if two different groups of researchers carry out an experiment using the same methodology but _different_ samples of the _same_ population, then the two outcomes of their studies should be _statistically indistinguishable_. In this paper, we investigate this notion in the context of ML (cf. Definition 1), and characterize for which learning problems statistically indistinguishable learning algorithms exist. Furthermore, we show how statistical indistinguishability, as a property of learning algorithms, is naturally related to various notions of algorithmic stability such as replicability of experiments and differential privacy.
While we mainly focus on the fundamental ML task of binary classification to make the presentation easier to follow, many of our results extend to other statistical tasks (cf. Appendix A.2). More formally, the
objects of interest are _randomized_ learning rules \(A:(\mathcal{X}\times\{0,1\})^{n}\to\{0,1\}^{\mathcal{X}}\). These learning rules take as input a sequence \(S\) of \(n\) pairs from \(\mathcal{X}\times\{0,1\}\), i.e., points from a domain \(\mathcal{X}\) along with their labels, and map them to a binary classifier in a randomized manner. We assume that this sequence \(S\) is generated i.i.d. from a distribution \(\mathcal{D}\) on \(\mathcal{X}\times\{0,1\}\). We denote by \(\{0,1\}^{\mathcal{X}}\) the space of binary classifiers and by \(A(S)\) the random variable that corresponds to the output of \(A\) on input \(S\)1. We also adopt a more algorithmic viewpoint for \(A\) where we denote it as a _deterministic_ mapping \((\mathcal{X}\times\{0,1\})^{n}\times\mathcal{R}\to\{0,1\}^{\mathcal{X}}\), which takes as input a training set \(S\) of size \(n\) made of instance-label pairs and a random string \(r\sim\mathcal{R}\) (we use \(\mathcal{R}\) for both the probability space and the distribution) corresponding to the algorithm's _internal randomness_, and outputs a hypothesis \(A(S,r)\in\{0,1\}^{\mathcal{X}}.\) Thus, \(A(S)\) corresponds to a random variable while \(A(S,r)\) is a deterministic object. To make the distinction clear, we refer to \(A(S)\) as (the image of) a _learning rule_ and to \(A(S,r)\) as (the image of) a _learning algorithm_.
Footnote 1: We identify with \(A(S)\) the posterior distribution of \(A\) on input \(S\) when there is no confusion.
Indistinguishability. We measure how much two distributions over hypotheses differ using some notion of **statistical dissimilarity** \(d\), which can belong to a quite general class; we could let it be either an Integral Probability Metric (IPM) (e.g., TV or Wasserstein distance, see Definition 13) or an \(f\)-divergence (e.g., KL or Rényi divergence). For further details, see [10]. We are now ready to introduce the following general definition of _indistinguishability of learning rules_.
**Definition 1** (Indistinguishability).: _Let \(d\) be a statistical dissimilarity measure. A learning rule \(A\) is \(n\)-sample \(\rho\)-indistinguishable with respect to \(d\) if for any distribution \(\mathcal{D}\) over inputs and two independent sets \(S,S^{\prime}\sim\mathcal{D}^{n}\) it holds that_
\[\mathop{\mathbf{E}}_{S,S^{\prime}\sim\mathcal{D}^{n}}\left[d\left(A(S),A(S^{ \prime})\right)\right]\leq\rho\,.\]
In words, Definition 1 states that the expected dissimilarity of the outputs of the learning rule, when executed on two training sets that are drawn independently from \(\mathcal{D}\), is small. We view Definition 1 as a general information-theoretic way to study indistinguishability as a property of learning rules. In particular, it captures the property that the distribution of outcomes of a learning rule is _indistinguishable_ under resampling of its inputs. Definition 1 provides the flexibility to define the dissimilarity measure according to the needs of the application domain. For instance, it captures as a special case the global stability property [1] (see Appendix A.2).
Replicability. Since the issue of replicability is omnipresent in scientific disciplines, it is important to design a formal framework through which we can argue about the replicability of experiments. Recently, various works proposed algorithmic definitions of replicability in the context of learning from samples [11, 12, 13], optimization [15], bandits [16] and clustering [17], and designed algorithms that are provably replicable under these definitions. A notion that is closely related to Definition 1 was introduced by [11]: reproducibility or replicability2 of learning algorithms is defined as follows:
Footnote 2: This property was originally defined as “reproducibility” in [11], but later it was pointed out that the correct term for this definition is “replicability” (see also [12]). We use the term replicability throughout our work.
**Definition 2** (Replicability [11]).: _Let \(\mathcal{R}\) be a distribution over random strings. A learning algorithm \(A\) is \(n\)-sample \(\rho\)-replicable if for any distribution \(\mathcal{D}\) over inputs and two independent sets \(S,S^{\prime}\sim\mathcal{D}^{n}\) it holds that_
\[\mathop{\mathbf{Pr}}_{S,S^{\prime}\sim\mathcal{D}^{n},r\sim\mathcal{R}}[A(S,r )\neq A(S^{\prime},r)]\leq\rho\,.\]
The existence of a shared random seed \(r\) in the definition of replicability is one of the main distinctions between Definitions 1 and 2. This shared random string can be seen as a way to achieve a _coupling_ (see Definition 12) between two executions of the algorithm \(A\). An interesting aspect of this definition is that replicability is verifiable; replicability under Definition 2 can be tested using polynomially many samples, random seeds \(r\), and queries to \(A\). We remark that the work of [10] introduced the closely related notion of pseudo-global stability (see Definition 6); the definitions of replicability and pseudo-global stability are equivalent up to polynomial factors in the parameters.
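This testability admits a simple Monte Carlo illustration: draw pairs of independent training sets together with a shared random seed, run the learner twice per pair, and record how often the two outputs differ. The sketch below is our own illustration (the learner `A`, the sampler `sample_D`, and the number of trials are placeholders, not objects from the paper); it assumes the returned hypotheses can be compared for equality.

```python
import random

def estimate_replicability(A, sample_D, n, trials=1000, seed_bits=64):
    """Monte Carlo estimate of Pr[A(S, r) != A(S', r)] from Definition 2.

    A        : deterministic map (sample, r) -> hypothesis (comparable with !=)
    sample_D : callable returning one i.i.d. labeled example (x, y) from D
    n        : size of each training set
    """
    mismatches = 0
    for _ in range(trials):
        r = random.getrandbits(seed_bits)         # shared internal randomness
        S = [sample_D() for _ in range(n)]        # first training set
        S_prime = [sample_D() for _ in range(n)]  # independent second training set
        if A(S, r) != A(S_prime, r):
            mismatches += 1
    return mismatches / trials                    # empirical replicability parameter
```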
Differential Privacy. The notions of algorithmic indistinguishability and replicability that we have discussed so far have close connections with the classical definition of approximate differential privacy [14]. For \(a,b,\varepsilon,\delta\in[0,1]\), let \(a\approx_{\varepsilon,\delta}b\) denote the statement \(a\leq e^{\varepsilon}b+\delta\) and \(b\leq e^{\varepsilon}a+\delta\). We say that two probability distributions \(P,Q\) are \((\varepsilon,\delta)\)-indistinguishable if \(P(E)\approx_{\varepsilon,\delta}Q(E)\) for any measurable event \(E\).
**Definition 3** (Approximate Differential Privacy [14]).: _A learning rule \(A\) is an \(n\)-sample \((\varepsilon,\delta)\)-differentially private if for any pair of samples \(S,S^{\prime}\in(\mathcal{X}\times\{0,1\})^{n}\) that disagree on a single example, the induced posterior distributions \(A(S)\) and \(A(S^{\prime})\) are \((\varepsilon,\delta)\)-indistinguishable._
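For finitely supported distributions, \((\varepsilon,\delta)\)-indistinguishability can be checked exactly: the smallest admissible \(\delta\) in the inequality \(P(E)\leq e^{\varepsilon}Q(E)+\delta\) equals \(\sum_{x}\max(P(x)-e^{\varepsilon}Q(x),0)\). The snippet below is a small illustration of ours (the toy distributions are made up) and is not part of the paper's formal development.

```python
import math

def min_delta(P, Q, eps):
    """Smallest delta with P(E) <= exp(eps) * Q(E) + delta for every event E,
    where P, Q are dicts mapping outcomes to probabilities."""
    support = set(P) | set(Q)
    return sum(max(P.get(x, 0.0) - math.exp(eps) * Q.get(x, 0.0), 0.0) for x in support)

def indistinguishable(P, Q, eps, delta):
    """Two-sided check of (eps, delta)-indistinguishability."""
    return min_delta(P, Q, eps) <= delta and min_delta(Q, P, eps) <= delta

# Toy posteriors over three hypotheses.
P = {"h1": 0.5, "h2": 0.3, "h3": 0.2}
Q = {"h1": 0.4, "h2": 0.4, "h3": 0.2}
print(indistinguishable(P, Q, eps=0.1, delta=0.1))  # True
```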
We remind the reader that, in the context of PAC learning, any hypothesis class \(\mathcal{H}\) can be PAC-learned by an approximate differentially-private algorithm if and only if it has a finite Littlestone dimension \(\operatorname{Ldim}(\mathcal{H})\) (see Definition14), i.e., there is a qualitative equivalence between online learnability and private PAC learnability [1, 1, 1, 2].
Broader Perspective. Our work lies in the fundamental research direction of responsible ML. Basic concepts in this area, such as DP, replicability, and different forms of fairness, are formalized using various forms of stability. Therefore, it is natural and important to formally study the interrelations between different types of algorithmic stability. Our main purpose is to study statistical indistinguishability and replicability as properties of algorithms and, under the perspective of stability, investigate rigorous connections with DP. We view both replicability and DP as two fundamental building blocks in the area of responsible and reliable ML. Hence, we believe that establishing formal connections between a priori not clearly related notions of "reliability" is a way to increase our understanding towards the design of responsible ML systems.
### TV Indistinguishable Learning Rules
As we discussed, our Definition 1 captures the property of a learning rule having _indistinguishable_ outcomes under the resampling of its inputs from the same distribution. In what follows, we instantiate Definition 1 with \(d\) being the total variation (TV) distance, probably the most well-studied notion of statistical distance in theoretical computer science. The total variation distance between two distributions \(P\) and \(Q\) over the probability space \((\Omega,\Sigma_{\Omega})\) can be expressed as
\[d_{\mathrm{TV}}(P,Q) =\sup_{A\in\Sigma_{\Omega}}P(A)-Q(A) \tag{1}\] \[=\inf_{(X,Y)\sim\Pi(P,Q)}\mathbf{Pr}[X\neq Y]\,,\]
where the infimum is over all couplings between \(P\) and \(Q\) so that the associated marginals are \(P\) and \(Q\) respectively. A _coupling_ between the distributions \(P\) and \(Q\) is a set of variables \((X,Y)\) on some common probability space with the given marginals, i.e., \(X\sim P\) and \(Y\sim Q\). We think of a coupling as a construction of random variables \(X,Y\) with prescribed laws.
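For distributions supported on a finite set of hypotheses, both expressions in Eq. (1) are straightforward to evaluate: the supremum form equals half of the \(\ell_{1}\) distance between the probability vectors, and an optimal coupling matches the two outputs with probability \(\sum_{h}\min(P(h),Q(h))=1-d_{\mathrm{TV}}(P,Q)\). The following sketch (our illustration, with toy inputs) computes both quantities and checks that they agree.

```python
def tv_distance(P, Q):
    """Total variation distance between two finitely supported distributions (dicts)."""
    support = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(h, 0.0) - Q.get(h, 0.0)) for h in support)

def optimal_agreement(P, Q):
    """Largest probability that a coupling of P and Q returns the same hypothesis."""
    support = set(P) | set(Q)
    return sum(min(P.get(h, 0.0), Q.get(h, 0.0)) for h in support)

P = {"h1": 0.6, "h2": 0.4}
Q = {"h1": 0.5, "h2": 0.3, "h3": 0.2}
assert abs(optimal_agreement(P, Q) - (1.0 - tv_distance(P, Q))) < 1e-12
print(tv_distance(P, Q))  # 0.2
```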
Setting \(d=d_{\mathrm{TV}}\) in Definition 1, we get the following natural definition. For simplicity, we use the term TV indistinguishability to capture indistinguishability with respect to the TV distance.
**Definition 4** (Total Variation Indistinguishability).: _A learning rule \(A\) is \(n\)-sample \(\rho\)-TV indistinguishable if for any distribution over inputs \(\mathcal{D}\) and two independent sets \(S,S^{\prime}\sim\mathcal{D}^{n}\) it holds that_
\[\operatorname*{\mathbf{E}}_{S,S^{\prime}\sim\mathcal{D}^{n}}[d_{\mathrm{TV}}( A(S),A(S^{\prime}))]\leq\rho\,.\]
For some equivalent definitions, we refer to Appendix A.3. Moreover, for an extensive discussion of the motivation behind this definition, see Appendix A.5. We emphasize that the notion of TV distance has very strong connections with statistical indistinguishability of distributions. If two distributions \(P\) and \(Q\) are close in TV distance, then, intuitively, no statistical test can distinguish whether an observation was drawn from \(P\) or \(Q\). In particular, if \(d_{\mathrm{TV}}(P,Q)=\rho\), then \(\rho/2\) is the maximum advantage an analyst can achieve in determining whether a random sample \(X\) came from \(P\) or from \(Q\) (where \(P\) or \(Q\) is used with probability \(1/2\) each). In what follows, we focus on this particular notion of statistical dissimilarity.
As a warmup, we start by proving a generalization result for TV indistinguishable learners. Recall that if we _fix_ some binary classifier we can show, using standard concentration bounds, that its performance on a
sample is close to its performance on the underlying population. However, when we train an ML algorithm using a dataset \(S\) to output a classifier \(h\) we cannot just use the fact that it has small loss on \(S\) to claim that its loss on the population is small because \(h\) depends on \(S\). The following result shows that we can get such generalization bounds if \(A\) is a \(\rho\)-TV indistinguishable algorithm. We remark that a similar result regarding replicable algorithms appears in [11]. The formal proof, stated in a slightly more general way, is in Appendix F.
**Proposition 1** (TV Indistinguishability Implies Generalization).: _Let \(\delta,\rho\in(0,1)^{2}\). Let \(\mathcal{D}\) be a distribution over inputs and \(S=\{(x_{i},y_{i})\}_{i\in[n]}\) be a sample of size \(n\) drawn i.i.d. from \(\mathcal{D}\). Let \(h:\mathcal{X}\to\{0,1\}\) be the output of an \(n\)-sample \(\rho\)-TV indistinguishable learning rule \(A\) with input \(S\). Then, with probability at least \(1-\delta-4\sqrt{\rho}\) over \(S\), it holds that,_
\[\left|\operatorname*{\mathbf{E}}_{h\sim A(S)}[L(h)]-\operatorname*{\mathbf{E} }_{h\sim A(S)}\left[\widehat{L}(h)\right]\right|\leq\sqrt{\frac{\log(2/\delta )}{2n}}+\sqrt{\rho}\,,\]
_where \(L(h)\triangleq\operatorname{\mathbf{Pr}}_{(x,y)\sim\mathcal{D}}[h(x)\neq y]\) and \(\widehat{L}(h)\triangleq\frac{1}{n}\sum_{(x,y)\in S}1\{h(x)\neq y\}\)._
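For a sense of scale, the short computation below evaluates the right-hand side of Proposition 1 and the corresponding failure probability for illustrative parameter values chosen by us (they do not appear in the paper).

```python
import math

def tv_generalization_bound(n, delta, rho):
    """Gap bound and failure probability from Proposition 1."""
    gap = math.sqrt(math.log(2.0 / delta) / (2.0 * n)) + math.sqrt(rho)
    failure = delta + 4.0 * math.sqrt(rho)
    return gap, failure

gap, failure = tv_generalization_bound(n=10_000, delta=0.05, rho=0.001)
print(f"gap <= {gap:.4f} with probability >= {1 - failure:.3f}")
# gap <= 0.0452 with probability >= 0.824
```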
### Summary Of Contributions
In this work, we investigate the connections between TV indistinguishability, replicability and differential privacy.
* In Section 2, we show that TV indistinguishability and replicability are equivalent. This equivalence holds for countable domains3 and extends to general statistical tasks (cf. Appendix C.2). Footnote 3: We remark that the direction replicability implies TV indistinguishability holds for general domains. We remark that our transformations between replicable and TV indistinguishable learners do not change the (possibly randomized) input \(\to\) output map which is induced by the learner; i.e., given a TV indistinguishable learner \(\mathcal{A}\), we transform it to a replicable learner \(\mathcal{A}^{\prime}\) such that \(\mathcal{A}(S)\) and \(\mathcal{A}^{\prime}(S)\) are the same distributions over output hypotheses for every input sample \(S\). At this point we would like to highlight a subtle difference between replicability and other well-studied notions of stability that arise in learning theory such as differential privacy, TV indistinguishability, one-way perfect generalization, and others. The latter notions of stability depend only on the input \(\to\) output map which is induced by the learner. In contrast, the definition of replicability has to do with the way the algorithm is implemented (in particular the way randomness is used). In other words, the definition of replicability enables having two learning rules \(\mathcal{A}^{\prime},\mathcal{A}^{\prime\prime}\) that compute exactly the same input \(\to\) output map, but such that \(\mathcal{A}^{\prime}\) is replicable and \(\mathcal{A}^{\prime\prime}\) is not. Thus, our equivalence suggests an interpretation of TV indistinguishability as an abstraction/extension of replicability that only depends on the input-output mechanism.
* In Section 3, we show that TV indistinguishability and \((\varepsilon,\delta)\)-DP are statistically equivalent. This equivalence holds for countable4 domains in the context of PAC learning. As an intermediate result, we also show that replicability and \((\varepsilon,\delta)\)-DP are statistically equivalent in the context of PAC learning, and this holds for general domains. Footnote 4: We remark that the direction \((\varepsilon,\delta)\)-DP implies TV indistinguishability holds for general domains.
* In Section 4, we provide statistical amplification and boosting algorithms for TV indistinguishable learners over countable domains. En route, we improve the sample complexity of some routines provided in [11].
### Related Work
Our work falls in the research agenda of replicable algorithm design, which was initiated by [11]. In particular, [11] introduced the notion of replicable learning algorithms, established that any statistical query algorithm can be made replicable, and designed replicable algorithms for various applications such as
halfspace learning. Next, [14] studied reproducibility in optimization and [15] provided replicable bandit algorithms.
The most closely related prior work to ours is the recent paper by [1]. In particular, as we discuss below in greater detail, an alternative proof of the equivalence between TV indistinguishability, replicability, and differential privacy follows from [1]. In contrast with our equivalence, the transformations by [1] are restricted to finite classes. On the other hand, [1] give a constructive proof whereas our proof is purely information-theoretic.
In more detail, [1] establish a variety of equivalences between different notions of stability such as differential privacy, replicability, and one-way perfect generalization, and the latter contains TV indistinguishability as a special case:
**Definition 5** ((One-Way) Perfect Generalization [16, 15]).: _A learning rule \(A:\mathcal{X}^{n}\to\mathcal{Y}\) is \((\beta,\varepsilon,\delta)\)-perfectly generalizing if, for every distribution \(\mathcal{D}\) over \(\mathcal{X}\), there exists a distribution \(\mathcal{P}_{\mathcal{D}}\) such that, with probability at least \(1-\beta\) over \(S\) consisting of \(n\) i.i.d. samples from \(\mathcal{D}\), and every set of outcomes \(\mathcal{O}\subseteq\mathcal{Y}\)_
\[e^{-\varepsilon}\left(\operatorname*{\mathbf{Pr}}_{\mathcal{P}_{\mathcal{D}}}[\mathcal{O}]-\delta\right)\leq\operatorname*{\mathbf{Pr}}[A(S)\in\mathcal{O}]\leq e^{\varepsilon}\operatorname*{\mathbf{Pr}}_{\mathcal{P}_{\mathcal{D}}}[\mathcal{O}]+\delta\,.\]
_Moreover, \(A\) is \((\beta,\varepsilon,\delta)\)-one-way perfectly generalizing if \(\operatorname*{\mathbf{Pr}}[A(S)\in\mathcal{O}]\leq e^{\varepsilon} \operatorname*{\mathbf{Pr}}_{\mathcal{P}_{\mathcal{D}}}[\mathcal{O}]+\delta\)._
Note indeed that plugging \(\varepsilon=0\) into the definition of perfect generalization specializes it to an equivalent variant of TV indistinguishability (see also Definition 20). [1] derive an equivalence between replicability and one-way perfect generalization with \(\varepsilon>0\). However, in a personal communication they pointed out to us that their argument also applies to the case \(\varepsilon=0\), and hence to TV indistinguishability. In more detail, an intermediate step of their proof shows that any \((\beta,\varepsilon,\delta)\)-perfectly generalizing algorithm \(A\) is also \((\beta,0,2\varepsilon+\delta)\)-perfectly generalizing, which is qualitatively equivalent to our main definition (see Definition 4). As noted earlier, our proof applies more generally to infinite countable domains but is non-constructive.
Differential Privacy. Differential privacy [11, 12, 13, 14] is quite closely related to replicability. The first connection between replicability and DP in the context of PAC learning was, implicitly, established by [13] (for finite domains \(\mathcal{X}\)), via the technique of correlated sampling (see Appendix A.4) and the notion of pseudo-global stability (which is equivalent to replicability as noticed by [10]):
**Definition 6** (Pseudo-Global Stability [13]).: _Let \(\mathcal{R}\) be a distribution over random strings. A learning algorithm \(A\) is said to be \(n\)-sample \((\eta,\nu)\)-pseudo-globally stable if for any distribution \(\mathcal{D}\) there exists a hypothesis \(h_{r}\) for every \(r\in\operatorname*{supp}(\mathcal{R})\) (depending on \(\mathcal{D}\)) such that_
\[\operatorname*{\mathbf{Pr}}_{r\sim\mathcal{R}}\left[\operatorname*{\mathbf{Pr }}_{S\sim\mathcal{D}^{n}}[A(S,r)=h_{r}]\geq\eta\right]\geq\nu\,.\]
The high-level connection between these notions appears to boil down to the notion of stability [1, 1, 1, 1, 1, 10] (see [1] for further details between stability, online learnability and differential privacy). In particular, [13] showed that a class of finite Littlestone dimension admits a list-globally stable learner (see Theorem 18 in [13]). The work of [13] (among other things) showed (i) how to perform a reduction from list-global stability to pseudo-global stability via correlated sampling in finite domains (see Theorem 20 in [13]) and (ii) how to perform a reduction from pseudo-global stability to approximate DP via DP selection (see Theorem 25 in [13]). We highlight that this equivalence between differential privacy and replicability for finite domains was made formal by [1] and was extended to arbitrary statistical tasks.
TV Stability. The definition of TV indistinguishability that we propose has close connections with the definition of TV stability. This notion has appeared in the context of adaptive data analysis. The work of [14] studied the following problem: suppose there is an unknown distribution \(P\) and a set \(S\) of \(n\) independent samples drawn i.i.d. from \(P\). The goal is to design an algorithm that, with input \(S\), will accurately answer a sequence of adaptively chosen queries about the unknown distribution \(P\). The main
question is how many samples must one draw from the distribution, as a function of the type of queries, the number of queries, and the desired level of accuracy to perform well? [14] provide various results that rely on the connections between algorithmic stability, differential privacy and generalization. To this end, they think of differential privacy as max-KL stability and study the performance of other notions of stability such as TV stability. Crucially, in their definition, TV stability considers any pair of neighboring datasets \(S,S^{\prime}\) and not two independent draws from \(P\). More concretely, they propose the following definition.
**Definition 7** (Total Variation Stability [14]).: _A learning rule \(A\) is \(n\)-sample \(\rho\)-TV stable if for any pair of samples \(S,S^{\prime}\in(\mathcal{X}\times\{0,1\})^{n}\) that disagree on a single example, it holds that \(d_{\mathrm{TV}}(A(S),A(S^{\prime}))\leq\rho\)._
We underline that for any constant \(\rho^{5}\) it is not challenging to obtain a \(\rho\)-TV stable algorithm in the learning setting we are interested in. It suffices to just sub-sample a small enough subset of the data. Hence, any class with finite VC dimension is TV stably learnable under this definition. As it is evident from our results (cf. Theorem4), this is in stark contrast with the definition we propose. We remind the readers that just sub-sampling the dataset is not enough to achieve differential privacy. This is because it is required that \(\delta=o(1/n).\) We remark that the definition of total variation stability a la [14] also appears in [13].
The above definition of TV stability has close connections to machine unlearning. This problem refers to the ability of a user to delete their data that were used to train a ML algorithm. When this happens, the machine learning algorithm has to move to a state as if it had never used that data for training, hence the term _machine unlearning_. One can see that Definition7 is suitable for this setting since it states that if one point of the dataset is deleted, the distribution of the algorithm should not be affected very much. For convex risk minimization problems, [15] design TV stable algorithms based on noisy Stochastic Gradient Descent (SGD). Such approaches lead to the design of efficient unlearning algorithms, which are based on sub-sampling the dataset and constructing a maximal coupling of Markov chains for the noisy SGD procedure.
KL Stability and PAC-Bayes. In Appendix A.3 we provide some equivalent definitions of TV indistinguishability. In particular, Definition 20 has connections with the line of work that studies distribution-dependent generalization bounds. To be more precise, if instead of the TV distance we use the KL divergence to measure the distance between the prior and the output of the algorithm, we get the definition of the quantity that is used to derive PAC-Bayes generalization bounds. Interestingly, [13] show that the PAC-Bayes framework cannot be used to derive distribution-free PAC learning bounds for classes that have infinite Littlestone dimension; they show that for any algorithm that learns 1-dimensional linear classifiers (thresholds), there exists a realizable distribution for which PAC-Bayes bounds are trivial. Recently, a similar PAC-Bayes framework was proposed in [1], where the KL divergence is replaced with a general family of Integral Probability Metrics (cf. Definition 13).
Probably Eventually Correct Learning.The work of [15] introduced the _Probably Eventually Correct_ (PEC) model of learning. In this model, a learner outputs the same hypothesis6, with probability one, after a uniformly bounded number of revisions. Intuitively, this corresponds to the property that the global stability parameter is close to 1. Interestingly, prior work on global stability [1, 1] had characterized _Littlestone classes_ as being PAC learnable by an algorithm which outputs some fixed hypothesis with nonzero probability. However, the frequency of this hypothesis was typically very small and its loss was a priori non-zero. [15] give a new characterization to Littlestone classes by identifying them with the classes that can be PEC learned in a stable fashion. Informally, this means that the learning rule for \(\mathcal{H}\) stabilizes on some hypothesis after changing its mind at most \(L\) times, where \(L\) is the Littlestone dimension of \(\mathcal{H}\) (cf. Definition14). Interestingly, [15] manage to show that the well-known _Standard Optimal Algorithm_ (SOA) [13] is a stable PEC learner, using tools from the theory of universal learning [1, 12, 13, 14, 15]. Moreover, they list various different notions of algorithmic stability and show that they all have something in common: a class \(\mathcal{H}\) is learnable by such learners if and only if its Littlestone dimension is finite. Our main result shows that, indeed, classes that are learnable by TV indistinguishable learners fall into that category.
Footnote 6: In fact, even for \(\rho\geq 1/n^{c},0<c<1\).
## 2 TV Indistinguishability and Replicability
Our information-theoretic definition of TV indistinguishability seems to put weaker restrictions on learning rules than the notion of replicability in two ways: (i) it allows for _arbitrary_ couplings between the two executions of the algorithm (recall the coupling definition of TV distance, see Eq. (1)), and (ii) it allows for _different_ couplings for every pair of datasets \(S,S^{\prime}\) (the optimal coupling in the definition of TV distance will depend on the pair \(S,S^{\prime}\) in Definition 4). In short, our definition allows for _arbitrary data-dependent_ couplings, instead of just sharing the randomness across two executions. TV indistinguishability can be viewed as a statistical generalization of replicability (cf. Definition 2) since it describes a property of _learning rules_ rather than _learning algorithms_.
In this section, we will show that TV indistinguishability and replicability are (perhaps surprisingly) equivalent in a rather strong sense: under a mild measure-theoretic condition, every TV indistinguishable algorithm can be converted into an _equivalent_ replicable one by _re-interpreting_ its internal randomness. This will be made formal shortly.
We start by showing that any replicable algorithm is TV indistinguishable.
**Theorem 1** (Replicability \(\Rightarrow\) TV Indistinguishability).: _If a learning rule \(A\) is \(n\)-sample \(\rho\)-replicable, then it is also \(n\)-sample \(\rho\)-TV indistinguishable._
Proof.: Fix some distribution \(\mathcal{D}\) over inputs. Let \(A\) be \(n\)-sample \(\rho\)-replicable with respect to \(\mathcal{D}\). For the random variables \(A(S),A(S^{\prime})\) where \(S,S^{\prime}\sim\mathcal{D}^{n}\) are two independent samples and using Eq.(1), we have
\[\mathop{\mathbf{E}}_{S,S^{\prime}\sim\mathcal{D}^{n}}[d_{\mathrm{TV}}(A(S),A (S^{\prime}))]=\mathop{\mathbf{E}}_{S,S^{\prime}\sim\mathcal{D}^{n}}\left[ \inf_{(h,h^{\prime})\sim\Pi(A(S),A(S^{\prime}))}\mathbf{Pr}[h\neq h^{\prime}] \right]\,. \tag{1}\]
Let \(\mathcal{R}\) be the source of randomness that \(A\) uses. The expected optimal coupling of Eq.(1) is at most \(\mathop{\mathbf{E}}_{S,S^{\prime}\sim\mathcal{D}^{n}}\left[\mathbf{Pr}_{r\sim \mathcal{R}}[A(S,r)\neq A(S^{\prime},r)]\right]\). This inequality follows from the fact that using shared randomness between the two executions of \(A\) is a particular way to couple the two random variables. To complete the proof, it suffices to notice that this upper bound is equal to
\[\mathop{\mathbf{Pr}}_{S,S^{\prime}\sim\mathcal{D}^{n},r\sim\mathcal{R}}[A(S,r )\neq A(S^{\prime},r)]\leq\rho\,.\]
The last inequality follows since \(A\) is \(\rho\)-replicable.
We now deal with the opposite direction, i.e., we show that TV indistinguishability implies replicability. In order to be formal, we need to discuss some measure theoretic properties first. Let us recall the definition of absolute continuity for two measures.
**Definition 8** (Absolute Continuity).: _Consider two measures \(P,Q\) on a \(\sigma\)-algebra \(\mathcal{B}\) of subsets of \(\Omega\). We say that \(P\) is absolutely continuous with respect to \(Q\) if for any \(E\in\mathcal{B}\) such that \(Q(E)=0\), it holds that \(P(E)=0\)._
Since the learning rules induce posterior distributions over hypotheses, this definition extends naturally to such rules.
**Definition 9**.: _Given learning rule \(A\), distribution over inputs \(\mathcal{D}\) and reference probability measure \(\mathcal{P}\), we say that \(A\) is absolutely continuous with respect to \(\mathcal{P}\) on inputs from \(\mathcal{D}\) if, for almost every sample \(S\) drawn from \(\mathcal{D}\), the posterior distribution \(A(S)\) is absolutely continuous with respect to \(\mathcal{P}\)._
In the previous definition, we fixed the data-generating distribution \(\mathcal{D}\). We next consider its distribution-free version.
**Definition 10**.: _Given learning rule \(A\) and reference probability measure \(\mathcal{P}\), we say that \(A\) is absolutely continuous with respect to \(\mathcal{P}\) if, for any distribution over inputs \(\mathcal{D}\), \(A\) is absolutely continuous with respect to \(\mathcal{P}\) on inputs from \(\mathcal{D}\)._
If \(\mathcal{X}\) is finite, then one can take \(\mathcal{P}\) to be the uniform probability measure over \(\{0,1\}^{\mathcal{X}}\) and any learning rule is absolutely continuous with respect to \(\mathcal{P}\). We now show how we can find such a prior \(\mathcal{P}\) in the case where \(\mathcal{X}\) is countable.
**Claim 1** (Reference Probability Measure for Countable Domains).: _Let \(\mathcal{X}\) be a countable domain and \(A\) be a learning rule. Then, there is a reference probability measure \(\mathcal{P}\) such that \(A\) is absolutely continuous with respect to \(\mathcal{P}\)._
Proof.: Since \(\mathcal{X}\) is countable, for a fixed \(n\), we can consider an enumeration \(\{S_{i}\}_{i\in\mathbb{N}}\) of all the possible \(n\)-tuples. Then, we can take \(\mathcal{P}\) to be a countable mixture of the corresponding posteriors, i.e., \(\mathcal{P}=\sum_{i=1}^{\infty}\frac{1}{2^{i}}A(S_{i})\). Notice that since each \(A(S_{i})\) is a probability measure, \(1/2^{i}>0\) for \(i\in\mathbb{N}\), and \(\sum_{i=1}^{\infty}1/2^{i}=1\), we have that \(\mathcal{P}\) is indeed a probability measure. We now argue that each \(A(S_{i})\) is absolutely continuous with respect to \(\mathcal{P}\). Assume towards contradiction that this is not the case and let \(E\in\mathcal{B}\) be a set such that \(\mathcal{P}(E)=0\) but \(A(S_{j})(E)\neq 0\), for some \(j\in\mathbb{N}\). Notice that \(A(S_{j})\) appears with coefficient \(1/2^{j}>0\) in the mixture that we consider, hence \(A(S_{j})(E)>0\) implies \(1/2^{j}A(S_{j})(E)>0\). Moreover, \(A(S_{i})(E)\geq 0\) for all \(i\in\mathbb{N}\), which means that \(\mathcal{P}(E)>0\), so we get a contradiction.
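The mixture construction in this proof is easy to mimic numerically when the posteriors live on a finite hypothesis space. The sketch below is our own toy illustration: it truncates the enumeration to finitely many samples (an assumption made only to keep the example finite, with the leftover geometric mass folded back into the first posterior) and then checks absolute continuity of each posterior with respect to the resulting mixture.

```python
def reference_measure(posteriors):
    """Finite truncation of P = sum_i 2^{-i} A(S_i); posteriors are dicts
    mapping hypotheses to probabilities. The leftover mass 2^{-len} is
    reassigned to the first posterior so that P sums to one."""
    P = {}
    for i, post in enumerate(posteriors, start=1):
        for h, p in post.items():
            P[h] = P.get(h, 0.0) + p / (2 ** i)
    leftover = 1.0 - sum(P.values())
    for h, p in posteriors[0].items():
        P[h] = P.get(h, 0.0) + leftover * p
    return P

posteriors = [{"h1": 0.7, "h2": 0.3}, {"h2": 0.5, "h3": 0.5}]
P = reference_measure(posteriors)
# Absolute continuity: every hypothesis with positive posterior mass has positive P-mass.
assert all(P[h] > 0 for post in posteriors for h in post if post[h] > 0)
```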
We next define when two learning rules \(A,A^{\prime}\) are equivalent.
**Definition 11** (Equivalent Learning Rules).: _Two learning rules \(A,A^{\prime}\) are equivalent if for every sample \(S\) it holds that \(A(S)=A^{\prime}(S)\), i.e., for the same input they induce the same distribution over hypotheses._
In the next result, we show that for every TV indistinguishable algorithm \(A\), that is absolutely continuous with respect to some reference probability measure \(\mathcal{P}\), there exists an equivalent learning rule which is replicable.
**Theorem 2** (TV Indistinguishability \(\Rightarrow\) Replicability).: _Let \(\mathcal{P}\) be a reference probability measure over \(\{0,1\}^{\mathcal{X}}\), and let \(A\) be a learning rule that is \(n\)-sample \(\rho\)-TV indistinguishable and absolutely continuous with respect to \(\mathcal{P}\). Then, there exists an equivalent learning rule \(A^{\prime}\) that is \(n\)-sample \(\frac{2\rho}{1+\rho}\)-replicable._
In this section, we only provide a sketch of the proof and we refer the reader to Appendix C.1 for the complete one. Let us first state how we can use the previous result when \(\mathcal{X}\) is countable.
**Corollary 1**.: _Let \(\mathcal{X}\) be a countable domain and let \(A\) be a learning rule that is \(n\)-sample \(\rho\)-TV indistinguishable. Then, there exists an equivalent learning rule \(A^{\prime}\) that is \(n\)-sample \(\frac{2\rho}{1+\rho}\)-replicable._
The proof of this result follows immediately from Claim 1 and Theorem 2.
Proof Sketch of Theorem 2.Let us consider a learning rule \(A\) satisfying the conditions of Theorem 2. Fix a distribution \(\mathcal{D}\) over inputs. The crux of the proof is that given two random variables \(X,Y\) whose TV distance is bounded by \(\rho\), we can couple them using only a carefully designed source of shared randomness \(\mathcal{R}\) so that the probability that the realizations of these random variables differ is at most \(2\rho/(1+\rho).\) We can instantiate this observation with \(X=A(S)\) and \(Y=A(S^{\prime})\). Crucially, in the countable \(\mathcal{X}\) setting, we can pick the shared randomness \(\mathcal{R}\) in a way that only depends on the learning rule \(A\), but not on \(S\) or \(S^{\prime}\). Let us now describe how this coupling works. Essentially, it can be thought of as a generalization of the von Neumann rejection-based sampling which does not necessarily require that the distribution has bounded density. Following [1], we pick \(\mathcal{R}\) to be a Poisson point process which generates points of the form \((h,y,t)\) with intensity7\(\mathcal{P}\times\mathrm{Leb}\times\mathrm{Leb}\), where \(\mathcal{P}\) is a reference probability measure with respect to which \(A\) is absolutely continuous and \(\mathrm{Leb}\) is the Lebesgue measure over \(\mathbb{R}_{+}\). Intuitively, \(h\sim\mathcal{P}\) lies in the hypotheses' space, \(y\) is a non-negative real value and \(t\) corresponds to a time value. The coupling mechanism performs _rejection sampling_ for each distribution we would like to couple (here \(A(S)\) and \(A(S^{\prime})\)): it checks (in the ordering indicated by the time parameter) for each point \((h,y,t)\) whether \(f(h)>y\) (i.e., if \(y\) falls below the density curve \(f\) at \(h\)) and accepts the first point that satisfies this condition. In the formal proof, there will be two density functions; \(f\) (resp. \(f^{\prime}\)) for the density function of \(A(S)\) (resp. \(A(S^{\prime})\)). We also refer to Figure 1. One can show (see Theorem 8) that \(\mathcal{R}\) gives rise to a coupling between \(A(S)\) and \(A(S^{\prime})\) under the condition that both measures are absolutely continuous with respect to the reference probability measure \(\mathcal{P}\).
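To make the mechanism concrete, the sketch below implements the bounded-density special case (the classical von Neumann rejection step) on a finite hypothesis space: the shared randomness is a seeded stream of proposals \((h_{i},y_{i})\) with \(h_{i}\sim\mathcal{P}\) and \(y_{i}\) uniform on \([0,M]\), and each execution accepts the first proposal whose density value exceeds \(y_{i}\). All names, the bound \(M\), and the toy densities are our own assumptions; the formal proof instead relies on the Poisson point process construction, which dispenses with the boundedness requirement.

```python
import random

def shared_proposals(P_items, M, seed, max_steps=10_000):
    """Shared stream of proposals (h, y): h ~ P over a finite hypothesis list,
    y ~ Uniform[0, M]. The same seed reproduces the same stream."""
    rng = random.Random(seed)
    hyps, weights = zip(*P_items)
    for _ in range(max_steps):
        yield rng.choices(hyps, weights=weights)[0], rng.uniform(0.0, M)

def rejection_sample(density, P_items, M, seed):
    """Accept the first shared proposal with density(h) > y; this draws a
    hypothesis from the distribution with density `density` w.r.t. P (density <= M)."""
    for h, y in shared_proposals(P_items, M, seed):
        if density(h) > y:
            return h
    raise RuntimeError("no proposal accepted; increase max_steps")

# Uniform reference measure over two hypotheses; densities of A(S) and A(S') w.r.t. it.
P_items = [("h1", 0.5), ("h2", 0.5)]
f_S = lambda h: {"h1": 1.6, "h2": 0.4}[h]
f_S_prime = lambda h: {"h1": 1.4, "h2": 0.6}[h]

r = 2023  # shared internal randomness
agree = rejection_sample(f_S, P_items, 2.0, r) == rejection_sample(f_S_prime, P_items, 2.0, r)
```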
This Poisson point process coupling technique appears in [1]. We can then apply it and get
\[\mathop{\mathbf{Pr}}_{r\sim\mathcal{R}}[A(S,r)\neq A(S^{\prime},r)]\leq\frac{2d_ {\mathrm{TV}}(A(S),A(S^{\prime}))}{1+d_{\mathrm{TV}}(A(S),A(S^{\prime}))}\,.\]
Taking the expectation with respect to the draws of \(S,S^{\prime}\), we show (after some algebraic manipulations) that \(\mathop{\mathbf{Pr}}_{S,S^{\prime}\sim\mathcal{D}^{n}r\sim\mathcal{R}}[A(S,r) \neq A(S^{\prime},r)]\leq 2\rho/(1+\rho)\). We conclude this section with the following remarks.
**Remark 1** (General Equivalence).: _In Appendix C.2, we discuss how the above equivalence actually holds for general statistical tasks beyond binary classification. We first generalize the notions of indistinguishability, replicability and \(\mathrm{TV}\) indistinguishability for general input spaces \(\mathcal{I}\) and output spaces \(\mathcal{O}\). We then discuss that replicability and \(\mathrm{TV}\) indistinguishability remain equivalent (under the same measure theoretic conditions) in these more general abstract learning scenarios._
**Remark 2** (Implementation of the Coupling).: _We note that, in order to implement algorithm \(A^{\prime}\) of Theorem2, we need sample access to a Poisson point process with intensity \(\mathcal{P}\times\mathrm{Leb}\times\mathrm{Leb}\), where \(\mathcal{P}\) is the reference probability measure from Claim1 and \(\mathrm{Leb}\) is the Lebesgue measure over \(\mathbb{R}_{+}\). Importantly, \(\mathcal{P}\) depends only on \(A\). Moreover, we need full access to the values of the density \(f_{i}\) of the distribution \(A(S_{i})\) with respect to the reference probability measure \(\mathcal{P}\), for any sample \(S_{i}\). We underline that these quantities do not depend on the data-generating distribution \(\mathcal{D}\) (since we iterate over any possible sample)._
**Remark 3** (TV Indistinguishability vs. Replicability).: _Notice that in the definition of replicability (cf. Definition2) the source of randomness \(\mathcal{R}\) needs to be specified and by changing it we can observe different behaviors for coupled executions of the algorithm. On the other hand, the definition of \(\mathrm{TV}\) indistinguishability (cf. Definition4) does not require the specification of \(\mathcal{R}\) as it states a property of the posterior distribution of the learning rule._
Figure 1: Our goal is to couple \(A(S)\) with \(A(S^{\prime})\), where these two distributions are absolutely continuous with respect to the reference probability measure \(\mathcal{P}\). A sequence of points of the form \((h,y,t)\) is generated by the Poisson point process with intensity \(\mathcal{P}\times\mathrm{Leb}\times\mathrm{Leb}\) where \(h\sim\mathcal{P},(y,t)\in\mathbb{R}_{+}^{2}\) and \(\mathrm{Leb}\) is the Lebesgue measure over \(\mathbb{R}_{+}\) (note that we do not have upper bounds for the densities). Intuitively, \(h\) lies in the hypotheses’ space, \(y\) is a non-negative real value and \(t\) corresponds to a time value. Let \(f\) be the Radon-Nikodym derivate of \(A(S)\) with respect to \(\mathcal{P}\). We assign the first (the one with minimum \(t\)) value \(h\) to \(A(S)\) that satisfies the property that \(f(h)>y\), i.e., \(y\) falls below the density curve of \(A(S)\). We assign a hypothesis to \(A(S^{\prime})\) in a similar manner. This procedure defines a data-independent way to couple the two random variables and naturally extends to multiple ones. In the figure’s example, we set \(A(S)=h_{2}\) and \(A(S^{\prime})=h_{4}\) given that \(t_{1}<t_{2}<t_{3}<t_{4}\).
## 3 TV Indistinguishability and Differential Privacy
In this section we investigate the connections between TV indistinguishability and approximate DP in binary classification. Consider a hypothesis class \(\mathcal{H}\subseteq\{0,1\}^{\mathcal{X}}\). We will say that \(\mathcal{H}\) is learnable by a \(\rho\)-TV indistinguishable learning rule \(A\) if this rule satisfies the notion of learnability under the standard realizable PAC learning model and is \(\rho\)-TV indistinguishable (see Definition16).
The main result of this section is an equivalence between approximate DP and TV indistinguishability for countable domains \(\mathcal{X}\), in the context of PAC learning. We remark that the equivalence of differential privacy with the notion of replicability is formally stated for finite outcome spaces (i.e., under the assumption that \(\mathcal{X}\) is finite) due to the use of a specific correlated sampling strategy for the direction that "DP implies replicability" in the context of classification [1]. Moreover, [14] gave a constructive way to transform a DP algorithm to a replicable one for general statistical tasks and for finite domains. Thus, combining our results in Section2 and the result of [1, 13, 14], the equivalence of TV indistinguishability and DP for _finite_ domains is immediate. We will elaborate more on the differences of our approach and [1, 14] later on. We also discuss our coupling and correlated sampling in AppendixA.4.
Recall that a learner is \((\alpha,\beta)\)-accurate if its misclassification probability is at most \(\alpha\) with probability at least \(1-\beta\).
**Theorem 3** (\((\varepsilon,\delta)\)-Dp \(\Rightarrow\) TV Indistinguishability).: _Let \(\mathcal{X}\) be a (possibly infinite) domain and \(\mathcal{H}\subseteq\{0,1\}^{\mathcal{X}}\). Let \(\gamma\in(0,1/2),\alpha,\beta,\rho\in(0,1)^{3}\). Assume that \(\mathcal{H}\) is learnable by an \(n\)-sample \((1/2-\gamma,1/2-\gamma)\)-accurate \((0.1,1/(n^{2}\log(n)))\)-differentially private learner. Then, it is also learnable by an \((\alpha,\beta)\)-accurate \(\rho\)-TV indistinguishable learning rule._
Proof Sketch of Theorem3.The proof goes through the notion of global stability (cf. Definition19). The existence of an \((\varepsilon,\delta)\)-DP learner implies that the hypothesis class \(\mathcal{H}\) has finite Littlestone dimension [1] (cf. Theorem10). Thus, we know that there exists a \(\rho\)-globally stable learner for \(\mathcal{H}\)[1] (cf. Theorem11). The next step is to use the replicable heavy-hitters algorithm (cf. Algorithm1, [13]) with frequency parameter \(O(\rho)\) and replicability parameter \(O(\rho^{\prime})\), where \(\rho^{\prime}\in(0,1)\) is the desired TV indistinguishability parameter of the learning rule. The global stability property implies that the list of heavy-hitters will be non-empty and it will contain at least one hypothesis with small error rate, with high probability. Finally, since the list of heavy-hitters is finite and has bounded size, we feed the output into the replicable agnostic learner (cf. Algorithm2). Thus, we have designed a replicable learner for \(\mathcal{H}\), and Theorem1 shows that this learner is also TV indistinguishable.
The formal proof of Theorem3 is deferred to AppendixD.2. We also include a result which shows that _list-global_ stability implies TV indistinguishability for general domains and general statistical tasks, which could be of independent interest (cf. Proposition3).
We proceed to the opposite direction where we provide an algorithm that takes as input a TV indistinguishable learning rule for \(\mathcal{H}\) and outputs a learner for \(\mathcal{H}\) which is \((\varepsilon,\delta)\)-DP. In this direction countability of \(\mathcal{X}\) is crucial.
**Theorem 4** (TV Indistinguishability \(\Rightarrow(\varepsilon,\delta)\)-Dp).: _Let \(\mathcal{X}\) be a countable domain. Assume that \(\mathcal{H}\subseteq\{0,1\}^{\mathcal{X}}\) is learnable by an \((\alpha,\beta)\)-accurate \(\rho\)-TV indistinguishable learner \(A\), for some \(\rho\in(0,1),\alpha\in(0,1/2),\beta\in\left(0,\frac{1-\rho}{1+\rho}\right)\). Then, for any \((\alpha^{\prime},\beta^{\prime},\varepsilon,\delta)\in(0,1)^{4},\) it is also learnable by an \((\alpha+\alpha^{\prime},\beta^{\prime})\)-accurate \((\varepsilon,\delta)\)-differentially private learner \(A^{\prime}\)._
We refer to AppendixD.4 for the proof. In the above statements, we omit the details about the sample complexity. We refer to Proposition3 and Proposition4 for these details. Let us now comment on the differences between [1, 14] which establish a transformation from a replicable learner to an approximately DP learner and our result. The high-level idea to obtain both of these results is similar. Essentially, the proof of [1, 14] can be viewed as a coupling between sufficiently many posteriors of the replicable learning rule using _shared randomness_ in order to achieve this coupling. In our proof, instead of using shared randomness we use the reference measure we described in previous sections to achieve this coupling. We remark that we could have obtained the same qualitative result, i.e., that TV indistinguishability implies approximate DP, by using the transformation from replicability to approximate
DP of [11, 12] in a black-box manner along with our result that \(\mathrm{TV}\) indistinguishability implies replicability (cf. Theorem2). However, this leads to worse guarantees in terms of the range of the parameters \(\alpha,\beta,\delta,\varepsilon,\rho\) than the ones stated in Theorem4. Thus, we have chosen to do a more careful analysis based on the coupling we proposed that leads to a stronger quantitative result. More concretely, the proof in [11, 12] starts by sampling many random strings independently of the dataset \(\{S_{i}\}_{i\in[k]}\) and considers many executions of the algorithm using the same random strings but different data. In our algorithm we first sample the sets \(\{S_{i}\}_{i\in[k]}\) and then we consider an optimal coupling along the \(\{A(S_{i})\}_{i\in[k]}\) which is also independent of the dataset, thus it satisfies the DP requirements. Moreover, our procedure covers a wider range of parameters \(\alpha,\beta,\rho\) compared to [11]. The reason we need countability of \(\mathcal{X}\) is because it allows us to design a _data-independent_ reference probability measure \(\mathcal{P}\), the same one as in Claim1. Then, using this reference probability measure for the coupling helps us establish the DP properties. Nevertheless, we propose a simple change to our approach which we conjecture applies to general domains \(\mathcal{X}\) and we leave it open as an interesting future direction. For a more detailed discussion, we refer the reader to AppendixD.5.
Interestingly, we underline that, as is shown in [11, 12] and as opposed to Theorem4, replicability implies DP in general spaces (cf. Theorem12).
We conclude this section by stating a general equivalence between \((\varepsilon,\delta)\)-DP and replicability for PAC learning, that follows from the previous discussion, in particular by combining Theorem12[11, 12], and Lemma9.
**Theorem 5** (Replicability \(\iff\) Differential Privacy in PAC Learning).: _Let \(\mathcal{X}\) be a (possibly infinite) domain and let \(\mathcal{H}\subseteq\left\{0,1\right\}^{\mathcal{X}}\). Then, \(\mathcal{H}\) is replicably learnable if and only if it is approximately-DP learnable._
**Remark 4** (Dependence on the Parameters).: _In the case of \(\mathrm{TV}\) indistinguishability \(\Rightarrow\) DP, the blowup in the sample complexity is stated explicitly in Proposition4._
_For the direction \(\mathrm{DP}\Rightarrow\mathrm{TV}\) indistinguishability it is a bit trickier to state the exact sample complexity blow-up because we do not make explicit use of the DP learner. Instead, we use the fact that the existence of a non-trivial DP learner implies that the class has finite Littlestone dimension and then we use an appropriate algorithm that is known to work for such classes. In this case, it suffices to let the parameters of the DP learner to be \(\varepsilon\in(0,0.1),\delta\in\left(0,\frac{1}{n^{2}\log(n)}\right),\alpha \in(0,1/2),\beta\in(0,1/2)\) and the parameters of the desired \(\mathrm{TV}\) indistinguishable \((\alpha^{\prime},\beta^{\prime})\)-accurate learner are unconstrained, i.e., \(\rho\in(0,1),\alpha^{\prime}\in(0,1),\beta^{\prime}\in(0,1)\). If we denote the Littlestone dimension of the class by \(L\), then, as shown in Proposition3 the sample complexity of the \(\mathrm{TV}\) indistinguishable learner is \(\mathrm{poly}(L,1/\rho,1/\alpha^{\prime},\log(1/\beta^{\prime}))\)8._
Footnote 8: This holds under the (standard) assumption that uniform convergence holds for Littlestone classes. If this is not the case, we get \(\mathrm{poly}(2^{2L},1/\rho,1/\alpha^{\prime},\log(1/\beta^{\prime}))\) sample complexity (Corollary6).
**Remark 5** (Beyond Binary Classification).: _The only transformation that is restricted to binary classification is the one from DP to \(\mathrm{TV}\) indistinguishability. All the other transformations, (and the boosting algorithms that we present in the upcoming section), extend to general statistical tasks. Let us now shortly discuss how to extend our result e.g., to the multi-class setting, using results from the private multiclass learning literature [13, 12]. [13] showed that private multiclass learnability implies finite multiclass Littlestone dimension and [12] showed how to extend the binary list-globally stable learner that we use to the multiclass setting. Using these two main ingredients, the rest of our approach for the binary classification setting should extend to the multiclass setting. The extension to the regression problem seems to be more challenging. Even though [13] showed that private regression implies finiteness of some appropriate Littlestone dimension, it is not clear yet how to derive a (list-)globally stable algorithm for this problem._
## 4 Amplifying and Boosting \(\mathrm{TV}\) Indistinguishable Algorithms
In this section we study the following fundamental question.
**Question 1**.: _Consider a weak \(\mathrm{TV}\) indistinguishable learning rule both in terms of the indistinguishability parameter and the accuracy. Is it possible to amplify its indistinguishability and to boost its accuracy?_
For instance, in the context of approximate differential privacy, a series of works has led to (constructive) algorithms that boost the accuracy and amplify the privacy guarantees (e.g., [13, 14, 15]). This result builds upon the equivalence of online learnability and approximate differential privacy. Our result relating DP to TV indistinguishability implies the following existential result.
**Corollary 2**.: _Let \(\mathcal{X}\) be a countable domain. Suppose that for some sample size \(n_{0}\), there exists an \((\alpha_{0},\beta_{0})\)-accurate \(\rho_{0}\)-TV indistinguishable learner \(A\) for a class \(\mathcal{H}\subseteq\{0,1\}^{\mathcal{X}}\) with \(\alpha_{0}\in(0,1/2),\rho_{0}\in(0,1),\beta_{0}\in\left(0,\frac{1-\rho_{0}}{1+ \rho_{0}}\right)\). Then, for any \((\alpha,\beta,\rho)\in(0,1)^{3}\), \(\mathcal{H}\) admits an \((\alpha,\beta)\)-accurate \(\rho\)-TV indistinguishable learner \(A^{\prime}\)._
This result relies on connections between learnability by TV indistinguishable learners and finiteness of the Littlestone dimension of the underlying hypothesis class that were discussed in Section 3. In particular, Corollary 7 shows that the existence of such a non-trivial TV indistinguishable learner implies that \(\mathcal{H}\) has finite Littlestone dimension, and Proposition 3 states that the finiteness of the Littlestone dimension of \(\mathcal{H}\) implies the existence of an \((\alpha,\beta)\)-accurate \(\rho\)-TV indistinguishable learner, for arbitrarily small choices of \(\alpha,\beta,\rho\). It is not hard to see that we need to constrain \(\alpha\in(0,1/2)\), because the algorithm needs to have an advantage compared to the random classifier. Moreover, it should be the case that \(\beta\in(0,1-\rho).\) If \(\beta\geq 1-\rho\) then the algorithm which outputs a constant classifier with probability \(\beta\) and an \(\alpha\)-good one with the remaining probability is \(\rho\)-TV indistinguishable and \((\alpha,\beta)\)-accurate. An interesting open problem is to investigate what happens when \(\beta\in\left(\frac{1-\rho}{1+\rho},1-\rho\right)\).
We underline that Corollary2 is existential and does not make actual use of the weak TV indistinguishable learner that is given as input. Hence, it is natural to try to come up with sample-efficient and constructive approaches that utilize the weak learner through black-box oracle calls to it during the derivation of the strong one. In what follows, we aim to design such algorithms. We remind the reader that if we constrain ourselves to work in the setting where \(\mathcal{X}\) is countable, then the absolute continuity requirement in the next theorems comes immediately, due to Claim1.
Indistinguishability Amplification. We first consider the amplification of the indistinguishability guarantees of an algorithm. An important ingredient of our approach is a replicable algorithm for finding heavy hitters of a distribution, i.e., elements whose frequency is above some given threshold. This algorithm has appeared in [11, 13]. However, the dependence of the number of samples on the confidence parameter in these works is polynomial. We present a new variant of this algorithm that has polylogarithmic dependence on the confidence parameter. Moreover, using a stronger concentration inequality, we improve the dependence of the number of samples on the error parameter. We believe that this result could be of independent interest. We also design an agnostic learner for finite hypothesis classes. However, the dependence of the number of samples on \(|\mathcal{H}|\) is polynomial. We believe that an interesting question is to design agnostic learners with polylogarithmic dependence on \(|\mathcal{H}|\). We refer the reader to Appendix E.
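As a rough illustration of the idea behind such a routine (not the authors' exact algorithm), the sketch below uses a cutoff randomized with the *shared* randomness: two executions on independent samples return the same set unless some empirical frequency falls inside the (unlikely) window around the random cutoff. All thresholds and sample sizes are placeholders.

```python
import numpy as np
from collections import Counter

def replicable_heavy_hitters(sample, freq_threshold, eps, shared_rng):
    """Return elements whose frequency is (roughly) above freq_threshold,
    using a cutoff drawn from the shared randomness so that independent runs agree whp."""
    n = len(sample)
    emp = {x: c / n for x, c in Counter(sample).items()}   # empirical frequencies
    cutoff = shared_rng.uniform(freq_threshold - eps, freq_threshold + eps)
    return {int(x) for x, p in emp.items() if p >= cutoff}

# Two executions with independent samples but the same shared randomness:
data_rng = np.random.default_rng()
population, probs = np.array([0, 1, 2, 3]), np.array([0.5, 0.3, 0.15, 0.05])
s1 = data_rng.choice(population, size=20000, p=probs)
s2 = data_rng.choice(population, size=20000, p=probs)
shared_seed = 7
print(replicable_heavy_hitters(s1, 0.2, 0.04, np.random.default_rng(shared_seed)))
print(replicable_heavy_hitters(s2, 0.2, 0.04, np.random.default_rng(shared_seed)))
```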
**Theorem 6** (Indistinguishability Amplification).: _Let \(\mathcal{P}\) be a reference probability measure over \(\{0,1\}^{\mathcal{X}}\) and \(\mathcal{D}\) be a distribution over inputs. Consider the source of randomness \(\mathcal{R}\) to be a Poisson point process with intensity \(\mathcal{P}\times\mathrm{Leb}\times\mathrm{Leb}\), where \(\mathrm{Leb}\) is the Lebesgue measure over \(\mathbb{R}_{+}\). Consider a weak learning rule \(A\) that is (i) \(\rho\)-TV indistinguishable with respect to \(\mathcal{D}\) for some \(\rho\in(0,1)\), (ii) \((\alpha,\beta)\)-accurate for \(\mathcal{D}\) for some \((\alpha,\beta)\in(0,1)^{2}\), such that \(\beta<\frac{2\rho}{\rho+1}-2\sqrt{\frac{2\rho}{\rho+1}}+1\), and, (iii) absolutely continuous with respect to \(\mathcal{P}\) on inputs from \(\mathcal{D}\). Then, for any \(\rho^{\prime},\varepsilon,\beta^{\prime}\in(0,1)^{3}\), there exists a learner \(\textsc{Ampl}(A,\mathcal{R},\beta^{\prime},\varepsilon,\rho^{\prime})\) that is \(\rho^{\prime}\)-TV indistinguishable with respect to \(\mathcal{D}\), and \((\alpha+\varepsilon,\beta^{\prime})\)-accurate for \(\mathcal{D}\)._
We remark that the above result makes strong use of the equivalence between replicability and TV indistinguishability. Our algorithm is a variant of the amplification algorithm that appeared in [13], which (i) works for a wider range of parameters and (ii) its sample complexity is polylogarithmic in the parameter \(\beta^{\prime}\).
Accuracy Boosting.Next, we design an algorithm that boosts the accuracy of an \(n\)-sample \(\rho\)-TV indistinguishable algorithm and preserves its TV indistinguishability guarantee. Our algorithm is a variant of the boosting mechanism provided in [13]. Similarly as in the case of amplification, our variant improves upon the dependence of the number of samples on the parameter \(\beta^{\prime}\).
**Theorem 7** (Accuracy Boosting).: _Let \(\mathcal{P}\) be a reference probability measure over \(\{0,1\}^{\mathcal{X}}\) and \(\mathcal{D}\) be a distribution over inputs. Consider the source of randomness \(\mathcal{R}\) to be a Poisson point process with intensity \(\mathcal{P}\times\mathrm{Leb}\times\mathrm{Leb},\) where \(\mathrm{Leb}\) is the Lebesgue measure over \(\mathbb{R}_{+}\). Consider a weak learning rule \(A\) that is (i) \(\rho\)-\(\mathrm{TV}\) indistinguishable with respect to \(\mathcal{D}\) for some \(\rho\in(0,1)\), (ii) \((1/2-\gamma,\beta)\)-accurate for \(\mathcal{D}\) for some \((\gamma,\beta)\in(0,1)^{2}\), and, (iii) absolutely continuous with respect to \(\mathcal{P}\) on inputs from \(\mathcal{D}\). Then, for any \(\beta^{\prime},\varepsilon,\rho^{\prime}\in(0,1)^{3}\), there exists a learner \(\textsc{Boost}(A,\mathcal{R},\varepsilon)\) that is \(\rho^{\prime}\)-\(\mathrm{TV}\) indistinguishable with respect to \(\mathcal{D}\) and \((\varepsilon,\beta^{\prime})\)-accurate for \(\mathcal{D}\)._
We can combine the amplification and boosting results for a wide range of parameters and get the next corollary.
**Corollary 3**.: _Let \(\mathcal{X}\) be a countable domain and \(A\) be an \(n\)-sample \(\rho\)-\(\mathrm{TV}\) indistinguishable \((\alpha,\beta)\)-accurate algorithm, for some \(\rho\in(0,1),\alpha\in(0,1/2),\beta\in\left(0,\frac{2\rho}{\rho+1}-2\sqrt{ \frac{2\rho}{\rho+1}}+1\right).\) Then, for any \(\rho^{\prime},\alpha^{\prime},\beta^{\prime}\in(0,1)^{3},\) there exists a \(\rho^{\prime}\)-\(\mathrm{TV}\) indistinguishable \((\alpha^{\prime},\beta^{\prime})\)-accurate learner \(A^{\prime}\) that requires at most \(O\left(\mathrm{poly}\left(1/\rho,1/\alpha^{\prime},\log(1/\beta^{\prime}) \right)\cdot n\right)\) samples from \(\mathcal{D}\)._
The proof of this result follows immediately from Theorem6, Theorem7, and from the fact that we can design the reference probability measure \(\mathcal{P}\) for countable domains (cf. Claim1). This result leads to two natural questions: what is the tightest range of \(\beta\) for which we can amplify the stability parameter \(\rho\) and under what assumptions can we design such boosting and amplification algorithms for general domains \(\mathcal{X}\)? For a more detailed discussion, we refer the reader to AppendixE.3, AppendixE.4.
**Remark 6** (Dependence on the Parameters).: _We underline that the polynomial dependence on \(\rho\) in the boosting result is not an artifact of the algorithmic procedure or the analysis we provide, but it is rather an inherent obstacle in \(\mathrm{TV}\) indistinguishability. [11] show that in order to estimate the bias of a coin \(\rho\)-replicably with accuracy \(\tau\) one needs at least \(1/(\tau^{2}\rho^{2})\) coin tosses. Since \(\rho\)-\(\mathrm{TV}\) indistinguishability implies \((2\rho/(1+\rho))\)-replicability as we have shown (without any blow-up in the sample complexity), we also inherit this lower bound. Our main goal behind the study of the boosting algorithms is to identify the widest range of parameters \(\alpha,\rho,\beta\) such that coming up with a \(\rho\)-\(\mathrm{TV}\) indistinguishable algorithm switches from being trivial to being difficult. For example, in PAC learning we know that if the accuracy parameter is strictly less than \(1/2\), then there are sample-efficient boosting algorithms that can drive it down to any \(\varepsilon>0\). In the setting we are studying, it is crucial to understand the relationship between \(\beta,\rho\), see Appendix E.3._
## 5 Conclusion
In this work, we studied TV indistinguishability and established connections to similar notions that have been proposed in the past, i.e., differential privacy, replicability, global stability, and pseudo-global stability, under mild measure-theoretic assumptions (e.g., countable \(\mathcal{X}\)). Our work leaves the following open problems:
1. Does the equivalence between TV indistinguishability and replicability hold for general spaces, i.e., when the input domain is not countable?
2. Does the equivalence between TV indistinguishability and \((\varepsilon,\delta)\)-DP hold for general spaces?
3. How can we boost the correctness and amplify the indistinguishability parameter of a weak TV indistinguishable learner to a strong one in general spaces?
4. What is the minimal condition that characterizes TV indistinguishable PAC learnability? This is closely related to understanding the limits of TV indistinguishable boosting algorithms.
## 6 Acknowledgements
We thank Mark Bun, Marco Gaboardi, Max Hopkins, Russell Impagliazzo, Rex Lei, Toniann Pitassi, Satchit Sivakumar, and Jessica Sorrell for illuminating discussions regarding the connection of this work with the recent paper [1]. We also thank Kyriakos Lotidis for helpful discussions about the Poisson point process.
2307.05465 | Simulation of magnetohydrodynamic flows of liquid metals with heat
transfer or magnetic stirring | We discuss the effects of nonhomogeneous magnetic fields in liquid metal
flows in two different configurations. In the first configuration, we briefly
report the impact of fringing magnetic fields in a turbulent
Rayleigh-B{\'e}nard convection setup, where it was shown that the global heat
transport decreases with an increase of fringe-width. The convective motion in
regions of strong magnetic fields is confined near the sidewalls. In the second
configuration, we numerically study the effects of an oscillating magnetic
obstacle with different frequencies of oscillation on liquid metal flow in a
duct. The Reynolds number is low such that the wake of the stationary magnetic
obstacle is steady. The transverse oscillation of the magnet creates a
sinusoidal time-dependent wake reminiscent of the vortex shedding behind solid
obstacles. We examine the behavior of the streamwise and spanwise components of
the Lorentz forces as well as the work done by the magnets on the fluid. The
frequency of the oscillation of the streamwise component of Lorentz force is
twice that of the spanwise component as in the case of lift and drag on solid
cylindrical obstacles. The total drag force and the energy transferred from the
magnets to the fluid show a non-monotonic dependence on the frequency of
oscillation of the magnetic obstacle indicative of a resonant excitation of the
sinusoidal vortex shedding. | Shashwat Bhattacharya, Seyed Loghman Sanjari, Dmitry Krasnov, Thomas Boeck | 2023-07-11T17:50:36Z | http://arxiv.org/abs/2307.05465v1 | # Simulation of magnetohydrodynamic flows of liquid metals with heat transfer or magnetic stirring
###### Abstract
We discuss the effects of nonhomogeneous magnetic fields in liquid metal flows in two different configurations. In the first configuration, we briefly report the impact of fringing magnetic fields in a turbulent Rayleigh-Benard convection setup, where it was shown that the global heat transport decreases with an increase of fringe-width. The convective motion in regions of strong magnetic fields is confined near the sidewalls. In the second configuration, we numerically study the effects of an oscillating magnetic obstacle with different frequencies of oscillation on liquid metal flow in a duct. The Reynolds number is low such that the wake of the stationary magnetic obstacle is steady. The transverse oscillation of the magnet creates a sinusoidal time-dependent wake reminiscent of the vortex shedding behind solid obstacles. We examine the behavior of the streamwise and spanwise components of the Lorentz forces as well as the work done by the magnets on the fluid. The frequency of the oscillation of the streamwise component of Lorentz force is twice that of the spanwise component as in the case of lift and drag on solid cylindrical obstacles. The total drag force and the energy transferred from the magnets to the fluid show a non-monotonic dependence on the frequency of oscillation of the magnetic obstacle indicative of a resonant excitation of the sinusoidal vortex shedding.
## 1 Introduction
Magnetohydrodynamic (MHD) flows, i.e., flows of electrically conducting fluids under the influence of magnetic fields, are frequently encountered in engineering and astrophysical applications. In such flows, the fluid is acted upon by the Lorentz force in addition to the force driving the flow. Industrial and technological applications of such flows include heating, pumping, stirring, and levitation of liquid metals, cooling blankets in fusion reactors, and liquid-metal batteries. In the context of astrophysics, magnetic fields strongly influence the flows in the sun and the stars and are responsible for the formation of sunspots and solar flares.
Magnetoconvection has been studied extensively in the past, but most of the studies focused on flows under the influence of a homogeneous magnetic field, which is an idealized approximation. However, in most engineering and astrophysical applications (such as liquid metal batteries, cooling blankets in fusion reactors, electromagnetic stirring, solar spots, etc.) the magnetic fields are localized and thus vary in space [1, 2]. Further, strong homogeneous fields in large regions of space can only be generated by magnets of large size which are difficult to design and very costly to build and operate [3]. Thus, it is important to understand the impact of spatially varying magnetic fields on magnetohydrodynamic flows. Recently, Bhattacharya _et al._[4] studied the effects of spatially varying magnetic fields on MHD flows driven by buoyancy (magnetoconvection); these effects will be briefly summarized in this paper. There have been several studies on MHD duct flows with different configurations of spatially varying fields (see, for example, Sterl [5] and Prinz _et al._[6]); however, in this paper, we focus specifically on duct flows with a localized zone of applied magnetic field (henceforth referred to as _magnetic obstacle_). Flows past stationary magnetic obstacles have been studied before [7, 8, 9, 10, 11]. Similarities and differences between stationary magnetic and solid obstacles have been discussed by Votyakov and Kassinos [12]. Unsteady wakes were only found for fairly large Reynolds numbers where the flow develops small-scale turbulent eddies [10]. In order to realize an unsteady flow past a magnetic obstacle at a rather low Reynolds number it seems necessary to add an additional periodic motion of the magnet. We therefore consider the effects of _oscillating_ magnetic obstacles on MHD duct flow in the present paper, which can be interesting in the context of magnetic stirring. We also remark that oscillating solid obstacles have been studied previously but it appears that such studies are lacking for magnetic obstacles so far.
The outline of the paper is as follows. In Sec. 2, we discuss the mathematical model. Section 3 describes the numerical method used in our simulations. We discuss the results in Sec. 4 and conclude in Sec. 5.
## 2 Mathematical model
In this section, we describe the setup and the mathematical formulation of our problems. The study will be conducted under the quasi-static approximation, in which the induced magnetic field is neglected as it is very small compared to the applied
magnetic field. This approximation is fairly accurate for MHD flows of liquid metals [2]. The governing equations of MHD flows are given by
\[\nabla\cdot\mathbf{u} = 0, \tag{1}\] \[\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla\mathbf{u} = -\nabla p+\nu\nabla^{2}\mathbf{u}+\mathbf{f}, \tag{2}\]
where \(\mathbf{u}\) and \(p\) are the velocity and pressure fields respectively, \(\nu\) is the kinematic viscosity, and \(\mathbf{f}\) is the total body force acting on the fluid. For MHD duct flow, \(\mathbf{f}\) is the specific Lorentz force (i.e. force per unit mass, henceforth denoted as \(\mathbf{f}_{L}\)) and is given by
\[\mathbf{f}=\mathbf{f}_{L}=\frac{1}{\rho}(\mathbf{j}\times\mathbf{B}_{0}),\quad\mathbf{j}=\sigma(- \nabla\phi+\mathbf{u}\times\mathbf{B}_{0}),\quad\nabla^{2}\phi=\nabla\cdot(\mathbf{u} \times\mathbf{B}_{0}). \tag{4}\]
where \(\mathbf{j}\) is the current density, \(\mathbf{B}_{0}\) is the imposed magnetic field strength, \(\sigma\) and \(\rho\) are the electric conductivity and mean density of the fluid, respectively, and \(\phi\) is the electric potential.
In magnetoconvection, \(\mathbf{f}=\mathbf{f}_{L}+\mathbf{f}_{b}\), where \(\mathbf{f}_{b}=\alpha gT\hat{z}\) is the buoyancy force, \(\alpha\) is the thermal expansion coefficient, \(g\) is the gravitational acceleration, and \(T\) is the temperature field. Magnetoconvection is additionally governed by the following thermal energy equation which describes the evolution of the temperature field \(T\):
\[\frac{\partial T}{\partial t}+\mathbf{u}\cdot\nabla T=\kappa\nabla^{2}T, \tag{5}\]
where \(\kappa\) is the thermal diffusivity of the fluid.
MHD liquid-metal duct flows are governed by two nondimensional parameters: the Reynolds number \(Re\), which is the ratio of the inertial force to the viscous force; and the Hartmann number \(Ha\), which is the ratio of the Lorentz force to the viscous force. Liquid-metal magnetoconvection is governed by three nondimensional parameters: the Rayleigh number \(Ra\), the ratio of the buoyancy force to the dissipative forces; the Prandtl number \(Pr\) - the ratio of kinematic viscosity to the thermal diffusivity; and the Hartmann number \(Ha\). These quantities are given by
\[Re=\frac{UL}{\nu},\quad Ha=BL\sqrt{\frac{\sigma}{\rho\nu}},\quad Ra=\frac{ \alpha g\Delta L^{3}}{\nu\kappa},\quad Pr=\frac{\nu}{\kappa}, \tag{6}\]
where \(U\), \(L\), and \(\Delta\) are the characteristic velocity, length, and temperature scales respectively. For magnetoconvection, we consider the Rayleigh-Benard setup consisting of fluid enclosed between a cooler top plate and a warmer bottom plate (the temperature difference between the plates being \(\Delta\)), with the plates separated by a distance \(H\). In this case, \(H\) and \(\Delta\) respectively are the characteristic length and temperature scales for \(Ha\) and \(Ra\).
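As a quick numerical illustration of Eq. (6), the nondimensional groups can be evaluated directly from dimensional quantities; the fluid properties below are rough liquid-metal values chosen only for the example and are not parameters of this study.

```python
import numpy as np

def nondimensional_numbers(U, L, dT, nu, kappa, sigma, rho, alpha, g, B):
    """Reynolds, Hartmann, Rayleigh and Prandtl numbers from Eq. (6), SI inputs."""
    Re = U * L / nu
    Ha = B * L * np.sqrt(sigma / (rho * nu))
    Ra = alpha * g * dT * L**3 / (nu * kappa)
    Pr = nu / kappa
    return Re, Ha, Ra, Pr

# Rough GaInSn-like properties (illustrative only), 5 cm cell, 5 K difference, 0.1 T field:
Re, Ha, Ra, Pr = nondimensional_numbers(U=0.01, L=0.05, dT=5.0, nu=3.4e-7, kappa=1.05e-5,
                                        sigma=3.3e6, rho=6.4e3, alpha=1.2e-4, g=9.81, B=0.1)
```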
As discussed in Sec. 1, we describe the effects of spatially varying magnetic fields in liquid metal flow for two configurations - (i) thermal convection in a box, and (ii) pressure-driven duct flow. In the first configuration, we consider a horizontally extended convection box of size \(l_{x}\times l_{y}\times H=16\times 32\times 1\) which is influenced by magnetic fields generated by two semi-infinite permanent magnets. The north pole of one magnet faces the bottom of the convection cell and the south pole of the second magnet faces the top of the convection cell. These magnets extend from \(-\infty\) to \(\infty\) in the \(x\)-direction, \(l_{y}/2\) to \(\infty\) in the \(y\)-direction, from near the top wall to \(\infty\) in the positive \(z\)-direction, and from near the bottom wall to \(-\infty\) in the negative \(z\) direction. For a detailed description of the setup, the readers are referred to Bhattacharya _et. al_[4]. In this configuration, the lateral component of the magnetic field (\(B_{x}\)) vanishes, and the longitudinal and vertical components respectively are logarithmic and inverse-tangent functions of the spatial coordinates \(y\) and \(z\) and the gap \(\delta\) between the magnetic poles and the horizontal walls. The magnetic field distribution is such that its strength is negligible for \(0<y\lesssim l_{y}/2\), increases steeply at \(y\sim l_{y}/2\), and saturates close to its maximum value at \(y\gtrsim l_{y}/2\). When \(\delta\) is increased keeping other parameters same, the total magnetic flux through the convection cell remains the same, but the gradient of the magnetic field at \(y\sim l_{y}/2\) decreases, thereby increasing the fringe-width of the magnetic field. The aim of the study was to determine the effects of fringe-width on the heat and momentum transport.
The second configuration, which is the main focus of this paper, consists of liquid metal flow in a duct with two oscillating permanent magnetic poles near the top and bottom walls. The magnetic poles are semi-infinite in the \(z\)-direction and measure \(M_{x}=3\) units along the streamwise direction and \(M_{y}=4\) units along the spanwise direction in agreement with Votyakov and Kassinos [12]. The spanwise dimension of the duct is \(L_{y}=50\) units and the height is \(L_{z}=1\) unit. The vertical gap between the magnetic poles and the liquid domain (one quarter of the layer height) also corresponds to Ref. [12]. A schematic diagram of the setup is shown in Fig. 1.
The magnetic field \(\mathbf{B}=(B_{x},B_{y},B_{z})\) generated by the magnetic poles is given by the formula derived by Votyakov _et al._[13]. The oscillation takes place along the spanwise direction and the \(y\)-coordinate of the center of the magnet at time \(t\) is given by
\[y_{m}=A\sin(2\pi f_{0}t), \tag{7}\]
where \(A\) and \(f_{0}\) are the amplitude and frequency of oscillation respectively. The magnets therefore have a velocity \(\mathbf{u}_{m}\) with respect to the flow domain. Since the induction of currents depends on the relative velocity between conductor and magnet, the difference \(\mathbf{u}-\mathbf{u}_{m}\) must be used in Ohm's law (4b) and in the charge conservation condition (4c).
In our work, the oscillation amplitude is set to \(A=1\), i.e. the ratio \(A/M_{y}=0.25\). For the frequency we choose a reference value based on the Strouhal number \(\textit{St}_{0}=0.25\) in Ref. [12]. The nondimensional reference frequency in our work is therefore
\[f_{s}=\frac{\textit{St}_{0}U}{M_{y}}=\frac{0.25\times 1}{4}=0.0625 \tag{8}\]
where \(U=1\) is the nondimensional mean streamwise velocity. The frequency ratio is defined as \(F=f_{0}/f_{s}\), i.e., the ratio of the frequency of oscillation of the magnetic poles to that of vortex shedding for the stationary magnetic obstacle of the same dimensions at a Reynolds number \(\textit{Re}=900\) in Ref. [12].
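For reference, the magnet kinematics of Eq. (7) for a given frequency ratio can be evaluated as follows (a small sketch with the values \(A=1\) and \(f_{s}=0.0625\) from the text; the chosen \(F\) and time resolution are arbitrary).

```python
import numpy as np

A, f_s = 1.0, 0.0625                    # oscillation amplitude and reference frequency (Eq. 8)

def magnet_motion(t, F):
    """Spanwise magnet position y_m(t) (Eq. 7) and its velocity for frequency ratio F = f_0/f_s."""
    f0 = F * f_s
    y_m = A * np.sin(2.0 * np.pi * f0 * t)
    u_m = 2.0 * np.pi * f0 * A * np.cos(2.0 * np.pi * f0 * t)   # d(y_m)/dt
    return y_m, u_m

t = np.linspace(0.0, 300.0, 3001)       # convective time units
y_m, u_m = magnet_motion(t, F=0.5)
# In the quasi-static Ohm's law, the relative velocity u - (0, u_m, 0) replaces u.
```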
## 3 Numerical method
We conduct direct numerical simulations of our setups using a second-order finite difference code developed by Krasnov _et al._[14; 15]. For the magnetoconvection setup, a non-uniform grid of resolution \(4800\times 9600\times 300\) was used. All the walls were rigid and electrically insulated such that the electric current density \(\mathbf{j}\) formed closed field lines inside the cell. The top and bottom walls were fixed at \(T=-0.5\) and \(T=0.5\) respectively, and the sidewalls were adiabatic. The Rayleigh number, Prandtl number, and the Hartmann number based on the maximum value of the vertical magnetic field were fixed at \(\textit{Ra}=10^{5}\), \(\textit{Pr}=0.021\), and \(\textit{Ha}_{z,max}=120\). The gap \(\delta\) between the magnetic poles and the conducting plates was varied from \(\delta=0.01\)\(\mathrm{{\it{H}}}\) to \(\delta=9\)\(\mathrm{{\it{H}}}\), where \(H\) is the cell height.
For the configuration of flow past oscillating magnetic obstacle, we employ a rectangular domain of dimensions \(L_{x}\times L_{y}\times L_{z}=200\times 50\times 1\) with a grid resolution of \(1024\times 384\times 32\). The fluid enters the domain at \(x=0\) with a nearly fully-developed laminar flow profile that is approximated by the analytical expression
\[u=\frac{\cosh\left(\frac{1.55L_{x}}{L_{z}}\left|\frac{2y}{L_{y}}\right|\right) -\cosh\left(\frac{1.55L_{x}}{L_{z}}\right)}{1-\cosh\left(\frac{1.55L_{x}}{L_{ z}}\right)}\cdot\frac{\cosh\left(\frac{1.55L_{x}}{L_{y}}\left|\frac{2z}{L_{z}} \right|\right)-\cosh\left(\frac{1.55L_{x}}{L_{y}}\right)}{1-\cosh\left(\frac {1.55L_{x}}{L_{y}}\right)}. \tag{9}\]
The fluid leaves the domain at \(x=L_{x}\) where \(\partial\mathbf{u}/\partial x=0\). The magnetic poles are located at \(x=50\). The mesh is non-uniform in the \(y\) and \(z\)-directions. The top, bottom, and sidewalls are rigid (no-slip) and electrically insulated. We fix \(\textit{Re}=400\) and the Hartmann number based on the maximum vertical magnetic field as \(\textit{Ha}_{z,max}=70\), and vary the frequency ratio from \(F=0.2\) to \(F=0.8\). It must be noted that the characteristic length and velocity for the above nondimensional quantities are \(L_{z}/2\) (that is, half of the duct height) and the bulk horizontal velocity at the inlet (\(U\)), respectively.
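For reference, the inlet profile of Eq. (9) can be evaluated on the inflow plane as below; this is a direct transcription of the expression above, and the grid resolution used here is arbitrary.

```python
import numpy as np

def inlet_profile(y, z, Lx=200.0, Ly=50.0, Lz=1.0):
    """Approximate fully developed laminar profile at the inlet, Eq. (9)."""
    a = 1.55 * Lx / Lz
    b = 1.55 * Lx / Ly
    fy = (np.cosh(a * np.abs(2.0 * y / Ly)) - np.cosh(a)) / (1.0 - np.cosh(a))
    fz = (np.cosh(b * np.abs(2.0 * z / Lz)) - np.cosh(b)) / (1.0 - np.cosh(b))
    return fy * fz

y = np.linspace(-25.0, 25.0, 385)              # spanwise coordinate, -Ly/2 .. Ly/2
z = np.linspace(-0.5, 0.5, 33)                 # wall-normal coordinate, -Lz/2 .. Lz/2
U_in = inlet_profile(y[:, None], z[None, :])   # shape (385, 33); vanishes at the walls
```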
For both the simulation setups, the elliptic equations for pressure, electric potential, and the temperature were solved by applying cosine transforms along the directions with uniform grid spacing and using a tridiagonal solver along the direction with non-uniform grid stretching. The diffusive term in the temperature transport equation is treated implicitly. The time discretization of the momentum equation uses the fully explicit Adams-Bashforth/Backward-Differentiation method of second order.
Figure 1: Schematic of the setup for the duct flow with a localized oscillating magnetic field. The magnetic poles are semi-infinite in \(z\)-direction and oscillate along \(y\)-direction. The height \(L_{z}\) of the duct is unity.
## 4 Results
In this section, we first briefly summarize the results on the magnetoconvection simulations and then describe in detail the results on the flow past oscillating magnetic obstacle.
### Results on magnetoconvection
A schematic of the magnetoconvection setup is illustrated in Fig. 2(a). The magnetic field generated by the magnets is strong enough to suppress the flow in the high-magnetic-flux region of the convection cell. We observe that as the local vertical magnetic field strength increases, the large scale structures become thinner and align themselves perpendicular to the longitudinal sidewalls. The dependence of the local Reynolds and Nusselt numbers on the local Hartmann number (based on the vertical component of the magnetic field) was determined; this dependence was observed to be independent of the fringe-width. The global heat transport was observed to decrease with increasing fringe-width for strong magnetic fields but to marginally increase with increasing fringe-width for weak magnetic fields. The convective motion became confined to the vicinity of the sidewalls in the regions of strong magnetic fields as shown in Fig. 2(b). The amplitudes of these wall modes were shown to exhibit a non-monotonic dependence on the fringe-width.
For further details on the results, the readers are referred to Bhattacharya _et al._[4]. In the next section, we discuss the results for the MHD duct flow setup.
### Results on flow past oscillating magnetic obstacle
The simulations of the flow past magnetic obstacles are run for 300 convective time units after reaching a fully-developed state. The contour plots of instantaneous streamwise velocity are exhibited in Figs. 3(a-c) and those with time-averaging in Figs. 3(d-f) for \(F=0.2\), \(F=0.5\), and \(F=0.8\). The figures show regions of reduced and even reversed streamwise velocity in the regions of strong magnetic field and also in the wake of the magnetic obstacle. The regions of reduced instantaneous velocity exhibit a wavy pattern. It can be visually observed from Figs. 3(a-c) that as the magnets oscillate faster, the wavelength of spatial oscillation decreases. There is an increase in the amplitude of this path from \(F=0.2\) to \(F=0.5\), but the amplitude decreases with a further rise in \(F\). For \(F=0.5\), the wake of the magnetic obstacle comprises of small-scale eddies, indicating that the flow becomes turbulent. The time-averaged streamwise velocity contours show that the length of the reversed flow region first decreases as \(F\) is increased to 0.5, and then increases with a further increase of \(F\).
We examine the components of the total Lorentz force in the streamwise (\(f_{L,x}\)) and spanwise (\(f_{L,y}\)) directions. Note that \(f_{L,x}\) and \(f_{L,y}\) are the analogs of the drag and lift forces, respectively, in flow past solid cylinders. These quantities are
\[f_{L,x}=\int_{-L_{x}/2}^{L_{x}/2}\int_{-L_{y}/2}^{L_{y}/2}\int_{0}^{L_{x}}( \boldsymbol{f}_{L}\cdot\boldsymbol{\hat{x}})\,dx\,dy\,dz,\quad f_{L,y}=\int _{-L_{x}/2}^{L_{x}/2}\int_{-L_{y}/2}^{L_{y}/2}\int_{0}^{L_{x}}(\boldsymbol{f} _{L}\cdot\boldsymbol{\hat{y}})\,dx\,dy\,dz. \tag{10}\]
Figures 4(a,b,c) exhibit the plots of the above quantities versus the convective time \(t\) for \(F=0.5\), \(F=0.6\), and \(F=0.8\), respectively. The figures show a periodic sinusoidal time dependence. The magnitude of \(f_{L,x}\) is higher than \(f_{L,y}\); however, \(f_{L,y}\) oscillates with a higher amplitude than \(f_{L,x}\). The amplitude of oscillation increases with an increase of \(F\). The response frequency of \(f_{L,y}\) is equal to the excitation frequency \(f_{0}\) of the magnets; however, the response frequency of \(f_{L,x}\) is twice \(f_{0}\). We further compute \(\langle f_{L,x}\rangle_{t}\), the streamwise component of the Lorentz force averaged over 300 timeframes, and plot it versus the frequency ratio in Fig. 5(a). It can be seen that \(\langle f_{L,x}\rangle_{t}\) increases rapidly from \(F=0.2\) to \(F=0.55\). On further increase of \(F\), \(\langle f_{L,x}\rangle_{t}\) decreases sharply till \(F=0.7\), above which \(\langle f_{L,x}\rangle_{t}\) saturates close to a constant value. Interestingly, the aforementioned behaviors of \(f_{L,x}\) and \(f_{L,y}\) closely resemble those of the drag and lift forces, respectively, in flows past an oscillating cylinder [16; 17].
Figure 2: (a) Schematic diagram of the magnetoconvection setup, and (b) isosurface contours of vertical velocity \(u_{z}=0.01\) (red) and \(u_{z}=-0.01\) (blue) for magnetoconvection with \(\delta/H=3\)[4]. The magnetic poles are semi-infinite in \(y\) and \(z\)-directions, and infinite along \(x\)-direction. The fluid motion in the region of strong magnetic fields is restricted to narrow zones adjacent to the sidewalls.
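The diagnostics discussed above can be reproduced from simulation output along the following lines; this is a sketch that assumes the force density has been interpolated to a uniform grid, and the array names are placeholders.

```python
import numpy as np

def total_lorentz_components(fL, dx, dy, dz):
    """Volume integrals of Eq. (10); fL has shape (3, nx, ny, nz) holding the
    specific Lorentz force density (f_x, f_y, f_z) on a uniform grid."""
    dV = dx * dy * dz
    return fL[0].sum() * dV, fL[1].sum() * dV      # f_{L,x}, f_{L,y}

def dominant_frequency(signal, dt):
    """Frequency of the strongest Fourier mode of a time series (mean removed)."""
    sig = np.asarray(signal) - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, d=dt)
    return freqs[1:][np.argmax(spectrum[1:])]

# With time series fLx(t) and fLy(t) sampled every dt, one can check that
# dominant_frequency(fLx, dt) is about twice dominant_frequency(fLy, dt).
```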
For flows past an oscillating cylinder, the non-dimensional mechanical energy transferred from the cylinder to the fluid is expressed as
\[E=\frac{2}{\rho Ud^{2}}\int_{0}^{t_{P}}\frac{dy}{dt}f_{\rm lift}\,dt, \tag{11}\]
where \(t_{P}\) is the motion period, \(d\) is the diameter of the cylinder, \(y\) is the spanwise position of the cylinder's axis, and \(f_{\rm lift}\) is the magnitude of the lift force [16]. In our work, the energy transferred from the oscillating magnets to the fluid can be expressed similarly as follows:
\[E=\int_{0}^{t_{P}}\frac{dy_{m}}{dt}\,f_{L,y}\,dt, \tag{12}\]
where \(t_{P}=\) 300 convective time units for our case. We compute \(E\) for different frequency ratios and plot it versus \(F\) in Fig. 5(b). The figure shows that \(E\) is always positive, implying that the magnets perform work on the fluid for all frequencies. The figure further shows that there is a gradual growth of \(E\) until \(F=0.45\) and then it sharply decreases to a minimum value at \(F=0.53\). The energy transfer increases monotonically on further increase of \(F\). Interestingly, the point of minimum energy transfer almost coincides with the point of maximum average streamwise Lorentz force. This point corresponds to resonance where the velocity field exhibits stronger fluctuations compared to other frequency ratios.
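From discrete time series, Eq. (12) can be evaluated with a simple quadrature, for example as in the sketch below (variable names are placeholders).

```python
import numpy as np

def energy_transfer(t, y_m, fLy):
    """Work done by the magnets on the fluid over the record (Eq. 12)."""
    u_m = np.gradient(y_m, t)                      # magnet velocity dy_m/dt
    integrand = u_m * fLy
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))   # trapezoidal rule
```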
We finally examine the trends of the phase angle between the spanwise component of Lorentz force and the spanwise displacement of the magnets. This parameter is used as an indicator of energy transfer from the magnets to the fluids [16] where a phase angle between 0 and 180 degrees indicates positive energy transfer. We compute the phase angle using our data by fitting it with a sinusoidal function using the method of least squares. The computed phase angle is plotted versus the frequency ratio in Fig. 5(b). The figure shows that the phase angle lies between 0 and 180 degrees, consistent with the fact that the energy is transferred from the magnets to the fluid. The maximum phase angle at \(F=0.55\) reaches about 170 degrees.
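One way to realize such a least-squares fit (a sketch of the general idea, not necessarily the exact procedure used here) is to regress \(f_{L,y}(t)\) on sine and cosine at the excitation frequency and read off the phase relative to the magnet displacement \(A\sin(2\pi f_{0}t)\).

```python
import numpy as np

def phase_angle_deg(t, fLy, f0):
    """Phase of fLy relative to the magnet displacement A*sin(2*pi*f0*t),
    from a linear least-squares fit fLy ~ a*sin + b*cos + const."""
    w = 2.0 * np.pi * f0
    X = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    (a, b, _), *_ = np.linalg.lstsq(X, fLy, rcond=None)
    # a*sin(wt) + b*cos(wt) = R*sin(wt + phi) with phi = atan2(b, a);
    # phi between 0 and 180 degrees corresponds to positive energy transfer.
    return np.degrees(np.arctan2(b, a)) % 360.0
```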
Figure 4: Time-series of the total spanwise and streamwise component of Lorentz force (\(f_{L,y}\) and \(f_{L,x}\), respectively) in flow past oscillating magnetic obstacles for the following frequency ratios: (a) \(F=0.5\), (b) \(F=0.6\), and (c) \(F=0.8\). The Lorentz force exhibits a sinusoidal time-dependence.
Figure 3: Contour plots of streamwise velocity \(u_{x}\) for flows past oscillating magnetic obstacle for different frequency ratios in the midplane \(z=0\). Instantaneous contour plots for (a) \(F=0.2\), (b) \(F=0.5\), and (c) \(F=0.8\). Time-averaged contour plots for (d) \(F=0.2\), (e) \(F=0.5\), and (f) \(F=0.8\).
## 5 Conclusions
In this paper, we numerically examined the effects of non-homogeneous magnetic fields in liquid metal flows using a finite-difference fluid solver. We briefly summarized the results of Bhattacharya _et al._[4] in which the influence of fringing magnetic fields on turbulent convection was studied. An important finding was that for strong magnetic fields, the global heat transport decreases with an increase of fringe-width, whereas for weak magnetic fields, the heat transport marginally increases with an increase of fringe-width. The convective motion gets confined near the sidewalls in regions of strong magnetic fields.
We numerically examined the effects of an oscillating magnetic obstacle with different frequencies on liquid metal flow in a duct. We showed the presence of reduced and reversed streamwise velocity in the regions of strong magnetic field and in the wake of the magnetic obstacle. The regions of reduced velocity exhibit a wavy pattern with the wavelength of spatial oscillation decreasing with the excitation frequency of the magnets. The amplitude of wake oscillation shows a non-monotonic dependence on the frequency of the magnets and exhibits a maximum at a particular frequency \(f_{\text{max}}\), which appears to correspond to the point of maximum Lorentz force in the streamwise direction and the minimum work done by the magnets on the fluid. The total spanwise and streamwise components of the Lorentz force oscillate sinusoidally with time, at the magnet's excitation frequency and at twice that frequency, respectively. The mean of the spanwise Lorentz force is zero. Its amplitude increases with an increase of the frequency of oscillation of the magnets. The frequency \(f_{\text{max}}\approx 0.5f_{s}\) is considerably smaller than the reference value \(f_{s}\) taken from Ref. [12]. Although the stationary magnet does not produce vortex shedding in our case, it seems plausible that our value \(f_{\text{max}}\) is indicative of the intrinsic shedding frequency when _Re_ (and possibly _Ha_) are increased further. Lower values than \(St_{0}=0.25\) of the Strouhal number of stationary magnetic obstacles were also reported by Kenjeres _et al._[10].
The authors are grateful to J. Schumacher for providing valuable contributions to the study of convection under the influence of fringing magnetic fields. S. Bhattacharya is supported by a postdoctoral fellowship of Alexander von Humboldt Foundation, Germany.
|
2306.12230 | Fantastic Weights and How to Find Them: Where to Prune in Dynamic Sparse
Training | Dynamic Sparse Training (DST) is a rapidly evolving area of research that
seeks to optimize the sparse initialization of a neural network by adapting its
topology during training. It has been shown that under specific conditions, DST
is able to outperform dense models. The key components of this framework are
the pruning and growing criteria, which are repeatedly applied during the
training process to adjust the network's sparse connectivity. While the growing
criterion's impact on DST performance is relatively well studied, the influence
of the pruning criterion remains overlooked. To address this issue, we design
and perform an extensive empirical analysis of various pruning criteria to
better understand their impact on the dynamics of DST solutions. Surprisingly,
we find that most of the studied methods yield similar results. The differences
become more significant in the low-density regime, where the best performance
is predominantly given by the simplest technique: magnitude-based pruning. The
code is provided at https://github.com/alooow/fantastic_weights_paper | Aleksandra I. Nowak, Bram Grooten, Decebal Constantin Mocanu, Jacek Tabor | 2023-06-21T12:43:55Z | http://arxiv.org/abs/2306.12230v2 | # _Fantastic Weights and How to Find Then_:
###### Abstract
Dynamic Sparse Training (DST) is a rapidly evolving area of research that seeks to optimize the sparse initialization of a neural network by adapting its topology during training. It has been shown that under specific conditions, DST is able to outperform dense models. The key components of this framework are the pruning and growing criteria, which are repeatedly applied during the training process to adjust the network's sparse connectivity. While the growing criterion's impact on DST performance is relatively well studied, the influence of the pruning criterion remains overlooked. To address this issue, we design and perform an extensive empirical analysis of various pruning criteria to better understand their effect on the dynamics of DST solutions. Surprisingly, we find that most of the studied methods yield similar results. The differences become more significant in the low-density regime, where the best performance is predominantly given by the simplest technique: magnitude-based pruning. The code is provided at [https://github.com/alooow/fantastic_weights_paper](https://github.com/alooow/fantastic_weights_paper)
## 1 Introduction
Modern deep learning solutions have demonstrated exceptional results in many different disciplines of science [5; 14; 25]. However, they come at the cost of using an enormous number of parameters. Consequently, compression methods aim to significantly reduce the model size without introducing any loss in the performance [19; 6].
One approach to sparsifying neural networks is pruning, in which a portion of the weights is removed at the end of the training based on some predefined importance criterion [30; 20; 19]. More recently, [15] have found that iterative pruning joined with cautious parameter re-initialization can identify sparse subnetworks that are able to achieve similar performance to their dense counterparts when trained from scratch. This result, known as the _lottery ticket hypothesis_, has launched subsequent research into methods for identifying and training models that are sparse already at initialization [31; 55; 61; 46; 51].
An especially promising direction in obtaining well-performing sparse neural networks is the Dynamic Sparse Training (DST) framework. Inspired by the neuroregeneration in the brain, DST allows for plasticity of the initial sparse connectivity of the network by iteratively pruning and re-growing a portion of the parameters in the model [40]. This relatively new concept has already gained increasing interest in the past few years. Most notably, in computer vision, DST demonstrated that it is sufficient to use only \(20\%\) of the original parameters of ResNet50 to train ImageNet without any drop in performance [12; 37]. Even more intriguingly, in Reinforcement Learning applications,
DST is able to significantly outperform the classical dense models [17; 53]. At the same time, general sparse networks have been reported to surpass their dense counterparts in terms of adversarial robustness [45]. All those results demonstrate the incredible potential of DST not only in increasing the model's efficiency but also in providing a better understanding of the features and limitations of neural network training.
Motivated by the above, we take a closer look at the current DST techniques. Most research in this domain is devoted to investigating the best growth criterion as the most influential factor in the performance of DST [3; 12; 11; 1], disregarding any potential contribution coming from the choice of the weight removal algorithm. In this study, we address this issue by taking a complementary approach and focusing on the pruning criteria instead. A pruning criterion in DST serves as a measure of weight importance and hence is a proxy of the "usefulness" of a particular connection. Note that the importance of a connection in DST can differ from standard post-training pruning, as the role of weights can change throughout training due to the network's plasticity and adaptation. Weights deemed unimportant in one step can become influential in later phases of the training.
Our goal is to provide a better understanding of the relationship between pruning criteria and the dynamics of DST solutions. To this end, we perform a large empirical study including several popular pruning criteria and analyze their impact on the DST framework on diverse models. We find that:
* Surprisingly, within a stable DST hyperparameter setup, the majority of the studied criteria perform similarly, regardless of the model architecture and the selected growth criterion.
* The difference in performance becomes more significant in a very sparse regime, with the simplest magnitude-based pruning methods surpassing any more elaborate choices.
* Applying only a few connectivity updates is already enough to achieve good results. At the same time, the reported outcomes surpass those obtained by static sparse training.
* By analyzing the structural similarity of the pruned sets by each criterion, we assert that the best-performing methods make similar decision choices.
The insights from our research identify that the simplest magnitude pruning is still the optimal choice, despite a large amount of alternatives present in the literature. This drives the community's attention to carefully examine any new adaptation criteria for Dynamic Sparse Training.
## 2 Related Work
**Pruning and Sparse Training.** A common way of reducing the neural network size is pruning, which removes parameters or entire blocks of layers from the network. In its classical form, pruning has been extensively studied in the context of compressing post-training models [24; 42; 30; 20; 19; 44; 32; 41; 16] - see e.g. [22; 4] for an overview and survey. Interestingly, [38] demonstrated that a sparse neural network can match and even outperform its corresponding dense neural network equivalent if its sparse connectivity is designed in a sensitive manner. Recently, a similar result was obtained by the _lottery ticket hypothesis_ and follow-up research, which showed that there exist sparse subnetworks that can be trained in isolation to the same performance as the dense networks [15; 61; 57]. In an effort to find these subnetworks without the need for dense training, different techniques have been proposed over the years [31; 55; 54], including random selection [46; 33]. Such approaches are commonly referred to as Sparse Training, or Static Sparse Training to emphasize that the sparse structure stays the same throughout the training.
**Dynamic Sparse Training.** In DST, the neural network structure is constantly evolving by pruning and growing back weights during training [40; 3; 11; 12; 59; 56; 1; 9; 27]. The key motivation behind DST is not only to provide compression but also to increase the effectiveness and robustness of the deep learning models without the need for overparametrization [37]. The capability of DST has recently been an area of interest. In [13], the authors indicate that DST is able to outperform static sparse training by ensuring better gradient flow. DST has also been reported to achieve high performance in Reinforcement Learning [17; 50; 18] and Continual Learning [49], with ongoing research into applicability in NLP [34]. Some attention has also been given to the topological properties of the connectivity patterns produced by DST. In particular, [35] investigated the structural features of DST. However, they focus only on one method, the Sparse Evolutionary Training (SET) procedure [40].
To the best of our knowledge, this work is the first to comparatively study a large number of pruning criteria in DST on multiple types of models and datasets. We hope that our analysis will increase the understanding within the dynamic sparse training community.
## 3 Background
### Dynamic Sparse Training
Dynamic sparse training is a framework that allows training neural networks that are sparse already at initialization. The fundamental idea behind DST is that the sparse connectivity is not fixed. Instead, it is repeatedly updated throughout training. More precisely, let \(\theta\) denote all the parameters of a network \(f_{\theta}\), which is trained to minimize a loss \(L(\theta;\mathcal{D})\) on some dataset \(\mathcal{D}\). The density \(d^{l}\) of a layer \(l\) with latent dimension \(n^{l}\) is defined as \(d^{l}=||\theta^{l}||_{0}/n^{l}\), where \(||\cdot||_{0}\) is the L0-norm, which counts the number of non-zero entries. Consequently, the overall density \(D\) of the model is \(D=\frac{\sum_{l=1}^{L}d^{l}n^{l}}{\sum_{l=1}^{L}n^{l}}\), with \(L\) being the number of layers in the network. The _sparsity_\(S\) is given by \(S=1-D\). Before the start of training, a fraction of the model's parameters is initialized to be zero in order to match a predefined density \(D\). One of the most common choices for initialization schemes is the Erdos-Renyi (ER) method [40], and its convolutional variant, the ER-Kernel (ERK) [12]. It randomly generates the sparse masks so that the density in each layer \(d^{l}\) scales as \(\frac{n^{l-1}+n^{l}}{n^{l-1}\cdot n^{l}}\) for a fully-connected layer and as \(\frac{n^{l-1}+n^{l}+w^{l}+h^{l}}{n^{l-1}\cdot n^{l}\cdot w^{l}\cdot h^{l}}\) for a convolution with kernel of width \(w^{l}\) and height \(h^{l}\). The sparse connectivity is updated every \(\Delta t\) training iterations. This is done by removing a fraction \(\rho_{t}\) of the active weights according to a pruning criterion \(\mathcal{C}\). The selected weights become inactive, which means that they do not participate in the model's computations. Next, a subset of weights to regrow is chosen according to a growth criterion from the set of inactive weights, such that the overall density of the network is maintained. The pruning fraction \(\rho_{t}\) is often decayed by cosine annealing \(\rho_{t}=\frac{1}{2}\rho(1+\cos{(t\pi/T)})\), where \(T\) is the iteration at which to stop the updates, and \(\rho\) is the initial pruning fraction.
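To make the ER(K) allocation concrete, the following minimal sketch (our own illustration; the function names and the simple clipping step are assumptions, not the reference implementation) derives per-layer densities from the scaling above and a global target density:

```python
import numpy as np

def erk_scale(shapes):
    """Erdos-Renyi(-Kernel) scale per layer: (sum of tensor dims) / (product of tensor dims).
    For a dense layer (n_in, n_out) this is (n_in + n_out) / (n_in * n_out); a conv
    kernel (c_in, c_out, w, h) additionally includes the kernel dimensions."""
    return np.array([sum(s) / np.prod(s) for s in shapes], dtype=float)

def erk_layer_densities(shapes, target_density):
    """Choose a global constant c so that densities d_l = c * scale_l match the target
    overall density. (Real implementations additionally redistribute mass from layers
    whose density would exceed 1; this sketch simply clips.)"""
    scale = erk_scale(shapes)
    n_params = np.array([np.prod(s) for s in shapes], dtype=float)
    c = target_density * n_params.sum() / (scale * n_params).sum()
    return np.minimum(1.0, c * scale)

# Toy example: two conv layers followed by a dense classifier head.
print(erk_layer_densities([(3, 32, 3, 3), (32, 64, 3, 3), (1024, 10)], target_density=0.1))
```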
The used pruning and growth criterion depends on the DST algorithm. The two most common growth modes in the literature are random growth and gradient growth. In the first one, the new connections are sampled from a given distribution [40; 43; 59]. The second category selects weights based on the largest gradient magnitude [12]. Other choices based on momentum can also benefit the learning [11].
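Putting the two steps together, one connectivity update can be sketched as below. This is an illustrative PyTorch-style sketch under our own naming, using magnitude pruning and either random or gradient-magnitude growth; it is not the exact procedure of any particular cited method:

```python
import torch

def dst_update(weight, mask, grad, rho_t, growth="random"):
    """One DST topology update for a single layer: drop a fraction rho_t of the active
    weights (magnitude pruning here) and regrow the same number of inactive connections."""
    active = mask.bool()
    n_update = int(rho_t * active.sum().item())

    # prune: the n_update active weights with the smallest magnitude
    prune_score = weight.abs().masked_fill(~active, float("inf"))
    drop = torch.topk(prune_score.flatten(), n_update, largest=False).indices
    mask.view(-1)[drop] = 0.0

    # grow: n_update currently inactive connections (random or largest-gradient)
    inactive = ~mask.bool()
    grow_score = torch.rand_like(weight) if growth == "random" else grad.abs()
    grow_score = grow_score.masked_fill(~inactive, -1.0)
    grow = torch.topk(grow_score.flatten(), n_update, largest=True).indices
    mask.view(-1)[grow] = 1.0
    weight.data.view(-1)[grow] = 0.0  # regrown connections restart from zero
    return mask
```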
### Pruning Criteria
Given a pruning fraction \(\rho_{t}\), the pruning criterion \(\mathcal{C}\) determines the importance score \(s(\theta^{l}_{i,j})\) for each weight and prunes the ones with the lowest score. We use _local pruning_, which means that the criterion is applied to each layer separately.3 For brevity, we will slightly abuse the notation and write \(\theta\) instead of \(\theta^{l}_{i,j}\) from now on. Below, we discuss the most commonly used pruning criteria in DST that are the subject of the conducted study.
Footnote 3: We adopt this setup as it is commonly used in DST approaches and has a reduced risk of entirely disconnecting the network – see Appendix H for a more detailed discussion.
**SET.** To the best of our knowledge, SET is the first pruning criterion used within the DST framework, which was introduced in the pioneering work of [40]. The SET criterion prunes an equal amount of positive and negative weights with the smallest absolute value.
**Magnitude.** The importance score in this criterion is given by the absolute value of the weight \(s(\theta)=|\theta|\). Contrary to SET, no division between the positive and negative weights is introduced. Due to its simplicity and effectiveness, the magnitude criterion has been a common choice in standard post-training pruning, as well as in sparse training [15; 12].
**MEST.** The standard magnitude criterion has been criticized as not taking into consideration the fluctuations of the weights during the training. The MEST (Memory-Economic Sparse Training) criterion [59] proposed to use the gradient as an indicator of the trend of the weight's magnitude, leading to a score function defined as \(s(\theta)=|\theta|+\lambda|\nabla_{\theta}\mathcal{L}(\mathcal{D})|\), where \(\lambda\) is a hyperparameter.

\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline & \(\mathcal{C}_{\text{Magnitude}}\) & \(\mathcal{C}_{\text{SET}}\) & \(\mathcal{C}_{\text{MEST}}\) & \(\mathcal{C}_{\text{Sensitivity}}\) & \(\mathcal{C}_{\text{SNIP}}\) \\ & \(|\theta|\) & \(|\theta_{+}|,|\theta_{-}|\) & \(|\theta|+\lambda|\nabla_{\theta}\mathcal{L}(\mathcal{D})|\) & \(\frac{|\nabla_{\theta}\mathcal{L}(\mathcal{D})|}{|\theta|}\) & \(|\theta||\nabla_{\theta}\mathcal{L}(\mathcal{D})|\) \\ \hline random growth & \(\times\) & SET [40] & MEST [59] & Sensitivity [43] & \(\times\) \\ gradient growth & RigL [12] & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: An overview of the existing **pruning** criteria in DST. Our work analyzes the differences and similarities between all the methods and fills the gaps (\(\times\)) in the literature. Each column lists a pruning criterion, while each row presents a growing method. SNIP [31] was not designed to grow weights, but we investigate whether it can be applied in DST.
**Sensitivity.** The role of gradient information in devising the pruning criterion has also been studied by [43]. Taking inspiration from control systems, the authors propose to investigate the relative gradient magnitude in comparison to the absolute value of the weight, yielding \(s(\theta)=|\nabla_{\theta}\mathcal{L}(\mathcal{D})|/|\theta|\). In our study, we consider the _reciprocal_ version of that relationship \(s(\theta)=|\theta|/|\nabla_{\theta}\mathcal{L}(\mathcal{D})|\), which we call _RSensitivity_, as we found it to be more stable.4
Footnote 4: See Appendix B.
**SNIP.** The SNIP (Single-shot Network Pruning) criterion is based on a first-order Taylor approximation of the difference in the loss before and after the weight removal. The score is given by \(s(\theta)=|\theta|\cdot|\nabla_{\theta}\mathcal{L}(\mathcal{D})|\). The criterion has been originally successfully used in static sparse training [31] and post-training pruning [41]. Motivated by those results, we are interested in investigating its performance in the dynamic sparse training scenario.
We denote the above-mentioned criteria by adding a subscript under the sign \(\mathcal{C}\) in order to distinguish them from the algorithms that introduced them. We summarize their usage with random and gradient growth criteria in the DST literature in Table 1.
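For concreteness, the score functions collected in Table 1 can be expressed in a few lines. The snippet below is a sketch of ours (the function name and the small numerical safeguards are assumptions and not part of the original formulations):

```python
import torch

def importance_scores(theta, grad, lam=1.0):
    """Per-weight importance scores corresponding to the criteria in Table 1.
    `theta` is the weight tensor and `grad` its loss gradient."""
    return {
        "magnitude":    theta.abs(),
        "mest":         theta.abs() + lam * grad.abs(),
        "snip":         theta.abs() * grad.abs(),
        "sensitivity":  grad.abs() / (theta.abs() + 1e-12),
        "rsensitivity": theta.abs() / (grad.abs() + 1e-12),
    }

# C_SET also scores weights by magnitude, but ranks positive and negative weights
# separately and removes an equal share from each sign.
```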
## 4 Methodology
This work aims to understand the key differences and similarities between existing pruning techniques in DST by answering the following research questions:
* What is the impact of various pruning criteria on the performance of dynamic sparse training?
* How does the frequency of topology updates influence the effectiveness of different pruning methods?
* To what extent do different pruning criteria result in similar neural network topologies?
Q1: Impact on the Performance of DST. In the first research question, we are interested in investigating the relationship between the pruning criterion and the final performance of the network. We compare the results obtained by different pruning criteria on a diverse set of architectures and datasets, ranging from a regime with a small number of parameters (\(<100K\)) to large networks (\(13M\)). We examine multi-layer perceptrons (MLPs) and convolutional neural networks (ConvNets). Within each model, we fix the training setup and vary only the pruning criterion used by the DST framework. This allows us to assess the benefits of changing the weight importance criterion in isolation from other design choices. As the growing criterion, we choose the uniform random [40] and gradient [12] approaches, as they are widely used in the literature. In addition, we also perform a separate comparison on the ResNet-50 model (\(25M\) parameters) on ImageNet.

Figure 1: The test accuracy versus density computed on the studied models for different pruning criteria (due to space limits we only present here six plots, see Figure 9 in Appendix C for the remaining two setups). The first row represents the results obtained by random growth, while the second corresponds to gradient growth. Note the logarithmic scale on the x-axis. The performance of the dense model is indicated by the horizontal dashed line. In almost every case all pruning criteria perform well, regardless of the chosen model and growth criterion, at the same time surpassing the static initialization (in light blue).
The common sense expectation would be that the more elaborate pruning criteria, which combine the weight magnitude with the gradient information, will perform better than the techniques that rely only on the weight's absolute value. Since the gradient may provide information on how the weight will change in the future, it might be more suitable in the DST approach, where connectivity is constantly evolving. This is especially promising considering the effectiveness of the gradient-based approach in estimating the importance in the growth criterion.
Q2: Update Period Sensitivity. In the second research question, we focus on one of the most important design choices: the topology update period \(\Delta t\). The setting of this update period determines how often the network structure is adapted during training. Within a fixed dataset and batch size, using a relatively low value means that the topology is changed very frequently in comparison to the optimization update of the weights. This poses a potential risk of not letting the newly added connections reach their full potential. On the other hand, a high value gives the weights enough opportunity to become important but may not allow the sparse network to adjust its structure based on the data characteristics. Finding a balance between this exploitation-exploration conflict is an imperative task in any DST framework.
In this work, we are interested in how the frequency of the topology update impacts different pruning criteria. Intuitively, one could anticipate that the pruning criteria incorporating the gradient information will perform better with a smaller update period, as the gradient may indicate the trend in the weight's future value.
Figure 2: **Left:** Validation accuracy of the different pruning criteria on ImageNet obtained for density \(0.2\). The dashed black line indicates the best result of the dense model. **Right:** The training loss versus the number of epochs. We see that all methods perform similarly, except for \(\mathcal{C}_{\text{SNIP}}\), which suffers already at the beginning of the training.

Figure 3: The critical distance diagram of the studied methods for **(a)** all densities, **(b)** densities smaller than or equal to \(0.2\), and **(c)** densities larger than \(0.2\). For each method we compute its average rank (the lower the better) and plot it on the horizontal line. Methods whose rank difference is not larger than the critical distance (denoted as CD, displayed in the left upper corner of each plot) are considered not statistically different and are joined by thick horizontal black lines.

Q3: Structural Similarity. Finally, we take a deep dive into the structures produced by the various pruning criteria. Please note that while the previous questions, Q1 and Q2, may explain how the pruning criterion choice affects performance and update frequency in DST, they do not indicate whether the mask solutions obtained by those criteria are genuinely different. In order to assess the diversity of the studied methods in terms of the produced sparse connectivity structures, we need to compare the sets of weights selected for pruning by each criterion under the same network state. In addition, we also investigate the similarity between the final masks obtained at the end of training and how close they are to their corresponding sparse initializations. This allows us to assess whether the DST topology updates are indeed meaningful in changing the mask structure. For both cases, we incorporate a common score of the proximity of sets, known as the Jaccard index (or Intersection-Over-Union). We compute it separately for each layer and average the result:
\[\bar{J}(I_{a},I_{b})=\frac{1}{L}\sum_{l=1}^{L}J(I_{a}^{l},I_{b}^{l}),\;\;J(I_{a} ^{l},I_{b}^{l})=\frac{|I_{a}^{l}\cap I_{b}^{l}|}{|I_{a}^{l}\cup I_{b}^{l}|}, \tag{1}\]
where \(I_{a}^{l}\) and \(I_{b}^{l}\) are the sets selected by pruning criteria \(a\) and \(b\) in layer \(l\). A Jaccard index of \(1\) indicates that sets overlap perfectly, while \(0\) implies they are entirely separate.
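A minimal sketch of Equation 1, assuming each connectivity is given as a list of per-layer boolean masks (the function and variable names are ours):

```python
import numpy as np

def mean_jaccard(masks_a, masks_b):
    """Layer-averaged Jaccard index (Eq. 1) between two sparse connectivities.
    Each argument is a list of boolean arrays, one mask per layer."""
    scores = []
    for ma, mb in zip(masks_a, masks_b):
        inter = np.logical_and(ma, mb).sum()
        union = np.logical_or(ma, mb).sum()
        scores.append(inter / union if union > 0 else 1.0)
    return float(np.mean(scores))
```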
We expect to see some overlap in the pruning methods' selection of weights since all of them incorporate the use of the magnitude of the weight. We also hope to see if the difference between the connectivities produced by scores that give different performances is indeed large or whether a small adjustment of the weights would render them equal.
## 5 Experiments
In this section, we present the results of our empirical studies. We start with the description of the experimental setup and then answer each of the posed questions in consecutive sections.
### Setup of the Experiments
We perform our analysis using eight different models, including small- and large-scale MLPs and Convolutional Nets. The small-MLP is a 4-layer network with hidden size of at most \(256\) (trained on the tabular Higgs dataset [2]). The large-MLP also consists of 4 layers (latent dimension size up to \(1024\) neurons) and is evaluated on the CIFAR10 [26]. The convolutional architectures are: a small 3-layer CNN (CIFAR10), LeNet5-Caffe [29] (FashionMNIST [58]), ResNet56 [21](CIFAR10, CIFAR100), VGG-16 [48](CIFAR100) and EfficientNet [52](Tiny-ImageNet [28]). In addition, on a selected density, we also consider the ResNet50 model on ImageNet [47]. We summarize all the architectures in Appendix A.
Figure 4: The update period \(\Delta t\) versus validation accuracy on the CIFAR10 dataset on the MLP, ConvNet, and ResNet-56 models for different pruning criteria with density \(0.2\) and \(0.1\). The top row corresponds to random growth, while the bottom row corresponds to gradient growth. Note the logarithmic scale on the x-axis. We see that the methods are most sensitive to the update frequency in the MLP setting. For all setups, performing the update later (\(\Delta t>400\)) seems to be beneficial.

We train the models using the DST framework on a set of predefined model densities. We use adjustment step \(\Delta t=800\) and initial pruning fraction \(\rho=0.5\) with cosine decay. As the growth criterion, we investigate both the random and gradient-based approaches, as those are the most common choices in the literature. In all cases, we ensure that within each model, the training hyperparameter setup for every criterion is the same. Each performance is computed by averaging 5 runs, except for ImageNet, for which we use 3 runs. The ResNet-50 model is trained using density \(0.2\) and gradient growth (we select the gradient-based growth method as it is known to provide good performance in this setup [12]). Altogether we performed approximately **7000** runs of dense, static sparse, and DST models, which in total use 8 (9 including ImageNet) different dataset-model configurations, and jointly took approximately **5275** hours on GPU.5
Footnote 5: For training details please refer to Appendix A.2.
### Impact on the Performance of DST
In this section, we analyze the impact of the different pruning criteria on the test accuracy achieved by the DST framework. We compare the results with the original dense model and static sparse training with ERK initialization. The results are presented in Figure 1.
Firstly, we observe that the differences between the pruning criteria are usually most distinctive in low densities. Furthermore, in that setting, the dynamic sparse training framework almost always outperforms the static sparse training approach. This is important, as in DST research, the high-sparsity regime is of key interest. Surprisingly, the more elaborate criteria using gradient-based approaches usually either perform worse than the simple magnitude score, or do not hold a clear advantage over it. For instance, the Taylor-based \(\mathcal{C}_{\text{SNIP}}\) criterion, despite being still better than the static initialization, consistently achieves results similar to or worse than \(\mathcal{C}_{\text{Magnitude}}\). This is especially visible in the DST experiments using gradient growth (note ResNet56, MLP-CIFAR10, and EfficientNet). The only exception is the VGG-16 network. In addition, we also observe that \(\mathcal{C}_{\text{SNIP}}\) gives better performance when used in a fine-tuning setup for EfficientNet - see Appendix G. Consequently, this Taylor-approximation-based criterion, although being well suited for standard post-training pruning [41] and selecting sparse initialization masks [31], seems not to be the optimal choice in dynamic sparse training approaches at high sparsity. The \(\mathcal{C}_{\text{RSensitivity}}\), on the other hand, can outperform other methods but is better suited for high densities. Indeed, it is even the best choice for densities larger than \(0.15\) for large-MLP on CIFAR10 and ResNet models. Finally, the \(\mathcal{C}_{\text{MEST}}\) criterion is also not superior to the \(\mathcal{C}_{\text{Magnitude}}\), usually leading to similar results.6 We also present the results obtained for ResNet-50 on ImageNet with density \(0.2\) in Figure 2. We observe that, again, the \(\mathcal{C}_{\text{SNIP}}\) criterion leads to the worst performance. The \(\mathcal{C}_{\text{RSensitivity}}\), \(\mathcal{C}_{\text{SET}}\), and \(\mathcal{C}_{\text{MEST}}\) methods achieve good results but are not clearly better than the \(\mathcal{C}_{\text{Magnitude}}\) criterion.
Footnote 6: Note that we also study the impact of the \(\lambda\) hyperparameter of the \(\mathcal{C}_{\text{MEST}}\) criterion in Appendix A.3. In general, we see that \(\lambda\to 0\) leads to results equal to \(\mathcal{C}_{\text{Magnitude}}\), while \(\lambda\to\infty\) prioritizes only the gradient magnitude and degrades the performance. We do not see any clear improvement for values lying between those two extremes.
Within each studied density, growing criterion, and model, we rank the pruning criteria and then calculate their average ranks. To rigorously establish the statistical significance of our findings, we compute the Critical Distance Diagram for those ranks, using the Nemenyi post-hoc test [10] with p-value \(0.05\) and present it in Figure 3. We observe that in the low-density regime (Figure 3b), the \(\mathcal{C}_{\text{Magnitude}}\) criterion achieves the best performance, as given by the lowest average rank. Furthermore, in such a case, we also note that other predominantly magnitude-based criteria, such as \(\mathcal{C}_{\text{SET}}\) and \(\mathcal{C}_{\text{MEST}}\), are not significantly different from \(\mathcal{C}_{\text{Magnitude}}\), while the remaining gradient-based approaches are clearly worse. This confirms the overall effectiveness of the weight's magnitude as an importance score in sparse regimes.7 Interestingly, when larger densities are involved, the best choice is given by \(\mathcal{C}_{\text{RSensitivity}}\), while the performance of \(\mathcal{C}_{\text{Magnitude}}\) deteriorates. This may suggest that the information from the gradient is more reliable in such a case. The gradient-based methods seem to perform better in denser setups.
Footnote 7: In Appendix F we loosely hypothesize that the low performance of gradient-based approaches may be due to the high variance in the gradient.
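As a sketch of how the critical distance diagrams in Figure 3 can be derived from per-setup ranks (following Demšar's standard formulation; the critical value below is quoted for illustration and should be looked up for the exact number of compared methods):

```python
import numpy as np
from scipy.stats import rankdata

def average_ranks(accuracies):
    """accuracies: (n_setups, n_methods) array; rank 1 = best method within each setup."""
    ranks = np.vstack([rankdata(-row) for row in accuracies])  # ties receive average ranks
    return ranks.mean(axis=0)

def nemenyi_critical_distance(n_methods, n_setups, q_alpha=2.728):
    """CD = q_alpha * sqrt(k(k+1) / (6N)); q_alpha ~ 2.728 is the Nemenyi value for
    k = 5 compared methods at p = 0.05 (illustrative constant)."""
    return q_alpha * np.sqrt(n_methods * (n_methods + 1) / (6.0 * n_setups))

# Two methods are deemed not significantly different when the difference of their
# average ranks is smaller than the critical distance.
```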
Figure 5: **Left:** The loss gap (training loss \(-\) validation loss) over time for different pruning criteria, the static sparse model, the dense model, and the dense model with dropout p=0.05 on the large-MLP. **Right:** The test accuracy obtained for each criterion. The DST methods exhibit lower gaps than the dense model. At the same time they achieve better results than the dense and dense+dropout models.
Finally, we notice that in certain situations, the DST framework can lead to better results than the original dense model (see small-MLP, or ResNet56 on CIFAR100 in Figure 1). We argue that this is due to a mix of a regularization effect and increased expressiveness introduced by removing and adding new connections in the network - see Figure 5. Moreover, we also observe that within the setup studied in Figure 1, there is almost no difference between choosing the random or gradient _growth criterion_. The second one achieves slightly better accuracy for the convolutional models (see e.g., density \(0.2\) on ResNet-56, as well as improved stability for EfficientNet). The similarity between those two growth criteria and the surprising effectiveness of DST in comparison to dense models have also been previously observed in the context of Reinforcement Learning [17].
### Sensitivity to Update Period
In this section, we analyze the sensitivity of each pruning criterion to the update period \(\Delta t\). This value describes how often the adjustments are made in training and hence can affect the weights' importance scores. We fix the density to \(0.2\) and \(0.1\) and the initial pruning fraction \(\rho\) to \(0.5\). We choose the densities \(0.2\) and \(0.1\) as they correspond to rather high sparsity levels and generally provide reasonable performance. Next, we investigate different update periods on the models using CIFAR10 dataset.8 We start from \(\Delta t=25\) and increase it by a factor of \(2\) up to \(\Delta t=6400\).
Footnote 8: The update period is only meaningful when considered together with the size of the training set. Therefore it is important to evaluate the models on the same dataset.
The results are presented in Figure 4. For the MLP model, we clearly see the behavior described in Section 4: if the update period \(\Delta t\) is too small, the performance will suffer. The best update period setting seems to be \(\Delta t=800\) for the random growth, regardless of the pruning method. For gradient growth, this value also gives a good performance. However, larger pruning periods (e.g. \(\Delta t=1600\)) are even better. Note that we use a batch size of \(128\), hence for \(\Delta t=800\) the connectivity is changed approximately once per 2 epochs.9 Similarly, less frequent updates are also beneficial in the ResNet56 model, although there is not as much consensus between the different pruning criteria. Even performing just one topology update every \(\Delta t=6400\) iterations (approximately once per 16 epochs) still gives excellent performance. At the same time, not performing any topology updates at all (i.e., static sparse training) deteriorates performance, as shown by the cyan dashed line in Figure 4. Furthermore, we observe that the gradient-based pruning criteria seem to suffer much more than the magnitude criteria when paired with a small update period (consider the plots for \(\Delta t=25\)), especially when combined with the gradient growth.10 This suggests that it is safer not to use those techniques together or be more rigorous about finding the right update period in such a case.
Footnote 9: Since \(800\cdot 128=102,400\) samples, which is just over twice the training set size of CIFAR10 (\(50,000\)).
Footnote 10: In Appendix D we also study the exploration imposed by the pruning criteria using the ITOP ratio [37].
Most interestingly, the results from Figure 4 strongly suggest that frequent connectivity adjustment is not needed. Even a few pruning and re-growing iterations are enough to improve on the static initialization.
Figure 6: **(a)** The mean normalized Jaccard index between the sets of weights chosen for removal during the first update of the sparse connectivity, computed for the MLP, CNN, and ResNet-56 models on CIFAR10. The \(J_{r}\) indicates the expected overlap of random subsets and serves as a reference point. We estimate this value by computing the mean pair-wise Jaccard index of the random pruning criterion with different sparse initializations. **(b)** The same index computed between the masks obtained by different pruning criteria at the end of training. The rows and columns represent the pruning criteria between which the index is computed. The \(J_{init}\) denotes the expected overlap of random masks and serves as a reference point. We estimate this value by computing the mean pair-wise Jaccard index of the different sparse initializations.
### Structural Similarity
Finally, we investigate whether the studied criteria are diverse in terms of the selected weights for removal. We fix the dataset to CIFAR10 and analyze the moment in which the sparse connectivity is updated for the first time. At such a point, given the same sparse initialization and random seed, all the pruning criteria share the same network state. Consequently, we can analyze the difference between the sets selected for removal by each criterion. We do this by computing their Jaccard index (Equation 1) and present the averaged results of 5 different runs in Figure 6(a) (see also Appendix I).
We observe that the \(\mathcal{C}_{\text{SET}}\) and \(\mathcal{C}_{\text{Magnitude}}\) mainly select the same weights for pruning. This indicates that the absolute values of the positive and negative weights are similar. In addition, for the simpler models with smaller depths (MLP and CNN) the \(\mathcal{C}_{\text{MEST}}\) criterion also leads to almost the same masks as \(\mathcal{C}_{\text{Magnitude}}\). On the other hand, the \(\mathcal{C}_{\text{SNIP}}\) and \(\mathcal{C}_{\text{RSensitivity}}\) criteria produce sets that are distinct from each other. This is natural, as \(\mathcal{C}_{\text{SNIP}}\) multiplies the score by the magnitude of the gradient, while \(\mathcal{C}_{\text{RSensitivity}}\) uses its inverse. The experiment suggests that early in training, pruning criteria such as \(\mathcal{C}_{\text{Magnitude}}\), \(\mathcal{C}_{\text{SET}}\), and \(\mathcal{C}_{\text{MEST}}\) lead to similar sparse connectivity. Together with the similarity in performance observed in Section 5.2, the results indicate that these three methods are almost identical, despite often being presented as diverse in DST literature. At the same time, the updates made by \(\mathcal{C}_{\text{SNIP}}\) and \(\mathcal{C}_{\text{RSensitivity}}\) produce different pruning sets, and result in lower performance (recall Figure 3(b)). However, note that for each entry (except \(\mathcal{C}_{\text{SNIP}}\) with \(\mathcal{C}_{\text{RSensitivity}}\)), the computed overlap is larger than random, suggesting that there is a subset of weights considered important by all the criteria.
In addition, we also compare how similar the final sparse solutions found by the methods are. To this end, we fix the sparse initialization to be the same for each pair of criteria and compute the Jaccard Index of the masks at the end of the DST training (performed with gradient growth and density \(0.2\) on the CIFAR10 experiments from Section 5.2). For each pair, we average the score over \(5\) initializations. The resulting similarity matrix is presented in Figure 6(b). We observe that pruning criteria that selected similar sets of weights for removal from the previous experiment still bear some resemblance to each other in terms of the end masks. Let us also note that in Section 5.3, we observed that applying just a few connectivity updates is sufficient for good results. This raises the natural question of whether DST masks are genuinely diverse from their sparse initializations. By analyzing the bottom row in Figure 6(b), we see that the masks obtained at the end of the DST training are distinct from their corresponding initializations, having the Jaccard Index close to that of a random overlap.
In consequence, we conclude that the best-performing methods from Section 5.2 indeed make similar decision choices while having a smaller overlap with the less efficient criteria. This renders the magnitude-based criteria almost equivalent, despite their separate introduction in the literature. At the same time, the DST mask updates made during the training are necessary to adapt and outperform the static initialization. We would like to highlight that such insights could not have been derived solely from performance and hyperparameter results. We hope that our insights will raise awareness of the importance of performing structural similarity experiments in the DST community.
## 6 Conclusion
We design and perform a large study of the different pruning criteria in dynamic sparse training. We unveil and discuss the complex relations between these criteria and the typical DST hyperparameter settings for various models. Our results suggest that, overall, the differences between the multiple criteria are minor. For very low densities, which are at the core of DST's interest, criteria based on magnitude perform the best. This calls into question the effectiveness of gradient-based scores for pruning during training. We propose to incorporate the selected models and datasets as a baseline in future works investigating the pruning criteria for DST to avoid the common case where methods are overfitted to given DST hyperparameters or tasks. We hope that our research will contribute to the understanding of sparse training methods.
**Limitations & Future Work.** Our research was conducted mainly on datasets from computer vision and one tabular dataset. It would be interesting to verify how our findings translate to large language models. The computed structural insights are based on set similarity and disregard the information about the value of the parameters. Additional topographic insights could be provided to incorporate this information and analyze the graph structure of the found connectivity. Finally, much work still needs to be done on the hardware side to truly speed up training times and decrease memory requirements (see Appendix E). We do not see any direct social or ethical broader impact of our work. |
2305.04102 | Leveraging Semantic Relationships to Prioritise Indicators of Compromise
in Additive Manufacturing Systems | Additive manufacturing (AM) offers numerous benefits, such as manufacturing
complex and customised designs quickly and cost-effectively, reducing material
waste, and enabling on-demand production. However, several security challenges
are associated with AM, making it increasingly attractive to attackers ranging
from individual hackers to organised criminal gangs and nation-state actors.
This paper addresses the cyber risk in AM to attackers by proposing a novel
semantic-based threat prioritisation system for identifying, extracting and
ranking indicators of compromise (IOC). The system leverages the heterogeneous
information networks (HINs) that automatically extract high-level IOCs from
multi-source threat text and identifies semantic relations among the IOCs. It
models IOCs with a HIN comprising different meta-paths and meta-graphs to
depict semantic relations among diverse IOCs. We introduce a domain-specific
recogniser that identifies IOCs in three domains: organisation-specific,
regional source-specific, and regional target-specific. A threat assessment
uses similarity measures based on meta-paths and meta-graphs to assess semantic
relations among IOCs. It prioritises IOCs by measuring their severity based on
the frequency of attacks, IOC lifetime, and exploited vulnerabilities in each
domain. | Mahender Kumar, Gregory Epiphaniou, Carsten Maple | 2023-05-06T17:38:01Z | http://arxiv.org/abs/2305.04102v1 | Leveraging Semantic Relationships to Priorities Indicators of Compromise in Additive Manufacturing Systems
###### Abstract
Additive manufacturing (AM) offers numerous benefits, such as manufacturing complex and customised designs quickly and cost-effectively, reducing material waste, and enabling on-demand production. However, several security challenges are associated with AM, making it increasingly attractive to attackers ranging from individual hackers to organised criminal gangs and nation-state actors. This paper addresses the cyber risk in AM to attackers by proposing a novel semantic-based threat prioritisation system for identifying, extracting and ranking indicators of compromise (IOC). The system leverages the heterogeneous information networks (HINs) that automatically extract high-level IOCs from multi-source threat text and identifies semantic relations among the IOCs. It models IOCs with a HIN comprising different meta-paths and meta-graphs to depict semantic relations among diverse IOCs. We introduce a domain-specific recogniser that identifies IOCs in three domains: _organisation_specific_, _regional_source-specific_, and _regional_target-specific_. A threat assessment uses similarity measures based on meta-paths and meta-graphs to assess semantic relations among IOCs. It prioritises IOCs by measuring their severity based on the frequency of attacks, IOC lifetime, and exploited vulnerabilities in each domain.
Keywords: Indicators of Compromise, Cyber-Physical Systems, Threat Intelligence, Threat Prioritisation, Heterogeneous Information Networks
## 1 Introduction
Industry 4.0, the fourth industrial revolution, refers to integrating advanced digital technologies and manufacturing systems to automate and optimize industrial processes. Additive manufacturing (AM) is a key enabler of Industry 4.0, as it allows for the rapid and flexible production of customized parts and products [1]. AM is a process that enables the production of complex devices by applying successive layers of materials. AM offers many advantages, such as on-demand customisation, enhanced logistics, reduced labour and production lead times,
streamlined production, reduced waste, reduced inventory, and reduced transportation costs. However, cyber and physical attacks in AM pose severe concerns and formidable challenges [2], making AM supply chains susceptible to various attack vectors. As a result, protecting the security of AM has become increasingly important, and developing robust security mechanisms that protect against a range of potential attacks has become a significant challenge for researchers and industry practitioners alike.
Modern attacks on AM are often sophisticated and can exploit hidden vulnerabilities that go undetected for long periods [3-7]. A prime example is Advanced Persistent Threats (APTs), which have been used to target AM industries for espionage, economic gain, and intellectual property theft. APTs are commonly described as an extended attack campaign in which one or more intruders execute a long-term plan to take control of a network or system. In 2020, more than 1000 data breaches were reported in the United States alone, affecting more than 155.8 million individuals through data exposure [3]. Perhaps the most famous kinetic cyber attack of all time was aimed at Iran's nuclear program, considered unprecedented in the industry [4]. The Stuxnet attack involved a complex, targeted worm that executed zero-day exploits on operating systems and software for managing programmable logic controllers (PLCs). The attack resulted in tens of billions of dollars in damage. Another famous example of a cyberattack is the sewage attack in Maroochy Shire, which caused a system failure and millions of litres of untreated sewage to leak into the water supply [5]. Belikovetsky et al. [6] conducted a study on the vulnerability of additive manufacturing to cyber attacks. They demonstrated a sabotage attack on a propeller blueprint that can be 3D printed at home. The findings of their study emphasized the vulnerability of additive constructs to cyber attacks, which was also confirmed by another recent paper [7]. The authors of the latter paper identified AM as the second most vulnerable industry to cyberattacks, second only to the financial sector. With AM's growing national and industrial importance, cyberattacks have become more attractive, and threat actors are increasingly involved in cybercrime-as-a-service, commoditising cyberattacks. As a result, APTs are now employing common attack patterns to compromise targets. Therefore, early identification of threat exposure and breaches is critical to preventing significant damage to an organization and providing reliable evidence during prosecution trials.
Cyber Threat Intelligence (CTI) can be a valuable tool for assessing the threat landscape in AM and developing effective strategies for mitigating cyber risks. Threat intelligence feeds can help organizations stay informed about emerging threats and new attack techniques. This can be especially useful in the fast-paced world of AM, where new technologies and processes are constantly being developed. CTI involves the extraction of threat intelligence from threat-related information from multiple sources, utilising several attributes, including Indicators of Compromise (IOCs), Tactic, Technique, and Procedure (TTP) and the skills and motive of the threat actor [8]. Some examples of CTI feeds are Structured Threat Information Expression (STIX), OpenDef, CybOX, and OpenIOC, but a massive amount of information remains unstructured. IBM X-Force [9], Facebook ThreatExchange [10], OpenCTI [11] and MISP [12] are a few vendors who provide threat intelligence feeds by extracting threat intelligence from multiple open sources using IOC extraction methods such as PhishTank and IOCFinder.
These structured threat information feeds have several disadvantages, including a limited scope, delayed information, high cost, inflexibility and false positives, making it very challenging for the AM industry to rely on them. On the other hand, unstructured threat feeds can provide a more comprehensive and flexible approach to threat intelligence that can be more effective for many organizations. However, unstructured reports may not be well-organized, making it hard to identify the relationships between different pieces of information. Other challenges include errors, inaccuracies, and missing information. As a result, it requires advanced natural language processing (NLP) techniques and machine learning algorithms to extract meaningful and relevant threat information from unstructured reports.
Consider the following instance of a security-related post: "_In October 2019, Mr Xing from China exploited the CVE-2019-0114 vulnerability, which affected multiple Android and Win10 devices in the United States. CVE-2019-0114 is a remote code execution vulnerability that contains the malicious file abc.bat_". Figure 1 displays a graphical representation of CTI, including eight IOCs such as attack actor, vulnerability, time, region, file, attack type, device and platform, and the relationship between them. Existing methods only consider IOCs but ignore the relationships between them, and as a result, they cannot grasp a comprehensive picture of the threat landscape. To overcome the limitations of existing structured threat feed tools, this paper aims to extract and rank IOCs by exploiting a Heterogeneous Information Network (HIN) that provides insight into the interdependencies between heterogeneous IOCs.

Figure 1: An annotated example of CTI includes IOCs such as attack actor, vulnerability, time, region, file, attack type, device and platform, and their relationship.
This paper presents a novel semantic-based threat prioritisation framework for identifying, extracting and ranking IOCs. The summary of the paper is as follows:
* _Recogniser_. We propose a recogniser that automatically extracts threat-related information from multi-source threat text. It also identifies the domains to which IOCs belong and integrates IOCs with their domain, forming three domain-specific IOCs such as _organisation_domain-specific_, _regional_source-specific_, and _regional_target-specific_ threat intelligence.
* _Threat modelling_. We model the range of IOCs with a Heterogeneous Information Network (HIN), which comprises different meta-paths and meta-graphs that depicts the semantic relations among diverse IOCs to capture a more comprehensive landscape of threat events.
* _Threat assessment_. We present a CTI assessment framework that uses similarity measures based on meta-paths and meta-graphs and assesses the interdependent relations among diverse IOCs.
* _Prioritisation_. We then measure the severity of IOCs by considering the frequency of attacks, IOC lifetime, and the number of exploited vulnerabilities in each domain. As a result, we derive a ranking for each IOC.
The rest of the paper is organized as follows: Section 2 discusses the related work, and Section 3 provides the conceptual background. The proposed framework is presented in Section 4. Finally, Section 5 summarizes the paper and provides directions for future research.
## 2 Related Work
Extracting threat intelligence from the unstructured text of threat-related information has become an exciting research topic in cyber security. This section briefly describes key methodologies for identifying cyber threats by extracting IOCs from multiple sources.
Noor et al. [13] have proposed a model to automate cyber threat attribution by considering high-level IOCs to determine threat actors. Their technique extracts high-level IOCs from unstructured CTI reports and then semantically profiles threat actors with the high-level IOCs taken from MITRE's ATT&CK. Zhao et al. [14] present TIMiner, a method to extract and assess domain-specific CTIs that automatically classify the domains associated with related CTIs. Gao et al. [15] proposed a cyber threat intelligence method based on the Heterogeneous Information Network (HINCTI) system to identify threat types. HINCTI provides a threat intelligence-based meta-schema to capture the semantic relationship between threat nodes, leveraging a meta-path and meta-graph-based similarity method. Zhao et al. [16] proposed a CTI framework, HINTI, based on HIN that proposed a multi-granular-based IOC recogniser to enhance the
accuracy of the IOC extraction method. HINTI defines different types of IOCs using meta-paths to identify the relationship between IOCs, profile the threat events and rank the significance of IOCs to understand the threat landscape.
Liao et al. [17] proposed a novel automated threat-related information collection and extraction (iACE) technique from unstructured text that uses a natural language processing method to extract IOC data from text documents and then analyse the IOC data using graph mining methods. iACE aims to identify the grammatical semantic relationships between token threat patterns associated with IOC in text documents. The method integrates named entity recognition and relation extraction methods. Gao et al. [18] proposed a threat-hunting system (THREATRAPTOR) that extracts threat behavioural information from unstructured CTI reports to facilitate threat hunting. THREATRAPTOR provides an accurate NLP-based auditing framework to extract structured threat information from unstructured CTI text and defines a domain-specific query language to detect malicious system activities. Wang et al. [19] developed an efficient automated process that recognises and extracts entities and their relationship from text reports.
## 3 Conceptual Background
### Cyber Threat Intelligence
Modern cybercriminals have developed sophisticated tactics, techniques, and procedures (TTP) to realise their aim of compromising their targets quickly and efficiently. Thus, traditional defence mechanisms, such as anti-virus software, firewalls and intrusion detection methods, struggle to effectively detect cyber attacks such as advanced persistent threats (APTs) and zero-day attacks. Cyber attacks have successfully compromised systems in a wide range of sectors. For example, the WannaCry ransomware attack extorted money to unlock sensitive information and designs across various industries [20]. Security experts have increasingly turned to sharing cyber threat intelligence (CTI) to combat such emerging cyber threats. CTI is any relevant information that helps detect, monitor, assess, and respond to cyber threats. CTI facilitates comprehensive and significant threat warnings and includes information such as IOCs [21].
Nowadays, a rich source of commercial and free CTI feeds are available, making it difficult for network defenders to evaluate the quality of information and select the optimal set of data feeds to pay attention to. Acting on results from low-quality feeds can give rise to many false alerts while concentrating on only a few data feeds increases the risk of missing relevant threats. However, it is challenging to extract IOCs from unstructured form sources. Several automated methods for extracting IOCs (such as malicious IP addresses, malware, and file hashes of malicious payloads) are based on the OpenIOC standard, including PhishTank, IOCFinder, and CleanMX [14]. To facilitate efficient threat intelligence sharing among organisations, CybOX [22], STIX [23], and TAXII [24] have emerged as de-facto standards for describing threat intelligence and are widely
consumed by the threat intelligence sharing platforms, including MISP [12] and AT&T Open Threat eXchange (OTX).
### Indicators of Compromise
Cyber Threat Intelligence (CTI) includes IOCs, which organisations can use to identify possible threats and protect themselves and their customers. Specifically, IOCs are artefacts observed about an attacker or their behaviour, such as tactics, techniques and procedures [25]. IOCs can be kept at a network or host level and help network defenders block malicious traffic and identity actions or determine if a cyber intrusion has occurred. Security and forensic analysts prepare reports of in-depth analysis of cyber attacks, including the IOCs, to be shared with communities, often through public data sources. Examples of IOC found in reports from data sources include actor identity behind cyber attacks, the malware used in threat attacks and their typical behaviour, communication and control server list, and other types of information. The information used in creating these reports is gathered from multiple sources, such as host logs, proxy logs and alerts. The reports may be widely distributed through various channels, including blogs, forums and social media.
The pyramid of pain (PoP) classifies the common types of IOCs. The PoP identifies the types of indicators that a system defender might use to detect an adversary's activities. The pyramid organises the pain an adversary will cause when the defender can deny those indicators. At the bottom end, if a defender identifies hash values of malicious files and then blocks these, it causes the attacker little pain since making an insignificant change to the file to produce the same outcome with a different hash is trivial. TTP sit at the top of the pyramid. When a defender detects and responds at this level, this disrupts behaviours much more complicated for an adversary to change; defining new behaviours is a significant challenge for adversaries.
### Heterogeneous Information Network
Heterogeneous Information Network (HIN) is a simple way of modelling a problem as a graph comprising different types of nodes and one or more correlations between nodes (edges) [26]. The set of node and edge types corresponds to the network schema. HIN provides a high-level conceptual model for complex collections of data. From the graphical representation of the dataset, feature vectors can be extracted by defining meta-paths and meta-graphs corresponding to the graph and implementing a guided random walk over the defined meta-paths and meta-graphs. A meta-path is a path defined within the graph of the network schema, covering a specific sequence of relation types. A meta-graph [27] can handle the in-depth relationship between nodes by employing a directed acyclic graph of nodes defined over the HIN from a single source node to a single target node. The guided random walk generates a sequence of nodes processed in an embedding model such as word2vec, skip-gram or Continuous Bag-of-Words
(CBOW). Once the nodes are represented numerically, it is possible to determine a set of nodes and resolve many problems (classification, clustering, and similarity search).
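As an illustration of the guided random walk described above, the following minimal sketch (a toy adjacency-list representation and names of our own choosing, not a specific library API) walks a typed graph along a cyclic meta-path and returns a node sequence that could then be fed to an embedding model:

```python
import random

def metapath_walk(graph, start, metapath, walk_len):
    """Meta-path-guided random walk over a typed graph.
    graph[(src_type, dst_type)][node] lists the neighbours of `node` having type dst_type;
    `metapath` is a cyclic type sequence, e.g. ['actor', 'vulnerability', 'actor']."""
    period = len(metapath) - 1
    walk, node = [start], start
    for step in range(1, walk_len):
        src_t = metapath[(step - 1) % period]
        dst_t = metapath[step % period]
        neighbours = graph.get((src_t, dst_t), {}).get(node, [])
        if not neighbours:          # dead end: stop the walk early
            break
        node = random.choice(neighbours)
        walk.append(node)
    return walk   # such node sequences are then processed by word2vec/skip-gram/CBOW
```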
### Overview
We introduce a novel system designed to automatically extract and prioritise high-level IOCs (Indicators of Compromise) from multiple sources of threat text. Our system addresses the limitations of existing IOC extraction methods by considering the semantic relationships among different IOCs. We present a novel approach to extracting threat-related information and identifying the domains IOCs belong to. This information is then integrated with their respective domains to form three domain-specific threat intelligence categories: the organisational domain, regional-source domain, and regional-target domain. We also present a threat modelling that utilizes a Heterogeneous Information Network (HIN) comprising different meta-paths and meta-graphs. The proposed system captures the interdependent relationships among diverse IOCs and provides a more comprehensive view of the landscape of threat events. Our system then utilizes similarity measures based on these meta-paths and meta-graphs to assess the interdependent relationships among different IOCs.
To prioritize IOCs, we measure their severity by considering various factors, including the frequency of attacks, the lifetime of the IOC, and the number of exploited vulnerabilities in each domain. Our system then evaluates the ranking mechanism for each IOC, providing a more comprehensive and accurate view of the threat landscape. Our system significantly contributes to cybersecurity, providing a more effective and efficient method for automatically extracting, assessing, and prioritizing high-level IOCs. With the increasing frequency and complexity of cyber threats, the need for such a system has become more critical.
## 4 Methodology
The architecture of the proposed method, as shown in Figure 2, comprises the following phases: Data Collection and Preprocessing, Relation Extraction and Threat Modelling, Domain Recognition and Tag Generation, Domain-specific Threat Identification and Tagging, and Severity Measure and Threat Prioritisation. Table 1 summarises the list of notations and abbreviations used throughout the paper.
### Data Collection and Preprocessing
The system automatically collects threat information identifying IOCs from multiple resources, including forums, blogs, security news, and bulletins. We use a breadth-first search to capture the HTML source code and XPath for data extraction. We then reduce the dimension of each text report and remove noisy features by pre-processing. This pre-processing includes the removal of stopwords, punctuation, and markup characters.
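A minimal sketch of such a pre-processing step is shown below (an illustration under our own assumptions, not the exact pipeline; the stopword list is truncated, and in practice IOC tokens such as CVE identifiers, IP addresses and hashes would be extracted or protected before punctuation removal):

```python
import re
import string

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}  # illustrative subset

def preprocess(raw_html):
    """Strip markup, punctuation and stopwords from a crawled threat post."""
    text = re.sub(r"<[^>]+>", " ", raw_html)                          # drop HTML tags
    text = text.translate(str.maketrans("", "", string.punctuation))  # drop punctuation
    return [t for t in text.lower().split() if t not in STOPWORDS]
```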
\begin{table}
\begin{tabular}{c c} \hline \hline Notations & Description \\ \hline IOC & Indicators of Compromise \\ AM & Additive manufacturing \\ HIN & Heterogeneous Information Network \\ APT & Advanced Persistent Threats \\ CTI & Cyber Threat Intelligence \\ TTP & Tactic, technique and procedure \\ PoP & Pyramid of pain \\ STIX & Structured threat information exchange \\ TAXII & Trusted Automated eXchange of Indicator Information \\ \hline \hline \end{tabular}
\end{table}
Table 1: List of Abbreviations and Notations
Figure 2: Process flow of the Proposed system
### Relation extraction and threat intelligence modelling
Using a heterogeneous information network (HIN) for threat intelligence, we first build a graph that shows the interdependent (semantic) relationships between the different IOCs involved in the attack. By denoting nodes (IOCs) and relationships, we can identify patterns and anomalies that may indicate the presence of a threat. For example, we can use HINs to identify groups of attackers that share common attack vectors or targets or to track the evolution of an attack over time as new entities become involved. To better understand, we can characterise the nodes and relations as follows.
#### 4.2.1 Node Features.
In the context of risk in Additive Manufacturing, it is essential to consider the domain-specific threat information. For instance, a threat post discussing the Stuxnet virus and its impact on industrial control systems is more relevant to manufacturing organisations than those in the finance or healthcare sectors. This highlights the need for threat intelligence tailored to an organisation's domain.
Additionally, geographical location plays a significant role in cyber attacks. Over 500 geopolitical cyber attacks have been reported worldwide in the past decade, with 30% originating from China or Russia and 26.3% targeting the USA. In 2018 alone, 27% of attacks occurred in the USA [28]. Therefore, when developing threat models for Additive Manufacturing, it is crucial to consider the regional source and target source of cyber attacks.
To account for these domain-specific and regional factors in our threat intelligence model for Additive Manufacturing, we define nodes as organisation-specific, regional_source-specific, and regional_target-specific. This enables us to capture the complex relationships between entities involved in cyber attacks, such as attackers, attack vectors, and targets. Moreover, we consider time-related node features such as attack frequency and IOC lifecycle, which can provide valuable insight into the TTPs of attackers and help defenders calculate the level of risk posed by a particular threat.
#### 4.2.2 Semantic relation features.
The node features in the HIN represent a specific action, but actions can be employed multiple times in conjunction with other activities in a campaign. These complex relationships among nodes can provide more valuable intelligence for threat identification; therefore, we consider relation-based and node features. This allows us to analyse highly sophisticated malicious cyber attacks. To model the interdependent relationship between eight IOCs, we define the following semantic relationships:
* **R1**: The relation **actor-exploit-vulnerability** matrix \(A\) represents the link between the threat actor and vulnerability. For each element, \(A_{i,j}\in\{0,1\}\), where \(A_{i,j}=1\) means actor \(i\) exploits vulnerability \(j\).
* **R2**: The relation **actor-invade-device** matrix \(B\) represents the link between the threat actor and device. For each element, \(B_{i,j}\in\{0,1\}\), where \(B_{i,j}=1\) means actor \(i\) invades device \(j\).
* **R3**: The link between two actors is represented by the relation **actor-assist-actor** matrix \(C\). For each element, \(C_{i,j}\in\{0,1\}\), where \(C_{i,j}=1\) means actor \(i\) assists actor \(j\).
* **R4**: The relation **attack_type-originate_from-region** matrix \(D\) represents the link between the attack type and location. For each element, \(D_{i,j}\in\{0,1\}\), where \(D_{i,j}=1\) means attack type \(i\) originates from region \(j\).
* **R5**: The relation **attack_type-target-region** matrix \(E\) represents the link between the attack type and location. For each element, \(E_{i,j}\in\{0,1\}\), where \(E_{i,j}=1\) means attack type \(i\) targets region \(j\).
* **R6**: The relation **vulnerability-affect-device** matrix \(F\) represents the link between the vulnerability and the device. For each element, \(F_{i,j}\in\{0,1\}\), where \(F_{i,j}=1\) means vulnerability \(i\) affects device \(j\).
* **R7**: The relation **attack_type-associate-vulnerability** matrix \(G\) represents the link between the attack type and vulnerability. For each element, \(G_{i,j}\in\{0,1\}\), where \(G_{i,j}=1\) means attack type \(i\) is associated with vulnerability \(j\).
* **R8**: The relation **vulnerability-held-time** matrix \(H\) represents the link between the vulnerability and time. For each element, \(H_{i,j}\in\{0,1\}\), where \(H_{i,j}=1\) means vulnerability \(i\) is held at time \(j\).
* **R9**: The relation **vulnerability-include-file** matrix \(I\) represents the link between the vulnerability and malicious file. For each element, \(I_{i,j}\in\{0,1\}\), where \(I_{i,j}=1\) means vulnerability \(i\) includes malicious file \(j\).
* **R10**: The relation **vulnerability-evolve-vulnerability** matrix \(J\) represents the link between the vulnerabilities. For each element, \(J_{i,j}\in\{0,1\}\), where \(J_{i,j}=1\) means vulnerability \(i\) evolves into vulnerability \(j\).
We initiate dependency parsing to leverage the semantic relationships among the eight IOCs and extract them in a structured format. Using this approach, we can represent the IOCs as triplets, each consisting of two IOCs and a relation between them. For instance, if IOC1 is dependent on IOC2, we would define the relationship as (IOC1-relation-IOC2), where'relation' denotes the nature of the relationship between the two IOCs.
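As a rough sketch of how the extracted (IOC1-relation-IOC2) triplets could populate the relation matrices R1–R10 defined above, the snippet below builds binary matrices from a handful of made-up triplets; the IOC identifiers and index maps are hypothetical.

```python
import numpy as np

# Hypothetical triplets extracted by dependency parsing: (head IOC, relation, tail IOC).
triplets = [
    ("actor:APT-X", "exploit", "vuln:CVE-2010-2568"),
    ("actor:APT-X", "invade", "device:PLC-17"),
    ("vuln:CVE-2010-2568", "affect", "device:PLC-17"),
]

def build_relation_matrix(triplets, relation, head_index, tail_index):
    """Return the binary matrix A with A[i, j] = 1 iff (head_i, relation, tail_j) was observed."""
    A = np.zeros((len(head_index), len(tail_index)), dtype=int)
    for head, rel, tail in triplets:
        if rel == relation and head in head_index and tail in tail_index:
            A[head_index[head], tail_index[tail]] = 1
    return A

actors = {"actor:APT-X": 0}
vulns = {"vuln:CVE-2010-2568": 0}
devices = {"device:PLC-17": 0}

# R1: actor-exploit-vulnerability and R6: vulnerability-affect-device.
A = build_relation_matrix(triplets, "exploit", actors, vulns)
F = build_relation_matrix(triplets, "affect", vulns, devices)
print(A, F, sep="\n")
```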
#### 4.2.3 Meta-path and Meta-graph.
Figure 3 presents 12 distinct types of meta-paths and meta-graphs denoted by \(\chi_{i}\) that capture interdependent relationships among seven different IOCs. While the meta-path illustrates the connections between the IOCs, it falls short in capturing intricate relationships. To address this limitation, the proposed HIN-based Threat Intelligence (TI) model utilizes a directed acyclic graph of nodes to handle more complex structures in the HIN architecture. By learning and analyzing these 12 different meta-paths and meta-graphs, the model can convey the context of a threat event and offer threat
insights across heterogeneous IOCs. For instance, the \(\chi_{1}\) meta-path is a length-2 meta-path that represents the relatedness of "threat actors (A) exploiting the same vulnerability (V)."
Similarly, \(\chi_{8}\) is a meta-path that describes the relationship between IOCs in which "two attack types leverage the same vulnerability held at the same time". Likewise, \(\chi_{10}\) is a meta-graph that portrays the relationship over threat infrastructure with more comprehensive insight, integrating both external and intrinsic relationships. Meta-graph \(\chi_{10}\) depicts the relationship among IOCs: "two attack types originate from the same region, and their associated vulnerabilities affect the same device at the same time".
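For a concrete feel of how a length-2 meta-path such as \(\chi_{1}\) induces relatedness between nodes, the following toy computation multiplies the actor-exploit-vulnerability matrix \(A\) by its transpose, so that entry \((i,j)\) counts vulnerabilities exploited by both actor \(i\) and actor \(j\); the small matrix is a made-up example.

```python
import numpy as np

# Hypothetical actor-exploit-vulnerability matrix A (rows: actors, columns: vulnerabilities).
A = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
])

# Commuting matrix of the length-2 meta-path chi_1 (actor -exploit-> V <-exploit- actor):
# entry (i, j) counts vulnerabilities exploited by both actor i and actor j.
M_chi1 = A @ A.T
print(M_chi1)
# [[2 1 0]
#  [1 2 0]
#  [0 0 1]]
```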
### Domain recognition and tag generation
To extract domain-specific IOCs, it is essential first to identify the domain of threat information. This initial step helps to ensure that IOCs are tailored to the specific context of the threat landscape, enabling more effective threat detection and response. Here, we consider three domains, _organisation_domain-specific_, _regional_source-specific_, and _regional_target-specific_. _Organisational_domain-specific_ threat information includes financial, health, manufacturing, government, and
Figure 3: Proposed meta-path and meta-graph for threat type identification, where A denotes threat actor, V denotes vulnerability, D denotes device, R denotes a region, AT denotes attack type, F denotes file, and T denotes time.
IoT information. _Regional_source-specific_ and _regional_target-specific_ threat information refers to the geographic region from which a threat originates and the region it targets, such as China, Russia, India, Korea, USA, UK, and Europe. We first train a word2vec model specific to threat descriptions, which takes a large corpus of threat descriptions as input and generates low-dimensional vectors; each unique word in the corpus is assigned a vector in the latent space. A convolution function then applies a filter over the word vectors to generate local features. Our model captures the most significant features by a max-pooling operation that takes the maximum value over the set of local features. This generates tags for the three domains, i.e., \(OD_{t}\), \(SD_{t}\), and \(TD_{t}\) denote the tags corresponding to organisation_domain-specific, regional_source-specific, and regional_target-specific threats, respectively.
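The tag-generation step can be illustrated with a toy numerical sketch: random vectors stand in for the trained word2vec embeddings, a one-dimensional convolution produces local features, and max-pooling selects the strongest responses before a linear layer emits an organisation-domain tag. All dimensions, weights, and labels below are placeholder assumptions rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

tokens = ["stuxnet", "targets", "industrial", "control", "systems"]
emb_dim, window, n_filters = 16, 3, 8
domains = ["finance", "health", "manufacturing", "government", "iot"]

# Stand-in for word2vec: one embedding vector per token.
embeddings = {t: rng.normal(size=emb_dim) for t in tokens}
X = np.stack([embeddings[t] for t in tokens])              # (n_tokens, emb_dim)

# 1-D convolution over a sliding window of word vectors -> local features.
W_conv = rng.normal(size=(n_filters, window * emb_dim))
local_features = np.stack([
    W_conv @ X[i:i + window].ravel() for i in range(len(tokens) - window + 1)
])                                                          # (n_windows, n_filters)

# Max-pooling keeps the strongest response of each filter across the description.
pooled = local_features.max(axis=0)                         # (n_filters,)

# Linear classifier producing an organisation-domain tag OD_t.
W_out = rng.normal(size=(len(domains), n_filters))
OD_t = domains[int(np.argmax(W_out @ pooled))]
print("predicted organisation-domain tag:", OD_t)
```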
### Domain-Specific Threat Identification and Tagging
After successfully extracting the features of IOCs and their relationships and identifying the relevant meta-paths and meta-graphs, a meta-path and meta-graph-based heterogeneous classifier is developed to classify the threat type of infrastructure nodes in Cyber Threat Intelligence (CTI). The proposed classification approach integrates node features and explores the semantic similarities between meta-paths and meta-graphs to represent the nodes comprehensively. These advanced techniques enable a more comprehensive depiction of the nodes, enhancing the accuracy of the threat classification.
Given the threat intelligence graph \(G=(N,R)\), and meta-path and meta-graph set \(P=\{\chi_{1},\chi_{2},..\chi_{n}\}\), the assessment of threat intelligence includes the following steps:
* **Adjacency matrix**. The relationships between threat nodes can be explored using different meta-paths and meta-graphs, which capture the behaviour of threats in various aspects. To represent these relationships, we propose using a weighted adjacency matrix, denoted by \(Adj_{i}\in R^{N\times N}\), which can be generated using similarity algorithms such as Euclidean distance, Manhattan distance, Cosine similarity, word mover distance, and Jaccard similarity. To assess the similarity between IOCs, we generate a corresponding weighted adjacency matrix, \(Adj_{i}\), based on the meta-path and meta-graph path set, \(P\). The use of weighted adjacency matrices \(Adj_{i}\) enables the identification of the most significant relationships between the different nodes, which can be used to prioritise threat mitigation efforts.
* **Feature matrix**. By incorporating attributed information of nodes, we can construct an attributed feature matrix \(F_{i}\) of size \(N\times d\), where N denotes the number of IOCs in \(Adj_{i}\), and \(d\) is the dimension of the node feature. This allows us to integrate the attribute information of IOCs and create a node feature matrix \(F_{i}\in R^{N\times d}\). To recognize previously unnoticed IOCs, we employ the word2vec method to develop a threat intelligence embedding, which transforms words into a latent vector space. To achieve this, threat-related texts are pre-processed, accumulated into a word set, and converted into a
latent vector space using word2vec. This approach enables us to represent threat-related information in a low-dimensional vector space, facilitating the detection and analysis of potential threats.
* **Quantify threat intelligence**. After constructing the weighted adjacency matrix and the attributed feature matrix, we assess the threat intelligence. Different types of assessment methods can be used to quantify the proposed HIN-based threat intelligence, for example, graph convolution networks (GCN) and Bidirectional Encoder Representations from Transformers (BERT). Given the adjacency matrix \(Adj_{i}\) and its corresponding feature matrix \(F_{i}\) (low-dimensional space), we utilise the graph convolution network (GCN) method to quantify the relationship between IOCs; a minimal numerical sketch is given after this list. This fuses the adjacency matrix \(Adj_{i}\) and the feature matrix \(F_{i}\) as \(Z=(F,Adj)\) and outputs the predicted labels of IOCs. Then, the model attaches the domain-specific tags \(OD_{t}\), \(SD_{t}\), and \(TD_{t}\) to each predicted IOC, indicating that the IOC belongs to the organisation domain \(OD_{t}\), originates from the country \(SD_{t}\), and targets the country \(TD_{t}\); such IOCs are considered the domain-specific IOCs.
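The following is the minimal numerical sketch referenced in the last step above: a single graph-convolution layer that fuses a weighted adjacency matrix \(Adj_{i}\) (here derived from cosine similarity of node features) with the feature matrix \(F_{i}\) and outputs per-node label probabilities. The matrices, the normalisation choice, and the label set are illustrative assumptions, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

N, d, n_labels = 6, 8, 3                        # IOC nodes, feature dimension, threat types
F = rng.normal(size=(N, d))                     # attributed node feature matrix F_i

# Weighted adjacency Adj_i from cosine similarity of node features along one meta-path.
F_norm = F / np.linalg.norm(F, axis=1, keepdims=True)
Adj = np.clip(F_norm @ F_norm.T, 0.0, None)     # keep non-negative weights
np.fill_diagonal(Adj, 0.0)

def gcn_layer(Adj, F, W):
    """One GCN step: symmetric normalisation with self-loops, then row-wise softmax."""
    A_hat = Adj + np.eye(len(Adj))
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    logits = D_inv_sqrt @ A_hat @ D_inv_sqrt @ F @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

W = rng.normal(size=(d, n_labels))
Z = gcn_layer(Adj, F, W)
print(Z.argmax(axis=1))                         # predicted threat-type label per IOC
```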
### Severity Measure and Threat Prioritisation
Utilizing the learned domain-specific IOCs, we can evaluate the severity of potential threats of various attack vectors within each domain. This motivates us to develop a quantitative measure to assess threat risks corresponding to each domain. The proposed severity measure is based on several key assumptions.
1. Firstly, we assume that the frequency of attacks may significantly influence the severity and scope of the threats manifested.
2. Secondly, we postulate that chain exploits, in which an attack exploits multiple vulnerabilities in sequence, can cause considerably more damage.
3. Finally, we recognize that the severity of a threat may decrease over time, particularly during the zero-day risk period.
Consequently, the severity of a threat can be measured by examining the frequency of attacks, the lifetime of the IOC, and the number of exploited vulnerabilities in each domain. This approach allows us to develop a more nuanced and comprehensive understanding of the potential threats facing a domain, enabling us to take appropriate measures to mitigate risk and enhance security.
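One simple way to combine the three factors above into a single number is a weighted score that grows with attack frequency and the number of exploited vulnerabilities and decays with IOC age; the weights and half-life below are arbitrary assumptions introduced only for illustration.

```python
import math

def severity_score(attack_frequency, n_exploited_vulns, ioc_age_days,
                   w_freq=0.5, w_chain=0.4, half_life_days=90.0):
    """Toy severity measure: higher for frequent, chained attacks; decays as the IOC ages."""
    decay = math.exp(-math.log(2) * ioc_age_days / half_life_days)
    return (w_freq * attack_frequency + w_chain * n_exploited_vulns) * decay

iocs = {
    "CVE-2010-2568": severity_score(attack_frequency=12, n_exploited_vulns=3, ioc_age_days=30),
    "malicious.dll": severity_score(attack_frequency=2, n_exploited_vulns=1, ioc_age_days=200),
}
ranked = sorted(iocs.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)   # IOCs prioritised by decreasing severity
```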
## 5 Conclusion and Future scope
This paper presented a novel semantic-based threat prioritisation system for AM that intends to expose the comprehensive behaviour of threat events through the relationships among different IOCs with high accuracy. We proposed an intelligent IOC acquisition and ranking system based on a Heterogeneous Information Network (HIN). The proposed system collects threat-related data from multiple sources, automatically extracts threat-related information, measures the severity of IOCs, and ranks them based on that severity. We considered individual IOCs as well as one or more relationships among semantically similar IOCs. We proposed an efficient recogniser to identify domain-specific IOCs focusing on three domains: _organisational_domain-specific_, _regional_source-specific_, and _regional_target-specific_ threat intelligence. Further, we evaluated the severity of IOCs by exploring the frequency of attacks, IOC lifetime, and the number of exploited vulnerabilities in each domain.
The proposed semantic-based threat prioritisation system for AM has potential future scopes that can be explored, such as:
* _Integrating with existing security tools_: The proposed system can be combined with existing security tools to provide real-time threat intelligence and prioritisation of threats. The integration can help security teams to automate the detection, investigation, and response to threats and reduce the time to mitigate them.
* _Exploring additional domains_: The proposed system focuses on three domains: _organisational_domain-specific_, _regional_source-specific_, and _regional_target-specific_. However, other domains, such as _industry-specific_ or _technology-specific_, can be explored to provide a more comprehensive view of the threat landscape.
* _Improving the ranking system_: The proposed system ranks the IOCs based on severity. However, the ranking system can be improved to consider the evolving threat landscape and real-time threat intelligence data to enhance the accuracy of the prioritisation system.
|
2304.13757 | A New Higgs Boson with Electron-Muon Flavor-Violating Couplings | Recently, the CMS Collaboration performed a search on a new resonance
decaying to $e^\pm\mu^\mp$ in the mass range of 110 GeV to 160 GeV. The search
also hints a possible excess at 146 GeV with a $3.8\sigma~(2.8\sigma)$ of local
(global) significance. Motivated by that, we try to interpret the results in
the context of the type-III two-Higgs-doublet-model. We find that the excess is
only moderately constrained by low-energy lepton-flavor-violation processes, in
particular the $\mu\to e \gamma$ decay. We also compare the CMS bounds across
the entire search region against constraints of $\mu\to e\gamma$ and $\mu\to e$
conversion in nuclei. Our finding indicates that the collider bounds can be
superior to those of low-energy processes for the scalar mass between $110
\text{ GeV}$ and $150 \text{ GeV}$, suggesting the importance of this mass
range for future searches. | R. Primulando, J. Julio, N. Srimanobhas, P. Uttayarat | 2023-04-26T18:00:29Z | http://arxiv.org/abs/2304.13757v2 | # A New Higgs Boson with Electron-Muon Flavor-Violating Couplings
###### Abstract
Recently, the CMS Collaboration performed a search on a new resonance decaying to \(e^{\pm}\mu^{\mp}\) in the mass range of 110 GeV to 160 GeV. The search also hints a possible excess at 146 GeV with a 3.8\(\sigma\) (2.8\(\sigma\)) of local (global) significance. Motivated by that, we try to interpret the results in the context of the type-III two-Higgs-doublet-model. We find that the excess is only moderately constrained by low-energy lepton-flavor-violation processes, in particular the \(\mu\to e\gamma\) decay. We also compare the CMS bounds across the entire search region against constraints of \(\mu\to e\gamma\) and \(\mu\to e\) conversion in nuclei. Our finding indicates that the collider bounds can be superior to those of low-energy processes for the scalar mass between 110 GeV and 150 GeV, suggesting the importance of this mass range for future searches.
## I Introduction
As the most recently discovered particle of the Standard Model (SM) [1; 2], the 125-GeV Higgs boson, \(h\), is the least studied fundamental particle. Since the discovery, an immense amount of work on precision measurements of the \(h\) properties has been carried out at the Large Hadron Collider (LHC). It is found that the \(h\) properties agree quite well with the SM expectation [3; 4]. In particular, since its discovery in the \(\gamma\gamma\), \(W^{+}W^{-}\) and \(ZZ\) channels, searches for its fermionic decay channels have also delivered positive signals. It has been established that \(h\) decays into a pair of bottom quarks [5; 6] and a pair of \(\tau\) leptons [7]. There is also evidence of \(h\) decaying to \(\mu^{+}\mu^{-}\) [8]. The consistency of these measurements with the SM predictions is one of the biggest triumphs of the SM. Further searches for the remaining predicted decay channels, e.g., electrons [9; 10] and charm quarks [11; 12], are underway. However, one should not discount other possible decay modes that are not predicted by the SM. Discovering any of these decays would be a clear signal of a new interaction beyond the SM.
Lepton-flavor-violating (LFV) couplings are prototypical examples of the new interactions beyond the SM. The LFV couplings of the \(h\) would lead to LFV decays \(h\to\ell\ell^{\prime}\), where \(\ell\) and \(\ell^{\prime}\) are leptons of different flavors. These LFV decays are correlated with low-energy LFV decays of charged lepton \(\ell\to\ell^{\prime}\gamma\) and \(\ell\to 3\ell^{\prime}\). Additionally the \(\mu\to e\) conversions in atomic nuclei will constrain any processes involving the LFV \(e\)-\(\mu\) coupling. In the case of LFV couplings involving the tau lepton, it has been demonstrated that the LHC searches for \(h\to\tau\ell\) (\(\ell=e,\mu\)) decays provide more stringent constraints than the low-energy processes \(\tau\to\ell\gamma\) and \(\tau\to 3\ell\) decays [13; 14]. This indicates that collider searches for \(h\to\tau\ell\), together with their heavy scalar counterparts, are of the essence in constraining new physics parameter space [15; 16; 17; 18; 19; 20; 21].
The collider search for \(h\to e\mu\), on the other hand, has often been overlooked as a tool for constraining new physics parameter space. This is due to the conventional wisdom that such a channel may lead to \(\mu\to e\gamma\) and \(\mu\to e\) conversions, which provide more stringent constraints [13; 14]. In this work, we will show that such an expectation does not generally apply to a new resonance decaying into an electron-muon pair. In particular, when the new particle can mix with the 125-GeV one and its mass is less than 160 GeV, its contribution naturally cancels that of the \(h\) in the \(\mu\to e\gamma\) decay and \(\mu\to e\) conversion in atomic nuclei, weakening the bounds from these processes.
The strongest bound on the \(h\to e\mu\) decay derived from the full LHC Run II data is set by the CMS with BR(\(h\to e^{\pm}\mu^{\mp})\leq 4.4\times 10^{-5}\)[22]. A slightly weaker bound is provided by the ATLAS with BR(\(h\to e^{\pm}\mu^{\mp}\)) \(\leq 6.2\times 10^{-5}\)[10]. Besides searching for the SM Higgs decaying to \(e^{\pm}\mu^{\mp}\), the CMS also looks for the LFV decay of a new resonance, \(H\), for 110 GeV \(<m_{H}<\) 160 GeV. They find a hint of an excess at \(m_{H}\sim\) 146 GeV with 3.8\(\sigma\) (2.8\(\sigma\)) local (global) significance. The preferred cross section for the excess is \(\sigma\times\text{BR}\,(pp\to H\to e^{\pm}\mu^{\mp})=3.82^{+1.16}_{-1.09}\) fb. However, in Ref. [10], ATLAS does not analyze their data in the context of a new resonance search. Yet, in Fig. 1 of the reference, the data for the
invariant mass \(m_{e\mu}\) between 110 and 160 GeV were shown. There is no obvious excess at \(m_{e\mu}=146\) GeV, and hence the ATLAS data do not strongly favor the existence of the CMS excess.
In this work, we will compare the CMS results on the new resonance search with the corresponding low-energy bounds. For concreteness, we will focus our analysis on the type-III two-Higgs-doublet model (2HDM). While not advocating the presence of the excess, we try to understand whether the excess holds up against the current and the future \(\mu\to e\gamma\) and \(\mu\to e\) conversion bounds. We will compare the low-energy constraints with the CMS bounds in the mass range of 110 GeV \(<m_{H}<\) 160 GeV. We will highlight the importance of searching for LFV decay at a relatively low mass, i.e., between 100 GeV and 150 GeV.
This paper is organized as follows. In the next section we briefly review the type-III 2HDM. In Section III, some relevant low-energy constraints are discussed. Both the CMS 146-GeV excess and the CMS collider bounds for other masses are examined in Section IV. Finally, we conclude in Section V.
## II Type-III 2HDM
The type-III 2HDM can be conveniently described in the Higgs basis [23], where the two Higgs doublets, \(H_{1,2}\), are given by
\[H_{1}=\begin{pmatrix}G^{+}\\ \tfrac{1}{\sqrt{2}}\left(v+h_{1}+iG\right)\end{pmatrix},\quad H_{2}=\begin{pmatrix}H^{+}\\ \tfrac{1}{\sqrt{2}}\left(h_{2}+iA\right)\end{pmatrix}. \tag{1}\]
Here \(v\) is the vacuum expectation value, and \(G^{+}\) and \(G\) are the would-be Goldstone bosons. For simplicity, we assume \(CP\) symmetry in the scalar sector so that the two \(CP\)-even states, \(h_{1}\) and \(h_{2}\), do not mix with the \(CP\)-odd one, \(A\). The \(CP\)-even states, however, can mix through
\[\begin{pmatrix}h_{1}\\ h_{2}\end{pmatrix}=\begin{pmatrix}c_{\alpha}&s_{\alpha}\\ -s_{\alpha}&c_{\alpha}\end{pmatrix}\begin{pmatrix}h\\ H\end{pmatrix}, \tag{2}\]
where \(c_{\alpha}\) and \(s_{\alpha}\) are shorthand notations for \(\cos\alpha\) and \(\sin\alpha\), respectively. Here we identify \(h\) with the 125 GeV Higgs boson. Since the properties of the 125-GeV Higgs boson agree with the SM predictions [24; 25; 26], the mixing angle \(s_{\alpha}\) is expected to be small. The masses of the extra Higgs bosons \(H\), \(A\), and \(H^{+}\), in principle, are arbitrary.
The Yukawa couplings of \(H_{1}\) generate fermion masses while the Yukawa couplings of \(H_{2}\) give rise to potential flavor violations. The most general form of the Yukawa couplings of \(H_{1}\) and \(H_{2}\) to the leptons is given by
\[\mathcal{L}_{yuk}\supset-\frac{\sqrt{2}m_{i}}{v}\delta_{ij}\bar{ L}_{i}\ell_{Rj}H_{1}-\sqrt{2}Y_{ij}\bar{L}_{i}\ell_{Rj}H_{2}+\text{h.c.}, \tag{3}\]
where \(L\) denotes the lepton doublet \(L\equiv(\nu_{L},\ell_{L})^{T}\), \(m\) is the mass of the charged lepton, and indices \(i\) and \(j\) run over lepton generations. After the electroweak symmetry breaking, the couplings between the neutral scalars and the leptons read
\[\mathcal{L}_{yuk}\supset-y^{h}_{ij}\bar{\ell}_{Li}\ell_{Rj}h-y^{H }_{ij}\bar{\ell}_{Li}\ell_{Rj}H-y^{A}_{ij}\bar{\ell}_{Li}\ell_{Rj}A+\text{h.c.}, \tag{4}\]
where
\[y^{h}_{ij} =\frac{m_{i}}{v}\delta_{ij}c_{\alpha}-Y_{ij}s_{\alpha}, \tag{5}\] \[y^{H}_{ij} =\frac{m_{i}}{v}\delta_{ij}s_{\alpha}+Y_{ij}c_{\alpha},\] \[y^{A}_{ij} =iY_{ij}.\]
In this work, we are interested in LFV in the \(e\)-\(\mu\) sector, so we take only \(Y_{e\mu}\) and \(Y_{\mu e}\) in Eq. (3) to be nonzero. The couplings \(Y_{e\mu}\) and \(Y_{\mu e}\), in principle, are complex. However, such a scenario is strongly constrained by the electron electric dipole moment measurement [27], making the product \(Y_{e\mu}Y_{\mu e}\) approximately real. Therefore, for simplicity, we assume that both \(Y_{e\mu}\) and \(Y_{\mu e}\) are real in this work.
With the minimal Yukawa couplings in our scenario, among the heavy scalars, only the \(CP\)-even \(H\) can be singly produced via the mixing of Eq. (2). Thus, \(H\) is the only relevant resonance for the CMS and ATLAS LFV searches. The \(H\) production cross-sections can be obtained from the would-be SM Higgs boson cross-section by scaling it with
a factor of \(s_{\alpha}^{2}\). Similarly, for a light \(H\) in the CMS and ATLAS search region, all its non-LFV partial decay widths are also related to that of the SM-like Higgs boson by a scaling factor \(s_{\alpha}^{2}\). On the other hand, the \(h\) production cross-sections and its non-LFV partial decay widths are reduced by a factor of \(c_{\alpha}^{2}\) from their SM values.
## III Low-Energy Flavor-Violating Constraints
Naturally, the LFV couplings \(Y_{e\mu}\) and \(Y_{\mu e}\), together with the scalar mixing \(s_{\alpha}\), will give rise to \(\mu\to e\gamma\) through loops containing the LFV scalars. The partial decay width is given by
\[\Gamma(\mu\to e\gamma)=\frac{\alpha_{EM}m_{\mu}^{5}}{64\pi^{4}}\left(|c_{L}|^ {2}+|c_{R}|^{2}\right), \tag{6}\]
where \(\alpha_{EM}\) is the electromagnetic fine-structure constant. The Wilson coefficients \(c_{L,R}\) arise at one- and two-loop level. The one-loop contributions are given by
\[c_{L}^{(1)}=-\frac{s_{2\alpha}}{24}\frac{m_{\mu}Y_{\mu e}}{v}\left[\frac{1}{m _{h}^{2}}\left(4+3\ln\frac{m_{\mu}^{2}}{m_{h}^{2}}\right)-\frac{1}{m_{H}^{2}} \left(4+3\ln\frac{m_{\mu}^{2}}{m_{H}^{2}}\right)\right]. \tag{7}\]
with \(s_{2\alpha}\equiv\sin 2\alpha\). The two-loop contributions are dominated by the \(W\) and top-quark exchanges, and they are given by
\[c_{L}^{(2W)} \simeq-\frac{s_{2\alpha}}{8}\frac{\alpha_{EM}Y_{\mu e}}{\pi vm_{ \mu}}\left(3f(z_{Wh})+\frac{23}{4}g(z_{Wh})+\frac{3}{4}h(z_{Wh})+\frac{f(z_{Wh })-g(z_{Wh})}{2z_{Wh}}-(h\to H)\right) \tag{8}\] \[c_{L}^{(2t)} \simeq\frac{s_{2\alpha}}{3}\frac{\alpha_{EM}Y_{\mu e}}{\pi vm_{ \mu}}\left(f(z_{th})-f(z_{tH})\right), \tag{9}\]
with \(z_{ab}=m_{a}^{2}/m_{b}^{2}\) and \(c_{L}^{(2)}=c_{L}^{(2W)}+c_{L}^{(2t)}\). The loop functions are found to be
\[f(z) =\frac{z}{2}\int_{0}^{1}dx\frac{1-2x(1-x)}{x(1-x)-z}\ln\frac{x(1- x)}{z}, \tag{10}\] \[g(z) =\frac{z}{2}\int_{0}^{1}dx\frac{1}{x(1-x)-z}\ln\frac{x(1-x)}{z},\] (11) \[h(z) =\frac{z}{2}\int_{0}^{1}dx\frac{1}{z-x(1-x)}\left(1+\frac{z}{z-x( 1-x)}\ln\frac{x(1-x)}{z}\right). \tag{12}\]
The Wilson coefficient \(c_{R}\) can be obtained from \(c_{L}\) by a replacement \(Y_{\mu e}\to Y_{e\mu}\).
In addition, there is also a set of diagrams known as "set C", whose definition and full expressions are given in Ref. [28]. (In the context of the 2HDM, such contributions are given in [18].) While their contributions are subdominant compared to those of the \(W\)-boson and the top-quark, we have included them in our analysis. However, we have omitted contributions from diagrams with the \(Z\)-boson because they are suppressed by a factor of \((1-4s_{W}^{2})/4s_{W}^{2}\) with \(s_{W}^{2}\) parameterizing the weak mixing angle.
It should be noted that the loop functions in Eqs. (10)-(12) are typically \(\mathcal{O}(1)\). Hence, the one-loop coefficient is parametrically suppressed by \(\,m_{\mu}^{2}/(\alpha_{EM}m_{h(H)}^{2})\) compared to the two-loop ones. Note also that there is a cancellation between the \(h\) and the \(H\) contributions in the Wilson coefficients; the closer \(m_{H}\) is to \(m_{h}\), the more irrelevant the \(\mu\to e\gamma\) constraint becomes.
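For orientation, the loop functions in Eqs. (10)-(12) can be evaluated numerically with standard quadrature; the sketch below does so for the arguments \(z_{th}=m_{t}^{2}/m_{h}^{2}\) and \(z_{Wh}=m_{W}^{2}/m_{h}^{2}\), using rounded reference masses, and is an illustrative check (confirming the \(\mathcal{O}(1)\) size quoted above) rather than part of the analysis pipeline.

```python
import numpy as np
from scipy.integrate import quad

def f(z):
    integrand = lambda x: 0.5 * z * (1 - 2 * x * (1 - x)) / (x * (1 - x) - z) * np.log(x * (1 - x) / z)
    return quad(integrand, 0.0, 1.0, limit=200)[0]

def g(z):
    integrand = lambda x: 0.5 * z / (x * (1 - x) - z) * np.log(x * (1 - x) / z)
    return quad(integrand, 0.0, 1.0, limit=200)[0]

def h(z):
    integrand = lambda x: 0.5 * z / (z - x * (1 - x)) * (
        1 + z / (z - x * (1 - x)) * np.log(x * (1 - x) / z))
    return quad(integrand, 0.0, 1.0, limit=200)[0]

m_h, m_W, m_t = 125.0, 80.4, 172.5            # GeV, rounded reference values
z_Wh, z_th = (m_W / m_h) ** 2, (m_t / m_h) ** 2
print(f(z_th), f(z_Wh), g(z_Wh), h(z_Wh))     # all O(1), as noted in the text
```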
The most stringent constraint on the \(\mu\to e\gamma\) decay is provided by the MEG experiment with \(BR(\mu\to e\gamma)\leq 4.2\times 10^{-13}\)[29]. The upgraded MEGII experiment, which is currently taking data, is expected to push the bound down to \(6\times 10^{-14}\)[30].
The Wilson coefficients \(c_{L}\) and \(c_{R}\) also lead to \(\mu\to e\) conversion in an atomic nucleus. In addition, the \(\mu\to e\) conversion receives tree-level contributions mediated by the \(CP\)-even Higgs bosons. In our scenario, the conversion rate is given by [31]
\[\Gamma(\mu\to e\ \text{conv.})=m_{\mu}^{5}\left|\frac{e}{16\pi^{2}}c_{R}D+\sum_{q} \sum_{N=p,n}g_{L}\frac{m_{N}}{v}f^{(q,N)}S^{N}\right|^{2}+(L\leftrightarrow R), \tag{13}\]
where \(D\) and \(S^{N}\) are the overlap integrals, the index \(q\) runs over all quark flavors, and \(f^{(q,N)}\) is the quark form factor. The effective coupling \(g_{L}\) is given by
\[g_{L}=s_{2\alpha}Y_{e\mu}\left(\frac{1}{m_{H}^{2}}-\frac{1}{m_{h}^{2}}\right). \tag{14}\]
The effective coupling \(g_{R}\) can be obtained from \(g_{L}\) by a replacement \(Y_{e\mu}\to Y_{\mu e}\). As is the case with \(\mu\to e\gamma\), the \(\mu\to e\) conversion constraint also suffers from the same blind spot when \(m_{H}\) approaches \(m_{h}\).
The strongest \(\mu\to e\) conversion constraint comes from conversion in the gold nucleus, with the rate \(\Gamma(\mu\to e\ {\rm conv.})/\Gamma({\rm captured})<7\times 10^{-13}\)[32]. In this case, the overlap integrals are given by \(D=0.189\), \(S^{p}=0.0614\) and \(S^{n}=0.0918\), and the upper bound on the captured rate is \(\Gamma({\rm captured})<13.07\times 10^{6}\ {\rm s}^{-1}\)[31]. The form factors for up- and down-quark are \(f^{(u,p)}=0.018\pm 0.005\), \(f^{(u,n)}=0.016\pm 0.005\), \(f^{(d,p)}=0.034\pm 0.011\) and \(f^{(d,n)}=0.038\pm 0.011\)[33]. The s-quark form factor is \(f^{(s,p)}=f^{(s,n)}=0.043\pm 0.011\)[34]. Finally, the heavy-quark form factor is \(f^{(Q,p)}=f^{(Q,n)}=0.067\pm 0.001\) for \(Q=c,b,t\)[14].
## IV Collider search for LFV decays
The LFV couplings \(Y_{e\mu}\) and \(Y_{\mu e}\), together with the mixing angle \(s_{\alpha}\), lead to LFV decays of the \(h\) and \(H\). The partial decay width \(h\to e^{\pm}\mu^{\mp}\) is given by
\[\Gamma(h\to e^{\pm}\mu^{\mp})=\frac{s_{\alpha}^{2}m_{h}}{8\pi}\left(|Y_{e\mu} |^{2}+|Y_{\mu e}|^{2}\right), \tag{15}\]
while the corresponding partial decay width for \(H\to e^{\pm}\mu^{\mp}\) can be obtained by the replacements \(s_{\alpha}\to c_{\alpha}\) and \(m_{h}\to m_{H}\). The LFV decay of \(h\) has been searched for by the ATLAS and the CMS collaborations [10; 22]. Neither ATLAS nor CMS finds evidence of such a decay. Assuming the SM production cross-section for the \(h\), ATLAS has set an upper bound \(BR_{h\to e^{\pm}\mu^{\mp}}\leq 6.2\times 10^{-5}\), while CMS has set a slightly stronger constraint \(BR_{h\to e^{\pm}\mu^{\mp}}\leq 4.4\times 10^{-5}\). However, due to the mixing angle \(s_{\alpha}\), the production cross-section of \(h\) is slightly reduced by \(c_{\alpha}^{2}\). Hence, in our scenario, the bound on the \(h\to e^{\pm}\mu^{\mp}\) branching ratio must be weakened by \(c_{\alpha}^{2}\). In particular, the CMS upper bound becomes \(c_{\alpha}^{2}BR_{h\to e^{\pm}\mu^{\mp}}\leq 4.4\times 10^{-5}\).
The collider search for \(H\to e^{\pm}\mu^{\mp}\) was done by the CMS collaboration [22]. The results were presented in terms of the production cross-section of \(H\) times the branching ratio into \(e^{\pm}\mu^{\mp}\). In our model it is given by
\[\sigma\times BR_{H\to e^{\pm}\mu^{\mp}}=s_{\alpha}^{2}\sigma_{SM}\frac{\Gamma (H\to e^{\pm}\mu^{\mp})}{s_{\alpha}^{2}\Gamma_{SM}+\Gamma(H\to e^{\pm}\mu^{ \mp})}, \tag{16}\]
where \(\sigma_{SM}\) and \(\Gamma_{SM}\) are the cross-section and the total decay width of the SM-like Higgs with mass \(m_{H}\). In our analysis, we use the SM-like Higgs cross-sections and decay widths provided by the LHC Higgs Cross Section Working Group [35].
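To see how Eq. (16) translates into numbers, the short sketch below combines the LFV partial width (the \(H\) analogue of Eq. (15)) with assumed reference values of \(\sigma_{SM}\) and \(\Gamma_{SM}\) for a 146 GeV SM-like Higgs; those two inputs are rough placeholders, and in any real analysis they should be taken from the LHC Higgs Cross Section Working Group tables [35]. The benchmark couplings correspond to the \(1\sigma\) region discussed in the next subsection, split equally between \(Y_{e\mu}\) and \(Y_{\mu e}\) (an assumption).

```python
import math

def sigma_times_BR(s_alpha, Y_emu, Y_mue, m_H, sigma_SM, Gamma_SM):
    """sigma(pp -> H) x BR(H -> e mu) in the type-III 2HDM, following Eqs. (15)-(16)."""
    c_alpha2 = 1.0 - s_alpha ** 2
    Gamma_LFV = c_alpha2 * m_H / (8 * math.pi) * (Y_emu ** 2 + Y_mue ** 2)   # GeV
    return s_alpha ** 2 * sigma_SM * Gamma_LFV / (s_alpha ** 2 * Gamma_SM + Gamma_LFV)

# Placeholder SM-like reference values for m_H = 146 GeV (to be replaced by the LHC Higgs
# Cross Section Working Group numbers): sigma_SM in fb, Gamma_SM in GeV.
sigma_SM, Gamma_SM = 35.0e3, 1.3e-2
val = sigma_times_BR(s_alpha=0.014, Y_emu=4.7e-4, Y_mue=4.7e-4,
                     m_H=146.0, sigma_SM=sigma_SM, Gamma_SM=Gamma_SM)
print(f"sigma x BR = {val:.2f} fb")
```

With these placeholder inputs the result lands in the few-fb range relevant for the excess, though the precise value tracks the assumed \(\sigma_{SM}\) and \(\Gamma_{SM}\).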
### The CMS 146-GeV excess
The CMS search for a resonance decaying to \(e^{\pm}\mu^{\mp}\) has observed an excess of events around 146 GeV with a significance of \(3.8\sigma\)[22]. The apparent excess can be explained by the \(H\) provided its production cross-section times branching ratio into \(e^{\pm}\mu^{\mp}\), \(\sigma\times BR_{H\to e^{\pm}\mu^{\mp}}\), is \(3.82^{+1.16}_{-1.09}\) fb. On the other hand, ATLAS has never published any search for a heavier Higgs decaying to \(e^{\pm}\mu^{\mp}\). However, the ATLAS analysis of \(h\to e^{\pm}\mu^{\mp}\)[10] shows the data for \(m_{e\mu}\) from 110 GeV to 160 GeV in their Fig. 1. There is no clear excess at \(m_{e\mu}\sim 146\) GeV in the presented data.
Taking both CMS and ATLAS results into consideration, we perform a naive combination of the two searches using the counting experiment. The combine tool with asymptotic approximation [36] is used to calculate the best fit cross-section and its significance. The data and background Monte Carlo (MC) from both experiments, as well as the CMS signal MC for \(m_{H}=146\) GeV, are extracted from the Data-MC plots in [10; 22]. In the case of ATLAS MC, we estimated the signal model using Madgraph5 [37] to simulate parton-level production cross-section and decay of \(H\), followed by showering and hadronization simulations by Pythia8 [38]. Finally, the detector responses are simulated using Delphes3 [39]. The overall uncertainty is estimated as 1%, as mentioned in [10] for the \(e\mu\) channel. We find that the combination reduces the local significance of the excess to \(3.3\sigma\). With the combined CMS and ATLAS data, we determine \(\sigma\times BR_{H\to e^{\pm}\mu^{\mp}}=2.92^{+0.91}_{-0.89}\) fb.
The region in \(s_{\alpha}\)-\(\sqrt{Y_{e\mu}^{2}+Y_{\mu e}^{2}}\) parameter space consistent with the CMS excess, together with the combined CMS and ATLAS data, at the 1-\(\sigma\) level is shown in Fig. 1. The preferred regions of parameter space are compared against constraints from the 125 GeV \(h\to e\mu\), \(\mu\to e\gamma\) and \(\mu\to e\) conversion searches. In the case of the \(\mu\to e\gamma\) constraint, we also show the projected sensitivity of the MEGII experiment which is currently taking data. Note that the small 1-\(\sigma\) region preferred by the CMS excess consistent with the bound on \(\mu\to e\gamma\) is centered around \(s_{\alpha}=0.014\) and \(\sqrt{Y_{e\mu}^{2}+Y_{\mu e}^{2}}=6.6\times 10^{-4}\). Moreover, if the origin of the CMS excess is a new particle, the MEGII experiment must also observe the \(\mu\to e\gamma\) decay.
In addition to above constraints, the CMS excess is also constrained by the search for a new resonance. In our scenario, the relevant LHC bounds are the CMS search for a scalar resonance decaying into a pair of \(ZZ^{*}\)[40], and a pair of tau leptons [41]. In both cases, the bounds are \({\cal O}(100)\) fb, which correspond to \(s_{\alpha}\lesssim 0.2\).
The 146 GeV excess is also constrained by the pair production of \(HA\). The \(A\) will decay predominantly into \(e\mu\), \(hZ\) or \(HZ\) if the latter is kinematically open. A significant portion of the \(H\), on the other hand, will decay into \(e\mu\) for sufficiently small \(s_{\alpha}\). When this is the case, the pair produced \(HA\) results in 4-lepton signatures. They are strongly constrained by the CMS multi-lepton searches [42]. In the CMS multi-lepton analysis, many of its search regions contain little to none background. Hence, if the \(HA\) signal falls into such signal regions, the multi-lepton cross-section must be small to be consistent with the CMS search. From our analysis, we find the CMS multi-lepton constraints imply either \(m_{A}\gtrsim 800\) GeV, or \(s_{\alpha}\gtrsim 0.025\) with \(A\to HZ\) kinematically open.
### Bounds for other masses
While the excess at 146 GeV seen by CMS could be a result of a new particle, we should not draw a definite conclusion on its nature without more data. Thus, we study the possibility that the CMS search finds no excess anywhere in the search region 110 GeV \(\leq m_{H}\leq\) 160 GeV. In this scenario, we take the 95% confidence level upper bounds on \(\sigma\times BR_{H\to e\mu}\) reported by CMS and compare them against constraints from \(\mu\to e\gamma\) and \(\mu\to e\) conversion.
The bounds from collider search, \(\mu\to e\gamma\), and \(\mu\to e\) conversion experiments are functions of the mixing angle \(s_{\alpha}\) and the LFV couplings \(\sqrt{|Y_{e\mu}|^{2}+|Y_{\mu e}|^{2}}\). The \(\mu\to e\gamma\) and \(\mu\to e\) conversion constraints depend on the product
Figure 1: The allowed/preferred region in the \(\sin\alpha\)–\(\sqrt{|Y_{e\mu}|^{2}+|Y_{\mu e}|^{2}}\) plane. The region above the yellow line is excluded by \(h(125)\to e^{\pm}\mu^{\mp}\). The region above the blue line is excluded by the current and future \(\mu\to e\gamma\) searches. The red (cyan) band satisfies the cross section preferred by CMS (the CMS+ATLAS combination); however, the red (cyan) dashed line is excluded by the multilepton search from \(HA\) production unless \(m_{A}\gtrsim 800\) GeV.
\(s_{2\alpha}\sqrt{\left|Y_{e\mu}\right|^{2}+\left|Y_{\mu e}\right|^{2}}\), while the collider constraint is a more complicated function. For illustrative purposes, the collider bound, for each value of \(H\) mass, would trace out a curve in the \(s_{\alpha}\)-\(\sqrt{\left|Y_{e\mu}\right|^{2}+\left|Y_{\mu e}\right|^{2}}\) plane similar to the preferred region for the 146 GeV excess in Fig. 1. In order to compare these two types of constraints on the entire CMS search region, we project the constraints onto the \(s_{2\alpha}\sqrt{\left|Y_{e\mu}\right|^{2}+\left|Y_{\mu e}\right|^{2}}\) line. Such a projection, for each value of \(m_{H}\), maps the \(\mu\to e\gamma\) and \(\mu\to e\) conversion constraints to a point. On the other hand, the bound from CMS search gets mapped into a bounded from below interval. If the minimum of such an interval is smaller than the \(\mu\to e\gamma\) (\(\mu\to e\) conversion) projection, then the collider bound will be more constraining than the \(\mu\to e\gamma\) (\(\mu\to e\) conversion) bound for some part of the \(s_{\alpha}\)-\(\sqrt{\left|Y_{e\mu}\right|^{2}+\left|Y_{\mu e}\right|^{2}}\) parameter space. In Fig. 2 we show the comparison between the collider bounds and the \(\mu\to e\gamma\) and \(\mu\to e\) conversion constraints for 110 GeV \(\leq m_{H}\leq\)160 GeV. Note, in Fig 2, only the minimum of the collider constraint projection is shown for each \(m_{H}\) value. As can be seen from the plot, for \(m_{H}\lesssim\) 140 GeV, the collider search can be more constraining than even the projected MEGII constraint. Hence we advocate for both the ATLAS and CMS collaborations to continue searching for \(H\to e^{\pm}\mu^{\mp}\) in the low \(m_{H}\) region.
## V Conclusion
In this work, we have analyzed the LHC searches for a new resonance decaying into \(e^{\pm}\mu^{\mp}\) in the context of the type-III 2HDM. The analysis was done by comparing the LHC constraints against the low energy bounds from \(\mu\to e\gamma\) and \(\mu\to e\) conversion. This analysis is motivated by a recent CMS search [22] that explores the LFV decay of a new scalar boson between 110 GeV \(<m_{H}<\) 160 GeV. Inside the search region, CMS find a possible excess at \(m_{H}=146\) GeV with a significance of 3.8(2.8)\(\sigma\) locally (globally). A simplistic combination of the CMS and ATLAS searches reduces the significance of the excess to 3.3\(\sigma\) locally with a corresponding \(\sigma\times BR_{H\to e^{\pm}\mu^{\mp}}=2.92^{+0.91}_{-0.89}\) fb. In the type-III 2HDM context, the 146 GeV excess is moderately constrained by the current \(\mu\to e\gamma\) data, while the future MEGII search will probe the whole parameter region preferred by the excess.
In the event that the excess at 146 GeV is due to an upward fluctuation in the data, we analyze the bounds on \(\sigma\times\text{BR}_{H\to e^{\pm}\mu^{\mp}}\) provided by CMS over the whole search region. The comparison between the CMS bounds and their
Figure 2: The comparison between CMS observed limit (solid red), CMS expected limit (dashed red), current \(\mu\to e\gamma\) constraint (solid blue), current \(\mu\to e\) conversion (black) and the projected \(\mu\to e\gamma\) limit from MEGII (dotted blue) on the CMS search region. For each \(m_{H}\), the bounds are projected onto the \(s_{2\alpha}\sqrt{\left|Y_{e\mu}\right|^{2}+\left|Y_{\mu e}\right|^{2}}\) line. For the CMS bounds, only the lower end of the projection is shown.
low energy counterparts is shown in Fig. 2. From the plot, we see that for \(110~{}\text{GeV}\leq m_{H}\lesssim 140~{}\text{GeV}\), the current CMS bound is better than the current and the projected MEGII bounds for \(\mu\to e\gamma\). Thus, we encourage both the CMS and ATLAS collaborations to continue searching for LFV decays of a new scalar resonance in this low mass region.
###### Acknowledgements.
The work of R.P. was supported by the Parahyangan Catholic University under grant no. III/LPPM/2023-02/32-P. The work of J.J. was supported in part by the Indonesia Toray Science Foundation. The work of P.U. was supported in part by the Mid-Career Research Grant from National Research Council of Thailand under contract no. N42A650378. The work of N.S. was supported by Thailand NSRF via PMU-B under grant number B37G660013. R.P. thanks The Abdus Salam International Centre for Theoretical Physics for kind hospitality while this work was being completed.
|
2304.02882 | Two-point patterns determined by curves | Let $\Gamma \subset \mathbb{R}^d$ be a smooth curve containing the origin.
Does every Borel subset of $\mathbb R^d$ of sufficiently small codimension
enjoy a S\'ark\"ozy-like property with respect to $\Gamma$, namely, contain two
elements differing by a member of $\Gamma \setminus \{0\}$? Kuca, Orponen, and
Sahlsten have answered this question in the affirmative for a specific curve
with nonvanishing curvature, the standard parabola $(t, t^2)$ in
$\mathbb{R}^2$. In this article, we use the analytic notion of "functional
type", a generalization of curvature ubiquitous in harmonic analysis, to study
containment of patterns in sets of large Hausdorff dimension. Specifically, for
$\textit{every}$ curve $\Gamma \subset \mathbb{R}^d$ of finite type at the
origin, we prove the existence of a dimensional threshold $\varepsilon >0$ such
that every Borel subset of $\mathbb{R}^d$ of Hausdorff dimension larger than $d
- \varepsilon$ contains a pair of points of the form $\{x, x+\gamma\}$ with
$\gamma \in \Gamma \setminus \{0\}$. The threshold $\varepsilon$ we obtain,
though not optimal, is shown to be uniform over all curves of a given "type".
We also demonstrate that the finite type hypothesis on $\Gamma$ is necessary,
provided $\Gamma$ either is parametrized by polynomials or is the graph of a
smooth function. Our results therefore suggest a correspondence between sets of
prescribed Hausdorff dimension and the "types" of two-point patterns that must
be contained therein. | Benjamin B. Bruce, Malabika Pramanik | 2023-04-06T06:15:57Z | http://arxiv.org/abs/2304.02882v1 | # Two-point patterns determined by curves
###### Abstract.
Let \(\Gamma\subset\mathbb{R}^{d}\) be a smooth curve containing the origin. Does every Borel subset of \(\mathbb{R}^{d}\) of sufficiently small codimension enjoy a Sarkozy-like property with respect to \(\Gamma\), namely, contain two elements differing by a member of \(\Gamma\setminus\{0\}\)? Kuca, Orponen, and Sahlsten [26] answer this question in the affirmative for a specific curve with nonvanishing curvature, the standard parabola \((t,t^{2})\) in \(\mathbb{R}^{2}\). In this article, we use the analytic notion of "functional type", a generalization of curvature ubiquitous in harmonic analysis, to study containment of patterns in sets of large Hausdorff dimension. Specifically, for _every_ curve \(\Gamma\subset\mathbb{R}^{d}\) of finite type at the origin, we prove the existence of a dimensional threshold \(\varepsilon>0\) such that every Borel subset of \(\mathbb{R}^{d}\) of Hausdorff dimension larger than \(d-\varepsilon\) contains a pair of points of the form \(\{x,x+\gamma\}\) with \(\gamma\in\Gamma\setminus\{0\}\). The threshold \(\varepsilon\) we obtain, though not optimal, is shown to be uniform over all curves of a given "type". We also demonstrate that the finite type hypothesis on \(\Gamma\) is necessary, provided \(\Gamma\) either is parametrized by polynomials or is the graph of a smooth function. Our results therefore suggest a correspondence between sets of prescribed Hausdorff dimension and the "types" of two-point patterns that must be contained therein.
Key words and phrases: Fractals, polynomial configurations, curvature, functions of finite type, Hausdorff dimension, Fourier transforms of measures. 2010 Mathematics Subject Classification: 28A80 (primary), 11B25, 11B30, 42B99, 28A78 (secondary). Both authors were supported in part by NSERC Discovery grant GR010263.
that \(\Gamma\cap(K-K)=\emptyset\) for any set \(K\) of diameter strictly less than \(\operatorname{dist}(0,\Gamma)\), including Euclidean balls which not only have full Hausdorff dimension but positive Lebesgue measure. Equivalently stated, if \(\Gamma\) is an unavoidable curve, then sets of large Hausdorff dimension but arbitrarily small diameter would contain pairs of the form \(\{x,x+\gamma\}\) with \(\gamma\in\Gamma\), forcing the origin to be a limit point (and thus an element) of \(\Gamma\). Unavoidability is therefore a local property of a curve, reflecting its behaviour near the origin.
A classical result of Furstenberg [14] and Sarkozy [39, 40] shows that the difference set \(A-A\) of any set \(A\subseteq\mathbb{Z}\) of positive upper density cannot be square-free. The concept of unavoidability extends this notion to Euclidean spaces of dimension \(d\geq 2\): If \(\Gamma\subset\mathbb{R}^{d}\) is unavoidable, then the difference set \(K-K\) of any "large" set \(K\subseteq\mathbb{R}^{d}\) must contain a nontrivial element of \(\Gamma\). Recent work of Kuca, Orponen, and Sahlsten [26] has shown that the standard parabola \(\{(t,t^{2})\colon t\in[-1,1]\}\) in \(\mathbb{R}^{2}\) is unavoidable, leading to a natural question as to which other curves enjoy this property. This article addresses this question by presenting results of two types. We show that a smooth curve is unavoidable if it is of "finite type" at the origin. We also consider two distinct subclasses of smooth curves, namely smooth graphs and polynomial curves, and show that within these classes the finite type hypothesis is necessary (as well as sufficient) for unavoidability.
To state our results, let us recall (e.g. from [43, Chapter VIII, §3.2]) the classical notion of "type" for a smooth function \(\Phi\colon\mathrm{I}\to\mathbb{R}^{d}\) at a point \(\mathtt{t}\in\mathrm{I}\).
1. We say that \(\Phi\) is of _finite type_ at \(\mathtt{t}\) if for every nonzero vector \(u\in\mathbb{R}^{d}\) there exists an integer \(n\geq 1\) such that \(u\cdot\Phi^{(n)}(\mathtt{t})\neq 0\). If there is a nonzero vector \(u\in\mathbb{R}^{d}\) for which no such \(n\) exists, then \(\Phi\) is of _infinite type_ at \(\mathtt{t}\).
2. We say that \(\Phi\)_is vanishing of finite type_ at \(\mathtt{t}\) if \(\Phi\) is of finite type at \(\mathtt{t}\) and \(\Phi(\mathtt{t})=0\).
3. We say that \(\Phi\) is of _type_\(\mathtt{N}\) at \(\mathtt{t}\) if \(\mathtt{N}\) is the smallest integer with the following property: For every \(u\in\mathbb{R}^{d}\setminus\{0\}\) there exists \(n\in\{1,\ldots,\mathtt{N}\}\) such that \(u\cdot\Phi^{(n)}(\mathtt{t})\neq 0\).
An easy compactness argument shows that \(\Phi\) is of finite type at \(\mathtt{t}\) (in the sense of definition 1) if and only if \(\Phi\) is of type \(\mathtt{N}\) at \(\mathtt{t}\) for some \(\mathtt{N}\). Since it is always possible to find a nonzero vector \(u\in\mathbb{R}^{d}\) such that \(u\cdot\Phi^{(\ell)}(\mathtt{t})=0\) for every \(\ell\in\{1,\ldots,d-1\}\), the smallest possible value of \(\mathtt{N}\) is \(d\). The case \(\mathtt{N}=d=2\) corresponds to the situation where the image of \(\Phi\) has nonzero curvature at \(\Phi(\mathtt{t})\). More generally, \(\mathtt{N}=d\) is equivalent to nonvanishing "torsion": \(\det(\Phi^{\prime}(\mathtt{t}),\Phi^{\prime\prime}(\mathtt{t}),\ldots,\Phi^{(d )}(\mathtt{t}))\neq 0\).
4. We say that \(\Phi\) is _vanishing of type_\(\mathtt{N}\) at \(\mathtt{t}\) if \(\Phi\) is of type \(\mathtt{N}\) at \(\mathtt{t}\) and \(\Phi(\mathtt{t})=0\).
As stated above, if \(\mathtt{t}\) is an endpoint of \(\mathtt{I}\), then the derivatives in definitions 1 and 3 are to be understood as one-sided.
The notion of type can be transferred from functions to curves. For simplicity, we formulate the definition at the origin but note that it easily extends to any other point on a curve. Let \(\Gamma\subset\mathbb{R}^{d}\) be a smooth curve containing the origin.
1. We say that \(\Gamma\) is of _finite type at the origin_ if there exists a compact interval \(\mathtt{I}\subset\mathbb{R}\), a point \(\mathtt{t}\in\mathtt{I}\), and a smooth function \(\Phi\colon\mathrm{I}\to\mathbb{R}^{d}\) that is vanishing of finite type at \(\mathtt{t}\) such that \(\Gamma\) contains the image \(\Phi(\mathtt{I})\); otherwise \(\Gamma\) is of _infinite type_ at the origin.
6. We say that \(\Gamma\) is of _type \(\mathtt{N}\) at the origin_ if \(\mathtt{N}\) is the smallest integer with the following property: There exists a compact interval \(\mathtt{I}\subset\mathbb{R}\), a point \(\mathtt{t}\in\mathtt{I}\), and a smooth function \(\Phi\colon\mathtt{I}\to\mathbb{R}^{d}\) that is vanishing of type \(\mathtt{N}\) at \(\mathtt{t}\) such that \(\Gamma\) contains the image \(\Phi(\mathtt{I})\).
In definitions 5 and 6, we could require that \(\mathtt{t}=0\), since \(\Phi\) vanishing of type \(\mathtt{N}\) at \(\mathtt{t}\) is equivalent to \(\Phi(\cdot-\mathtt{t})\) vanishing of type \(\mathtt{N}\) at \(0\).
It is important to note the distinction between the type of a curve \(\Gamma\) and the types of the many functions that represent \(\Gamma\). For example:
* Suppose that \(\eta\colon[0,1]\to\mathbb{R}\) is a smooth increasing function that is vanishing of infinite type at \(t=0\); say \(\eta(t)=\exp(-1/t^{2})\). Then the two functions \[\Phi_{0}(t)=(\eta(t),\eta(t)^{2})\qquad\text{ and }\qquad\Phi(t)=(t^{2},t^{4})\] both represent the standard parabola in a neighbourhood of the origin. However, \(\Phi\) is vanishing of finite type at \(t=0\) while \(\Phi_{0}\) is not.
* If a curve passes through the origin more than once, then its type is determined by its "nicest" behaviour there. Let \(\eta\colon[0,1]\to\mathbb{R}\) be a smooth function that is identically \(0\) near \(t=0\) and identically \(1\) near \(t=1\). Suppose that \(\Gamma=\operatorname{im}\Phi\), where \(\Phi\colon[0,1]\to\mathbb{R}^{2}\) is given by \[\Phi(t):=(t(t-1),(t-1)^{3}\eta(t)).\] Then \(\Phi^{-1}(0)=\{0,1\}\), and \(\Phi\) is of infinite type at \(t=0\) and of finite type (\(\mathtt{N}=3\)) at \(t=1\). In spite of the behaviour of \(\Phi\) at \(t=0\), the curve \(\Gamma\) is of finite type at the origin according to definition 5.
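As a quick computational companion to definitions 1-6 and the examples above, the following sketch determines the type of a parametrized curve at \(t=0\) by finding the smallest \(\mathtt{N}\) for which \(\Phi^{\prime}(0),\ldots,\Phi^{(\mathtt{N})}(0)\) span \(\mathbb{R}^{d}\) (equivalent to definition 3); it is an illustrative check only, and the cutoff `max_order` is an arbitrary choice. The last example, a polynomial curve lying in a hyperplane, never reaches full rank and is therefore of infinite type (cf. Theorem 1.4 below).

```python
import sympy as sp

t = sp.symbols('t')

def type_at_zero(Phi, d, max_order=12):
    """Smallest N with span{Phi'(0), ..., Phi^(N)(0)} = R^d, or None if not reached."""
    rows = []
    for n in range(1, max_order + 1):
        rows.append([sp.diff(comp, t, n).subs(t, 0) for comp in Phi])
        if sp.Matrix(rows).rank() == d:
            return n
    return None

print(type_at_zero([t, t**2], d=2))              # 2: nonvanishing curvature (N = d)
print(type_at_zero([t**2, t**4], d=2))           # 4: the reparametrized parabola above
print(type_at_zero([t, t**3, t**5], d=3))        # 5
print(type_at_zero([t, t**2, t + t**2], d=3))    # None: the curve lies in a hyperplane
```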
### Statement of results
We assume for the entirety of the article that the ambient dimension \(d\geq 2\) is fixed. All constants are allowed to depend on \(d\). The following are our main results.
**Theorem 1.1**.: _Let \(\Gamma\subset\mathbb{R}^{d}\) be a smooth curve of finite type at the origin. Then \(\Gamma\) is unavoidable._
Figure 1. Curves of finite type.
A stronger quantitative version of Theorem 1.1 is given in Theorem 1.5 below, where the dimensional threshold is shown to be uniform across the class of curves of fixed type.
As a consequence of Theorem 1.1, a full-dimensional set must contain every two-point pattern of finite type.
**Corollary 1.2**.: _Let \(K\subseteq\mathbb{R}^{d}\) be a Borel set with \(\dim_{\rm H}K=d\). Then \((\Gamma\setminus\{0\})\cap(K-K)\neq\emptyset\) for every smooth curve \(\Gamma\subset\mathbb{R}^{d}\) of finite type at the origin._
For certain classes of smooth curves, namely graphs and polynomials, the assumption of finite type at the origin is both sufficient and necessary for unavoidability.
**Theorem 1.3**.: _Let \(\Gamma\subset\mathbb{R}^{d}\) be a curve that contains the origin and is the graph of a smooth function, i.e., of the form_
\[\Gamma=\{(t,\Psi(t))\colon t\in{\tt I}\} \tag{1.1}\]
_for some compact interval \({\tt I}\) with \(0\in{\tt I}\) and some smooth function \(\Psi\colon{\tt I}\to\mathbb{R}^{d-1}\) with \(\Psi(0)=0\). Then \(\Gamma\) is unavoidable if and only if \(\Gamma\) is of finite type at the origin._
**Theorem 1.4**.: _Let \(\Gamma\subset\mathbb{R}^{d}\) be a polynomial curve that contains the origin, i.e., \(\Gamma=\Phi({\tt I})\) for some compact interval \({\tt I}\) and some \(d\)-tuple \(\Phi=(\Phi_{1},\ldots,\Phi_{d})\) of univariate polynomials \(\Phi_{i}\) such that \(\Phi({\tt t})=0\) for some \({\tt t}\in{\tt I}\). Then the following are equivalent:_
1. \(\Gamma\) _is unavoidable._
2. \(\Gamma\) _is of finite type at the origin._
3. \(\Gamma\) _is not contained in any hyperplane in_ \(\mathbb{R}^{d}\)_._
4. _There exist linearly independent polynomials_ \(\Phi_{1},\ldots,\Phi_{d}\) _and a compact interval_ \({\tt I}\) _such that_ \(\Gamma=\{(\Phi_{1}(t),\ldots,\Phi_{d}(t))\colon t\in{\tt I}\}\)_._
5. _If_ \(\Phi_{1},\ldots,\Phi_{d}\) _and_ \({\tt I}\) _are any choice of polynomials and a compact interval such that_ \(\Gamma=\{(\Phi_{1}(t),\ldots,\Phi_{d}(t))\colon t\in{\tt I}\}\)_, then_ \(\Phi_{1},\ldots,\Phi_{d}\) _are linearly independent._
Figure 2. Curves of infinite type.
We turn our attention now to a quantitative formulation of the qualitative statement in Theorem 1.1. For every smooth curve \(\Gamma\subset\mathbb{R}^{d}\) of finite type at the origin, Theorem 1.1 gives a constant \(\varepsilon>0\), possibly depending on \(\Gamma\), such that every Borel set with Hausdorff dimension exceeding \(d-\varepsilon\) contains a pair of the form \(\{x,x+\gamma\}\) with \(\gamma\in\Gamma\setminus\{0\}\). It is natural to ask how this \(\varepsilon\) might depend on \(\Gamma\). Our proof of Theorem 1.1 did not provide the optimal threshold value of \(\varepsilon\) for a given \(\Gamma\); however, a careful scrutiny of the argument revealed that a value of \(\varepsilon\) could be chosen so as to depend only on the ambient dimension \(d\) and the type of \(\Gamma\) at the origin. That \(\varepsilon\) therefore works for all curves of the same type as \(\Gamma\) at the origin. We record this as a theorem:
**Theorem 1.5**.: _For each \(\mathtt{N}\geq d\), there exists a constant \(\varepsilon_{\mathtt{N}}>0\) such that \((\Gamma\setminus\{0\})\cap(K-K)\neq\emptyset\) for every smooth curve \(\Gamma\subset\mathbb{R}^{d}\) of type \(\mathtt{N}\) at the origin and every Borel set \(K\subseteq\mathbb{R}^{d}\) with \(\dim_{\mathrm{H}}K>d-\varepsilon_{\mathtt{N}}\)._
These results should be viewed in the context of the substantial body of literature on multi-point patterns in large sets, a genre of problems that has been explored in a variety of discrete and continuous settings; see [1, 2, 4, 5, 6, 7, 8, 9, 10, 12, 13, 15, 16, 17, 18, 19, 21, 22, 24, 25, 27, 31, 32, 35, 36, 37, 38, 41, 42, 45, 46]. It is known that high Hausdorff dimension alone does not guarantee the presence of certain linear patterns, such as three-term arithmetic progressions on the real line or even linear "parallelograms"; see [21]. Many pattern existence results (both linear and nonlinear) therefore employ stronger Fourier analytic hypotheses, such as a lower bound on Fourier dimension, or existence of a measure that simultaneously obeys a ball condition and exhibits Fourier decay. Such results may be found, for example, in [27, 7, 19, 13]. In general it is not known whether, for nonlinear patterns, these stronger hypotheses are necessary. Because of the (heuristic) connection between nonlinearity and Fourier decay, one might hope that Fourier analytic assumptions could be avoided in the nonlinear setting. Theorems 1.1 and 1.5 confirm this for two-point patterns determined by curves.
Recent results for more general configurations also align with this view. Greenleaf, Iosevich, and Taylor [16, 17] have shown that nonlinear patterns are abundant in sets with high Hausdorff dimension. For instance, it is proved in [16] that for each \(\Phi\colon\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}^{k}\) belonging to a large class of smooth maps, there exists a threshold \(s_{0}>0\) such that if \(K\subseteq\mathbb{R}^{d}\) has Hausdorff dimension exceeding \(s_{0}\), then the configuration set \(\Delta_{\Phi}(K):=\{\Phi(x,y)\colon x,y\in K\}\) has nonempty interior in \(\mathbb{R}^{k}\). In particular, their results ensure abundance of patterns determined by curves. A special case yields the following: If \(K\subseteq\mathbb{R}^{2}\) and \(\dim_{\mathrm{H}}K>3/2\), then the set \(\{x_{2}-y_{2}-(x_{1}-y_{1})^{2}\colon x,y\in K\}\) has nonempty interior; see [16, Cor. 2.8]. This result says that sets with high Hausdorff dimension contain two-point patterns determined by a "continuum" of parabolas. Whether any _specific_ parabolic pattern must be present is a rather different question. This was answered by Kuca, Orponen, and Sahlsten [26], as mentioned above: If \(K\subseteq\mathbb{R}^{2}\) has Hausdorff dimension close enough to \(2\), then \(K\) contains a pair of the form \(\{x,x+(t,t^{2})\}\) with \(t\neq 0\).
The present article shows that many other specific nonlinear two-point patterns can be found in sets with high Hausdorff dimension (Theorems 1.1 and 1.5). It also offers a characterization of such patterns, provided they are determined by curves corresponding to certain function classes (Theorems 1.3 and 1.4). These results suggest a correspondence between sets of
prescribed Hausdorff dimension and classes of two-point patterns that must be contained therein. A statement along the lines of the following seems plausible:
_For every \(s_{0}\in(0,d)\), there exists a class \(\mathcal{C}=\mathcal{C}(s_{0})\) of smooth curves in \(\mathbb{R}^{d}\) such that (i) \((\Gamma\setminus\{0\})\cap(K-K)\neq\emptyset\) for every \(\Gamma\in\mathcal{C}\) and every Borel set \(K\subseteq\mathbb{R}^{d}\) with \(\dim_{\mathrm{H}}K>s_{0}\), and (ii) for every \(s<s_{0}\) and every \(\Gamma\in\mathcal{C}\) there exists a Borel set \(K\subseteq\mathbb{R}^{d}\) with \(\dim_{\mathrm{H}}K>s\) such that \((\Gamma\setminus\{0\})\cap(K-K)=\emptyset\)._
We envision that such a class \(\mathcal{C}(s_{0})\) might consist of curves of bounded type, for some bound depending quantitatively on \(s_{0}\).
### Proof structure
The rest of the article is organized as follows: In Section 2, we prove Theorem 1.5 and therefore Theorem 1.1, conditional on three key propositions that highlight the main technical tools required:
* Given measures \(\mu\) and \(\pi\), we introduce in Proposition 2.2 a general two-point configuration integral \(\mathcal{I}[\mu;\pi]\) which, if nonzero, signals nonempty intersection of \(\operatorname{supp}\mu-\operatorname{supp}\mu\) and \(\operatorname{supp}\pi\).
* Given a finite type curve \(\Gamma\), the above configuration integral is estimated in Proposition 2.3 under assumptions of finite energy and spectral gap for the measure \(\mu\), and with \(\pi\) being a natural measure supported on \(\Gamma\setminus\{0\}\).
* Given a set \(K\) of large enough Hausdorff dimension, Proposition 2.4, ensures that a measure \(\mu\) satisfying the energy and spectral gap conditions (required for the application of Proposition 2.3) exists on \(K\).
These three propositions are proved in Sections 6, 7, and 8 respectively, concluding the proof of Theorems 1.5 and 1.1. All three propositions extend statements of a similar nature developed in [26] for the standard parabola. However, the current versions are distinctive in the following ways:
* The proof in [26] relies heavily on the anisotropic dilation-invariance of the standard parabola. This feature is not available for general curves, and one of the main contributions of this article lies in providing the necessary workaround.
* Dependencies of the dimensional threshold \(\varepsilon\) and the pattern \(\gamma\in(K-K)\cap(\Gamma\setminus\{0\})\) on underlying parameters (needed for Theorem 1.5) are made explicit.
In Section 3, we sketch the proof of a known result on Hölder-continuous functions whose graphs have high Hausdorff dimension. This result plays an essential role in the proofs of Theorems 1.3 and 1.4, which appear in Sections 4 and 5, respectively. In Section 9, we explain how our methods can be reinterpreted using Hausdorff dimension in a suitable metric space. A few technical results are relegated to the Appendix.
## 2. Finite type patterns are unavoidable
### Standardization
A curve of finite type at the origin may be represented as the image of many functions. The goal of this subsection is to find a parametrization that is most convenient for later usage.
Let us fix a curve \(\Gamma\subset\mathbb{R}^{d}\) of type \(\mathtt{N}\) at the origin. Then, by Definition 6 on page 3, there exists a compact interval \(\mathrm{I}\subset\mathbb{R}\), a point \(\mathtt{t}\in\mathrm{I}\), and a smooth function \(\Phi\colon\mathrm{I}\to\mathbb{R}^{d}\) that is
vanishing of type \(\mathtt{N}\) at \(\mathtt{t}\) such that \(\Gamma\supseteq\Phi(\mathtt{I})\). As noted above, we may assume without loss of generality that \(\mathtt{t}=0\), so that \(0\in\mathtt{I}\). We may also assume that
\[\mathtt{I}=[0,1]. \tag{2.1}\]
To justify this, we need to find a function \(\widetilde{\Phi}\colon[0,1]\to\mathbb{R}^{d}\) that is vanishing of type \(\mathtt{N}\) at the origin such that \(\Gamma\supseteq\widetilde{\Phi}([0,1])\). There are many ways to construct such a function using \(\Phi\); one way is the following: Let \(\mathtt{I}=:[a,b]\), so that \(a\leq 0\leq b\) with at least one of the inequalities being strict, and define
\[\widetilde{\Phi}(t):=\begin{cases}\Phi(at)&\text{if }a<0,\\ \Phi(bt)&\text{if }a=0\end{cases}\qquad\text{for }t\in[0,1].\]
Replacing \(\Phi\) by \(\widetilde{\Phi}\), we have (2.1).
For each \(i\in\{1,\ldots,d\}\), let \(\Phi_{i}\colon\mathtt{I}\to\mathbb{R}\) denote the \(i^{\text{th}}\) component of \(\Phi\), so that \(\Phi=(\Phi_{1},\ldots,\Phi_{d})\). The functions \(\Phi_{i}\) can be expressed as
\[\Phi_{i}(t)=t^{\mathtt{n}_{i}}\phi_{i}(t) \tag{2.2}\]
for some positive integers \(\mathtt{n}_{1},\ldots,\mathtt{n}_{d}\in\{1,\ldots,\mathtt{N}\}\) and some smooth functions \(\phi_{i}\colon\mathtt{I}\to\mathbb{R}\) satisfying \(\phi_{i}(0)\neq 0\). We will make the following assumptions about \(\mathtt{n}_{i}\) and \(\phi_{i}\):
\[1\leq\mathtt{n}_{1}<\mathtt{n}_{2}<\cdots<\mathtt{n}_{d}= \mathtt{N}, \tag{2.3}\] \[\phi_{1}(0)=\phi_{2}(0)=\cdots=\phi_{d}(0)=1. \tag{2.4}\]
These assumptions also require justification, which is provided in part by Lemma 2.1 below. There, we prove the existence of an invertible linear map \(\mathbf{L}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) such that the composition \(\mathbf{L}\circ\Phi\) has the properties (2.3) and (2.4). It is straightforward to check that \(\mathbf{L}(\Gamma)\) is unavoidable with a dimensional threshold of \(\varepsilon\) if and only if \(\Gamma\) is unavoidable with the same threshold. Therefore, replacing \(\Phi\) by \(\mathbf{L}\circ\Phi\), we may assume without loss of generality that \(\Phi\) satisfies the desired properties (2.3) and (2.4). The proof of Lemma 2.1 is deferred to the Appendix.
**Lemma 2.1**.: _Let \(\Theta\colon\mathtt{I}\to\mathbb{R}^{d}\) be a smooth function that is vanishing of type \(\mathtt{N}\) at the origin. Then there exists an invertible linear map \(\mathbf{L}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) such that \(\Phi:=\mathbf{L}\circ\Theta\) obeys (2.2) with the accompanying integers \(\mathtt{n}_{i}\) and functions \(\phi_{i}\) obeying (2.3) and (2.4), respectively._
If \(\Phi\colon\mathtt{I}\to\mathbb{R}^{d}\) obeys (2.1)-(2.4), then we say that \(\Phi\) is in _standard form_. Lemma 2.1 (and the discussion preceding it) implies that any curve of finite type at the origin contains, after a harmless invertible linear transformation, the image of a function in standard form.
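To make the standardization concrete, here is a small sympy sketch; the curve and the matrix \(\mathbf{L}\) below are illustrative choices of our own, not taken from the proof of Lemma 2.1. Eliminating the shared leading \(t^{2}\) term of a two-component curve produces a parametrization satisfying (2.2)-(2.4).

```python
from sympy import Matrix, Rational, symbols

t = symbols('t')
Theta = Matrix([2*t**2 + t**3, t**2 - t**4])   # a curve vanishing of finite type at the origin
L = Matrix([[Rational(1, 2), 0],               # invertible linear map chosen by hand:
            [1, -2]])                          # the second row cancels the common t^2 term
Phi = (L * Theta).expand()
print(Phi.T)   # components t**2 + t**3/2 and t**3 + 2*t**4, so n = (2, 3) and phi_i(0) = 1
```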
A constant appearing in the proof of Theorem 1.5 (or any propositions used therein) is said to be _admissible_ if it depends only on \(d\) and \(\mathtt{N}\). In particular, the dimensional threshold \(\varepsilon_{\mathtt{N}}\) provided by the theorem is to be admissible. It will therefore be important to indicate admissibility, or otherwise, of constants that appear in the proof.
### A few key propositions
We now formulate the main steps from which Theorem 1.5, and hence Theorem 1.1, will easily follow. Each step, including any accompanying definitions, is a quantitative adaptation of a similar idea occurring in [26]. These propositions will be proved in Sections 6, 7, and 8 respectively.
#### 2.2.1. Nonvanishing of a configuration integral
A recurring feature in the study of configurations is the formulation of an appropriate integral whose positivity signals the presence of the desired configuration. We describe below the integral relevant to our problem.
**Proposition 2.2**.: _Fix a Schwartz function \(\psi\colon\mathbb{R}^{d}\to\mathbb{C}\), and set \(\psi_{\delta}:=\delta^{-d}\psi(\delta^{-1}\cdot)\) for \(\delta>0\). Let \(\mu\) be any compactly supported Borel probability measure on \(\mathbb{R}^{d}\), and let \(\pi\) be any finite Borel measure on \(\mathbb{R}^{d}\). Assume that_
\[\mathcal{I}[\mu,\pi]:=\liminf_{\delta\searrow 0}\Big{|}\int(\mu*\psi_{ \delta})*\pi\,d\mu\Big{|}>0. \tag{2.5}\]
_Then \(\operatorname{supp}\pi\cap(\operatorname{supp}\mu-\operatorname{supp}\mu)\neq\emptyset\)._
In our proof of Theorem 1.5, we will apply Proposition 2.2 to measures \(\mu\) and \(\pi\) supported on (affine images of) \(K\) and \(\Gamma\setminus\{0\}\), respectively. This will eventually yield the desired conclusion that \((\Gamma\setminus\{0\})\cap(K-K)\neq\emptyset\).
Although Proposition 2.2 holds for any choice of Schwartz function \(\psi\), we will fix a convenient \(\psi\) with properties that simplify certain parts of the subsequent argument. Specifically, we now take \(\psi\) to be the normalized Gaussian
\[\psi(x):=e^{-\pi|x|^{2}} \tag{2.6}\]
(which is its own Fourier transform) and record the following properties in particular:
\[\psi\geq 0,\qquad\psi(0)=\|\psi\|_{\infty}=\|\widehat{\psi}\|_{ \infty}=1,\qquad|\widehat{\psi}(\xi)-\widehat{\psi}(0)|\leq\pi|\xi|^{2}. \tag{2.7}\]
Here, \(\widehat{\psi}\) denotes the Fourier transform of \(\psi\). For any Borel measure \(\mu\), let us define
\[\mu_{\delta}:=\mu*\psi_{\delta}, \tag{2.8}\]
where \(\psi_{\delta}:=\delta^{-d}\psi(\delta^{-1}\cdot)\), as above.
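As a quick sanity check of (2.6)-(2.7), the following sympy lines (a one-dimensional illustration of our own; the \(d\)-dimensional statement follows by taking products over coordinates) confirm that the normalized Gaussian is its own Fourier transform under the same ordinary-frequency convention used here.

```python
from sympy import symbols, exp, pi, fourier_transform, simplify

x, xi = symbols('x xi', real=True)
psi = exp(-pi * x**2)
# sympy's fourier_transform integrates against e^{-2*pi*I*x*xi}, matching (2.6)-(2.7)
print(simplify(fourier_transform(psi, x, xi)))   # exp(-pi*xi**2): psi is its own transform
```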
#### 2.2.2. Role of energy and spectral gap in identifying patterns
Proposition 2.2 is quite general, in the sense that it does not require any special properties of \(\mu\) or \(\pi\). Our next goal is to ensure that, for a given curve \(\Gamma\) of finite type at the origin, (2.5) holds for an appropriate choice of \(\mu\) and \(\pi\) with \(\pi\) (essentially) supported on \(\Gamma\setminus\{0\}\). We will also need to describe the admissible and inadmissible constants involved in this choice.
Toward this end, let \(\Phi\colon[0,1]\to\mathbb{R}^{d}\) be any function in standard form that is vanishing of type N at the origin. For each \(i\in\{1,\ldots,d\}\), assumptions (2.2)-(2.4) imply that
\[\Phi_{i}^{(\mathfrak{n}_{i})}(0)=\mathfrak{n}_{i}!\phi_{i}(0)= \mathfrak{n}_{i}!\in[1,\texttt{N}!]\]
and
\[\lim_{t\searrow 0}\frac{\Phi_{i}^{(\ell)}(t)}{t^{\mathfrak{n}_{i}- \ell}}=\frac{\mathfrak{n}_{i}!}{(\mathfrak{n}_{i}-\ell)!}\in[1,\texttt{N}!] \quad\text{for $0\leq\ell<\mathfrak{n}_{i}$.}\]
Here, \(\Phi_{i}^{(\ell)}\) denotes the \(\ell^{\text{th}}\) derivative of \(\Phi_{i}\), with the convention that \(\Phi_{i}^{(0)}\equiv\Phi_{i}\). Using the smoothness of \(\Phi\) near the origin, one can find a large integer \(\mathsf{J}_{0}=\mathsf{J}_{0}(\Phi)\) depending on \(\Phi\) (and therefore inadmissible) such that
\[|\Phi_{i}^{(\ell)}(t)|\leq 2\texttt{N}!|t|^{\mathfrak{n}_{i}- \ell}\quad\text{for all $t\in[0,2^{-\mathsf{J}_{0}}]$ and $0\leq\ell<\mathfrak{n}_{i}$,} \tag{2.9}\]
and
\[\frac{1}{2}\leq|\Phi_{i}^{(\mathfrak{n}_{i})}(t)|\leq 2\mathtt{N}!\quad\text{for all }t\in[0,2^{-\mathsf{J}_{0}}]. \tag{2.10}\]
Property (2.2) and the lower bound in (2.10) together imply that
\[\Phi(t)\neq 0\quad\text{for all }t\in(0,2^{-\mathsf{J}_{0}}]. \tag{2.11}\]
Thus, the origin is the single isolated zero of \(\Phi\) on \([0,2^{-\mathsf{J}_{0}}]\).
Let us define the rescaled functions
\[\Phi^{j}:=(2^{\mathfrak{n}_{1}j}\Phi_{1}(2^{-j}\cdot),\ldots,2^{ \mathfrak{n}_{d}j}\Phi_{d}(2^{-j}\cdot)),\qquad j\geq 0, \tag{2.12}\]
which interpolate between \(\Phi\) (when \(j=0\)) and the monomial curve \(t\mapsto(t^{\mathfrak{n}_{1}},\ldots,t^{\mathfrak{n}_{d}})\) (when \(j=\infty\)). For each \(j\geq\mathsf{J}_{0}\) and \(c\in(0,1]\), let \(\pi=\pi[\Phi;j,c]\) denote the singular measure defined by the formula
\[\int fd\pi:=\int_{c}^{1}f(\Phi^{j}(s))ds \tag{2.13}\]
and supported on \(\Phi^{j}([c,1])\subset\mathbb{R}^{d}\setminus\{0\}\).
For \(\sigma\in(0,d)\), let \(I_{\sigma}(\mu)\) denote the \(\sigma\)-dimensional energy of a Borel measure \(\mu\):
\[I_{\sigma}(\mu):=\iint|x-y|^{-\sigma}d\mu(x)d\mu(y)=\gamma(d, \sigma)\int|\widehat{\mu}(\xi)|^{2}|\xi|^{\sigma-d}d\xi; \tag{2.14}\]
here, \(\gamma(d,\sigma)\) is a positive constant (see [34, SS3.4-3.5]), and \(\widehat{\mu}\) denotes the Fourier transform of \(\mu\). For each positive integer \(N\), let
\[\sigma_{N}:=d-\frac{1}{2N}\qquad\text{and}\qquad\gamma_{N}:= \gamma(d,\sigma_{N}). \tag{2.15}\]
**Proposition 2.3**.: _For each \(\mathtt{N}\geq d\), there exists an admissible constant \(\mathtt{L}_{\mathtt{N}}\geq 1\) with the following property: Let \(\Phi\colon[0,1]\to\mathbb{R}^{d}\) be any function in standard form that is vanishing of type \(\mathtt{N}\) at the origin. Then there exists an (inadmissible) integer \(\mathtt{J}=\mathtt{J}(\Phi)\geq\mathtt{J}_{0}(\Phi)\) such that if_
* \(\mathtt{A},\mathtt{B},\mathtt{C}\) _are any choice of constants satisfying_ \[\mathtt{A}^{d}\geq 4\mathtt{L}_{\mathtt{N}}^{2},\qquad\mathtt{B}\geq( \mathtt{L}_{\mathtt{N}}\mathtt{A}^{4d}\mathtt{C})^{2\mathtt{N}},\qquad\mathtt{ C}\geq 1,\] (2.16) _and_
* \(\mu\) _is any Borel probability measure on_ \([0,1]^{d}\) _that obeys the energy and spectral gap conditions_ \[I_{\sigma_{\mathtt{N}}}(\mu)\leq\mathtt{C}\qquad\text{and}\qquad \int_{|\xi|\in[\mathtt{A},\mathtt{B}]}|\widehat{\mu}(\xi)|^{2}d\xi\leq\mathtt{ A}^{-4d},\] (2.17)
_then_
\[\mathscr{I}[\mu,\pi[\Phi;j,\mathtt{A}^{-6d}]]\geq\mathtt{A}^{-4d }\quad\text{for all }j\geq\mathtt{J}. \tag{2.18}\]
Here, \(\mathtt{J}_{0}\) refers to the integer appearing in (2.9)-(2.11), \(\sigma_{\mathtt{N}}\) is the index defined in (2.15), \(\mathcal{I}\) refers to the configuration integral in (2.5), and \(\pi\) is the measure introduced in (2.13).
There is nothing special about the choice of \(\sigma_{\tt N}\) in (2.15), other than it being sufficiently close to \(d\). We could have chosen any value for \(\sigma_{\tt N}\) from the interval \((d-1/{\tt N},d)\) and Proposition 2.3 would still hold. Similarly, there was some flexibility when choosing the exponent \(4d\) that appears in the spectral gap condition in (2.17). We could replace \(4d\) by any number strictly larger than \(d+2\), provided we also make minor adjustments to the conditions (2.16) and the conclusion (2.18). The choice of \(4d\) simply gives nicer-looking exponents throughout the proof.
#### 2.2.3. Construction of a measure with finite energy and spectral gap
Given a finite type curve \(\Gamma\) represented by the function \(\Phi\), the conclusion (2.18) and Proposition 2.2 imply, roughly, that \((\Gamma\setminus\{0\})\cap({\rm supp}\,\mu-{\rm supp}\,\mu)\neq\emptyset\). Given an arbitrary Borel set \(K\subseteq\mathbb{R}^{d}\) of large Hausdorff dimension, it remains to ascertain whether a probability measure \(\mu\) obeying the hypotheses of Proposition 2.3 can be found with support in \(K\). This is the objective of the next proposition, which says, in short, that such a measure can be found, not in \(K\) itself but in a certain affine copy of \(K\).
Before stating the proposition, we introduce the necessary notation and terminology. Fix a \(d\)-tuple of positive integers \(\vec{n}=(n_{1},\ldots,n_{d})\). Unlike in (2.3), the entries of \(\vec{n}\) need not be distinct or ordered. Define
\[{\mathcal{D}}^{*}={\mathcal{D}}^{*}[\vec{n}]:=\bigcup_{j\in\mathbb{Z}}{ \mathcal{D}}_{j},\quad\text{where}\quad{\mathcal{D}}_{j}={\mathcal{D}}_{j}[ \vec{n}]:=\Big{\{}a+\prod_{i=1}^{d}[0,2^{-n_{i}j})\colon a\in\prod_{i=1}^{d}2 ^{-n_{i}j}\mathbb{Z}\Big{\}}. \tag{2.19}\]
Thus, \({\mathcal{D}}^{*}\) consists of all dyadic boxes in \(\mathbb{R}^{d}\) of dimensions \(2^{-n_{1}j}\times\cdots\times 2^{-n_{d}j}\) for some integer \(j\). We also set
\[{\mathcal{D}}^{*}_{J}={\mathcal{D}}^{*}_{J}[\vec{n}]:=\bigcup_{j\geq J}{ \mathcal{D}}_{j} \tag{2.20}\]
for any integer \(J\). Now, fix a box \(Q\in{\mathcal{D}}_{j}\) and write \(Q=a+\prod_{i=1}^{d}[0,2^{-n_{i}j})\). We denote the "length" of \(Q\) by
\[\ell(Q):=2^{-j} \tag{2.21}\]
and define the rescaling function
\[{\bf T}_{Q}(x):=(2^{n_{1}j}(x_{1}-a_{1}),\ldots,2^{n_{d}j}(x_{d}-a_{d})) \tag{2.22}\]
that maps \(Q\) onto \([0,1)^{d}\). Given a finite Borel measure \(\nu\) such that \(\nu(Q)>0\), the _blow-up_ of \(\nu\) with respect to \(Q\) is defined as
\[\nu^{Q}:=\|\nu|_{Q}\|^{-1}{\bf T}_{Q}(\nu|_{Q}), \tag{2.23}\]
where \({\bf T}_{Q}(\nu|_{Q})\) denotes the push-forward of \(\nu|_{Q}\) by \({\bf T}_{Q}\). Thus, \(\nu^{Q}\) is always a probability measure supported on the closure of \({\bf T}_{Q}({\rm supp}\,\nu\cap Q)\), a subset of \([0,1]^{d}\):
\[\nu^{Q}(E)=\frac{\nu({\bf T}_{Q}^{-1}(E)\cap Q)}{\nu(Q)}\quad\text{ for any Borel set $E\subseteq\mathbb{R}^{d}$.} \tag{2.24}\]
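For orientation, here is a minimal numerical sketch of the rescaling map (2.22); the particular box and point below are illustrative choices. With \(\vec{n}=(2,3)\) and \(j=2\), the box \(Q\) has side lengths \(2^{-4}\times 2^{-6}\), and \(\mathbf{T}_{Q}\) stretches it anisotropically onto \([0,1)^{2}\).

```python
import numpy as np

def T_Q(x, a, n_vec, j):
    # anisotropic rescaling of (2.22): maps Q = a + prod_i [0, 2^{-n_i j}) onto [0, 1)^d
    x, a, n_vec = (np.asarray(v, dtype=float) for v in (x, a, n_vec))
    return 2.0 ** (n_vec * j) * (x - a)

a = np.array([0.25, 0.125])                  # a corner in prod_i 2^{-n_i j} Z for n = (2, 3), j = 2
x = a + np.array([2.0**-5, 2.0**-7])         # a point inside Q
print(T_Q(x, a, (2, 3), 2))                  # [0.5 0.5]
```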
**Proposition 2.4**.: _For each \(\mathtt{N}\geq d\), there exist admissible constants \(\mathtt{A},\mathtt{B},\mathtt{C}\) obeying (2.16) and an admissible constant \(\varepsilon=\varepsilon(\mathtt{A},\mathtt{B},\mathtt{C})>0\) with the following property: Let \(\Phi\colon[0,1]\to\mathbb{R}^{d}\) be any smooth function in standard form that is vanishing of type \(\mathtt{N}\) at the origin, and let \(K\subseteq\mathbb{R}^{d}\) be any Borel set with \(\dim_{\mathrm{H}}K>d-\varepsilon\). Then there exists_
* _a dyadic box_ \(\mathtt{Q}\in\mathcal{D}_{\mathtt{J}}^{*}[\mathtt{\vec{n}}]\) _with_ \(\mathtt{J}=\mathtt{J}(\Phi)\) _as in Proposition_ 2.3 _and_ \(\mathtt{\vec{n}}=(\mathtt{n}_{1},\ldots,\mathtt{n}_{d})\) _as in (_2.3_), and_
* _a finite Borel measure_ \(\nu\) _with_ \(\operatorname{supp}\nu\subseteq K\cap\overline{\mathtt{Q}}\) _and_ \(\nu(\mathtt{Q})>0\)__
_such that the blow-up \(\mu:=\nu^{\mathtt{Q}}\) satisfies the energy and spectral gap conditions in (2.17)._
Here, \(\overline{E}\) denotes the topological closure of \(E\).
### Proof of Theorems 1.5 and 1.1, assuming Propositions 2.2-2.4
The proof is a concatenation of the three propositions, in reverse order. Let \(\Gamma\subset\mathbb{R}^{d}\) be a smooth curve that is vanishing of type \(\mathtt{N}\) at the origin. Let \(K\subseteq\mathbb{R}^{d}\) be a Borel set with \(\dim_{\mathrm{H}}K>d-\varepsilon\), with \(\varepsilon\) as in Proposition 2.4. We may assume that \(\Gamma\supseteq\Phi([0,1])\), where \(\Phi\colon[0,1]\to\mathbb{R}^{d}\) is in standard form and vanishing of type \(\mathtt{N}\) at the origin.
Let \(\mathtt{Q}\) and \(\mu\) be as in the conclusion of Proposition 2.4 when applied to \(\Phi\) and \(K\). Thus, in particular, \(\mu\) is a probability measure supported on \(\mathbf{T}_{\mathtt{Q}}(K\cap\overline{\mathtt{Q}})\subseteq[0,1]^{d}\) that satisfies the criteria (2.17) for some constants \(\mathtt{A},\mathtt{B},\mathtt{C}\) obeying (2.16). Let \(\mathtt{j}\geq\mathtt{J}(\Phi)\) be such that \(\mathtt{Q}\in\mathcal{D}_{\mathtt{j}}\). Proposition 2.3 gives that \(\mathcal{I}[\mu,\pi[\Phi;\mathtt{j},\mathtt{A}^{-6d}]]>0\), and consequently, by Proposition 2.2 there exists some \(x\in\operatorname{supp}\pi[\Phi;\mathtt{j},\mathtt{A}^{-6d}]\cap(\operatorname {supp}\mu-\operatorname{supp}\mu)\).
The measure \(\pi[\Phi;\mathtt{j},\mathtt{A}^{-6d}]\) is supported on \(\Phi^{\mathtt{j}}([\mathtt{A}^{-6d},1])\), where \(\Phi^{\mathtt{j}}\) is as in (2.12), while \(\mu\) is supported on \(\mathbf{T}_{\mathtt{Q}}(K)\). Writing \(x=\Phi^{\mathtt{j}}(s)\) for some \(s\in[\mathtt{A}^{-6d},1]\) and setting \(\gamma:=\Phi(2^{-\mathtt{j}}s)\), it follows from (2.11) and (2.22) that \(\gamma\in\Gamma\setminus\{0\}\) and \(\gamma\in K-K\).
**Remark 2.5**.: The above argument relies crucially on the relationship between the curve \(\Gamma\) and the collection \(\mathcal{D}^{*}=\mathcal{D}^{*}[\mathtt{\vec{n}}]\) of dyadic boxes. We would like to explain how these boxes were chosen. For simplicity, we will consider the example of \(\Gamma=\operatorname{im}\Phi\) with
\[\Phi(t)=(t^{2},t^{3}+t^{4}),\qquad t\in[0,1],\]
and thus \(\mathtt{\vec{n}}=(2,3)\). Roughly speaking, Propositions 2.3 and 2.2 imply that if \(K\) supports a measure with a spectral gap, then \(K-K\) intersects \(\Gamma\setminus\{0\}\) as desired. Proposition 2.4 provides a rectangle \(\mathtt{Q}\in\mathcal{D}^{*}\) such that the affine image \(\mathbf{T}_{\mathtt{Q}}(K)\) of \(K\) under the rescaling map for \(\mathtt{Q}\) supports a measure with this property. In an ideal scenario, the dimensions of our dyadic rectangles in \(\mathcal{D}^{*}\) would be chosen so that \(\Gamma\) would be invariant under the (linear part of the) inverse scaling map \(\mathbf{T}_{\mathtt{Q}}^{-1}\). This would then allow us to pull back the desired pattern from \(\mathbf{T}_{\mathtt{Q}}(K)\) to \(K\). However, the curve \(\Gamma\) in the present example does not enjoy any such scaling relation. Instead, it possesses an approximate scaling relation based on its leading order behaviour: The rescaled functions
\[\Phi^{j}(t):=(2^{2j}\Phi_{1}(2^{-j}t),2^{3j}\Phi_{2}(2^{-j}t))=(t^{2},t^{3}+2^ {-j}t^{4})\]
yield a sequence of curves \(\Gamma_{j}:=\operatorname{im}\Phi^{j}\) that approach the "leading order" curve
\[\Gamma_{\infty}:=\{(t^{2},t^{3})\colon t\in[0,1]\}\]
as \(j\to\infty\), and this limit curve is invariant under the transformation \((x_{1},x_{2})\mapsto(2^{2j}x_{1},2^{3j}x_{2})\) for any \(j\). We define the rectangles in \(\mathcal{D}_{j}\) to have dimensions \(2^{-2j}\times 2^{-3j}\), so that (the linear parts of) their rescaling maps coincide with this transformation. Since Proposition 2.3 is in fact uniform in \(j\) (sufficiently large), we can show that \(\mathbf{T}_{\mathfrak{Q}}(K)-\mathbf{T}_{\mathfrak{Q}}(K)\) contains a nonzero point \(\Phi^{\mathfrak{j}}(s)\), where \(\mathfrak{j}\) is such that \(\mathfrak{Q}\in\mathcal{D}_{\mathfrak{j}}\). As desired, the rescalings \(K\to\mathbf{T}_{\mathfrak{Q}}(K)\) and \(\Phi\to\Phi^{\mathfrak{j}}\) are compatible, in the sense that now \(K-K\) must contain the nonzero point \(\Phi(2^{-\mathfrak{j}}s)\in\Gamma\).
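The approximate scaling relation described above can also be seen numerically; the following sketch (our own illustration, using the same model curve) evaluates the rescaled parametrizations (2.12) at a fixed parameter and shows the second coordinate approaching the monomial value \(t^{3}\).

```python
import numpy as np

def Phi(t):
    return np.array([t**2, t**3 + t**4])            # the model curve of this remark

def Phi_j(t, j, n_vec=(2, 3)):
    vals = Phi(2.0**(-j) * t)                       # rescaled curve (2.12)
    return np.array([2.0**(n_vec[i] * j) * vals[i] for i in range(2)])

t = 0.7
for j in (0, 3, 6, 12):
    print(j, Phi_j(t, j))                           # second coordinate tends to t**3 = 0.343
```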
## 3. High-dimensional graphs
The proofs of Theorems 1.3 and 1.4 require construction of counterexamples to unavoidability. For these, we will utilize the existence of high-dimensional one-parameter graphs.
**Proposition 3.1**.: _For every \(s\in[1,d)\), there exists a Hölder-continuous function \(F_{s}\colon[0,1]\to\mathbb{R}^{d-1}\) such that_
\[\|F_{s}\|_{C^{0,\alpha}}<\infty\quad\text{for all}\quad 0<\alpha<\min\Big{\{} \frac{1}{s},\frac{d-s}{d-1}\Big{\}}, \tag{3.1}\]
_and_
\[\dim_{\mathrm{H}}(\operatorname{graph}F_{s})=s,\quad\text{where}\quad \operatorname{graph}F_{s}:=\{(t,F_{s}(t))\colon t\in[0,1]\}; \tag{3.2}\]
_here, \(\|\cdot\|_{C^{0,\alpha}}\) denotes the Hölder norm_
\[\|f\|_{C^{0,\alpha}}:=\sup_{t\in[0,1]}|f(t)|+\sup_{t,t^{\prime}\in[0,1]\colon t \neq t^{\prime}}\frac{|f(t)-f(t^{\prime})|}{|t-t^{\prime}|^{\alpha}}.\]
This classical result, originally due to Besicovitch and Ursell [3] for \(d=2\), now has many variants in the literature; see for example [28, 23, 29, 30, 44, 47]. It has been proved in its above-stated form by Kahane [20], who shows through a random argument that such functions \(F_{s}\) are in fact plentiful. We briefly outline his argument, pointing the reader to the relevant sections of the text for a complete proof.
In [20, Chapter 18] and for \(n,d\geq 1\), Kahane introduces an \((n,d,\gamma)\) Gaussian process \(\{X_{t}\colon t\in\mathbb{R}^{n}\}\) with values in \(\mathbb{R}^{d}\) such that
\[\mathbb{E}(|X_{t}-X_{t^{\prime}}|^{2})=d|t-t^{\prime}|^{\gamma}.\]
Such a process is shown to exist in [20, Chapter 18, SS2] when \(0<\gamma\leq 2\), with an almost sure continuous version, i.e. with \(t\mapsto X(t;\omega)\) being a continuous function of \(t\) for almost every \(\omega\). More precisely, the modulus of continuity \(\omega_{X}\) of \(X(\cdot;\omega)\) is shown to obey
\[\omega_{X}(h)=\sup_{|t-t^{\prime}|\leq h}|X(t)-X(t^{\prime})|=O\big{(}\sqrt{|h |^{\gamma}\log(1/|h|)}\big{)}\qquad\text{ almost surely} \tag{3.3}\]
on every compact subset of \(\mathbb{R}^{n}\). This is stated in equation (3) of [20, Chapter 18], and follows from the content of [20, Chapter 14]. The condition (3.3) implies that (almost surely)
\[X(t)\text{ is H\"{o}lder continuous with exponent }\gamma/2-\varepsilon\text{ for every } \varepsilon\in(0,\gamma/2). \tag{3.4}\]
In [20, Chapter 18, SS7, Theorem 7], Kahane proves:
_For any compact set \(E\subset\mathbb{R}^{n}\), the relation_
\[\dim_{\mathrm{H}}\bigl{(}\operatorname{graph}X|_{E}\bigr{)}=\min\Bigl{\{} \frac{2}{\gamma}\dim_{\mathrm{H}}E,\,\dim_{\mathrm{H}}E+\Big{(}1-\frac{\gamma }{2}\Big{)}d\Bigr{\}}\]
_holds almost surely_.
Here, \(\operatorname{graph}X|_{E}\) denotes the set \(\{(t,X(t))\colon t\in E\}\). We can now obtain Proposition 3.1 from Kahane's theorem as follows: Set \(n=1\) and \(E=[0,1]\), and replace \(d\) by \((d-1)\) and \(X(t)\) by \(F_{s}(t)\) to get
\[\dim_{\operatorname{H}}(\operatorname{graph}F_{s})=\dim_{\operatorname{H}}(\operatorname{graph}X|_{[0,1]})=\min\Bigl\{\frac{2}{\gamma},d-\frac{\gamma}{2}(d-1)\Bigr\}. \tag{3.5}\]
If we choose
\[\gamma=\min\Big{\{}\frac{2}{s},\frac{2(d-s)}{d-1}\Big{\}}=\begin{cases}\frac{ 2}{s}&\text{if $1\leq s\leq d-1$},\\ \frac{2(d-s)}{d-1}&\text{if $d-1<s<d$},\end{cases}\]
then (3.4) implies the first conclusion of the proposition, and (3.5) and a bit of arithmetic confirm the second.
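That bit of arithmetic can also be checked mechanically; the short script below (our own illustration, with an arbitrary choice of \(d\) and sample values of \(s\)) verifies that the displayed choice of \(\gamma\) makes the right-hand side of (3.5) equal to \(s\) and makes \(\gamma/2\) coincide with the Hölder threshold in (3.1).

```python
d = 4                                               # illustrative dimension
for s in (1.0, 1.8, 2.5, 3.0, 3.7):
    gamma = min(2 / s, 2 * (d - s) / (d - 1))
    dim = min(2 / gamma, d - gamma * (d - 1) / 2)   # right-hand side of (3.5)
    holder = min(1 / s, (d - s) / (d - 1))          # threshold in (3.1)
    print(s, round(dim, 10), round(gamma / 2 - holder, 10))   # dim equals s; difference is 0
```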
## 4. Graphs of infinite type are avoidable
The goal of this section is to prove Theorem 1.3. We will in fact prove a slightly stronger statement (Proposition 4.1 below), namely an analogue of the theorem for curves that are not necessarily graphs, but graph-like. Additionally, we formulate a quantitative partial-avoidance result for graph-like curves of finite type (Proposition 4.2); the proof is sketched in the Appendix.
Let us set up the relevant definition. We say that a smooth curve \(\Gamma\subset\mathbb{R}^{d}\) is _graph-like_ if there exists an invertible linear transformation \(\mathbf{L}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) and a smooth function \(\Phi\colon\mathbf{I}\to\mathbb{R}^{d}\) such that
1. \(\mathbf{I}\subset\mathbb{R}\) is a nondegenerate compact interval,
2. \(\Phi=:(\Phi_{1},\underline{\Phi})\) is of the form \(\Phi_{1}(t)=t^{\mathtt{m}}\phi(t)\) for some integer \(\mathtt{m}\geq 1\) and some smooth function \(\phi\colon\mathbf{I}\to\mathbb{R}\) such that \(\inf\{|\phi(t)|\colon t\in\mathbf{I}\}>0\),
3. \(\mathbf{L}(\Gamma)=\Phi(\mathbf{I})\).
In the above, if \(\mathbf{L}\) is the identity, \(\mathtt{m}=1\), and \(\phi\equiv 1\), then \(\Gamma=\{(t,\underline{\Phi}(t))\colon t\in\mathbf{I}\}\) is an ordinary graph.
**Proposition 4.1**.: _Let \(\Gamma\subset\mathbb{R}^{d}\) be a smooth graph-like curve that contains the origin. Then \(\Gamma\) is unavoidable if and only if \(\Gamma\) is of finite type at the origin. In particular, Theorem 1.3 holds._
### Proof of Proposition 4.1
Theorem 1.1 provides one direction of the proposition, namely that if \(\Gamma\) is of finite type at the origin, then \(\Gamma\) is unavoidable. Toward proving the other direction, we assume that \(\Gamma\) is of infinite type at the origin and aim to show that \(\Gamma\) is avoidable. Fix \(\mathbf{L}\) and \(\Phi\colon\mathbf{I}\to\mathbb{R}^{d}\) satisfying conditions (i)-(iii) in the definition of graph-like curve. We may assume that \(\mathbf{L}\) is the identity, since unavoidability and type are both invariant under the action of invertible linear transformations. The hypothesis that \(\Gamma\) contains the origin, together with conditions (ii) and (iii), implies that \(0\in\mathbf{I}\) and that
\(\Phi(0)=0\). Moreover, \(\Phi\) must be of infinite type at the origin, since we have assumed the same of \(\Gamma\). Consequently, there exists a unit vector \(\mathtt{u}\in\mathbb{R}^{d}\) such that
\[\mathtt{u}\cdot\Phi^{(n)}(0)=0\quad\text{for every $n\geq 0$.} \tag{4.1}\]
Let \(\mathbf{U}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) be the unitary matrix that maps \(\mathtt{e}_{1}=(1,0,\ldots,0)\) to \(\mathtt{u}\). Define \(z:=\mathbf{U}^{-1}\circ\Phi\), and write \(z=(z_{1},\underline{z})\) with \(z_{1}\colon\mathtt{I}\to\mathbb{R}\) and \(\underline{z}\colon\mathtt{I}\to\mathbb{R}^{d-1}\). Then (4.1) implies that
\[z_{1}^{(n)}(0)=[\mathtt{e}_{1}\cdot(\mathbf{U}^{-1}\circ\Phi)]^{(n)}(0)= \mathtt{u}\cdot\Phi^{(n)}(0)=0\quad\text{for every $n\geq 0$.} \tag{4.2}\]
It follows that for each \(n\), there exists \(\mathtt{C}_{n}>0\) such that
\[|z_{1}(t)|\leq\mathtt{C}_{n}|t|^{n}\quad\text{for all $t\in\mathtt{I}$.} \tag{4.3}\]
By condition (ii), we have \(\Phi_{1}^{(\mathtt{m})}(0)=\mathtt{m}!\,\phi(0)\neq 0\), hence \(\Phi^{(\mathtt{m})}(0)\neq 0\) and thus
\[z^{(\mathtt{m})}(0)=\mathbf{U}^{-1}\Phi^{(\mathtt{m})}(0)\neq 0. \tag{4.4}\]
Properties (4.2) and (4.4) together imply that \(\underline{z}^{(\mathtt{m})}(0)\neq 0\). It follows that there exist constants \(\mathtt{c}_{0}>0\) and \(\delta>0\) such that
\[|\underline{z}(t)|\geq\mathtt{c}_{0}|t|^{\mathtt{m}}\quad\text{for all }t\in\mathtt{I}\cap[-\delta,\delta]. \tag{4.5}\]
Now, fix \(s\in[1,d)\) and consider the graph of \(F_{s}\), as defined in (3.2). Given any \(r>0\), the countable stability of Hausdorff dimension allows us to find a cube \(Q_{r}\subset\mathbb{R}^{d}\) of diameter \(r\) such that
\[\dim_{\mathrm{H}}(\mathbf{U}(\operatorname{graph}F_{s})\cap Q_{r})=\dim_{ \mathrm{H}}(\mathbf{U}(\operatorname{graph}F_{s}))=\dim_{\mathrm{H}}( \operatorname{graph}F_{s})=s.\]
Fix \(\mathtt{r}\) such that
\[0<\mathtt{r}<\mathtt{c}_{1}\min\left\{\delta,\left(\frac{\mathtt{c}_{0}}{\|F_{s}\|_{C^{0,\alpha}}\mathtt{C}_{\mathtt{n}}^{\alpha}}\right)^{\frac{1}{\mathtt{n}\alpha-\mathtt{m}}}\right\}^{\mathtt{m}},\]
where
* \(\mathtt{c}_{1}:=\inf\{|\phi(t)|\colon t\in\mathtt{I}\}>0\),
* \(\alpha\) is any exponent satisfying (3.1),
* \(\|F_{s}\|_{C^{0,\alpha}}\) is the Hölder norm of \(F_{s}\), as in (3.1),
* \(\mathtt{n}\) is any integer such that \(\mathtt{n}\alpha>\mathtt{m}\).
Let
\[K:=\mathbf{U}(\operatorname{graph}F_{s})\cap Q_{\mathtt{r}}.\]
We claim that \((\Gamma\setminus\{0\})\cap(K-K)=\emptyset\). For a contradiction, suppose there exists some \(\gamma\in(\Gamma\setminus\{0\})\cap(K-K)\). Because \(\gamma\) is a member of \(\Gamma\), we can express it as \(\gamma=\Phi(\mathtt{t})\) for some \(\mathtt{t}\in\mathtt{I}\). This \(\mathtt{t}\) obeys
\[\mathtt{c}_{1}|\mathtt{t}|^{\mathtt{m}}\leq|\Phi_{1}(\mathtt{t})|\leq|\Phi(\mathtt{t})|=|\gamma|\leq\operatorname{diam}K\leq\operatorname{diam}Q_{\mathtt{r}}=\mathtt{r},\]
using that \(\gamma\in K-K\). Thus,
\[|\mathtt{t}|<\min\left\{\delta,\left(\frac{\mathtt{c}_{0}}{\|F_{s}\|_{C^{0,\alpha}}\mathtt{C}_{\mathtt{n}}^{\alpha}}\right)^{\frac{1}{\mathtt{n}\alpha-\mathtt{m}}}\right\} \tag{4.6}\]
due to our choice of \(\mathtt{r}\). Using the graph structure of \(K\), we can also express \(\gamma\) as \(\gamma=\mathbf{U}(t-t^{\prime},F_{s}(t)-F_{s}(t^{\prime}))\) for some \(t,t^{\prime}\in[0,1]\). This leads to the relation
\[z(\mathtt{t})=(t-t^{\prime},F_{s}(t)-F_{s}(t^{\prime})).\]
By (4.6), (4.5), (4.3), and the Hölder continuity of \(F_{s}\), we have
\[\mathtt{c}_{0}|\mathtt{t}|^{\mathtt{m}}\leq|\underline{z}(\mathtt{t})|=|F_{s}(t)-F_{s}(t^{\prime})|\leq\|F_{s}\|_{C^{0,\alpha}}|t-t^{\prime}|^{\alpha}=\|F_{s}\|_{C^{0,\alpha}}|z_{1}(\mathtt{t})|^{\alpha}\leq\|F_{s}\|_{C^{0,\alpha}}\mathtt{C}_{\mathtt{n}}^{\alpha}|\mathtt{t}|^{\mathtt{n}\alpha}.\]
This inequality is compatible with (4.6) only if \(\mathtt{t}=0\): indeed, if \(\mathtt{t}\neq 0\), then dividing by \(|\mathtt{t}|^{\mathtt{m}}\) and using \(\mathtt{n}\alpha>\mathtt{m}\) gives \(|\mathtt{t}|\geq\big{(}\mathtt{c}_{0}/(\|F_{s}\|_{C^{0,\alpha}}\mathtt{C}_{\mathtt{n}}^{\alpha})\big{)}^{1/(\mathtt{n}\alpha-\mathtt{m})}\), contradicting (4.6). So \(\gamma=\Phi(0)=0\), contradicting the assumption that \(\gamma\in\Gamma\setminus\{0\}\). Since \(\dim_{\mathrm{H}}K=s\) and \(s\in[1,d)\) was arbitrary, we conclude that \(\Gamma\) is avoidable.
### A quantitative partial-avoidance result
The proof of Proposition 4.1 can be modified slightly to give a quantitative partial-avoidance result for graph-like curves of finite type. To formulate such a statement, we need another definition. Let \(\Gamma\) be a graph-like curve. There are many choices of \(\mathbf{L}\), \(\mathtt{I}\), and \(\Phi(t)=(t^{\mathtt{m}}\phi(t),\underline{\Phi}(t))\) such that conditions (i)-(iii) in the definition of graph-like curve are satisfied. The smallest integer \(\mathtt{m}\) appearing among these parametrizations will be called the _subtype_ of \(\Gamma\). We sketch a proof of the following in the Appendix:
**Proposition 4.2**.: _Let \(\Gamma\) be a graph-like curve of type \(\mathtt{N}\) at the origin and of subtype \(\mathtt{m}\). Then for every \(s<\min\left\{\frac{\mathtt{N}}{\mathtt{m}},d-\frac{(d-1)\mathtt{m}}{\mathtt{N}}\right\}\), there exists a Borel set \(K\subseteq\mathbb{R}^{d}\) with \(\dim_{\mathrm{H}}K\geq s\) such that \((\Gamma\setminus\{0\})\cap(K-K)=\emptyset\)._
Taking \(\Gamma\) to be a curve of subtype \(\mathtt{m}=1\), this result implies that the constant \(\varepsilon_{\mathtt{N}}\) in Theorem 1.5 cannot exceed \(\frac{d-1}{\mathtt{N}}\).
## 5. Polynomial patterns of infinite type are avoidable
In this section, we prove Theorem 1.4, which asserts equivalence between five statements. We will only demonstrate the equivalence of statements 1 and 2, namely that \(\Gamma\) is unavoidable if and only if \(\Gamma\) is of finite type at the origin. It is straightforward to show that statements 2-5 are equivalent; this is left to the reader.
### Proof of Theorem 1.4
Fix a polynomial curve \(\Gamma\subset\mathbb{R}^{d}\) that contains the origin and a polynomial function \(\Phi\colon\mathtt{I}\to\mathbb{R}^{d}\) such that \(\Gamma=\Phi(\mathtt{I})\). We may assume that \(\Phi(0)=0\). As in the proof of Proposition 4.1, one direction of the equivalence between statements 1 and 2 is already supplied by Theorem 1.1, namely that 2 implies 1. We therefore assume that \(\Gamma\) is of infinite type at the origin and aim to show that \(\Gamma\) is avoidable; this would show that 1 implies 2. Toward that end, we note that the parametrization \(\Phi\) must be of infinite type at the origin and, because \(\Phi\) is a polynomial function, this is equivalent to the existence of a unit vector \(\mathtt{u}\in\mathbb{R}^{d}\) such that \(\mathtt{u}\cdot\Phi\equiv 0\). Let \(\mathbf{U}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) be the unitary matrix that maps \(\mathtt{e}_{1}=(1,0,\ldots,0)\) to \(\mathtt{u}\). Fix \(s\in[1,d)\), and let
\[K:=\mathbf{U}(\operatorname{graph}F_{s}),\]
where \(\operatorname{graph}F_{s}\) is as in (3.2). We claim that \((\Gamma\setminus\{0\})\cap(K-K)=\emptyset\). For a contradiction, suppose there exists \(\gamma\in(\Gamma\setminus\{0\})\cap(K-K)\), and let \(z=\mathbf{U}^{-1}(\gamma)\). Writing \(\gamma=\Phi(\mathtt{t})\) for
some \(\mathtt{t}\in\mathtt{I}\), we have
\[z_{1}=\mathtt{e}_{1}\cdot z=\mathbf{U}^{-1}\mathtt{u}\cdot z=\mathtt{u}\cdot \mathbf{U}z=\mathtt{u}\cdot\Phi(\mathtt{t})=0. \tag{5.1}\]
Due to the graph structure of \(K\), we also have
\[z=\mathbf{U}^{-1}(\gamma)=(t-t^{\prime},F_{s}(t)-F_{s}(t^{\prime})) \tag{5.2}\]
for some \(t,t^{\prime}\in[0,1]\). Together, (5.1) and (5.2) imply that \(z=0\) and hence \(\gamma=0\), a contradiction. Since \(\dim_{\mathrm{H}}K=s\) and \(s\in[1,d)\) was arbitrary, we conclude that \(\Gamma\) is avoidable.
### Checking conditions for (un)avoidability
As noted in Section 1, an unavoidable smooth curve must contain the origin. Combining this observation with statements 4 and 5 in Theorem 1.4, we get a simple criterion for checking whether a given tuple of polynomials \(\Phi=(\Phi_{1},\dots,\Phi_{d})\) defines an unavoidable curve on a compact interval \(\mathtt{I}\): The image of \(\Phi\) on \(\mathtt{I}\) is unavoidable if and only if \(\Phi_{1},\dots,\Phi_{d}\) are linearly independent and share a common zero in \(\mathtt{I}\). The following examples illustrate this with \(d=3\):
* \(\Gamma=\{(t^{2}-1,t^{3}+5t-6,2t^{3}-t^{2}-1)\colon t\in[0,1]\}\) is unavoidable. The parametrizing polynomials are linearly independent and vanish at \(t=1\).
* \(\Gamma=\{(t-2,t^{2}-2t,t^{2}+t-6)\colon t\in[0,1]\}\) is avoidable. The parametrizing polynomials are linearly dependent.
* \(\Gamma=\{(t+1,t^{2}-1,t^{3}+2t+1)\colon t\in[0,1]\}\) is avoidable. The parametrizing polynomials do not share a zero.
In general, the presence of a shared zero among polynomials can be checked by, say, using the Euclidean algorithm to compute their greatest common divisor and then using Sturm's theorem to determine whether that divisor has a real zero.
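For illustration, this check can be carried out with a computer algebra system; the sketch below uses Python's sympy (the helper `is_unavoidable` is our own, introduced only for this example), testing linear independence via the rank of the coefficient matrix and locating real zeros of the gcd in the interval (sympy's real-root isolation plays the role of the Sturm-theorem step).

```python
from functools import reduce
from sympy import Matrix, Poly, gcd, real_roots, symbols

t = symbols('t')

def is_unavoidable(exprs, interval=(0, 1)):
    """Criterion from Theorem 1.4 for the curve parametrized by `exprs` on `interval`:
    the polynomials must be linearly independent and share a zero in the interval."""
    polys = [Poly(e, t) for e in exprs]
    deg = max(p.degree() for p in polys)
    rows = []
    for p in polys:
        coeffs = list(reversed(p.all_coeffs()))          # constant term first
        rows.append(coeffs + [0] * (deg + 1 - len(coeffs)))
    independent = Matrix(rows).rank() == len(polys)
    g = Poly(reduce(gcd, exprs), t)                      # greatest common divisor
    common_zero = g.degree() > 0 and any(
        interval[0] <= r <= interval[1] for r in real_roots(g))
    return independent and common_zero

# The three examples with d = 3 listed above:
print(is_unavoidable([t**2 - 1, t**3 + 5*t - 6, 2*t**3 - t**2 - 1]))   # True
print(is_unavoidable([t - 2, t**2 - 2*t, t**2 + t - 6]))               # False
print(is_unavoidable([t + 1, t**2 - 1, t**3 + 2*t + 1]))               # False
```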
## 6. Configuration integral: Proof of Proposition 2.2
In this section, we prove Proposition 2.2. Let \(\mathscr{I}_{0}:=\mathscr{I}[\mu,\pi]/2>0\), with \(\mathscr{I}[\mu,\pi]\) as in (2.5). Then there exists \(\delta_{0}>0\) such that
\[\Big{|}\int(\mu*\psi_{\delta})*\pi\,d\mu\Big{|}\geq\mathscr{I}_{0}\quad\text{ for all }\delta\in(0,\delta_{0}]. \tag{6.1}\]
Since \(\mu\) is a probability measure, property (6.1) implies that for every \(\delta\in(0,\delta_{0}]\), there exists a point \(x_{\delta}\in\operatorname{supp}\mu\) such that
\[|(\mu*\psi_{\delta})*\pi(x_{\delta})|\geq\mathscr{I}_{0}.\]
Set \(\pi_{\delta}:=\pi*\psi_{\delta}\). Since \((\mu*\psi_{\delta})*\pi=\mu*\pi_{\delta}\) by properties of convolution, we obtain
\[\mathscr{I}_{0}\leq|\mu*\pi_{\delta}(x_{\delta})| =\Big{|}\int\pi_{\delta}(x_{\delta}-y)d\mu(y)\Big{|}\] \[=\int_{E_{1}}|\pi_{\delta}(x_{\delta}-y)|d\mu(y)+\int_{E_{2}}|\pi _{\delta}(x_{\delta}-y)|d\mu(y), \tag{6.2}\]
where
\[E_{1}=E_{1}(\delta) :=\big{\{}y\colon\operatorname{dist}(x_{\delta}-y,\operatorname{ supp}\pi)>\sqrt{\delta}\big{\}},\] \[E_{2}=E_{2}(\delta) :=\big{\{}y\colon\operatorname{dist}(x_{\delta}-y,\operatorname{ supp}\pi)\leq\sqrt{\delta}\big{\}}.\]
We claim that the integral over \(E_{1}\) vanishes as \(\delta\to 0\). Indeed, fix \(z:=x_{\delta}-y\) such that \(\operatorname{dist}(z,\operatorname{supp}\pi)>\sqrt{\delta}\). Then for each integer \(N\), there exists \(C_{N}<\infty\) such that
\[|\pi_{\delta}(z)|\leq\delta^{-d}\int|\psi(\delta^{-1}(z-w))|d\pi(w)\leq C_{N} \delta^{-d}\int(\delta^{-1}|z-w|)^{-N}d\pi(w)\leq C_{N}\|\pi\|\delta^{\frac{N}{2 }-d};\]
here, we have used the rapid decay of \(\psi\). Noting that \(\|\pi\|<\infty\) and taking \(N>2(d+1)\) and \(\delta\) sufficiently small, we get that \(|\pi_{\delta}(z)|\leq\delta\). This pointwise bound on the integrand means that
\[\int_{E_{1}}|\pi_{\delta}(x_{\delta}-y)|d\mu(y)\leq\delta. \tag{6.3}\]
Now, assuming \(\delta<\mathcal{I}_{0}/2\) and inserting (6.3) into (6.2), we find that
\[\int_{E_{2}}|\pi_{\delta}(x_{\delta}-y)|d\mu(y)\geq\frac{\mathcal{I}_{0}}{2}>0.\]
Hence, for each sufficiently small \(\delta\), there exists \(y_{\delta}\in\operatorname{supp}\mu\) such that
\[\operatorname{dist}(x_{\delta}-y_{\delta},\operatorname{supp}\pi)\leq\sqrt{ \delta}. \tag{6.4}\]
Since \(\operatorname{supp}(\mu)\times\operatorname{supp}(\mu)\) is a compact set in \(\mathbb{R}^{2d}\), there exists a sequence of values of \(\delta\) along which \((x_{\delta},y_{\delta})\) converges to a point \((x,y)\in\operatorname{supp}(\mu)\times\operatorname{supp}(\mu)\). By (6.4), we must have \(x-y\in\operatorname{supp}\pi\), and the conclusion of the proposition follows.
## 7. Energy and spectral gap: Proof of Proposition 2.3
The goal of this section is to prove Proposition 2.3. Essential to this proof is a deeper understanding of the behaviour of the measures \(\pi=\pi[\Phi;j,c]\) defined for functions \(\Phi\colon[0,1]\to\mathbb{R}^{d}\) in standard form, with special attention to their dependence on the accompanying parameters \(j\) and \(c\). We are specifically interested in the growth rate of the mass assigned by \(\pi\) to Euclidean balls and the decay of its Fourier transform \(\widehat{\pi}\). We collect the main tools in the first three subsections. Using these, the proof of Proposition 2.3 is completed in Subsection 7.4.
### Choice of constants
Let \(\Phi\colon[0,1]\to\mathbb{R}^{d}\) be a smooth function in standard form that is vanishing of type \(\mathtt{N}\) at the origin. The discussion in Subsection 2.2.2 leading up to (2.9) and (2.10) identifies two constants \(\mathtt{K}_{\mathtt{N}}:=2\,\mathtt{N}!\) and \(\mathtt{J}_{0}=\mathtt{J}_{0}(\Phi)\), only the former being admissible. These two constants are important for our subsequent analysis: the admissible constant \(\mathtt{L}_{\mathtt{N}}\) and the inadmissible constant \(\mathtt{J}\) appearing in Proposition 2.3 will depend respectively on \(\mathtt{K}_{\mathtt{N}}\) and \(\mathtt{J}_{0}\). In the remainder of this section, \(\mathtt{L}_{\mathtt{N}}\) will always denote an admissible constant and \(\mathtt{J}\) an inadmissible one, although their exact values may change from one occurrence to another. In particular, \(\mathtt{L}_{\mathtt{N}}\) will always be a large multiple of \(\mathtt{K}_{\mathtt{N}}\). The multiplicative factor may depend on \(d\), \(\mathtt{N}\), and the Schwartz function \(\psi\) introduced in (2.6) in order to define \(\mu_{\delta}\).
### Ball condition for \(\pi\)
**Lemma 7.1**.: _Fix any \(\mathtt{L}_{\mathtt{N}}\geq 2d\mathtt{K}_{\mathtt{N}}\), where \(\mathtt{K}_{\mathtt{N}}\) is the constant defined in Subsection 7.1. Then for every \(\Phi\) in standard form (of type N at the origin), we have_
\[\inf\Bigl{\{}\pi(B(0;r))\colon\pi=\pi[\Phi;j,c],\ j\geq\mathtt{J}_{0}(\Phi),\ c\in\Bigl{(}0,\frac{r}{\mathtt{L}_{ \mathtt{N}}}\Bigr{]}\Bigr{\}}\geq\frac{r}{\mathtt{L}_{\mathtt{N}}}\quad\text{ for every $r\in(0,1]$.} \tag{7.1}\]
Proof.: The condition (2.9) with \(\ell=0\) gives for all \(j\geq\mathsf{J}_{0}\) and \(s\in[0,1]\) the bound
\[|\Phi^{j}(s)|\leq\sum_{i=1}^{d}2^{\mathsf{n}_{i}j}|\Phi_{i}(2^{-j}s)|\leq\sum_{i=1}^{d}2^{\mathsf{n}_{i}j}\mathtt{K}_{\mathtt{N}}(2^{-j}s)^{\mathsf{n}_{i}}\leq\mathtt{K}_{\mathtt{N}}\sum_{i=1}^{d}s^{\mathsf{n}_{i}}\leq\mathtt{K}_{\mathtt{N}}ds\leq\frac{\mathtt{L}_{\mathtt{N}}}{2}s. \tag{7.2}\]
The fourth inequality in the display above uses the relation \(\mathsf{n}_{i}\geq 1\), a consequence of (2.3). This upper bound allows us to estimate \(\pi(B(0;r))\) as follows. For \(r\in(0,1]\) and \(c\in(0,r/\mathtt{L}_{\mathtt{N}}]\), we have
\[\pi(B(0;r))=\int_{c}^{1}\mathbf{1}_{B(0;r)}(\Phi^{j}(s))\,ds=|\{s\in[c,1]\colon|\Phi^{j}(s)|\leq r\}|\geq\Big{|}\Big{\{}s\in[c,1]\colon\frac{\mathtt{L}_{\mathtt{N}}}{2}s\leq r\Big{\}}\Big{|}=\min\Big{\{}1,\frac{2r}{\mathtt{L}_{\mathtt{N}}}\Big{\}}-c,\]
where the inequality follows from (7.2). Since \(r\leq 1\) and \(\mathtt{L}_{\mathtt{N}}\geq 2d\mathtt{K}_{\mathtt{N}}\geq 2\), we have \(2r/\mathtt{L}_{\mathtt{N}}\leq 1\), and since \(c\leq r/\mathtt{L}_{\mathtt{N}}\), the right-hand side is at least \(2r/\mathtt{L}_{\mathtt{N}}-r/\mathtt{L}_{\mathtt{N}}=r/\mathtt{L}_{\mathtt{N}}\). This proves (7.1).
\[\geq\mathsf{c}\pi(B(0;\mathsf{b}h/2))\geq\frac{\mathsf{bc}}{4d\mathsf{K}_{\tt N}}h \geq\mathsf{L}_{\tt N}^{-1}h,\]
provided \(\mathsf{L}_{\tt N}\geq 4d\mathsf{K}_{\tt N}/(\mathsf{bc})\). Assuming also that \(\mathsf{L}_{\tt N}\geq 4d\mathsf{K}_{\tt N}/\mathsf{b}\), the above bound then holds for any \(h\in(0,\mathsf{L}_{\tt N}^{-1}]\). This establishes (7.4).
### A uniform method of stationary phase
Our next task is to study the behaviour of \(\widehat{\pi}(\xi)\). This information is given in Lemma 7.4, below, which we will prove using basic stationary phase techniques. The following elementary lemma will simplify the argument.
**Lemma 7.3**.: _Fix any constant \(\mathsf{a}\in(0,1]\) and any collection \(\{x_{1},\ldots,x_{n}\}\) of nonnegative real numbers, not all zero. Then there exists \(k\in\{1,\ldots,n\}\) such that \(x_{k}\neq 0\) and_
\[\frac{x_{i}}{x_{k}}\leq\mathsf{a}^{-n}\quad\text{if $1\leq i\leq k$},\qquad \frac{x_{i}}{x_{k}}\leq\mathsf{a}\quad\text{if $k<i\leq n$}.\]
Proof.: We will induct on \(n\). The base case \(n=1\) is trivial. Assume that \(n\geq 2\) and that the lemma holds with \(n-1\) in place of \(n\). Applying the induction hypothesis on \(\{x_{1},\ldots x_{n-1}\}\), let \(k_{0}\in\{1,\ldots,n-1\}\) be an index such that \(x_{k_{0}}\neq 0\) and \(x_{i}/x_{k_{0}}\leq\mathsf{a}^{-n+1}\) if \(1\leq i\leq k_{0}\) and \(x_{i}/x_{k_{0}}\leq\mathsf{a}\) if \(k_{0}<i\leq n-1\). If \(x_{n}/x_{k_{0}}\leq\mathsf{a}\), then the conclusion of the lemma holds with \(k=k_{0}\). Assume that \(x_{n}/x_{k_{0}}\geq\mathsf{a}\). Then for \(i<n\) we have
\[\frac{x_{i}}{x_{n}}=\frac{x_{i}}{x_{k_{0}}}\cdot\frac{x_{k_{0}}}{x_{n}}\leq \mathsf{a}^{-n+1}\mathsf{a}^{-1}=\mathsf{a}^{-n},\]
and the conclusion of the lemma holds with \(k=n\).
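The inductive selection in Lemma 7.3 can be phrased as a one-pass greedy scan, which the following sketch implements (0-based indexing; the function name is our own): keep the current candidate index and switch to a later entry whenever it exceeds \(\mathsf{a}\) times the candidate value.

```python
def select_index(xs, a):
    # greedy form of the induction in Lemma 7.3 (0-based): returns k with xs[k] > 0,
    # xs[i] <= a**(-len(xs)) * xs[k] for i <= k, and xs[i] <= a * xs[k] for i > k
    assert 0 < a <= 1 and all(x >= 0 for x in xs) and any(x > 0 for x in xs)
    k = next(i for i, x in enumerate(xs) if x > 0)
    for i in range(k + 1, len(xs)):
        if xs[i] > a * xs[k]:
            k = i
    return k

xs, a = [0.0, 4.0, 1.0, 9.0, 2.0], 0.5
k = select_index(xs, a)
print(k,
      all(xs[i] <= a**(-len(xs)) * xs[k] for i in range(k + 1)),
      all(xs[i] <= a * xs[k] for i in range(k + 1, len(xs))))    # 3 True True
```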
**Lemma 7.4**.: _There exists an admissible constant \(\mathsf{L}_{\tt N}\) with the following property: For every function \(\Phi\colon[0,1]\to\mathbb{R}^{d}\) in standard form that is vanishing of type \(\tt N\) at the origin, there exists an (inadmissible) index \(\mathsf{J}\geq\mathsf{J}_{0}(\Phi)\) depending on \(\Phi\) such that_
\[\sup\{|\widehat{\pi}(\xi)|\colon\pi=\pi[\Phi;j,c],\,j\geq\mathsf{J},c\in(0,1 ]\}\leq\mathsf{L}_{\tt N}(1+|\xi|)^{-1/\tt N}\quad\text{for all $\xi\in\mathbb{R}^{d}$}. \tag{7.5}\]
Proof.: Fix \(\pi\) of the form \(\pi=\pi[\Phi;j,c]\) and \(\xi\in\mathbb{R}^{d}\setminus\{0\}\). It follows from (2.13) that
\[\widehat{\pi}(\xi)=\int e(x\cdot\xi)d\pi(x)=\int_{c}^{1}e(\xi\cdot\Phi^{j}(s) )ds,\quad\text{where $e(t):=e^{-2\pi it}$}. \tag{7.6}\]
The integral representing \(\widehat{\pi}(\xi)\) is a scalar oscillatory integral widely studied in harmonic analysis. Our goal is to apply the well-known method of stationary phase for oscillatory integrals (see [43, Chapter VIII]) to arrive at the desired bound (7.5). It is important to keep track of the implicit constants in this process to ensure uniformity in the parameters \(j\) and \(c\); we describe the steps below.
Let us choose, in the following order, a small constant \(\mathsf{a}\in(0,1]\) depending on \(\mathsf{K}_{\tt N}\), and a large integer \(\mathsf{J}\geq\mathsf{J}_{0}\) depending on \(\mathsf{a}\) and \(\Phi\), according to the constraints
\[d\mathsf{K}_{\tt N}\mathsf{a}\leq\frac{1}{8},\qquad d\mathsf{a}^{-d}\|\Phi\|_{C^{\mathtt{N}}}2^{-\mathsf{J}}\leq\frac{1}{8}. \tag{7.7}\]
Here, \(\|\cdot\|_{C^{n}}\) refers to the standard norm on the space of \(n\)-times continuously differentiable functions \(f\colon[0,1]\to\mathbb{R}^{d}\), namely
\[\|f\|_{C^{n}}:=\sum_{\ell=0}^{n}\sup_{t\in[0,1]}|f^{(\ell)}(t)|.\]
We assume now that \(j\geq\mathsf{J}\). By Lemma 7.3, there exists an index \(k\in\{1,\ldots,d\}\) depending on \(\mathsf{a}\) and \(\xi\) such that
\[\xi_{k}\neq 0,\qquad\frac{|\xi_{i}|}{|\xi_{k}|}\leq\mathsf{a}^{-d}\quad\text{if }1 \leq i\leq k,\qquad\frac{|\xi_{i}|}{|\xi_{k}|}\leq\mathsf{a}\quad\text{if }k<i\leq d. \tag{7.8}\]
Let us define a function \(\varphi=\varphi_{k}\) by the formula
\[\varphi(s):=\frac{\xi}{\xi_{k}}\cdot\Phi^{j}(s)=\xi_{k}^{-1}\sum_{i=1}^{d}2^{ \mathsf{n}ij}\xi_{i}\Phi_{i}(2^{-j}s)\quad\text{for }s\in[c,1], \tag{7.9}\]
so that (7.6) reduces to
\[\widehat{\pi}(\xi)=\int_{c}^{1}e(\xi_{k}\varphi(s))ds. \tag{7.10}\]
We claim that the \(\mathsf{n}_{k}^{\text{th}}\) order derivative of the phase function \(\varphi\) in the oscillatory integral (7.10) is bounded from below by an absolute positive constant, due to our choice of \(\mathsf{a}\) and \(\mathsf{J}\) in (7.7). To verify this, we first note that
\[\varphi^{(\mathsf{n}_{k})}(s)=\xi_{k}^{-1}\sum_{i=1}^{d}2^{(\mathsf{n}_{i}- \mathsf{n}_{k})j}\xi_{i}\Phi_{i}^{(\mathsf{n}_{k})}(2^{-j}s)=\mathrm{I}+ \mathrm{II}+\mathrm{III},\]
where
\[\mathrm{I}:=\Phi_{k}^{(\mathsf{n}_{k})}(2^{-j}s),\quad\mathrm{II}:=\sum_{i=1}^ {k-1}\frac{\xi_{i}}{\xi_{k}}2^{(\mathsf{n}_{i}-\mathsf{n}_{k})j}\Phi_{i}^{( \mathsf{n}_{k})}(2^{-j}s),\quad\mathrm{III}:=\sum_{i=k+1}^{d}\frac{\xi_{i}}{ \xi_{k}}2^{(\mathsf{n}_{i}-\mathsf{n}_{k})j}\Phi_{i}^{(\mathsf{n}_{k})}(2^{-j }s).\]
Here, any empty sum is treated as zero. Noting that \(2^{-j}s\in(0,2^{-\mathsf{J}_{0}}]\), we use the left inequality in (2.10) to bound \(\mathrm{I}\) from below, obtaining \(|\mathrm{I}|\geq\frac{1}{2}\). The strict monotonicity (2.3) of the exponents \(\mathsf{n}_{i}\), the choice (7.8) of the index \(k\), and the condition (2.9) can be used to estimate \(\mathrm{II}\) and \(\mathrm{III}\) from above. A combination of these properties yields
\[|\mathrm{II}| \leq\sum_{i=1}^{k-1}\mathsf{a}^{-d}2^{-j}|\Phi_{i}^{(\mathsf{n}_{k})}(2^{-j}s)|\leq d\mathsf{a}^{-d}2^{-\mathsf{J}}\|\Phi\|_{C^{\mathtt{N}}}\leq\frac{1}{8},\] \[|\mathrm{III}| \leq\sum_{i=k+1}^{d}\mathsf{a}2^{(\mathsf{n}_{i}-\mathsf{n}_{k})j}\mathsf{K}_{\mathsf{N}}(2^{-j}s)^{(\mathsf{n}_{i}-\mathsf{n}_{k})}\leq d\mathsf{a}\mathsf{K}_{\mathsf{N}}\leq\frac{1}{8},\]
where the last step in both inequalities follows from (7.7). As a result, we obtain
\[|\varphi^{(\mathsf{n}_{k})}(s)|\geq|\mathrm{I}|-|\mathrm{II}|-|\mathrm{III}| \geq\frac{1}{2}-\frac{1}{8}-\frac{1}{8}=\frac{1}{4}\quad\text{for all }s\in[c,1]. \tag{7.11}\]
This lower bound is critical to the application of the method of stationary phase, which proceeds via two cases.
_Case 1:_ First suppose that \(\mathtt{n}_{k}\geq 2\). Using van der Corput's lemma (see [43, Ch. VIII, SS1.2, Proposition 2]), one can find an absolute (hence admissible) constant \(c_{\mathtt{n}_{k}}\geq 1\) such that
\[|\widehat{\pi}(\xi)|=\Big{|}\int_{c}^{1}e^{-2\pi i\xi_{k}\varphi(s)}ds\Big{|} \leq c_{\mathtt{n}_{k}}|\xi_{k}|^{-1/\mathtt{n}_{k}}. \tag{7.12}\]
Property (7.8) implies that
\[|\xi|\leq|\xi_{1}|+\cdots+|\xi_{d}|\leq(k\mathtt{a}^{-d}+(d-k)\mathtt{a})|\xi _{k}|\leq d\mathtt{a}^{-d}|\xi_{k}|.\]
Inserting this into (7.12) and considering also the trivial estimate \(|\widehat{\pi}(\xi)|\leq\|\pi\|\leq 1\), we find that
\[|\widehat{\pi}(\xi)|\leq\min\Big{\{}1,c_{\mathtt{n}_{k}}\Big{(} \frac{\mathtt{a}^{d}}{d}|\xi|\Big{)}^{-1/\mathtt{n}_{k}}\Big{\}} \leq 2c_{\mathtt{n}_{k}}\Big{(}\frac{\mathtt{a}^{d}}{d}\Big{)}^{ -1/\mathtt{n}_{k}}(1+|\xi|)^{-1/\mathtt{n}_{k}} \tag{7.13}\] \[\leq 2c_{\mathtt{n}_{k}}d\mathtt{a}^{-d}(1+|\xi|)^{-1/\mathtt{ N}}\leq\mathtt{L}_{\mathtt{N}}(1+|\xi|)^{-1/\mathtt{N}}\]
with \(\mathtt{L}_{\mathtt{N}}=2d\mathtt{a}^{-d}\max\{c_{2},\ldots,c_{\mathtt{N}}\}\).
_Case 2:_ Next suppose that \(\mathtt{n}_{k}=1\). In view of (2.3), this means that \(k=1\). Property (7.8) thus implies that
\[|\xi|\leq|\xi_{1}|+\cdots+|\xi_{d}|\leq(1+(d-1)\mathtt{a})|\xi_{1}|\leq d|\xi _{1}|. \tag{7.14}\]
By virtue of (7.9), (2.9), (2.10), and (7.7), we have
\[|\varphi^{\prime\prime}(s)|=\Big{|}\sum_{i=1}^{d}2^{(\mathtt{n}_{ i}-2)j}\frac{\xi_{i}}{\xi_{1}}\Phi_{i}^{\prime\prime}(2^{-j}s)\Big{|} \leq 2^{-j}\|\Phi\|_{C^{\mathtt{N}}}+\sum_{i=2}^{d}2^{( \mathtt{n}_{i}-2)j}\mathtt{a}\mathtt{K}_{\mathtt{N}}(2^{-j}s)^{\mathtt{n}_{ i}-2}\] \[\leq 2^{-\mathtt{J}}\|\Phi\|_{C^{\mathtt{N}}}+d\mathtt{a}\mathtt{ K}_{\mathtt{N}}\leq 2d\mathtt{K}_{\mathtt{N}}\leq\mathtt{L}_{\mathtt{N}} \tag{7.15}\]
for a suitable choice of \(\mathtt{L}_{\mathtt{N}}\). Combining the lower bound (7.11) on \(|\varphi^{\prime}|\) and the upper bound (7.15) on \(|\varphi^{\prime\prime}|\) with integration by parts, we obtain
\[|\widehat{\pi}(\xi)|=\Big{|}\int_{c}^{1}e(\xi_{1}\varphi(s))ds \Big{|} =\Big{|}\int_{c}^{1}\frac{1}{2\pi\xi_{1}\varphi^{\prime}(s)}\cdot \frac{d}{ds}[e(\xi_{1}\varphi(s))]ds\Big{|}\] \[=\frac{1}{2\pi|\xi_{1}|}\bigg{|}\frac{e(\xi_{1}\varphi(s))}{\varphi ^{\prime}(s)}\Big{]}_{c}^{1}+\int_{c}^{1}\frac{\varphi^{\prime\prime}(s)}{( \varphi^{\prime}(s))^{2}}e(\xi_{1}\varphi(s))ds\bigg{|}\leq\frac{\mathtt{L}_{ \mathtt{N}}}{|\xi_{1}|}\]
for some (larger) choice of \(\mathtt{L}_{\mathtt{N}}\). We also have the trivial bound \(|\widehat{\pi}(\xi)|\leq\|\pi\|\leq 1\). These estimates, together with (7.14), imply that
\[|\widehat{\pi}(\xi)|\leq\min\Bigl{\{}1,\frac{d\mathtt{L}_{\mathtt{N}}}{|\xi| }\Bigr{\}}\leq\mathtt{L}_{\mathtt{N}}(1+|\xi|)^{-1}\leq\mathtt{L}_{\mathtt{N}} (1+|\xi|)^{-1/\mathtt{N}}, \tag{7.16}\]
where we have allowed the value of \(\mathtt{L}_{\mathtt{N}}\) to change between the first two occurrences. Combining the conclusions (7.13) and (7.16) of the two cases completes the proof of (7.5).
### Proof of Proposition 2.3
We begin by defining the admissible constant \(\mathtt{L}_{\mathtt{N}}\). Let \(\mathtt{M}_{1}\) be the admissible constant in Corollary 7.2 (appearing there as \(\mathtt{L}_{\mathtt{N}}\)) when applied with
\[\mathtt{a}=\frac{1}{2}\cdot 15^{-d}|B(0;1)|; \tag{7.17}\]
here, \(|\cdot|\) refers to Lebesgue measure. Let \(\mathtt{M}_{2}\) be the admissible constant in Lemma 7.4 (also appearing there as \(\mathtt{L}_{\mathtt{N}}\)). We set
\[\mathtt{L}_{\mathtt{N}}:=\max\{\mathtt{M}_{1},\mathtt{M}_{2},2\pi|B(0;1)|+2+2 \gamma_{\mathtt{N}}^{-1}\},\]
where \(\gamma_{\mathtt{N}}\) is the constant defined in (2.15).
Next we fix a function \(\Phi\colon[0,1]\to\mathbb{R}^{d}\) in standard form that is vanishing of type \(\mathtt{N}\) at the origin, and we define the inadmissible integer \(\mathtt{J}(\Phi)\). For this, we can just use the integer \(\mathtt{J}\) from Lemma 7.4.
Now, let \(\mathtt{A},\mathtt{B},\mathtt{C}\) be any choice of constants satisfying (2.16), and let \(\mu\) be any Borel probability measure on \([0,1]^{d}\) that obeys (2.17). We need to prove that (2.18) holds. With this aim in mind, fix \(\pi=\pi[\Phi;j,\mathtt{A}^{-6d}]\) with \(j\geq\mathtt{J}\), and fix \(\delta\in(0,\mathtt{A}^{-3d}]\). Write
\[\int\mu_{\delta}*\pi\,d\mu=\mathbb{I}_{1}+\mathbb{I}_{2},\]
where
\[\mathbb{I}_{1}:=\int\mu_{\mathtt{A}^{-3d}}*\pi\,d\mu,\qquad\mathbb{I}_{2}:= \int(\mu_{\delta}-\mu_{\mathtt{A}^{-3d}})*\pi\,d\mu.\]
We claim that
\[\mathbb{I}_{1}\geq\frac{1}{2}\mathtt{L}_{\mathtt{N}}^{-1}\mathtt{A}^{-3d} \quad\text{and}\quad|\mathbb{I}_{2}|\leq\mathtt{L}_{\mathtt{N}}\mathtt{A}^{-4 d}. \tag{7.18}\]
This, together with our assumption on \(\mathtt{A}\) in (2.16) and \(\mathtt{L}_{\mathtt{N}}\geq 1\), would imply (2.18).
We start with \(\mathbb{I}_{1}\). Using the constant \(\mathtt{a}\) defined in (7.17), let
\[\mathcal{G}:=\{x\in\operatorname{supp}\mu\colon\text{(7.3) holds}\}.\]
Thus, \(\mu(\mathcal{G})\geq 1/2\), concluding the estimation of \(\mathbb{I}_{1}\).
We now turn to \(\mathbb{I}_{2}\). By Plancherel's theorem, we have
\[|\mathbb{I}_{2}|\leq\int|\widehat{\mu}(\xi)|^{2}|\widehat{\psi}(\delta\xi)- \widehat{\psi}(\mathtt{A}^{-3d}\xi)||\widehat{\pi}(\xi)|d\xi.\]
We will estimate this integral by breaking its domain into three pieces: low frequencies \(\{|\xi|\leq\mathtt{A}\}\), moderate frequencies \(\{\mathtt{A}\leq|\xi|\leq\mathtt{B}\}\), and high frequencies \(\{|\xi|\geq\mathtt{B}\}\). Beginning with the low-frequency piece, we use the trivial bounds \(\|\widehat{\mu}\|_{\infty}\leq 1\) and \(\|\widehat{\pi}\|_{\infty}\leq 1\) as well as (2.7) to get
\[\int_{|\xi|\leq\mathtt{A}}|\widehat{\mu}(\xi)|^{2}|\widehat{\psi} (\delta\xi)-\widehat{\psi}(\mathtt{A}^{-3d}\xi)||\widehat{\pi}(\xi)|d\xi \leq\int_{|\xi|\leq\mathtt{A}}\pi[(\delta|\xi|)^{2}+(\mathtt{A}^{ -3d}|\xi|)^{2}]d\xi\] \[\leq 2\pi|B(0;\mathtt{A})|(\mathtt{A}^{-3d}\mathtt{A})^{2}\leq 2 \pi|B(0;1)|\mathtt{A}^{-4d}.\]
We use the spectral gap hypothesis in (2.17) to control the moderate-frequency piece, namely
\[\int_{|\xi|\in[\mathtt{A},\mathtt{B}]}|\widehat{\mu}(\xi)|^{2}|\widehat{\psi} (\delta\xi)-\widehat{\psi}(\mathtt{A}^{-3d}\xi)||\widehat{\pi}(\xi)|d\xi\leq 2 \int_{|\xi|\in[\mathtt{A},\mathtt{B}]}|\widehat{\mu}(\xi)|^{2}d\xi\leq 2 \mathtt{A}^{-4d};\]
here, we have also used that \(\|\widehat{\psi}\|_{\infty}=1\) (from (2.7)). We are left to estimate the high-frequency piece. For this we use Lemma 7.4, definitions (2.14) and (2.15), the energy condition in (2.17), and our assumption on \(\mathtt{B}\) in (2.16). We obtain
\[\int_{|\xi|\geq\mathtt{B}}|\widehat{\mu}(\xi)|^{2}|\widehat{\psi}(\delta\xi)-\widehat{\psi}(\mathtt{A}^{-3d}\xi)||\widehat{\pi}(\xi)|d\xi \leq 2\mathtt{L}_{\mathtt{N}}\int_{|\xi|\geq\mathtt{B}}|\widehat{\mu}(\xi)|^{2}|\xi|^{-\frac{1}{\mathtt{N}}}d\xi\] \[\leq 2\mathtt{L}_{\mathtt{N}}\mathtt{B}^{-\frac{1}{2\mathtt{N}}}\gamma_{\mathtt{N}}^{-1}I_{\sigma_{\mathtt{N}}}(\mu)\leq 2\mathtt{L}_{\mathtt{N}}\mathtt{B}^{-\frac{1}{2\mathtt{N}}}\gamma_{\mathtt{N}}^{-1}\mathtt{C}\leq 2\gamma_{\mathtt{N}}^{-1}\mathtt{A}^{-4d}.\]
Now, summing the bounds for the three pieces and recalling our choice of \(\mathtt{L}_{\mathtt{N}}\), it follows that \(|\mathbb{I}_{2}|\leq\mathtt{L}_{\mathtt{N}}\mathtt{A}^{-4d}\), as claimed. This completes the proof.
## 8. Constructing a suitable measure: Proof of Proposition 2.4
The goal of this section is to prove Proposition 2.4, which establishes the existence of a certain measure. Before embarking on the construction of this measure, we pause to collect a few necessary tools.
### Measure-theoretic preliminaries
Let \(\vec{n}=(n_{1},\ldots,n_{d})\) be a vector with positive integer entries, and recall the definition of \(\mathcal{D}^{*}=\mathcal{D}^{*}[\vec{n}]\) given in (2.19). Here, we do not assume that the entries of \(\vec{n}\) are distinct or ordered. The aim of this subsection is to define an anisotropic, dyadic version of "Hausdorff content" using the collection \(\mathcal{D}^{*}\) and establish connections between it and the standard notion of Hausdorff dimension. Although we opt to work from first principles, these connections can also be well understood using the broader framework of Hausdorff dimension in metric spaces; see [33, Chapter 4]. This perspective is described in Section 9. The lemmas stated in the present subsection may be unsurprising to experts. However, they do not appear in standard textbooks in the form that we need, and thus we provide their proof in the Appendix.
Our version of Hausdorff content is defined as follows: Let \(E\subseteq\mathbb{R}^{d}\) and \(s\geq 0\). Then
\[\mathcal{H}^{s}_{\mathcal{D}^{*}}(E):=\inf\Big{\{}\sum_{Q\in\mathcal{Q}}\ell (Q)^{s}\colon\mathcal{Q}\subseteq\mathcal{D}^{*}\text{ and }E\subseteq\bigcup\mathcal{Q}\Big{\}},\]
where \(\ell(Q)\) is as in (2.21). We are particularly interested in conditions on \(E\) and \(s\) that guarantee the positivity of \(\mathcal{H}^{s}_{\mathcal{D}^{*}}(E)\). Let
\[\mathtt{N}=\mathtt{N}[\vec{n}]:=\max\{n_{1},\ldots,n_{d}\}\qquad\text{and} \qquad\mathtt{S}=\mathtt{S}[\vec{n}]:=n_{1}+\ldots+n_{d}. \tag{8.1}\]
The following lemma addresses this question and establishes the basic connection between \(\dim_{\mathrm{H}}E\) and \(\mathcal{H}^{s}_{\mathcal{D}^{*}}(E)\).
**Lemma 8.1**.: _If \(E\subseteq\mathbb{R}^{d}\) and \(0\leq s<\mathtt{S}-(d-\dim_{\mathrm{H}}E)\mathtt{N}\), then \(\mathcal{H}^{s}_{\mathcal{D}^{*}}(E)>0\)._
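To make the anisotropic content concrete, the following Python sketch covers a horizontal and a vertical unit segment in the plane by generation-\(j\) boxes of \(\mathcal{D}^{*}[(1,2)]\) (so \(d=2\), \(\mathtt{N}=2\), \(\mathtt{S}=3\)) and reports \(\sum\ell(Q)^{s}\). It is an illustration only, not part of any proof; the choice of \(\vec{n}\), the sampling density, and the test sets are our own assumptions. For the horizontal segment (\(\dim_{\mathrm{H}}=1\)) the sums grow for \(s<1=\mathtt{S}-(d-\dim_{\mathrm{H}}E)\mathtt{N}\) and decay for \(s>1\), in line with Lemma 8.1, while the vertical segment survives up to \(s=2\).

```python
import numpy as np

def cover_sum(points, n_vec, j, s):
    """Cover a point cloud by generation-j boxes of D*[n_vec] and return sum ell(Q)^s.

    A box at generation j has side 2^(-n_i * j) in coordinate i and ell(Q) = 2^(-j);
    we index the box containing each sample point and count the distinct boxes hit.
    """
    sides = np.array([2.0 ** (-ni * j) for ni in n_vec])
    idx = np.floor(points / sides).astype(int)           # box index per sample point
    n_boxes = len({tuple(row) for row in idx})            # distinct boxes hit
    return n_boxes * (2.0 ** (-j * s))

n_vec = (1, 2)                                             # so N = 2, S = 3, d = 2
t = np.linspace(0.0, 1.0 - 1e-9, 200001)
horizontal = np.column_stack([t, np.zeros_like(t)])        # dim_H = 1, threshold s < 1
vertical = np.column_stack([np.zeros_like(t), t])          # same Euclidean dimension

for s in (0.5, 1.5, 2.5):
    for name, E in (("horizontal", horizontal), ("vertical", vertical)):
        sums = [cover_sum(E, n_vec, j, s) for j in (4, 6, 8)]
        print(f"s={s} {name}: {['%.3g' % v for v in sums]}")
# The horizontal sums blow up for s < 1 and decay for s > 1; the vertical segment
# survives up to s = 2, reflecting the anisotropy of the boxes.
```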
We will also need a restricted version of \(\mathcal{H}^{s}_{\mathcal{D}^{*}}\), namely
\[\mathcal{H}^{s}_{\mathcal{D}^{*}_{J}}(E):=\inf\Big{\{}\sum_{Q\in\mathcal{Q}} \ell(Q)^{s}\colon\mathcal{Q}\subseteq\mathcal{D}^{*}_{J}\text{ and }E\subseteq\bigcup\mathcal{Q}\Big{\}},\]
where \(\mathcal{D}^{*}_{J}\) is as in (2.20). From the definitions, it is clear that \(\mathcal{H}^{s}_{\mathcal{D}^{*}}(E)\leq\mathcal{H}^{s}_{\mathcal{D}^{*}_{J} }(E)\). The next lemma states that, when \(E\) is contained in an element of \(\mathcal{D}^{*}_{\mathtt{J}}\), this inequality can be reversed. This will be helpful in locating the dyadic box \(\mathtt{Q}\) referenced in the proposition statement.
**Lemma 8.2**.: _Let \(E\subset\mathbb{R}^{d}\) be a subset of some element of \(\mathcal{D}^{*}_{J}\) for some \(J\). Then \(\mathcal{H}^{s}_{\mathcal{D}^{*}}(E)=\mathcal{H}^{s}_{\mathcal{D}^{*}_{J}}(E)\) for all \(s\geq 0\)._
Next, we record a version of Frostman's lemma adapted to \(\mathcal{H}^{s}_{\mathcal{D}^{*}}\). We will use this to construct the measure \(\nu\) referenced in the proposition statement.
**Lemma 8.3**.: _Let \(E\subset\mathbb{R}^{d}\) be a compact set, and let \(s\geq 0\). Then there exists a Borel measure \(\vartheta\) supported on \(E\) such that \(\|\vartheta\|\geq\mathcal{H}^{s}_{\mathcal{D}^{*}}(E)\) and \(\vartheta(Q)\leq\ell(Q)^{s}\) for every \(Q\in\mathcal{D}^{*}\)._
Our final lemma gives a connection between the Frostman condition for \(\mathcal{D}^{*}\) and the finiteness of energy integrals. It will help us verify that the blown-up measure \(\mu:=\nu^{\mathtt{Q}}\) satisfies the energy condition in (2.17).
**Lemma 8.4**.: _There exists a decreasing function \(\mathtt{E}\colon(0,\infty)\to[1,\infty)\), depending only on \(d\), with the following property: For any constant \(L\geq 0\) and any exponents \(\sigma,s\) with \(\sigma\in(0,d)\) and \(s>\sigma+\mathtt{S}-d\), one has_
\[\sup\Big{\{}I_{\sigma}(\vartheta)\colon\operatorname{supp}\vartheta\subseteq[ 0,1]^{d},\ \|\vartheta\|\leq 1,\ \sup_{Q\in\mathcal{D}^{*}}\frac{\vartheta(Q)}{\ell(Q)^{s}}\leq L\Big{\}}\leq L \mathtt{E}(s-\sigma-\mathtt{S}+d).\]
### Proof of Proposition 2.4
We now turn to the proof of Proposition 2.4, beginning with the selection of the constants \(\mathtt{A},\mathtt{B},\mathtt{C}\) and \(\varepsilon\).
#### 8.2.1. Choice of constants
We start by setting
\[\mathtt{C}:=4\mathtt{E}\Big{(}\frac{1}{4\mathtt{N}}\Big{)}, \tag{8.2}\]
where \(\mathtt{E}\) is the function from Lemma 8.4. Let \(\varphi\colon\mathbb{R}^{d}\to\mathbb{R}\) be a fixed nonnegative bump function supported in \([0,1)^{d}\) with \(\int\varphi=1\) and \(\|\varphi\|_{\infty}\leq 2\). This function will be used in the construction of the measure \(\nu\) and in the verification that its blow-up \(\mu\) obeys the energy
and spectral gap conditions in (2.17). The values of \(\mathtt{A}\) and \(\mathtt{B}\) will depend on \(\varphi\). We define them as follows: Let \(\mathtt{A}\) be chosen large enough that
\[\mathtt{A}^{d}\geq 4\mathtt{L}_{\mathtt{N}}^{2}\qquad\text{and}\qquad\int_{| \xi|\geq\mathtt{A}}|\widehat{\varphi}(\xi)|d\xi\leq\frac{1}{2}\mathtt{A}^{-4d}, \tag{8.3}\]
where \(\mathtt{L}_{\mathtt{N}}\) is the constant appearing in (2.16). Let \(\mathtt{B}\) be chosen sufficiently large relative to \(\mathtt{A},\mathtt{C}\), and \(\mathtt{L}_{\mathtt{N}}\), as specified in (2.16). This concludes our choice of the constants \(\mathtt{A},\mathtt{B},\mathtt{C}\).
We are left to define \(\varepsilon\). Toward this end, we introduce another large admissible constant \(\mathtt{T}\). Specifically, we take \(\mathtt{T}\) to be an integer satisfying
\[4\pi\sqrt{d}\mathtt{B}|B(0;\mathtt{B})|2^{-\mathtt{T}}\leq\frac{1}{2}\mathtt{A }^{-4d}. \tag{8.4}\]
We now define
\[\varepsilon:=\min\Big{\{}\frac{\log_{2}(1+2^{-d\mathtt{NT}-2})}{\mathtt{NT}}, \frac{1}{4\mathtt{N}^{2}}\Big{\}}. \tag{8.5}\]
#### 8.2.2. Locating the dyadic box \(\mathtt{Q}\)
At this point, we fix
* a function \(\Phi\colon[0,1]\to\mathbb{R}^{d}\) in standard form and vanishing of type \(\mathtt{N}\) at the origin, and
* a Borel set \(K\subseteq\mathbb{R}^{d}\) with \(\dim_{\mathrm{H}}K>d-\varepsilon\).
Let \(\mathtt{J}:=\mathtt{J}(\Phi)\) as in Proposition 2.3, and let \(\vec{\mathtt{n}}:=(\mathtt{n}_{1},\ldots,\mathtt{n}_{d})\) be the vector of integers associated with \(\Phi\) in (2.3). For the remainder of this section, \(\vec{\mathtt{n}}\) should be used whenever an object depends on a vector of integers. So, for example, \(\mathcal{D}^{*}:=\mathcal{D}^{*}[\vec{\mathtt{n}}]\) and \(\mathcal{D}^{*}_{J}:=\mathcal{D}^{*}_{J}[\vec{\mathtt{n}}]\).
Our next goal is to locate a box \(\mathtt{Q}\in\mathcal{D}^{*}_{\mathtt{J}}\) such that \(K\cap\overline{\mathtt{Q}}\) will support a measure whose blow-up obeys the energy and spectral gap conditions in (2.17). Henceforth, we will assume that \(K\) is compact. A straightforward application of Frostman's lemma for Borel sets (e.g. [34, Theorem 2.7]) shows that \(K\) contains a compact subset whose Hausdorff dimension strictly exceeds \(d-\varepsilon\). Therefore, this assumption is permissible.
Using Lemma 8.1 and our dimension assumption on \(K\), we select some
\[s\in(\mathtt{S}-\mathtt{N}\varepsilon,\mathtt{S}] \tag{8.6}\]
such that \(\mathcal{H}^{s}_{\mathcal{D}^{*}}(K)>0\). It follows that \(\mathcal{H}^{s}_{\mathcal{D}^{*}_{\mathtt{J}}}(K)>0\) as well, and we also have \(\mathcal{H}^{s}_{\mathcal{D}^{*}_{\mathtt{J}}}(K)<\infty\) trivially. We claim that for each \(\delta>0\) there exists \(Q\in\mathcal{D}^{*}_{\mathtt{J}}\) such that
\[\mathcal{H}^{s}_{\mathcal{D}^{*}_{\mathtt{J}}}(K\cap Q)\geq(1-\delta)\ell(Q) ^{s}. \tag{8.7}\]
To see this, note that for each \(c>0\) there exists \(\mathcal{Q}_{c}\subseteq\mathcal{D}^{*}_{\mathtt{J}}\) such that \(\mathcal{Q}_{c}\) covers \(K\) and
\[\sum_{Q\in\mathcal{Q}_{c}}\ell(Q)^{s}\leq\mathcal{H}^{s}_{\mathcal{D}^{*}_{ \mathtt{J}}}(K)+c.\]
If there were some \(\delta>0\) such that no \(Q\in\mathcal{D}^{*}_{\mathtt{J}}\) satisfied (8.7), then we would have
\[\mathcal{H}^{s}_{\mathcal{D}^{*}_{\mathtt{J}}}(K)\leq\sum_{Q\in\mathcal{Q}_{c }}\mathcal{H}^{s}_{\mathcal{D}^{*}_{\mathtt{J}}}(K\cap Q)\leq(1-\delta)\sum_{Q \in\mathcal{Q}_{c}}\ell(Q)^{s}\leq(1-\delta)(\mathcal{H}^{s}_{\mathcal{D}^{*}_ {\mathtt{J}}}(K)+c),\]
and taking \(c\) sufficiently small would produce a contradiction. This proves the claim. By Lemma 8.2, condition (8.7) is equivalent to the statement that
\[\mathcal{H}^{s}_{\mathcal{D}^{*}}(K\cap Q)\geq(1-\delta)\ell(Q)^{s}. \tag{8.8}\]
We now define \(\mathtt{Q}\) to be any \(Q\in\mathcal{D}^{*}_{\mathtt{J}}\) such that (8.8) holds with \(\delta:=2^{-\mathtt{ST}-2}\).
#### 8.2.3. Constructing the measure \(\nu\)
Let \(\operatorname{ch}(\mathtt{Q})\) denote the set of \(\mathtt{T}^{\operatorname{th}}\)-generation descendants of \(\mathtt{Q}\) in \(\mathcal{D}^{*}\); that is,
\[\operatorname{ch}(\mathtt{Q}):=\{q\in\mathcal{D}^{*}\colon q\subseteq\mathtt{ Q},\ \ell(q)=2^{-\mathtt{T}}\ell(\mathtt{Q})\}.\]
For the sake of readability, we will from now on denote elements of \(\operatorname{ch}(\mathtt{Q})\) using the lowercase letter \(q\), while the capital letter \(Q\) will continue to refer to generic elements of \(\mathcal{D}^{*}\). We claim that
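As a data-structure aside, the descendants of an anisotropic box are easy to enumerate explicitly. The sketch below is illustrative only (the values of \(\vec{n}\), \(j\) and \(\mathtt{T}\) are arbitrary); it lists the corners of the boxes in \(\operatorname{ch}(\mathtt{Q})\) and confirms that there are \(2^{\mathtt{ST}}\) of them, each with \(\ell(q)=2^{-\mathtt{T}}\ell(\mathtt{Q})\).

```python
import itertools
import numpy as np

def children(corner, j, n_vec, T):
    """T-th generation descendants of the generation-j box of D*[n_vec] with the given corner.

    The parent has sides 2^(-n_i j); each descendant has sides 2^(-n_i (j+T)),
    so there are prod_i 2^(n_i T) = 2^(S*T) of them.
    """
    corner = np.asarray(corner, dtype=float)
    child_sides = np.array([2.0 ** (-ni * (j + T)) for ni in n_vec])
    counts = [2 ** (ni * T) for ni in n_vec]
    boxes = [corner + np.array(multi) * child_sides
             for multi in itertools.product(*[range(c) for c in counts])]
    return boxes, child_sides

n_vec, j, T = (1, 2), 1, 2
S = sum(n_vec)
boxes, sides = children((0.0, 0.0), j, n_vec, T)
assert len(boxes) == 2 ** (S * T)               # 2^(S T) descendants
print(len(boxes), "descendants, each with ell(q) =", 2.0 ** -(j + T))
```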
\[\mathcal{H}^{s}_{\mathcal{D}^{*}}(K\cap q)\geq\frac{1}{2}\ell(q)^{s}\quad \text{for every $q\in\operatorname{ch}(\mathtt{Q})$}. \tag{8.9}\]
To see this, let
\[\mathcal{G}:=\left\{q\in\operatorname{ch}(\mathtt{Q})\colon\mathcal{H}^{s}_{ \mathcal{D}^{*}}(K\cap q)\geq\frac{1}{2}\ell(q)^{s}\right\}\]
and suppose for contradiction that \(\mathcal{G}\subsetneq\operatorname{ch}(\mathtt{Q})\). Then, by our choice of \(\mathtt{Q}\), the inequality \(\mathcal{H}^{s}_{\mathcal{D}^{*}}(K\cap q)\leq\ell(q)^{s}\), our choice of \(s\) in (8.6), and the definition of \(\varepsilon\) in (8.5), we have
\[1-2^{-\mathtt{ST}-2} \leq\frac{\mathcal{H}^{s}_{\mathcal{D}^{*}}(K\cap\mathtt{Q})}{\ell(\mathtt{Q})^{s}}\] \[\leq\sum_{q\in\mathcal{G}}\frac{\ell(q)^{s}}{\ell(\mathtt{Q})^{s}}+\frac{1}{2}\sum_{q\in\operatorname{ch}(\mathtt{Q})\setminus\mathcal{G}}\frac{\ell(q)^{s}}{\ell(\mathtt{Q})^{s}}\] \[=\sum_{q\in\operatorname{ch}(\mathtt{Q})}\frac{\ell(q)^{s}}{\ell(\mathtt{Q})^{s}}-\frac{1}{2}\sum_{q\in\operatorname{ch}(\mathtt{Q})\setminus\mathcal{G}}\frac{\ell(q)^{s}}{\ell(\mathtt{Q})^{s}}\] \[\leq 2^{\mathtt{ST}}2^{-\mathtt{T}s}-\frac{1}{2}\cdot 2^{-\mathtt{T}s}<2^{\mathtt{NT}\varepsilon}-2^{-\mathtt{ST}-1}\leq 1+2^{-\mathtt{ST}-2}-2^{-\mathtt{ST}-1}.\]
This gives a contradiction (note the strict inequality), and so the claim is proved.
Combining the lower bound (8.9) with Lemma 8.3, we get for each \(q\in\operatorname{ch}(\mathtt{Q})\) a Borel measure \(\vartheta_{q}\) supported on \(K\cap\overline{q}\) such that
\[\|\vartheta_{q}\|\geq\frac{1}{2}\ell(q)^{s}\qquad\text{and}\qquad\vartheta_{q} (Q)\leq\ell(Q)^{s}\quad\text{for every $Q\in\mathcal{D}^{*}$}. \tag{8.10}\]
The measure \(\nu\) will be defined as a weighted sum of the measures \(\vartheta_{q}\). Specifically, let \(\varphi\) be the bump function introduced above, and let \(\mathbf{T}_{\mathtt{Q}}\) be the rescaling map defined in (2.22) that takes \(\mathtt{Q}\) to \([0,1)^{d}\). Let
\[w(q):=\int_{\mathbf{T}_{\mathtt{Q}}(q)}\varphi\qquad\text{and}\qquad\overline {\vartheta}_{q}:=\frac{w(q)\ell(\mathtt{Q})^{s}}{\|\vartheta_{q}\|}\vartheta_ {q}\]
for each \(q\in\operatorname{ch}(\mathtt{Q})\). With these definitions in place, we take \(\nu\) to be
\[\nu:=\sum_{q\in\operatorname{ch}(\mathtt{Q})}\overline{\vartheta}_{q}.\]
It is clear that \(\nu\) is supported on \(K\cap\overline{\mathtt{Q}}\) and has total mass
\[\|\nu\|=\sum_{q\in\operatorname{ch}(\mathtt{Q})}\|\overline{\vartheta}_{q}\|= \sum_{q\in\operatorname{ch}(\mathtt{Q})}w(q)\ell(\mathtt{Q})^{s}=\ell(\mathtt{ Q})^{s}\int\varphi=\ell(\mathtt{Q})^{s}>0. \tag{8.11}\]
Our next goal is to analyze the blow-up of \(\nu\) with respect to \(\mathtt{Q}\). For this measure to be well defined, it must satisfy \(\nu(\mathtt{Q})>0\). In view of (8.11), this could fail only if \(\operatorname{supp}\nu\) lies entirely within the boundary of \(\mathtt{Q}\). We claim that
\[\vartheta_{q}(\partial q)=0\quad\text{for every $q\in\operatorname{ch}(\mathtt{Q})$}; \tag{8.12}\]
here, \(\partial E\) denotes the boundary of \(E\). This claim implies that \(\nu(\partial\mathtt{Q})=0\), thus confirming that \(\nu(\mathtt{Q})>0\), and says that for all intents and purposes the measures \(\vartheta_{q}\) (and \(\overline{\vartheta}_{q}\)) have pairwise disjoint supports. To prove (8.12), fix \(q\in\operatorname{ch}(\mathtt{Q})\) and let \(J\) be such that \(q\in\mathcal{D}_{J}\). For each \(j\geq J\), let
\[\mathcal{Q}_{j}(q):=\{Q\in\mathcal{D}_{j}\colon Q\cap\partial q\neq\emptyset\}.\]
By the Frostman-type condition in (8.10), we have
\[\vartheta_{q}(\partial q)\leq\#\mathcal{Q}_{j}(q)2^{-js}. \tag{8.13}\]
We can estimate \(\#\mathcal{Q}_{j}(q)\) as follows: For each \(i\in\{1,\ldots,d\}\), the boundary of \(q\) contains two \((d-1)\)-dimensional faces with side-lengths \(2^{-n_{1}J},\ldots,[2^{-n_{i}J}],\ldots,2^{-n_{d}J}\), where the term in brackets is omitted. Together, these account for all of the faces of \(\partial q\). The two faces corresponding to \(i\) each intersect exactly
\[\frac{\prod_{k\neq i}2^{-n_{k}J}}{\prod_{k\neq i}2^{-n_{k}j}}=2^{(\mathsf{S}-n _{i})(j-J)}\]
boxes in \(\mathcal{D}_{j}\). Hence,
\[\#\mathcal{Q}_{j}(q)\leq 2\sum_{i=1}^{d}2^{(\mathsf{S}-n_{i})(j-J)}\leq 2d2^{( \mathsf{S}-1)(j-J)}. \tag{8.14}\]
Combining (8.13) and (8.14), we get
\[\vartheta_{q}(\partial q)\leq 2d2^{-(\mathsf{S}-1)J}2^{j(\mathsf{S}-1-s)}.\]
By our choice of \(s\) in (8.6) and \(\varepsilon\) in (8.5) (specifically, that \(\mathtt{N}\varepsilon\leq 1\)), we have \(\mathsf{S}-1-s<0\). Thus, sending \(j\to\infty\) yields (8.12).
#### 8.2.4. Verification of the energy condition
It remains to show that the blow-up \(\mu:=\nu^{\mathtt{Q}}\) satisfies the energy and spectral gap conditions in (2.17). We begin with the former, namely that \(I_{\sigma_{\mathtt{N}}}(\mu)\leq\mathtt{C}\), with \(\sigma_{\mathtt{N}}\) and \(\mathtt{C}\) as defined in (2.15) and (8.2), respectively. By our choice of \(s\) in (8.6) and \(\varepsilon\) in (8.5), we have
\[s-\sigma_{\mathtt{N}}-\mathsf{S}+d\geq-\mathtt{N}\varepsilon+\frac{1}{2 \mathtt{N}}\geq-\frac{1}{4\mathtt{N}}+\frac{1}{2\mathtt{N}}=\frac{1}{4\mathtt{ N}}.\]
Therefore, by Lemma 8.4, it suffices to show that \(\mu\) obeys the Frostman condition
\[\mu(Q)\leq 4\ell(Q)^{s}\quad\text{for every $Q\in\mathcal{D}^{*}$}. \tag{8.15}\]
For each \(q\in\operatorname{ch}(\mathtt{Q})\), we use (8.6) and the first property in (8.10) to get that
\[w(q)\ell(\mathtt{Q})^{s}\leq\|\varphi\|_{\infty}|\mathbf{T}_{\mathtt{Q}}(q)|\ell(\mathtt{Q})^{s}\leq 2\Big{(}\frac{\ell(q)}{\ell(\mathtt{Q})}\Big{)}^{\mathtt{S}}\ell(\mathtt{Q})^{s}=2\Big{(}\frac{\ell(q)}{\ell(\mathtt{Q})}\Big{)}^{\mathtt{S}-s}\ell(q)^{s}\leq 2\ell(q)^{s}\leq 4\|\vartheta_{q}\|. \tag{8.16}\]
Consequently, by the second property in (8.10), each \(\overline{\vartheta}_{q}\) obeys
\[\overline{\vartheta}_{q}(Q)\leq 4\vartheta_{q}(Q)\leq 4\ell(Q)^{s}\quad\text{ for every }Q\in\mathcal{D}^{*}.\]
We claim that this property implies condition (8.15) with \(\nu\) in place of \(\mu\), i.e.
\[\nu(Q)\leq 4\ell(Q)^{s}\quad\text{for every }Q\in\mathcal{D}^{*}. \tag{8.17}\]
To see this, fix \(Q\in\mathcal{D}^{*}\) and consider three cases:
* If \(\ell(Q)\leq 2^{-\mathsf{T}}\ell(\mathtt{Q})\), then \(Q\) intersects at most one box in \(\operatorname{ch}(\mathtt{Q})\); thus \[\nu(Q)\leq\max_{q\in\operatorname{ch}(\mathtt{Q})}\overline{\vartheta}_{q}(Q) \leq 4\ell(Q)^{s}.\]
* If \(\ell(Q)\geq\ell(\mathtt{Q})\), then (8.11) implies that \[\nu(Q)\leq\|\nu\|=\ell(\mathtt{Q})^{s}\leq\ell(Q)^{s}.\]
* If \(2^{-\mathsf{T}}\ell(\mathtt{Q})\leq\ell(Q)\leq\ell(\mathtt{Q})\), then using the first line of (8.16), as well as (8.6), we get \[\nu(Q) \leq\#\{q\in\operatorname{ch}(\mathtt{Q})\colon q\cap Q\neq \emptyset\}\max_{q\in\operatorname{ch}(\mathtt{Q})}\|\overline{\vartheta}_{q}\|\] \[=\Big{(}\frac{\ell(Q)}{2^{-\mathsf{T}}\ell(\mathtt{Q})}\Big{)}^{ \mathtt{S}}\max_{q\in\operatorname{ch}(\mathtt{Q})}w(q)\ell(\mathtt{Q})^{s}\] \[\leq\Big{(}\frac{\ell(Q)}{2^{-\mathsf{T}}\ell(\mathtt{Q})}\Big{)} ^{\mathtt{S}}2^{-\mathsf{ST}+1}\ell(\mathtt{Q})^{s}=2\Big{(}\frac{\ell(Q)}{ \ell(\mathtt{Q})}\Big{)}^{\mathtt{S}-s}\ell(Q)^{s}\leq 2\ell(Q)^{s}.\]
Collectively, these imply (8.17). Now, in order to verify (8.15), we need one more simple fact regarding dyadic boxes, namely
\[\mathbf{T}_{Q}(Q^{\prime})\in\mathcal{D}^{*}\quad\text{with}\quad\ell( \mathbf{T}_{Q}(Q^{\prime}))=\frac{\ell(Q^{\prime})}{\ell(Q)}\]
and (equivalently)
\[\mathbf{T}_{Q}^{-1}(Q^{\prime})\in\mathcal{D}^{*}\quad\text{with}\quad\ell( \mathbf{T}_{Q}^{-1}(Q^{\prime}))=\ell(Q)\ell(Q^{\prime})\qquad\text{for all }Q,Q^{\prime}\in\mathcal{D}^{*}.\]
Combining this observation with the definition of blow-up in (2.23) and (2.24), as well as with (8.11) and (8.17), we obtain
\[\mu(Q)=\frac{\nu(\mathbf{T}_{\mathtt{Q}}^{-1}(Q))}{\|\nu\|}\leq\ell(\mathtt{ Q})^{-s}4\ell(\mathbf{T}_{\mathtt{Q}}^{-1}(Q))^{s}=4\ell(Q)^{s}\]
for every \(Q\in\mathcal{D}^{*}\), confirming (8.15).
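The rescaling identities used above are easy to check numerically. The sketch below assumes that \(\mathbf{T}_{Q}\) in (2.22) is the coordinate-wise affine map stretching the box \(Q\) onto \([0,1)^{d}\) (an assumption on our part, since (2.22) is not reproduced here); it pulls a dyadic box \(Q^{\prime}\) back under \(\mathbf{T}_{Q}\) and observes that the image is again a dyadic box with \(\ell(\mathbf{T}_{Q}^{-1}(Q^{\prime}))=\ell(Q)\ell(Q^{\prime})\).

```python
import numpy as np

# Assumed form of the map (2.22): T_Q sends the box Q (corner a, generation j)
# onto [0,1)^d by stretching coordinate i by 2^(n_i * j).  Illustration only.
def T_Q_inv(y, corner, j, n_vec):
    scale = np.array([2.0 ** (ni * j) for ni in n_vec])
    return np.asarray(corner, float) + np.asarray(y, float) / scale

n_vec = (1, 2)
jQ, corner_Q = 2, (0.25, 0.0625)        # a generation-2 box Q, ell(Q) = 2^-2
jP, corner_P = 3, (0.125, 0.015625)     # a generation-3 box Q', ell(Q') = 2^-3

# Pull Q' back under T_Q: each side 2^(-n_i * jP) shrinks by a factor 2^(-n_i * jQ),
# so the image has the side lengths of a generation-(jQ+jP) box and
# ell(T_Q^{-1}(Q')) = 2^-(jQ+jP) = ell(Q) * ell(Q').
new_corner = T_Q_inv(corner_P, corner_Q, jQ, n_vec)
new_sides = np.array([2.0 ** (-ni * jP) / 2.0 ** (ni * jQ) for ni in n_vec])
assert np.allclose(new_sides, [2.0 ** (-ni * (jQ + jP)) for ni in n_vec])
print(new_corner, new_sides, "ell =", 2.0 ** -(jQ + jP))
```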
#### 8.2.5. Verification of the spectral gap condition
Finally, we need to prove that \(\mu\) satisfies the spectral gap condition in (2.17). Let \(\mathcal{D}_{\mathtt{T},0}\) denote the set of \(\mathtt{T}^{\text{th}}\)-generation descendants of \([0,1)^{d}\) in \(\mathcal{D}^{*}\); that is,
\[\mathcal{D}_{\mathtt{T},0}:=\{Q\in\mathcal{D}_{\mathtt{T}}\colon Q\subseteq[0,1)^{d}\}=\{\mathbf{T}_{\mathtt{Q}}(q)\colon q\in\operatorname{ch}(\mathtt{Q} )\}.\]
We have
\[\mu(Q)=\frac{\nu(\mathbf{T}_{\mathtt{Q}}^{-1}(Q))}{\|\nu\|}=\ell(\mathtt{Q})^{- s}\|\overline{\vartheta}_{\mathbf{T}_{\mathtt{Q}}^{-1}(Q)}\|=w(\mathbf{T}_{ \mathtt{Q}}^{-1}(Q))=\int_{Q}\varphi\quad\text{for every }Q\in\mathcal{D}_{ \mathtt{T},0}. \tag{8.18}\]
We will treat \(\varphi\) as a measure via the formula
\[\int fd\varphi:=\int f\varphi;\]
thus (8.18) becomes \(\mu(Q)=\varphi(Q)\). Let \(c_{Q}\) denote the centre of the box \(Q\). If \(Q\in\mathcal{D}_{\mathtt{T},0}\), then
\[\int_{Q}e^{-2\pi ic_{Q}\cdot\xi}d\mu(x)=\int_{Q}e^{-2\pi ic_{Q}\cdot\xi}d\varphi (x)\]
for any \(\xi\in\mathbb{R}^{d}\). For fixed \(\xi\), the function \(x\mapsto e^{-2\pi ix\cdot\xi}\) is Lipschitz with constant at most \(2\pi|\xi|\). Since \(|x-c_{Q}|\leq\sqrt{d}\ell(Q)\) for \(x\in Q\), it follows that
\[|\widehat{\mu}(\xi)-\widehat{\varphi}(\xi)| =\Big{|}\int e^{-2\pi ix\cdot\xi}d\mu(x)-\int e^{-2\pi ix\cdot\xi} d\varphi(x)\Big{|}\] \[\leq\sum_{Q\in\mathcal{D}_{\mathtt{T},0}}\Big{|}\int_{Q}e^{-2\pi ix \cdot\xi}d\mu(x)-\int_{Q}e^{-2\pi ix\cdot\xi}d\varphi(x)\Big{|}\] \[\leq\sum_{Q\in\mathcal{D}_{\mathtt{T},0}}\Big{(}\int_{Q}|e^{-2 \pi ix\cdot\xi}-e^{-2\pi ic_{Q}\cdot\xi}|d\mu(x)+\int_{Q}|e^{-2\pi ix\cdot\xi }-e^{-2\pi ic_{Q}\cdot\xi}|d\varphi(x)\Big{)}\] \[\leq\sum_{Q\in\mathcal{D}_{\mathtt{T},0}}2\pi|\xi|\sqrt{d}2^{- \mathtt{T}}(\mu(Q)+\varphi(Q))=4\pi\sqrt{d}|\xi|2^{-\mathtt{T}}.\]
Now, using that \(\|\widehat{\mu}\|_{\infty}\leq 1\), as well as our assumptions on \(\mathtt{A}\) and \(\mathtt{T}\) in (8.3) and (8.4), we obtain
\[\int_{|\xi|\in[\mathtt{A},\mathtt{B}]}|\widehat{\mu}(\xi)|^{2}d \xi\leq\int_{|\xi|\in[\mathtt{A},\mathtt{B}]}|\widehat{\mu}(\xi)|d\xi \leq\int_{|\xi|\in[\mathtt{A},\mathtt{B}]}|\widehat{\mu}(\xi)- \widehat{\varphi}(\xi)|d\xi+\int_{|\xi|\in[\mathtt{A},\mathtt{B}]}|\widehat{ \varphi}(\xi)|d\xi\] \[\leq 4\pi\sqrt{d}2^{-\mathtt{T}}\int_{|\xi|\in[\mathtt{A}, \mathtt{B}]}|\xi|d\xi+\frac{1}{2}\mathtt{A}^{-4d}\] \[\leq 4\pi\sqrt{d}2^{-\mathtt{T}}\mathtt{B}|B(0;\mathtt{B})|+\frac {1}{2}\mathtt{A}^{-4d}\leq\mathtt{A}^{-4d},\]
which completes the proof.
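The Lipschitz comparison between \(\widehat{\mu}\) and \(\widehat{\varphi}\) used in the spectral gap verification is simple to sanity-check numerically. The following one-dimensional toy sketch is illustrative only: it takes \(\varphi\) to be the uniform density on \([0,1]\) and \(\mu\) to be point masses at the interval centres (one admissible choice with \(\mu(Q)=\varphi(Q)\)), both assumptions of ours, and confirms the bound \(4\pi\sqrt{d}|\xi|2^{-\mathtt{T}}\) with \(d=1\).

```python
import numpy as np

T = 6                                         # generation of the dyadic intervals
centres = (np.arange(2 ** T) + 0.5) * 2.0 ** -T
masses = np.full(2 ** T, 2.0 ** -T)           # phi = uniform density on [0,1], so phi(Q) = |Q|

def mu_hat(xi):                               # mu = sum over intervals of phi(Q) * delta_{c_Q}
    return np.sum(masses * np.exp(-2j * np.pi * centres * xi))

def phi_hat(xi):                              # exact Fourier transform of the uniform density
    return (1.0 - np.exp(-2j * np.pi * xi)) / (2j * np.pi * xi)

for xi in (0.5, 3.0, 10.0, 40.0):
    gap = abs(mu_hat(xi) - phi_hat(xi))
    bound = 4.0 * np.pi * abs(xi) * 2.0 ** -T     # the estimate above, with d = 1
    assert gap <= bound
    print(f"xi={xi:5.1f}  |mu_hat - phi_hat| = {gap:.2e}  <=  bound = {bound:.2e}")
```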
## 9. Anisotropic boxes and Hausdorff dimension in metric spaces
In the previous section, we defined the "Hausdorff content" \(\mathcal{H}^{s}_{\mathcal{D}^{*}}\) associated to a collection \(\mathcal{D}^{*}\) of anisotropic dyadic boxes in \(\mathbb{R}^{d}\). It was used in the following way: Starting with a set \(K\subseteq\mathbb{R}^{d}\) of very high Hausdorff dimension, we located a box \(\mathtt{Q}\in\mathcal{D}^{*}\) such that \(K\) had nontrivial (anisotropic) Hausdorff content within each descendant of \(\mathtt{Q}\) of a certain generation. This enabled us to use a version of Frostman's lemma to construct a measure on \(K\cap\overline{\mathtt{Q}}\) whose blow-up satisfied both conditions in (2.17). In this section, we reinterpret \(\mathcal{D}^{*}\) and \(\mathcal{H}^{s}_{\mathcal{D}^{*}}\) in terms of Hausdorff dimension in metric spaces; see [33, Chapter 4]. This connection was first stated in [26], though in less generality.
Let \(\vec{n}=(n_{1},\ldots,n_{d})\) be a fixed vector of positive integers, and let \(\rho=\rho[\vec{n}]\) be the metric on \(\mathbb{R}^{d}\) given by
\[\rho(x,y):=\max_{1\leq i\leq d}(2|x_{i}-y_{i}|)^{1/n_{i}}.\]
The closed \(\rho\)-ball of radius \(r\) centred at \(x\) takes the form
\[B_{\rho}(x;r):=\{y\colon\rho(x,y)\leq r\}=x+\prod_{i=1}^{d}\Big{[}-\frac{r^{n_{i} }}{2},\frac{r^{n_{i}}}{2}\Big{]}.\]
Elements of \(\mathcal{D}^{*}[\vec{n}]\) can therefore be viewed as \(\rho\)-balls. More precisely, if \(Q\in\mathcal{D}^{*}\) and \(c_{Q}\) is the centre of \(Q\), then
\[\overline{Q}=c_{Q}+\prod_{i=1}^{d}\Big{[}-\frac{\ell(Q)^{n_{i}}}{2},\frac{\ell (Q)^{n_{i}}}{2}\Big{]}=B_{\rho}(c_{Q},\ell(Q)).\]
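This identification of dyadic boxes with \(\rho\)-balls can be verified numerically in a few lines; the sketch below is illustrative only (the choice of \(\vec{n}\), the particular box, and the sample size are arbitrary).

```python
import numpy as np

def rho(x, y, n_vec):
    """The anisotropic metric rho[n_vec](x, y) = max_i (2|x_i - y_i|)^(1/n_i)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return max((2.0 * abs(xi - yi)) ** (1.0 / ni) for xi, yi, ni in zip(x, y, n_vec))

n_vec, j = (1, 2), 3
corner = np.array([0.25, 0.046875])             # a generation-3 box: sides (2^-3, 2^-6)
sides = np.array([2.0 ** (-ni * j) for ni in n_vec])
centre, r = corner + sides / 2.0, 2.0 ** -j     # ell(Q) = 2^-j is the rho-radius

rng = np.random.default_rng(0)
inside = corner + rng.random((1000, 2)) * sides               # points of the box
assert all(rho(centre, p, n_vec) <= r + 1e-12 for p in inside)
outside = corner + sides + 1e-6                               # just past the far corner
assert rho(centre, outside, n_vec) > r
print("sampled points of Q lie in B_rho(c_Q, ell(Q)); a point past the corner does not")
```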
The \(s\)-dimensional Hausdorff measure \(\mathcal{H}^{s}=\mathcal{H}^{s}[\rho]\) is defined for the metric space \((\mathbb{R}^{d},\rho)\) in the usual way, namely
\[\mathcal{H}^{s}(E):=\lim_{\delta\searrow 0}\mathcal{H}^{s}_{\delta}(E),\]
where
\[\mathcal{H}^{s}_{\delta}(E):=\inf\Big{\{}\sum_{U\in\mathcal{U}}\operatorname{ diam}(U)^{s}\colon\mathcal{U}\text{ is a countable cover of }E,\,\sup_{U\in\mathcal{U}}\operatorname{diam}U\leq\delta\Big{\}}.\]
Of course, here \(\operatorname{diam}U\) refers to the \(\rho\)-diameter of \(U\). If \(U\) is a \(\rho\)-ball, then this is \(2^{1/\min_{i}n_{i}}\) times its radius. The quantities \(\mathcal{H}^{s}_{\mathcal{D}^{*}}\) and \(\mathcal{H}^{s}_{\mathcal{D}^{*}_{J}}\) defined in the previous section can be viewed as discrete versions of \(\mathcal{H}^{s}_{\infty}\) and \(\mathcal{H}^{s}_{2^{-J}}\), respectively.
Lemma 8.1 provided a connection between \(\mathcal{H}^{s}_{\mathcal{D}^{*}}\) and the standard (Euclidean) Hausdorff dimension; namely, if \(s\) is not too large relative to \(\dim_{\operatorname{H}}E\), then \(\mathcal{H}^{s}_{\mathcal{D}^{*}}(E)>0\). We get a more complete picture by considering Hausdorff dimension in \((\mathbb{R}^{d},\rho)\). This is again defined in the usual way:
\[\dim_{\operatorname{H}^{*}}E:=\inf\{s\colon\mathcal{H}^{s}(E)=0\}=\sup\{s \colon\mathcal{H}^{s}(E)=\infty\}.\]
If \(\mathtt{N}\) and \(\mathtt{S}\) are defined as in (8.1), then we have the relations
\[\mathtt{S}-(d-\dim_{\operatorname{H}}E)\mathtt{N}\leq\dim_{\operatorname{H}^{ *}}E\leq\mathtt{S} \tag{9.1}\]
for all sets \(E\subseteq\mathbb{R}^{d}\). The proof of (9.1) is similar to that of Lemma 8.1 (appearing in the Appendix); in particular, the first inequality follows by establishing that \(\mathcal{H}^{s}_{\infty}(E)>0\) for all \(s<\mathtt{S}-(d-\dim_{\operatorname{H}}E)\mathtt{N}\). It is worth noting that when \(\vec{n}=(1,1,\ldots,1)\), the corresponding metric \(\rho\) is a multiple of the Euclidean sup norm, and the balls it defines are just Euclidean cubes. Consequently, the definition of Hausdorff dimension in \((\mathbb{R}^{d},\rho)\) agrees with the usual one, and (9.1) reduces to the statement that \(\dim_{\operatorname{H}}E\leq d\).
## 10. Appendix
### Standardization of a function of finite type
Proof of Lemma 2.1.: Let \(\Theta\colon\mathtt{I}\to\mathbb{R}^{d}\) be a smooth function that is vanishing of type \(\mathtt{N}\) at the origin. This means that \(\mathtt{N}\) is the smallest integer with the following property:
\[\text{For every }u\in\mathbb{R}^{d}\setminus\{0\}\text{ there exists }n\in\{1,\ldots,\mathtt{N}\}\text{ such that }u\cdot\Theta^{(n)}(0)\neq 0. \tag{10.1}\]
Since \(\Theta(0)=0\), there exists a unique \(d\)-tuple \(\vec{\mathtt{m}}=(\mathtt{m}_{1},\ldots,\mathtt{m}_{d})\) of positive integers such that
\[\Theta=(\Theta_{1},\ldots,\Theta_{d})\quad\text{with}\quad\Theta_{i}(t)=t^{ \mathtt{m}_{i}}\theta_{i}(t) \tag{10.2}\]
for some smooth functions \(\theta_{i}\colon\mathtt{I}\to\mathbb{R}\) obeying \(\theta_{i}(0)\neq 0\). We will refer to \(\vec{\mathtt{m}}\) as the _vanishing pattern_ of \(\Theta\). Our goal is to produce an invertible linear map \(\mathbf{L}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}\) such that the vanishing pattern \(\vec{\mathtt{n}}\) of \(\mathbf{L}\circ\Theta\) satisfies \(\mathtt{n}_{1}<\mathtt{n}_{2}<\cdots<\mathtt{n}_{d}=\mathtt{N}\). Once we have this, we can apply a diagonal map to \(\mathbf{L}\circ\Theta\) to achieve the condition \(\phi_{1}(0)=\cdots=\phi_{d}(0)=1\); such a map leaves the vanishing pattern unchanged. This would establish both requirements (2.3) and (2.4). The defining property (10.1) ensures that the type of a function is preserved under invertible linear transformations; i.e., \(\mathbf{L}\circ\Theta\) remains vanishing of type \(\mathtt{N}\) at the origin for any invertible linear transformation \(\mathbf{L}\). In view of this, we will outline a sequence of such transformations, relabelling the transformed function \(\Theta\) at each stage, until we reach a function \(\Theta\) that obeys (2.2), (2.3), and (2.4). The map \(\mathbf{L}\) in the statement of Lemma 2.1 is the composition of the maps used to reach this final \(\Theta\).
Our first task is to create a map \(\mathbf{L}\) such that \(\mathtt{N}\) is present in the vanishing pattern of \(\mathbf{L}\circ\Theta\). From (10.1), we know that there exists a nonzero vector \(\mathtt{u}=(\mathtt{u}_{1},\ldots,\mathtt{u}_{d})\) in \(\mathbb{R}^{d}\) such that
\[\mathtt{u}\cdot\Theta^{(n)}(0)=0\quad\text{for }1\leq n\leq\mathtt{N}-1\qquad \text{and}\qquad\mathtt{u}\cdot\Theta^{(\mathtt{N})}(0)\neq 0.\]
We may assume without loss of generality that \(\mathtt{u}_{d}\neq 0\). Let \(\mathbf{L}\) be the linear map given by
\[\mathbf{L}(x):=(x_{1},\ldots,x_{d-1},\mathtt{u}\cdot x)\quad\text{ for }x=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}.\]
Then \(\mathbf{L}\circ\Theta=:(\widetilde{\Theta}_{1},\ldots,\widetilde{\Theta}_{d})\) obeys
\[\widetilde{\Theta}_{i}(t)=\begin{cases}\Theta_{i}(t)=t^{\mathtt{m}_{i}}\theta _{i}(t)&\text{ if }1\leq i<d,\\ \mathtt{u}\cdot\Theta(t)=t^{\mathtt{N}}\widetilde{\theta}_{d}(t)&\text{ if }i=d \end{cases}\]
for some smooth function \(\widetilde{\theta}_{d}\) such that \(\widetilde{\theta}_{d}(0)\neq 0\). Thus, \(\mathtt{N}\) appears in the vanishing pattern of \(\mathbf{L}\circ\Theta\) and is necessarily its largest entry. We relabel \(\mathbf{L}\circ\Theta\) as \(\Theta\) and assume it takes the form (10.2), with \(\max_{i}\mathtt{m}_{i}=\mathtt{N}\). The value of \(\max_{i}\mathtt{m}_{i}\) will never decrease in the sequence of transformations applied hereafter.
It remains to ensure that the entries of the vanishing pattern of \(\Theta\) can be made to obey the strict monotonicity in (2.3) after a further linear transformation \(\mathbf{L}\). If these entries are already distinct, then the construction of \(\mathbf{L}\) is easy; we can take \(\mathbf{L}\) to be a suitable permutation map. However, in general the vanishing pattern may have repeated entries, and such coincidences have to be eliminated. With that goal in mind, we claim the following:
_Let \(\Theta\) be a function with vanishing pattern \(\vec{\mathtt{m}}\). If there exist indices \(i_{0},i_{1}\in\{1,\ldots,d\}\) with \(i_{0}<i_{1}\) such that \(\mathtt{m}_{i_{0}}=\mathtt{m}_{i_{1}}\), then there exists an invertible linear map \(\mathbf{L}\) such that \(\mathbf{L}\circ\Theta\) has vanishing pattern \(\vec{\mathtt{m}}+\ell\mathtt{e}_{i_{1}}\) for some \(\ell\geq 1\)._
(Here, \(\mathtt{e}_{1},\ldots,\mathtt{e}_{d}\) denote the canonical basis vectors of \(\mathbb{R}^{d}\).) In other words, if the vanishing pattern of \(\Theta\) has a repeated value, then there exists a linear map \(\mathbf{L}\) such that the vanishing pattern of \(\mathbf{L}\circ\Theta\) is the same, except for one of the repeated entries having increased. For now, assume this claim holds. By applying the claim iteratively, we can remove any coincidences in the vanishing pattern of \(\Theta\), as follows. First, set \(i_{0}=1\). For every \(j>1\), we check whether \(\mathtt{m}_{1}=\mathtt{m}_{j}\). If not, then no action is needed. If the equality holds, we apply the claim with \(i_{1}=j\) (and \(i_{0}=1\)) to get a function whose vanishing pattern has distinct values in the first and the \(j^{\text{th}}\) entries; the other entries have not changed. At the end of \(d-1\) such checks with indices \(j=2,3,\ldots,d\), we obtain a function with the property that the first entry \(\mathtt{m}_{1}\) of its vanishing pattern is never repeated among the later entries. This concludes
the first step. Next, we repeat this process with \(i_{0}=2\) and \(j>2\). This produces a function with a vanishing pattern in which neither of the first two entries appears among the later ones. Proceeding in this way, we reach after \(d-1\) steps a function whose vanishing pattern has no repeated values. Finally, we apply a permutation map to rearrange the pattern into increasing order, as suggested above. Thus, we have constructed \(\mathbf{L}\) as the composition of the linear maps behind our applications of the claim, together with the final permutation.
We are left to prove the claim. Fix the indices \(i_{0}<i_{1}\) with \(\mathtt{m}_{i_{0}}=\mathtt{m}_{i_{1}}\), as specified there. Assuming \(\Theta\) takes the form (10.2), we define the linear map \(\mathbf{L}\) as follows:
\[\mathbf{L}(x):=y,\quad\text{where}\quad y_{i}:=\begin{cases}x_{i}&\text{if }i \neq i_{1},\\ x_{i}-\frac{\theta_{i_{1}}(0)}{\theta_{i_{0}}(0)}x_{i_{0}}&\text{if }i=i_{1}. \end{cases}\]
The transformed function \(\mathbf{L}\circ\Theta=:(\widetilde{\Theta}_{1},\dots,\widetilde{\Theta}_{d})\) satisfies \(\widetilde{\Theta}_{i}(t)=t^{\mathtt{m}_{i}}\widetilde{\theta}_{i}(t)\) for
\[\widetilde{\theta}_{i}:=\begin{cases}\theta_{i}&\text{if }i\neq i_{1},\\ \theta_{i_{1}}-\frac{\theta_{i_{1}}(0)}{\theta_{i_{0}}(0)}\theta_{i_{0}}& \text{if }i=i_{1}.\end{cases}\]
Now, on one hand, \(\widetilde{\theta}_{i_{1}}(0)=0\). On the other hand, \(t^{\mathtt{m}_{i_{1}}}\widetilde{\theta}_{i_{1}}(t)=\widetilde{\Theta}_{i_{1}}(t)=\mathtt{u}\cdot\Theta(t)\) for some nonzero vector \(\mathtt{u}\in\mathbb{R}^{d}\), and the finite type hypothesis (10.1) implies the existence of an integer \(\mathtt{n}\in\{1,\dots,\mathtt{N}\}\) such that \(\mathtt{u}\cdot\Theta(t)=t^{\mathtt{n}}\theta_{\mathtt{u}}(t)\) for some smooth function \(\theta_{\mathtt{u}}\) with \(\theta_{\mathtt{u}}(0)\neq 0\). Equating \(t^{\mathtt{m}_{i_{1}}}\widetilde{\theta}_{i_{1}}(t)\) and \(t^{\mathtt{n}}\theta_{\mathtt{u}}(t)\) and differentiating \(\mathtt{n}\) times at the origin shows that \(\mathtt{n}>\mathtt{m}_{i_{1}}\). Setting \(\ell:=\mathtt{n}-\mathtt{m}_{i_{1}}\), it follows that \(\mathbf{L}\circ\Theta\) has vanishing pattern \(\vec{\mathtt{m}}+\ell\mathtt{e}_{i_{1}}\), and so the claim is proved.
### Partial-avoidance of graph-like curves
Proof of Proposition 4.2.: The proof is quite similar to that of Proposition 4.1, so we will omit some details. Fix \(\mathbf{L}\) and \(\Phi\colon\mathtt{I}\to\mathbb{R}^{d}\) satisfying conditions (i)-(iii) in the definition of graph-like curve, with \(\mathtt{m}\) in condition (ii) being the subtype of \(\Gamma\). We may assume without loss of generality that \(\mathbf{L}\) is the identity. Since \(\Gamma\) contains the origin, we necessarily have \(0\in\mathtt{I}\) and \(\Phi(0)=0\). Additionally, since \(\Gamma\) is of type \(\mathtt{N}\) at the origin, it follows that \(\Phi\) is of type at least \(\mathtt{N}\) at zero. Therefore, there exists a unit vector \(\mathtt{u}\in\mathbb{R}^{d}\) such that \(\mathtt{u}\cdot\Phi^{(n)}(0)=0\) for every \(n\in\{0,1,\dots,\mathtt{N}-1\}\). Let \(z\) be defined as in the proof of Proposition 4.1. Property (4.3) still holds with this \(z\), but now only for \(n\in\{0,1,\dots,\mathtt{N}\}\). Properties (4.4) and (4.5) remain valid without any changes.
Let
\[\overline{s}:=\min\Big{\{}\frac{\mathtt{N}}{\mathtt{m}},d-\frac{(d-1)\mathtt{ m}}{\mathtt{N}}\Big{\}}.\]
Lemma 2.1 and the definition of subtype given in Subsection 4.2 imply that \(\mathtt{N}\geq\mathtt{m}+d-1\). In particular, we have \(\mathtt{N}>\mathtt{m}\), so that \(\overline{s}\in(1,d)\). It suffices to prove the proposition for all \(s\) sufficiently close to \(\overline{s}\). In view of this, let us fix \(s\in[1,\overline{s})\).
Our next goal is to define \(K\) as in the proof of Proposition 4.1; see the four bullet points in that proof. There, we chose an arbitrary Holder continuity exponent \(\alpha\) satisfying (3.1) and an arbitrary integer \(\mathtt{n}\) such that \(\mathtt{n}\alpha>\mathtt{m}\). Here, we want to choose \(\alpha\) and \(\mathtt{n}\) more carefully, so that \(\mathtt{n}\leq\mathtt{N}\). (This ensures that (4.3) can be applied later.) In fact, \(\mathtt{n}=\mathtt{N}\) will work, with a
suitable \(\alpha\). To see this, we consider two cases: \(s\leq d-1\) and \(s>d-1\). Suppose we are in the first case. Then the minimum in (3.1) is equal to \(1/s\). Since \(s<\overline{s}\leq\mathtt{N}/\mathtt{m}\), there exists an \(\alpha\in(0,1/s)\) such that \(\mathtt{N}\alpha>\mathtt{m}\). If we are in the second case, then the minimum in (3.1) is \(\frac{d-s}{d-1}\), and \(s<\overline{s}\leq d-\frac{(d-1)\mathtt{m}}{\mathtt{N}}\) implies that \(\mathtt{N}\alpha>\mathtt{m}\) for some \(\alpha\in(0,\frac{d-s}{d-1})\). Having fixed \(\alpha\) and \(\mathtt{n}=\mathtt{N}\), the definition of \(K\) proceeds just as in the proof of Proposition 4.1. This yields \(\dim_{\mathrm{H}}K=s\) and utilizes, in particular, the function \(F_{s}\) from Proposition 3.1.
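Purely to illustrate the arithmetic behind this choice of \(\alpha\), here is a small sketch (the numerical values of \(\mathtt{N}\), \(\mathtt{m}\), \(d\), and \(s\) are arbitrary illustrations, and the midpoint choice of \(\alpha\) is ours): it computes \(\overline{s}\) and exhibits one admissible \(\alpha\) with \(\mathtt{N}\alpha>\mathtt{m}\) below the minimum in (3.1).

```python
def admissible_alpha(N, m, d, s):
    """Return one alpha with N*alpha > m and alpha below the minimum in (3.1), if s < s_bar."""
    s_bar = min(N / m, d - (d - 1) * m / N)
    assert 1 <= s < s_bar
    cap = 1.0 / s if s <= d - 1 else (d - s) / (d - 1)   # the minimum in (3.1)
    alpha = 0.5 * (m / N + cap)                          # midpoint of the admissible window
    assert m / N < alpha < cap and N * alpha > m
    return s_bar, alpha

# e.g. d = 3, subtype m = 2, type N = 6: s_bar = min(3, 3 - 2*2/6) = 7/3
print(admissible_alpha(N=6, m=2, d=3, s=2.0))
```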
Finally, to complete the proof, we assume that there exists some \(\gamma\in(\Gamma\setminus\{0\})\cap(K-K)\) and seek a contradiction. This can be done as in the proof of Proposition 4.1 without modification, using in particular property (4.5), the \(\alpha\)-Holder continuity of \(F_{s}\), and property (4.3) with \(n=\mathtt{N}\).
### Anisotropic boxes and Hausdorff dimension
#### 10.3.1. Proof of Lemma 8.1
For \(t\geq 0\), let \(\mathcal{H}^{t}\) denote the standard \(t\)-dimensional Hausdorff measure on \(\mathbb{R}^{d}\), defined by
\[\mathcal{H}^{t}(A):=\lim_{\delta\searrow 0}\mathcal{H}^{t}_{\delta}(A),\]
where
\[\mathcal{H}^{t}_{\delta}(A):=\inf\Big{\{}\sum_{U\in\mathcal{U}}\mathrm{diam}(U )^{t}\colon\mathcal{U}\text{ is a countable cover of }A,\ \sup_{U\in\mathcal{U}}\mathrm{diam}\,U\leq \delta\Big{\}}.\]
For any set \(A\subset\mathbb{R}^{d}\), we have (by definition)
\[\dim_{\mathrm{H}}A=\inf\{t\colon\mathcal{H}^{t}(A)=0\}=\sup\{t\colon\mathcal{H }^{t}(A)=\infty\}; \tag{10.3}\]
see [34, SS2.2]. Now, fix \(E\subseteq\mathbb{R}^{d}\) and \(s\in[0,\mathtt{S}-(d-\dim_{\mathrm{H}}E)\mathtt{N})\), as in the lemma statement. The definitions of \(\mathtt{S}\) and \(\mathtt{N}\) in (8.1) imply that \(\mathtt{S}\leq d\mathtt{N}\). It follows that
\[s=\mathtt{S}-(d-\alpha)\mathtt{N}\]
for some \(\alpha\in[0,\dim_{\mathrm{H}}E)\). By (10.3), we have \(\mathcal{H}^{\alpha}(E)=\infty\), and thus there exists some \(\delta\in(0,1]\) such that \(\mathcal{H}^{\alpha}_{\delta}(E)>0\). Let \(\mathcal{Q}\subseteq\mathcal{D}^{*}\) be an arbitrary cover of \(E\). If there exists some \(Q_{0}\in\mathcal{Q}\) such that \(\ell(Q_{0})^{\mathtt{N}}\geq\delta/\sqrt{d}\), then
\[\sum_{Q\in\mathcal{Q}}\ell(Q)^{s}\geq\ell(Q_{0})^{s}\geq\Big{(}\frac{\delta}{ \sqrt{d}}\Big{)}^{s/\mathtt{N}}.\]
Suppose instead that every \(Q\in\mathcal{Q}\) obeys \(\ell(Q)^{\mathtt{N}}\leq\delta/\sqrt{d}\). Each \(Q\) admits a covering \(\mathcal{C}(Q)\) by exactly \(\ell(Q)^{\mathtt{S}-d\mathtt{N}}\) _cubes_ of side-length \(\ell(Q)^{\mathtt{N}}\). In particular, the collection
\[\mathcal{C}(\mathcal{Q}):=\bigcup_{Q\in\mathcal{Q}}\mathcal{C}(Q)\]
is a countable cover of \(E\) consisting of sets of diameter at most \(\delta\). It follows that
\[\sum_{Q\in\mathcal{Q}}\ell(Q)^{s}=\sum_{Q\in\mathcal{Q}}\ell(Q)^{\mathtt{S}- d\mathtt{N}}\ell(Q)^{\alpha\mathtt{N}}=\sum_{C\in\mathcal{C}(\mathcal{Q})} \mathrm{diam}(C)^{\alpha}\geq\mathcal{H}^{\alpha}_{\delta}(E).\]
Since \(\mathcal{Q}\) was arbitrary, we may conclude that
\[\mathcal{H}^{s}_{\mathcal{D}^{*}}(E)\geq\min\Big{\{}\Big{(}\frac{\delta}{ \sqrt{d}}\Big{)}^{s/\mathtt{N}},\mathcal{H}^{\alpha}_{\delta}(E)\Big{\}}>0,\]
completing the proof.
#### 10.3.2. Proof of Lemma 8.2
Fix a set \(E\subseteq\mathbb{R}^{d}\) such that \(E\) is contained in an element of \(\mathcal{D}_{J}^{*}\) for some \(J\). Let \(Q_{E}\) be the unique element of \(\mathcal{D}_{J}\) such that \(E\subseteq Q_{E}\), and let \(\mathcal{Q}\subseteq\mathcal{D}^{*}\) be an arbitrary cover of \(E\). On one hand, if \(\mathcal{Q}\not\subseteq\mathcal{D}_{J}^{*}\), then there exists some \(Q_{0}\in\mathcal{Q}\) such that \(\ell(Q_{0})\geq 2^{-J}\), and
\[\sum_{Q\in\mathcal{Q}}\ell(Q)^{s}\geq\ell(Q_{0})^{s}\geq 2^{-Js}=\ell(Q_{E})^{s} \geq\mathcal{H}_{\mathcal{D}_{J}^{*}}^{s}(E).\]
On the other hand, if \(\mathcal{Q}\subseteq\mathcal{D}_{J}^{*}\), then
\[\sum_{Q\in\mathcal{Q}}\ell(Q)^{s}\geq\mathcal{H}_{\mathcal{D}_{J}^{*}}^{s}(E)\]
by definition. Since \(\mathcal{Q}\) was arbitrary, we may conclude that \(\mathcal{H}_{\mathcal{D}^{*}}^{s}(E)\geq\mathcal{H}_{\mathcal{D}_{J}^{*}}^{s}(E)\) and hence \(\mathcal{H}_{\mathcal{D}^{*}}^{s}(E)=\mathcal{H}_{\mathcal{D}_{J}^{*}}^{s}(E)\).
#### 10.3.3. Proof of Lemma 8.3
Our argument follows the proof of the standard version of Frostman's lemma, [33, Theorem 8.8], with minor adjustments. There, the idea is to create a sequence of measures \(\{\vartheta^{j}\}_{j}\), with \(\vartheta^{j}\) obeying the required ball condition on balls of diameter at least \(2^{-j}\), and then take a weak limit. The main distinction here is that the dyadic cubes used in Frostman's proof are replaced by the anisotropic boxes in \(\mathcal{D}^{*}\).
Fix a compact set \(E\subset\mathbb{R}^{d}\) and \(s\geq 0\). By translation, we may assume that \(E\) is contained in some element of \(\mathcal{D}_{J}\) for some \(J\). For each \(j\geq J\), define a measure \(\vartheta_{j}^{j}\) on \(\mathbb{R}^{d}\) by specifying that
\[\vartheta_{j}^{j}|_{Q}=\begin{cases}0&\text{if }E\cap Q=\emptyset\\ \frac{\ell(Q)^{s}}{\lambda(Q)}\lambda|_{Q}&\text{if }E\cap Q\neq\emptyset\end{cases} \qquad\text{for each }Q\in\mathcal{D}_{j},\]
where \(\lambda\) denotes \(d\)-dimensional Lebesgue measure. Suppose that measures \(\vartheta_{j}^{j},\vartheta_{j-1}^{j},\ldots,\vartheta_{j-k}^{j}\) on \(\mathbb{R}^{d}\) have been constructed, with \(j-k>J\). Define the measure \(\vartheta_{j-k-1}^{j}\) by specifying that
\[\vartheta_{j-k-1}^{j}|_{Q}=\begin{cases}\vartheta_{j-k}^{j}|_{Q}&\text{if } \vartheta_{j-k}^{j}(Q)\leq\ell(Q)^{s}\\ \frac{\ell(Q)^{s}}{\vartheta_{j-k}^{j}(Q)}\vartheta_{j-k}^{j}|_{Q}&\text{if } \vartheta_{j-k}^{j}(Q)>\ell(Q)^{s}\end{cases}\qquad\text{for each }Q\in\mathcal{D}_{j-k-1}.\]
Let
\[j^{*}:=\max\{j^{\prime}\leq j\colon E\subset Q\text{ for some }Q\in \mathcal{D}_{j^{\prime}}\},\]
and define \(\vartheta^{j}:=\vartheta_{j^{*}}^{j}\) (noting that \(J\leq j^{*}\leq j\)). Let \(Q_{j^{*}}\in\mathcal{D}_{j^{*}}\) be such that \(E\subset Q_{j^{*}}\). The following are consequences of the construction of \(\vartheta^{j}\):
1. \(\operatorname{supp}\vartheta^{j}\subseteq\bigcup\{Q\in\mathcal{D}_{j}\colon E \cap Q\neq\emptyset\}\subseteq Q_{j^{*}}\);
2. \(\vartheta^{j}(Q)\leq\ell(Q)^{s}\) for all \(Q\in\mathcal{D}_{j^{\prime}}\) with \(J\leq j^{\prime}\leq j\);
3. Each point in \(E\) belongs to some \(Q\in\mathcal{D}_{j^{\prime}}\) with \(J\leq j^{\prime}\leq j\) such that \(\vartheta^{j}(Q)=\ell(Q)^{s}\).
Statements 1 and 2 imply that
\[\sup_{j\geq J}\|\vartheta^{j}\|\leq\sup_{j\geq J}\ell(Q_{j^{*}})^{s}\leq 2^{-Js}, \tag{10.4}\]
and consequently \(\{\vartheta^{j}\}_{j\geq J}\) has a weakly convergent subsequence. Let \(\vartheta\) be the weak limit along this subsequence.
We will show that this measure satisfies the conclusions of the lemma. Statement 1 and the hypothesis that \(E\) is compact imply that \(\vartheta\) is supported in \(E\). Fix \(Q\in\mathcal{D}^{*}\). If \(\ell(Q)\geq 2^{-J}\), then (10.4) implies that \(\vartheta(Q)\leq\|\vartheta\|\leq 2^{-Js}\leq\ell(Q)^{s}\), while if \(\ell(Q)\leq 2^{-J}\), then statement 2 implies that \(\vartheta(Q)\leq\ell(Q)^{s}\). This confirms the "ball" condition for \(\vartheta\). It remains to show that the total mass of \(\vartheta\) is at least \(\mathcal{H}^{s}_{\mathcal{D}^{*}}(E)\). Toward this end, fix \(j\geq J\). Using statement 3, we select for each \(x\in E\) the largest \(Q\in\mathcal{D}^{*}\) such that \(x\in Q\) and \(\vartheta^{j}(Q)=\ell(Q)^{s}\). Let \(\mathcal{Q}_{j}\) denote the collection of these elements. Then \(\mathcal{Q}_{j}\) covers both \(E\) and \(\operatorname{supp}\vartheta^{j}\), and distinct elements of \(\mathcal{Q}_{j}\) are disjoint. It follows that
\[\|\vartheta^{j}\|=\sum_{Q\in\mathcal{Q}_{j}}\vartheta^{j}(Q)=\sum_{Q\in \mathcal{Q}_{j}}\ell(Q)^{s}\geq\mathcal{H}^{s}_{\mathcal{D}^{*}}(E).\]
Since \(j\) was arbitrary, we may conclude that \(\|\vartheta\|\geq\mathcal{H}^{s}_{\mathcal{D}^{*}}(E)\).
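For intuition only, the capping recursion behind this construction can be imitated on a finite dyadic grid. The sketch below works in one dimension with \(\vec{n}=(1)\) and takes \(E\) to be a random union of leaf cells; all of these choices, and the representation of the measures by leaf masses, are illustrative assumptions. It builds one stage \(\vartheta^{j}\) and checks the mass cap corresponding to statement 2 above.

```python
import numpy as np

def frostman_j(hit, j, s, J=0):
    """One stage theta^j of the construction, for E = union of the level-j cells where hit is True.

    Masses are stored per leaf cell; going up from level j-1 to level J, the mass of every
    dyadic interval is capped at ell(Q)^s = 2^(-ks) by rescaling the leaves inside it.
    """
    mass = np.where(hit, 2.0 ** (-j * s), 0.0)            # theta^j_j gives each hit leaf ell^s
    for k in range(j - 1, J - 1, -1):
        block = 2 ** (j - k)                               # leaves per level-k interval
        for b in range(2 ** k):
            sl = slice(b * block, (b + 1) * block)
            tot, cap = mass[sl].sum(), 2.0 ** (-k * s)
            if tot > cap:
                mass[sl] *= cap / tot
    return mass

j, s = 8, 0.6
rng = np.random.default_rng(1)
hit = rng.random(2 ** j) < 0.3                             # a random "set" E of leaf cells
mass = frostman_j(hit, j, s)

# Cap check (statement 2): every dyadic interval carries mass at most ell(Q)^s.
for k in range(j + 1):
    sums = mass.reshape(2 ** k, 2 ** (j - k)).sum(axis=1)
    assert np.all(sums <= 2.0 ** (-k * s) + 1e-12)
print("total mass of theta^j:", mass.sum())
```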
#### 10.3.4. Proof of Lemma 8.4
Fix \(L\), \(\sigma\), and \(s\) as in the lemma statement, and let \(\vartheta\) be any Borel measure supported on \([0,1]^{d}\) such that
\[\|\vartheta\|\leq 1\qquad\text{and}\qquad\sup_{Q\in\mathcal{D}^{*}}\frac{ \vartheta(Q)}{\ell(Q)^{s}}\leq L. \tag{10.5}\]
Let \(\Omega:=[0,2)^{d}\), and let \(\Delta\) denote the diagonal of \(\Omega\times\Omega\), namely \(\Delta:=\{(x,y)\in\Omega\times\Omega\colon x=y\}\). Thus, the \(\sigma\)-dimensional energy of \(\vartheta\) can be written as
\[I_{\sigma}(\vartheta)=\iint_{\Omega\times\Omega\setminus\Delta}|x-y|^{-\sigma }d\vartheta(x)d\vartheta(y).\]
We form a Whitney decomposition of \(\Omega\times\Omega\setminus\Delta\) as follows: For each \(j\geq-1\), let \(\mathcal{C}_{j}=\mathcal{C}_{j}(\Omega)\) be the set of dyadic cubes with side-length \(2^{-j}\) contained in \(\Omega\); that is,
\[\mathcal{C}_{j}:=\{x+[0,2^{-j})^{d}\colon x\in 2^{-j}\mathbb{Z}^{d}\cap \Omega\}.\]
We say that two dyadic cubes are _adjacent_ if their closures have nonempty intersection. For \(j\geq 0\), each cube in \(\mathcal{C}_{j}\) is contained in a unique "parent" cube in \(\mathcal{C}_{j-1}\). We say that \(C,C^{\prime}\in\mathcal{C}_{j}\) are _related_, and write \(C\sim C^{\prime}\), if \(C\) and \(C^{\prime}\) are nonadjacent but have adjacent parents. The following properties are easy to confirm:
1. For each \((x,y)\in\Omega\times\Omega\setminus\Delta\), there exists a (unique) pair of related cubes \(C,C^{\prime}\) such that \((x,y)\in C\times C^{\prime}\).
2. If \(C,C^{\prime}\in\mathcal{C}_{j}\) and \(C\sim C^{\prime}\), then \(|x-y|\geq 2^{-j}\) for all \((x,y)\in C\times C^{\prime}\).
3. For a fixed cube \(C\), there are at most \(6^{d}\) cubes \(C^{\prime}\) such that \(C\sim C^{\prime}\).
It follows from property 1 that
\[\Omega\times\Omega\setminus\Delta=\bigcup_{j=0}^{\infty}\bigcup_{\begin{subarray} {c}(C,C^{\prime})\in\mathcal{C}_{j}\times\mathcal{C}_{j}\colon\\ C\sim C^{\prime}\end{subarray}}C\times C^{\prime}.\]
Thus, using properties 2 and 3, together with (10.5) and our hypothesis on \(s\), we obtain
\[I_{\sigma}(\vartheta) \leq\sum_{j=0}^{\infty}\sum_{\begin{subarray}{c}(C,C^{\prime})\in\mathcal{C}_{j}\times\mathcal{C}_{j}:\\ C\sim C^{\prime}\end{subarray}}\iint_{C\times C^{\prime}}|x-y|^{-\sigma}d\vartheta(x)d\vartheta(y)\] \[\leq\sum_{j=0}^{\infty}\sum_{C\in\mathcal{C}_{j}}\sum_{C^{\prime}\in\mathcal{C}_{j}:\,C\sim C^{\prime}}2^{j\sigma}\vartheta(C)\vartheta(C^{\prime})\] \[\leq\sum_{j=0}^{\infty}2^{j\sigma}\sum_{C\in\mathcal{C}_{j}}\vartheta(C)\#\{C^{\prime}\in\mathcal{C}_{j}\colon C\sim C^{\prime}\}\max_{C^{\prime}\in\mathcal{C}_{j}}\vartheta(C^{\prime})\] \[\leq 6^{d}\sum_{j=0}^{\infty}2^{j\sigma}\Big{(}\sum_{C\in\mathcal{C}_{j}}\vartheta(C)\Big{)}\max_{C^{\prime}\in\mathcal{C}_{j}}\#\{Q\in\mathcal{D}_{j}\colon Q\cap C^{\prime}\neq\emptyset\}L2^{-js}\] \[=6^{d}L\sum_{j=0}^{\infty}2^{j\sigma}\|\vartheta\|2^{j(\mathtt{S}-d)}2^{-js}\leq 6^{d}L\sum_{j=0}^{\infty}2^{j(\sigma+\mathtt{S}-d-s)}=\frac{6^{d}L}{1-2^{\sigma+\mathtt{S}-d-s}}.\]
Since \(\vartheta\) was arbitrary, we have shown that the lemma holds with
\[\mathtt{E}(t):=\frac{6^{d}}{1-2^{-t}},\]
and the proof is complete.
|
2306.12556 | Off the Radar: Uncertainty-Aware Radar Place Recognition with
Introspective Querying and Map Maintenance | Localisation with Frequency-Modulated Continuous-Wave (FMCW) radar has gained
increasing interest due to its inherent resistance to challenging environments.
However, complex artefacts of the radar measurement process require appropriate
uncertainty estimation to ensure the safe and reliable application of this
promising sensor modality. In this work, we propose a multi-session map
management system which constructs the best maps for further localisation based
on learned variance properties in an embedding space. Using the same variance
properties, we also propose a new way to introspectively reject localisation
queries that are likely to be incorrect. For this, we apply robust noise-aware
metric learning, which both leverages the short-timescale variability of radar
data along a driven path (for data augmentation) and predicts the downstream
uncertainty in metric-space-based place recognition. We prove the effectiveness
of our method over extensive cross-validated tests of the Oxford Radar RobotCar
and MulRan dataset. In this, we outperform the current state-of-the-art in
radar place recognition and other uncertainty-aware methods when using only
single nearest-neighbour queries. We also show consistent performance increases
when rejecting queries based on uncertainty over a difficult test environment,
which we did not observe for a competing uncertainty-aware place recognition
system. | Jianhao Yuan, Paul Newman, Matthew Gadd | 2023-06-21T20:53:25Z | http://arxiv.org/abs/2306.12556v1 | Off the Radar: Uncertainty-Aware Radar Place Recognition with Introspective Querying and Map Maintenance
###### Abstract
Localisation with Frequency-Modulated Continuous-Wave (FMCW) radar has gained increasing interest due to its inherent resistance to challenging environments. However, complex artefacts of the radar measurement process require appropriate uncertainty estimation to ensure the safe and reliable application of this promising sensor modality. In this work, we propose a multi-session map management system which constructs the "best" maps for further localisation based on learned variance properties in an embedding space. Using the same variance properties, we also propose a new way to introspectively reject localisation queries that are likely to be incorrect. For this, we apply robust noise-aware metric learning, which both _leverages_ the short-timescale variability of radar data along a driven path (for data augmentation) and _predicts_ the downstream uncertainty in metric-space-based place recognition. We prove the effectiveness of our method over extensive cross-validated tests of the _Oxford Radar RobotCar_ and _MulRan_ datasets. In this, we outperform the current state-of-the-art in radar place recognition and other uncertainty-aware methods when using only single nearest-neighbour queries. We also show consistent performance increases when rejecting queries based on uncertainty over a difficult test environment, which we did not observe for a competing uncertainty-aware place recognition system.
Radar, Place Recognition, Deep Learning, Uncertainty Estimation, Autonomous Vehicles, Robotics
## I Introduction
Place recognition and localisation are important tasks in the field of robotics and autonomous systems, as they enable a system to understand and navigate its environment. Traditional vision-based methods for place recognition are often vulnerable to changes in environmental conditions such as lighting, weather, and occlusion, leading to performance degradation [1]. To address this issue, there has been increasing interest in the use of FMCW radar as a robust sensor substitute for such adversarial environments. Existing works have demonstrated the effectiveness of FMCW radar place recognition with hand-crafted [2, 3, 4, 5, 6] and learning-based feature extraction approaches [7, 8, 9, 10, 11, 12, 13, 14, 15]. Despite the success of existing works, the deployment of these methods in safety-critical applications such as autonomous driving is still limited by the lack of calibrated uncertainty estimation. While there have been works [16, 17, 18, 19] that address the problem in similar areas such as image retrieval or visual place recognition, which leverage both Bayesian and learning-based approaches, there is no previous work specifically targeting uncertainty estimation for radar place recognition.
In this area, it is important to consider: (1) safety requires uncertainty estimation to be well-calibrated to false positive rate in order to enable _introspective_ rejection; (2) real-time deployment requires _fast_ single-scan uncertainty-based inference capability; (3) repetitive route traversal in long-term autonomy requires online _continual_ map maintenance.
While the Variational Autoencoder (VAE) [20] is usually used for generative tasks, its probabilistic latent space can serve as an effective metric space representation for place recognition [21, 22, 23] and allows a prior assumption on the data noise distribution, which also yields a normalized aleatoric uncertainty estimate.
Thus, in this paper, to achieve reliable and safe deployment of FMCW radar in autonomous driving, we leverage a variational contrastive learning framework and propose a unified uncertainty-aware radar place recognition method as shown in Fig. 1. Our key contributions are as follows:
1. Uncertainty-aware metric learning framework for radar place recognition.
2. Introspective query mechanism based on false positive calibrated uncertainty estimation for real-time autonomous driving deployment.
3. Online recursive map maintenance and improvement mechanism for repeated traversals of changing environments in long-term autonomy.
In doing so we outperform the previous radar place recognition state-of-the-art [8] and show that our learned uncertainty is more suitable for query rejection than a previous approach [17] developed for vision but tested here in radar.
## II Related Work
### _Radar Place Recognition_
Place recognition can be viewed as a query and retrieval process, where a map is constructed with a dictionary of radar scans. For every new query scan, the algorithm needs to retrieve a scan from the dictionary such that samples from a similar topological location are close to each other, and vice versa. Traditional hand-crafted feature extraction methods, such as correlative scan matching [2], graph matching [3, 4], and the scan context descriptor [6], have demonstrated the effectiveness of using radar perception for place recognition and localisation. Recently, learning-based methods have shown impressive performance. Supervised metric learning methods [7, 9, 10, 11, 12, 13], which exploit rotational invariance and spatial-temporal consistency in radar scans, have demonstrated remarkable performance. Then, Gadd _et al_[8] achieved comparable performance to supervised methods using contrastive learning. Moreover, multi-modal methods incorporating additional modalities, such as LIDAR points, through direct joint learning [15] or point-cloud registration [24], have also been explored to aid radar-based recognition. Among numerous metric learning methods, the VAE structure is extensively explored. It uses variational inference to approximate the posterior distribution of the latent representation and thus gives a probabilistic estimate of the embedding that accounts for data noise. Lin _et al_[25] use a VAE in metric learning for the visual classification task. Burnett _et al_[26] use a VAE-like structure with factor graph optimisation for radar odometry.
Inspired by the variational metric learning approach of Lin _et al_[25] and the contrastive learning approach of Gadd _et al_[8], we build our representation learning framework in a contrastive manner on a VAE-like structure.
### _Uncertainty Estimation_
Uncertainty estimation aims to quantify the model prediction confidence, which is a crucial component in safety-critical applications, allowing the model to introspectively reject low-confidence predictions. One line of work in similar areas, such as image retrieval and place recognition, uses Bayesian approaches such as Monte Carlo Dropout [16] to estimate the epistemic uncertainty; this, however, is usually computationally expensive and hinders deployment in real-time autonomous driving. Another line of work directly learns the uncertainty from data to estimate the aleatoric uncertainty. However, previous approaches such as PTL [19], BPE [18], and TAH [27] are restricted to specific pairwise or triplet losses, which lacks flexibility in offline system design. Later, STUN [17] uses a knowledge distillation framework to break such constraints and adapt to arbitrary metric learning loss functions. Moreover, STUN [17] uses a dynamic binning strategy that requires a large batch of test samples to determine a meaningful uncertainty rejection threshold at inference time. Here, only the relative uncertainty within a batch can be exploited. However, this is not suitable for real-time place recognition deployment as the entire test sequence is not known a priori. Thus, to tackle this problem, we develop a static binning strategy by leveraging the VAE variance prediction, such that the estimated uncertainty is compared against a pre-defined prior distribution.
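The intended use of this static reference can be summarised schematically. The sketch below is purely hypothetical pseudocode for exposition: the scalar summary of the predicted variance, the function name, and the fixed threshold value are our own assumptions, not a description of the implementation. The point is only that, because the variant branch is tied to a fixed prior, a single query can be accepted or rejected without seeing the rest of the test sequence.

```python
import numpy as np

def reject_query(pred_var, threshold):
    """Hypothetical static-binning rejection: summarise the per-dimension predicted variance
    of a single query embedding and reject if it exceeds a threshold fixed in advance
    (e.g. calibrated once against the N(0, I) prior), rather than ranking within a test batch."""
    uncertainty = float(np.mean(pred_var))     # one scalar per query scan
    return uncertainty > threshold, uncertainty

# Example: a confident query (variance well below the prior) versus a diffuse one.
print(reject_query(np.full(128, 0.3), threshold=1.0))
print(reject_query(np.full(128, 1.7), threshold=1.0))
```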
### _Uncertainty-aware Radar Localisation_
Although uncertainty estimation in radar place recognition tasks is still largely under-explored, there have been attempts to develop introspective radar systems for odometry. Adolfsson _et al_[28] present a radar system that verifies loop closure candidates introspectively by combining multiple place-recognition techniques, including tightly-coupled place similarity and odometry uncertainty search, creating loop descriptors from origin-shifted scans, and delaying loop selection until after verification. In contrast, our system does not rely on a secondary odometry estimation system, as uncertainty is inherent to the representation we learn for place recognition. Aldera _et al_[29] focus only on odometry, and use inertial sensors to learn an uncertainty classifier, which we do not.
## III Method
In this section, we present the proposed modules for the variational uncertainty-aware radar place recognition method including metric space learning (Sec. III-A), mapping (Sec. III-B), and querying (Sec. III-C). We explain how these modules are integrated to form a unified system.
### _Variational Contrastive Learning_
The work in this section is both a key enabler of our core contributions, map maintenance and introspective querying (see below), and a novel integration of deep variational metric learning (DVML) [25] with radar place recognition [8, 14], as well as a new way to characterise uncertainty in place recognition. Indeed, we show in Sec. V that learning uncertainty in this way yields more calibrated introspection than self-teaching uncertainty estimation (STUN) [17].
As illustrated in Fig. 2, we adopt a VAE structure to disentangle the radar scan embedding into a noise-induced variant part \(Z_{V}\), which captures the variance of prediction-irrelevant uncertainty sources, and a semantic invariant part \(Z_{I}\), for the essential features of the scene representation. The variant part is later sampled from a prior multivariate isotropic Gaussian distribution and added to the invariant part to form the overall representation \(Z=Z_{I}+Z_{V}\). The variant output is directly used as the uncertainty measure.
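A minimal sketch of such a head in PyTorch is given below. The layer sizes, the stand-in for backbone features, and all variable names are our own illustrative assumptions; the actual architecture is the one shown in Fig. 2. The encoder emits the invariant embedding \(Z_{I}\) together with the parameters of the variant branch, the reparameterisation trick produces \(Z=Z_{I}+Z_{V}\), and the predicted variance doubles as a per-scan uncertainty.

```python
import torch
import torch.nn as nn

class VariationalHead(nn.Module):
    """Disentangled embedding head: invariant part Z_I plus a noise branch Z_V ~ N(mu, Sigma)."""
    def __init__(self, feat_dim=512, embed_dim=128):
        super().__init__()
        self.inv = nn.Linear(feat_dim, embed_dim)       # semantic, deterministic part Z_I
        self.mu = nn.Linear(feat_dim, embed_dim)        # mean of the variant part
        self.logvar = nn.Linear(feat_dim, embed_dim)    # log-variance of the variant part

    def forward(self, feat):
        z_i = self.inv(feat)
        mu, logvar = self.mu(feat), self.logvar(feat)
        z_v = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
        z = z_i + z_v                                                # overall representation
        uncertainty = torch.exp(logvar).mean(dim=1)                  # scalar per radar scan
        return z_i, z, mu, logvar, uncertainty

feat = torch.randn(4, 512)        # stand-in for backbone features of 4 radar scans
z_i, z, mu, logvar, u = VariationalHead()(feat)
print(z.shape, u.shape)           # torch.Size([4, 128]) torch.Size([4])
```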
The assumption is that we consider only the aleatoric uncertainty in model prediction, caused by the inherent ambiguity and randomness in the data, as the main source of uncertainty [30]. For radar scanning in particular, this is likely due to speckle noise, saturation and temporary occlusion. Standard metric learning approaches, regardless of the loss function chosen, tend to enforce identical embeddings between positive sample pairs while ignoring the underlying variances between them. However, this can lead to the model being insensitive to minor features and overfitting to the training distribution. Thus, to model the noise variance, we use the extra probabilistic variance output in the VAE structure to estimate the aleatoric uncertainty.
To establish such a noise-aware representation for radar perception, we use four loss functions to guide the overall training. We choose to demonstrate the benefit of our specific method of learning uncertainty in the setting of losses which feature increased numbers of _negative_ examples - this having been proven in STUN [17] to perform best (i.e. quadruplet, with \(2\) negatives). In this area, the state-of-the-art in radar place recognition [8] already uses a loss with many (i.e. more than \(2\)) negative samples - so, we extend upon this one. For a fair comparison, we compare the uncertainty mechanism of STUN with ours under the same loss regime - see further experimental details in Sec. IV-B.
**(1) Invariant contrastive loss** on the deterministic representation \(Z_{I}\) to disentangle the task-independent noise from radar semantics such that the invariant embedding contains sufficient causal information; and
**(2) Variant contrastive loss** on the overall representation \(Z\) to establish a meaningful metric space. Both contrastive losses take the form of Eq. (1).
\[L_{Con}=-\sum_{i}^{m}\log P\left(i\mid\hat{\mathbf{x}}_{i}\right)-\sum_{i}^{m} \sum_{j\neq i}^{m}\log\left(1-P\left(i\mid\mathbf{x}_{j}\right)\right) \tag{1}\]
where a batch consists of \(m\) samples \(\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{m}\}\) and their augmentations \(\{\hat{\mathbf{x}}_{1},\hat{\mathbf{x}}_{2},\ldots,\hat{\mathbf{x}}_{m}\}\) - temporally proximal frames synthetically rotated using the "spinning" strategy [8] (i.e., rotation augmentation) for rotation invariance. We aim to maximise the probability that an augmented sample \(\hat{x}_{i}\) is recognised as the original instance \(x_{i}\) while minimising the probability of the reversed case. The probability of recognition is approximated as in Eq. (2).
\[P\left(i\mid\mathbf{x}_{j}\right)=\frac{\exp\left(\mathbf{z}_{i}^{T}\mathbf{z}_{j}/\tau\right)}{\sum_{k=1}^{m}\exp\left(\mathbf{z}_{k}^{T}\mathbf{z}_{j}/\tau\right)},\quad j\neq i \tag{2}\]
where the embeddings \(\mathbf{z}\) are either \(Z_{I}\) or \(Z\), as in losses (1) and (2); a minimal sketch of these contrastive terms is given after Eq. (4) below.
**(3) Kullback-Leibler (KL) divergence** between the learned Gaussian distribution and a standard isotropic multivariate Gaussian, which is our prior assumption on the data noise. This ensures an identical distribution across all the sample noises and provides a static reference for the absolute value of variant output.
\[D_{\mathrm{KL}}=\sum_{z\in\mathcal{Z}_{\mathcal{V}}}\mathcal{N}\left(\mathbf{\mu},\mathbf{\Sigma}\right)\log\left(\frac{\mathcal{N}\left(\mathbf{\mu},\mathbf{\Sigma} \right)}{\mathcal{N}\left(\mathbf{0},\mathbf{I}\right)}\right) \tag{3}\]
**(4) Reconstruction loss** between the extracted feature map \(M\) and the decoder output \(M_{R}\), which forces the overall representation \(Z\) to retain sufficient information from the original radar scans for reconstruction. However, instead of pixel-level radar scan reconstruction, we only reconstruct a lower-dimensional feature map to reduce the computational cost of the decoding process.
\[L_{Rec}=\left\|M_{R}-M\right\|_{2} \tag{4}\]
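As a concrete illustration of the two contrastive terms of Eqs. (1)-(2), the sketch below implements them for a batch of embeddings; it is a simplified reading of the loss, not released training code, and the small clamping constant is our own addition for numerical safety.

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(z: torch.Tensor, z_aug: torch.Tensor, tau: float = 1.0):
    """Sketch of Eqs. (1)-(2). z, z_aug: (m, d) embeddings of the original scans and of
    their rotation-augmented counterparts (either Z_I or the overall Z)."""
    m = z.size(0)
    # First term: -sum_i log P(i | x_hat_i), softmax taken over the m original instances.
    logits_aug = z_aug @ z.t() / tau                     # [i, k] = z_hat_i . z_k / tau
    pos = -F.log_softmax(logits_aug, dim=1).diagonal().sum()
    # Second term: -sum_i sum_{j != i} log(1 - P(i | x_j)), with P(i | x_j) from Eq. (2).
    logits = z @ z.t() / tau                             # [i, j] = z_i . z_j / tau
    p = F.softmax(logits, dim=0)                         # column j holds P(. | x_j)
    off_diag = ~torch.eye(m, dtype=torch.bool, device=z.device)
    neg = -torch.log1p(-p[off_diag].clamp(max=1.0 - 1e-6)).sum()
    return pos + neg
```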
While the vanilla VAE structure driven by only KL divergence and reconstruction loss also provides latent variance, it is considered unreliable for uncertainty estimation [31] due to its well-known problem of posterior collapse [32] and vanishing variance [33]. We confirm this experimentally in Sec. V.
Fig. 2: **Overview of Variational Contrastive Learning Framework**, based on [25]. A metric space is learnt through an encoder-decoder structure with two reparameterised parts: a deterministic embedding for recognition and a set of parameters modelling a multivariate Gaussian distribution, with its variance serving as the uncertainty measure. The overall learning is jointly driven by both reconstruction and contrastive losses to ensure an informative and discriminative hidden representation for radar scans.
Such ineffectiveness is mainly due to the imbalance of two losses during the training process: when the KL divergence dominates, the latent space posterior is forced to equal the prior whereas when the reconstruction loss dominates, the latent variance is pushed to zero. In our method, however, we achieve more stable training by introducing the variant contrastive losses as an extra regulariser, where the variance is driven to keep a robust boundary among clustering centres in metric space. Hence, we obtain a more reliable latent space variance reflecting the underlying aleatoric uncertainty of radar perception. We show that this VAE-like uncertainty mechanism _alongside_ a metric learning objective is more well-calibrated to place recognition than STUN [17], vanilla VAEs [20], MC Dropout [16], etc, which we prove experimentally in Sec. V.
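For completeness, a compact sketch of the remaining two terms (Eqs. (3)-(4)) and of the combined objective is given below; the closed-form KL for a diagonal Gaussian is standard, while the relative weights of the four losses are an assumption used only for illustration, not values reported here.

```python
import torch

def kl_to_standard_normal(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ) of Eq. (3); for a zero-mean
    variant part, pass mu = torch.zeros_like(logvar)."""
    return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1).mean()

def reconstruction_loss(m_rec: torch.Tensor, m: torch.Tensor) -> torch.Tensor:
    """L2 reconstruction loss of Eq. (4) on the low-dimensional feature map."""
    return torch.norm((m_rec - m).flatten(1), dim=1).mean()

def total_objective(l_con_inv, l_con_all, d_kl, l_rec, weights=(1.0, 1.0, 1.0, 1.0)):
    """Joint objective: invariant and variant contrastive terms, KL term and reconstruction
    term. Equal weights are an illustrative assumption."""
    w = weights
    return w[0] * l_con_inv + w[1] * l_con_all + w[2] * d_kl + w[3] * l_rec
```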
### _Continual Map Maintenance_
Continual map maintenance is an important function of the online system, since we aim to fully exploit the scanning data obtained during a lifetime of autonomous vehicle operation and improve the map in a recursive manner. The process of merging a new radar scan into the parent map, consisting of scans from previous traversals, is illustrated in Algorithm 1. Each radar scan is represented by both a hidden representation \(\mathbf{\mu}\) and an uncertainty measure \(U\). In the merging process, we search for matched positive scans for each new scan with topological distance under a threshold \(D_{t}\). If the new scan has a lower uncertainty, it is integrated into the parent map and replaces the matched scan; otherwise, it is discarded.
```
1:Parent Map: \(M=\{(\mathbf{\mu}_{i},\mathbf{U}_{i})\,|i\in[0,n]\}\), Map GPS \(\{D_{i}\}|i\in[0,N]\), New Scan: \((\mathbf{\mu}_{j},\mathbf{U}_{j})\), Scan GPS \(D_{j}\).
2:Updated Parent Map: \(\underline{M}=\{(\mathbf{\mu}_{i},\mathbf{U}_{i})\,|i\in[0,n]\}\)
3:Find Matched Scan Index:
4:\(\{\Delta D_{i}|i\in[0,n]\}\leftarrow\text{Sort}(\|D_{i}-D_{j}\|)\)
5:if\(\Delta D_{i}<D_{t}\)then
6:\(\{U_{1},U_{2},...U_{m}\}\leftarrow\text{GetUncertainty}(M,\Delta D_{i})\)
7:endif
8:Uncertainty Comparison and Replacement:
9:for\(i=0\) to \(m\)do
10:if (\(U_{j}<U_{i}\)) then
11:\((\mathbf{\mu}_{i},\mathbf{U}_{i})\leftarrow(\mathbf{\mu}_{j},\mathbf{U}_{j})\)
12:endif
13:endfor
14:return\(\underline{M}\)
```
**Algorithm 1** Map Maintenance
As shown in Fig. 3, the replacement of higher-uncertainty scans removes noisy scans - for example those affected by saturation or range uncertainty - which could otherwise lead to incorrect predictions.
By iteratively performing the maintenance process for all obtained scans, we can gradually enhance the quality of the integrated parent map. Thus, the maintenance algorithm can serve as an effective online deployment strategy, as it continually exploits multiple experiences of the same route traversal to boost recognition performance while preserving a _constant parent map size_, resulting in budgeted computation and storage costs.
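A minimal Python sketch of this maintenance step is given below; the variable names, the use of NumPy arrays and the distance threshold value are our own illustrative choices, and the replacement condition follows the textual description (a new scan replaces its match only when it is *less* uncertain).

```python
import numpy as np

def maintain_map(map_embeddings, map_uncertainty, map_gps,
                 new_embedding, new_uncertainty, new_gps, dist_threshold=25.0):
    """Sketch of Algorithm 1. map_embeddings: (n, d); map_uncertainty: (n,);
    map_gps: (n, 2 or 3). The parent map size stays constant."""
    dists = np.linalg.norm(map_gps - new_gps, axis=1)
    for i in np.where(dists < dist_threshold)[0]:
        if new_uncertainty < map_uncertainty[i]:       # keep the less uncertain scan
            map_embeddings[i] = new_embedding
            map_uncertainty[i] = new_uncertainty
    return map_embeddings, map_uncertainty
```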
In order to facilitate this map maintenance, we currently use GPS, but it is important to note that this step is purely offline (place recognition is performed entirely online, with radar alone) and that this offline step may readily be replaced by whatever underlying map representation the place recognition queries are issued against, i.e. one built by radar-only mapping and motion estimation [3, 12, 28, 34]. It is also important to note that our results in Sec. V show that even without this map maintenance, our uncertainty-aware representation of Sec. III-A outperforms previous methods, but that this method is capable of significantly boosting localisation performance by being careful about the learned uncertainty of map contents over many repeat traversals.
### _Introspective Query Rejection_
During inference, we use the Euclidean distance between the query and the mapped scans in the metric space as a measure of similarity. The top \(k\) pairs with the highest similarity serve as the prediction result. To further enhance recognition performance and achieve introspective prediction, we also develop an uncertainty rejection mechanism.
**Static Thresholding** Since the aleatoric uncertainty is measured against a standard Gaussian distribution \(\mathcal{N}\left(\mathbf{0},\mathbf{I}\right)\), the estimated variances in all dimensions are close to 1. Thus, we can use two hyperparameters \(\Delta\) and \(N\) to fully define the scale and resolution of uncertainty rejection respectively. The resultant thresholds \(T\) are defined as follows:
\[T=\{(1-\Delta)+n\times\frac{2\Delta}{N}|n\in[0,N]\} \tag{5}\]
Given a scan with an \(m\)-dimensional latent variance \(\Sigma\), we average across all the dimensions and obtain a scalar uncertainty measure \(U=\frac{1}{m}\sum_{i=1}^{m}\Sigma_{i}\).
Fig. 3: **Illustration of Map Maintenance.** The red and green nodes each represent radar scans with higher and lower uncertainty respectively. We always maintain a parent map as the localisation reference consisting of only scans with the lowest uncertainty at each location.
**Prediction Rejection** At inference time, we perform introspective query rejection, where the query scans with a higher variance than a defined threshold will be rejected from recognition. Existing methods, such as STUN and MC Dropout, dynamically divide the uncertainty range of batch samples into threshold levels. However, this imposes the requirement of multiple samples during inference and can result in unstable rejection performance, particularly when a small number of samples are available. In contrast, our static thresholding strategy offers sample-independent threshold levels and provides consistent single-scan uncertainty estimation and rejection. This feature is crucial for the real-time deployment of the place recognition system, as the radar scan is acquired on a frame-by-frame basis during the driving process.
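The following sketch shows how the static thresholds of Eq. (5) and the single-scan rejection decision might be computed; \(\Delta\) and \(N\) are the hyperparameters defined above, while the helper names are illustrative.

```python
import numpy as np

def rejection_thresholds(delta: float, n_levels: int) -> np.ndarray:
    """Threshold set T of Eq. (5): n_levels + 1 values spanning [1 - delta, 1 + delta]."""
    return np.array([(1.0 - delta) + n * 2.0 * delta / n_levels for n in range(n_levels + 1)])

def accept_query(latent_variance: np.ndarray, threshold: float) -> bool:
    """Introspective decision for a single scan: average the per-dimension latent variance
    into the scalar uncertainty U and accept the query only if U is below the threshold."""
    u = float(np.mean(latent_variance))
    return u <= threshold
```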
## IV Experimental Setup
### _Datasets_
We evaluate the performance using two standard benchmarks. (1) _Oxford Radar RobotCar_[35]: It contains radar scans from 30 traversals of the same route around Oxford city centre, equivalent to approximately \(300\,\mathrm{km}\) of driving. We follow the train-test split of Saftescu _et al_[7]. An unseen route representing forward and backward traversal of both urban and vegetation environments is used for testing, and all the rest of the routes are used for training. We use five test sequences1 where two arbitrary sequences can form a mapping and localisation pair. We report evaluation metric averages over all possible combinations2. (2) _MulRan_[6]: It contains radar scans from traversals of different routes at four locations in Daejeon, equivalent to around \(120\,\mathrm{km}\) of driving. We train on all sequences from three locations, DCC, KAIST, and Sejong, then test on two sequences of the same route in Riverside, which alternate to serve for mapping and localisation in two testing trials.
Footnote 1: There are in total 20 ordered pairs (10 unique pairs of sequences), with the assignment of reference/query being important for introspective query rejection
Both datasets encompass data collected from a CTS350-X Navtech FMCW scanning radar. This particular radar system is devoid of Doppler information and is mounted on a platform in such a manner that its axis of rotation is perpendicular to the driving surface. The operating frequency of this radar system is situated within the range of \(76\,\mathrm{GHz}\) to \(77\,\mathrm{GHz}\), which enables it to generate up to \(3768\) range readings with a resolution of \(4.38\,\mathrm{cm}\). The total range of the radar system is \(165\,\mathrm{m}\), and each range reading corresponds to one of the \(400\) azimuth readings, which have a resolution of \(0.9^{\circ}\) degrees. The radar system is characterized by a scan rotation rate of \(4\,\mathrm{Hz}\).
### _Baselines_
**Benchmarking** the recognition performance is done by comparison to several existing methods, including the vanilla VAE [20], the state-of-the-art radar place recognition method by Gadd _et al_[8] (referred to as BCRadar), and the non-learning-based method RingKey [5] (one part of ScanContext, without rotation refinement). Additionally, the performance is compared against MC Dropout [16] and STUN [17], which serve as uncertainty-aware place recognition baselines. We note that our testing of BCRadar uses a different list of dataset/sequence combinations than in [8].
**Ablation study** In order to evaluate the effectiveness of our proposed Introspective Query (Q) and Map Maintenance (M) modules, we conduct an ablation study by comparing different variants of our method, denoted as OURS(O/M/Q/QM), which are as follows
1. O: No map maintenance, no introspective querying
2. M: Map maintenance only
3. Q: Introspective querying only
4. QM: Both map maintenance and introspective querying
Specifically, we compare the performance of O against M for recognition performance and Q against QM for uncertainty estimation performance. In each group of the variant comparison, we perform independent mapping inference over multiple localisation sequences and report the average metric, or merge multiple localisation sequences and report a single aggregate metric.
**Common settings** To ensure a fair comparison, we adopt a common batch contrastive loss [36] for all metric learning-based methods, thus enabling a training regime with a consistent loss function across the benchmarking.
### _Implementation details_
**Scan settings** For all methods, we transform polar radar scans with \(A=400\) azimuths and \(B=3768\) bins of size \(4.38\,\mathrm{cm}\) into Cartesian scans with side-length \(W=256\) and bin size \(0.5\,\mathrm{m}\).
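As a reference for this preprocessing step, a nearest-neighbour polar-to-Cartesian conversion might look as follows; the axis conventions (azimuth zero direction and sense of rotation) are assumptions and would need to match the sensor driver.

```python
import numpy as np

def polar_to_cartesian(polar, range_res=0.0438, cart_width=256, cart_res=0.5):
    """Nearest-neighbour resampling of a polar radar scan (A azimuths x B range bins)
    onto a Cartesian grid centred on the sensor."""
    A, B = polar.shape
    half = cart_width * cart_res / 2.0
    coords = (np.arange(cart_width) + 0.5) * cart_res - half   # pixel-centre coordinates
    x, y = np.meshgrid(coords, -coords)                        # y decreases with row index
    rng = np.sqrt(x ** 2 + y ** 2)
    az = np.mod(np.arctan2(y, x), 2 * np.pi)                   # azimuth in [0, 2*pi)
    az_idx = np.round(az / (2 * np.pi) * A).astype(int) % A
    rng_idx = np.round(rng / range_res).astype(int)
    valid = rng_idx < B                                        # discard pixels beyond max range
    cart = np.zeros((cart_width, cart_width), dtype=polar.dtype)
    cart[valid] = polar[az_idx[valid], rng_idx[valid]]
    return cart
```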
**Training hyperparameters** We use a VGG-16 [37] as the backbone feature extractor with a linear layer to project the extracted feature to a lower embedding dimension of \(d=128\). We train all baselines for \(10\) epochs in _Oxford Radar RobotCar_ and \(15\) epochs in _MulRan_ respectively3, with a learning rate of \(1\mathrm{e}{-5}\), batch size of \(8\). In adapting the batch criterion of [36], we use a negative margin in the embedding space of \(0.1\) and temperature \(\tau\) of \(1\).
Footnote 3: STUN is finetuned for another \(5\) epochs for knowledge distillation
### _Evaluation Metric_
To assess the place recognition performance, we use the Recall@N (R@N) metric, which measures localisation accuracy by determining whether at least one candidate amongst \(N\) candidates is close to the ground truth, as indicated by GPS/INS. This is particularly important for safety assurance in autonomous driving applications because it reflects the system's calibration with respect to the false negative rate. We also use Average Precision (AP) to measure the mean precision across all recall levels. Finally, we employ F-scores with \(\beta=2/1/0.5\) to assign an increasing level of importance to recall over precision as an aggregate metric assessing overall recognition performance. For all metrics,
we impose a boundary of \(25\,\mathrm{m}\) such that all radar scans within it are considered positive pairs to a mapping scan. Similarly, a \(50\,\mathrm{m}\) lower boundary defines the corresponding true negative pairs.
Moreover, to assess the uncertainty estimation performance, we use Recall@RR, where we perform introspective query rejection and evaluate Recall@N=1 over different uncertainty threshold levels - rejecting all queries of scans with uncertainty greater than a threshold. We thus reject between 0% and 100% of queries. A reliable uncertainty measure should reflect the confidence level of the model's predictions, and the rejection of low-confidence scans should result in improved recognition performance (i.e. we should not reject certain, good results). Our goal is to achieve higher improvement in recognition performance with a lower rate of rejections.
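To make the metric explicit, the Recall@RR curve could be computed as in the sketch below, given per-query top-1 correctness and uncertainty values; the function and variable names are our own.

```python
import numpy as np

def recall_at_rejection(correct_at_1, uncertainties, thresholds):
    """Sketch of the Recall@RR evaluation: for each uncertainty threshold, reject the
    queries above it and report (rejection rate, Recall@1 on the remaining queries)."""
    correct_at_1 = np.asarray(correct_at_1, dtype=bool)
    uncertainties = np.asarray(uncertainties, dtype=float)
    curve = []
    for t in thresholds:
        keep = uncertainties <= t
        rate = 1.0 - keep.mean()
        recall = correct_at_1[keep].mean() if keep.any() else float("nan")
        curve.append((float(rate), float(recall)))
    return curve
```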
## V Results
### _Place Recognition Performance_
As demonstrated in Tab. I of the _Oxford Radar RobotCar_ experiments, our methods, utilizing only the metric-learning module, achieve the highest performance across all the metrics. Specifically, in terms of Recall@1, our approach OURS(O) showcases the efficacy of the variance-disentangled representation learned through the variational contrastive learning framework, resulting in superior 90.46% recognition performance. This is further supported with the _MulRan_ experiment results demonstrated in Tab. II, where our method outperforms all the other methods on Recall@1 and aggregate F-scores and AP. Although VAE outperforms our method in Recall@5/10 in the _Mulran_ experiment, the best F-1/0.5/2 and AP of our method in both settings indicate an overall more accurate and robust recognition performance with high precision and recall. Furthermore, VAE uncertainty is shown below (Sec. V-B) to be poor for introspective query rejection, while our learned uncertainty is useful in both settings.
Moreover, by further utilizing the Continual Map Maintenance in _Oxford Radar RobotCar_, we are able to further improve the Recall@1 to 93.67%, outperforming the current state-of-the-art method, STUN, by a margin of 4.18%. This further proves the effectiveness of learnt variance as a valid uncertainty measure and the strategy of uncertainty-based map integration in improving place recognition performance.
### _Uncertainty Estimation Performance_
The change in recognition performance, specifically the Recall@1, with an increasing percentage of rejected uncertain queries, is illustrated in Fig. 4 for the _Oxford Radar RobotCar_ experiment and in Fig. 5 for the _MulRan_ experiment. Notably, our method is the only one to exhibit a consistent improvement in recognition performance as the rejection rate of uncertain queries increases in both experimental settings - _Oxford Radar RobotCar_ and _MulRan_ - whereas STUN increases for _Oxford Radar RobotCar_ but not _MulRan_ and MC Dropout increases for _MulRan_ but not _Oxford Radar RobotCar_. For _Oxford Radar RobotCar_ in Fig. 4, it is interesting to note that OURS(QM) and OURS(Q) surpass the initial Recall@1 at a much lower rejection rate than STUN does - at about 40% for OURS(QM) and OURS(Q), compared with a far more comprehensive rejection of 70% for STUN. This result serves as evidence of the robustness and effectiveness of our uncertainty measure.
In the _Oxford Radar RobotCar_ experiments, all methods initially experience fluctuations in performance as the rejection rate increases; ultimately, OURS(Q/QM) outperforms both VAE and STUN with a higher Recall@RR metric at all rejection rates. The superior performance of OURS over VAE suggests the effectiveness of using a regularizing invariant discriminative loss to address the issues of posterior collapse and vanishing variance in the vanilla VAE structure. Furthermore, the better performance of OURS compared to STUN indicates that our variational contrastive learning framework can be more effective than knowledge distillation in learning a reliable variance for uncertainty estimation.
On the other hand, compared with MC Dropout, which estimates the epistemic uncertainty resultant from biased data and model misspecification [30], although it has a higher increase in Recall@1 at the early stage of rejection in _Oxford Radar RobotCar_ experiment, its performance is generally lower than ours and fails to achieve greater improvement as the rejection rate increases further. Also, in _MulRan_ experiments, it fails to produce reliable uncertainty estimates, as indicated by the general decreasing trend in Recall@RR. These results suggest that aleatoric uncertainty plays a more significant role in causing mispredictions in place recognition than epistemic uncertainty.
Finally, comparing OURS(Q) and OURS(QM) in _Oxford Radar RobotCar_ experiment, we observe a similar change in Recall@RR pattern while a considerable gap exists between them. This suggests that the Introspective Query and Map Maintenance mechanisms independently contribute to the place recognition system and that each mechanism exploits the uncertainty measure in an indispensable way.
## VI Discussion
### _Qualitative Analysis and Visualisation_
To qualitatively assess the source of uncertainty in radar perception, we provide a visual comparison of the high/low uncertainty samples from both datasets estimated with our method. As shown in Fig. 6, the high-uncertainty radar scans usually demonstrate heavy motion blur and sparse undetected regions, while the low-uncertainty scans usually contain distinct features with a stronger intensity across the histogram.
This further supports our hypothesis on the source of uncertainty in radar perception and serves as qualitative evidence of the effectiveness of our uncertainty measure that captures this data noise.
### _Dataset Difficulty: Uncertain Environments_
In our benchmark experiments, we observed a considerable discrepancy in the recognition performance between the two datasets. The difficulty of _MulRan_ as compared to _Oxford Radar RobotCar_ for both place recognition and metric localisation is already well-documented in [38, 8]. We posit that the scale of available training data is a plausible cause. The training set in _Oxford Radar RobotCar_ comprises over \(300\,\mathrm{km}\) of driving experience, while the _MulRan_ dataset includes only around \(120\,\mathrm{km}\). However, also consider the drop in performance for the _non-learned_ Ring Key descriptor method. This suggests potentially inherent indistinguishable features in radar scene perception. For instance, we found that environments with sparse open areas usually lead to near-identical scans and suboptimal recognition performance. We include results on this dataset to exhibit what happens to our system and the various baselines in these cases of high uncertainty. As shown in Fig. 5, our learned uncertainty is the most useful in these difficult scenarios.
## VII Conclusion
We have presented a novel application of uncertainty estimation to learned radar place recognition. We bolster the performance of an invariant instance feature learning
Fig. 5: Mulran introspective query rejection performance reported the same format as in Fig. 4.
Fig. 6: **Visualisation of Radar Scans with Different Levels of Uncertainty.** The four examples on the left are from the _Oxford Radar RobotCar Dataset_ while the four examples on the right are from _MulRan_. We show samples with the Top-10 highest (top) / lowest (bottom) uncertainty. The radar scan is displayed in a Cartesian coordinate with enhanced contrast. The histogram below each image shows the Ring Key descriptor [5] extracted feature of the intensity across all the azimuths. |
2304.13031 | DQS3D: Densely-matched Quantization-aware Semi-supervised 3D Detection | In this paper, we study the problem of semi-supervised 3D object detection,
which is of great importance considering the high annotation cost for cluttered
3D indoor scenes. We resort to the robust and principled framework of
self-teaching, which has triggered notable progress for semi-supervised learning
recently. While this paradigm is natural for image-level or pixel-level
prediction, adapting it to the detection problem is challenged by the issue of
proposal matching. Prior methods are based upon two-stage pipelines, matching
heuristically selected proposals generated in the first stage and resulting in
spatially sparse training signals. In contrast, we propose the first
semi-supervised 3D detection algorithm that works in the single-stage manner and
allows spatially dense training signals. A fundamental issue of this new design
is the quantization error caused by point-to-voxel discretization, which
inevitably leads to misalignment between two transformed views in the voxel
domain. To this end, we derive and implement closed-form rules that compensate
this misalignment on-the-fly. Our results are significant, e.g., promoting
ScanNet [email protected] from 35.2% to 48.5% using 20% annotation. Codes and data will
be publicly available. | Huan-ang Gao, Beiwen Tian, Pengfei Li, Hao Zhao, Guyue Zhou | 2023-04-25T17:59:54Z | http://arxiv.org/abs/2304.13031v2 | # DQS3D: Densely-matched Quantization-aware Semi-supervised 3D Detection
###### Abstract
In this paper, we study the problem of semi-supervised 3D object detection, which is of great importance considering the high annotation cost for cluttered 3D indoor scenes. We resort to the robust and principled framework of self-teaching, which has triggered notable progress for semi-supervised learning recently. While this paradigm is natural for image-level or pixel-level prediction, adapting it to the detection problem is challenged by the issue of proposal matching. Prior methods are based upon two-stage pipelines, matching heuristically selected proposals generated in the first stage and resulting in spatially sparse training signals. In contrast, we propose the first semi-supervised 3D detection algorithm that works in the single-stage manner and allows spatially dense training signals. A fundamental issue of this new design is the quantization error caused by point-to-voxel discretization, which inevitably leads to misalignment between two transformed views in the voxel domain. To this end, we derive and implement closed-form rules that compensate this misalignment on-the-fly. Our results are significant, e.g., promoting ScanNet \(\mathrm{mAP}@0.5\) from \(35.2\%\) to \(48.5\%\) using \(20\%\) annotation. Codes and data are publicly available1.
Footnote 1: Code: [https://github.com/AIR-DISCOVER/DOS3D](https://github.com/AIR-DISCOVER/DOS3D)
## 1 Introduction
3D object detection (and reconstruction/tracking) [25, 37, 42, 62, 3, 28] is a fundamental problem in 3D scene understanding [57, 59, 63, 17, 26], but its progress still lags behind 2D detection due to a high annotation cost. As such, semi-supervised 3D object detection [48, 56, 60] has recently attracted much attention as it holds the promise to improve accuracy using enormous unlabeled data. These semi-supervised 3D detectors are trained with a widely recognized framework called mean teachers (MT) [44]. While semi-supervised image classification [52] and semantic segmentation [2] using MT boil down to pairing predictions at the image or pixel level, how to pair predictions between two sets of 3D boxes remains an open question.
This open question is not yet well answered by prior methods [48, 56, 60], as demonstrated by the analysis in Fig. 1. As shown by the upper three bars, they exploit only a very limited number of box pairs for MT training, and we attribute this limitation to the two-stage architecture (i.e., VoteNet [37]) they are built upon. VoteNet makes final box predictions using seed proposals extracted by the first stage, and only a limited number of proposal pairs are aligned.
**Being densely-matched.** The emergence of fully convolutional 3D detection [39] inspires us to address the aforementioned issue using densely matched boxes, and it turns out to be fruitful. As shown by the lower two bars in Fig. 1, our method allows many more box pairs for MT training even after label filtering. This change leads to spatially dense training signals that translate to notable performance improvement (Tab. 1).
Figure 1: This figure demonstrates the average count of box pairs in representative proposal matching methods SESS [60], 3DIoUMatch [48] and Proficient Teachers [56]. Our dense matching formulation (DQS3D) allows significantly more box pairs and spatially dense training signals. The x-axis is distorted according to a squared root mapping.
In short, our method predicts one 3D box for each voxel, getting rid of the intermediate proposal generation stage. Thus, pairing teacher and student predictions in a voxel-wise manner becomes a natural choice, and this directly leads to dense training signals.
**Being quantization-aware.** During the development of our densely matched paradigm, we identify a fundamental issue specific to 3D detection: point-to-voxel quantization. It is widely known that the power of MT is unleashed only with diverse data augmentation [2, 14, 52] and random transformation is a typical augmentation strategy [10, 18, 45] for 3D point cloud. Unfortunately, applying random transformation inevitably leads to a different point-to-voxel mapping due to the existence of quantization error and a mismatch between teacher and student predictions on each voxel. To this end, we derive a closed-form compensation rule and implement it on-the-fly, which leads to consistent performance gains in various settings.
Highlighting our two technical contributions mentioned above, we name our method **DQS3D**, which is short for densely-matched quantization-aware semi-supervised 3D detection. Our contributions are summarized as follows:
* We shed light on the superiority of dense matching over proposal matching in semi-supervised 3D object detection, which could not only harvest more pseudo labels but also improve the pseudo-label quality.
* We propose the first framework for densely-matched quantization-aware semi-supervised 3D object detection, where we point out the problem of quantization error and come up with an on-the-fly fix to it.
* We conduct extensive experiments on public datasets and achieve significant results. For example, DQS3D scores \(48.5\%\)\(\mathrm{mAP}@0.5\) on ScanNet using \(20\%\) data while the best published result is \(35.2\%\).
## 2 Related Works
### Self-Training for Semi-supervised Learning
Semi-supervised learning (SSL) is a powerful learning paradigm that improves performance by leveraging both labeled and unlabeled data, making it especially useful in situations where obtaining manually annotated labels is costly or difficult. Recent works strive to apply this paradigm to tasks including semantic segmentation [12, 20], object detection [1, 23, 34], text recognition [36], action recognition [35], facial expression recognition [24], video paragraph grounding [19], _etc_.
In particular, self-training using pseudo-labeling [58, 22] is a principled method that has been widely adopted for SSL [11, 50, 53, 55, 54, 15, 13, 64]. A typical architecture for online self-training is mean teachers (MT) [44], which successfully integrates the self-training method into end-to-end frameworks. MT involves two identical but independent networks during training, with one (referred to as the student network) updated by gradient descent and the other one (referred to as the teacher network) updated by exponential moving average (EMA) of the student model's parameters. Predictions of the teacher network on unlabeled data are regarded as online pseudo-labels for the student network, and self-teaching is implemented by enforcing predictions of the two networks to be consistent. The architecture of MT has been proven highly effective on various tasks [1, 23, 40, 43, 47, 54, 55, 29, 30].
### Semi-supervised 3D Object Detection
**Proposal Matching for Voting-based Detector.** Specifically on the task of semi-supervised 3D object detection, numerous prior arts are also based on the MT architectures and take the voting-based VoteNet [37] as base detectors. SESS [60] introduced the nearest-center matching scheme (which we refer to as _proposal matching_) to generate pseudo-labels from all teacher proposals. 3DIoUMatch [48] proposed a filtering mechanism to impose multiple thresholds on teacher predictions for improving quality of pseudo labels. It further performs non-maximum suppression (NMS) on pseudo-labels to reduce redundancy. Proficient Teachers [56] implemented a spatial-ensembling module that generates detections from multiple augmented views of input point clouds, which are then combined to produce more pseudo-labels. Although these methods have shown promise on the task of semi-supervised 3D object detection, they rely heavily on proposal matching, which we argue to be ineffective as the harvested pseudo training signals are sparse in space.
**Dense Prediction Detector.** The dense prediction scheme for 2D object detection task has garnered a lot of interest in the research community [21, 27, 38, 46]. However, directly applying the backbones for 2D detection to 3D tasks [32] is not cost-efficient due to the sparse nature of point clouds in space, requiring non-trivially larger amount of computational resources than 2D counterparts.
Nevertheless, the advent of high-dimensional convolutional neural networks [61, 6, 4, 5, 16] has reduced both time and space complexity, making it possible to efficiently extract hierarchical features from 3D point clouds. Leveraging sparse 3D convolution, 3D object detection can scale to much larger scenes while remaining memory-efficient [39, 31, 49]. Motivated by this design, FCAF3D [39] uses a voxelized modification of ResNet as the backbone, which enables feature extraction and object prediction on a voxel basis. The voxelization, however, inevitably brings about the issue of quantization error in the point-to-voxel discretization when the input point cloud is randomly augmented. In this paper, we propose _dense matching_ **for dense prediction detectors**, identify the problem of quantization error and propose a solution to it on-the-fly.
## 3 Methodology
This section presents a detailed exposition of DQS3D. In Sec. 3.1, we formally define the task of semi-supervised 3D object detection. In Sec. 3.2, we introduce dense matching scheme and compare it with prior arts of proposal matching. In Sec. 3.3, we introduce the densely matched self-training framework and the loss design, combined to address the task of semi-supervised 3D object detection. In Sec. 3.4, we point out the problem of quantization error and derive a closed-form solution.
### Preliminary
We formally define the task of 3D object detection as predicting all objects \(\mathbf{Y}=\{\mathbf{y}_{i}\}_{i=1}^{K}\) given an input point cloud \(\mathbf{X}\in\mathbb{R}^{N\times 3}\), where \(K\) denotes the number of objects in the scene and each target object \(\mathbf{y}_{i}\) is represented by its bounding box parameters \(\boldsymbol{\delta}_{i}\) and corresponding semantic label \(q_{i}\). Specifically, in terms of 3D object detection in the semi-supervised setting, only a small proportion of the training dataset (denoted by \(\{\mathbf{X}^{L}\}\)) is equipped with ground-truth object bounding-box labels (denoted by \(\{\mathbf{Y}^{L}\}\)), whereas the remainder (denoted by \(\{\mathbf{X}^{U}\}\)) has no labels.
### Dense Matching
To address the task of semi-supervised 3D object detection, self-training methods (e.g., mean teachers [44, 48, 56, 60]) enforce consistency between the predictions of the student and teacher networks. Thus, it is crucial to establish a mapping between student and teacher predictions in aligned views, which we refer to as _matching_.
Prior arts with self-training adopt _proposal matching_[48, 60] to align the predicted objects (referred to as _proposals_) of the student and teacher networks, which is typically done through a nearest-center strategy. More specifically, each teacher proposal is aligned with the student proposal whose center is the nearest to that of the teacher proposal, as illustrated in Fig. 2(a). Note that, the dashed boxes are only demonstration for spatial locations of teacher outputs.
Despite the fact that teacher proposals are generally more accurate than student proposals, we argue that _proposal matching_ is ineffective and may hinder knowledge propagation from the teacher to the student. The ineffectiveness is mainly attributed to the two adverse situations illustrated in Fig. 2(a), inevitably caused by the sparseness of the proposals in space: (1) Adjacent teacher proposals are aligned to the same student proposal and cause confused supervision for the student. (2) Student proposals that are distant from any teacher proposal are aligned to none and receive no supervision from teacher proposals.
To address the aforementioned issues, a sufficient condition is a bijection between the student and teacher predictions. Inspired by the facts that the objects are predicted on a voxel basis with the dense prediction base detector, and that the voxel anchors of the student and teacher views inherently correspond in space, we propose _dense matching_ (illustrated in Fig. 2(b)) to establish the bijection, simply by pairing the predictions at corresponding voxel anchors.
We believe that the dense matching scheme has the following advantages: **(1)** Each predicted object is represented by multiple bounding box predictions whose regression scores vary at different voxel anchors (e.g., points \(\mathbf{A}\) and \(\mathbf{B}\) in Fig. 2(b)). This phenomenon imposes spatial regularization on the dense prediction model and improves the model's awareness of local geometry, as the optimization process forces the predicted bounding box parameters of the same object but at different voxel anchors to be sampled from a smooth function in space. **(2)** The required bijection between the student and teacher predictions is naturally established with the correspondence of the voxel anchors, with which each student prediction receives supervision from only one teacher prediction. This eliminates the two aforementioned adverse situations, namely the "multiple supervision" and the "no supervision". In light of the benefits of dense matching over proposal matching, we propose a self-training framework specifically tailored for the dense matching scheme in the upcoming section.
Figure 2: Illustration of two matching schemes. (a) Proposal matching: each teacher prediction is matched with the student prediction whose center is closest to that of the teacher prediction. (b) Dense matching: matching is established through spatially-aligned voxel anchors. Dashed boxes are only demonstrations for spatial locations of the teacher predictions.
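Conceptually, dense matching amounts to an index alignment over shared voxel coordinates; a simplified sketch is shown below. In practice, with the quantization error correction of Sec. 3.4 the student and teacher sparse tensors share the same coordinate set, so the lookup reduces to a permutation; the helper names here are illustrative.

```python
import torch

def dense_match(student_coords, student_preds, teacher_coords, teacher_preds):
    """Sketch: pair per-voxel predictions that live at the same voxel anchor.
    coords: (N, 3) integer voxel coordinates; preds: (N, C) per-voxel outputs."""
    teacher_index = {tuple(c.tolist()): i for i, c in enumerate(teacher_coords)}
    s_sel, t_sel = [], []
    for i, c in enumerate(student_coords):
        j = teacher_index.get(tuple(c.tolist()))
        if j is not None:
            s_sel.append(i)
            t_sel.append(j)
    s_idx = torch.tensor(s_sel, dtype=torch.long)
    t_idx = torch.tensor(t_sel, dtype=torch.long)
    return student_preds[s_idx], teacher_preds[t_idx]
```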
### Densely Matched Self-Training Framework
**Overall Architecture.** Following prior arts [48, 56, 60], we adapt the robust self-training approach for our densely-matched semi-supervised learning framework, as depicted in Fig. 3. The framework includes two identical but independent networks (teacher and student models) as the base detectors, which are implemented by FCAF3D [39]. During training, an input batch is composed of both labeled data with ground-truth object annotations and unlabeled data. The input batch is then augmented by asymmetric quantization-aware transformation \(\mathbf{R}_{\theta,\Delta\mathbf{r}}\), which consists of a random rotation \(\theta\) around the upright axis, a random translation \(\Delta\mathbf{r}\) and the quantization error correction (detailed later in Sec. 3.4).
The augmented and unaugmented batches are then fed into the student and teacher models, producing voxel-level predictions \(\tilde{\mathbf{Y}}_{S}=\{\tilde{\mathbf{Y}}_{S}^{L},\tilde{\mathbf{Y}}_{S}^{U}\}\) and \(\mathbf{Y}_{T}=\{\mathbf{Y}_{T}^{L},\mathbf{Y}_{T}^{U}\}\), respectively. The teacher predictions are then aligned to the student predictions with the same transformation \(\mathbf{R}_{\theta,\Delta\mathbf{r}}\), resulting in the transformed teacher predictions \(\tilde{\mathbf{Y}}_{T}=\{\tilde{\mathbf{Y}}_{T}^{L},\tilde{\mathbf{Y}}_{T}^{U}\}\). With the same transformation, the dense matching between the two sets of predictions is naturally established, which is further filtered to increase the quality of pseudo-labels. The student network is optimized by enforcing a consistency loss on the remaining match set and a supervised loss between the student predictions and the ground-truth labels. In the following sections, we describe three core components of the framework. The colors of the titles are the same as the corresponding regions in Fig. 3.
Aligning Teacher-Student Predictions. Suppose \(\mathbf{A}=(\hat{x},\hat{y},\hat{z})\) represents the coordinate of the voxel. Following [39], the teacher prediction \(\mathbf{y^{A}}\in\mathbf{Y}_{T}\) is formulated by the bounding box parameters \(\boldsymbol{\delta^{\mathbf{A}}}=\{\delta_{i}^{\mathbf{A}}\}_{i=1}^{8}\), the centerness \(c^{\mathbf{A}}\in[0,1]\) and the semantic regression scores \(\{p_{i}^{\mathbf{A}}\}_{i=1}^{N_{\text{cls}}}\), where \(N_{\text{cls}}\) denotes the number of semantic categories. The first six bounding box parameters \(\delta_{1},\delta_{2},...,\delta_{6}\) represent the distance to the opposite surfaces of the bounding box in the width, length and height dimension, and \(\delta_{7},\delta_{8}\) utilize the topological equivalency of pair \((\frac{w}{l},\theta)\) to a Mobius strip for the disambiguation of heading angles of symmetric objects, namely:
\[\delta_{1}^{\mathbf{A}}=(x+\frac{w}{2})-\hat{x},\;\delta_{2}^{ \mathbf{A}}=\hat{x}-(x-\frac{w}{2}),\;\delta_{3}^{\mathbf{A}}=(y+\frac{l}{2} )-\hat{y},\] \[\delta_{4}^{\mathbf{A}}=\hat{y}-(y-\frac{l}{2}),\;\delta_{5}^{ \mathbf{A}}=(z+\frac{h}{2})-\hat{z},\delta_{6}^{\mathbf{A}}=\hat{z}-(z-\frac{ h}{2}), \tag{1}\] \[\delta_{7}^{\mathbf{A}}=\log\frac{w}{l}\sin(2\theta),\;\delta_{8} ^{\mathbf{A}}=\log\frac{w}{l}\cos(2\theta)\]
Assume that the transformation \(\mathbf{R}_{\theta,\Delta\mathbf{r}}\) maps \(\mathbf{y}^{\mathbf{A}}\) to \(\tilde{\mathbf{y}}^{\mathbf{A}^{\prime}}:=\tilde{\mathbf{y}}^{\mathbf{A}\mathbf{R}_{\theta,\Delta\mathbf{r}}}\). Since the rotation around the upright axis and the spatial translation have no effect on the semantics or on the centerness of the predicted bounding box relative to its anchor voxel, we have \(\tilde{c}^{\mathbf{A}^{\prime}}=c^{\mathbf{A}}\) and \(\tilde{\mathbf{s}}^{\mathbf{A}^{\prime}}=\mathbf{s}^{\mathbf{A}}\). The relationship between \(\tilde{\boldsymbol{\delta}}^{\mathbf{A}^{\prime}}\) and \(\boldsymbol{\delta}^{\mathbf{A}}\) can be derived from Eq. 1:
\[\tilde{\delta}_{1}^{\mathbf{A}^{\prime}}=\frac{\cos\theta+1}{2}\delta_{1}^{\mathbf{A}}+\frac{-\cos\theta+1}{2}\delta_{2}^{\mathbf{A}}+\frac{-\sin\theta}{2}\delta_{3}^{\mathbf{A}}+\frac{\sin\theta}{2}\delta_{4}^{\mathbf{A}}, \tag{2}\] \[\tilde{\delta}_{2}^{\mathbf{A}^{\prime}}=\frac{-\cos\theta+1}{2}\delta_{1}^{\mathbf{A}}+\frac{\cos\theta+1}{2}\delta_{2}^{\mathbf{A}}+\frac{\sin\theta}{2}\delta_{3}^{\mathbf{A}}+\frac{-\sin\theta}{2}\delta_{4}^{\mathbf{A}},\] \[\tilde{\delta}_{3}^{\mathbf{A}^{\prime}}=\frac{\sin\theta}{2}\delta_{1}^{\mathbf{A}}+\frac{-\sin\theta}{2}\delta_{2}^{\mathbf{A}}+\frac{\cos\theta+1}{2}\delta_{3}^{\mathbf{A}}+\frac{-\cos\theta+1}{2}\delta_{4}^{\mathbf{A}},\] \[\tilde{\delta}_{4}^{\mathbf{A}^{\prime}}=\frac{-\sin\theta}{2}\delta_{1}^{\mathbf{A}}+\frac{\sin\theta}{2}\delta_{2}^{\mathbf{A}}+\frac{-\cos\theta+1}{2}\delta_{3}^{\mathbf{A}}+\frac{\cos\theta+1}{2}\delta_{4}^{\mathbf{A}},\] \[\tilde{\delta}_{5}^{\mathbf{A}^{\prime}}=\delta_{5}^{\mathbf{A}},\;\tilde{\delta}_{6}^{\mathbf{A}^{\prime}}=\delta_{6}^{\mathbf{A}},\;\tilde{\delta}_{7}^{\mathbf{A}^{\prime}}=\delta_{7}^{\mathbf{A}}\cos(2\theta),\;\tilde{\delta}_{8}^{\mathbf{A}^{\prime}}=\delta_{8}^{\mathbf{A}}\cos(2\theta).\]
The detailed derivation is in the supplementary material.
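For reference, the view alignment of Eq. (2) can be applied to a batch of teacher box parameters as in the sketch below; the translation part of the augmentation leaves the anchor-relative parameters unchanged. This is our own vectorised reading of the equations (valid for the rotations used in training), not the released implementation.

```python
import numpy as np

def transform_teacher_deltas(delta: np.ndarray, theta: float) -> np.ndarray:
    """Sketch of Eq. (2): map teacher box parameters (..., 8) predicted in the
    unaugmented view to the student (rotated) view; theta is the rotation about
    the upright axis."""
    c, s = np.cos(theta), np.sin(theta)
    d1, d2, d3, d4, d5, d6, d7, d8 = np.moveaxis(delta, -1, 0)
    out = np.empty_like(delta)
    out[..., 0] = 0.5 * ((c + 1) * d1 + (1 - c) * d2 - s * d3 + s * d4)
    out[..., 1] = 0.5 * ((1 - c) * d1 + (c + 1) * d2 + s * d3 - s * d4)
    out[..., 2] = 0.5 * (s * d1 - s * d2 + (c + 1) * d3 + (1 - c) * d4)
    out[..., 3] = 0.5 * (-s * d1 + s * d2 + (1 - c) * d3 + (c + 1) * d4)
    out[..., 4] = d5
    out[..., 5] = d6
    out[..., 6] = d7 * np.cos(2 * theta)
    out[..., 7] = d8 * np.cos(2 * theta)
    return out
```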
Matching Filtering. After establishing the matching between the two sets of predictions, a filtering strategy based on confidence is applied to the matching to reduce low-quality supervision. Specifically, with the predicted centerness score and semantic distribution in teacher outputs denoted by \(\tilde{\mathbf{c}}_{T}\) and \(\tilde{\mathbf{s}}_{T}\), only matching that satisfies \(\tilde{\mathbf{c}}_{T}>\gamma_{\text{center}}\) and \(\max\left(\operatorname{softmax}\left(\tilde{\mathbf{s}}_{T}\right)\right)> \gamma_{\text{cls}}\) is retained. \(\gamma_{\text{center}}\) and \(\gamma_{\text{cls}}\) are hyperparameters. Note that, even after the filtering, the matching in our method is still dense.
The primary distinction between the proposed _dense matching_ method and prior arts with _proposal matching_ is the processing order of the matching and filtering. In proposal matching methods, teacher proposals are first filtered using confidence scores for higher quality and then matched, resulting in even sparser teacher proposals. In the dense matching framework, on the contrary, the matching is established first and then filtered, preserving the spatial alignment of the predictions.
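A corresponding filter over the dense matches might be implemented as a simple boolean mask, as sketched below with the threshold values reported in Sec. 4.2; the assumption that the teacher's semantic scores are raw logits is ours.

```python
import torch

def confidence_filter(teacher_centerness, teacher_sem_logits, gamma_center=0.40, gamma_cls=0.20):
    """Sketch of the pseudo-label filter: keep voxel matches whose teacher centerness and
    maximum class probability both exceed their thresholds."""
    cls_conf = torch.softmax(teacher_sem_logits, dim=-1).max(dim=-1).values
    return (teacher_centerness > gamma_center) & (cls_conf > gamma_cls)
```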
Optimization. The student model is optimized by gradient descent with the supervised loss \(\mathcal{L}_{\text{supervised}}\) and the consistency loss \(\mathcal{L}_{\text{consistency}}\).
The supervised loss \(\mathcal{L}_{\text{supervised}}\) is enforced between the student predictions of the labeled input point clouds \(\{\tilde{\mathbf{Y}}_{S}^{L}\}\) and the corresponding labels after the augmentation \(\{\tilde{\mathbf{Y}}^{L}\}\). Following [39], we adopt 3DIoU loss on the predicted bounding boxes, a binary cross entropy loss on the predicted
Figure 3: Illustration of our proposed densely-matched quantization-aware semi-supervised 3D object detection framework.
centerness and a cross entropy loss on the predicted semantic distribution.
The consistency losses \(\mathcal{L}_{\text{consistency}}\) are enforced on the filtered matching between student and teacher predictions. For box parameters, we adopt Huber loss, which is less sensitive to outliers in pseudo labels:
\[\mathcal{L}_{\text{box}}^{\mathbf{A}}=\begin{cases}\frac{1}{2}(\Delta\mathbf{ \delta}_{i}^{\mathbf{A}})^{2},&\text{for }|\Delta\mathbf{\delta}_{i}^{\mathbf{A}}|< \tau_{\text{box}},\\ \tau_{\text{box}}\cdot(|\Delta\mathbf{\delta}_{i}^{\mathbf{A}}|-\frac{1}{2}\tau_{ \text{box}}),&\text{otherwise}.\end{cases} \tag{3}\]
where \(\Delta\boldsymbol{\delta}^{\mathbf{A}}=\boldsymbol{\delta}^{\mathbf{A}}-\tilde{\boldsymbol{\delta}}^{\mathbf{A}}\) and \(\tau_{\text{box}}\) is a hyperparameter. For centerness we adopt an L2 loss \(\mathcal{L}_{\text{center}}^{\mathbf{A}}=||c^{\mathbf{A}}-\tilde{c}^{\mathbf{A}}||_{2}^{2}\). For the predicted semantic distribution, we adopt the KL divergence \(\mathcal{L}_{\text{semantic}}^{\mathbf{A}}=\sum_{c=1}^{N_{\text{cls}}}\mathbf{s}_{c}^{\mathbf{A}}\log\left(\frac{\mathbf{s}_{c}^{\mathbf{A}}}{\tilde{\mathbf{s}}_{c}^{\mathbf{A}}}\right)\). The final consistency loss is then formulated as:
\[\mathcal{L}_{\text{consistency}}=\lambda_{\text{box}}\mathcal{L}_{\text{box}} +\lambda_{\text{center}}\mathcal{L}_{\text{center}}+\lambda_{\text{ semantic}}\mathcal{L}_{\text{semantic}}. \tag{4}\]
where \(\lambda_{\text{box}}\), \(\lambda_{\text{center}}\) and \(\lambda_{\text{semantic}}\) are loss weights.
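Putting the three terms together on the filtered matches, a minimal sketch of the consistency loss (with the weights and thresholds of Sec. 4.2, and assuming raw logits for the semantic scores) is:

```python
import torch
import torch.nn.functional as F

def consistency_loss(s_delta, t_delta, s_center, t_center, s_sem_logits, t_sem_logits,
                     tau_box=0.30, w_box=1.00, w_center=0.25, w_sem=0.50):
    """Sketch of Eqs. (3)-(4) over M matched voxels; the aligned teacher outputs
    are assumed to be detached from the computation graph."""
    l_box = F.huber_loss(s_delta, t_delta, delta=tau_box)          # Eq. (3), averaged
    l_center = F.mse_loss(s_center, t_center)                      # L2 on centerness
    p_s = F.softmax(s_sem_logits, dim=-1)
    p_t = F.softmax(t_sem_logits, dim=-1)
    l_sem = (p_s * (p_s.clamp_min(1e-8).log()                      # KL(student || teacher)
                    - p_t.clamp_min(1e-8).log())).sum(dim=-1).mean()
    return w_box * l_box + w_center * l_center + w_sem * l_sem     # Eq. (4)
```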
As for the teacher model, the gradients are detached and the model parameters are updated using exponential moving average (EMA) of those of the student model:
\[\theta_{t}^{n+1}=\alpha\theta_{t}^{n}+(1-\alpha)\theta_{s}^{n} \tag{5}\]
where \(\theta_{t}^{n}\) and \(\theta_{s}^{n}\) denote the parameters of the teacher and student networks at the \(n\)-th step, and \(\alpha\) is the average factor. The quality of the guidance provided by the teacher model is gradually improved with the knowledge from the student model.
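The EMA update of Eq. (5) is a one-liner per parameter, e.g.:

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, alpha: float = 0.999):
    """Eq. (5): theta_t <- alpha * theta_t + (1 - alpha) * theta_s, applied parameter-wise."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)
```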
### Quantization Error Correction
In this section, we shed light on the problem of quantization error and propose a quantization error correction (QEC) module with closed-form solutions to address the problem.
Following implementation in MinkowskiEngine [4], we define the quantization (or voxelization) operator \([\cdot]\) on a vector as \([\mathbf{A}]=(\lfloor x_{\mathbf{A}}\rfloor\,,\lfloor y_{\mathbf{A}}\rfloor \,,\lfloor z_{\mathbf{A}}\rfloor)\), where the notation \(\lfloor\cdot\rfloor\) denotes the floor operator. The process of quantization is illustrated in Fig. 4(a). Since the stochastic transformation does not commute with voxelization (depicted in Fig. 4(b)), the spatial location of an input point corresponds to two different ones after being processed by the student and teacher networks (\([\mathbf{AR}]\) and \([\mathbf{A}]\)R in Fig. 4(b)), the difference of which is defined as the _quantization error_.
The quantization error is detrimental to dense pseudo-label self-training scheme, as it violates the exact dense matching we pursue and causes inaccurate training signals and performance decrease. We propose an online solution that finds a compensation term \(\mathbf{r}^{\prime}(\mathbf{A},\mathbf{R}_{\theta,\Delta\mathbf{r}})\) for the given location \(\mathbf{A}\) and transformation \(\mathbf{R}_{\theta,\Delta\mathbf{r}}\in\mathcal{T}\), namely find \(\mathbf{\vec{r}}^{\prime}\) that satisfies:
\[[\mathbf{AR}_{\theta,\Delta\mathbf{r}}+\mathbf{\vec{r}}^{\prime}]\xlongequal{ \text{same voxel}}[\mathbf{A}]\mathbf{R}_{\theta,\Delta\mathbf{r}} \tag{6}\]
We rewrite Eq. 6 by applying voxelization to both sides to replace the _same voxel_ equality with the arithmetic equality. Since quantization operation holds the property of idempotence, we have:
\[[\mathbf{AR}_{\theta,\Delta\mathbf{r}}+\mathbf{\vec{r}}^{\prime}]=[[\mathbf{A }]\mathbf{R}_{\theta,\Delta\mathbf{r}}] \tag{7}\]
By refactoring \(\mathbf{AR}_{\theta,\Delta\mathbf{r}}\) into \(\mathbf{AR}_{\theta}+\Delta\mathbf{r}\) and using _Lemma.1_ and _Lemma.2_ from the supplementary material, we have:
\[[\{\mathbf{A}\}\mathbf{R}_{\theta}+\{\Delta\mathbf{r}\}+\mathbf{\vec{r}}^{ \prime}]=\mathbf{0} \tag{8}\]
when \(\theta\in\{\frac{k\pi}{2}\}_{k=0}^{3}\). The operator \(\{\cdot\}:\mathbf{x}\mapsto\mathbf{x}-[\mathbf{x}]\) is defined as the remainder after quantization. We solve Eq. 8 by interpreting the equation as a requirement for the terms on the left-hand side to lie within the voxel represented by the original point. Therefore, assuming \(\mathbf{\gamma}\in[0,S_{\text{v}}]^{3}\) (\(S_{\text{v}}\) denotes the voxel size), we derive the compensation term \(\mathbf{\vec{r}}^{\prime}(\mathbf{A},\mathbf{R}_{\theta,\Delta\mathbf{r}})\) as:
\[\mathbf{\vec{r}}^{\prime}(\mathbf{\gamma})=\mathbf{\gamma}-\{\mathbf{A}\}\mathbf{R}_{ \theta}-\{\Delta\mathbf{r}\} \tag{9}\]
Figure 4: Demonstration of quantization error and correction. \(R\) denotes stochastic augmentation and \([\cdot]\) denotes quantization. For the purpose of illustration, voxels and transformations are depicted in 2D space. (a) Concept of quantization. (b) Concept of quantization error. Without loss of generality, the random transformation is represented by a 90\({}^{\circ}\)counter-clockwise rotation. (c) The process of quantization error correction (QEC). QEC is applied on the student branch between random transformation and quantization to eliminate the quantization error.
To alleviate the negative impacts caused by the perturbations to the point cloud structures, we select \(\mathbf{\gamma}_{0}\) such that:
\[\mathbf{\gamma}_{0}=\operatorname*{argmin}_{\mathbf{\gamma}\in[0,\mathrm{S}_{\mathrm{v}}] ^{3}}\ ||\mathbf{\gamma}-\{\mathbf{A}\}\mathbf{R}_{\theta}-\{\Delta\mathbf{r}\}||_{2} \tag{10}\]
Finding \(\mathbf{\gamma}_{0}\) is a typical mathematical optimization problem, and we provide a closed-form solution to Eq. 10 in the supplementary material.
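Since Eq. (10) asks for the Euclidean projection of \(\{\mathbf{A}\}\mathbf{R}_{\theta}+\{\Delta\mathbf{r}\}\) onto the cube \([0,S_{\mathrm{v}}]^{3}\), the minimiser is the componentwise clamp. The sketch below computes the resulting compensation term; the unit conventions (coordinates and \(\Delta\mathbf{r}\) in metres, row-vector rotation as in the text) and all names are our own reading rather than the released code.

```python
import numpy as np

def qec_offset(points, R_theta, delta_r, voxel_size=0.01):
    """Compensation term r' of Eq. (9) with gamma_0 chosen as in Eq. (10).
    points: (N, 3) coordinates; R_theta: 3x3 rotation about the upright axis
    (theta a multiple of pi/2); delta_r: (3,) translation."""
    frac = lambda v: v - voxel_size * np.floor(v / voxel_size)   # remainder {.} after voxelization
    target = frac(points) @ R_theta + frac(delta_r)              # {A} R_theta + {delta_r}
    gamma0 = np.clip(target, 0.0, voxel_size)                    # projection onto [0, S_v]^3
    return gamma0 - target                                       # r'(gamma_0)

# The corrected student input is then points @ R_theta + delta_r + qec_offset(points, R_theta, delta_r).
```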
## 4 Experiments
### Datasets
Following prior arts [48][60] aiming at semi-supervised 3D object detection, we evaluate our framework on ScanNet v2 [9] and SUN RGB-D [41].
**ScanNet v2**[9] is a widely used 3D indoor scene dataset which contains 1512 scans of indoor scenes reconstructed from 2.5 million high-quality RGB-D images. The annotations include per-point instance labels which enable the derivation of axis-aligned object bounding boxes for training and evaluation of 3D object detection methods. The challenge with this dataset in the semi-supervised setting is the limited amount of labeled data. For instance, the 5% labeled setting corresponds to only a few dozen labeled scenes, making it difficult to learn a good detector from labeled data alone.
**SUN RGB-D**[41] is a widely used benchmark dataset with 10335 indoor scene scans for evaluating scene understanding algorithms, particularly in the context of 3D object detection. Apart from the RGB-D data, the dataset also provides ground-truth 3D bounding box annotations, which enables the evaluation of the task of 3D object detection. The main challenge of this dataset is that the scenes are not axis-aligned. This rotational variability makes it difficult to predict object bounding boxes accurately in the semi-supervised setting, as the model is challenged to recognize objects with any possible orientation after training on limited labeled data.
### Implementation Details
**Hyperparameters.** We use the same set of hyperparameters for both datasets. As suggested in [39], the voxel size is set to \(S_{\mathrm{v}}=0.01\)m. The confidence thresholds are set to \(\gamma_{\text{center}}=0.40\) and \(\gamma_{\text{cls}}=0.20\). The threshold for the Huber loss is set to \(\tau_{\text{box}}=0.30\). The weights for the consistency losses are set to \(\lambda_{\text{box}}=1.00\), \(\lambda_{\text{center}}=0.25\), and \(\lambda_{\text{semantic}}=0.50\). The same warmup strategy as in [60] is adopted for the consistency losses. The average factor \(\alpha\) of the exponential moving average is set to \(0.999\). As for the stochastic transformation strategies, rotation \(\theta\) around the upright-axis is randomly chosen from \(\{0,\frac{\pi}{2},\pi,\frac{3\pi}{2}\}\) and random translation \(\Delta\mathbf{r}\) is sampled uniformly from \([-0.5\text{m},0.5\text{m}]^{3}\).
**Details during training and evaluation.** We use MMDetection3D [8] toolbox to implement our proposed framework. For semi-supervised detection on both ScanNet and SUN RGB-D, our method runs for 12000 training steps which empirically leads to good convergence. During training, we adopt the AdamW optimizer [33] with an initial learning rate of \(10^{-3}\) and a weight decay factor of \(10^{-4}\), and a scheduler decaying the learning rate by 90% at 67% and 90% of the training process. In the semi-supervised setting, a training batch contains 8 labeled samples and 8 unlabeled samples. During evaluation, to ensure fair comparison with former semi-supervised 3D object detection methods [48, 60], we perform only one forward pass without test-time augmentation used by [39]. Meanwhile, we keep other evaluation settings including IoU thresholds for NMS the same as [39]. We report the \(\mathrm{[email protected]}\) and \(\mathrm{[email protected]}\) metrics.
### Comparison of Matching Schemes
In this section, we first provide the comparison between our proposed _dense matching_ method and the _proposal matching_ methods. We validate the superiority of our proposed methods by demonstrating the quantity and quality of the pseudo labels generated by dense matching strategy.
**Are more pseudo-labels harvested?** We trained our proposed method as well as former arts on ScanNet [9] dataset with 10% data equipped with labels and collect the pseudo labels harvested by these methods. The average amounts of pseudo-labels harvested from one scene are illustrated in Fig. 1. As shown in the figure, our methods with dense matching strategy harvest a significantly larger amount of pseudo labels compared with SESS [60], 3DIoUMatch [48] and Proficient Teachers [56], which
Figure 5: Transductive analysis on different matching schemes in semi-supervised 3D object detection. Experiments are conducted on ScanNet with 10% training data equipped with labels. \(\mathrm{Coverage@\{0.25,0.50\}}\) and \(\mathrm{mAP@\{0.25,0.50\}}\) on the unlabeled set are reported. The proposed dense matching scheme achieves significantly higher performance than prior arts.
is a contributing factor to better performance for semi-supervised 3D object detection. As shown by later experiments in Sec. 4.4, this translates to the improvement of the detection performance.
**Are the pseudo-labels of good quality?** In this section, we investigate the quality of pseudo-labels generated by different matching schemes. We borrow the concept of transductive analysis [60] where we regard the model performance on the unlabeled set as the indicating measure of the pseudo-label quality. The results are obtained by training our methods and former arts on ScanNet [9] dataset with 10% of data equipped with labels. In Fig. 5, we depict the \(\mathrm{Coverage@}\{0.25,0.50\}\) and \(\mathrm{mAP@}\{0.25,0.50\}\) on the unlabeled set during the training stage, where \(\mathrm{Coverage}\) indicates the recall rate of objects in the scene. It can be observed that dense matching provides significantly more informative and accurate pseudo-labels, compared with the proposal-based counterparts. We attribute the improved transductive results to the way in which the dense prediction scheme resolves the "multiple supervision" and "no supervision" issues demonstrated in Fig. 2(a) and provides spatially dense training signals for the student network.
### Comparisons with prior SOTAs
In this section, we conduct extensive experiments and report the performance of DQS3D and the prior arts in both semi-supervised and fully-supervised settings on the ScanNet and SUN RGB-D datasets. In the semi-supervised setting, the proportion of the labeled set varies among 5%, 10%, and 20%. The consistency losses are imposed on both the labeled and unlabeled sets. In the fully-supervised setting, the entire dataset is regarded as both the labeled and unlabeled sets to examine if the proposed framework can further learn from the additional supervision of pseudo-labels. The experiment results are presented in Tab. 1.
Our method outperforms prior proposal matching methods by large margins and sets new state-of-the-art results on the semi-supervised 3D object detection benchmark for both ScanNet and SUN RGB-D datasets. It is noteworthy that the improvements of \(\mathrm{mAP@}0.50\) are generally larger than those of \(\mathrm{mAP@}0.25\). We attribute this to the denseness of the pseudo labels, which provides more spatially fine-grained supervision for the student model. In this way, the predicted object bounding boxes overlap with the target objects to a larger extent, which helps achieve more distinct margins with higher IoU thresholds.
Surprisingly, in the fully-supervised setting, our method also pushes the boundaries of 3D object detection, as shown in Tab. 1. We attribute these improvements to the way in which the framework of self-training serves as regularization and helps improve the stability of the training procedure, as the pseudo-labels generated by the teacher networks are not affected by the bias in mini-batches.
\begin{table}
\begin{tabular}{c|c|c c c c c c|c c} \hline \multirow{3}{*}{} & \multirow{3}{*}{Model} & \multicolumn{2}{c}{5\%} & \multicolumn{2}{c}{10\%} & \multicolumn{2}{c|}{20\%} & \multicolumn{2}{c}{100\%} \\ \cline{3-10} & & mAP & mAP & mAP & mAP & mAP & mAP & mAP & mAP \\ & & @0.25 & @0.50 & @0.25 & @0.50 & @0.25 & @0.50 & @0.25 & @0.50 \\ \hline \multirow{6}{*}{ScanNet} & VoteNet [37] & 27.9 & 10.8 & 36.9 & 18.2 & 46.9 & 27.5 & 57.8 & 36.0 \\ & FCAF3D [39] & 43.8 & 29.3 & 51.1 & 35.7 & 58.2 & 42.1 & 69.5 & 55.1 \\ & SESS [60] & 32.0 & 14.4 & 39.5 & 19.8 & 49.6 & 29.0 & 61.3 & 39.0 \\ & 3DIoUMatch [48] & 40.0 & 22.5 & 47.2 & 28.3 & 52.8 & 35.2 & 62.9 & 42.1 \\ \cline{2-10} & **DQS3D (Ours)** & **49.2** & **35.0** & **57.1** & **41.8** & **64.3** & **48.5** & **71.9** & **56.3** \\ & Improv. & **+9.2**\(\uparrow\) & **+12.5**\(\uparrow\) & **+9.9**\(\uparrow\) & **+13.5**\(\uparrow\) & **+11.5**\(\uparrow\) & **+13.3**\(\uparrow\) & **+2.4**\(\uparrow\) & **+1.2**\(\uparrow\) \\ \hline \multirow{6}{*}{SUN RGB-D} & VoteNet [37] & 29.9 & 10.5 & 38.9 & 17.2 & 45.7 & 22.5 & 58.0 & 33.4 \\ & FCAF3D [39] & 49.5 & 31.7 & 50.7 & 33.4 & 54.3 & 36.5 & 63.6 & 47.5 \\ & SESS [60] & 34.2 & 13.1 & 42.1 & 20.9 & 47.1 & 24.5 & 60.5 & 38.1 \\ & 3DIoUMatch [48] & 39.0 & 21.1 & 45.5 & 28.8 & 49.7 & 30.9 & 61.5 & 41.3 \\ \cline{2-10} & **DQS3D (Ours)** & **53.2** & **35.6** & **55.7** & **38.2** & **58.0** & **42.3** & **64.1** & **48.2** \\ & Improv. & **+14.2**\(\uparrow\) & **+14.5**\(\uparrow\) & **+10.2**\(\uparrow\) & **+9.4**\(\uparrow\) & **+8.3**\(\uparrow\) & **+11.4**\(\uparrow\) & **+0.5**\(\uparrow\) & **+0.7**\(\uparrow\) \\ \hline \end{tabular}
\end{table}
Table 1: Experiment results on the task of 3D object detection in various semi-supervised settings (5%, 10%, 20% labels available) and the fully-supervised setting on ScanNet [9] and SUN-RGBD [41] datasets. The proposed DQS3D is compared with **semi-supervised** 3D object detection frameworks SESS [60] and 3DIoUMatch [48], with the margins over 3DIoUMatch [48] marked in blue. Proficient Teachers [56] is currently not comparable, as their experiments were only conducted in outdoor scenes and their code is not publicly available, which prevents us from reproducing their experiments on indoor benchmarks. DQS3D is also compared with **fully-supervised** 3D object detectors VoteNet [37] and FCAF3D [39], with the margins over FCAF3D [39] marked in magenta.
### Ablation Studies
**Quantization Error Correction.** To demonstrate the effectiveness of our proposed quantization error correction (QEC) module (detailed in Sec. 3.4), we conduct experiments on ScanNet (20% of the training set equipped with labels), in which we train the proposed framework with various voxel sizes and ablate the compensation term. In addition to \(\mathrm{mAP@}\{0.25,0.50\}\) on the _validation_ set, we also report the weighted IoU of the predicted and ground-truth bounding boxes on the _labeled training_ set, where each object's weight is given by its predicted centerness.
The results are presented in Tab. 2. Notably, for all voxel sizes, the experiments trained with the compensation term achieve higher performance (up to +2.49% IoU) than those trained without it. These non-trivial improvements demonstrate the effectiveness of the QEC module in addressing the inherent issue of quantization error and consequently improving the detection accuracy.
Furthermore, we conduct a statistical analysis to build intuition for the QEC module. During training on the ScanNet dataset, we collect the compensation terms of 80 million points and plot the distribution of the L2 norms and the directions of the terms in Fig. 6. As depicted in Fig. 6(a), the L2 norms of more than 80% of the compensation terms lie in the range \([0.03S_{\mathrm{v}},S_{\mathrm{v}}]\) (\(S_{\mathrm{v}}\) denotes the voxel size), demonstrating that the quantization error is a non-trivial phenomenon. As depicted in Fig. 6(b), the majority of the compensation terms are aligned with the axes. This is because the QEC terms have the smallest possible magnitude by design to preserve the point cloud structure.
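To illustrate why this error is non-negligible, the snippet below (a hypothetical sketch, not the QEC module of Sec. 3.4) computes the generic offset between each point and the centre of the voxel it is snapped to; the paper's compensation term is instead chosen with the smallest possible magnitude, which explains the axis-aligned directions observed in Fig. 6(b).

```python
import numpy as np

def voxel_snap_offset(points, voxel_size):
    """Offset between each point and the centre of the voxel it falls into.
    points: (N, 3) array of coordinates; voxel_size: scalar S_v."""
    voxel_idx = np.floor(points / voxel_size)
    voxel_centres = (voxel_idx + 0.5) * voxel_size
    return points - voxel_centres   # each component lies in [-S_v/2, S_v/2)
```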
**Consistency Losses.** Ablation experiments on the consistency losses are conducted on the ScanNet dataset with 20% of the training data equipped with labels. The results are reported in Tab. 3. According to the results, removing the box consistency loss has the largest influence, with performance drops of -2.0% (\(\mathrm{mAP@}0.25\)) and -3.2% (\(\mathrm{mAP@}0.50\)), while removing either of the other two consistency losses also degrades performance. These results indicate that each component of the proposed consistency losses is necessary.
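As a rough illustration of what this ablation toggles (not the paper's implementation), the three consistency terms can be sketched as follows, assuming the student and teacher dense predictions have already been aligned voxel-by-voxel after undoing the random transformations T.

```python
import numpy as np

def consistency_losses(s_box, t_box, s_ctr, t_ctr, s_cls, t_cls,
                       use_box=True, use_ctr=True, use_cls=True):
    """All inputs share the leading dimension (number of matched voxels);
    boxes are per-voxel regression parameters, ctr is centerness, cls are class scores."""
    loss = 0.0
    if use_box:
        loss += np.abs(s_box - t_box).mean()               # box consistency (L1)
    if use_ctr:
        loss += np.abs(s_ctr - t_ctr).mean()               # centerness consistency (L1)
    if use_cls:
        loss += ((s_cls - t_cls) ** 2).sum(axis=1).mean()  # class-score consistency (L2)
    return loss
```

The three boolean flags mirror the rows of Tab. 3, where individual terms are switched off.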
## 5 Conclusion
This paper presents a densely-matched quantization-aware framework, DQS3D, for semi-supervised 3D object detection. By leveraging dense matching instead of proposal matching, and by addressing the issue of quantization error, DQS3D achieves significant improvements over former arts on two widely-used benchmarks, ScanNet v2 and SUN RGB-D, in the semi-supervised setting.
Furthermore, the paper provides evidence that the use of dense predictions leads to more meaningful pseudo-labels and promotes self-training. We hope the insights and techniques introduced in this work would inspire future research in the field of semi-supervised learning.
\begin{table}
\begin{tabular}{c|c|c c c} \hline \hline Voxel Size (m) & QEC & IoU (\%) & \(\mathrm{mAP@}0.25\) & \(\mathrm{mAP@}0.50\) \\ \hline \multirow{2}{*}{0.01} & & 75.19 & 63.7 & 47.1 \\ & ✓ & **76.82 (+1.63)** & **64.3 (+0.6)** & **48.5 (+1.6)** \\ \hline \multirow{2}{*}{0.02} & & 71.06 & 59.0 & 42.2 \\ & ✓ & **72.63 (+1.57)** & **60.2 (+1.2)** & **42.8 (+0.6)** \\ \hline \multirow{2}{*}{0.03} & & 65.63 & 51.9 & 35.2 \\ & ✓ & **68.12 (+2.49)** & **52.7 (+0.8)** & **35.9 (+0.7)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablations on the QEC term. Experiments are conducted on ScanNet with 20% training data equipped with labels.
\begin{table}
\begin{tabular}{c|c c c|c c} \hline \hline T & Box & Centerness & Class & \(\mathrm{mAP@}0.25\) & \(\mathrm{mAP@}0.50\) \\ \hline ✓ & ✓ & ✓ & ✓ & **64.3** (+4.3) & **48.5** (+3.9) \\ \hline ✓ & & ✓ & ✓ & 62.3 (-2.0) (+2.3) & 45.3 (-3.2) (+0.7) \\ ✓ & ✓ & & ✓ & 63.5 (-0.8) (+3.5) & 46.4 (-2.1) (+1.8) \\ ✓ & ✓ & ✓ & & 63.4 (-0.9) (+3.4) & 47.3 (-1.2) (+2.7) \\ \hline ✓ & ✓ & & & 63.0 (-1.3) (+3.0) & 46.3 (-2.2) (+1.7) \\ ✓ & & ✓ & & 61.6 (-2.7) (+1.6) & 44.9 (-3.6) (+0.3) \\ ✓ & & & ✓ & 62.1 (-2.2) (+2.1) & 45.3 (-3.2) (+0.7) \\ \hline ✓ & & & & 60.0 (-4.3) & 44.6 (-3.9) \\ & & & & 58.2 (-6.1) (-1.8) & 42.1 (-6.4) (-2.5) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablations on the consistency losses. Experiments are conducted on ScanNet with 20% training data equipped with labels. T denotes random transformations. Red margins are comparisons with DQS3D with all consistency losses. Blue margins are comparisons with DQS3D with no consistency losses.
Figure 6: Visualized statistics of the Quantization Error Correction terms. QEC terms are collected from 80M points from ScanNet scenes under training-stage transformations. (a) L2 norm distribution of QEC terms. (b) Directions of QEC terms; note that **the solid blue lines are actually formed by a large number of densely packed data points.** |
2306.13890 | Virtual element methods for Biot-Kirchhoff poroelasticity | This paper analyses conforming and nonconforming virtual element formulations
of arbitrary polynomial degrees on general polygonal meshes for the coupling of
solid and fluid phases in deformable porous plates. The governing equations
consist of one fourth-order equation for the transverse displacement of the
middle surface coupled with a second-order equation for the pressure head
relative to the solid with mixed boundary conditions. We propose novel
enrichment operators that connect nonconforming virtual element spaces of
general degree to continuous Sobolev spaces. These operators satisfy additional
orthogonal and best-approximation properties (referred to as a conforming
companion operator in the context of finite element methods), which play an
important role in the nonconforming methods. This paper proves a priori error
estimates in the best-approximation form, and derives residual--based reliable
and efficient a posteriori error estimates in appropriate norms, and shows that
these error bounds are robust with respect to the main model parameters. The
computational examples illustrate the numerical behaviour of the suggested
virtual element discretisations and confirm the theoretical findings on
different polygonal meshes with mixed boundary conditions. | Rekha Khot, David Mora, Ricardo Ruiz-Baier | 2023-06-24T07:32:54Z | http://arxiv.org/abs/2306.13890v2 | # Virtual element methods for Biot-Kirchhoff poroelasticity+
###### Abstract
This paper analyses conforming and nonconforming virtual element formulations of arbitrary polynomial degrees on general polygonal meshes for the coupling of solid and fluid phases in deformable porous plates. The governing equations consist of one fourth-order equation for the transverse displacement of the middle surface coupled with a second-order equation for the pressure head relative to the solid with mixed boundary conditions. We propose novel enrichment operators that connect nonconforming virtual element spaces of general degree to continuous Sobolev spaces. These operators satisfy additional orthogonal and best-approximation properties (referred to as a conforming companion operator in the context of finite element methods), which play an important role in the nonconforming methods. This paper proves a priori error estimates in the best-approximation form, and derives residual-based reliable and efficient a posteriori error estimates in appropriate norms, and shows that these error bounds are robust with respect to the main model parameters. The computational examples illustrate the numerical behaviour of the suggested virtual element discretisations and confirm the theoretical findings on different polygonal meshes with mixed boundary conditions.
**Mathematics Subject Classification:** 65N30, 65N12, 65N15.
**Keywords:** Conforming and nonconforming virtual element methods, poromechanics, fourth- and second-order problems, Kirchhoff plate models, companion operators, inverse estimates, norm equivalence, a priori and a posteriori error estimates.
## 1 Introduction
**Scope.** Fluid-saturated porous media that deform are an essential ingredient in many engineering, biophysical and environmental applications. Among these materials, a family featuring particularly interesting properties is that of compressible thin plates. Porosity and permeability characteristics through the thickness can be averaged, leading to a scaling of the poromechanical properties that differs from the typical structure exhibited in Biot's consolidation systems (see, for example, [18, Chapter 8]).
A number of works have addressed the rigorous derivation of poroelastic plate effective equations [34, 35, 38, 40, 41]. The well-posedness analysis has been conducted, for a slightly different model,
in the recent paper [30]. Regarding numerical methods, a discontinuous Galerkin formulation has been proposed in [32] (following [39]) and splitting algorithms have been analysed. High-order finite element methods have been used for layer-wise poroelastic shells in [27].
The virtual element method (VEM) is a relatively new numerical technique that has been gaining popularity in recent years due to its ability to handle complex geometries and provide high-accuracy numerical solutions for partial differential equations (PDEs). Another important feature of VEM is the possibility of easily implementing highly regular discrete spaces. This idea is initiated in [12], where spaces of high global regularity (such as \(C^{1}\), \(C^{2}\) or more) are easily built in a very efficient way. This has been applied and tested in several biharmonic models of thin plates. The literature contains error analyses of VEM for biharmonic problems in thin plate models (Kirchhoff plates), with a particular emphasis on conforming and nonconforming approximations, including eigenvalue problems [2, 4, 12, 15, 23, 36, 37]. Other virtual element discretisations for biharmonic problems in plate models provide a detailed error analysis of the particular type of method and demonstrate its effectiveness through numerical experiments; these analyses also include a posteriori error estimates, time-dependent problems and the 3D case, among others. See for example [1, 7, 8, 21, 44, 43].
The enrichment (averaging) operator for finite elements is introduced in [9] for multigrid methods and explored in [29] to prove a priori error estimates utilising a posteriori error bounds in nonconforming finite element methods (known as medius analysis). These averaging operators are enhanced with orthogonal properties and best-approximation estimates in [17] (referred to as conforming companion operators) in the context of reliable a posteriori error control. The enrichment operators for VEM are initiated in [31] and companion operators for VEM in [16, 15]. Since virtual element functions need not be computed explicitly, the projection operators are paramount in VEM. The non-computable conforming companion operators can be exploited in the analysis, whereas the computable ones can also be used in defining the discrete problem [15], which in turn allows rough sources. These companion operators are computable using the degrees of freedom, and hence involve the shape-regular sub-triangulation of the polygonal decomposition and thus the lowest-order finite element spaces such as Crouzeix-Raviart for second-order and Morley for fourth-order problems. In this paper, we propose a new enrichment operator that maps nonconforming VE spaces to conforming VE spaces of one degree higher (which is different from the construction in [31]) and, in addition, satisfies \(H^{2}\)-orthogonality and best-approximation estimates. We then modify this enrichment operator through a variety of bubble functions to design new companion operators having the lower-order orthogonalities (\(H^{1}\) and \(L^{2}\)). This paper considers sources in \(L^{2}(\Omega)\) and hence deals with possibly non-computable companion operators, but still achieves the aforementioned properties. The treatment of general boundary conditions is carefully addressed in this paper, necessitating the definition and thorough analysis of new companion operators to establish well-posedness and obtain error estimates.
This paper presents an extension of nonconforming VE formulations for the coupling of biharmonic problems and second-order elliptic equations (see the similar methods advanced for single-physics problems in the recent contributions [16, 15]). The model encodes the interaction with a fluid phase, and the study of this type of problem has gained significant attention due to its relevance in various physical applications. More generally, the proposed framework offers a unified approach to solve coupled problems with mixed boundary conditions on polygonal domains, even when they are non-convex. For conforming cases, we combine \(C^{1}-C^{0}\) types of VEMs with various polynomial degrees. The error estimates, measured in the weighted \(H^{2}\times H^{1}\) energy norm (for deflection and pressure moment), demonstrate robustness with respect to material parameters. Additionally, we introduce a reliable and efficient a posteriori error estimator of residual type. Leveraging the flexibility of VEMs in utilising polygonal meshes, we employ the error estimator to drive an adaptive scheme. Notably, the proposed a posteriori analysis is novel for high-order nonconforming VEMs and can be applied to tackle more complex coupled problems: we emphasise that the models presented in this work can serve as a fundamental building block for establishing a comprehensive framework for complex mixed-dimensional poroelastic models. These models can also be extended to incorporate interaction with multi-layered structures, such as thermostats and micro-actuators, offering broad applicability and versatility.
The main contributions of this work can be summarised as follows:
* Application of the proposed conforming and nonconforming VEMs to the plate Biot equations.
* Design of new companion operators with the orthogonal properties and the best-approximation estimates.
* A priori error estimates in the energy norm for both conforming and nonconforming formulations in the best-approximation form that remain robust with respect to material parameters.
* The detailed proofs of the inverse estimate and the norm equivalence for the nonconforming VE functions.
* Introduction and analysis of a residual-based a posteriori error estimator.
* Presentation of numerical results validating the theoretical estimates and demonstrating the competitive performance of the proposed schemes.
**Content and structure.** The remainder of the paper has been organised in the following manner. In the rest of this section we provide preliminary notational conventions and definitions to be used throughout the paper. Section 2 contains the model description and defines the weak formulation of the governing equations. The local and global VE spaces, the degrees of freedom and the computable polynomial projection operators are addressed in Subsection 3.1, and the derivations for both conforming and nonconforming approximations and the analysis of existence and uniqueness of the discrete solution are conducted in Subsection 3.2. The a priori error analysis for the conforming VE methods in the best-approximation form is carried out in Section 4. For the nonconforming case, the companion operators are defined in Subsection 5.1 along with the proofs of their properties and best-approximation estimates, followed by a priori error estimates in Subsection 5.2. Subsection 6.1 recalls the preliminary estimates and Subsection 6.2 contains the detailed proofs of standard estimates such as the inverse estimate and the norm equivalence for the nonconforming VE functions, and a Poincare-type inequality for \(H^{2}\) functions. The reliability and efficiency of an a posteriori error estimator are addressed in Subsections 6.4-6.5. Finally, a collection of illustrative numerical tests is presented in Section 7.
**Recurrent notation and domain configuration.** Consider a spatial domain \(\widehat{\Omega}=\Omega\times(-\zeta,\zeta)\subset\mathbb{R}^{3}\) occupied by an undeformed thin poroelastic plate (a deformable solid matrix or an array of solid particles) of characteristic thickness \(2\zeta\), and where \(\Omega\subset\mathbb{R}^{2}\) represents the mid-surface of the undeformed poroelastic plate. The plate is assumed to be isotropic in the plate plane and to follow the Kirchhoff law. In particular, it is assumed that the plate filaments are orthogonal to the deflected mid-surface [25]. An appropriate modification of Biot constitutive poroelasticity equations is adopted in combination with Darcy flow in deforming pores (see [33]). Following the model presented in [32], we assume that the equations governing the balance of momentum and mass of the solid and fluid phases can be written in terms of the averaged-through-thickness deflection \(u\) (vertical displacement of the solid phase) and the first moment of the pressure of the fluid phase \(p\). We will denote by \(\mathbf{n}\) the unit normal vector on the undeformed boundary \(\partial\Omega\). The boundary \(\partial\Omega\) is disjointly split between a closed set \(\Gamma^{c}\) and an open set \(\Gamma^{s}\) where we impose, respectively, homogeneous deflections and homogeneous normal derivatives of deflections and of pressure moment (clamped sub-boundary with zero-flux) and homogeneous pressures with normal deflections and normal derivatives of the deflection Laplacian (simply supported sub-boundary).
For a subdomain \(S\subseteq\Omega\) we will adopt the notation \((\cdot,\cdot)_{m,S}\) for the inner product, and \(\|\cdot\|_{m,S}\) (resp. \(|\cdot|_{m,S}\)) for the norm (resp. seminorm) in the Sobolev space \(H^{m}(S)\) (or in its vector counterpart \(\mathbf{H}^{m}(S)\)) with \(m\geq 0\). We sometimes drop \(0\) from the subscript in \(L^{2}\) inner product and norms for convenience. Also, given an integer \(k\geq 1\) and \(S\subset\mathbb{R}^{d}\), \(d=1,2\), by \(\mathbb{P}_{k}(S)\) we will denote the space of polynomial functions defined locally in \(S\) and being of total degree up to \(k\). Given a barycentre
\(x_{S}\) and diameter \(h_{S}\) of a domain \(S\), we define the set of scaled monomials \(\mathbb{M}_{k}(S)\) of total degree up to \(k\) and \(\mathbb{M}_{k}^{*}(S)\) of degree equal to \(k\) by
\[\mathbb{M}_{k}(S)=\Big{\{}\Big{(}\frac{x-x_{S}}{h_{S}}\Big{)}^{\ell}:|\ell| \leq k\Big{\}},\text{ and }\mathbb{M}_{k}^{*}(S)=\Big{\{}\Big{(}\frac{x-x_{S}}{h_{S}}\Big{)}^{\ell}:| \ell|=k\Big{\}}.\]
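For illustration only (this sketch is not part of the paper), the scaled monomial basis on a two-dimensional element can be generated as follows, with `x_S` and `h_S` playing the role of the barycentre and the diameter of \(S\).

```python
import numpy as np

def scaled_monomials(k, x_S, h_S):
    """Scaled monomial basis M_k(S): m_{ab}(x) = ((x1-xS1)/h_S)^a ((x2-xS2)/h_S)^b with a+b <= k.
    Returns the list of exponents and a callable evaluating all basis functions at given points."""
    exponents = [(a, t - a) for t in range(k + 1) for a in range(t + 1)]

    def evaluate(x):
        x = np.atleast_2d(np.asarray(x, dtype=float))   # shape (n_points, 2)
        xi = (x - np.asarray(x_S, dtype=float)) / h_S   # scaled local coordinates
        return np.stack([xi[:, 0] ** a * xi[:, 1] ** b for a, b in exponents], axis=1)

    return exponents, evaluate
```

The subset of exponents with \(a+b=k\) corresponds to \(\mathbb{M}_{k}^{*}(S)\).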
Throughout the paper we use \(C\) to denote a generic positive constant independent of the mesh size \(h\) and of the main model parameters, that might take different values at its different occurrences. Moreover, given any positive expressions \(X\) and \(Y\), the notation \(X\;\lesssim\;Y\) means that \(X\;\leq\;C\,Y\) (similarly for \(X\;\gtrsim\;Y\)).
## 2 Plate Biot equations and solvability analysis
The Biot-Kirchhoff equations (using their usual deflection-pressure formulation in inertial regime) governing the transverse dynamics of a thin poroelastic body and considering mixed boundary conditions, read
\[\frac{\partial^{2}u}{\partial t^{2}}+\Delta^{2}u+\alpha\Delta p=f\qquad\text{in }\Omega\times(0,T],\tag{1a}\]
\[\beta\frac{\partial p}{\partial t}-\alpha\frac{\partial(\Delta u)}{\partial t}-\gamma\Delta p=g\qquad\text{in }\Omega\times(0,T],\tag{1b}\]
\[u=\nabla u\cdot\mathbf{n}=\nabla p\cdot\mathbf{n}=0\qquad\text{on }\Gamma^{c}\times(0,T],\tag{1c}\]
\[u=\Delta u=p=0\qquad\text{on }\Gamma^{s}\times(0,T],\tag{1d}\]
supplemented with appropriate initial conditions. Here \(f\in L^{2}(0,T;\Omega)\) is the normal vertical loading and \(g\in L^{2}(0,T;\Omega)\) is a prescribed mass source/sink. The model parameters depend on the first and second Lamé constants of the solid, \(\lambda,\mu\), and on the total storage capacity and Biot-Willis poroelastic coefficients, \(c_{0},\alpha\), respectively:
\[\beta=\big{(}c_{0}[\lambda+2\mu]+\alpha^{2}\big{)}\gamma,\quad\gamma=\frac{ \lambda+\mu}{\mu}.\]
System (1) is similar to the non-inertial problem in [32], which accommodates fluid-saturated plates where diffusion is possible in the in-plane direction (see also the set of problems recently analysed in [30]), here extended to the case of mixed boundary conditions. In order to fix ideas, we will focus first on a simplified system, resulting from applying a centered and backward Euler semi-discretisation in time to (1a)-(1b), with a conveniently rescaled final time \(T\) and the time step rescaled to \(\Delta t=1\). Owing to the specification of boundary conditions (1c)-(1d) (taken homogeneous for the sake of simplicity of the presentation), a weak formulation is obtained, which reads: Find \((u,p)\in V\times Q:=[H_{\Gamma^{c}}^{2}(\Omega)\cap H_{0}^{1}(\Omega)]\times H_{\Gamma^{s}}^{1}(\Omega)\) such that
\[(u,v)_{\Omega}+(\nabla^{2}u,\nabla^{2}v)_{\Omega}-\alpha(\nabla p,\nabla v)_{\Omega}=(\tilde{f},v)_{\Omega} \forall\;v\in V, \tag{2a}\] \[\beta(p,q)_{\Omega}+\alpha(\nabla q,\nabla u)_{\Omega}+\gamma( \nabla p,\nabla q)_{\Omega}=(\tilde{g},q)_{\Omega} \forall\;q\in Q, \tag{2b}\]
with \(\nabla^{2}v:=\begin{pmatrix}v_{xx}&v_{xy}\\ v_{yx}&v_{yy}\end{pmatrix}\) being the Hessian matrix (of second-order derivatives) for a given \(v\in H^{2}(\Omega)\). The right-hand side terms also include the values of the deflection and pressure moments at the previous backward Euler time steps, denoted as \(\widehat{u}^{n},\widehat{u}^{n-1}\) and \(\widehat{p}^{n}\), respectively:
\[\tilde{f}=f+2\widehat{u}^{n}-\widehat{u}^{n-1},\qquad\tilde{g}=g+\widehat{p}^ {n},\]
where the index \(n\geq 0\) indicates the time step.
Let us now group the trial and test fields as \(\vec{\mathbf{u}}=(u,p)\) and \(\vec{\mathbf{v}}=(v,q)\), respectively; and introduce the operator \(\mathcal{A}:\mathbf{H}_{\epsilon}\to\mathbf{H}_{\epsilon}\) defined as
\[\langle\mathcal{A}(\vec{\mathbf{u}}),\vec{\mathbf{v}}\rangle:=(u,v)_{\Omega}+(\nabla^{ 2}u,\nabla^{2}v)_{\Omega}-\alpha(\nabla p,\nabla v)_{\Omega}+\beta(p,q)_{\Omega }+\alpha(\nabla q,\nabla u)_{\Omega}+\gamma(\nabla p,\nabla q)_{\Omega},\]
where \(\langle\cdot,\cdot\rangle\) denotes the duality pairing between \(\mathbf{H}_{\epsilon}\) and \(\mathbf{H}^{\prime}_{\epsilon}\). The product space \(\mathbf{H}_{\epsilon}\) contains all \(\vec{\boldsymbol{u}}\in[H^{2}_{\Gamma^{c}}(\Omega)\cap H^{1}_{0}(\Omega)]\times H ^{1}_{\Gamma^{s}}(\Omega)\) which are bounded in the norm
\[\|\vec{\boldsymbol{u}}\|^{2}_{\mathbf{H}_{\epsilon}}:=\|u\|^{2}_{\Omega}+|u|^{2 }_{2,\Omega}+\beta\|p\|^{2}_{\Omega}+\gamma|p|^{2}_{1,\Omega}. \tag{3}\]
The subscript \(\epsilon\) denotes the weighting parameters (in our case, \(\beta,\gamma\)). We also define the linear and bounded operator \(\mathcal{F}:\mathbf{H}_{\epsilon}\to\mathbb{R}\) as
\[\vec{\boldsymbol{v}}\mapsto\mathcal{F}(\vec{\boldsymbol{v}}):=(\tilde{f},v)_{ \Omega}+(\tilde{g},q)_{\Omega},\]
and therefore Problem (2) is recast as: Find \(\vec{\boldsymbol{u}}\in\mathbf{H}_{\epsilon}\) such that
\[\langle\mathcal{A}(\vec{\boldsymbol{u}}),\vec{\boldsymbol{v}}\rangle= \mathcal{F}(\vec{\boldsymbol{v}})\qquad\forall\;\vec{\boldsymbol{v}}\in \mathbf{H}_{\epsilon}. \tag{4}\]
We are now in a position to state the solvability of the continuous problem (4).
**Theorem 2.1**.: _Problem (4) is well-posed in the space \(\mathbf{H}_{\epsilon}\) equipped with the norm (3)._
Proof.: It follows from the Lax-Milgram lemma (see, e.g., [26, Lemma 25.2]), requiring the boundedness of \(\mathcal{A}\) over the space \(\mathbf{H}_{\epsilon}\)
\[\langle\mathcal{A}(\vec{\boldsymbol{u}}),\vec{\boldsymbol{v}}\rangle\lesssim \|\vec{\boldsymbol{u}}\|_{\mathbf{H}_{\epsilon}}\|\vec{\boldsymbol{v}}\|_{ \mathbf{H}_{\epsilon}}\qquad\forall\;\vec{\boldsymbol{u}},\vec{\boldsymbol{v}} \in\mathbf{H}_{\epsilon},\]
and the boundedness of \(\mathcal{F}\), as well as the coercivity condition
\[\langle\mathcal{A}(\vec{\boldsymbol{u}}),\vec{\boldsymbol{u}}\rangle=\|\vec{ \boldsymbol{u}}\|^{2}_{\mathbf{H}_{\epsilon}}\qquad\forall\;\vec{\boldsymbol{ u}}\in\mathbf{H}_{\epsilon}.\]
For the continuity it suffices to apply the Cauchy-Schwarz inequality while the coercivity is a direct consequence of the definition of the solution operator (whose off-diagonal terms cancel out).
Now, we state an additional regularity result for the solution of problem (4).
**Regularity estimates**[28]. Given \(\tilde{f}\in H^{s-4}(\Omega)\) and \(\tilde{g}\in H^{r-2}(\Omega)\) with \(s\geq 2\) and \(r\geq 1\), there exists a unique solution \(\vec{\boldsymbol{u}}=(u,p)\in(H^{s}(\Omega)\cap V)\times(H^{r}(\Omega)\cap Q)\) to (4) such that
\[\|u\|_{s,\Omega}+\|p\|_{r,\Omega}\lesssim\|\tilde{f}\|_{s-4,\Omega}+\|\tilde{ g}\|_{r-2,\Omega}. \tag{5}\]
## 3 Virtual element formulation and unique solvability of the discrete problem
Let us denote by \(\{\mathcal{T}_{h}\}_{h>0}\) a shape-regular family of partitions of \(\bar{\Omega}\), conformed by polygons \(K\) of maximal diameter \(h_{K}\), and we denote the mesh size by \(h:=\max\{h_{K}:K\in\mathcal{T}_{h}\}\). Let \(\mathcal{V}=\mathcal{V}^{i}\cup\mathcal{V}^{c}\cup\mathcal{V}^{s}\) and \(\mathcal{E}=\mathcal{E}^{i}\cup\mathcal{E}^{c}\cup\mathcal{E}^{s}\) be the set of interior vertices \(\mathcal{V}^{i}\) and boundary vertices \(\mathcal{V}^{c}\cup\mathcal{V}^{s}\), and the set of interior edges \(\mathcal{E}^{i}\) and boundary edges \(\mathcal{E}^{c}\cup\mathcal{E}^{s}\). By \(N_{K}\) we will denote the number of vertices/edges in the generic polygon \(K\). For all edges \(e\in\partial K\), we denote by \(\boldsymbol{n}^{e}_{K}\) the unit normal pointing outwards \(K\), \(\boldsymbol{t}^{e}_{K}\) the unit tangent vector along \(e\) on \(K\), and \(V_{i}\) represents the \(i^{th}\) vertex of the polygon \(K\). We suppose that there exists a universal positive constant \(\rho\) such that
1. **(M1)** every polygon \(K\in\mathcal{T}_{h}\) of diameter \(h_{K}\) is star-shaped with respect to every point of a ball of radius greater than or equal to \(\rho h_{K}\),
2. **(M2)** every edge \(e\) of \(K\) has a length \(h_{e}\) greater than or equal to \(\rho h_{K}\).
Throughout this section we will construct and analyse a conforming and a nonconforming family of VE methods.
### Virtual element spaces
**VE spaces for displacement approximation**. First we define the bilinear form \(a^{K}\) as the restriction to \(K\) of
\[a(v,w):=\int_{\Omega}\nabla^{2}v:\nabla^{2}w\,\mathrm{d}\mathbf{x}.\]
For \(K\in\mathcal{T}_{h}\) and \(k\geq 2\), define the projection operator \(\Pi_{k}^{\nabla^{2}}:H^{2}(K)\to\mathbb{P}_{k}(K)\), for \(v\in H^{2}(K)\), by
\[a^{K}(\Pi_{k}^{\nabla^{2}}v,\chi_{k})=a^{K}(v,\chi_{k})\qquad\forall\chi_{k} \in\mathbb{P}_{k}(K), \tag{11}\]
with the additional conditions
\[\overline{\Pi_{k}^{\nabla^{2}}v} =\overline{v}\quad\text{and}\quad\overline{\nabla\Pi_{k}^{\nabla ^{2}}v}=\overline{\nabla v} \text{for conforming VEM}, \tag{12a}\] \[\overline{\Pi_{k}^{\nabla^{2}}v} =\overline{v}\quad\text{and}\quad\int_{\partial K}\nabla\Pi_{k} ^{\nabla^{2}}v\,\mathrm{d}s=\int_{\partial K}\nabla v\,\mathrm{d}s \text{for nonconforming VEM}, \tag{12b}\]
where \(\overline{v}\) is the average \(\frac{1}{N_{K}}\sum_{i=1}^{N_{K}}v(V_{i})\) of the values of \(v\) at the vertices \(V_{i}\) of \(K\). Since the linear polynomials \(\chi_{k}\in\mathbb{P}_{1}(K)\subset\mathbb{P}_{k}(K)\) lead to the identity \(0=0\) in (11), it follows that the two conditions in (12a) for conforming and (12b) for nonconforming fix the affine contribution and define \(\Pi_{k}^{\nabla^{2}}v\) uniquely for a given \(v\). Furthermore, the Poincare-Friedrichs inequality implies
\[\|v-\Pi_{k}^{\nabla^{2}}v\|_{K}\lesssim h_{K}|v-\Pi_{k}^{\nabla^{2}}v|_{1,K} \lesssim h_{K}^{2}|v-\Pi_{k}^{\nabla^{2}}v|_{2,K}. \tag{13}\]
The local conforming VE space \(V_{h}^{k,c}(K)\)[12] is a set of solutions to a biharmonic problem over \(K\) with clamped boundary conditions on \(\partial K\), and it is defined, for \(k\geq 2\) and \(r=\max\{k,3\}\), as
\[V_{h}^{k,c}(K):=\left\{\begin{array}{c}v_{h}\in H^{2}(K)\cap C^{1}(\partial K):\Delta^{2}v_{h}\in\mathbb{P}_{k}(K),\ v_{h}|_{e}\in\mathbb{P}_{r}(e)\ \text{and}\ \nabla v_{h}|_{e}\cdot\mathbf{n}_{K}^{e}\in\mathbb{P}_{k-1}(e)\\ \qquad\quad\forall\ e\in\partial K,\ \text{and}\ (v_{h}-\Pi_{k}^{\nabla^{2}}v_{h},\chi)_{K}=0\quad\forall\ \chi\in\mathbb{P}_{k}(K)\setminus\mathbb{P}_{k-4}(K)\end{array}\right\}.\]
On the other hand, the local nonconforming VE space is a set of solutions to a biharmonic problem with simply supported boundary conditions and was first introduced in [44]. Carstensen _et al._ pointed out in [15] that the definition in [44] works for a polygon \(K\) without hanging nodes, and provided an alternate definition for the lowest-order case (\(k=2\)) with possibly hanging nodes in \(K\). In this paper, we extend this definition of the nonconforming VE space to general degree \(k\). First we need to define some preliminary geometrical notation. Let \(K\in\mathcal{T}_{h}\) be a polygonal element, and \(\mathcal{E}_{K}:=\{e_{1},\ldots,e_{N_{K}}\}\) and \(V_{1},\ldots,V_{N_{K}}\) be the edges and vertices of \(K\). Suppose that \(z_{1},\ldots,z_{\tilde{N}_{K}}\) denote the corner points of \(K\) for some \(\tilde{N}_{K}\leq N_{K}\), where the angle at each \(z_{j}\) is different from \(0,\pi,2\pi\). The boundary \(\partial K=e_{1}\cup\cdots\cup e_{N_{K}}\) can also be viewed as a union of the sides \(s_{1},\ldots,s_{\tilde{N}_{K}}\), where \(s_{j}:=\text{conv}\{z_{j},z_{j+1}\}\) for \(z_{j}=V_{m_{j}}\) and \(z_{j+1}=V_{m_{j}+n_{j}}\) with \(z_{\tilde{N}_{K}+1}=z_{1}\). See a sketch in Figure 3.1.
With these notations, we are in a position to define the local nonconforming VE space \(V_{h}^{k,\text{nc}}(K)\) for \(k\geq 2\) by
\[V_{h}^{k,\text{nc}}(K):=\left\{\begin{array}{c}v_{h}\in H^{2}(K)\cap C^{0}( \partial K):\Delta^{2}v_{h}\in\mathbb{P}_{k}(K),\ v_{h}|_{e}\in\mathbb{P}_{k}(e) \ \text{and}\ \Delta v_{h}|_{e}\in\mathbb{P}_{k-2}(e)\\ \qquad\forall\ e\in\mathcal{E}_{K},\ v_{h}|_{s_{j}}\in C^{1}(s_{j}),\ \int_{e_{m_{j}}}v_{h}\chi\,\mathrm{d}s=\int_{e_{m_{j}}}\Pi_{k}^{\nabla^{2}}v_{h }\chi\,\mathrm{d}s\quad\forall\ \chi\in\mathbb{P}_{k-2}(e_{m_{j}}),\\ \qquad\text{and}\ \int_{e_{m_{j}+i}}v_{h}\chi\,\mathrm{d}s=\int_{e_{m_{j}+i}} \Pi_{k}^{\nabla^{2}}v_{h}\chi\,\mathrm{d}s\quad\forall\ \chi\in\mathbb{P}_{k-3}(e_{m_{j}+i})\ \text{for}\ i=1,\ldots,n_{j},\\ \qquad\text{and}\ j=1,\ldots,\tilde{N}_{K},\quad(v_{h}-\Pi_{k}^{\nabla^{2}}v_{h },\chi)_{K}=0\quad\forall\ \chi\in\mathbb{P}_{k}(K)\setminus\mathbb{P}_{k-4}(K)\end{array}\right\}.\]
The local degrees of freedom (DoFs) for both conforming and nonconforming VE spaces are summarised in Table 3.1.
It can be shown that the triplets \((K,V_{h}^{k,c}(K),\{(\mathbb{D}1)-(\mathbb{D}5)\})\) and \((K,V_{h}^{k,\text{nc}}(K),\{(\mathbb{D}1^{\star})-(\mathbb{D}4^{\star})\})\) form a finite element in the sense of Ciarlet [24], and the projection operator \(\Pi_{k}^{\nabla^{2}}v_{h}\) for \(v_{h}\in V_{h}^{k,c}(K)\)
(resp. \(v_{h}\in V_{h}^{k,\mathrm{nc}}(K)\)) is computable in terms of the DoFs (\(\mathbb{D}1\))-(\(\mathbb{D}5\)) (resp. (\(\mathbb{D}1^{\star}\))-(\(\mathbb{D}4^{\star}\))). We refer to [12] (resp. [15]) for a proof.
Let \(\Pi_{k}\) denote the \(L^{2}\)-projection onto the polynomial space \(\mathbb{P}_{k}(K)\). That is,
\[(\Pi_{k}v,\chi)_{K}=(v,\chi)_{K}\qquad\forall\;\chi\in\mathbb{P}_{k}(K).\]
The orthogonality condition in the definition of the local VE spaces \(V_{h}^{k,c}(K)\) and \(V_{h}^{k,\mathrm{nc}}(K)\) implies that \(\Pi_{k}\) is also computable in terms of the DoFs.
For \(v\in H^{1}(K)\) and \(\vec{\chi}\in(\mathbb{P}_{k-1}(K))^{2}\), an integration by parts leads to the expressions
\[(\Pi_{k-1}\nabla v,\vec{\chi})_{K}=-(v,\mathbf{div}\;\vec{\chi})_{K}+(v,\vec{\chi}\cdot\mathbf{n}_{K}^{e})_{\partial K}=-(\Pi_{k}v,\mathbf{div}\;\vec{\chi})_{K}+(v,\vec{\chi}\cdot\mathbf{n}_{K}^{e})_{\partial K}, \tag{11}\]
owing to the definition of \(\Pi_{k}\) in the last step. Observe that the DoFs (\(\mathbb{D}1\))-(\(\mathbb{D}2\)) and (\(\mathbb{D}4\)) determine \(v_{h}\in\mathbb{P}_{r}(e)\) explicitly for all \(e\in\partial K\). This and the computability of \(\Pi_{k}\) imply that \(\Pi_{k-1}\nabla v_{h}\) for \(v_{h}\in V_{h}^{k,c}(K)\) is computable in terms of the DoFs. Since \(\Pi_{k}^{\nabla^{2}}v_{h}\) is computable, the values \(\int_{e_{m_{j}}}v_{h}\chi\,\mathrm{d}s\) for \(\chi\in\mathbb{M}_{k-2}(e_{m_{j}})\) are computable from the definition of \(V_{h}^{k,\mathrm{nc}}(K)\). If \(n_{j}=0\), these \(k-1\) conditions and the values at the vertices \(V_{m_{j}}\) and \(V_{m_{j}+1}\) uniquely determine \(v_{h}\in\mathbb{P}_{k}(e_{m_{j}})\). If \(n_{j}>0\), the point values \(v_{h}(V_{m_{j}+i}),v_{h}(V_{m_{j}+i+1}),\partial_{\tau}v_{h}(V_{m_{j}+i})\) and \(\int_{e_{m_{j}+i}}v_{h}\chi\,\mathrm{d}s\) for \(\chi\in\mathbb{M}_{k-3}(e_{m_{j}+i})\) determine \(v_{h}\) on each edge \(e_{m_{j}+i}\) for \(i=1,\ldots,n_{j},\ j=1,\ldots,\tilde{N}_{K}\), and consequently \(v_{h}\) is known on the boundary \(\partial K\). Similarly as above, this step and the computability of \(\Pi_{k}\) imply that \(\Pi_{k-1}\nabla v_{h}\) is computable in terms of the DoFs for \(v_{h}\in V_{h}^{k,\mathrm{nc}}(K)\).
\begin{table}
\begin{tabular}{|l|l|l|} \hline \hline degree & DoFs of \(v_{h}\in V_{h}^{k,c}(K)\) & DoFs of \(v_{h}\in V_{h}^{k,\mathrm{nc}}(K)\) \\ \hline \(k\geq 2\) & \((\mathbb{D}1)\)\(v_{h}(V_{i})\quad\forall\;i=1,\ldots,N_{K}\) & \((\mathbb{D}1^{\star})\)\(v_{h}(V_{i})\quad\forall\;i=1,\ldots,N_{K}\) \\ & \((\mathbb{D}2)\)\(h_{V_{i}}\nabla v_{h}(V_{i})\quad\forall\;i=1,\ldots,N_{K}\) & \((\mathbb{D}2^{\star})\)\(\int_{e}\partial_{\mathbf{n}}v_{h}\chi\,\mathrm{d}s\quad\forall\;\chi \in\mathbb{M}_{k-2}(e),\;e\in\mathcal{E}_{K}\) \\ \hline \(k\geq 3\) & \((\mathbb{D}3)\)\(\int_{e}\partial_{\mathbf{n}}v_{h}\chi\,\mathrm{d}s\quad\forall\;\chi\in \mathbb{M}_{k-3}(e),\;e\in\mathcal{E}_{K}\) & \((\mathbb{D}3^{\star})\)\(\int_{e}v_{h}\chi\,\mathrm{d}s\quad\forall\;\chi\in\mathbb{M}_{k-3}(e),\;e \in\mathcal{E}_{K}\) \\ \hline \(k\geq 4\) & \((\mathbb{D}4)\)\(\int_{e}v_{h}\chi\,\mathrm{d}s\quad\forall\;\chi\in\mathbb{M}_{k-4}(e),\;e\in \mathcal{E}_{K}\) & \((\mathbb{D}4^{\star})\)\(\int_{K}v_{h}\chi\,\mathrm{d}\mathbf{x}\quad\forall\;\chi\in\mathbb{M}_{k-4}(K)\) \\ & \((\mathbb{D}5)\)\(\int_{K}v_{h}\chi\,\mathrm{d}\mathbf{x}\quad\forall\;\chi\in\mathbb{M}_{k-4}(K)\) & \\ \hline \end{tabular}
\end{table}
Table 3.1: The left panel describes the DoFs of \(V_{h}^{k,c}(K)\) with the characteristic length (see [12], for example) \(h_{V_{i}}\) associated with each vertex \(V_{i}\) for all \(i=1,\ldots,N_{K}\), and the right column lists the DoFs of \(V_{h}^{k,\mathrm{nc}}(K)\).
**Proposition 3.1** (Polynomial approximation [11]).: _Under the assumption (M1), for every \(v\in H^{s}(K)\), there exists \(\chi_{k}\in\mathbb{P}_{k}(K)\) with \(k\in\mathbb{N}_{0}\) such that_
\[|v-\chi_{k}|_{m,K}\lesssim h_{K}^{s-m}|v|_{s,K}\quad\text{for }0\leq m\leq s\leq k+1.\]
The global VE spaces \(V_{h}^{k,c}\) and \(V_{h}^{k,\text{nc}}\) are defined, respectively, as
\[V_{h}^{k,c}:=\{v_{h}\in V:v_{h}|_{K}\in V_{h}^{k,c}(K)\quad\forall\;K\in \mathcal{T}_{h}\},\]
and
\[V_{h}^{k,\text{nc}}:=\left\{\begin{array}{rl}&v_{h}\in L^{2}(\Omega):v_{h}|_ {K}\in V_{h}^{k,\text{nc}}(K)\quad\forall\;K\in\mathcal{T}_{h},\;v_{h}\;\text{ is continuous at interior vertices}\\ &\text{and zero at boundary vertices},\;\int_{e}[\partial_{\boldsymbol{n}}v_{h}] \chi\,\mathrm{d}s=0\quad\forall\;\chi\in\mathbb{P}_{k-2}(e),\;e\in\mathcal{E} ^{i}\cup\mathcal{E}^{c}\\ &\text{and}\;\int_{e}[v_{h}]\chi\,\mathrm{d}s=0\quad\forall\;\chi\in\mathbb{P} _{k-3}(e),\;e\in\mathcal{E}\end{array}\right\}.\]
**VE spaces for pressure approximation**. We define the projection operator \(\Pi_{\ell}^{\nabla}:H^{1}(K)\to\mathbb{P}_{\ell}(K)\) for \(\ell\geq 1\) and \(q\in H^{1}(K)\) through the following equation
\[(\nabla\Pi_{\ell}^{\nabla}q,\nabla\chi_{\ell})_{K}=(\nabla q,\nabla\chi_{\ell })_{K}\qquad\forall\;\chi_{\ell}\in\mathbb{P}_{\ell}(K), \tag{11}\]
with the additional condition needed to fix the constant
\[\overline{\Pi_{\ell}^{\nabla}q} =\overline{q} \text{for conforming VEM}, \tag{12a}\] \[\int_{\partial K}\Pi_{\ell}^{\nabla}q\,\mathrm{d}s =\int_{\partial K}q\,\mathrm{d}s \text{for nonconforming VEM}. \tag{12b}\]
This defines \(\Pi_{\ell}^{\nabla}q\) uniquely for a given \(q\). To approximate the pressure space \(Q\), we introduce the local conforming VE space \(Q_{h}^{\ell,c}(K)\) for \(\ell\geq 1\) and \(K\in\mathcal{T}_{h}\) as the set of solutions to a Poisson problem with Dirichlet boundary conditions [6]. In particular,
\[Q_{h}^{\ell,c}(K):=\left\{\begin{array}{rl}&q_{h}\in H^{1}(K)\cap C^{0}( \partial K):\Delta q_{h}\in\mathbb{P}_{\ell}(K),\;q_{h}|_{e}\in\mathbb{P}_{ \ell}(e)\quad\forall\;e\in\partial K,\\ &\text{and}\;(q_{h}-\Pi_{\ell}^{\nabla}q_{h},\chi)_{K}=0\quad\forall\;\chi\in \mathbb{P}_{\ell}(K)\setminus\mathbb{P}_{\ell-2}(K)\end{array}\right\}.\]
In turn, the local nonconforming VE space \(Q_{h}^{\ell,\text{nc}}(K)\) is the set of solutions to a Poisson problem with Neumann boundary condition [5] and is defined for \(\ell\geq 1\) as
\[Q_{h}^{\ell,\text{nc}}(K):=\left\{\begin{array}{rl}&q_{h}\in H^{1}(K)\cap C ^{0}(\partial K):\Delta q_{h}\in\mathbb{P}_{\ell}(K),\;\partial_{\boldsymbol{ n}}q_{h}|_{e}\in\mathbb{P}_{\ell-1}(e)\quad\forall\;e\in\partial K,\\ &\text{and}\;(q_{h}-\Pi_{\ell}^{\nabla}q_{h},\chi)_{K}=0\quad\forall\;\chi\in \mathbb{P}_{\ell}(K)\setminus\mathbb{P}_{\ell-2}(K)\end{array}\right\}.\]
The DoFs for \(Q_{h}^{\ell,c}(K)\) and \(Q_{h}^{\ell,\text{nc}}(K)\) are provided in Table 3.2.
The triplets \((K,Q_{h}^{\ell,c}(K),\{(\mathbb{F}1)-(\mathbb{F}3)\})\) and \((K,Q_{h}^{\ell,\text{nc}}(K),\{(\mathbb{F}1^{\star})-(\mathbb{F}2^{\star})\})\) form a finite element in the sense of Ciarlet [24] (see, e.g. [6]). Note that \(\Pi_{\ell}^{\nabla}q_{h}\) can be computed from DoFs of \((\mathbb{F}1)\)-\((\mathbb{F}3)\) (resp.
\begin{table}
\begin{tabular}{|l|l|l|} \hline \hline degree & DoFs of \(q_{h}\in Q_{h}^{\ell,c}(K)\) & DoFs of \(q_{h}\in Q_{h}^{\ell,nc}(K)\) \\ \hline \hline \(\ell\geq 1\) & \((\mathbb{F}1)\;q_{h}(V_{i})\quad\forall\;i=1,\ldots,N_{K}\) & \((\mathbb{F}1^{\star})\;\oint_{e}q_{h}\chi\,\mathrm{d}s\quad\forall\chi\in \mathbb{M}_{\ell-1}(e)\) \\ \hline \(\ell\geq 2\) & \((\mathbb{F}2)\;\oint_{e}q_{h}\chi\,\mathrm{d}s\quad\forall\chi\in\mathbb{M}_{ \ell-2}(e)\) & \((\mathbb{F}2^{\star})\;\oint_{K}q_{h}\chi\,\mathrm{d}\boldsymbol{x}\quad\forall \chi\in\mathbb{M}_{\ell-2}(K)\) \\ & \((\mathbb{F}3)\;\oint_{K}q_{h}\chi\,\mathrm{d}\boldsymbol{x}\quad\forall\chi\in \mathbb{M}_{\ell-2}(K)\) & \\ \hline \end{tabular}
\end{table}
Table 3.2: The left (resp. right) panel describes the DoFs of \(Q_{h}^{\ell,c}(K)\) (resp. \(Q_{h}^{\ell,\text{nc}}(K)\)).
\((\mathbb{F}1^{\star})\)-\((\mathbb{F}2^{\star})\)) for \(q_{h}\in Q_{h}^{\ell,c}(K)\) (resp. \(q_{h}\in Q_{h}^{\ell,\mathrm{nc}}(K)\)). Refer to [6] (resp. [16]) for a proof. Consequently, the \(L^{2}\)-projection \(\Pi_{\ell}\) is also computable from the orthogonality condition in the definition of the spaces \(Q_{h}^{\ell,c}(K)\) and \(Q_{h}^{\ell,\mathrm{nc}}(K)\). This and the explicit expression of \(q_{h}\) on the boundary \(\partial K\) in (11) show that \(\Pi_{\ell-1}\nabla q_{h}\) is computable for \(q_{h}\in Q_{h}^{\ell,c}(K)\). The computability of \(\Pi_{\ell}\) and \((\mathbb{F}1^{\star})\) in (11) imply that of \(\Pi_{\ell-1}\nabla q_{h}\) for \(q_{h}\in Q_{h}^{\ell,\mathrm{nc}}(K)\).
Next we define the global VE spaces for conforming and nonconforming pressure approximation, for \(\ell\geq 1\), as
\[Q_{h}^{\ell,c}:=\{q_{h}\in Q:q_{h}|_{K}\in Q_{h}^{\ell,c}(K)\quad\forall\ K\in\mathcal{T}_{h}\},\]
and
\[Q_{h}^{\ell,\mathrm{nc}}:=\left\{\begin{array}{ll}q_{h}\in L^{2}(\Omega):&q _{h}|_{K}\in Q_{h}^{\ell,\mathrm{nc}}(K)\quad\forall\ K\in\mathcal{T}_{h}\ \mathrm{and}\\ &\int_{e}[q_{h}]\chi\,\mathrm{d}s=0\quad\forall\ \chi\in\mathbb{P}_{\ell-1}(e),\ \forall\ e\in\mathcal{E}^{i}\cup\mathcal{E}^{s} \end{array}\right\},\]
respectively.
### Discrete problem and well-posedness
Let us first set the continuous bilinear forms \(a_{1}:V\times V\to\mathbb{R}\), \(a_{2}:Q\times V\to\mathbb{R}\) and \(a_{3}:Q\times Q\to\mathbb{R}\) as
\[a_{1}(u,v) :=(u,v)_{\Omega}+a(u,v) \forall\ u,v\in V,\] \[a_{2}(p,v) :=\alpha(\nabla p,\nabla v)_{\Omega} \forall\ p\in Q\ \mathrm{and}\ \forall\ v\in V,\] \[a_{3}(p,q) :=\beta(p,q)_{\Omega}+\gamma(\nabla p,\nabla q)_{\Omega} \forall\ p,q\in Q\]
with the local counterparts \(a_{1}^{K},a_{2}^{K}\) and \(a_{3}^{K}\) for \(K\in\mathcal{T}_{h}\) and the piecewise versions \(a_{1}^{\mathrm{pw}}:=\sum_{K}a_{1}^{K},a_{2}^{\mathrm{pw}}:=\sum_{K}a_{2}^{K}\) and \(a_{3}^{\mathrm{pw}}:=\sum_{K}a_{3}^{K}\) respectively. For all \(u_{h},v_{h}\in V_{h}^{k,c}(K)\) or \(V_{h}^{k,\mathrm{nc}}(K)\) and \(p_{h},q_{h}\in Q_{h}^{\ell,c}(K)\) or \(Q_{h}^{\ell,\mathrm{nc}}(K)\) with \(k\geq 2\) and \(\ell\geq 1\), define the discrete counterparts by
\[a_{1}^{h}(u_{h},v_{h})|_{K} :=(\Pi_{k}u_{h},\Pi_{k}v_{h})_{K}+S_{1,0}^{K}((1-\Pi_{k})u_{h},(1-\Pi_{k})v_{h})+(\Pi_{k-2}(\nabla^{2}u_{h}),\Pi_{k-2}(\nabla^{2}v_{h}))_{K}\] \[\quad+S_{\nabla^{2}}^{K}((1-\Pi_{k}^{\nabla^{2}})u_{h},(1-\Pi_{k}^{\nabla^{2}})v_{h}), \tag{12a}\] \[a_{2}^{h}(p_{h},v_{h})|_{K} :=\alpha(\Pi_{\ell-1}\nabla p_{h},\Pi_{k-1}\nabla v_{h})_{K}, \tag{12b}\] \[a_{3}^{h}(p_{h},q_{h})|_{K} :=\beta(\Pi_{\ell}p_{h},\Pi_{\ell}q_{h})_{K}+S_{2,0}^{K}((1-\Pi_{\ell})p_{h},(1-\Pi_{\ell})q_{h})+\gamma(\Pi_{\ell-1}(\nabla p_{h}),\Pi_{\ell-1}(\nabla q_{h}))_{K}\] \[\quad+S_{\nabla}^{K}((1-\Pi_{\ell}^{\nabla})p_{h},(1-\Pi_{\ell}^{\nabla})q_{h}). \tag{12c}\]
The stabilisation terms \(S_{\nabla^{2}}^{K}\) and \(S_{1,0}^{K}\) on \(V_{h}^{k,c}(K)\) or \(V_{h}^{k,\mathrm{nc}}(K)\), and \(S_{\nabla}^{K}\) and \(S_{2,0}^{K}\) on \(Q_{h}^{\ell,c}(K)\) or \(Q_{h}^{\ell,\mathrm{nc}}(K)\) are positive definite bilinear forms, and there exist positive constants \(C_{\nabla^{2}},C_{1,0},C_{\nabla},C_{2,0}\) such that
\[C_{\nabla^{2}}^{-1}|v_{h}|_{2,K}^{2}\leq S_{\nabla^{2}}^{K}(v_{h},v_{h})\leq C_{\nabla^{2}}|v_{h}|_{2,K}^{2}\qquad\forall\ v_{h}\in\mathrm{Ker}(\Pi_{k}^{\nabla^{2}}), \tag{13a}\]
\[C_{1,0}^{-1}\|v_{h}\|_{K}^{2}\leq S_{1,0}^{K}(v_{h},v_{h})\leq C_{1,0}\|v_{h}\|_{K}^{2}\qquad\forall\ v_{h}\in\mathrm{Ker}(\Pi_{k}), \tag{13b}\]
\[C_{\nabla}^{-1}\gamma|q_{h}|_{1,K}^{2}\leq S_{\nabla}^{K}(q_{h},q_{h})\leq C_{\nabla}\gamma|q_{h}|_{1,K}^{2}\qquad\forall\ q_{h}\in\mathrm{Ker}(\Pi_{\ell}^{\nabla}), \tag{13c}\]
\[C_{2,0}^{-1}\beta\|q_{h}\|_{K}^{2}\leq S_{2,0}^{K}(q_{h},q_{h})\leq C_{2,0}\beta\|q_{h}\|_{K}^{2}\qquad\forall\ q_{h}\in\mathrm{Ker}(\Pi_{\ell}). \tag{13d}\]
The standard examples of the stabilisation terms satisfying (13a)-(13d) respectively are
\[S_{\nabla^{2}}^{K}(v_{h},w_{h})=h_{K}^{-2}\sum_{i}\mathrm{dof}_{i }(v_{h})\mathrm{dof}_{i}(w_{h})\quad\text{for all }v_{h},w_{h}\in V_{h}^{k,c}(K)\ \text{or}\ V_{h}^{k,\mathrm{nc}}(K)\ \text{and}\] \[S_{\nabla}^{K}(p_{h},q_{h})=\sum_{j}\mathrm{dof}_{j}(p_{h}) \mathrm{dof}_{j}(q_{h})\quad\text{for all}\ p_{h},q_{h}\in Q_{h}^{\ell,c}(K)\ \text{or}\ Q_{h}^{\ell,\mathrm{nc}}(K)\]
with \(S_{1,0}^{K}=h_{K}^{4}S_{\nabla^{2}}^{K}\) and \(S_{2,0}^{K}=h_{K}^{2}S_{\nabla}^{K}\). The global discrete bilinear forms \(a_{1}^{h}:V_{h}^{k,c}\times V_{h}^{k,c}\) (resp. \(V_{h}^{k,\mathrm{nc}}\times V_{h}^{k,\mathrm{nc}}\)), \(a_{2}^{h}:Q_{h}^{\ell,c}\times V_{h}^{k,c}\) (resp. \(Q_{h}^{\ell,\mathrm{nc}}\times V_{h}^{k,\mathrm{nc}}\)) and \(a_{3}^{h}:Q_{h}^{\ell,c}\times Q_{h}^{\ell,c}\) (resp. \(Q_{h}^{\ell,\mathrm{nc}}\times Q_{h}^{\ell,\mathrm{nc}}\)) are
defined by \(a_{1}^{h}(\cdot,\cdot):=\sum_{K\in\mathcal{T}_{h}}a_{1}^{h}(\cdot,\cdot)|_{K},a_{2} ^{h}(\cdot,\cdot):=\sum_{K\in\mathcal{T}_{h}}a_{2}^{h}(\cdot,\cdot)|_{K}\) and \(a_{3}^{h}(\cdot,\cdot):=\sum_{K\in\mathcal{T}_{h}}a_{3}^{h}(\cdot,\cdot)|_{K}\) for conforming (resp. nonconforming) VEM. The discrete problem is to find \((u_{h},p_{h})\in V_{h}^{k,c}\times Q_{h}^{\ell,c}\) (resp. \(V_{h}^{k,\mathrm{nc}}\times Q_{h}^{\ell,\mathrm{nc}}\)) such that
\[a_{1}^{h}(u_{h},v_{h})-a_{2}^{h}(p_{h},v_{h}) =(\tilde{f}_{h},v_{h})_{\Omega} \forall\;v_{h}\in V_{h}^{k,c}\;(\text{resp.}\;V_{h}^{k,\mathrm{nc}}), \tag{11a}\] \[a_{2}^{h}(q_{h},u_{h})+a_{3}^{h}(p_{h},q_{h}) =(\tilde{g}_{h},q_{h})_{\Omega} \forall\;q_{h}\in Q_{h}^{\ell,c}\;(\text{resp.}\;Q_{h}^{\ell, \mathrm{nc}}), \tag{11b}\]
with the discrete right-hand sides \((\tilde{f}_{h},v_{h})_{\Omega}:=(\tilde{f},\Pi_{k}v_{h})_{\Omega}\) and \((\tilde{g}_{h},q_{h})_{\Omega}:=(\tilde{g},\Pi_{\ell}q_{h})_{\Omega}\). To rewrite the above discrete problem, define the discrete product space \(\mathbf{H}_{\epsilon}^{h,c}:=V_{h}^{k,c}\times Q_{h}^{\ell,c}\) and the discrete operator \(\mathcal{A}_{h}^{c}:\mathbf{H}_{\epsilon}^{h,c}\to\mathbf{H}_{\epsilon}^{h,c}\) as
\[\langle\mathcal{A}_{h}^{c}(\vec{\boldsymbol{u}}_{h}),\vec{\boldsymbol{v}}_{h} \rangle:=a_{1}^{h}(u_{h},v_{h})-a_{2}^{h}(p_{h},v_{h})+a_{2}^{h}(q_{h},u_{h}) +a_{3}^{h}(p_{h},q_{h}) \tag{12}\]
for \(\vec{\boldsymbol{u}}_{h}=(u_{h},p_{h}),\vec{\boldsymbol{v}}_{h}=(v_{h},q_{h}) \in\mathbf{H}_{\epsilon}^{h,c}\). We also define the linear and bounded functional \(\mathcal{F}_{h}^{c}:\mathbf{H}_{\epsilon}^{h,c}\to\mathbb{R}\) as
\[\vec{\boldsymbol{v}}_{h}\mapsto\mathcal{F}_{h}^{c}(\vec{\boldsymbol{v}}_{h}): =(\tilde{f}_{h},v_{h})_{\Omega}+(\tilde{g}_{h},q_{h})_{\Omega},\]
and therefore problem (11) is recast as: Find \(\vec{\boldsymbol{u}}_{h}^{c}\in\mathbf{H}_{\epsilon}^{h,c}\) such that
\[\langle\mathcal{A}_{h}^{c}(\vec{\boldsymbol{u}}_{h}^{c}),\vec{\boldsymbol{v}}_{ h}\rangle=\mathcal{F}_{h}^{c}(\vec{\boldsymbol{v}}_{h})\qquad\forall\;\vec{ \boldsymbol{v}}_{h}\in\mathbf{H}_{\epsilon}^{h,c}. \tag{13}\]
Similarly we define \(\mathbf{H}_{\epsilon}^{h,\mathrm{nc}}:=V_{h}^{k,\mathrm{nc}}\times Q_{h}^{\ell,\mathrm{nc}}\), the discrete operators \(\mathcal{A}_{h}^{\mathrm{nc}}\) and \(\mathcal{F}_{h}^{\mathrm{nc}}\), and seek \(\vec{\boldsymbol{u}}_{h}^{\mathrm{nc}}\in\mathbf{H}_{\epsilon}^{h,\mathrm{nc}}\) such that
\[\langle\mathcal{A}_{h}^{\mathrm{nc}}(\vec{\boldsymbol{u}}_{h}),\vec{\boldsymbol {v}}_{h}\rangle=\mathcal{F}_{h}^{\mathrm{nc}}(\vec{\boldsymbol{v}}_{h})\qquad \forall\;\vec{\boldsymbol{v}}_{h}\in\mathbf{H}_{\epsilon}^{h,\mathrm{nc}}. \tag{14}\]
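To make the algebraic structure of the discrete problems concrete, the following minimal sketch (with hypothetical, pre-assembled inputs; it is not the code used for the numerical experiments of Section 7) illustrates the two standard implementation ingredients: the local VEM recipe of a consistency term plus a dofi-dofi stabilisation on the kernel of the projector, and the solution of the resulting coupled block system.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def local_vem_matrix(Pi_star, D, G, h_K, power=2):
    """Local matrix Pi*^T G Pi* + h_K^{-power} (I - D Pi*)^T (I - D Pi*).
    Pi_star: projector in DoF coordinates (n_poly x n_dof); D: DoFs of the
    polynomial basis (n_dof x n_poly); G: matrix of the bilinear form on the
    polynomial basis; power = 2 mimics the dofi-dofi scaling used for a^K."""
    n_dof = D.shape[0]
    kernel = np.eye(n_dof) - D @ Pi_star
    return Pi_star.T @ G @ Pi_star + h_K ** (-power) * kernel.T @ kernel

def solve_coupled_step(A1, A2, A3, F, G):
    """Solve  a1(u,v) - a2(p,v) = (f,v)  and  a2(q,u) + a3(p,q) = (g,q),
    with A2 assembled as the matrix of a2 tested against deflection functions."""
    K = sp.bmat([[A1, -A2], [A2.T, A3]], format="csr")
    sol = spla.spsolve(K, np.concatenate([F, G]))
    n_u = A1.shape[0]
    return sol[:n_u], sol[n_u:]
```

Note that the off-diagonal blocks enter with opposite signs, which is precisely why the quadratic form in the proof of Theorem 3.1 reduces to \(a_{1}^{h}(v_{h},v_{h})+a_{3}^{h}(q_{h},q_{h})\).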
Define the piecewise version \(\|\cdot\|_{\mathbf{H}_{\epsilon}^{h}}\) of the norm \(\|\cdot\|_{\mathbf{H}_{\epsilon}}\) for \(\vec{\boldsymbol{u}}=(u,p)\in H^{2}(\mathcal{T}_{h})\times H^{1}(\mathcal{T}_{h})\) as
\[\|\vec{\boldsymbol{u}}\|_{\mathbf{H}_{\epsilon}^{h}}^{2}:=\|u\|_{\Omega}^{2}+|u|_{2,h}^{2}+\beta\|p\|_{\Omega}^{2}+\gamma|p|_{1,h}^{2}:=\sum_{K\in\mathcal{T}_{h}}(\|u\|_{K}^{2}+|u|_{2,K}^{2}+\beta\|p\|_{K}^{2}+\gamma|p|_{1,K}^{2}).\]
The following result yields the solvability of the discrete problems.
**Theorem 3.1**.: _Problem (3.11) (resp. (3.12)) is well-posed in the space \(\mathbf{H}_{\epsilon}^{h,c}\) (resp. \(\mathbf{H}_{\epsilon}^{h,\mathrm{nc}}\)) equipped with the norm (3) (resp. \(\|\cdot\|_{\mathbf{H}_{\epsilon}^{h}}\))._
Proof.: The boundedness of \(\mathcal{A}_{h}^{c}\) and \(\mathcal{A}_{h}^{\mathrm{nc}}\) clearly follows from the stability of the \(L^{2}\)-projection operators \(\Pi_{k-2},\Pi_{k},\Pi_{\ell-1}\), and \(\Pi_{\ell}\) for \(k\geq 2\) and \(\ell\geq 1\), and from (10a)-(10d). For \(\vec{\boldsymbol{v}}_{h}\in\mathbf{H}_{\epsilon}^{h,c}\) or \(\mathbf{H}_{\epsilon}^{h,\mathrm{nc}}\), the definition (12) implies \(\langle\mathcal{A}_{h}(\vec{\boldsymbol{v}}_{h}),\vec{\boldsymbol{v}}_{h} \rangle=a_{1}^{h}(v_{h},v_{h})+a_{3}^{h}(q_{h},q_{h})\). The definition (12a) of \(a_{1}^{h}\) and the lower bounds of stabilisation terms (10a)-(10b) lead to
\[a_{1}^{h}(v_{h},v_{h})\gtrsim\|\Pi_{k}v_{h}\|_{\Omega}^{2}+\|(1-\Pi_{k})v_{h}\|_{\Omega}^{2}+\|\Pi_{k-2}(\nabla^{2}v_{h})\|_{\Omega}^{2}+|(1-\Pi_{k}^{\nabla^{2}})v_{h}|_{2,h}^{2}\gtrsim\|v_{h}\|_{\Omega}^{2}+|v_{h}|_{2,\Omega}^{2},\]
where we have employed \(\|(1-\Pi_{k-2})(\nabla^{2}v_{h})\|_{\Omega}\leq|(1-\Pi_{k}^{\nabla^{2}})v_{h}|_{2,h}\) and triangle inequalities in the last step. Analogously we can prove that \(a_{3}^{h}\) is coercive, and consequently \(\mathcal{A}_{h}^{c}\) (also \(\mathcal{A}_{h}^{\mathrm{nc}}\)) is coercive with respect to the weighted norm \(\|\cdot\|_{\mathbf{H}_{\epsilon}^{h}}\). Hence the Lax-Milgram lemma concludes the proof.
## 4 Error analysis for conforming VEM
This section recalls the standard conforming interpolation estimates and establishes the a priori error estimates in the energy norm \(\|\cdot\|_{\mathbf{H}_{\epsilon}}\) (cf. Theorem 4.1).
**Proposition 4.1** (Conforming interpolation [13, 21]).: _There exists an interpolation operator \(\tilde{I}_{h}^{c}:(V\cap H^{s}(\Omega))\times(Q\cap H^{r}(\Omega))\to V_{h}^{k,c}\times Q_{h}^{\ell,c}\) such that, for \(v\in V\cap H^{s}(\Omega)\) with \(2\leq s\leq k+1\) and \(q\in Q\cap H^{r}(\Omega)\) with \(1\leq r\leq\ell+1\), \(\tilde{I}_{h}^{c}(v,q):=(v_{I}^{c},q_{I}^{c})\) and_
\[|v-v_{I}^{c}|_{j,h}\lesssim h^{s-j}|v|_{s,\Omega}\quad\text{for }0\leq j\leq 2\quad \text{and}\quad|q-q_{I}^{c}|_{j,h}\lesssim h^{r-j}|q|_{r,\Omega}\quad\text{for }0\leq j\leq 1.\]
Throughout this paper, the oscillations of \(\tilde{f},\tilde{g}\in L^{2}(\Omega)\) for \(k\geq 2\) and \(\ell\geq 1\) are defined as
\[\mathrm{osc}_{2}(\tilde{f},\mathcal{T}_{h}):=\Big{(}\sum_{K\in \mathcal{T}_{h}}\|h_{K}^{2}(1-\Pi_{k})\tilde{f}\|_{K}^{2}\Big{)}^{1/2}\quad \text{and}\quad\mathrm{osc}_{1}(\tilde{g},\mathcal{T}_{h}):=\Big{(}\sum_{K\in \mathcal{T}_{h}}\|h_{K}(1-\Pi_{\ell})\tilde{g}\|_{K}^{2}\Big{)}^{1/2}.\]
**Theorem 4.1**.: _Given \(u\in V\cap H^{s}(\Omega)\) for \(s\geq 2\) and \(p\in Q\cap H^{r}(\Omega)\) for \(r\geq 1\), the unique solution \(\vec{u}_{h}^{c}=(u_{h}^{c},p_{h}^{c})\in\mathbf{H}_{\epsilon}^{h,c}=V_{h}^{k,c}\times Q_{h}^{\ell,c}\) for \(k\geq 2\) and \(\ell\geq 1\) (assume \(\ell\leq k\)) to (3.11) satisfies_
\[\|\vec{u}-\vec{u}_{h}^{c}\|_{\mathbf{H}_{\epsilon}} \lesssim\|u-u_{I}\|_{2,\Omega}+\|u-\Pi_{k}^{\nabla^{2}}u\|_{2,h}+\|p-p_{I}\|_{1,\Omega}+\|p-\Pi_{\ell}^{\nabla}p\|_{1,h}+|u-\Pi_{\ell}^{\nabla}u|_{1,h}\] \[\quad+\mathrm{osc}_{2}(\tilde{f},\mathcal{T}_{h})+\mathrm{osc}_{1}(\tilde{g},\mathcal{T}_{h})\lesssim h^{\min\{k-1,s-2,\ell,r-1\}}(\|\tilde{f}\|_{s-4,\Omega}+\|\tilde{g}\|_{r-2,\Omega}).\]
Proof.: We drop the superscript \(c\) (denoting the conforming case) in the proof just for the sake of notational simplicity. Let \(\vec{\mathbf{e}}_{h}:=(e_{h}^{u},e_{h}^{p})=(u_{I}-u_{h},p_{I}-p_{h})=\vec{\mathbf{u}}_{I}-\vec{\mathbf{u}}_{h}\in\mathbf{H}_{\epsilon}^{h,c}\) for \(\vec{\mathbf{u}}_{I}=(u_{I},p_{I})\). The coercivity of \(\mathcal{A}_{h}\) from Theorem 3.1 and the discrete problem (3.11) in the first step, and elementary algebra in the second step lead to
\[\|\vec{e}_{h}\|_{\mathbf{H}_{\epsilon}}^{2} \lesssim\mathcal{A}_{h}(\vec{u}_{I},\vec{e}_{h})-\mathcal{F}_{h}(\vec{e}_{h})=(a_{1}^{h}(u_{I}-\Pi_{k}^{\nabla^{2}}u,e_{h}^{u})+a_{1}^{\rm pw}(\Pi_{k}^{\nabla^{2}}u-u,e_{h}^{u}))+(a_{2}(p,e_{h}^{u})\] \[\quad-a_{2}^{h}(p_{I},e_{h}^{u}))+(a_{3}^{h}(p_{I}-\Pi_{\ell}^{\nabla}p,e_{h}^{p})+a_{3}^{\rm pw}(\Pi_{\ell}^{\nabla}p-p,e_{h}^{p}))+(a_{2}^{h}(e_{h}^{p},u_{I})-a_{2}(e_{h}^{p},u))\] \[\quad+((\tilde{f}-\tilde{f}_{h},e_{h}^{u})_{\Omega}+(\tilde{g}-\tilde{g}_{h},e_{h}^{p})_{\Omega})=:T_{1}+T_{2}+T_{3}+T_{4}+T_{5}. \tag{4.1}\]
The continuity of \(a_{1}^{h}\) and \(a_{3}^{h}\) from Theorem 3.1, and the Cauchy-Schwarz inequality for \(a_{1}^{\rm pw}\) and \(a_{3}^{\rm pw}\) show
\[T_{1} +T_{3}\lesssim(\|u_{I}-\Pi_{k}^{\nabla^{2}}u\|_{2,h}+\|u-\Pi_{k}^{\nabla^{2}}u\|_{2,h})(\|e_{h}^{u}\|_{\Omega}+|e_{h}^{u}|_{2,\Omega})+(\|p_{I}-\Pi_{\ell}^{\nabla}p\|_{1,{\rm pw}}+\|p-\Pi_{\ell}^{\nabla}p\|_{1,{\rm pw}})\] \[\times(\beta\|e_{h}^{p}\|_{\Omega}+\gamma|e_{h}^{p}|_{1,h})\lesssim(\|u-u_{I}\|_{2,h}+\|u-\Pi_{k}^{\nabla^{2}}u\|_{2,h}+\|p-p_{I}\|_{1,h}+\|p-\Pi_{\ell}^{\nabla}p\|_{1,h})\|\vec{e}_{h}\|_{\mathbf{H}_{\epsilon}}\] \[\lesssim h^{\min\{k-1,s-2,\ell,r-1\}}(|u|_{s,\Omega}+|p|_{r,\Omega})\|\vec{e}_{h}\|_{\mathbf{H}_{\epsilon}}, \tag{4.2}\]
with triangle inequalities in the second step, and Propositions 3.1-4.1 in the last step. Algebraic manipulations and the \(L^{2}\)-orthogonality of \(\Pi_{\ell-1}\) imply that
\[\alpha^{-1}(T_{2}+T_{4}) =(\nabla p-\Pi_{\ell-1}\nabla p_{I},\nabla e_{h}^{u})_{\Omega}+( \Pi_{\ell-1}\nabla p_{I},(1-\Pi_{k-1})\nabla e_{h}^{u})_{\Omega}\] \[\quad+(\Pi_{\ell-1}\nabla e_{h}^{p},\Pi_{k-1}\nabla u_{I}-\nabla u )_{\Omega}+((\Pi_{\ell-1}-1)\nabla e_{h}^{p},(1-\Pi_{\ell-1})\nabla u)_{\Omega}. \tag{4.3}\]
In addition, triangle inequalities and the \(L^{2}\)-orthogonality of \(\Pi_{\ell-1}\) provide
\[\|\nabla p-\Pi_{\ell-1}\nabla p_{I}\|_{\Omega}\leq|p-p_{I}|_{1,\Omega}+|p_{I}- \Pi_{\ell}^{\nabla}p|_{1,h}\lesssim|p-p_{I}|_{1,\Omega}+|p-\Pi_{\ell}^{\nabla}p| _{1,h}.\]
The second term in (4.3) vanishes because of the \(L^{2}\)-orthogonality of \(\Pi_{k-1}\) and assumption \(\ell\leq k\). Similarly, the third term in (4.3) reduces to \((\Pi_{\ell-1}\nabla e_{h}^{p},\Pi_{k-1}\nabla u_{I}-\nabla u)_{\Omega}=(\Pi_{ \ell-1}\nabla e_{h}^{p},\nabla u_{I}-\nabla u)_{\Omega}\). The Cauchy-Schwarz inequality in combination with the previous bounds in (4.3) result in
\[\alpha^{-1}(T_{2}+T_{4}) \lesssim(|p-p_{I}|_{1,\Omega}+|p-\Pi_{\ell}^{\nabla}p|_{1,h})|e_{h }^{u}|_{1,\Omega}+(|u-u_{I}|_{1,\Omega}+|u-\Pi_{\ell}^{\nabla}u|_{1,h})|e_{h}^{ p}|_{1,\Omega}\] \[\lesssim h^{\min\{\ell,r-1,k,s-1\}}(|e_{h}^{u}|_{1,\Omega}+|e_{h}^{ p}|_{1,\Omega}),\]
where Propositions 3.1-4.1 were used for the last inequality. The \(L^{2}\)-orthogonality of \(\Pi_{k}\) and \(\Pi_{\ell}\) together with Proposition 3.1 and \(\gamma\geq 1\) allow us to assert that
\[T_{5} =(h_{\mathcal{T}_{h}}^{2}(\tilde{f}-\Pi_{k}\tilde{f}),h_{\mathcal{ T}_{h}}^{-2}(1-\Pi_{k})e_{h}^{u})_{\Omega}+(h_{\mathcal{T}_{h}}(\tilde{g}-\Pi_{ \ell}\tilde{g}),h_{\mathcal{T}_{h}}^{-1}(1-\Pi_{\ell})e_{h}^{p})_{\Omega}\] \[\lesssim\mathrm{osc}_{2}(\tilde{f},\mathcal{T}_{h})|e_{h}^{u}|_{ 2,\Omega}+\mathrm{osc}_{1}(\tilde{g},\mathcal{T}_{h})|e_{h}^{p}|_{1,\Omega}\] \[\lesssim h^{\min\{k+3,s-2,\ell+2,r-1\}}(\|\tilde{f}\|_{s-4,\Omega}+ \|\tilde{g}\|_{r-2,\Omega})(|e_{h}^{u}|_{2,\Omega}+\gamma|e_{h}^{p}|_{1,\Omega}). \tag{4.4}\]
The estimates (4.2)-(4.4) in (4.1) show that
\[\|\vec{e}_{h}\|_{\mathbf{H}_{\ell}}\lesssim h^{\min\{k-1,s-2,\ell,r-1\}}(|u|_{s,\Omega}+|p|_{r,\Omega}+\|\tilde{f}\|_{s-4,\Omega}+\|\tilde{g}\|_{r-2,\Omega}).\]
This and Proposition 4.1 in the triangle inequality \(\|\vec{u}-\vec{u}_{h}^{c}\|_{\mathbf{H}_{\ell}^{h}}\leq\|\vec{u}-\vec{u}_{I}\|_{\mathbf{H}_{\ell}^{h}}+\|\vec{e}_{h}\|_{\mathbf{H}_{\ell}^{h}}\) followed by regularity estimates conclude the proof of the theorem.
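For orientation, we record a concrete instance of this rate (an illustration only, under the additional regularity assumption \(u\in H^{4}(\Omega)\) and \(p\in H^{2}(\Omega)\)): in the lowest-order conforming case \(k=2\), \(\ell=1\) with \(s=4\) and \(r=2\) the exponent is \(\min\{k-1,s-2,\ell,r-1\}=1\), so the estimate just proved reduces to

\[\|\vec{u}-\vec{u}_{h}^{c}\|_{\mathbf{H}_{\ell}^{h}}\lesssim h\,\big(|u|_{4,\Omega}+|p|_{2,\Omega}+\|\tilde{f}\|_{0,\Omega}+\|\tilde{g}\|_{0,\Omega}\big),\]

that is, first-order convergence in the energy norm.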
## 5 Error analysis for nonconforming VEM
Since the nonconforming discrete spaces \(V_{h}^{k,\mathrm{nc}}\) and \(Q_{h}^{\ell,\mathrm{nc}}\) need not be subsets of the continuous spaces \(V\) and \(Q\), this section presents two constructions of conforming companion operators that connect the nonconforming VE spaces to the continuous Sobolev spaces. The two crucial ideas in the design are
* first to map a nonconforming VE space to a conforming VE space of one degree higher, and
* second to modify the linear operator constructed in the first step through standard bubble-function techniques to achieve additional orthogonality properties (in particular, \(L^{2}\)-orthogonality).
### 5.1 Construction of companion operators
Let \(\mathrm{dof}_{i}^{\ell,\mathrm{c}}\) for \(i=1,\ldots,\mathrm{N}_{1}^{\ell,\mathrm{c}}\) and \(\mathrm{dof}_{j}^{\ell,\mathrm{nc}}\) for \(j=1,\ldots,\mathrm{N}_{1}^{\ell,\mathrm{nc}}\) be the linear functionals associated with DoFs of the VE spaces \(Q_{h}^{\ell,\mathrm{c}}\) and \(Q_{h}^{\ell,\mathrm{nc}}\) of dimensions \(\mathrm{N}_{1}^{\ell,\mathrm{c}}\) and \(\mathrm{N}_{1}^{\ell,\mathrm{nc}}\) for \(\ell\geq 1\). Let \(\mathrm{dof}_{i}^{k,c}\) for \(i=1,\ldots,\mathrm{N}_{2}^{k,\mathrm{c}}\) and \(\mathrm{dof}_{j}^{k,\mathrm{nc}}\) for \(j=1,\ldots,\mathrm{N}_{2}^{k,\mathrm{nc}}\) be the linear functionals associated with DoFs of the VE spaces \(V_{h}^{k,\mathrm{c}}\) and \(V_{h}^{k,\mathrm{nc}}\) of dimensions \(\mathrm{N}_{2}^{k,\mathrm{c}}\) and \(\mathrm{N}_{2}^{k,\mathrm{nc}}\) for \(k\geq 2\).
**Theorem 5.1**.: _There exists a linear operator \(J_{1}:V_{h}^{k,\mathrm{nc}}\to V_{h}^{k+1,\mathrm{c}}\) satisfying the following properties:_
* \(\mathrm{dof}_{j}^{k,\mathrm{nc}}(J_{1}v_{h})=\mathrm{dof}_{j}^{k,\mathrm{nc}}( v_{h})\) _for all_ \(j=1,\ldots,\mathrm{N}_{2}^{k,\mathrm{nc}}\)_,_
* \(a^{\mathrm{pw}}(v_{h}-J_{1}v_{h},\chi)=0\) _for all_ \(\chi\in\mathbb{P}_{k}(\mathcal{T}_{h})\)_,_
* \(\nabla(v_{h}-J_{1}v_{h})\perp(\mathbb{P}_{k-3}(\mathcal{T}_{h}))^{2}\) _in_ \((L^{2}(\Omega))^{2}\) _for_ \(k\geq 3\)_,_
* \(\sum_{j=0}^{2}h^{j-2}|v_{h}-J_{1}v_{h}|_{j,h}\lesssim\inf_{\chi\in \mathbb{P}_{k}(\mathcal{T}_{h})}|v_{h}-\chi|_{2,h}+\inf_{v\in V}|v_{h}-v|_{2,h}\)_._
_Construction of \(J_{1}\)._ First we observe that the DoFs of \(V_{h}^{k,\mathrm{nc}}\) form a subset of the DoFs of \(V_{h}^{k+1,c}\). Next we define a linear operator \(J_{1}:V_{h}^{k,\mathrm{nc}}\to V_{h}^{k+1,c}\) through the DoFs of \(V_{h}^{k+1,c}\), for \(v_{h}\in V_{h}^{k,\mathrm{nc}}\), by
\[\mathrm{dof}_{j}^{k,\mathrm{nc}}(J_{1}v_{h}) =\mathrm{dof}_{j}^{k,\mathrm{nc}}(v_{h})\quad\forall\;j=1,\ldots, \mathrm{N}_{2}^{k,\mathrm{nc}},\] \[\nabla J_{1}v_{h}(z) =\frac{1}{|\mathcal{T}_{z}|}\sum_{K\in\mathcal{T}_{z}}\nabla \Pi_{k}^{\nabla^{2}}v_{h}|_{K}(z)\quad\forall\;z\in\mathcal{V}^{i},\] \[\fint_{K}J_{1}v_{h}\chi\,\mathrm{d}\mathbf{x} =\fint_{K}\Pi_{k}^{\nabla^{2}}v_{h}\chi\,\mathrm{d}\mathbf{x}\quad \forall\;\chi\in\mathbb{M}_{k-3}^{*}(K),\]
where the set \(\mathcal{T}_{z}:=\{K\in\mathcal{T}_{h}:z\in K\}\) of cardinality \(|\mathcal{T}_{z}|\) contains the neighbouring polygons \(K\) sharing the vertex \(z\). We assign \(\nabla J_{1}v_{h}(z)=\mathbf{0}\) for the boundary vertices \(z\in\mathcal{V}^{s}\) if \(z\) is a corner (the angle at \(z\) is not equal to \(0,\pi,2\pi\)) and for all \(z\in\mathcal{V}^{c}\). If the angle at \(z\in\mathcal{V}^{s}\) is equal to \(0,\pi,2\pi\), then we assign
\[\partial_{\mathbf{t}}(J_{1}v_{h})(z)=0\quad\text{and}\quad\partial_{\mathbf{n}}(J_{1}v _{h})(z)=\frac{1}{|\mathcal{T}_{z}|}\sum_{K\in\mathcal{T}_{z}}\partial_{\mathbf{n}} (\Pi_{k}^{\nabla^{2}}v_{h})|_{K}(z).\]
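In computational terms, the vertex-gradient assignment for an interior vertex is a plain arithmetic average of the projected gradients over the patch \(\mathcal{T}_{z}\). The following minimal Python sketch illustrates this averaging step only; all names are hypothetical and it is not part of the analysis.

```python
# Illustration only: the DoF rule for the gradient of J_1 v_h at an interior
# vertex z averages the gradients of the local projections Pi_k^{D^2} v_h|_K
# over the polygons K in the patch T_z.  Hypothetical names.
import numpy as np

def vertex_gradient_dof(grad_proj_at_z):
    """grad_proj_at_z: list of 2-vectors, the gradient of Pi_k^{D^2} v_h|_K
    evaluated at z, one entry per polygon K sharing the vertex z."""
    grads = np.asarray(grad_proj_at_z, dtype=float)   # shape (|T_z|, 2)
    return grads.mean(axis=0)                         # (1/|T_z|) * sum over K

# Example: three polygons meet at z with slightly different projected gradients.
print(vertex_gradient_dof([[1.0, 0.0], [0.9, 0.1], [1.1, -0.1]]))  # ~ [1. 0.]
```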
Proof of Theorem 5.1(a).: This is an immediate consequence of the definition of \(J_{1}\).
Proof of Theorem 5.1(b).: Let \(\chi\in\mathbb{P}_{k}(K)\) and set the notation \(M_{\mathbf{n}\mathbf{n}}(\chi)=\partial_{\mathbf{n}\mathbf{n}}(\chi),T(\chi)=\partial_{\mathbf{n}} (\Delta\chi+\partial_{\mathbf{\tau}\mathbf{\tau}}\chi)\), and \([M_{\mathbf{n}\mathbf{\tau}}(\chi)]_{z_{j}}=\partial_{\mathbf{n}\mathbf{\tau}}(\chi)|_{e_{j-1}}( z_{j})-\partial_{\mathbf{n}\mathbf{\tau}}(\chi)|_{e_{j}}(z_{j})\) for \(j=1,\ldots,N_{K}\) with \(e_{0}=e_{N_{K}}\). Since \(\chi\in H^{4}(K)\) and \(v_{h}-J_{1}v_{h}\in H^{2}(K)\), an integration by parts leads to
\[a^{K}(v_{h}-J_{1}v_{h},\chi)=\int_{K}\Delta^{2}\chi(v_{h}-J_{1}v_{h})\,\mathrm{d }\mathbf{x}+\int_{\partial K}M_{\mathbf{n}\mathbf{n}}(\chi)\partial_{\mathbf{n}}(v_{h}-J_{1}v_{h })\,\mathrm{d}s\]
\[-\int_{\partial K}T(\chi)(v_{h}-J_{1}v_{h})\,\mathrm{d}s+\sum_{j=1}^{N_{K}}[M_{ \boldsymbol{n\tau}}\chi]_{z_{j}}(v_{h}-J_{1}v_{h})(z_{j})=0,\]
with part 5.1(a) being used in the last step. This holds for any \(K\in\mathcal{T}_{h}\) and concludes the proof of Theorem 5.1(b).
Proof of Theorem 5.1(c).: For any \(v_{h}\in V_{h}^{k,\mathrm{nc}},\chi\in\mathbb{P}_{k-2}(K)\) with \(k\geq 3\) and \(K\in\mathcal{T}_{h}\), an integration by parts and Theorem 5.1(a) show that
\[(\nabla(v_{h}-J_{1}v_{h}),\nabla\chi)_{K}=-(v_{h}-J_{1}v_{h},\Delta\chi)_{K}+( v_{h}-J_{1}v_{h},\partial_{\boldsymbol{n}}\chi)_{\partial K}=0.\]
This proves 5.1(c).
Proof of Theorem 5.1(d).: Since \((\Pi_{k}^{\nabla^{2}}v_{h}-J_{1}v_{h})|_{K}\in V_{h}^{k+1,c}(K)\), the norm equivalence found in, e.g., [31, Lemma 3.6] shows that
\[|\Pi_{k}^{\nabla^{2}}v_{h}-J_{1}v_{h}|_{2,K}\simeq h_{K}^{-1}\|\mathrm{Dof}^{k+1,c}(\Pi_{k}^{\nabla^{2}}v_{h}-J_{1}v_{h})\|_{\ell^{2}}, \tag{5.1}\]
for the vector \(\mathrm{Dof}^{k+1,c}\) with arguments as the local DoFs of \(V_{h}^{k+1,c}(K)\). Let \(z\) be an interior vertex in \(\mathcal{V}^{i}\cap\mathcal{V}_{K}\) belonging to an edge \(e\in\mathcal{E}_{K}\). The equality \(J_{1}v_{h}(z)=v_{h}(z)\) from Theorem 5.1(a) and the inverse estimate for polynomials imply
\[|(\Pi_{k}^{\nabla^{2}}v_{h}-J_{1}v_{h})|_{K}(z)| \leq\|(\Pi_{k}^{\nabla^{2}}v_{h}-v_{h})\|_{\infty,e}\lesssim h_{e} ^{-1/2}\|v_{h}-\Pi_{k}^{\nabla^{2}}v_{h}\|_{e}\] \[\lesssim h_{e}^{-1}\|v_{h}-\Pi_{k}^{\nabla^{2}}v_{h}\|_{K}+|v_{h}- \Pi_{k}^{\nabla^{2}}v_{h}|_{1,K}\lesssim h_{K}|v_{h}-\Pi_{k}^{\nabla^{2}}v_{h} |_{2,K}.\]
The third step follows from the trace inequality, and the last step from (11) and (12). Let \(z\) be an interior vertex in \(\mathcal{V}^{i}\cap\mathcal{V}_{K}\) or a boundary vertex in \(\mathcal{V}^{s}\cap\mathcal{V}_{K}\) with angle at \(z\) equal to \(\pi\), and let the polygons \(K_{1}=K,\ldots,K_{|\mathcal{T}_{z}|}\) share the node \(z\). Suppose \((\Pi_{k}^{\nabla^{2}}v_{h})_{i}=\Pi_{k}^{\nabla^{2}}v_{h}|_{K_{i}}\), and \(K_{i}\) and \(K_{i+1}\) are two neighbouring polygons. Then
\[\nabla(\Pi_{k}^{\nabla^{2}}v_{h}-J_{1}v_{h})_{1}(z) =\frac{1}{|\mathcal{T}_{z}|}\sum_{j=2}^{|\mathcal{T}_{z}|}(\nabla(\Pi_{k}^{\nabla^{2}}v_{h})_{1}-\nabla(\Pi_{k}^{\nabla^{2}}v_{h})_{j})(z)\] \[=\frac{1}{|\mathcal{T}_{z}|}\sum_{j=2}^{|\mathcal{T}_{z}|}\sum_{i=1}^{j-1}(\nabla(\Pi_{k}^{\nabla^{2}}v_{h})_{i}-\nabla(\Pi_{k}^{\nabla^{2}}v_{h})_{i+1})(z). \tag{5.2}\]
A consequence of the mesh regularity assumptions (11)-(12) is that \(|\mathcal{T}_{z}|\) is uniformly bounded for any \(z\in\mathcal{V}\). Hence it suffices to bound the term \((\nabla(\Pi_{k}^{\nabla^{2}}v_{h})_{1}-\nabla(\Pi_{k}^{\nabla^{2}}v_{h})_{2})(z)\) for \(z\in e\) and an edge \(e\in K_{1}\cap K_{2}\). In addition, the inverse estimate for polynomials leads to
\[|(\nabla(\Pi_{k}^{\nabla^{2}}v_{h})_{1}-\nabla(\Pi_{k}^{\nabla^{2}}v_{h})_{2} )(z)|\leq\|\nabla(\Pi_{k}^{\nabla^{2}}v_{h})_{1}-\nabla(\Pi_{k}^{\nabla^{2}}v_ {h})_{2}\|_{\infty,e}\lesssim h_{e}^{-1/2}\|[\nabla\Pi_{k}^{\nabla^{2}}v_{h}]_{ e}\|_{e}.\]
Let \(v\in V\) be an arbitrary function and \(a_{e}:=\fint_{e}\nabla(v-v_{h})\,\mathrm{d}s\). Since \(a_{e}\) is uniquely defined from the definition of \(v_{h}\in V_{h}^{k,\mathrm{nc}}\), rewrite \(h_{e}^{-1/2}\|[\nabla\Pi_{k}^{\nabla^{2}}v_{h}]_{e}\|_{e}=h_{e}^{-1/2}\|[ \nabla\Pi_{k}^{\nabla^{2}}v_{h}-\nabla v+a_{e}]_{e}\|_{e}\). Let \(\omega_{e}\) denote the edge patch of \(e\). Then the trace inequality and the triangle inequality show
\[h_{e}^{-1/2}\|[\nabla\Pi_{k}^{\nabla^{2}}v_{h}-\nabla v+a_{e}]_{e }\|_{e}\lesssim h_{e}^{-1}\|\nabla\Pi_{k}^{\nabla^{2}}v_{h}-\nabla v+a_{e}\|_{ \omega_{e}}+|\Pi_{k}^{\nabla^{2}}v_{h}-v|_{2,\omega_{e}}\] \[\lesssim h_{e}^{-1}(\|\Pi_{k}^{\nabla^{2}}v_{h}-v_{h}\|_{1, \omega_{e}}+\|\nabla(v_{h}-v)+a_{e}\|_{\omega_{e}})+|\Pi_{k}^{\nabla^{2}}v_{h}- v_{h}|_{2,\omega_{e}}+|v_{h}-v|_{2,\omega_{e}}.\]
Since \(\fint_{e}\nabla(v_{h}-v)+a_{e}\,\mathrm{d}s=0\), the Poincare-Friedrichs inequality and (12) imply \(\|\nabla(v_{h}-v)+a_{e}\|_{\omega_{e}}\lesssim h_{e}|v_{h}-v|_{2,\omega_{e}}\). This and (11) in the above displayed estimate provide
\[h_{e}^{-1/2}\|[\nabla\Pi_{k}^{\nabla^{2}}v_{h}]_{e}\|_{e}\lesssim|\Pi_{k}^{\nabla^{2}}v_{h}-v_{h}|_{2,\omega_{e}}+|v_{h}-v|_{2,\omega_{e}}. \tag{5.3}\]
The combination (5.2)-(5.3) results in
\[|h_{z}\nabla(\Pi^{\nabla^{2}}_{k}v_{h}-J_{1}v_{h})|_{K}(z)|\lesssim h_{K}(|v_{h}- \Pi^{\nabla^{2}}_{k}v_{h}|_{2,\omega_{e}}+|v-v_{h}|_{2,\omega_{e}}).\]
Theorem 5.1(a), the Cauchy-Schwarz inequality, and the trace inequality lead, for any \(\chi\in\mathbb{M}_{k-2}(e)\) and \(e\in\mathcal{E}_{K}\setminus\mathcal{E}^{c}\), to
\[\int_{e}\partial_{\mathbf{n}}(\Pi^{\nabla^{2}}_{k}v_{h}-J_{1}v_{h}) \chi\,\mathrm{d}s \leq h_{e}^{1/2}\|\partial_{\mathbf{n}}(\Pi^{\nabla^{2}}_{k}v_{h}-v_{ h})\|_{e}\] \[\lesssim|v_{h}-\Pi^{\nabla^{2}}_{k}v_{h}|_{1,\omega_{e}}+h_{e}|v _{h}-\Pi^{\nabla^{2}}_{k}v_{h}|_{2,\omega_{e}}\lesssim h_{e}|v_{h}-\Pi^{\nabla ^{2}}_{k}v_{h}|_{2,\omega_{e}},\]
with (3.3) in the end. Analogously we can prove for any \(\chi\in\mathbb{M}_{k-3}(e)\) and \(e\in\mathcal{E}_{K}\cap\mathcal{E}^{i}\) that
\[\fint_{e}(\Pi^{\nabla^{2}}_{k}v_{h}-J_{1}v_{h})\chi\,\mathrm{d}s \lesssim h_{e}|v_{h}-\Pi^{\nabla^{2}}_{k}v_{h}|_{2,\omega_{e}}.\]
Again Theorem 5.1(a), the Cauchy-Schwarz inequality, and (3.3) show for any \(\chi\in\mathbb{M}_{k-4}(K)\) that
\[\fint_{K}(\Pi^{\nabla^{2}}_{k}v_{h}-J_{1}v_{h})\chi\,\mathrm{d}\mathbf{x}\lesssim h _{K}|v_{h}-\Pi^{\nabla^{2}}_{k}v_{h}|_{2,K},\]
and \(\fint_{K}(\Pi^{\nabla^{2}}_{k}v_{h}-J_{1}v_{h})\chi\,\mathrm{d}\mathbf{x}=0\) for \(\chi\in\mathbb{M}^{\star}_{k-3}(K)\). The definition of \(\Pi^{\nabla^{2}}_{k}\) from (3.1) implies \(|v_{h}-\Pi^{\nabla^{2}}_{k}v_{h}|_{2,h}\leq\inf_{\chi\in\mathbb{P}_{k}(\mathcal{T}_{h})}|v_{h}-\chi|_{2,h}\). The previous estimates in (5.1) prove that
\[|\Pi^{\nabla^{2}}_{k}v_{h}-J_{1}v_{h}|_{2,h}\lesssim\inf_{\chi\in\mathbb{P}_{ k}(\mathcal{T}_{h})}|v_{h}-\chi|_{2,h}+\inf_{v\in V}|v_{h}-v|_{2,h}.\]
Hence the triangle inequality \(|v_{h}-J_{1}v_{h}|_{2,h}\leq|v_{h}-\Pi^{\nabla^{2}}_{k}v_{h}|_{2,h}+|\Pi^{ \nabla^{2}}_{k}v_{h}-J_{1}v_{h}|_{2,h}\) and (3.3) prove the estimate in Theorem 5.1(d) for the term \(|v_{h}-J_{1}v_{h}|_{2,h}\). The Poincare-Friedrichs inequality implies \(\sum_{j=0}^{1}h^{j-2}|v_{h}-J_{1}v_{h}|_{j,h}\lesssim|v_{h}-J_{1}v_{h}|_{2,h}\).
The following theorem establishes the construction of the second companion operator which will be used in the sequel.
**Theorem 5.2**.: _There exists a linear operator \(J_{2}:V^{k,\mathrm{nc}}_{h}\to V\) such that it satisfies Theorem 5.1(a)-5.1(d) and in addition the \(L^{2}\)-orthogonality property. In particular,_
* \(\mathrm{dof}_{j}^{k,\mathrm{nc}}(J_{2}v_{h})=\mathrm{dof}_{j}^{k,\mathrm{nc}}(v_{h})\) _for all_ \(j=1,\ldots,\mathrm{N}_{2}^{k,\mathrm{nc}}\)_,_
* \(a^{\mathrm{pw}}(v_{h}-J_{2}v_{h},\chi)=0\) _for all_ \(\chi\in\mathbb{P}_{k}(\mathcal{T}_{h})\)_,_
* \(\nabla(v_{h}-J_{2}v_{h})\perp(\mathbb{P}_{k-3}(\mathcal{T}_{h}))^{2}\) _in_ \((L^{2}(\Omega))^{2}\) _for_ \(k\geq 3\)_,_
* \(v_{h}-J_{2}v_{h}\perp\mathbb{P}_{k}(\Omega)\) _in_ \(L^{2}(\Omega)\)_,_
* \(\sum_{j=0}^{2}h^{j-2}|v_{h}-J_{2}v_{h}|_{j,h}\lesssim\inf_{\chi\in\mathbb{P}_ {k}(\mathcal{T}_{h})}|v_{h}-\chi|_{2,\mathrm{pw}}+\inf_{v\in V}|v_{h}-v|_{2,h}\)_._
_Construction of \(J_{2}\)._ Let \(b_{K}\in H^{2}_{0}(K)\) be a bubble-function supported in \(K\) and \(v_{K}\in\mathbb{P}_{k}(K)\) be the Riesz representative of the linear functional \(\mathbb{P}_{k}(K)\to\mathbb{R}\), defined by, \(w_{k}\mapsto(v_{h}-J_{1}v_{h},w_{k})_{K}\), for \(w_{k}\in\mathbb{P}_{k}(K)\) in the Hilbert space \(\mathbb{P}_{k}(K)\) endowed with the weighted scalar product \((b_{K}\bullet,\bullet)_{K}\). Given \(v_{h}\in V^{k,\mathrm{nc}}_{h}\), the function \(\tilde{v}_{h}\in\mathbb{P}_{k}(\mathcal{T}_{h})\) with \(\tilde{v}_{h}|_{K}:=v_{K}\) and the bubble-function \(b_{h}|_{K}:=b_{K}\in H^{2}_{0}(\Omega)\) satisfy
\[(b_{h}\tilde{v}_{h},w_{k})_{\Omega}=(v_{h}-J_{1}v_{h},w_{k})_{ \Omega}\qquad\forall\;w_{k}\in\mathbb{P}_{k}(\mathcal{T}_{h}), \tag{5.4}\]
and define
\[J_{2}v_{h}:=J_{1}v_{h}+b_{h}\tilde{v}_{h}\in V. \tag{5.5}\]
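In practice, the local problem (5.4) is a small weighted mass-matrix system on each polygon. The following sketch shows its algebraic content only; it assumes a quadrature rule on \(K\) and the ability to evaluate the residual \(v_{h}-J_{1}v_{h}\) at the quadrature points, all names are hypothetical, and it is not the implementation underlying the analysis.

```python
# Sketch of the bubble correction defining J_2: on one polygon K, solve
#   (b_K * v_K, w)_K = (v_h - J_1 v_h, w)_K   for all w in P_k(K),
# and return the correction b_K * v_K.  Hypothetical names; illustration only.
import numpy as np

def bubble_correction(basis, b_K, residual, quad_pts, quad_wts):
    """basis: callables spanning P_k(K); b_K: bubble in H^2_0(K);
    residual: callable x -> (v_h - J_1 v_h)(x); quadrature points/weights on K."""
    phi = np.array([[m(x) for m in basis] for x in quad_pts])   # (nq, nb)
    w = np.asarray(quad_wts, dtype=float)
    b = np.array([b_K(x) for x in quad_pts], dtype=float)
    r = np.array([residual(x) for x in quad_pts], dtype=float)
    M = phi.T @ (phi * (w * b)[:, None])   # M_ij ~ int_K b_K m_i m_j dx
    rhs = phi.T @ (w * r)                  # rhs_i ~ int_K (v_h - J_1 v_h) m_i dx
    coeffs = np.linalg.solve(M, rhs)       # coefficients of the representative v_K
    return lambda x: b_K(x) * sum(c * m(x) for c, m in zip(coeffs, basis))
```

Provided the bubble is chosen positive in the interior of \(K\), the matrix \(M\) is symmetric positive definite, so the local solve is well posed.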
Proof of Theorem 5.2(a).: Since \(b_{K}=0=\partial_{\mathbf{n}}(b_{K})\) on \(\partial K\) for any \(K\in\mathcal{T}_{h}\), there holds, for any \(v_{h}\in V_{h}^{k,\mathrm{nc}}\),
\[J_{2}v_{h}(z) =J_{1}v_{h}(z)=v_{h}(z) \text{for any }\;z\in\mathcal{V},\] \[\int_{e}\partial_{\mathbf{n}}(J_{2}v_{h})\chi\,\mathrm{d}s =\int_{e}\partial_{\mathbf{n}}(J_{1}v_{h})\chi\,\mathrm{d}s=\int_{e} \partial_{\mathbf{n}}(v_{h})\chi\,\mathrm{d}s \text{for }\chi\in\mathbb{M}_{k-2}(e)\text{ and }e\in\mathcal{E},\] \[\int_{e}J_{2}(v_{h})\chi\,\mathrm{d}s =\int_{e}J_{1}(v_{h})\chi\,\mathrm{d}s=\int_{e}v_{h}\chi\,\mathrm{ d}s \text{for }\chi\in\mathbb{M}_{k-3}(e)\text{ and }e\in\mathcal{E}.\]
For \(\chi\in\mathbb{M}_{k-4}(K)\) and \(K\in\mathcal{T}_{h}\), the definition (5.5) of \(J_{2}\) and (5.4) show \(\int_{K}J_{2}v_{h}\chi\,\mathrm{d}\mathbf{x}=\int_{K}(J_{1}v_{h}+b_{K}v_{K})\chi\,\mathrm{d}\mathbf{x}=\int_{K}v_{h}\chi\,\mathrm{d}\mathbf{x}\). This concludes the proof of Theorem 5.2(a).
Proof of Theorem 5.2(b)-5.2(c).: This results from Theorem 5.2(a) and it follows as in the proof of Theorem 5.1(b)-5.1(c).
Proof of Theorem 5.2(d).: This is an immediate consequence of the definition (5.5) of \(J_{2}\) and (5.4).
Proof of Theorem 5.2(e).: The Poincare-Friedrichs inequality implies \(\sum_{j=0}^{1}h^{j-2}|v_{h}-J_{2}v_{h}|_{j,h}\lesssim|v_{h}-J_{2}v_{h}|_{2,h}\). Hence it remains to bound the term \(|v_{h}-J_{2}v_{h}|_{2,h}\). The triangle inequality and (5.5) lead to
\[|v_{h}-J_{2}v_{h}|_{2,h}\leq|v_{h}-J_{1}v_{h}|_{2,h}+|b_{h}\tilde{v}_{h}|_{2,h}. \tag{5.6}\]
For any \(\chi\in\mathbb{P}_{k}(K)\) and \(K\in\mathcal{T}_{h}\), there exist inverse estimates
\[\|\chi\|_{K}^{2}\lesssim(b_{K},\chi^{2})_{K}\lesssim\|\chi\|_{K}^{2}\quad \text{and}\quad\|\chi\|_{K}\lesssim\sum_{m=0}^{2}h_{K}^{m}|b_{K}\chi|_{m,K} \lesssim\|\chi\|_{K}. \tag{5.7}\]
This implies
\[|b_{K}v_{K}|_{2,K}\lesssim h_{K}^{-2}\|v_{K}\|_{K}. \tag{5.8}\]
The first inequality in (5.7), and (5.4) with \(w_{k}=v_{K}\in\mathbb{P}_{k}(K)\) result in
\[\|v_{K}\|_{K}^{2}\lesssim(b_{K}v_{K},v_{K})_{K}=(v_{h}-J_{1}v_{h},v_{K})_{K}.\]
Hence \(\|v_{K}\|_{K}\lesssim\|v_{h}-J_{1}v_{h}\|_{K}\). This, the estimates (5.6) and (5.8), and Theorem 5.1(d) conclude the proof of Theorem 5.2(e).
The same idea applies to the VE space \(Q_{h}^{\ell,\mathrm{nc}}\) for the second variable: the following two theorems construct \(J_{3}\) (as \(J_{1}\)) and then modify \(J_{3}\) to obtain \(J_{4}\) (as \(J_{2}\)) with the additional \(L^{2}\)-orthogonality. We highlight only the main steps in the construction of \(J_{3}\) to avoid repeating the arguments.
**Theorem 5.3**.: _There exists a linear operator \(J_{3}:Q_{h}^{\ell,\mathrm{nc}}\to Q_{h}^{\ell+1,c}\) satisfying the following properties:_
* \(\mathrm{dof}_{j}^{\ell,\mathrm{nc}}(J_{3}q_{h})=\mathrm{dof}_{j}^{\ell,\mathrm{nc}}(q_{h})\) _for all_ \(j=1,\ldots,\mathrm{N}_{1}^{\ell,\mathrm{nc}}\)_,_
* \((\nabla_{\mathrm{pw}}(q_{h}-J_{3}q_{h}),\nabla_{\mathrm{pw}}\chi)_{\Omega}=0\) _for all_ \(\chi\in\mathbb{P}_{\ell}(\mathcal{T}_{h})\)_,_
* \(\sum_{j=0}^{1}h^{j-1}|q_{h}-J_{3}q_{h}|_{j,h}\lesssim\inf_{\chi\in \mathbb{P}_{\ell}(\mathcal{T}_{h})}|q_{h}-\chi|_{1,h}+\inf_{q\in Q}|q_{h}-q|_{1,h}.\)__
_Construction of \(J_{3}\)._ First we observe that the DoFs of \(Q_{h}^{\ell,\mathrm{nc}}\) form a subset of the DoFs of \(Q_{h}^{\ell+1,\mathrm{c}}\). We define a linear operator \(J_{3}:Q_{h}^{\ell,\mathrm{nc}}\to Q_{h}^{\ell+1,c}\) through the DoFs of \(Q_{h}^{\ell+1,c}\), for \(q_{h}\in Q_{h}^{\ell,\mathrm{nc}}\), by
\[\mathrm{dof}_{j}^{\ell,\mathrm{nc}}(J_{3}q_{h})=\mathrm{dof}_{j}^{\ell,\mathrm{nc }}(q_{h})\quad\forall\;j=1,\ldots,\mathbb{N}_{1}^{\ell,\mathrm{nc}},\]
\[J_{3}q_{h}(z) =\frac{1}{|\mathcal{T}_{z}|}\sum_{K\in\mathcal{T}_{z}}\Pi^{\nabla}_{\ell}q_{h}|_{K}(z)\quad\forall\;z\in\mathcal{V}^{i}\cup\mathcal{V}^{c},\] \[\fint_{K}J_{3}q_{h}\chi\,\mathrm{d}\mathbf{x}=\fint_{K}\Pi^{\nabla}_{\ell}q_{h}\chi\,\mathrm{d}\mathbf{x}\quad\forall\;\chi\in\mathbb{M}^{*}_{\ell-1}(K).\]
Proof of Theorem 5.3(a).: This is an immediate consequence of the definition of \(J_{3}\).
Proof of Theorem 5.3(b).: An integration by parts and Theorem 5.3(a) prove, for any \(\chi\in\mathbb{P}_{\ell}(K)\) and \(K\in\mathcal{T}_{h}\), that
\[(\nabla(q_{h}-J_{3}q_{h}),\nabla\chi)_{K}=-(q_{h}-J_{3}q_{h},\Delta\chi)_{K}+( q_{h}-J_{3}q_{h},\partial_{\mathbf{n}}\chi)_{\partial K}=0.\]
This concludes the proof of Theorem 5.3(b).
Proof of Theorem 5.3(c).: This follows analogously as the proof of Theorem 5.1(d) with obvious modifications.
**Theorem 5.4**.: _There exists a linear operator \(J_{4}:Q^{\ell,\mathrm{nc}}_{h}\to Q\) such that it satisfies Theorem 5.3(a)-5.3(c) and in addition the \(L^{2}\)-orthogonality property. In particular,_
1. \(\mathrm{dof}_{j}^{\ell,\mathrm{nc}}(J_{4}q_{h})=\mathrm{dof}_{j}^{\ell,\mathrm{nc}}(q_{h})\) _for all_ \(j=1,\ldots,\mathrm{N}^{\ell,\mathrm{nc}}_{1}\)_,_
2. \((\nabla_{\mathrm{pw}}(q_{h}-J_{4}q_{h}),\nabla_{\mathrm{pw}}\chi)_{\Omega}=0\) _for all_ \(\chi\in\mathbb{P}_{\ell}(\mathcal{T}_{h})\)_,_
3. \(q_{h}-J_{4}q_{h}\perp\mathbb{P}_{\ell}(\Omega)\) _in_ \(L^{2}(\Omega)\)_,_
4. \(\sum_{j=0}^{1}h^{j-1}|q_{h}-J_{4}q_{h}|_{j,h}\lesssim\inf_{\chi\in\mathbb{P}_{ \ell}(\mathcal{T}_{h})}|q_{h}-\chi|_{1,h}+\inf_{q\in Q}|q_{h}-q|_{1,h}\)_._
### 5.2 Energy error estimate
This section proves the energy error estimate for the nonconforming case invoking the companion operators constructed in Subsection 5.1.
**Proposition 5.1** (Nonconforming interpolation).: _There exists an interpolation operator \(\vec{I}^{\mathrm{nc}}_{h}:(V\cap H^{s}(\Omega))\times(Q\cap H^{r}(\Omega))\to V^{k,\mathrm{nc}}_{h}\times Q^{\ell,\mathrm{nc}}_{h}\) such that, for \(v\in V\cap H^{s}(\Omega)\) with \(2\leq s\leq k+1\) and \(q\in Q\cap H^{r}(\Omega)\) with \(1\leq r\leq\ell+1\), \(\vec{I}^{\mathrm{nc}}_{h}(v,q):=(v^{\mathrm{nc}}_{I},q^{\mathrm{nc}}_{I})\) satisfies_
\[|v-v^{\mathrm{nc}}_{I}|_{j,h}\lesssim h^{s-j}|v|_{s,\Omega}\quad\text{for $0\leq j \leq 2$}\quad\text{and}\quad|q-q^{\mathrm{nc}}_{I}|_{j,h}\lesssim h^{r-j}|q|_{r, \Omega}\quad\text{for $0\leq j\leq 1$}.\]
**Theorem 5.5**.: _Given \(u\in V\cap H^{s}(\Omega)\) for \(s\geq 2\) and \(p\in Q\cap H^{r}(\Omega)\) for \(r\geq 1\), the unique solution \(\vec{u}^{\mathrm{nc}}_{h}=(u^{\mathrm{nc}}_{h},p^{\mathrm{nc}}_{h})\in\mathbf{ H}^{h,\mathrm{nc}}_{h}=V^{k,\mathrm{nc}}_{h}\times Q^{\ell,\mathrm{nc}}_{h}\) for \(k\geq 2\) and \(\ell\geq 1\) to (3.12) satisfies_
\[\|\vec{u}-\vec{u}^{\mathrm{nc}}_{h}\|_{\mathbf{H}^{s}_{\ell}} \lesssim\|u-u_{I}\|_{2,h}+\|u-\Pi^{\nabla^{2}}_{k}u\|_{2,h}+\|p-p _{I}\|_{1,h}+\|p-\Pi^{\nabla}_{\ell}p\|_{1,h}\] \[\quad+|u-\Pi^{\nabla}_{\ell}u|_{1,h}+h|p-\Pi^{\nabla}_{k-2}p\|_{1,h}+\mathrm{osc}_{2}(\tilde{f},\mathcal{T}_{h})+\mathrm{osc}_{1}(\tilde{g}, \mathcal{T}_{h})\] \[\lesssim h^{\min\{k-1,\ell,s-2,r-1\}}(\|\tilde{f}\|_{s-4,\Omega}+\| \tilde{g}\|_{r-2,\Omega}).\]
Proof.: Let \(\vec{u}_{I}:=(u_{I},p_{I})\in V^{k,\mathrm{nc}}_{h}\times Q^{\ell,\mathrm{nc}}_ {h}\) be an interpolation of \(\vec{u}\) and \(\vec{e}_{h}=(e^{u}_{h},e^{p}_{h}):=(u_{I}-u_{h},p_{I}-p_{h})\). The coercivity of \(\mathcal{A}_{h}\) from Theorem 3.1 and the discrete problem (3.12) lead to
\[\|\vec{e}_{h}\|_{\mathbf{H}^{s}_{\ell}}^{2} \lesssim\mathcal{A}_{h}(\vec{e}_{h},\vec{e}_{h})=\mathcal{A}_{h}(\vec{u}_{I},\vec{e}_{h})-\mathcal{F}_{h}(\vec{e}_{h})\] \[=a^{h}_{1}(u_{I},e^{u}_{h})-a^{h}_{2}(p_{I},e^{u}_{h})+a^{h}_{2}(e^{p}_{h},u_{I})+a^{h}_{3}(p_{I},e^{p}_{h})-(\tilde{f}_{h},e^{u}_{h})_{\Omega}-(\tilde{g}_{h},e^{p}_{h})_{\Omega}\] \[=(a^{h}_{1}(u_{I}-\Pi^{\nabla^{2}}_{k}u,e^{u}_{h})+a^{\mathrm{pw}}_{1}(\Pi^{\nabla^{2}}_{k}u-u,e^{u}_{h}))+(a^{\mathrm{pw}}_{1}(u,e^{u}_{h})-(\tilde{f},e^{u}_{h})_{\Omega})+(\tilde{f}-\tilde{f}_{h},e^{u}_{h})_{\Omega}\]
\[(-a_{2}^{h}(p_{I},e_{h}^{u})+a_{2}^{h}(e_{h}^{p},u_{I}))+(a_{3}^{h}(p_{I}-\Pi_{\ell}^{\nabla}p,e_{h}^{p})+a_{3}^{\rm pw}(\Pi_{\ell}^{\nabla}p-p,e_{h}^{p}))\] \[+(a_{3}^{\rm pw}(p,e_{h}^{p})-(\tilde{g},e_{h}^{p})_{\Omega})+(\tilde{g}-\tilde{g}_{h},e_{h}^{p})_{\Omega}=:T_{1}+T_{2}+T_{3}+T_{4}+T_{5}+T_{6}+T_{7},\]
with an elementary algebra in the last two steps. The boundedness of \(\mathcal{A}_{h}\) from Theorem 3.1 and the Cauchy-Schwarz inequality for \(a_{1}^{\rm pw}\) and \(a_{3}^{\rm pw}\) show the chain of bounds
\[T_{1}+T_{5} \lesssim(\|u_{I}-\Pi_{k}^{\nabla^{2}}u\|_{\Omega}+\|u-\Pi_{k}^{ \nabla^{2}}u\|_{\Omega})\|e_{h}^{u}\|_{\Omega}+(|u_{I}-\Pi_{k}^{\nabla^{2}}u|_ {2,h})|e_{h}^{u}\|_{2,h}\] \[\quad+(\|p_{I}-\Pi_{\ell}^{\nabla}p\|_{\Omega}+\|p-\Pi_{\ell}^{ \nabla}p\|_{\Omega})\beta\|e_{h}^{p}\|_{\Omega}+(|p_{I}-\Pi_{\ell}^{\nabla}p| _{1,h}+|p-\Pi_{\ell}^{\nabla}p|_{1,h})\gamma|e_{h}^{p}\|_{1,h}\] \[\lesssim(\|u-u_{I}\|_{2,h}+\|u-\Pi_{k}^{\nabla^{2}}u\|_{2,h}+\|p -p_{I}\|_{1,h}+\|p-\Pi_{\ell}^{\nabla}p\|_{1,h})\|\vec{e}_{h}\|_{\mathbf{H}_{ \varepsilon}^{h}}\] \[\lesssim h^{\min\{k-1,s-2,\ell,r-1\}}(|u|_{s,\Omega}+|p|_{r, \Omega})\|\vec{e}_{h}\|_{\mathbf{H}_{\varepsilon}^{h}}, \tag{10}\]
with the triangle inequality in the second step, and Propositions 3.1-5.1 in the last step. Taking \(\vec{v}=(J_{2}e_{h}^{u},J_{4}e_{h}^{p})\) in the continuous problem (4) allows us to assert that
\[T_{2} +T_{4}+T_{6}\] \[=a_{1}^{\rm pw}(u,e_{h}^{u}-J_{2}e_{h}^{u})+a_{2}(p,J_{2}e_{h}^{u })+(\tilde{f},J_{2}e_{h}^{u}-e_{h}^{u})_{\Omega}+a_{3}^{\rm pw}(p,e_{h}^{p}-J _{4}e_{h}^{p})\] \[\quad-a_{2}(J_{4}e_{h}^{p},u)+(\tilde{g},J_{4}e_{h}^{p}-e_{h}^{p}) _{\Omega}-(a_{2}^{h}(p_{I},e_{h}^{u})-a_{2}^{h}(e_{h}^{p},u_{I}))\] \[=\left(a_{1}^{\rm pw}(u-\Pi_{k}^{\nabla^{2}}u,e_{h}^{u}-J_{2}e_{h }^{u})+(\tilde{f}-\Pi_{k}\tilde{f},J_{2}e_{h}^{u}-e_{h}^{u})_{\Omega}+a_{3}^{ \rm pw}(p-\Pi_{\ell}^{\nabla}p,e_{h}^{p}-J_{4}e_{h}^{p})\right.\] \[\quad+(\tilde{g}-\Pi_{\ell}\tilde{g},J_{4}e_{h}^{p}-e_{h}^{p})_{ \Omega}\right)+\left(a_{2}(p,J_{2}e_{h}^{u})-a_{2}^{h}(p_{I},e_{h}^{u})+a_{2}^{ h}(e_{h}^{p},u_{I})-a_{2}(J_{4}e_{h}^{p},u)\right)=:T_{8}+T_{9}.\]
The last step follows from Theorems 5.2(b)-5.2(d) and 5.4(b)-5.4(c). The Cauchy-Schwarz inequality and Theorems 5.2(e) and 5.4(d) show for \(T_{8}\) that
\[T_{8} \lesssim\|u-\Pi_{k}^{\nabla^{2}}u\|_{\Omega}\|e_{h}^{u}\|_{\Omega }+(|u-\Pi_{k}^{\nabla^{2}}u|_{2,h}+\operatorname{osc}_{2}(\tilde{f},\mathcal{ T}_{h}))|e_{h}^{u}|_{2,h}+\beta\|p-\Pi_{\ell}^{\nabla}p\|_{\Omega}\|e_{h}^{p}\|_{\Omega}\] \[\quad+(\gamma|p-\Pi_{\ell}^{\nabla}p|_{1,h}+\operatorname{osc}_{ 1}(\tilde{g},\mathcal{T}_{h}))|e_{h}^{p}|_{1,h}\] \[\lesssim h^{\min\{k-1,s-2,\ell,r-1\}}(|u|_{s,\Omega}+|p|_{r, \Omega}+|\tilde{f}|_{s-4,\Omega}+|\tilde{g}|_{r-2,\Omega})\|\vec{e}_{h}\|_{ \mathbf{H}_{\varepsilon}^{h}}.\]
Next, an elementary algebraic manipulation for \(T_{9}\) provides
\[\alpha^{-1}T_{9} =(\nabla p-\Pi_{\ell-1}\nabla p_{I},\nabla J_{2}e_{h}^{u})_{\Omega}+(\Pi_{\ell-1}\nabla p_{I},\nabla J_{2}e_{h}^{u}-\Pi_{k-1}\nabla e_{h}^{u})_{\Omega}\] \[\quad+(\Pi_{\ell-1}\nabla e_{h}^{p},\Pi_{k-1}\nabla u_{I}-\nabla u)_{\Omega}+(\Pi_{\ell-1}\nabla e_{h}^{p}-\nabla J_{4}e_{h}^{p},\nabla u-\nabla\Pi_{\ell}^{\nabla}u)_{\Omega}, \tag{5.10}\]
with the \(L^{2}\)-orthogonality of \(\Pi_{\ell-1}\) and Theorem 5.4(b) in the last term. The Cauchy-Schwarz inequality, the triangle inequality \(\|\nabla p-\Pi_{\ell-1}\nabla p_{I}\|_{\Omega}\leq|p-p_{I}|_{1,h}+\|(1-\Pi_{ \ell-1})\nabla_{\rm pw}p_{I}\|_{\Omega}\) for the first term in (11) lead to
\[(\nabla p-\Pi_{\ell-1}\nabla p_{I},\nabla J_{2}e_{h}^{u})_{\Omega} \leq(|p-p_{I}|_{1,h}+\|(1-\Pi_{\ell-1})\nabla_{\rm pw}p_{I}\|_{\Omega})|J_{2}e_{ h}^{u}|_{1,\Omega}\] \[\quad\lesssim(|p-p_{I}|_{1,h}+\|\nabla p-\Pi_{\ell-1}\nabla p\|_{ \Omega})|J_{2}e_{h}^{u}|_{2,\Omega}\lesssim h^{\min\{\ell,r-1\}}|p|_{r,\Omega}|e_{ h}^{u}|_{2,h}, \tag{12}\]
having employed \(\|\nabla p_{I}-\Pi_{\ell-1}\nabla p_{I}\|_{K}\leq\|\nabla p_{I}-\Pi_{\ell-1} \nabla p\|_{K}\) for any \(K\in\mathcal{T}_{h}\) followed by the triangle inequality in the second step, and Propositions 3.1-5.1 and the stability of \(J_{2}\) from Theorem 5.2(e) in the last step. For \(k=2\), the Cauchy-Schwarz inequality and the \(L^{2}\)-stability of \(\Pi_{\ell-1}\) for the second term in (11) imply
\[(\Pi_{\ell-1}\nabla p_{I},\nabla J_{2}e_{h}^{u}-\Pi_{1}\nabla e_{h }^{u})_{\Omega}\leq|p_{I}|_{1,h}\|\nabla J_{2}e_{h}^{u}-\Pi_{1}\nabla e_{h}^{u} \|_{\Omega}\] \[\quad\leq(|p_{I}-p|_{1,h}+|p|_{1,\Omega})(|e_{h}^{u}-J_{2}e_{h}^{ u}|_{1,h}+|e_{h}^{u}-\Pi_{1}e_{h}^{u}|_{1,h})\lesssim h|p|_{1,\Omega}|e_{h}^{u}|_{2,h}.\]
The second step results from the triangle inequality, and the last step from Propositions 3.1-5.1 and Theorem 5.2(e). Theorem 5.2(c) in the second term of (5.10) for \(k\geq 3\) leads to
\[(\Pi_{\ell-1}\nabla p_{I},\nabla J_{2}e_{h}^{u}-\Pi_{k-1}\nabla e_{h}^{u})_{ \Omega}=(\Pi_{\ell-1}\nabla p_{I}-\nabla_{\rm pw}(\Pi_{k-2}^{\nabla}p), \nabla J_{2}e_{h}^{u}-\Pi_{k-1}\nabla e_{h}^{u})_{\Omega}\]
\[\leq(\|\nabla p-\Pi_{\ell-1}\nabla p\|_{\Omega}+|p-p_{I}|_{1,h}+|p- \Pi_{k-2}^{\nabla}p|_{1,h})(|e_{h}^{u}-J_{2}e_{h}^{u}|_{1,h}+\|(1-\Pi_{k-1}) \nabla_{\text{pw}}e_{h}^{u}\|_{\Omega})\] \[\lesssim h^{\min\{\ell,r,k-1\}}|p|_{r,\Omega}|e_{h}^{u}|_{2,h},\]
where we have used the bound \(\|\nabla p_{I}-\Pi_{\ell-1}\nabla p_{I}\|_{K}\leq\|\nabla p_{I}-\Pi_{\ell-1} \nabla p\|_{K}\) for any \(K\in\mathcal{T}_{h}\) and the triangle inequality in the second step, and Propositions 3.1-5.1 and Theorem 5.2(e) in the last step. Similarly the remaining two terms in (5.10) are handled as
\[(\Pi_{\ell-1}\nabla e_{h}^{p},\Pi_{k-1}\nabla u_{I}-\nabla u)_{ \Omega} \lesssim h^{\min\{k-1,s-1\}}|u|_{s,\Omega}|e_{h}^{p}|_{1,h}, \tag{5.12a}\] \[(\Pi_{\ell-1}\nabla e_{h}^{p}-\nabla J_{4}e_{h}^{p},\nabla u- \nabla_{\text{pw}}(\Pi_{\ell}^{\nabla}u))_{\Omega} \lesssim h^{\min\{\ell,s-1\}}|u|_{s,\Omega}|e_{h}^{p}|_{1,h}. \tag{5.12b}\]
The combination (5.11)-(5.12b) and the observation \(\alpha\leq 1\leq\gamma\) in (5.10) prove that
\[T_{9}\lesssim h^{\min\{k-1,s-1,\ell,r-1\}}(|u|_{s,\Omega}+|p|_{r,\Omega})(|e_{ h}^{u}|_{2,h}+\gamma|e_{h}^{p}|_{1,h}).\]
The \(L^{2}\)-orthogonality of \(\Pi_{k}\) and \(\Pi_{\ell}\), and Proposition 3.1 result in
\[T_{3}+T_{7} =(h_{\mathcal{T}_{h}}^{2}(1-\Pi_{k})\tilde{f},h_{\mathcal{T}_{h}}^{-2}(1-\Pi_{k})e_{h}^{u})_{\Omega}+(h_{\mathcal{T}_{h}}(1-\Pi_{\ell})\tilde{g},h_{\mathcal{T}_{h}}^{-1}(1-\Pi_{\ell})e_{h}^{p})_{\Omega}\] \[\lesssim\operatorname{osc}_{2}(\tilde{f},\mathcal{T}_{h})|e_{h}^{u}|_{2,h}+\operatorname{osc}_{1}(\tilde{g},\mathcal{T}_{h})|e_{h}^{p}|_{1,h}\] \[\lesssim h^{\min\{k+1,s-2,\ell+1,r-1\}}(|\tilde{f}|_{s-4,\Omega}+|\tilde{g}|_{r-2,\Omega})(|e_{h}^{u}|_{2,h}+\gamma|e_{h}^{p}|_{1,h}),\]
with \(\gamma\geq 1\) in the end. The previous estimates in (5.9) readily prove that
\[\|\vec{e}_{h}\|_{\mathbf{H}_{\varepsilon}^{h}}\lesssim h^{\min\{k-1,s-2,\ell,r-1\}}(|u|_{s,\Omega}+|p|_{r,\Omega}+|\tilde{f}|_{s-4,\Omega}+|\tilde{g}|_{r-2,\Omega}).\]
This and Proposition 5.1 in the triangle inequality \(\|\vec{u}-\vec{u}_{h}\|_{\mathbf{H}_{\varepsilon}^{h}}\leq\|\vec{u}-\vec{u}_ {I}\|_{\mathbf{H}_{\varepsilon}^{h}}+\|\vec{e}_{h}\|_{\mathbf{H}_{\varepsilon }^{h}}\) followed by regularity estimates conclude the proof of the theorem.
**Remark 5.1** (Best-approximation for lowest-order case).: _If we reconstruct \(J_{2}\) for \(k=2\) with an additional \(H^{1}\)-orthogonality in Theorem 5.2(c) as \(\nabla(v_{h}-J_{2}v_{h})\perp(\mathbb{P}_{0}(\mathcal{T}_{h}))^{2}\) in \((L^{2}(\Omega))^{2}\) (see Subsection 6.3 for a definition), then the error estimate in Theorem 5.5 for \(k=2\) and \(\ell=1\) can be written in the best-approximation form_
\[\|\vec{u}-\vec{u}_{h}^{\text{nc}}\|_{\mathbf{H}_{\varepsilon}^{h}} \lesssim\|u-u_{I}\|_{2,h}+\|u-\Pi_{2}^{\nabla^{2}}u\|_{2,h}+\|p-p_{I }\|_{1,h}+\|p-\Pi_{1}^{\nabla}p\|_{1,h}\] \[\quad+|u-\Pi_{1}^{\nabla}u|_{1,h}+\operatorname{osc}_{2}(\tilde{ f},\mathcal{T}_{h})+\operatorname{osc}_{1}(\tilde{g},\mathcal{T}_{h}).\]
## 6 A posteriori error analysis
This section contains the derivation of a posteriori error indicators and the proof of their robustness. We provide the details for the nonconforming case and a remark for the conforming case to avoid the repetition of arguments.
### 6.1 Preliminaries
We collect here the following local estimates, proven in [31, Lemma 3.2] and [31, Lemmas 3.3-3.4], respectively.
**Lemma 6.1**.: _For any \(\epsilon>0\), there exists a positive constant \(c(\epsilon)\) such that_
\[|v|_{1,K}\lesssim\epsilon h_{K}|v|_{2,K}+c(\epsilon)h_{K}^{-1}\|v\|_{K}\qquad \forall v\in H^{2}(K).\]
**Lemma 6.2**.: _For every \(v\in H^{2}(K)\) such that \(\Delta^{2}v\in\mathbb{P}_{k-4}(K)\), there exists a polynomial \(p\in\mathbb{P}_{k}(K)\) satisfying_
\[\Delta^{2}v=\Delta^{2}p\qquad\text{in }K.\]
_Moreover, the following estimates hold_
\[|p|_{2,K}\lesssim h_{K}^{2}\|\Delta^{2}v\|_{K}\lesssim|v|_{2,K},\qquad|p|_{2,K} \lesssim h_{K}^{-2}\|v\|_{K},\qquad\|p\|_{K}\lesssim\|v\|_{K}.\]
### 6.2 Standard estimates
We start with technical results (inverse estimates and norm equivalences) that are required in the analysis of the nonconforming formulations. These tools are available in the literature only for the conforming case. There are two terminologies, namely original and enhanced VE spaces, in the VE literature (see [3] for more details). This paper utilises the enhanced versions, but we first prove the results for the original space and then extend them to the enhanced space. Let us denote the original local nonconforming VE space for the deflections by \(\widetilde{V}_{h}^{k,\mathrm{nc}}(K)\), defined by
\[\widetilde{V}_{h}^{k,\mathrm{nc}}(K):=\left\{\begin{array}{c}v_{h}\in H^{2}( K)\cap C^{0}(\partial K):\Delta^{2}v_{h}\in\mathbb{P}_{k-4}(K),\;v_{h}|_{e}\in \mathbb{P}_{k}(e)\;\mathrm{and}\;\Delta v_{h}|_{e}\in\mathbb{P}_{k-2}(e)\\ \forall\;e\in\mathcal{E}_{K},\;v_{h}|_{s_{j}}\in C^{1}(s_{j}),\;\int_{e_{m_{j} }}(v_{h}-\Pi_{k}^{\nabla^{2}}v_{h})\chi\,\mathrm{d}s=0\quad\forall\;\chi\in \mathbb{P}_{k-2}(e_{m_{j}}),\;\mathrm{and}\\ \int_{e_{m_{j}+i}}(v_{h}-\Pi_{k}^{\nabla^{2}}v_{h})\chi\,\mathrm{d}s=0\quad \forall\;\chi\in\mathbb{P}_{k-3}(e_{m_{j}+i})\;\mathrm{for}\;i=1,\ldots,n_{j}; \;j=1,\ldots,\tilde{N}_{K}\end{array}\right\}.\]
**Lemma 6.3** (Inverse estimates).: _For any \(\tilde{v}\in\widetilde{V}_{h}^{k,\mathrm{nc}}(K)\) and \(K\in\mathcal{T}_{h}\), there holds_
\[|\tilde{v}|_{2,K}\lesssim h_{K}^{-2}\|\tilde{v}\|_{K}\quad\text{and}\quad| \tilde{v}|_{1,K}\lesssim h_{K}^{-1}\|\tilde{v}\|_{K}. \tag{6.1}\]
Proof.: Given \(\tilde{v}\in\widetilde{V}_{h}^{k,\mathrm{nc}}(K)\), \(\Delta^{2}\tilde{v}\in\mathbb{P}_{k-4}(K)\) and consequently, we can choose a polynomial \(p\in\mathbb{P}_{k}(K)\) from Lemma 6.2. The triangle inequality and the second bound in Lemma 6.2 assert that
\[|\tilde{v}|_{2,K}\leq|\tilde{v}-p|_{2,K}+|p|_{2,K}\lesssim|\tilde{v}-p|_{2,K} +h_{K}^{-2}\|\tilde{v}\|_{K}.\]
If we prove \(|\tilde{v}-p|_{2,K}\lesssim h_{K}^{-2}\|\tilde{v}-p\|_{K}\), then the triangle inequality together with the third bound in Lemma 6.2 will provide
\[|\tilde{v}-p|_{2,K}\lesssim h_{K}^{-2}\|\tilde{v}-p\|_{K}\lesssim h_{K}^{-2}( \|\tilde{v}\|_{K}+\|p\|_{K})\lesssim h_{K}^{-2}\|\tilde{v}\|_{K}.\]
Hence we concentrate on showing \(|\tilde{v}-p|_{2,K}\lesssim h_{K}^{-2}\|\tilde{v}-p\|_{K}\). First we note that since \(\Delta^{2}\tilde{v}=\Delta^{2}p\) and \(\tilde{v}-p\in\widetilde{V}_{h}^{k,\mathrm{nc}}(K)\), without loss of generality we can assume that \(\Delta^{2}\tilde{v}=0\). Then we define, for a fixed \(\tilde{v}\), the following set
\[S(K):=\{w\in H^{2}(K):w=\tilde{v}\text{ on }\partial K,\;\mathrm{and}\; \int_{e}\partial_{\mathbf{n}}(\tilde{v}-w)\chi\,\mathrm{d}s=0\quad\forall \chi\in\mathbb{P}_{k-2}(e),\,e\in\mathcal{E}_{K}\} \tag{6.2}\]
and the fact \(a^{K}(\tilde{v},\tilde{v}-w)=0\) for \(w\in S(K)\) leads to
\[|\tilde{v}|_{2,K}\leq|w|_{2,K}\quad\forall w\in S(K). \tag{6.3}\]
Next, we define \(Q_{K}\tilde{v}\in\widetilde{V}_{h}^{k+1,\mathrm{c}}(K)\) for \(\tilde{v}\in\widetilde{V}_{h}^{k,\mathrm{nc}}(K)\) through the DoFs as
\[\mathrm{Dof}_{\partial K}^{k+1,\mathrm{c}}(Q_{K}\tilde{v})=\mathrm{Dof}_{ \partial K}^{k+1,\mathrm{c}}(\tilde{v}),\quad\text{and}\;\mathrm{Dof}_{K}^{k+ 1,\mathrm{c}}(Q_{K}\tilde{v})=0, \tag{6.4}\]
where
\[\mathrm{Dof}^{k+1,\mathrm{c}}(\bullet)=\underbrace{\mathrm{Dof}_{\partial K}^ {k+1,\mathrm{c}}(\bullet)}_{\text{boundary DoFs}}\cup\underbrace{\mathrm{Dof}_{K}^ {k+1,\mathrm{c}}(\bullet)}_{\text{interior DoFs}}.\]
Observe that, for \(\tilde{v}\in H^{2}(K)\), its tangential derivative \(\partial_{\mathbf{\tilde{t}}}\tilde{v}\) is well-defined along \(\partial K\). If \(z\) is not a corner in \(\partial K\), then we can assign that \(\partial_{\mathbf{n}}\tilde{v}(z)=0\), and if \(z\) is a corner then the two tangential derivatives at \(z\) will suffice to define \(\nabla\tilde{v}(z)\) uniquely. This implies that \(Q_{K}\tilde{v}\) is well-defined. In addition, since \(Q_{K}\tilde{v}\) is uniquely determined by boundary DoFs of \(\tilde{v}\) and \(\tilde{v}|_{e}\in\mathbb{P}_{k}(e)\) for all \(e\in\mathcal{E}_{K}\), we have
\[Q_{K}\tilde{v}=\tilde{v}\quad\text{on }\partial K.\]
This and (6.4) show that \(Q_{K}\tilde{v}\in S(K)\) and, consequently (6.3) imply the first inequality in
\[|\tilde{v}|_{2,K}\leq|Q_{K}\tilde{v}|_{2,K}\lesssim h_{K}^{-2}\|Q_{K}\tilde{v} \|_{K}\lesssim h_{K}^{-1}\|\mathrm{Dof}^{k+1,\mathrm{c}}(Q_{K}\tilde{v})\|_{ \ell^{2}}=h_{K}^{-1}\|\mathrm{Dof}_{\partial K}^{k+1,\mathrm{c}}(Q_{K}\tilde{v} )\|_{\ell^{2}}\]
with the inverse estimate and the norm equivalence available for conforming VE functions in the next two inequalities, and (6.4) in the last equality.
Let us examine each contribution to the DoFs in the expression above. Firstly, for any \(z\in\mathcal{V}_{K}\) it can be inferred that
\[|\tilde{v}(z)| \leq\|\tilde{v}\|_{L^{\infty}(\partial K)}\lesssim h_{K}^{-1/2}\| \tilde{v}\|_{L^{2}(\partial K)}\] \[\lesssim h_{K}^{-1}\|\tilde{v}\|_{K}+|\tilde{v}|_{1,K}\] \[\lesssim(1+c(\epsilon))h_{K}^{-1}\|\tilde{v}\|_{K}+\epsilon h_{K }|\tilde{v}|_{2,K},\]
where we have used the inverse estimate for polynomials in 1d in the second step, and the trace inequality and Lemma 6.1 in the last two steps.
Secondly, the similar arguments show
\[h_{z}|\nabla\tilde{v}(z)| =h_{z}|\nabla Q_{K}\tilde{v}(z)|\leq h_{z}|Q_{K}\tilde{v}|_{1, \infty,\partial K}\lesssim\|Q_{K}\tilde{v}\|_{\infty,\partial K}\] \[\lesssim h_{K}^{-1/2}\|\tilde{v}\|_{\partial K}\lesssim h_{K}^{-1 }\|\tilde{v}\|_{K}+\epsilon h_{K}|\tilde{v}|_{2,K}.\]
Note that the inverse inequality for polynomials is suitable here since \(\partial_{\mathbf{n}}Q_{K}\tilde{v}|_{e}\in\mathbb{P}_{k-1}(e)\). For the remaining boundary moments, the Cauchy-Schwarz inequality and the inverse estimate lead to
\[\Big{|}\int_{e}\partial_{\mathbf{n}}\tilde{v}\chi_{k-2}\,\mathrm{ d}s\Big{|} =\Big{|}\int_{e}\partial_{\mathbf{n}}(Q_{K}\tilde{v})\chi_{k-2}\, \mathrm{d}s\Big{|}\leq\|\partial_{\mathbf{n}}(Q_{K}\tilde{v})\|_{e}h_{e}^{1/2}\] \[\lesssim h_{e}^{-1/2}\|Q_{K}\tilde{v}\|_{e}=h_{e}^{-1/2}\|\tilde {v}\|_{e}\lesssim h_{K}^{-1}\|\tilde{v}\|_{K}+\epsilon h_{K}|\tilde{v}|_{2,K}\]
with (6.3) and Lemma 6.1 once more in the last two steps. Similarly, we can prove that
\[\fint_{e}\tilde{v}\chi_{k-3}\,\mathrm{d}s\lesssim h_{K}^{-1}\|\tilde{v}\|_{K} +\epsilon h_{K}|\tilde{v}|_{2,K}.\]
These bounds allow us to write \(\|\mathrm{Dof}_{\partial K}^{k+1,c}(Q_{K}\tilde{v})\|_{\ell^{2}}\lesssim h_{K }^{-1}\|\tilde{v}\|_{K}+\epsilon h_{K}|\tilde{v}|_{2,K}\), which in turn proves that
\[|\tilde{v}|_{2,K}\lesssim h_{K}^{-2}\|\tilde{v}\|_{K}+\epsilon|\tilde{v}|_{2,K}\]
and absorbing the \(\epsilon\) term on the left-hand side we immediately have the first bound in (6.1). For the second bound it suffices to combine the first bound with Lemma 6.1.
**Lemma 6.4** (Poincare-type inequality).: _Let \(K\) be a polygonal domain and \(v\in H^{2}(K)\). If \(v(A)=v(B)=v(C)\) for any three non-collinear consecutive vertices \(A,B,C\) of \(K\) (see the diagram in Figure 6.1), then there exists a positive constant \(C_{\mathrm{P}}\) depending only on the mesh regularity parameter \(\rho\), such that_
\[|v|_{1,K}\leq C_{\mathrm{P}}h_{K}|v|_{2,K}.\]
Figure 6.1: Sketch of a polygonal domain \(K\) and three consecutive vertices \(A,B,C\). The unit vectors \(\mathbf{t}_{1},\mathbf{t}_{2}\) form an angle \(\theta\) at \(A\).
Proof.: With respect to Figure 6.1, let \(\mathbf{t}_{1},\mathbf{t}_{2}\) be two tangential unit vectors along the sides \(AB\) and \(AC\), respectively, oriented as in the diagram (moving away from the vertex \(A\)), forming an angle \(\theta\in(0,\pi)\), and \(|\mathbf{t}_{1}\cdot\mathbf{t}_{2}|=|\cos\theta|\). Owing to the transformation stability result from [14] we know that
\[\min_{\mathbf{a}\in\mathbb{R}^{2}\setminus\{\mathbf{0}\}}\frac{(\mathbf{a}\cdot\mathbf{v})^{2} +(\mathbf{a}\cdot\mathbf{u})^{2}}{|\mathbf{a}|^{2}}=1-|\mathbf{v}\cdot\mathbf{u}|,\]
for linearly independent unit vectors \(\mathbf{v},\mathbf{u}\in\mathbb{R}^{2}\). We use this result in our context with \(\mathbf{a}=\nabla v(\mathbf{x})\), \(\mathbf{v}=\mathbf{t}_{1}\), \(\mathbf{u}=\mathbf{t}_{2}\), giving
\[(1-|\cos\theta|)|\nabla v(\mathbf{x})|^{2}\leq(\nabla v(\mathbf{x})\cdot\mathbf{t}_{1})^{ 2}+(\nabla v(\mathbf{x})\cdot\mathbf{t}_{2})^{2}.\]
Now we define \(f_{j}=\nabla v\cdot\mathbf{t}_{j}\) for \(j=1,2\). An integration over \(K\) leads to
\[|v|_{1,K}^{2}\leq\frac{1}{1-|\cos\theta|}(\|f_{1}\|_{K}^{2}+\|f_{2}\|_{K}^{2}).\]
On the other hand, note that since \(\int_{A}^{B}f_{1}\,\mathrm{d}s=0=\int_{A}^{C}f_{2}\,\mathrm{d}s\) from the assumption \(v(A)=v(B)=v(C)\), we can apply the Poincare-Friedrichs inequality to \(f_{1}\) and \(f_{2}\) (see, for example, [16]). We are then left with
\[\|f_{i}\|_{K}\leq C_{\mathrm{PF}}h_{K}|f_{i}|_{1,K}\quad\text{for }i=1,2.\]
Hence
\[|v|_{1,K}^{2}\leq\frac{C_{\mathrm{PF}}^{2}}{1-|\cos\theta|}h_{K}^{2}(|f_{1}|_ {1,K}^{2}+|f_{2}|_{1,K}^{2})\leq\frac{2C_{\mathrm{PF}}^{2}}{1-|\cos\theta|}h_ {K}^{2}|v|_{2,K}^{2},\]
which proves the sought bound with \(C_{\mathrm{P}}:=C_{\mathrm{PF}}\sqrt{\frac{2}{1-|\cos\theta|}}\).
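As a simple illustration of this constant: if \(A\), \(B\), \(C\) are three consecutive vertices of a rectangle, then \(\theta=\pi/2\) and \(|\cos\theta|=0\), so the bound reads

\[|v|_{1,K}\leq\sqrt{2}\,C_{\mathrm{PF}}\,h_{K}|v|_{2,K};\]

the constant \(C_{\mathrm{P}}\) deteriorates only as \(\theta\) approaches \(0\) or \(\pi\), that is, as the three vertices become nearly collinear.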
**Lemma 6.5** (Local norm equivalence).: _For \(\tilde{v}\in\tilde{V}_{h}^{k,\mathrm{nc}}(K)\), there holds_
\[\|\tilde{v}\|_{K}\simeq h_{K}\|\mathrm{Dof}^{k,\mathrm{nc}}(\tilde{v})\|_{ \ell^{2}}.\]
Proof.: **Step 1.** Proceeding as in the proof of Lemma 6.3, for the DoFs we have
\[|\tilde{v}(z)|\lesssim h_{K}^{-1}\|\tilde{v}\|_{K}+|\tilde{v}|_{1,K}\lesssim h _{K}^{-1}\|\tilde{v}\|_{K}\]
with the inverse inequality applied to \(\tilde{v}\) from Lemma 6.3. Using the Cauchy-Schwarz inequality, the trace inequality, and Lemma 6.3 again, we arrive at
\[\Big{|}\int_{e}\partial_{\mathbf{n}}\tilde{v}\chi_{k-2}\,\mathrm{d}s\Big{|}\leq\| \partial_{\mathbf{n}}\tilde{v}\|_{e}h_{e}^{1/2}\lesssim|\tilde{v}|_{1,K}+h_{K}| \tilde{v}|_{2,K}\lesssim h_{K}^{-1}\|\tilde{v}\|_{K}.\]
In addition,
\[\Big{|}\!\int_{e}\tilde{v}\chi_{k-3}\,\mathrm{d}s\Big{|}\leq h_{e}^{-1/2}\| \tilde{v}\|_{e}\lesssim h_{K}^{-1}\|\tilde{v}\|_{K}.\]
The Cauchy-Schwarz inequality for the cell moments proves that
\[\Big{|}\!\int_{K}\tilde{v}\chi_{k-4}\,\mathrm{d}\mathbf{x}\Big{|}\leq|K|^{-1/2}\| \tilde{v}\|_{K}=h_{K}^{-1}\|\tilde{v}\|_{K},\]
and combining these bounds together we readily obtain
\[\|\mathrm{Dof}^{k,\mathrm{nc}}(\tilde{v})\|_{\ell^{2}}\lesssim h_{K}^{-1}\| \tilde{v}\|_{K}. \tag{6.5}\]
**Step 2.** On the other hand, let us consider the problem of finding \(\tilde{v}_{2}\in H_{0}^{1}(K)\cap H^{2}(K)\) such that
\[\Delta^{2}\tilde{v}_{2}=\Delta^{2}\tilde{v}\quad\text{in }K;\qquad M_{\mathbf{n} \mathbf{n}}(\tilde{v}_{2})=M_{\mathbf{n}\mathbf{n}}(\tilde{v})\quad\text{on }\partial K.\]
Let \(\tilde{v}_{1}=\tilde{v}-\tilde{v}_{2}\). Then, it follows that
\[\Delta^{2}\tilde{v}_{1}=0\quad\text{in }K;\qquad M_{\boldsymbol{n}\boldsymbol{n}}( \tilde{v}_{1})=0\text{ and }\tilde{v}_{1}=\tilde{v}\quad\text{on }\partial K.\]
Next we recall that for any \(w\in S(K)\) (cf. (10)) we have \(a^{K}(\tilde{v}_{1},\tilde{v}_{1}-w)=0\). From the proof of Lemma 6.3 we also recall that \(Q_{K}\tilde{v}_{1}\) is well-defined and
\[Q_{K}\tilde{v}_{1}=\tilde{v}_{1}=\tilde{v}\quad\text{on }\partial K.\]
The triangle inequality and Lemma 6.4 applied to \(\tilde{v}_{1}-Q_{K}\tilde{v}_{1}\) (admissible by the definition of \(Q_{K}\)) result in
\[\|\tilde{v}_{1}\|_{K}\leq\|\tilde{v}_{1}-Q_{K}\tilde{v}_{1}\|_{K}+\|Q_{K} \tilde{v}_{1}\|_{K}\lesssim h_{K}^{2}|\tilde{v}_{1}-Q_{K}\tilde{v}_{1}|_{2,K} +\|Q_{K}\tilde{v}_{1}\|_{K}\lesssim h_{K}^{2}|\tilde{v}_{1}|_{2,K}+\|Q_{K} \tilde{v}_{1}\|_{K}.\]
From the proof of Lemma 6.3, we can infer that \(|\tilde{v}_{1}|_{2,K}\lesssim h_{K}^{-3/2}\|\tilde{v}_{1}\|_{\partial K}\) and \(\|Q_{K}\tilde{v}_{1}\|_{K}\lesssim\|\tilde{v}_{1}\|_{\partial K}\). This in the previous bound results in \(\|\tilde{v}_{1}\|_{K}\lesssim h_{K}^{1/2}\|\tilde{v}_{1}\|_{\partial K}\).
Recall that \(\Delta^{2}\tilde{v}_{2}=\Delta^{2}\tilde{v}=:g_{1}\in\mathbb{P}_{k-4}(K)\) and \(M_{\boldsymbol{n}\boldsymbol{n}}(\tilde{v}_{2})|_{e}=M_{\boldsymbol{n} \boldsymbol{n}}(\tilde{v})|_{e}=:g_{2}|_{e}\in\mathbb{P}_{k-2}(e)\) for all \(e\in\mathcal{E}_{K}\). Therefore, after expanding \(g_{1}=\sum_{|\alpha|\leq k-4}g_{1}^{\alpha}m_{\alpha}\) and \(g_{2}|_{e}=\sum_{|\beta|\leq k-2}g_{2}^{\beta}m_{\beta}^{e}\) in terms of the scaled monomials \(m_{\alpha}\in\mathbb{M}_{k-4}(K)\) and \(m_{\beta}^{e}\in\mathbb{M}_{k-2}(e)\), an integration by parts provides
\[|\tilde{v}_{2}|_{2,K}^{2} =(\Delta^{2}\tilde{v}_{2},\tilde{v}_{2})_{K}+(M_{\boldsymbol{n} \boldsymbol{n}}(\tilde{v}_{2}),\partial_{\boldsymbol{n}}\tilde{v}_{2})_{ \partial K}=(g_{1},\tilde{v})_{K}-(g_{1},\tilde{v}_{1})_{K}+(g_{2},\partial_{ \boldsymbol{n}}\tilde{v})_{\partial K}-(g_{2},\partial_{\boldsymbol{n}}\tilde{ v}_{1})_{\partial K}\] \[=\sum_{|\alpha|\leq k-4}g_{1}^{\alpha}(m_{\alpha},\tilde{v})_{K}+ \sum_{|\beta|\leq k-2}g_{2}^{\beta}(m_{\beta},\partial_{\boldsymbol{n}} \tilde{v})_{\partial K}-(g_{1},\tilde{v}_{1})_{K}-(g_{2},\partial_{ \boldsymbol{n}}\tilde{v}_{1})_{\partial K}.\]
Set the notation \(\vec{g}_{1}=(g_{1}^{\alpha})_{\alpha}\), \(\vec{g}_{2}=(g_{2}^{\beta})_{\beta}\) and recall from [19, Lemma 4.1] that \(h_{K}\|\vec{g}_{1}\|_{\ell^{2}}\lesssim\|g_{1}\|_{K}\) and \(h_{K}^{1/2}\|\vec{g}_{2}\|_{\ell^{2}}\lesssim\|g_{2}\|_{\partial K}\). Hence the Cauchy-Schwarz inequality in the previous bound and the definition of DoFs show
\[|\tilde{v}_{2}|_{2,K}^{2} \leq\|\vec{g}_{1}\|_{\ell^{2}}\,|K|\,\|\mathrm{Dof}_{K}^{k,\mathrm{nc}}(\tilde{v})\|_{\ell^{2}}+\|\vec{g}_{2}\|_{\ell^{2}}\|\mathrm{Dof}_{\partial K}^{k,\mathrm{nc}}(\tilde{v})\|_{\ell^{2}}+\|g_{1}\|_{K}\|\tilde{v}_{1}\|_{K}+\|g_{2}\|_{\partial K}\|\partial_{\boldsymbol{n}}\tilde{v}_{1}\|_{\partial K}\] \[\lesssim h_{K}\|g_{1}\|_{K}\|\mathrm{Dof}_{K}^{k,\mathrm{nc}}(\tilde{v})\|_{\ell^{2}}+h_{K}^{-1/2}\|g_{2}\|_{\partial K}\|\mathrm{Dof}_{\partial K}^{k,\mathrm{nc}}(\tilde{v})\|_{\ell^{2}}\] \[\quad+\|g_{1}\|_{K}\|\tilde{v}_{1}\|_{K}+\|g_{2}\|_{\partial K}((1+\epsilon)h_{K}^{1/2}|\tilde{v}_{1}|_{2,K}+C(\epsilon)h_{K}^{-3/2}\|\tilde{v}_{1}\|_{K})\]
with \(\|\partial_{\boldsymbol{n}}\tilde{v}_{1}\|_{\partial K}\lesssim h_{K}^{-1/2}| \tilde{v}_{1}|_{1,K}+h_{K}^{1/2}|\tilde{v}_{1}|_{2,K}\lesssim(1+\epsilon)h_{K} ^{1/2}|\tilde{v}_{1}|_{2,K}+C(\epsilon)h_{K}^{-3/2}\|\tilde{v}_{1}\|_{K}\) from the trace inequality and Lemma 6.1 in the last step. Note also that, thanks to [20], we can assert that
\[\|g_{1}\|_{K}=\|\Delta^{2}\tilde{v}_{2}\|_{K}\lesssim h_{K}^{-2}|\tilde{v}_{2}|_ {2,K},\]
and using the inverse inequality on \(g_{2}|_{e}\in\mathbb{P}_{k-2}(e)\), the trace inequality, and Lemma 6.4, we are left with
\[\|g_{2}\|_{\partial K}\lesssim h_{K}^{-1}|\tilde{v}_{2}|_{1,\partial K} \lesssim h_{K}^{-3/2}|\tilde{v}_{2}|_{1,K}+h_{K}^{-1/2}|\tilde{v}_{2}|_{2,K} \lesssim h_{K}^{-1/2}|\tilde{v}_{2}|_{2,K}.\]
Hence, all the above bounds result in
\[|\tilde{v}_{2}|_{2,K}\lesssim h_{K}^{-1}\|\mathrm{Dof}^{k,\text{\rm{nc}}}(\tilde{v })\|_{\ell^{2}}+h_{K}^{-3/2}\|\tilde{v}\|_{\partial K}.\]
We can then invoke again Lemma 6.4 to obtain \(\|\tilde{v}_{2}\|_{K}\lesssim h_{K}^{2}|\tilde{v}_{2}|_{2,K}\), and so
\[\|\tilde{v}\|_{K}\leq\|\tilde{v}_{1}\|_{K}+\|\tilde{v}_{2}\|_{K}\lesssim h_{K}^{ 1/2}\|\tilde{v}\|_{\partial K}+h_{K}\|\mathrm{Dof}^{k,\text{\rm{nc}}}(\tilde{v})\|_{ \ell^{2}}.\]
Since \(\tilde{v}\) is a polynomial along each \(e\in\mathcal{E}_{K}\), then standard scaling arguments imply that \(\|\tilde{v}\|_{\partial K}\simeq h_{K}^{1/2}\|\mathrm{Dof}_{\partial K}^{k, \text{\rm{nc}}}(\tilde{v})\|_{\ell^{2}}\), and therefore
\[\|\tilde{v}\|_{K}\lesssim h_{K}\|\mathrm{Dof}^{k,\text{\rm{nc}}}(\tilde{v})\|_{ \ell^{2}}. \tag{12}\]
Finally, the desired result follows from combining the estimates (12) (from Step 1) and (12) (from Step 2).
**Lemma 6.6**.: _The inverse estimates and norm equivalence results hold for any \(v\in V_{h}^{k,\mathrm{nc}}(K)\)._
Proof.: Given \(v\in V_{h}^{k,\mathrm{nc}}(K)\), construct \(\tilde{v}\in\widetilde{V}_{h}^{k,\mathrm{nc}}(K)\) with \(\mathrm{Dof}^{k,\mathrm{nc}}(v)=\mathrm{Dof}^{k,\mathrm{nc}}(\tilde{v})\). Such \(\tilde{v}\) can be found since the DoFs of both local VE spaces \(V_{h}^{k,\mathrm{nc}}(K)\) and \(\widetilde{V}_{h}^{k,\mathrm{nc}}(K)\) coincide. Starting from the triangle inequality \(|v|_{2,K}\leq|v-\tilde{v}|_{2,K}+|\tilde{v}|_{2,K}\), we apply integration by parts, the Cauchy-Schwarz inequality, and the inverse inequality on \(\Delta^{2}(v-\tilde{v})\in\mathbb{P}_{k}(K)\) to obtain
\[|v-\tilde{v}|_{2,K}^{2}=a^{K}(v-\tilde{v},v-\tilde{v}) =\int_{K}\Delta^{2}(v-\tilde{v})\,(v-\tilde{v})\,\mathrm{d}\mathbf{x}\] \[\leq\|\Delta^{2}(v-\tilde{v})\|_{K}\|v-\tilde{v}\|_{K}\lesssim h_ {K}^{-2}|v-\tilde{v}|_{2,K}\|v-\tilde{v}\|_{K}.\]
This and the triangle inequality show that \(|v-\tilde{v}|_{2,K}\lesssim h_{K}^{-2}(\|v\|_{K}+\|\tilde{v}\|_{K})\). Putting together this bound with Lemmas 6.3-6.5 readily implies that
\[|v|_{2,K}\lesssim h_{K}^{-2}(\|v\|_{K}+\|\tilde{v}\|_{K}),\qquad\|\tilde{v}\|_ {K}\simeq h_{K}\|\mathrm{Dof}^{k,\mathrm{nc}}(\tilde{v})\|_{\ell^{2}}=h_{K}\| \mathrm{Dof}^{k,\mathrm{nc}}(v)\|_{\ell^{2}}.\]
Then we can follow the arguments developed in the proof of Lemma 6.3 to get
\[|v|_{2,K}\lesssim h_{K}^{-2}\|v\|_{K}+\epsilon|v|_{2,K},\]
and the proof of the inverse estimate is concluded after absorbing the \(\epsilon\) term on the left-hand side.
Regarding the norm equivalence result, we note that steps 1 and 2 from the proof of Lemma 6.5 apply to any \(v\in V_{h}^{k,\mathrm{nc}}(K)\), and decompose \(v=v_{1}+v_{2}\). The proof follows analogously only differing in the presence of the term \(\Delta^{2}v_{2}=\Delta^{2}v=:g_{1}\in\mathbb{P}_{k}(K)\), now written as
\[(\Delta^{2}v_{2},v)_{K}=\sum_{|\alpha|\leq k-4}g_{1}^{\alpha}(m_{\alpha},v)_{ K}+\sum_{k-4\leq|\alpha|\leq k}g_{1}^{\alpha}(m_{\alpha},v)_{K}.\]
The first term on the right-hand side is treated just as in Lemma 6.5. For the second term we can apply the definition of the space \(V_{h}^{k,\mathrm{nc}}(K)\) and the Cauchy-Schwarz inequality to obtain the estimate
\[\sum_{k-4\leq|\alpha|\leq k}g_{1}^{\alpha}(m_{\alpha},v)_{K}=\sum_{k-4\leq| \alpha|\leq k}g_{1}^{\alpha}(m_{\alpha},\Pi_{k}^{\nabla^{2}}v)_{K}\leq h_{K}^ {-1}\|g_{1}\|_{K}\|m_{\alpha}\|_{K}\|\Pi_{k}^{\nabla^{2}}v\|_{K}\approx\|g_{1} \|_{K}\|\Pi_{k}^{\nabla^{2}}v\|_{K}\]
with the observation \(\|m_{\alpha}\|_{K}\approx h_{K}\) in the last step. Since \(\Pi_{k}^{\nabla^{2}}v\) is uniquely determined by the DoFs of \(v\) and \(\mathrm{Dof}^{k,\mathrm{nc}}(v)=\mathrm{Dof}^{k,\mathrm{nc}}(\tilde{v})\), we can conclude that \(\Pi_{k}^{\nabla^{2}}v=\Pi_{k}^{\nabla^{2}}\tilde{v}\). This and the triangle inequality show \(\|\Pi_{k}^{\nabla^{2}}v\|_{K}=\|\Pi_{k}^{\nabla^{2}}\tilde{v}\|_{K}\leq\|\Pi_{ k}^{\nabla^{2}}\tilde{v}-\tilde{v}\|_{K}+\|\tilde{v}\|_{K}\). The Poincare-Friedrichs inequality and the inverse inequality provide \(\|\Pi_{k}^{\nabla^{2}}\tilde{v}-\tilde{v}\|_{K}\lesssim\|\tilde{v}\|_{K}\). These bounds together with Lemma 6.5 establish
\[\|\Pi_{k}^{\nabla^{2}}v\|_{K}\lesssim\|\tilde{v}\|_{K}\lesssim h_{K}\|\mathrm{ Dof}^{k,\mathrm{nc}}(\tilde{v})\|_{\ell^{2}}=h_{K}\|\mathrm{Dof}^{k,\mathrm{nc}}(v)\|_{ \ell^{2}},\]
and this observation concludes the proof of norm equivalence.
### 6.3 Modified companion map
In this subsection, we consider the lowest-order (\(k=2\)) nonconforming VE space \(V_{h}^{2,\mathrm{nc}}\) and the aim is to modify the companion operator \(J_{1}v_{h}\) for \(v_{h}\in V_{h}^{2,\mathrm{nc}}\) from Theorem 5.1 so that the new companion \(J_{2}^{*}v_{h}\) satisfies the \(H^{1}\)-orthogonality \(\nabla(v_{h}-J_{2}^{*}v_{h})\perp(\mathbb{P}_{0}(\mathcal{T}_{h}))^{2}\) in addition to the \(H^{2}\)- and \(L^{2}\)-orthogonalities established in Theorem 5.2(b)-5.2(d).
If \(e\in\mathcal{E}^{i}\) is an interior edge, we assume that it is shared by two triangles \(T^{+}\subset K^{+}\) and \(T^{-}\subset K^{-}\) inside two neighbouring polygons \(K^{+}\) and \(K^{-}\), and set \(\omega_{e}:=K^{+}\cup K^{-}\); if \(e\in\mathcal{E}^{b}\) is a boundary edge, we assume that it belongs only to a triangle \(T^{+}\subset K^{+}\) and set \(\omega_{e}:=K^{+}\). Let \(\psi_{e},\phi_{e}\in H_{0}^{2}(T^{+}\cup T^{-})\) be two edge bubble-functions from [10, 22] satisfying the following properties:
* \(\oint_{e}\psi_{e}\,\,\mathrm{d}s=1,|\psi_{e}|_{2,T^{\pm}}\approx h_{e}^{-1}\),
* \(\phi_{e}\equiv 0\) on \(\partial T^{\pm}\), \(\int_{e}\partial_{\mathbf{n}}\phi_{e}\,\mathrm{d}s=1,|\phi_{e}|_{2,T^{\pm}}\approx h _{e}^{-1}\).
**Step 1**. Given \(J_{1}v_{h}\) and the bubble-function \(\psi_{e}\), define
\[J_{1}^{*}v_{h}:=J_{1}v_{h}+\sum_{e\in\mathcal{E}}\Big{(}\fint_{e}(v_{h}-J_{1}v _{h})\,\mathrm{d}s\Big{)}\psi_{e}\in V. \tag{100}\]
Observe from the definition of \(J_{1}\) and \(\psi_{e}\) that \(J_{1}^{*}v_{h}(z)=v_{h}(z)\) for any \(z\in\mathcal{V}\) and \(\fint_{e}(v_{h}-J_{1}^{*}v_{h})\,\mathrm{d}s=0\) for any \(e\in\mathcal{E}\). The Cauchy-Schwarz inequality and the scaling of \(\psi_{e}\) from above show that
\[\Big{|}\Big{(}\fint_{e}(v_{h}-J_{1}v_{h})\,\mathrm{d}s\Big{)} \Big{|}\|\psi_{e}\|_{\omega_{e}} \lesssim h_{e}^{-3/2}\|v_{h}-J_{1}v_{h}\|_{e}\] \[\lesssim h_{e}^{-2}\|v_{h}-J_{1}v_{h}\|_{\omega_{e}}+h_{e}^{-1}| v_{h}-J_{1}v_{h}|_{1,\omega_{e}}\lesssim|v_{h}-J_{1}v_{h}|_{2,\omega_{e}}\]
with the trace inequality and Theorem 5.1 in the last two steps. This proves that \(|v_{h}-J_{1}^{*}v_{h}|_{2,h}\lesssim|v_{h}-J_{1}v_{h}|_{2,h}\). Similarly the scaling \(|\psi_{e}|_{1,T^{\pm}}\approx 1\) and Theorem 5.1(d) result in \(|v_{h}-J_{1}^{*}v_{h}|_{1,h}\lesssim h|v_{h}-J_{1}v_{h}|_{2,h}\).
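Before passing to Step 2, note that the correction of Step 1 only requires the edge averages of the residual \(v_{h}-J_{1}v_{h}\). A minimal Python sketch follows (assuming an edge quadrature rule whose weights sum to the edge length \(|e|\), and that \(\psi_{e}\) has average one on \(e\); names are hypothetical and this is an illustration, not the analysis).

```python
# Illustration of Step 1: compute c_e = fint_e (v_h - J_1 v_h) ds and add the
# edge-bubble correction c_e * psi_e.  Hypothetical names.
import numpy as np

def edge_average_correction(residual, psi_e, edge_pts, edge_wts):
    """residual: callable x -> (v_h - J_1 v_h)(x); psi_e: edge bubble with
    average one on e; quadrature weights are assumed to sum to |e|."""
    w = np.asarray(edge_wts, dtype=float)
    r = np.array([residual(x) for x in edge_pts], dtype=float)
    c_e = (w @ r) / w.sum()            # mean value of the residual on the edge e
    return lambda x: c_e * psi_e(x)    # contribution of e to J_1^* v_h - J_1 v_h
```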
**Step 2**. Given \(J_{1}^{*}v_{h}\) and the bubble-function \(\phi_{e}\), define
\[J_{1}^{**}v_{h}:=J_{1}^{*}v_{h}+\sum_{e\in\mathcal{E}}\Big{(}\int_{e}\partial_ {\mathbf{n}}(v_{h}-J_{1}^{*}v_{h})\,\mathrm{d}s\Big{)}\phi_{e}\in V. \tag{101}\]
Since \(\phi_{e}|_{e}\equiv 0\), we have \(J_{1}^{**}v_{h}(z)=v_{h}(z)\) for any \(z\in\mathcal{V}\) and \(\fint_{e}(v_{h}-J_{1}^{**}v_{h})\,\mathrm{d}s=0\) for any \(e\in\mathcal{E}\). Note from \(\int_{e}\partial_{\mathbf{n}}\phi_{e}\,\mathrm{d}s=1\) that \(\int_{e}\partial_{\mathbf{n}}(v_{h}-J_{1}^{**}v_{h})\,\mathrm{d}s=0\). Again as in Step 1, it is easy to prove that
\[\Big{|}\Big{(}\fint_{e}\partial_{\mathbf{n}}(v_{h}-J_{1}^{*}v_{h})\,\mathrm{d}s \Big{)}\Big{|}\|\phi_{e}\|_{\omega_{e}}\lesssim|v_{h}-J_{1}^{*}v_{h}|_{2,\omega _{e}},\]
and consequently,
\[|v_{h}-J_{1}^{**}v_{h}|_{2,h}\lesssim|v_{h}-J_{1}^{*}v_{h}|_{2,h}\lesssim|v_{h }-J_{1}v_{h}|_{2,h}.\]
**Step 3**. Next we construct the operator \(J_{2}^{*}\); the design employs the tools from the construction of \(J_{2}\). Recall the element bubble-function \(b_{h}|_{K}=b_{K}\in H_{0}^{2}(K)\) and suppose \(v_{2}\in\mathbb{P}_{2}(K)\) is the Riesz representative of the linear functional \(\mathbb{P}_{2}(K)\to\mathbb{R}\), defined by \(w_{2}\mapsto(v_{h}-J_{1}^{**}v_{h},w_{2})_{K}\) for \(w_{2}\in\mathbb{P}_{2}(K)\), in the Hilbert space \(\mathbb{P}_{2}(K)\) endowed with the weighted scalar product \((b_{K}\bullet,\bullet)_{K}\). Given \(v_{h}\in V_{h}^{2,\mathrm{nc}}\), the function \(\tilde{v}_{2}\in\mathbb{P}_{2}(\mathcal{T}_{h})\) with \(\tilde{v}_{2}|_{K}:=v_{2}\) satisfies \((b_{h}\tilde{v}_{2},w_{2})_{\Omega}=(v_{h}-J_{1}^{**}v_{h},w_{2})_{\Omega}\) for all \(w_{2}\in\mathbb{P}_{2}(\mathcal{T}_{h})\), and define
\[J_{2}^{*}v_{h}:=J_{1}^{**}v_{h}+b_{h}\tilde{v}_{2}\in V. \tag{102}\]
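Collecting the three steps above (this is merely a restatement of the definitions already given), the modified companion admits the closed-form expression

\[J_{2}^{*}v_{h}=J_{1}v_{h}+\sum_{e\in\mathcal{E}}\Big(\fint_{e}(v_{h}-J_{1}v_{h})\,\mathrm{d}s\Big)\psi_{e}+\sum_{e\in\mathcal{E}}\Big(\int_{e}\partial_{\mathbf{n}}(v_{h}-J_{1}^{*}v_{h})\,\mathrm{d}s\Big)\phi_{e}+b_{h}\tilde{v}_{2},\]

which makes explicit that \(J_{2}^{*}v_{h}\) differs from \(J_{1}v_{h}\) only by edge-bubble and element-bubble contributions.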
**Theorem 6.1**.: _The modified conforming companion operator \(J_{2}^{*}\) satisfies the following properties._
* \(\mathrm{dof}_{j}^{2,\mathrm{nc}}(J_{2}^{*}v_{h})=\mathrm{dof}_{j}^{2,\mathrm{nc}}(v_{h})\) _for all_ \(j=1,\ldots,\mathrm{N}_{2}^{2,\mathrm{nc}}\)_,_
* \(\nabla^{2}(v_{h}-J_{2}^{*}v_{h})\perp(\mathbb{P}_{0}(\mathcal{T}_{h}))^{2\times 2}\) _in_ \((L^{2}(\Omega))^{2\times 2}\)_,_
* \(\nabla(v_{h}-J_{2}^{*}v_{h})\perp(\mathbb{P}_{0}(\mathcal{T}_{h}))^{2}\) _in_ \((L^{2}(\Omega))^{2}\)_,_
* \(v_{h}-J_{2}^{*}v_{h}\perp\mathbb{P}_{2}(\mathcal{T}_{h})\) _in_ \(L^{2}(\Omega)\)_,_
* \(\sum_{j=0}^{2}h^{j-2}|v_{h}-J_{2}^{*}v_{h}|_{j,h}\lesssim\inf_{\chi\in\mathbb{P} _{2}(\mathcal{T}_{h})}|v_{h}-\chi|_{2,h}+\inf_{v\in V}|v_{h}-v|_{2,h}\)_._
Proof.: We provide only the proof of the modified property 6.1(c); the remaining properties follow analogously as in the proof of Theorem 5.2. Since \(b_{K}=0\) on \(\partial K\), it follows from the definitions of \(J_{2}^{*}\) and \(J_{1}^{**}\) that \(\fint_{e}J_{2}^{*}v_{h}\,\mathrm{d}s=\fint_{e}J_{1}^{**}v_{h}\,\mathrm{d}s=\fint_{e}v_{h}\,\mathrm{d}s\). Hence, for \(\vec{\chi}\in(\mathbb{P}_{0}(K))^{2}\) and for any \(K\in\mathcal{T}_{h}\), an integration by parts shows
\[(\nabla(v_{h}-J_{2}^{*}v_{h}),\vec{\chi})_{K}=-(v_{h}-J_{2}^{*}v_{h},\mathrm{ div}(\vec{\chi}))_{K}+(v_{h}-J_{2}^{*}v_{h},\vec{\chi}\cdot\mathbf{n}_{K})_{\partial K}=0,\]
and therefore it concludes the proof of Theorem 6.1(c).
### 6.4 Reliability
Recall \((u_{h},p_{h})\) is the nonconforming VE solution to the discrete problem. Define the local contributions
\[\eta^{2}_{1,K} :=h^{4}_{K}\|\tilde{f}-\Pi_{k}\tilde{f}\|^{2}_{K}+h^{4}_{K}\|\tilde{ f}-\Delta^{2}\Pi^{\nabla^{2}}_{k}u_{h}-\Pi_{k}u_{h}-\alpha\nabla\cdot\Pi_{\ell-1} \nabla p_{h}\|^{2}_{K},\] \[\eta^{2}_{2,K} :=h^{2}_{K}\|\tilde{g}-\Pi_{\ell}\tilde{g}\|^{2}_{K}+h^{2}_{K}\| \tilde{g}+\gamma\nabla\cdot\Pi_{\ell-1}\nabla p_{h}-\beta\Pi_{\ell}p_{h}+ \alpha\nabla\cdot\Pi_{k-1}\nabla u_{h}\|^{2}_{K},\] \[\eta^{2}_{3,K} :=\sum_{e\in\mathcal{E}^{i}_{K}\cup\mathcal{E}^{*}_{K}}h_{e}\|[M_ {\boldsymbol{n}\boldsymbol{n}}(\Pi^{\nabla^{2}}_{k}u_{h})]\|^{2}_{e},\] \[\eta^{2}_{4,K} :=\sum_{e\in\mathcal{E}^{i}_{K}}h^{3}_{e}\|[T(\Pi^{\nabla^{2}}_ {k}u_{h})+\alpha\Pi_{\ell-1}\nabla p_{h}\cdot\boldsymbol{n}]\|^{2}_{e},\] \[\eta^{2}_{5,K} :=\sum_{e\in\mathcal{E}^{i}_{K}\cup\mathcal{E}^{*}_{K}}h_{e}\|[ \alpha\Pi_{k-1}\nabla u_{h}\cdot\boldsymbol{n}+\gamma\Pi_{\ell-1}\nabla p_{h} \cdot\boldsymbol{n}]\|^{2}_{e},\] \[\eta^{2}_{6,K} :=S^{K}_{\nabla^{2}}((1-\Pi^{\nabla^{2}}_{k})u_{h},(1-\Pi^{\nabla ^{2}}_{k})u_{h})+S^{K}_{\nabla}((1-\Pi^{\nabla}_{\ell})p_{h},(1-\Pi^{\nabla}_ {\ell})p_{h}),\] \[\eta^{2}_{7,K} :=\alpha^{2}\|\mathrm{Dof}^{\mathrm{s,nc}}(u_{h}-\Pi^{\nabla}_{ \ell}u_{h})\|^{2}_{\ell^{2}},\] \[\eta^{2}_{8,K} :=\sum_{e\in\mathcal{E}_{K}}h^{-1}_{e}(\|[\nabla\Pi^{\nabla^{2}}_ {k}u_{h}]\|^{2}_{e}+\|[\Pi dp_{h}]\|^{2}_{e}).\]
Denote \(\eta^{2}_{i}:=\sum_{K\in\mathcal{T}_{h}}\eta^{2}_{i,K}\) for \(i=1,\ldots,8\); the following theorem shows that the sum of these contributions forms an upper bound for the error in the energy norm.
**Theorem 6.2**.: _With the aforementioned notation, there holds, for \(k=2\) and \(\ell=1\),_
\[\|\boldsymbol{\tilde{u}}-\boldsymbol{\tilde{u}}_{h}\|^{2}_{\mathbf{H}^{k}_{h} }\lesssim\eta^{2}:=\sum_{i=1}^{8}\eta^{2}_{i}.\]
Proof.: Let \(J\boldsymbol{\tilde{u}}_{h}:=(J^{*}_{2}u_{h},J_{4}p_{h})\) and \(\boldsymbol{\bar{e}}:=\boldsymbol{\tilde{u}}-J\boldsymbol{\tilde{u}}_{h}\in \mathbf{H}_{\epsilon}\). Even though \(J^{*}_{2}\) is constructed for the lowest-order case (\(k=2\)), we prefer to write the proof with the notation \(k\) and \(\ell\) to point out the challenges for general values. The coercivity of the continuous bilinear form \(\mathcal{A}\) leads to
\[\|\boldsymbol{\bar{e}}\|^{2}_{\mathbf{H}_{\epsilon}}\lesssim\mathcal{A}( \boldsymbol{\tilde{u}},\boldsymbol{\bar{e}})-\mathcal{A}(J\boldsymbol{\tilde{u }}_{h},\boldsymbol{\bar{e}})=\mathcal{F}(\boldsymbol{\bar{e}})-\mathcal{F}_{h} (\boldsymbol{\bar{e}}_{I})+\mathcal{A}_{h}(\boldsymbol{\tilde{u}}_{h}, \boldsymbol{\bar{e}}_{I})-\mathcal{A}(J\boldsymbol{\tilde{u}}_{h},\boldsymbol {\bar{e}}).\]
The identities \((\Pi_{k}\tilde{f},e^{u}_{I}-J^{*}_{2}e^{u}_{I})_{\Omega}=0=(\Pi_{\ell}\tilde{g},e^{p}_{I}-J_{4}e^{p}_{I})_{\Omega}\) from Theorems 6.1(d) and 5.4(c), and the \(L^{2}\)-orthogonality of \(\Pi_{k}\) and \(\Pi_{\ell}\) show
\[\mathcal{F}(\boldsymbol{\bar{e}})-\mathcal{F}_{h}(\boldsymbol{\bar{e}}_{I})=(\tilde{f},e^{u})_{\Omega}-(\Pi_{k}\tilde{f},J^{*}_{2}e^{u}_{I})_{\Omega}+(\tilde{g},e^{p})_{\Omega}-(\Pi_{\ell}\tilde{g},J_{4}e^{p}_{I})_{\Omega}\] \[\quad=(\tilde{f},v)_{\Omega}+(\tilde{f}-\Pi_{k}\tilde{f},J^{*}_{2}e^{u}_{I}-\Pi^{\nabla^{2}}_{k}e^{u}_{I})_{\Omega}+(\tilde{g},q)_{\Omega}+(\tilde{g}-\Pi_{\ell}\tilde{g},J_{4}e^{p}_{I}-\Pi^{\nabla}_{\ell}e^{p}_{I})_{\Omega} \tag{6.10}\]
with \(v=e^{u}-J^{*}_{2}e^{u}_{I}\) and \(q=e^{p}-J_{4}e^{p}_{I}\). The definitions of \(\mathcal{A}_{h}\) and \(\mathcal{A}\) provide
\[\mathcal{A}_{h}(\boldsymbol{\tilde{u}}_{h},\boldsymbol{\bar{e}}_ {I})-\mathcal{A}(J\boldsymbol{\tilde{u}}_{h},\boldsymbol{\bar{e}})=(a^{h}_{1}(u_ {h},e^{u}_{I})-a_{1}(J^{*}_{2}u_{h},e^{u}))+(a_{2}(J_{4}p_{h},e^{u})-a^{h}_{2}(p _{h},e^{u}_{I}))\] \[\quad+(a^{h}_{2}(e^{p}_{I},u_{h})-a_{2}(e^{p},J^{*}_{2}u_{h}))+(a ^{h}_{3}(p_{h},e^{p}_{I})-a_{3}(J_{4}p_{h},e^{p}))=:T_{1}+T_{2}+T_{3}+T_{4}. \tag{6.11}\]
Theorems 6.1(b)-6.1(d) imply the relations \((\Pi_{k-2}\nabla^{2}u_{h},\Pi_{k-2}\nabla^{2}e^{u}_{I})_{\Omega}=(\nabla^{2}_{\mathrm{pw}}\Pi^{\nabla^{2}}_{k}u_{h},\nabla^{2}J^{*}_{2}e^{u}_{I})_{\Omega}\) and \((\Pi_{k}u_{h},\Pi_{k}e^{u}_{I})_{\Omega}=(\Pi_{k}u_{h},J^{*}_{2}e^{u}_{I})_{\Omega}\). This results in
\[T_{1} =-(\Pi_{k}u_{h},v)_{\Omega}+(\Pi_{k}u_{h}-J^{*}_{2}u_{h},e^{u})_{ \Omega}-(\nabla^{2}_{\mathrm{pw}}\Pi^{\nabla^{2}}_{k}u_{h},\nabla^{2}_{\mathrm{ pw}}v)_{\Omega}+(\nabla^{2}_{\mathrm{pw}}(\Pi^{\nabla^{2}}_{k}u_{h}-J^{*}_{2}u_{h}), \nabla^{2}e^{u})_{\Omega}\] \[\quad+S_{1,0}((1-\Pi_{k})u_{h},(1-\Pi_{k})e^{u}_{I})+S_{\nabla^{2} }((1-\Pi^{\nabla^{2}}_{k})u_{h},(1-\Pi^{\nabla^{2}}_{k})e^{u}_{I}).\]
An integration by parts and the fact that \(v\in V\) lead to
\[-(\nabla^{2}_{\mathrm{pw}}\Pi^{\nabla^{2}}_{k}u_{h},\nabla^{2}_{\mathrm{pw}}v)_{ \Omega}=-\sum_{K\in\mathcal{T}_{h}}(\Delta^{2}\Pi^{\nabla^{2}}_{k}u_{h},v)_{K}- \sum_{e\in\mathcal{E}^{i}\cup\mathcal{E}^{s}}([M_{\boldsymbol{n}\boldsymbol{n} }(\Pi^{\nabla^{2}}_{k}u_{h})],\partial_{\boldsymbol{n}}v)_{e}-\sum_{e\in \mathcal{E}^{i}}([T(\Pi^{\nabla^{2}}_{k}u_{h})],v)_{e}.\]
This simplifies \(T_{1}\) to
\[T_{1} =\sum_{K\in\mathcal{T}_{h}}\Big{(}(-\Pi_{k}u_{h}-\Delta^{2}\Pi_{k}^{ \nabla^{2}}u_{h},v)_{K}+(\Pi_{k}u_{h}-J_{2}^{*}u_{h},e^{u})_{K}+(\nabla^{2}(\Pi_ {k}^{\nabla^{2}}u_{h}-J_{2}^{*}u_{h}),\nabla^{2}e^{u})_{K}\] \[\qquad+S_{1,0}^{K}((1-\Pi_{k})u_{h},(1-\Pi_{k})e_{I}^{u})+S_{ \nabla^{2}}^{K}((1-\Pi_{k}^{\nabla^{2}})u_{h},(1-\Pi_{k}^{\nabla^{2}})e_{I}^{ u})\Big{)}\] \[\qquad-\sum_{e\in\mathcal{E}^{t}\cup\mathcal{E}^{s}}([M_{\mathbf{n} \mathbf{n}}(\Pi_{k}^{\nabla^{2}}u_{h})],\partial_{\mathbf{n}}v)_{e}-\sum_{e\in \mathcal{E}^{t}}([T(\Pi_{k}^{\nabla^{2}}u_{h})],v)_{e}. \tag{6.12}\]
Theorem 6.1(c) and the \(L^{2}\)-orthogonality of \(\Pi_{k-1}\) imply \((\Pi_{\ell-1}\nabla p_{h},\nabla J_{2}^{*}e_{I}^{u}-\Pi_{k-1}\nabla e_{I}^{u} )_{\Omega}=0\). This and an integration by parts show
\[\alpha^{-1}T_{2}=(\nabla J_{4}p_{h}-\Pi_{\ell-1}\nabla p_{h},\nabla e^{u})_{ \Omega}-\sum_{K\in\mathcal{T}_{h}}(\nabla\cdot\Pi_{\ell-1}\nabla p_{h},v)_{K} +\sum_{e\in\mathcal{E}^{t}}([\Pi_{\ell-1}\nabla p_{h}\cdot\mathbf{n}_{e}],v)_{e}. \tag{6.13}\]
Theorem 5.4(b), the \(L^{2}\)-orthogonality of \(\Pi_{\ell-1}\), and again an integration by parts prove that
\[\alpha^{-1}T_{3} =\sum_{K\in\mathcal{T}_{h}}\Big{(}(\Pi_{\ell-1}\nabla e_{I}^{p}- \nabla J_{4}e_{I}^{p},\Pi_{k-1}\nabla u_{h}-\Pi_{\ell-1}\nabla u_{h})_{K}+(q, \nabla\cdot\Pi_{k-1}\nabla u_{h})_{K}\] \[\qquad+(\nabla e^{p},\Pi_{k-1}\nabla u_{h}-\nabla J_{2}^{*}u_{h}) _{K}\Big{)}-\sum_{e\in\mathcal{E}^{t}\cup\mathcal{E}^{e}}(q,[\Pi_{k-1}\nabla u _{h}\cdot\mathbf{n}_{e}])_{e}. \tag{6.14}\]
The identities \((\Pi_{\ell-1}\nabla p_{h},\Pi_{\ell-1}\nabla e_{I}^{p})_{\Omega}=(\Pi_{\ell-1} \nabla p_{h},\nabla J_{4}e_{I}^{p})_{\Omega}\) and \((\Pi_{\ell}p_{h},\Pi_{\ell}e_{I}^{p})_{\Omega}=(\Pi_{\ell}p_{h},J_{4}e_{I}^{p}) _{\Omega}\) follow from Theorem 5.4(b)-5.4(d). This in the first step and an integration by parts in the next step lead to
\[T_{4} =\beta((\Pi_{\ell}p_{h}-J_{4}p_{h},e^{p})_{\Omega}-(\Pi_{\ell}p_{h},q)_{\Omega})+\gamma((\Pi_{\ell-1}\nabla p_{h}-\nabla J_{4}p_{h},\nabla e^{p})_{\Omega}-(\Pi_{\ell-1}\nabla p_{h},\nabla q)_{\Omega})\] \[\quad+S_{2,0}((1-\Pi_{\ell})p_{h},(1-\Pi_{\ell})e_{I}^{p})+S_{\nabla}((1-\Pi_{\ell}^{\nabla})p_{h},(1-\Pi_{\ell}^{\nabla})e_{I}^{p})\] \[=\sum_{K\in\mathcal{T}_{h}}\Big{(}\beta(\Pi_{\ell}p_{h}-J_{4}p_{h},e^{p})_{K}+\gamma(\Pi_{\ell-1}\nabla p_{h}-\nabla J_{4}p_{h},\nabla e^{p})_{K}+(\gamma\nabla\cdot\Pi_{\ell-1}\nabla p_{h}-\beta\Pi_{\ell}p_{h},q)_{K}\] \[\quad+S_{2,0}^{K}((1-\Pi_{\ell})p_{h},(1-\Pi_{\ell})e_{I}^{p})+S_{\nabla}^{K}((1-\Pi_{\ell}^{\nabla})p_{h},(1-\Pi_{\ell}^{\nabla})e_{I}^{p})\Big{)}-\sum_{e\in\mathcal{E}^{i}\cup\mathcal{E}^{s}}\gamma([\Pi_{\ell-1}\nabla p_{h}\cdot\mathbf{n}_{e}],q)_{e}.\]
The rearrangement of the terms results in
\[\|\vec{e}\|_{\mathbf{H}_{\epsilon}}^{2}\lesssim T_{5}+\cdots+T_{10}, \tag{6.15}\]
where
\[T_{5} :=(\tilde{f}-\Pi_{k}\tilde{f},J_{2}^{*}e_{I}^{u}-\Pi_{k}^{\nabla^{2}}e_{I}^{u})_{\Omega}+(\tilde{f}-\Pi_{k}u_{h}-\Delta^{2}\Pi_{k}^{\nabla^{2}}u_{h}-\alpha\nabla\cdot\Pi_{\ell-1}\nabla p_{h},v)_{\Omega},\] \[T_{6} :=(\tilde{g}-\Pi_{\ell}\tilde{g},J_{4}e_{I}^{p}-\Pi_{\ell}^{\nabla}e_{I}^{p})_{\Omega}+(\tilde{g}+\gamma\nabla\cdot\Pi_{\ell-1}\nabla p_{h}-\beta\Pi_{\ell}p_{h}+\alpha\nabla\cdot\Pi_{k-1}\nabla u_{h},q)_{\Omega},\] \[T_{7} :=(\Pi_{k}u_{h}-J_{2}^{*}u_{h},e^{u})_{\Omega}+(\nabla_{\text{pw}}^{2}(\Pi_{k}^{\nabla^{2}}u_{h}-J_{2}^{*}u_{h}),\nabla^{2}e^{u})_{\Omega}+\alpha(\nabla J_{4}p_{h}-\Pi_{\ell-1}\nabla p_{h},\nabla e^{u})_{\Omega}\] \[\quad+\alpha(\Pi_{\ell-1}\nabla e_{I}^{p}-\nabla J_{4}e_{I}^{p},\Pi_{k-1}\nabla u_{h}-\Pi_{\ell-1}\nabla u_{h})_{\Omega}+\alpha(\nabla e^{p},\Pi_{k-1}\nabla u_{h}-\nabla J_{2}^{*}u_{h})_{\Omega}\] \[\quad+\beta(\Pi_{\ell}p_{h}-J_{4}p_{h},e^{p})_{\Omega}+\gamma(\Pi_{\ell-1}\nabla p_{h}-\nabla J_{4}p_{h},\nabla e^{p})_{\Omega},\] \[T_{8} :=S_{1,0}((1-\Pi_{k})u_{h},(1-\Pi_{k})e_{I}^{u})+S_{\nabla^{2}}((1-\Pi_{k}^{\nabla^{2}})u_{h},(1-\Pi_{k}^{\nabla^{2}})e_{I}^{u})+S_{2,0}((1-\Pi_{\ell})p_{h},(1-\Pi_{\ell})e_{I}^{p})\] \[\quad+S_{\nabla}((1-\Pi_{\ell}^{\nabla})p_{h},(1-\Pi_{\ell}^{\nabla})e_{I}^{p}),\] \[T_{9} :=-\sum_{e\in\mathcal{E}^{i}\cup\mathcal{E}^{s}}([\alpha\Pi_{k-1}\nabla u_{h}\cdot\boldsymbol{n}_{e}+\gamma\Pi_{\ell-1}\nabla p_{h}\cdot\boldsymbol{n}_{e}],q)_{e},\] \[T_{10} :=-\sum_{e\in\mathcal{E}^{i}\cup\mathcal{E}^{s}}([M_{\boldsymbol{n}\boldsymbol{n}}(\Pi_{k}^{\nabla^{2}}u_{h})],\partial_{\boldsymbol{n}}v)_{e}-\sum_{e\in\mathcal{E}^{i}}([T(\Pi_{k}^{\nabla^{2}}u_{h})+\alpha\Pi_{\ell-1}\nabla p_{h}\cdot\boldsymbol{n}_{e}],v)_{e}.\]
The Poincare-Friedrichs inequality implies that \(h_{K}^{-2}\|J_{2}^{*}e_{I}^{u}-\Pi_{k}^{\nabla^{2}}e_{I}^{u}\|_{L^{2}(K)}\lesssim|J_{2}^{*}e_{I}^{u}-\Pi_{k}^{\nabla^{2}}e_{I}^{u}|_{2,K}\) and \(h_{K}^{-2}\|v\|_{L^{2}(K)}\lesssim|v|_{2,K}\). Then the triangle inequality and \(|v|_{2,K}\leq|e^{u}-e_{I}^{u}|_{2,K}+|e_{I}^{u}-J_{2}^{*}e_{I}^{u}|_{2,K}\) followed by Propositions 5.1 and 3.1, and Theorem 6.1(e) show
\[T_{5}\lesssim\Big{(}\sum_{K\in\mathcal{T}_{h}}\eta_{1,K}\Big{)}|e^{u}|_{2,\Omega}. \tag{6.16}\]
Similarly we can prove that \(T_{6}\lesssim\Big{(}\sum_{K\in\mathcal{T}_{h}}\eta_{2,K}\Big{)}|e^{p}|_{1,\Omega}\). Then we proceed to rewrite the terms in \(T_{7}\) using the \(L^{2}\)-orthogonality of \(\Pi_{k}\) and \(\Pi_{\ell}\) and Theorems 6.1(d) and 5.4(c) as
\[(\Pi_{k}u_{h}-J_{2}^{*}u_{h},e^{u})_{\Omega} =(\Pi_{k}u_{h}-J_{2}^{*}u_{h},e^{u}-\Pi_{k}e^{u})_{\Omega}=(\Pi_{ k}^{\nabla^{2}}u_{h}-J_{2}^{*}u_{h},e^{u}-\Pi_{k}e^{u})_{\Omega}\] \[(\Pi_{\ell}p_{h}-J_{4}p_{h},e^{p})_{\Omega} =(\Pi_{\ell}p_{h}-J_{4}p_{h},e^{p}-\Pi_{\ell}e^{p})_{\Omega}=(\Pi _{\ell}^{\nabla}p_{h}-J_{4}p_{h},e^{p}-\Pi_{\ell}e^{p})_{\Omega}.\]
Then we combine Cauchy-Schwarz and triangle inequalities, which results in
\[T_{7} \lesssim(\|u_{h}-\Pi_{k}^{\nabla^{2}}u_{h}\|_{\Omega}+|u_{h}-\Pi _{k}^{\nabla^{2}}u_{h}|_{2,h}+\|u_{h}-J_{2}^{*}u_{h}\|_{\Omega}+|u_{h}-J_{2}^{ *}u_{h}|_{2,h}+|u_{h}-\Pi_{\ell}^{\nabla}u_{h}|_{1,h}\] \[\quad+\|p_{h}-\Pi_{\ell}^{\nabla}p_{h}\|_{\Omega}+|p_{h}-\Pi_{ \ell}^{\nabla}p_{h}|_{1,h}+\|p_{h}-J_{4}p_{h}\|_{\Omega}+|p_{h}-J_{4}p_{h}\|_{ 1,h})\|\vec{e}\|_{\mathbf{H}_{\epsilon}}\] \[\lesssim\sum_{K\in\mathcal{T}_{h}}\Big{(}(1+h_{K}+h_{K}^{2})|u_{h }-\Pi_{k}^{\nabla^{2}}u_{h}|_{2,K}+|u_{h}-\Pi_{\ell}^{\nabla}u_{h}|_{1,h}+(1+ h_{K})|p_{h}-\Pi_{\ell}^{\nabla}p_{h}|_{1,K}+\eta_{8,K}\Big{)}\|\vec{e}\|_{ \mathbf{H}_{\epsilon}}\] \[\lesssim\sum_{K\in\mathcal{T}_{h}}(\eta_{6,K}+\eta_{7,K}+\eta_{8, K})\|\vec{e}\|_{\mathbf{H}_{\epsilon}}.\]
The second step follows from the Poincare-Friedrichs inequality and Theorem 5.1-5.4, and the last step from (3.8a)-(3.8c) and the equivalence \(|u_{h}-\Pi_{\ell}^{\nabla}u_{h}|_{1,K}\approx\|\mathrm{Dof}^{\mathrm{f,nc}}(u_ {h}-\Pi_{\ell}^{\nabla}u_{h})\|_{\ell^{2}}\) (see [31] for a proof). Cauchy-Schwarz inequalities for inner products and (3.8b)-(3.8d) lead to
\[T_{8} \lesssim\Big{(}\sum_{K\in\mathcal{T}_{h}}\|u_{h}-\Pi_{k}u_{h}\|_{L ^{2}(K)}+\|p_{h}-\Pi_{\ell}p_{h}\|_{L^{2}(K)}+\eta_{6,K}\Big{)}(|e_{I}^{u}|_{2, h}+|e_{I}^{p}|_{1,h})\] \[\lesssim\Big{(}\sum_{K\in\mathcal{T}_{h}}\eta_{6,K}\Big{)}(|e_{I} ^{u}|_{2,h}+|e_{I}^{p}|_{1,h})\]
with \(\|u_{h}-\Pi_{k}u_{h}\|_{K}\leq\|u_{h}-\Pi_{k}^{\nabla^{2}}u_{h}\|_{K}\lesssim h_{K}^{2}|u_{h}-\Pi_{k}^{\nabla^{2}}u_{h}|_{2,K}\) and \(\|p_{h}-\Pi_{\ell}p_{h}\|_{K}\leq\|p_{h}-\Pi_{\ell}^{\nabla}p_{h}\|_{K}\lesssim h_{K}|p_{h}-\Pi_{\ell}^{\nabla}p_{h}|_{1,K}\) followed by (3.8a)-(3.8c) in the last estimate. The trace inequality shows \(\|q\|_{e}\lesssim h_{e}^{-1/2}\|q\|_{\omega_{e}}+h_{e}^{1/2}|q|_{1,\omega_{e}}\lesssim h_{e}^{1/2}|q|_{1,\omega_{e}}\) with the Poincare-Friedrichs inequality in the last bound. This and the Cauchy-Schwarz inequality prove that
\[T_{9}\lesssim\Big{(}\sum_{K\in\mathcal{T}_{h}}\eta_{5,K}\Big{)}|e^{p}|_{1, \Omega}.\]
Finally, arguments analogous to those used for \(T_{9}\) show that \(T_{10}\lesssim\Big{(}\sum_{K\in\mathcal{T}_{h}}(\eta_{3,K}+\eta_{4,K})\Big{)}|e^{u}|_{2,\Omega}\). Inserting the previous bounds into (6.15) concludes the proof.
### Efficiency
**Theorem 6.3** (Efficiency up to stabilisation and data oscillation).: _Under the assumption \(\ell\leq k\leq\ell+2\), the local error estimators are bounded above as follows:_
\[\eta_{1,K} \lesssim\|u-u_{h}\|_{K}+|u-u_{h}|_{2,K}+|p-p_{h}|_{1,K}+\eta_{6,K}+\mathrm{osc}_{2}(\tilde{f},K), \tag{6.17a}\] \[\eta_{2,K} \lesssim|u-u_{h}|_{2,K}+\beta\|p-p_{h}\|_{K}+\gamma|p-p_{h}|_{1,K}+\eta_{6,K}+\mathrm{osc}_{1}(\tilde{g},K),\] (6.17b) \[\eta_{3,K} \lesssim\sum_{e\in\mathcal{E}_{K}}\sum_{K^{\prime}\in\omega_{e}}\Big{(}\|u-u_{h}\|_{K^{\prime}}+|u-u_{h}|_{2,K^{\prime}}+|p-p_{h}|_{1,K^{\prime}}+\eta_{6,K^{\prime}}+\mathrm{osc}_{2}(\tilde{f},K^{\prime})\Big{)},\] (6.17c) \[\eta_{4,K} \lesssim\sum_{e\in\mathcal{E}_{K}}\sum_{K^{\prime}\in\omega_{e}}\Big{(}\|u-u_{h}\|_{K^{\prime}}+|u-u_{h}|_{2,K^{\prime}}+|p-p_{h}|_{1,K^{\prime}}+\eta_{6,K^{\prime}}+\mathrm{osc}_{2}(\tilde{f},K^{\prime})\Big{)}, \tag{6.17d}\]
\[\eta_{5,K} \lesssim\sum_{e\in\mathcal{E}_{K}}\sum_{K^{\prime}\in\omega_{e}}\Big{(}|u-u_{h}|_{1,K^{\prime}}+\beta\|p-p_{h}\|_{K^{\prime}}+\gamma|p-p_{h}|_{1,K^{\prime}}+\eta_{6,K^{\prime}}+\operatorname{osc}_{1}(\tilde{g},K^{\prime})\Big{)}, \tag{6.17e}\] \[\eta_{7,K} \lesssim|u-u_{h}|_{1,K}+|u-\Pi_{\ell}^{\nabla}u|_{1,K},\] (6.17f) \[\eta_{8,K} \lesssim|u-u_{h}|_{2,K}+|p-p_{h}|_{1,K}+\eta_{6,K}. \tag{6.17g}\]
Proof.: Recall the element bubble-function \(b_{2,K}:=b_{K}\in H_{0}^{2}(K)\) supported in \(K\in\mathcal{T}_{h}\) and \(\ell\leq k\). Let
\[v_{k}:=\Pi_{k}\tilde{f}-\Delta^{2}\Pi_{k}^{\nabla^{2}}u_{h}-\Pi_{k}u_{h}- \alpha\nabla\cdot\Pi_{\ell-1}\nabla p_{h}\in\mathbb{P}_{k}(K)\text{ and }v:=v_{k}b_{2,K}\in H_{0}^{2}(\Omega)\subset V.\]
This in the first equation of the continuous problem (2a), and \(a^{K}(\Pi_{k}^{\nabla^{2}}u_{h},v)=(\Delta^{2}\Pi_{k}^{\nabla^{2}}u_{h},v)_{K}\) and \((\Pi_{\ell-1}\nabla p_{h},\nabla v)_{K}=-(\nabla\cdot\Pi_{\ell-1}\nabla p_{h}, v)_{K}\) from an integration by parts lead to
\[(u-\Pi_{k}u_{h},v)_{K}+a^{K}(u-\Pi_{k}^{\nabla^{2}}u_{h},v)-\alpha(\nabla p- \Pi_{\ell-1}\nabla p_{h},\nabla v)_{K}=(\tilde{f}-\Pi_{k}\tilde{f},v)_{K}+(v_ {k},v)_{K}.\]
Hence \(v=v_{k}b_{2,K}\) and the inequalities (10) show that
\[h_{K}^{2}\|v_{k}\|_{K} \lesssim h_{K}^{2}\|\tilde{f}-\Pi_{k}\tilde{f}\|_{K}+h_{K}^{2}\|u -\Pi_{k}u_{h}\|_{K}+|u-\Pi_{k}^{\nabla^{2}}u_{h}|_{2,K}+h_{K}\|\nabla p-\Pi_{ \ell-1}\nabla p_{h}\|_{K}\] \[\lesssim\|u-u_{h}\|_{K}+|u-u_{h}|_{2,K}+|p-p_{h}|_{1,K}+\eta_{6,K }+\operatorname{osc}_{2}(\tilde{f},K)\]
with triangle inequalities and (3.8b)-(3.8d) in the last estimate. This and the triangle inequality \(\eta_{1,K}\leq\operatorname{osc}_{2}(\tilde{f},K)+h_{K}^{2}\|v_{k}\|_{K}\) conclude the proof of (6.17a). The bubble-function \(b_{1,K}\in H_{0}^{1}(K)\) supported in \(K\) is constructed as in [16] and it satisfies, for any \(\chi\in\mathbb{P}_{\ell}(K)\), that
\[\|\chi\|_{K}\lesssim\sum_{m=0}^{1}\,h_{K}^{m}|b_{1,K}\chi|_{m,K}\lesssim\|\chi\|_{K}. \tag{6.18}\]
Let \(q_{\ell}:=\Pi_{\ell}\tilde{g}+\gamma\nabla\cdot\Pi_{\ell-1}\nabla p_{h}-\beta \Pi_{\ell}p_{h}+\alpha\nabla\cdot\Pi_{k-1}\nabla u_{h}\) and \(q:=q_{\ell}b_{1,K}.\) This in the second equation of the continuous problem (2b), and \((\nabla q,\Pi_{k-1}\nabla u_{h})_{K}=-(q,\nabla\cdot\Pi_{k-1}\nabla u_{h})_{K}\) and \((\Pi_{\ell-1}\nabla p_{h},\nabla q)_{K}=-(\nabla\cdot\Pi_{\ell-1}\nabla p_{h},q)_{K}\) show
\[\beta(p-\Pi_{\ell}p_{h},q)_{K}+\alpha(q,\nabla u-\Pi_{k-1}\nabla u_{h})_{K}+ \gamma(\nabla p-\Pi_{\ell-1}\nabla p_{h},\nabla q)_{K}=(\tilde{g}-\Pi_{\ell} \tilde{g},q)_{K}+(q_{\ell},q)_{K}.\]
Hence (6.18) in the above equation allows us to assert that
\[h_{K}\|q_{\ell}\|_{K} \lesssim|u-u_{h}|_{2,K}+\beta\|p-p_{h}\|_{K}+\gamma|p-p_{h}|_{1,K} +|u_{h}-\Pi_{k}^{\nabla^{2}}u_{h}|_{2,K}+|p_{h}-\Pi_{\ell}^{\nabla}p_{h}|_{1,K} +\operatorname{osc}_{1}(\tilde{g},K)\] \[\lesssim|u-u_{h}|_{2,K}+\beta\|p-p_{h}\|_{K}+\gamma|p-p_{h}|_{1,K} +\eta_{6,K}+\operatorname{osc}_{1}(\tilde{g},K).\]
This concludes the proof of (6.17b). It follows from [22] that \(v:=\phi_{e}[M_{\boldsymbol{n}\boldsymbol{n}}(\Pi_{k}^{\nabla^{2}}u_{h})]\) satisfies the first inequality
\[\|[M_{\boldsymbol{n}\boldsymbol{n}}(\Pi_{k}^{\nabla^{2}}u_{h})]\|_{e}^{2} \lesssim([M_{\boldsymbol{n}\boldsymbol{n}}(\Pi_{k}^{\nabla^{2}}u_{h})],\partial_{\boldsymbol{n}}v)_{e}=a^{\omega_{e}}(\Pi_{k}^{\nabla^{2}}u_{h},v)-(\Delta^{2}\Pi_{k}^{\nabla^{2}}u_{h},v)_{\omega_{e}}\] \[=a^{\omega_{e}}(\Pi_{k}^{\nabla^{2}}u_{h}-u,v)+(\tilde{f}-\Pi_{k}\tilde{f},v)_{\omega_{e}}+(v_{k},v)_{\omega_{e}}+(\Pi_{k}u_{h}-u,v)_{\omega_{e}}+\alpha(\nabla p-\Pi_{\ell-1}\nabla p_{h},\nabla v)_{\omega_{e}} \tag{6.19}\]
with the second equality from an integration by parts and the last equality from (2a). The Cauchy-Schwarz inequality in (6.19) and the inverse estimate result in
\[\|[M_{\boldsymbol{n}\boldsymbol{n}}(\Pi_{k}^{\nabla^{2}}u_{h})]\|_{e}^{2} \lesssim\sum_{K^{\prime}\in\omega_{e}}\Big{(}h_{K^{\prime}}^{-2}\big(|u-\Pi_{k}^{\nabla^{2}}u_{h}|_{2,K^{\prime}}+\eta_{1,K^{\prime}}\big)+\|u-\Pi_{k}u_{h}\|_{K^{\prime}}\] \[\quad+\alpha\Lambda_{K^{\prime}}^{-1/2}\|\nabla p-\Pi_{\ell-1}\nabla p_{h}\|_{K^{\prime}}\Big{)}\|v\|_{K^{\prime}}.\]
Refer to [22] for the estimate \(\|v\|_{\omega_{e}}\lesssim h_{e}^{3/2}\|[M_{\boldsymbol{n}\boldsymbol{n}}(\Pi_{k}^{\nabla^{2}}u_{h})]\|_{e}\). This and (6.17a) conclude the proof of (6.17c). Since \([T(\Pi_{k}^{\nabla^{2}}u_{h})+\alpha\Pi_{\ell-1}\nabla p_{h}\cdot\boldsymbol{n}]\) is a polynomial along an edge \(e\), arguments analogous to those in the bound of \(\eta_{3,K}\), applied to \(w:=\psi_{e}[T(\Pi_{k}^{\nabla^{2}}u_{h})+\alpha\Pi_{\ell-1}\nabla p_{h}\cdot\boldsymbol{n}]\), lead to
\[([T(\Pi_{k}^{\nabla^{2}}u_{h})+\alpha\Pi_{\ell-1}\nabla p_{h}\cdot\boldsymbol{n} ],w)_{e}=(u-\Pi_{k}u_{h},w)_{\omega_{e}}-(\tilde{f}-\Pi_{k}u_{h}-\Delta^{2}\Pi_{k}^ {\nabla^{2}}u_{h}-\alpha\nabla\cdot\Pi_{\ell-1}\nabla p_{h},w)_{\omega_{e}}\]
\[-\alpha(\nabla p-\Pi_{\ell-1}\nabla p_{h},\nabla w)_{\omega_{e}}-a^{\omega_{e}}(\Pi _{k}^{\nabla^{2}}u_{h}-u,w)+([M_{\boldsymbol{nn}}(\Pi_{k}^{\nabla^{2}}u_{h})], \partial_{\boldsymbol{n}}w)_{e}.\]
The Cauchy-Schwarz inequality, the inverse estimates \(\sum_{m=0}^{2}h_{K}^{m-2}|w|_{m,K}\lesssim\|w\|_{K}\lesssim h_{e}^{-3/2}\|[T( \Pi_{k}^{\nabla^{2}}u_{h})+\alpha\Pi_{\ell-1}\nabla p_{h}\cdot\boldsymbol{n} ]\|_{e}\), and (6.17a)-(6.17c) conclude the proof of (6.17d). Let \(b_{e}\in H^{1}_{0}(\omega_{e})\) be the edge-bubble function constructed as in [13, Lemma 9] with the estimates
\[\|\chi\|_{e}^{2}\lesssim(b_{e},\chi^{2})_{e}\lesssim\|\chi\|_{e}^{2}\quad\text {and}\quad\sum_{m=0}^{1}h_{K}^{m-1/2}\|b_{e}\chi\|_{m,K}\lesssim\|\chi\|_{e} \tag{6.20}\]
for \(\chi\in\mathbb{P}_{\ell}(e)\) with the constant extension of \(\chi\) in the normal direction of \(e\in\mathcal{E}_{K}\). The test function \(q=b_{e}[\alpha\Pi_{k-1}\nabla u_{h}\cdot\boldsymbol{n}+\gamma\Pi_{\ell-1}\nabla p_{h}\cdot\boldsymbol{n}]\) in (2.2b) and an integration by parts show
\[\beta(p-\Pi_{\ell}p_{h},q)_{\omega_{e}}+\alpha(\nabla q,\nabla u-\Pi_{k-1}\nabla u_{h})_{\omega_{e}}+\gamma(\nabla p-\Pi_{\ell-1}\nabla p_{h},\nabla q)_{\omega_{e}}=(\tilde{g}-\Pi_{\ell}\tilde{g},q)_{\omega_{e}}\] \[+(q_{\ell},q)_{\omega_{e}}-([\alpha\Pi_{k-1}\nabla u_{h}\cdot\boldsymbol{n}+\gamma\Pi_{\ell-1}\nabla p_{h}\cdot\boldsymbol{n}],q)_{e}.\]
The Cauchy-Schwarz inequality, \(\chi=[\alpha\Pi_{k-1}\nabla u_{h}\cdot\boldsymbol{n}+\gamma\Pi_{\ell-1}\nabla p_{h}\cdot\boldsymbol{n}]\) in (6.20), and the estimate (6.17b) for \(\eta_{2,K}\) conclude the proof of (6.17e). Invoke the equivalence \(|u_{h}-\Pi_{\ell}^{\nabla}u_{h}|_{1,K}\approx\|\mathrm{Dof}^{\mathrm{f,nc}}(u_{h}-\Pi_{\ell}^{\nabla}u_{h})\|_{\ell^{2}}\) again as in the reliability, and then the definition of \(\Pi_{\ell}^{\nabla}\) and the triangle inequality prove
\[\eta_{7,K}\lesssim|u_{h}-\Pi_{\ell}^{\nabla}u_{h}|_{1,K}\leq|u_{h}-\Pi_{\ell}^{\nabla}u|_{1,K}\leq|u-u_{h}|_{1,K}+|u-\Pi_{\ell}^{\nabla}u|_{1,K}.\]
The estimate (6.17g) immediately follows from the arguments involved in the proof of Theorem 5.1(d).
**Remark 6.1** (Higher degrees \(k\geq 3\) and \(\ell\leq k\leq\ell+2\)).: _If we introduce \(J\vec{u}_{h}=(J_{2}u_{h},J_{4}p_{h})\) in the proof of Theorem 6.2 for \(k\geq 3\), then the proof follows analogously, with the only difference being in the estimate (6.13) of the term \(T_{2}\). There the arguments utilise the \(H^{1}\)-orthogonality \(\nabla_{\text{pw}}(v_{h}-J_{2}^{*}v_{h})\perp(\mathbb{P}_{\ell-1}(\mathcal{T}_{h}))^{2}\), and hence the same estimator works if one can construct \(J_{2}^{*}\) for \(k\geq 3\) with this orthogonality (which is possible but not trivial). Still, for higher \(k\), we can invoke the \(H^{1}\)-orthogonality of Theorem 5.2(b) and this leads to_
\[\alpha(\Pi_{\ell-1}\nabla p_{h},\nabla J_{2}e_{I}^{u}-\Pi_{k-1} \nabla e_{I}^{u})_{K} =\alpha(\Pi_{\ell-1}\nabla p_{h}-\Pi_{k-3}\nabla p_{h},\nabla J_{2} e_{I}^{u}-\Pi_{k-1}\nabla e_{I}^{u})_{K}\] \[\lesssim\alpha(|p_{h}-\Pi_{\ell}^{\nabla}p_{h}|_{1,K}+|p_{h}-\Pi _{k-2}^{\nabla}p_{h}|_{1,K})h_{K}|e_{I}^{u}|_{1,K}\]
_with the Cauchy-Schwarz inequality, the triangle inequality, Theorem 5.2(e), and Proposition 3.1 in the last estimate. Consequently, we assume that \(k-2\leq\ell\) and obtain an additional contribution, say \(\eta_{9,K}\), in the error estimator. The equivalence of norms shows_
\[\eta_{9,K}^{2}:=\alpha^{2}h_{K}^{2}\|\text{Dof}_{K}^{\text{f}k}(p_{h}-\Pi_{k-2 }^{\nabla}p_{h})\|_{\ell^{2}}^{2},\]
_and also the efficiency_
\[\eta_{9,K}\lesssim h_{K}|p_{h}-\Pi_{k-2}^{\nabla}p_{h}|_{1,K}\leq h_{K}|p_{h}- \Pi_{k-2}^{\nabla}p|_{1,K}\leq|p-p_{h}|_{1,K}+h_{K}|p-\Pi_{k-2}^{\nabla}p|_{1,K}.\]
**Remark 6.2** (Conforming VEM).: _As companion operators are not required in the conforming case, the proof of reliability and efficiency follows analogously assuming \(J=I\), where \(I\) denotes the identity operator. Note that the local contributions \(\eta_{8,K}\) and \(\eta_{9,K}\) arise due to the nonconformity of the method and hence the error estimator in the conforming case is_
\[\|u-u_{h}^{c}\|_{\mathbf{H}_{\epsilon}}^{2}\lesssim\sum_{i=1}^{7}\eta_{i}^{2}.\]
**Remark 6.3** (Choices of projection operators).: _Note that the projection \(\Pi_{k-2}\nabla v_{h}\) for \(v_{h}\in V_{h}^{k,c}\) (or \(V_{h}^{k,nc}\)) is also computable in terms of the DoFs, and both the a priori and a posteriori error analysis hold with this choice. We prefer to use \(\Pi_{k-2}\nabla v_{h}\) instead of \(\Pi_{k-1}\nabla v_{h}\) in the numerical experiments below. Also from the theoretical analysis, observe that if we set \(\ell=k\) (one degree higher for pressure) and modify the term \((\nabla p,\nabla u)_{K}\approx(\Pi_{k-1}\nabla p,\Pi_{k-1}\nabla u)_{K}\) for all \(K\in\mathcal{T}_{h}\) in the discrete approximation, then the error estimator component \(\eta_{7}\) will disappear. But a higher-order approximation of the pressure may not be a good choice from a numerical perspective._
## 7 Numerical results
We now present a number of computational tests that confirm the theoretical a priori error estimates from Sections 4-5 and the a posteriori estimates from Section 6, and we also include typical benchmark solutions for Kirchhoff-Love plates that we modify to include the coupling with filtration in porous media. All meshes were generated with the library PolyMesher [42].
### Example 1: Accuracy verification with smooth solutions
In order to investigate numerically the error decay predicted by Theorems 4.1 and 5.5, we follow the approach of manufactured solutions. We set the parameters \(\alpha=\beta=\gamma=1\) in all the examples below.
We construct a transverse load and a source function \(f,g\), respectively, as well as homogeneous and non-homogeneous boundary data for \(u\) and \(p\), such that the problem has the following smooth deflection and fluid pressure moment exact solutions
\[u(x,y)=\sin^{2}(\pi x)\sin^{2}(\pi y),\quad p(x,y)=\cos(\pi xy),\]
on the square domain \(\Omega=(0,1)^{2}\) with mixed boundary conditions \(\Gamma_{c}:=\{x=0\}\cup\{y=0\}\) and \(\Gamma_{s}:=\partial\Omega\setminus\Gamma_{c}\). Then we employ a sequence of successively refined meshes \(\mathcal{T}^{i}_{h}\) and compute the projected virtual element solution \((\Pi_{k}^{\nabla^{2}}u_{h},\Pi_{\ell}^{\nabla}p_{h})\) on each mesh refinement \(h_{i}\), and monitor the norms \(|u-\Pi_{k}^{\nabla^{2}}u_{h}|_{2,h}\) for displacement approximation, \(|p-\Pi_{\ell}^{\nabla}p_{h}|_{1,h}\) for pressure approximation and the combined energy norm \(\|\cdot\|_{\mathbf{H}^{h}_{\epsilon}}\). The experimental order of convergence \(\mathtt{r}_{i}\) is computed from the formula
\[\mathtt{r}_{i}=\frac{\log(\frac{\mathtt{e}_{i+1}}{\mathtt{e}_{i}})}{\log( \frac{h_{i+1}}{h_{i}})},\]
where \(\mathtt{e}_{i}\) denotes a norm of the error on the mesh \(\mathcal{T}^{i}_{h}\).
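The rate computation itself is elementary; the following minimal Python sketch (not part of the original experiments, with placeholder error values rather than the data of Tables 7.1-7.2) shows how \(\mathtt{r}_{i}\) is obtained from a sequence of mesh sizes and error norms.

```python
import math

def experimental_orders(h, e):
    """Experimental orders of convergence r_i = log(e_{i+1}/e_i) / log(h_{i+1}/h_i)."""
    assert len(h) == len(e) and len(h) >= 2
    return [math.log(e[i + 1] / e[i]) / math.log(h[i + 1] / h[i])
            for i in range(len(h) - 1)]

# Placeholder data: mesh sizes halved at each refinement and an O(h^2)-like error history.
h = [1 / 4, 1 / 8, 1 / 16, 1 / 32]
e = [2.1e-1, 5.4e-2, 1.4e-2, 3.4e-3]
print(experimental_orders(h, e))  # rates close to 2
```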
We impose the appropriate boundary conditions for both clamped and simply supported boundaries. In the case of the conforming VEM, note that the degrees of freedom include the gradient values at vertices, and \(u(z)=0=\Delta u(z)\) implies \(\nabla u(z)=\vec{0}\) for a corner \(z\) along the boundary \(\Gamma^{s}\). Hence, we have to impose zero gradient values at the corners of the simply supported part in addition to the clamped part.
We take \(\ell=k-1\) in the numerical experiments to obtain the expected optimal convergence rates for both the conforming and nonconforming VEM. See Table 7.1 (resp. 7.2) for \(k=2\) and \(\ell=1\) (resp. for \(k=3\) and \(\ell=2\)). Tables 7.1-7.2 display the errors and the convergence rates on a sequence of Voronoi meshes.

Figure 7.1: Approximation \(u_{h}\) of displacement \(u\) for \(k=2\) (left) and \(p_{h}\) of pressure \(p\) for \(\ell=1\) (right) on a smooth Voronoi mesh of \(400\) elements.
For the adaptive refinement we solve the discrete problem with the conforming (resp. nonconforming) VEM, compute the upper bound \(\eta\) in Theorem 6.2, consider the Dorfler marking strategy with \(\theta=0.5\), and divide a marked polygon into quadrilaterals by connecting its vertices to the centroid of the respective polygon. The same refinement strategy is utilised to divide all the elements in the case of uniform refinement. The additional error estimator component \(\eta_{9}\) from Remark 6.1 is incorporated in the experiment of the nonconforming VEM with degree \(k=3\) and \(\ell=2\).
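The marking step of this adaptive loop can be sketched as follows; this is only an illustration of the Dorfler strategy with \(\theta=0.5\) described above, using hypothetical element indices and squared estimator values, and is not the implementation used for the reported experiments.

```python
def doerfler_marking(eta2, theta=0.5):
    """Return a set M of elements with sum_{K in M} eta2[K] >= theta * sum_K eta2[K].

    eta2 maps an element index K to its squared local estimator eta_K^2; elements
    with the largest contributions are marked first (bulk criterion).
    """
    total = sum(eta2.values())
    marked, acc = [], 0.0
    for K, val in sorted(eta2.items(), key=lambda kv: kv[1], reverse=True):
        marked.append(K)
        acc += val
        if acc >= theta * total:
            break
    return marked

# Hypothetical squared estimator values on four polygons; element 0 alone reaches the bulk.
print(doerfler_marking({0: 0.50, 1: 0.30, 2: 0.15, 3: 0.05}))  # -> [0]
```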
### Example 2: Convergence rates with non-smooth solutions
We consider the L-shaped domain \(\Omega=(-1,1)^{2}\setminus[0,1)^{2}\) and the exact solutions
\[u(r,\theta)=r^{5/3}\sin\Big{(}\frac{5\theta}{3}\Big{)},\quad p(r,\theta)=r^{2 /3}\sin\Big{(}\frac{2\theta}{3}\Big{)}\]
with clamped boundary conditions for \(u\) and Dirichlet boundary condition for \(p\) on \(\partial\Omega\) (observe that we can take Dirichlet boundary condition instead of Neumann for \(p\) on \(\Gamma^{c}\) without affecting the well-posedness and error analysis of the model problem). Since both the displacement \(u\in H^{(8/3)-\epsilon}(\Omega)\) and the pressure \(p\in H^{(5/3)-\epsilon}(\Omega)\) for all \(\epsilon>0\) have corner singularities, the lowest-order scheme \(k=2\) and \(\ell=1\) suffices to achieve the optimal convergence rates with respect to the regularity of \(u\) and \(p\).
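For reference, the singular pair above can be evaluated in Cartesian coordinates as in the following sketch. It is an illustration only: the offset and orientation of the angle \(\theta\) are an assumption here (the text does not fix a convention), so the branch may need to be adapted.

```python
import numpy as np

def exact_u_p(x, y):
    """Evaluate the exact pair (u, p) of Example 2 at Cartesian points (x, y).

    Assumption: theta is measured counterclockwise starting from the positive
    y-axis (one of the two edges meeting at the reentrant corner), so that
    theta ranges over (0, 3*pi/2) inside the L-shaped domain.
    """
    r = np.hypot(x, y)
    theta = np.mod(np.arctan2(y, x) - np.pi / 2, 2 * np.pi)  # assumed branch
    u = r ** (5 / 3) * np.sin(5 * theta / 3)
    p = r ** (2 / 3) * np.sin(2 * theta / 3)
    return u, p

print(exact_u_p(-0.5, 0.5))
```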
When the adaptive algorithm is run, we see more refinement around the singular corner as displayed in Figures 7.4-7.5. Figure 7.6 shows that the method with uniform refinement leads to suboptimal rates, whereas adaptive refinement recovers the optimal convergence rates, and the error estimator mirrors the behaviour of the actual error. We observe from the plots of error estimator components that \(\eta_{7}\) (resp. \(\eta_{8}\)) dominates the remaining contributions for the case of conforming (resp. nonconforming) VEM.

Figure 7.2: Left (resp. right) panel displays NDof vs error in energy norm (resp. error estimator) in both uniform and adaptive refinements for conforming VEM.

Figure 7.3: Left (resp. right) panel displays NDof vs error in energy norm (resp. error estimator) in both uniform and adaptive refinements for nonconforming VEM. |
2306.02745 | Convergence of operators with deficiency indices $(k,k)$ and of their
self-adjoint extensions | We consider an abstract sequence $\{A_n\}_{n=1}^\infty$ of closed symmetric
operators on a separable Hilbert space $\mathcal{H}$. It is assumed that all
$A_n$'s have equal deficiency indices $(k,k)$ and thus self-adjoint extensions
$\{B_n\}_{n=1}^\infty$ exist and are parametrized by partial isometries
$\{U_n\}_{n=1}^\infty$ on $\mathcal{H}$ according to von Neumann's extension
theory. Under two different convergence assumptions on the $A_n$'s we give the
precise connection between strong resolvent convergence of the $B_n$'s and
strong convergence of the $U_n$'s. | August Bjerg | 2023-06-05T09:56:08Z | http://arxiv.org/abs/2306.02745v2 | # Convergence of operators with deficiency indices \((k,k)\) and of their self-adjoint extensions
August Bjerg
Department of Mathematical Sciences, University of Copenhagen,
Universitetsparken 5, DK-2100 Copenhagen Ø, Denmark
[email protected]
**Abstract:** We consider an abstract sequence \(\{A_{n}\}_{n=1}^{\infty}\) of closed symmetric operators on a separable Hilbert space \(\mathcal{H}\). It is assumed that all \(A_{n}\)'s have equal deficiency indices \((k,k)\) and thus self-adjoint extensions \(\{B_{n}\}_{n=1}^{\infty}\) exist and are parametrized by partial isometries \(\{U_{n}\}_{n=1}^{\infty}\) on \(\mathcal{H}\) according to von Neumann's extension theory. Under two different convergence assumptions on the \(A_{n}\)'s we give the precise connection between strong resolvent convergence of the \(B_{n}\)'s and strong convergence of the \(U_{n}\)'s.
## 1 Introduction
We investigate in the following the notion of strong resolvent convergence of sequences of self-adjoint extensions of already specified (unbounded) closed symmetric operators on a Hilbert space. For the general theory on these topics we refer to [2] VIII and [1] X and introduce now the framework in which we will be working for the entirety of this note.
Consider a symmetric and closed operator \(A\) on an infinite dimensional separable Hilbert space \(\mathcal{H}\)1 defined on a dense subspace \(D(A)\). The kernels \(\mathcal{H}_{\mp}:=Z(A^{*}\pm i)\) are the _deficiency subspaces_ and the pair \((\dim\mathcal{H}_{+},\dim\mathcal{H}_{-})\) is the _deficiency indices_. We assume that the latter are equal and finite, i.e. \((\dim\mathcal{H}_{+},\dim\mathcal{H}_{-})=(k,k)\) for some \(k=1,2,\ldots\) (however, see Remark 11). This implies, cf. [1], that \(A\) has self-adjoint extensions, and moreover any self-adjoint extension \(B\) of \(A\) is given by the rule
Footnote 1: We adopt the convention that the inner product on \(\mathcal{H}\) is linear in the second entry
\[D(B)=\{\phi_{0}+\phi_{+}+U\phi_{+}\mid\phi_{0}\in D(A),\;\phi_{+}\in\mathcal{H }_{+}\},\]
\[B(\phi_{0}+\phi_{+}+U\phi_{+})=A\phi_{0}+i\phi_{+}-iU\phi_{+}\]
where \(U\colon\mathcal{H}_{+}\to\mathcal{H}_{-}\) is a unitary map which can be extended to a partial isometry on all of \(\mathcal{H}\) by letting \(U\phi=0\) for \(\phi\in[\mathcal{H}_{+}]^{\perp}\). Conversely, all extensions of \(A\) of this form are self-adjoint.
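Let us note for later use that the rule above immediately gives, for any \(\phi_{+}\in\mathcal{H}_{+}\),

\[(B+i)(\phi_{+}+U\phi_{+})=2i\phi_{+}\qquad\text{and}\qquad(B-i)(\phi_{+}+U\phi_{+})=-2iU\phi_{+},\]

so that \(\|(B+i)(\phi_{+}+U\phi_{+})\|=\|(B-i)(\phi_{+}+U\phi_{+})\|=2\|\phi_{+}\|\) by unitarity of \(U\); we will come back to the first of these identities in Remark 13.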
We introduce now sequences \(\{A_{n}\}_{n=1}^{\infty}\) and \(\{B_{n}\}_{n=1}^{\infty}\) of such operators. That is, the \(A_{n}\)'s are densely defined, symmetric and closed operators on \(\mathcal{H}\) with deficiency subspaces \(\mathcal{H}_{\pm}^{n}\) and deficiency indices \((k,k)\) independent of \(n\), and \(B_{n}\) is a self-adjoint extension of \(A_{n}\) defined by a unitary map \(U_{n}\colon\mathcal{H}_{+}^{n}\to\mathcal{H}_{-}^{n}\) (which can all, once again, be considered as partial isometries on \(\mathcal{H}\)) as described above for each \(n\). In this set-up one should think of \(A\), \(B\) and \(U\) as limiting operators of the sequences of \(A_{n}\)'s, \(B_{n}\)'s and \(U_{n}\)'s respectively and our main goal will be to examine the interplay between the convergence of these
sequences. A very natural question is for example whether we can obtain results along the lines of
\[\text{ ``Suppose }A_{n}\to A\text{. Then }B_{n}\to B\text{ if and only if }U_{n}\to U\text{.''} \tag{1}\]
Of course one needs here to specify which notions of convergence we involve in this statement for it to be mathematically interesting. For the purposes of this note we focus on strong convergence of operators on Hilbert spaces. Hence, \(U_{n}\to U\) should be understood as usual strong convergence of bounded operators and \(B_{n}\to B\) as strong resolvent convergence of self-adjoint unbounded operators, i.e. as strong convergence of \((B_{n}+i)^{-1}\) towards \((B+i)^{-1}\) - for an introduction to the topic and an explanation why this is in some sense the only "right" way of extending the concept of strong convergence to self-adjoint unbounded operators, see [2]. For the \(A_{n}\)'s, however, we cannot use this generalized version of strong convergence since these are not self-adjoint.
This issue will be addressed in section 2. Once this theoretical framework is in place, we will gradually progress towards presenting statements of the form (1) in Corollaries 14 and 15 which can safely be considered the main results of this note. Finally, an exposition on the optimality of these results - in particular of the latter - is included for completeness.
**Example 1**.: As a final note before diving into technical details we mention the structure of a class of motivational examples that illuminates why we even care to search for results like (1). The reader might have other reasons to find results like (1) exciting and can in this case safely skip the following.
Consider a sequence \(\{\widetilde{A}_{n}\}_{n=1}^{\infty}\) of explicitly given symmetric differential operators on an open subset \(\Omega\) of \(\mathbb{R}^{d}\) defined on \(D(\widetilde{A}_{n})=C_{c}^{\infty}(\Omega)\). Now the usual way to realize \(\widetilde{A}_{n}\) as a self-adjoint operator on \(L^{2}(\Omega)=\mathcal{H}\) is the following: Let \(A_{n}\) be the closure of \(\widetilde{A}_{n}\) for each \(n\) and if this is not already self-adjoint extend it by the above procedure to some self-adjoint operator \(B_{n}\). Here we have an example where the sequence \(\{A_{n}\}_{n=1}^{\infty}\) is concretely described and not often subject to change. It describes not only how the \(A_{n}\)'s but also (through the \(A_{n}^{*}\)'s) how the \(B_{n}\)'s should act on their domain, and often it will not be too difficult to prove that \(A_{n}\to A\) for some \(A\) in an appropriate sense. We suppose that this convergence has been established. Moreover, natural examples of sequences of this form will in most cases satisfy the crucial property that all the operators have the same deficiency indices. The deficiency subspaces will be parts of solution spaces of differential equations and usually the \(U_{n}\)'s will be simple maps between such spaces. Hence, we will think of strong convergence of the \(U_{n}\)'s as a property which is a lot easier to handle than the full strong resolvent convergence of the \(B_{n}\)'s.
Now one can envision a couple of situations: If a sequence of \(B_{n}\)'s is known, (1) could help us determine a self-adjoint extension \(B\) of \(A\) so that \(B_{n}\to B\) in the strong resolvent sense. One needs only to find the strong limit of the \(U_{n}\)'s (if this exists) and use this to extend \(A\). If the strong limit of the \(U_{n}\)'s does not exists then the result will conversely tell us that the \(B_{n}\)'s do not converge towards any self-adjoint extension of \(A\). On the other hand it could be that \(B\) was a fixed self-adjoint extension of \(A\) and the result could in the same manner be used to find a sequence of \(B_{n}\)'s which extends the \(A_{n}\)'s and converge towards \(B\) in the strong resolvent sense - or whether such sequence exists at all.
## 2 Strong graph convergence and convergence of graph projections
Now some candidates for types of convergences for the \(A_{n}\)'s in (1) are treated. Along the way we introduce the key machinery needed for both formulating and proving our main results, and thus the present section will serve as our invaluable technical toolbox for the rest of this note.
Firstly we need to introduce a particular notion of convergence of subspaces of a Hilbert space.
**Definition 2**.: _Let \(\{V_{n}\}_{n=1}^{\infty}\) be a sequence of subspaces of a Hilbert space \(\mathcal{H}\). The subspace_
\[V_{\infty}:=\left\{x\in\mathcal{H}\left|\begin{array}{c}\text{There exists a sequence }\{x_{n}\}_{n=1}^{\infty}\subseteq\mathcal{H}\text{ with }\\ x_{n}\in V_{n}\text{ for each }n\text{ so that }x_{n}\to x\text{ as }n\to\infty\end{array}\right.\right\}\]
_is called the strong limit of \(\{V_{n}\}_{n=1}^{\infty}\) and we write \(V_{n}\to V_{\infty}\) strongly._
One should not be misled by the fact that we call this type of convergence "strong". We note that _any_ sequence of subspaces has a limit in the above sense (although it might be the trivial \(0\)-subspace) which already suggests that this way of converging cannot be a particularly strong one. The word "strong" merely refers to the fact that \(\{x_{n}\}_{n=1}^{\infty}\) should converge towards \(x\) strongly, i.e. with respect to the Hilbert space norm.
Another notion of convergence of sequences of closed subspaces of a Hilbert space is that of the orthogonal projections onto these converging strongly towards the orthogonal projection onto a limiting subspace. In fact, this is generally a stronger notion of convergence of subspaces than the above "strong" convergence.
**Lemma 3**.: _Let \(\{V_{n}\}_{n=1}^{\infty}\) be a sequence of closed subspaces of a Hilbert space \(\mathcal{H}\) and denote the orthogonal projections onto these by \(\{P_{n}\}_{n=1}^{\infty}\). Denote similarly by \(P\) the orthogonal projection onto another subspace \(V\subseteq\mathcal{H}\)._
1. \(V\) _is contained in the strong limit of_ \(\{V_{n}\}_{n=1}^{\infty}\) _if and only if_ \(P_{n}x\to x=Px\) _for all_ \(x\in V\)_._
2. _If_ \(P_{n}\to P\) _strongly then_ \(V\) _is the strong limit of_ \(\{V_{n}\}_{n=1}^{\infty}\)_._
Proof.: (a): Assume on the one hand that \(V\) is contained in the strong limit of \(\{V_{n}\}_{n=1}^{\infty}\). Then, for any \(x\in V\), there exists a sequence \(\{x_{n}\}_{n=1}^{\infty}\subseteq\mathcal{H}\) with \(x_{n}\in V_{n}\) for all \(n\) so that \(x_{n}\to x\). Hence, \(\|P_{n}x-x\|\leq\|x_{n}-x\|\longrightarrow 0\) as needed. The other implication is clear if one considers the sequence \(\{P_{n}x\}_{n=1}^{\infty}\) for each \(x\in V\).
(b): Assume \(P_{n}\to P\) strongly and denote by \(V_{\infty}\) the strong limit of \(\{V_{n}\}_{n=1}^{\infty}\). By (a) we need only to argue that \(V_{\infty}\subseteq V\) or equivalently \(V^{\perp}\subseteq V_{\infty}^{\perp}\). However, if \(y\in V^{\perp}\) then for any \(x\in V_{\infty}\) we can choose the usual sequence \(\{x_{n}\}_{n=1}^{\infty}\) and obtain
\[\langle y,x\rangle=\lim_{n\to\infty}\langle(1-P_{n})y,x_{n}\rangle=0\]
proving \(y\in V_{\infty}^{\perp}\) as needed.
**Remark 4**.: While Lemma 3(b) shows that convergence of projections is a stronger type of convergence than "strong" convergence in the sense of Definition 2, the following example shows that it is actually _strictly_ stronger - a fact which will be important later on.
Consider a sequence \(\{V_{n}\}_{n=1}^{\infty}\) of subspaces of a Hilbert space \(\mathcal{H}\) of the form \(V_{n}=[\mathbb{C}x_{n}]^{\perp}\) where \(x_{n}\in\mathcal{H}\) is of unit length and denote by \(V_{\infty}\) the strong limit of this sequence. Suppose that \(x_{n}=x_{0}\) is fixed for \(n\) odd and \(x_{n}=y_{n}\) for \(n\) even where \(\{y_{n}\}_{n=1}^{\infty}\) is a sequence which is weakly convergent towards \(0\). Now \(x_{0}\notin V_{\infty}\) since for \(n\) odd we have \(\operatorname{dist}(V_{n},x_{0})=1\). If, however, \(x\in[\mathbb{C}x_{0}]^{\perp}\) then we can consider the sequence \(z_{n}\) which is \(x\in V_{n}\) for \(n\) odd and \(x-\langle y_{n},x\rangle y_{n}\in V_{n}\) for \(n\) even. As \(\langle y_{n},x\rangle\to 0\) we see that \(z_{n}\to x\) proving \(x\in V_{\infty}\). We conclude that \(V_{\infty}=[\mathbb{C}x_{0}]^{\perp}\).
On the other hand the orthogonal projections \(P_{n}\) onto the \(V_{n}\)'s do not converge strongly at all. In particular \(P_{n}x_{0}\) is \(0\) for \(n\) odd and \(x_{0}-\langle y_{n},x_{0}\rangle y_{n}\to x_{0}\) for \(n\) even. \(\blacksquare\)
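A finite-dimensional caricature of this phenomenon can also be checked numerically. The Python sketch below is an illustration only: the "weakly null" sequence is realized by standard basis vectors in a large but finite dimension, and all names and data are ad hoc. It shows that \(\operatorname{dist}(x,V_{n})\to 0\) for a vector \(x\perp x_{0}\), while \(\|P_{n}x_{0}\|\) keeps jumping between \(0\) and \(1\), so the projections cannot converge strongly.

```python
import numpy as np

N = 500                         # finite-dimensional stand-in for the Hilbert space
x0 = np.zeros(N); x0[0] = 1.0   # the fixed unit vector x_0

def x_n(n):
    """x_n = x_0 for n odd; x_n = e_n (a unit vector 'going weakly to 0') for n even."""
    if n % 2 == 1:
        return x0
    v = np.zeros(N); v[min(n, N - 1)] = 1.0
    return v

def proj_Vn(n, v):
    """Orthogonal projection of v onto V_n = (C x_n)^perp."""
    xn = x_n(n)
    return v - np.dot(xn, v) * xn

# A vector orthogonal to x_0 with a slowly decaying tail.
x = np.zeros(N); x[1:] = 1.0 / np.arange(2, N + 1); x /= np.linalg.norm(x)

for n in [4, 5, 40, 41, 400, 401]:
    print(n, np.linalg.norm(x - proj_Vn(n, x)), np.linalg.norm(proj_Vn(n, x0)))
# dist(x, V_n) shrinks towards 0 along even n (and vanishes for odd n),
# while ||P_n x_0|| alternates between 1 and 0, so P_n has no strong limit.
```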
Letting operators once again enter the picture we can now easily define a notion of convergence of any sequence of operators on a Hilbert space: The strong graph convergence which is also treated in [2].
**Definition 5**.: _Let \(\{A_{n}\}_{n=1}^{\infty}\) be any sequence of operators on a fixed Hilbert space \(\mathcal{H}\). If the graphs \(\operatorname{Gr}(A_{n})\) converge strongly towards the graph \(\operatorname{Gr}(A)\) of some operator \(A\) on \(\mathcal{H}\) as subspaces of \(\mathcal{H}\oplus\mathcal{H}\) then we say that \(A\) is the strong graph limit of the \(A_{n}\)'s and write \(A=\operatorname{str.gr.lim}A_{n}\)._
Let us return to the case of a sequence of densely defined and closed operators \(\{A_{n}\}_{n=1}^{\infty}\) for the remaining part of the section and fix once and for all the following convenient notation: By \(\Gamma_{\infty}\) we mean the strong limit of \(\{\operatorname{Gr}(A_{n})\}_{n=1}^{\infty}\) and by \(\Gamma_{\infty}^{*}\) the strong limit of \(\{\operatorname{Gr}(A_{n}^{*})\}_{n=1}^{\infty}\). Note that \((\phi,\psi)\in\Gamma_{\infty}\) if and only if there exists a sequence \(\{\phi_{n}\}_{n=1}^{\infty}\subseteq\mathcal{H}\) such that both \(\phi_{n}\to\phi\) and \(A_{n}\phi_{n}\to\psi\), and we have a similar characterization of \(\Gamma_{\infty}^{*}\). We can now present some basic properties of these subspaces.
**Lemma 6**.: _Suppose the \(A_{n}\)'s are as described above and let \(A\) be an operator with the same properties._
1. _If_ \(\operatorname{Gr}(A)\subseteq\Gamma_{\infty}\) _then_ \(\Gamma_{\infty}^{*}\subseteq\operatorname{Gr}(A^{*})\)_._
2. _If_ \(\operatorname{Gr}(A)\subseteq\Gamma_{\infty}\) _and_ \(\operatorname{Gr}(A^{*})\subseteq\Gamma_{\infty}^{*}\) _then_ \(\operatorname{Gr}(A)=\Gamma_{\infty}\) _and_ \(\operatorname{Gr}(A^{*})=\Gamma_{\infty}^{*}\)_._
3. _If moreover the_ \(A_{n}\)_'s are symmetric and_ \(A\) _is self-adjoint then_ \(A=\operatorname{str.gr.lim}A_{n}\) _if and only if_ \(\operatorname{Gr}(A)\subseteq\Gamma_{\infty}\)_._
_Proof._ (a): Take \((\phi,\psi)\in\Gamma_{\infty}^{*}\) arbitrary and a corresponding sequence \(\{\phi_{n}\}_{n=1}^{\infty}\) with \(\phi_{n}\in D(A_{n}^{*})\) so that \(\phi_{n}\to\phi\) and \(A_{n}^{*}\phi_{n}\to\psi\). Now for any \(\eta\in D(A)\) there exists a sequence \(\{\eta_{n}\}_{n=1}^{\infty}\) with \(\eta_{n}\in D(A_{n})\) so that \(\eta_{n}\to\eta\) and \(A_{n}\eta_{n}\to A\eta\). Using these sequences we see that
\[\langle\phi,A\eta\rangle=\lim_{n\to\infty}\langle\phi_{n},A_{n}\eta_{n}\rangle= \lim_{n\to\infty}\langle A_{n}^{*}\phi_{n},\eta_{n}\rangle=\langle\phi,\eta\rangle\]
proving that \(\phi\in D(A^{*})\) and \(A^{*}\phi=\psi\) as needed.
(b): This is a simple application of (a) and the fact that \(T^{**}=T\) for any closed operator \(T\).
(c): We need only to prove that \(\operatorname{Gr}(A)\subseteq\Gamma_{\infty}\) implies \(\Gamma_{\infty}\subseteq\operatorname{Gr}(A)\). This is seen by the inclusions \(\Gamma_{\infty}\subseteq\Gamma_{\infty}^{*}\) (by symmetry of the \(A_{n}\)'s) and \(\Gamma_{\infty}^{*}\subseteq\operatorname{Gr}(A^{*})=\operatorname{Gr}(A)\) (by (a) and self-adjointness of \(A\)).
The connection to convergence of the projections onto the graphs of the \(A_{n}\)'s is now given in the below proposition which concludes this technical section. It tells us that the difference between strong graph convergence and strong convergence of the sequence of graph projections is measured by the absence of strong graph convergence of the sequence of adjoint operators.
**Proposition 7**.: _Denote by \(P_{n}\) and \(P\) the orthogonal projections in \(\mathcal{H}\oplus\mathcal{H}\) onto \(\operatorname{Gr}(A_{n})\) and \(\operatorname{Gr}(A)\) respectively. Then \(P_{n}\to P\) strongly if and only if both \(\operatorname{Gr}(A)=\Gamma_{\infty}\) and \(\operatorname{Gr}(A^{*})=\Gamma_{\infty}^{*}\) (or equivalently if and only if both \(\operatorname{Gr}(A)\subseteq\Gamma_{\infty}\) and \(\operatorname{Gr}(A^{*})\subseteq\Gamma_{\infty}^{*}\), cf. Lemma 6(b) )._
Proof.: We will use the standard fact, see for example [4], that
\[\mathcal{H}\oplus\mathcal{H}=\operatorname{Gr}(T)\oplus W\operatorname{Gr}(T^{ *}) \tag{2}\]
for any densely defined and closed operator \(T\) on \(\mathcal{H}\) where the sum is orthogonal and \(W\) is the unitary map \((\phi,\psi)\mapsto(-\psi,\phi)\).
Now if \(P_{n}\to P\) strongly then \(\operatorname{Gr}(A)=\Gamma_{\infty}\) by Lemma 3(b). Also, \(1-P_{n}\to 1-P\) strongly so that similarly \(W\operatorname{Gr}(A_{n}^{*})\to W\operatorname{Gr}(A^{*})\) strongly by the decomposition (2). It is an easy exercise to check that this is equivalent to \(\operatorname{Gr}(A^{*})=\Gamma_{\infty}^{*}\).
If, on the other hand, \(\operatorname{Gr}(A)=\Gamma_{\infty}\) and \(\operatorname{Gr}(A^{*})=\Gamma_{\infty}^{*}\) then also \(W\operatorname{Gr}(A_{n}^{*})\to W\operatorname{Gr}(A^{*})\) strongly. Using this we get by Lemma 3(a) that \(P_{n}x\to Px\) for any \(x\in\operatorname{Gr}(A)\) and, by using additionally (2), \((1-P_{n})y\to(1-P)y\) for any \(y\in W\operatorname{Gr}(A^{*})\). Combining these convergences we conclude that \(P_{n}z\to Pz\) for any \(z\in\mathcal{H}\oplus\mathcal{H}\).
## 3 Main results
From the previous section we obtain two candidates for convergence type to impose on the \(A_{n}\)'s in (1): Strong graph convergence and strong convergence of graph projections. That these are actually both natural choices is illuminated by the result below which states that for sequences of self-adjoint operators each of them is equivalent to strong resolvent convergence - exactly the convergence type we seek! This result is well established, cf. [5] (and [2] for a partial result), but it is not standard in most textbooks.
**Theorem 8**.: _Let \(\{B_{n}\}_{n=1}^{\infty}\) be a sequence of self-adjoint operators on a Hilbert space \(\mathcal{H}\) and \(B\) another self-adjoint operator on \(\mathcal{H}\). Let further \(Q_{n}\) and \(Q\) be the orthogonal projections onto \(\operatorname{Gr}(B_{n})\) and \(\operatorname{Gr}(B)\) respectively and denote by \(\Gamma_{\infty}^{B}\) the strong limit of \(\{\operatorname{Gr}(B_{n})\}_{n=1}^{\infty}\). The following statements are equivalent:_
1. \(B_{n}\to B\) _in the strong resolvent sense,_
2. \(B=\operatorname{str.gr.lim}B_{n}\) _(i.e._ \(\operatorname{Gr}(B)=\Gamma_{\infty}^{B}\)_),_
3. \(\operatorname{Gr}(B)\subseteq\Gamma_{\infty}^{B}\)_,_
4. \(Q_{n}\to Q\) _strongly._
Proof.: When using the self-adjointness of the operators, the equivalence between (ii) and (iii) is Lemma 6(c) and the equivalence between (ii) and (iv) is Proposition 7.
Suppose \(B_{n}\to B\) in the strong resolvent sense and let \(\phi\in D(B)\) be arbitrary. By self-adjointness the relation \(\psi=(B_{n}+i)\phi_{n}=(B+i)\phi\) defines besides the \(\psi\in\mathcal{H}\) also for each \(n\) a \(\phi_{n}\in D(B_{n})\). Moreover,
\[\|(\phi_{n},B_{n}\phi_{n})-(\phi,B\phi)\|^{2} =\|\phi_{n}-\phi\|^{2}+\|B_{n}\phi_{n}-B\phi\|^{2}=\|\phi_{n}-\phi \|^{2}+\|i\phi-i\phi_{n}\|^{2}\] \[=2\|(B_{n}+i)^{-1}\psi-(B+i)^{-1}\psi\|^{2}\longrightarrow 0\]
which proves that (i) implies (iii).
Suppose finally that \(\operatorname{Gr}(B)\subseteq\Gamma_{\infty}^{B}\) and let \(\psi\in\mathcal{H}\) be arbitrary. Since \(\mathcal{H}=R(B+i)\) we have \(\psi=(B+i)\phi\) for some \(\phi\in D(B)\) and by the assumption there exists a sequence \(\{\phi_{n}\}_{n=1}^{\infty}\subseteq\mathcal{H}\) so that \((\phi_{n},B_{n}\phi_{n})\rightarrow(\phi,B\phi)\). Hence,
\[[(B_{n}+i)^{-1}-(B+i)^{-1}]\psi=(B_{n}+i)^{-1}[(B+i)\phi-(B_{n}+i)\phi_{n}]- \phi+\phi_{n}\longrightarrow 0\]
where we use the fact that \(\|(B_{n}+i)^{-1}\|\leq 1\) for all \(n\). This proves that (iii) implies (i) and thus the full theorem.
We observe that though (ii)-(iv) in Theorem 8 are equivalent for sequences of self-adjoint operators, Proposition 7 tells us that (iii) is the weakest of these properties in general (since clearly (ii) is stronger than (iii)). In fact it is weak enough to be a consequence of pointwise convergence on a common core of the sequence and is thus easy to verify for, for example, differential operators, see Proposition 17 and Example 18. Thus, the first question we examine will be the following: If we impose the condition (iii) on the sequence \(\{A_{n}\}_{n=1}^{\infty}\) what more do we need in order for it to hold for the sequence \(\{B_{n}\}_{n=1}^{\infty}\) of extensions (thus yielding strong resolvent convergence)? The answer is straightforward, but in applications it can be useful even in this raw form. Recall that in our set-up the operators \(A\) and \(A_{n}\) have deficiency indices \((k,k)\).
**Corollary 9**.: _Suppose \(\operatorname{Gr}(A)\subseteq\Gamma_{\infty}\). If there moreover for every pair (or, equivalently, for \(k\) linearly independent pairs) \((\phi,B\phi)\) from the orthogonal complement of \(\operatorname{Gr}(A)\) inside the Hilbert space \(\operatorname{Gr}(B)\) exists a sequence \(\{\phi_{n}\}_{n=1}^{\infty}\subseteq\mathcal{H}\) so that \(\phi_{n}\in D(B_{n})\) for all \(n\) and \((\phi_{n},B_{n}\phi_{n})\rightarrow(\phi,B\phi)\) then \(B_{n}\to B\) in the strong resolvent sense._
Proof.: Denoting the strong limit of \(\{\mathrm{Gr}(B_{n})\}_{n=1}^{\infty}\) by \(\Gamma_{\infty}^{B}\) it is basically the assumption above that the orthogonal complement of \(\mathrm{Gr}(A)\) inside \(\mathrm{Gr}(B)\) is contained in \(\Gamma_{\infty}^{B}\). Moreover, \(\mathrm{Gr}(A)\subseteq\Gamma_{\infty}\subseteq\Gamma_{\infty}^{B}\) since all the \(B_{n}\)'s are extensions of the \(A_{n}\)'s. This concludes the proof. For the fact that it suffices to consider \(k\) linearly independent pairs, see the first couple of lines of the proof of Theorem 12.
For the remaining part of this section we will have the statement (1) as our guiding light and try to formulate and prove such results to the best of our ability with the different notions of convergence of the \(A_{n}\)'s introduced above. Before diving into these considerations, it will be essential to have the following characterization of strong convergence of the \(U_{n}\)'s at our disposal.
**Lemma 10**.: _We have \(U_{n}\to U\) strongly if and only if the following statement is true:_
\[\begin{array}{c}\text{For each $\phi_{+}\in\mathcal{H}_{+}$there exists a sequence $\{\phi_{+}^{n}\}_{n=1}^{\infty}\subseteq\mathcal{H}$ so that}\\ \phi_{+}^{n}\in\mathcal{H}_{+}^{n}\text{ for all $n$ and $(\phi_{+}^{n},U_{n}\phi_{+}^{n}) \rightarrow(\phi_{+},U\phi_{+})$.}\end{array} \tag{3}\]
Note that the condition (3) actually says that the strong limit of the graphs of the \(U_{n}\)'s considered as operators only on \(\mathcal{H}_{+}^{n}\) should contain the corresponding graph of \(U\).
Proof (of Lemma 10).: Observe firstly that if \(\psi_{n}\rightarrow\psi\) then the inequalities
\[\|U_{n}\psi_{n}-U\psi\|\leq\|\psi_{n}-\psi\|+\|U_{n}\psi-U\psi\|\leq 2\|\psi_{n} -\psi\|+\|U_{n}\psi_{n}-U\psi\|\]
show that
\[U_{n}\psi_{n}\to U\psi\quad\text{if and only if}\quad U_{n}\psi \to U\psi. \tag{4}\]
For each \(n\) denote by \(P_{n}\) the orthogonal projection onto \(\mathcal{H}_{+}^{n}\) and by \(P\) the orthogonal projection onto \(\mathcal{H}_{+}\). Assume \(U_{n}\to U\) strongly. Then, for any \(\phi_{+}\in\mathcal{H}_{+}\) and \(\psi\in\mathcal{H}\), we have
\[\langle P_{n}\phi_{+},\psi\rangle=\langle U_{n}^{*}U_{n}\phi_{+},\psi\rangle= \langle U_{n}\phi_{+},U_{n}\psi\rangle\longrightarrow\langle U\phi_{+},U\psi \rangle=\langle P\phi_{+},\psi\rangle=\langle\phi_{+},\psi\rangle\]
so that \(P_{n}\phi_{+}\rightarrow\phi_{+}\) weakly in \(\mathcal{H}\). As further
\[\|\phi_{+}\|\leq\liminf_{n\rightarrow\infty}\|P_{n}\phi_{+}\|\leq\limsup_{n \rightarrow\infty}\|P_{n}\phi_{+}\|\leq\|\phi_{+}\|\]
by lower semi-continuity of the norm it is apparent that additionally \(\|P_{n}\phi_{+}\|\rightarrow\|\phi_{+}\|\) and consequently \(P_{n}\phi_{+}\rightarrow\phi_{+}=P\phi_{+}\) with respect to the Hilbert space norm. Since this is true for any \(\phi_{+}\in\mathcal{H}_{+}\), Lemma 3(a) tells us that the strong limit of \(\{\mathcal{H}_{+}^{n}\}_{n=1}^{\infty}\) is contained in \(\mathcal{H}_{+}\) and together with (4) and the assumption that \(U_{n}\to U\) strongly this verifies the condition (3).
Suppose now that (3) is satisfied. For any \(\phi_{+}\in\mathcal{H}_{+}\) we can choose the sequence from this condition and (4) implies that \(U_{n}\phi_{+}\to U\phi_{+}\). For proving convergence of \(U_{n}\psi\) for \(\psi\in[\mathcal{H}_{+}]^{\perp}\) fix such \(\psi\) and consider an orthogonal basis \(\phi_{+,1},\ldots\phi_{+,k}\) of \(\mathcal{H}_{+}\). By (3) there exist sequences \(\{\phi_{+,1}^{n}\}_{n=1}^{\infty},\ldots,\{\phi_{+,k}^{n}\}_{n=1}^{\infty} \subseteq\mathcal{H}\) with \(\phi_{+,\ell}^{n}\in\mathcal{H}_{+}^{n}\) for all \(n\) and \(\ell=1,\ldots,k\) and \(\phi_{+,\ell}^{n}\rightarrow\phi_{+,\ell}\) for all \(\ell\). Now by applying the Gram-Schmidt process to \(\{\phi_{+,1}^{n},\ldots,\phi_{+,k}^{n}\}\) for
each \(n\) it should not be a difficult exercise to check that we can assume that these form an orthonormal basis for \(\mathcal{H}_{+}^{n}\) for each \(n\). Thus,
\[[\mathcal{H}_{+}^{n}]^{\perp}\ni\psi_{n}:=\psi-\sum_{\ell=1}^{k}\langle\phi_{+, \ell}^{n}\,,\psi\rangle\phi_{+,\ell}^{n}\longrightarrow\psi,\]
and, since \(U_{n}\psi_{n}=0=U\psi\) for all \(n\), a final application of (4) proves that \(U_{n}\psi\to U\psi\).
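The stability of the Gram-Schmidt process used in this last step can be illustrated concretely; the following Python sketch (an illustration only, in a finite-dimensional stand-in with ad hoc data) orthonormalizes frames converging entrywise to a fixed linearly independent frame and shows that the resulting orthonormal systems converge as well.

```python
import numpy as np

def gram_schmidt(frame):
    """Orthonormalize the columns of `frame` by the classical Gram-Schmidt process."""
    Q = []
    for v in frame.T:
        w = v - sum(np.dot(q, v) * q for q in Q)
        Q.append(w / np.linalg.norm(w))
    return np.array(Q).T

rng = np.random.default_rng(0)
limit = rng.standard_normal((6, 2))                      # a limiting frame of k = 2 vectors
for n in [1, 10, 100, 1000]:
    frame_n = limit + rng.standard_normal((6, 2)) / n    # frames converging to `limit`
    diff = np.linalg.norm(gram_schmidt(frame_n) - gram_schmidt(limit))
    print(n, diff)   # the orthonormalized frames approach that of the limit as n grows
```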
**Remark 11**.: Lemma 10 is actually the main reason why we assume that the deficiency indices of the \(A_{n}\)'s are finite, since then we can simply restate the condition (3) as strong convergence of the \(U_{n}\)'s - which is exactly the kind of formulation we seek. If one, in the case of infinite deficiency indices, replaces "\(U_{n}\to U\) strongly" with (3) then the remaining results of this note indeed remain valid. One can realize that these conditions are truly different in the infinite case by taking the \(U_{n}\)'s and the \(U\) to be projections and recalling the content of Remark 4.
While this description of strong convergence of the \(U_{n}\)'s does not at first sight simplify things, the fact that it is so closely related to the definition of strong graph convergence gives us hope that we can actually apply our theory from section 2 via Theorem 8. As we are about to show this is indeed the case, and we are actually in a position to state and prove the main theoretical statement of this note.
**Theorem 12**.: _Using the notation from Proposition 7, the following holds:_
1. _If_ \(\operatorname{Gr}(A)\subseteq\Gamma_{\infty}\) _and_ \(U_{n}\to U\) _strongly then_ \(B_{n}\to B\) _in the strong resolvent sense and_ \(P_{n}\to P\) _strongly._
2. _If_ \(B_{n}\to B\) _in the strong resolvent sense and_ \(P_{n}\to P\) _strongly then_ \(U_{n}\to U\) _strongly._
Proof.: (a): Recall that, cf. [1],
\[\operatorname{Gr}(B)=\operatorname{Gr}(A)\oplus\{(\phi_{+}+U\phi_{+},i\phi_{+ }-iU\phi_{+})\,|\,\phi_{+}\in\mathcal{H}_{+}\},\]
where the sum is orthogonal, from which the \(k\)-dimensional orthogonal complement of \(\operatorname{Gr}(A)\) in \(\operatorname{Gr}(B)\) is apparent. Applying Lemma 10 we can for any \(\phi_{+}\in\mathcal{H}_{+}\) find \(\{\phi_{+}^{n}\}_{n=1}^{\infty}\subseteq\mathcal{H}\) so that \(\phi_{+}^{n}\in\mathcal{H}_{+}^{n}\) for all \(n\) and
\[(\phi_{+}^{n}+U_{n}\phi_{+}^{n},i\phi_{+}^{n}-iU_{n}\phi_{+}^{n})\longrightarrow (\phi_{+}+U\phi_{+},i\phi_{+}-iU\phi_{+}).\]
Hence, Corollary 9 implies \(B_{n}\to B\) in the strong resolvent sense. Likewise we have also
\[\operatorname{Gr}(A^{*})=\operatorname{Gr}(A)\oplus\{(\phi_{+},i\phi_{+})\,| \,\phi_{+}\in\mathcal{H}_{+}\}\oplus\{(U\phi_{+},-iU\phi_{+})\,|\,\phi_{+}\in \mathcal{H}_{+}\}\]
and a similar application of Lemma 10 tells us that \(\operatorname{Gr}(A^{*})\subseteq\Gamma_{\infty}^{*}\). By invoking Proposition 7 we get thus additionally \(P_{n}\to P\) strongly.
(b): We note that by Theorem 8 (and using the notation herein) we have \(Q_{n}\to Q\) strongly, and consequently \(Q_{n}-P_{n}\to Q-P\) strongly. Now \(Q_{n}-P_{n}\) is the orthogonal
projection onto the orthogonal complement of \(\operatorname{Gr}(A_{n})\) inside \(\operatorname{Gr}(B_{n})\) and similarly for \(Q-P\). But we have just seen in the proof of (a) that these are exactly
\[\{(\phi_{+}^{n}+U_{n}\phi_{+}^{n},i\phi_{+}^{n}-iU_{n}\phi_{+}^{n})\mid\phi_{+}^ {n}\in\mathcal{H}_{+}^{n}\}\quad\text{and}\quad\{(\phi_{+}+U\phi_{+},i\phi_{+} -iU\phi_{+})\mid\phi_{+}\in\mathcal{H}_{+}\}\]
respectively. Hence, Lemma 3(b) tells us that for each \(\phi_{+}\in\mathcal{H}_{+}\) there exists a sequence \(\{\phi_{+}^{n}\}_{n=1}^{\infty}\subseteq\mathcal{H}\) so that \(\phi_{+}^{n}\in\mathcal{H}_{+}^{n}\) for all \(n\) and
\[(\phi_{+}^{n}+U_{n}\phi_{+}^{n},i\phi_{+}^{n}-iU_{n}\phi_{+}^{n})\longrightarrow (\phi_{+}+U\phi_{+},i\phi_{+}-iU\phi_{+}).\]
By taking linear combinations of the entries we see that for this sequence \(\phi_{+}^{n}\to\phi_{+}\) and \(U_{n}\phi_{+}^{n}\to U\phi_{+}\), and to wrap things up Lemma 10 yields the claimed strong convergence of the \(U_{n}\)'s towards \(U\) as needed.
**Remark 13**.: There is a more transparent way of proving \(B_{n}\to B\) in the strong resolvent sense in Theorem 12(a) than the one presented above which in particular avoids the use of Corollary 9 and hence of Theorem 8. We choose to present this here as it might additionally strengthen the readers intuition on why this convergence must hold. Define the subspace \(V:=\{\phi_{+}+U\phi_{+}\mid\phi_{+}\in\mathcal{H}_{+}\}\) in \(\mathcal{H}\) and write
\[\mathcal{H}=R(B+i)=R(A+i)+R(B|_{V}+i).\]
Now since we assume \(\operatorname{Gr}(A)\subseteq\Gamma_{\infty}\) the convergence of \((B_{n}+i)^{-1}\) towards \((B+i)^{-1}\) on \(R(A+i)\) is proved as in (iii)\(\Rightarrow\)(i) in Theorem 8. Notice then that
\[(B+i)(\phi_{+}+U\phi_{+})=2i\phi_{+}\qquad\text{and}\qquad(B_{n}+i)(\phi_{+}^{ n}+U_{n}\phi_{+}^{n})=2i\phi_{+}^{n}\]
for any \(\phi_{+}\in\mathcal{H}_{+}\) and \(\phi_{+}^{n}\in\mathcal{H}_{+}^{n}\). This proves that \(R(B|_{V}+i)=\mathcal{H}_{+}\), and for each \(\phi_{+}\in\mathcal{H}_{+}\) we can use Lemma 10 to find a sequence \(\{\phi_{+}^{n}\}_{n=1}^{\infty}\subseteq\mathcal{H}\) so that \(\phi_{+}^{n}\in\mathcal{H}_{+}^{n}\) for each \(n\) and
\[\|(B_{n}+i)^{-1}\phi_{+}-(B+ i)^{-1}\phi_{+}\|\] \[\leq\|(B_{n}+i)^{-1}\phi_{+}-(B_{n}+i)^{-1}\phi_{+}^{n}\|+\|(B_{n }+i)^{-1}\phi_{+}^{n}-(B+i)^{-1}\phi_{+}\|\] \[\leq\|\phi_{+}-\phi_{+}^{n}\|+\frac{1}{2}\|(\phi_{+}^{n}+U_{n}\phi _{+}^{n})-(\phi_{+}+U\phi_{+})\|\longrightarrow 0.\qed\]
As previously proclaimed we can now use Theorem 12 to prove various statements of the form (1). Taking \(A_{n}\to A\) to be in terms of strong convergence of the orthogonal projections onto the graphs, i.e. \(P_{n}\to P\) strongly, we have also \(\operatorname{Gr}(A)\subseteq\Gamma_{\infty}\) due to Proposition 7, and thus we get a particularly clean statement.
**Corollary 14**.: _Suppose \(P_{n}\to P\) strongly. Then \(B_{n}\to B\) in the strong resolvent sense if and only if \(U_{n}\to U\) strongly. _
The downside of Corollary 14 is, however, that the condition \(P_{n}\to P\) strongly is often not easy to verify in concrete cases. This encourages us to take the convergence of the \(A_{n}\)'s to be in the sense that \(\operatorname{Gr}(A)\subseteq\Gamma_{\infty}\). We note that this is a strictly weaker notion of convergence than strong convergence of the graph projections, so one should not expect the corresponding statement to be as simple as in Corollary 14. Another application of Proposition 7 yields:
**Corollary 15**.: _Suppose \(\operatorname{Gr}(A)\subseteq\Gamma_{\infty}\). Then \(U_{n}\to U\) strongly if and only if both \(B_{n}\to B\) in the strong resolvent sense and \(\operatorname{Gr}(A^{*})\subseteq\Gamma_{\infty}^{*}\). \(\Box\)_
An obvious question now arises: Is this the best we can do? In particular, in the light of Corollary 14, we can ask whether the condition \(\operatorname{Gr}(A^{*})\subseteq\Gamma_{\infty}^{*}\) in Corollary 15 is actually needed. As a matter of fact it is, as the following observations show.
**Remark 16**.: We do not in general have the result "Suppose \(\operatorname{Gr}(A)\subseteq\Gamma_{\infty}\). Then \(U_{n}\to U\) strongly if and only if \(B_{n}\to B\) in the strong resolvent sense." as the example below shows. Even changing \(\operatorname{Gr}(A)\subseteq\Gamma_{\infty}\) to \(A=\operatorname{str.gr.lim}A_{n}\) does not make the statement true. Hence, we see that for these applications the difference between strong graph convergence and strong convergence of graph projections is essential - unlike in the case of self-adjoint operators. The backbone of the example is the extension theory for a well-studied class of operators on \(L^{2}(\mathbb{R}^{3})=\mathcal{H}\). This is treated in for example [3] to which we refer for the details.
Let \(\{y_{n}\}_{n=1}^{\infty}\subseteq\mathbb{R}^{3}\) be a sequence yet to be specified and define for each \(n\) the operator \(A_{n}\) to be the closure of \(-\Delta\) on \(C_{c}^{\infty}(\mathbb{R}^{3}\backslash\{y_{n}\})\). One can now find the deficiency subspaces
\[\mathcal{H}_{\pm}^{n}=\mathbb{C}\phi_{\pm}^{n},\qquad\phi_{\pm}^{n}(x)=\frac{e ^{i\sqrt{\pm i}|x-y_{n}|}}{4\pi|x-y_{n}|}\]
where \(\operatorname{Im}\sqrt{\pm i}>0\). Moreover, if one defines a self-adjoint extension \(B_{n}\) of \(A_{n}\) by the unitary map \(U_{n}\colon\mathcal{H}_{+}^{n}\ni\phi_{+}^{n}\mapsto-\phi_{-}^{n}\in\mathcal{ H}_{-}^{n}\) as in section 1 then \(B_{n}=B\) is actually the free Laplacian \(-\Delta\) defined on the Sobolev space \(H^{2}(\mathbb{R}^{3})\)_independently_ of \(n\). Now we have the orthogonal decomposition
\[\operatorname{Gr}(B)=\operatorname{Gr}(A_{n})\oplus\mathbb{C}(\phi_{+}^{n}- \phi_{-}^{n},i\phi_{+}^{n}+i\phi_{-}^{n})=:\operatorname{Gr}(A_{n})\oplus \mathbb{C}v_{n},\]
and consequently \(\operatorname{Gr}(A_{n})\) is the orthogonal complement of \(\mathbb{C}v_{n}\) in \(\operatorname{Gr}(B)\) for each \(n\). Notice now that the \(v_{n}\)'s depend only on the \(y_{n}\)'s. Choosing \(y_{n}\) so that \(|y_{n}|\to\infty\) it is not difficult to realize that the sequences \(\{\phi_{\pm}^{n}\}_{n=1}^{\infty}\) converge weakly towards \(0\) in \(L^{2}(\mathbb{R}^{3})\): This follows from the fact that they are translations of a fixed \(L^{2}\)-function. With such sequence of \(y_{n}\)'s we get thus
\[\langle(\phi,\psi),v_{n}\rangle=\langle\phi,\phi_{+}^{n}\rangle-\langle\phi, \phi_{-}^{n}\rangle+i\langle\psi,\phi_{+}^{n}\rangle+i\langle\psi,\phi_{-}^{n }\rangle\longrightarrow 0\]
for all \((\phi,\psi)\in\mathcal{H}\oplus\mathcal{H}\), i.e. \(v_{n}\to 0\) weakly in \(\mathcal{H}\oplus\mathcal{H}\) and hence in \(\operatorname{Gr}(B)\).
We observe from the above facts that, by choosing a sequence of \(y_{n}\)'s which equals a fixed \(y_{0}\) for \(n\) odd and is unbounded along the even indices \(\{y_{2n}\}_{n=1}^{\infty}\), we can make the sequence \((\operatorname{Gr}(A_{n}))_{n=1}^{\infty}\) of subspaces of the Hilbert space \(\operatorname{Gr}(B)\) into a sequence like \(\{V_{n}\}_{n=1}^{\infty}\) in Remark 4. Consequently, the operator \(A=A_{1}\) is the strong graph limit of the \(A_{n}\)'s (and of course \(B_{n}\to B\)), but the orthogonal projections onto the graphs \(\operatorname{Gr}(A_{n})\) do not converge strongly towards the orthogonal projection onto \(\operatorname{Gr}(A)\), and hence Theorem 12(a) tells us that we cannot have \(U_{n}\to U\) strongly. Alternatively this can be checked more directly by using Lemma 10. \(\blacksquare\)
With Corollaries 14 and 15 we have thus answered the question of finding results like (1) optimally (when using only the concepts introduced in this note). We conclude by
proving a very simple sufficient condition for having \(\operatorname{Gr}(A)\subseteq\Gamma_{\infty}\). This underlines the fact that both corollaries are relevant: Corollary 14 for its simple and clear connection to (1), and Corollary 15 since its assumption is often rather easily checkable. Recall that a core for an operator \(A\) is a subspace of \(D(A)\) such that the restriction of \(A\) to it has closure \(A\). We now obtain:
**Proposition 17**.: _Assume that \(\mathcal{D}\) is a common core for \(A\) and all \(A_{n}\)'s. If \(A_{n}\phi\to A\phi\) for all \(\phi\in\mathcal{D}\) then \(\operatorname{Gr}(A)\subseteq\Gamma_{\infty}\)._
Proof.: The assumption tells us that \(\Gamma_{\mathcal{D}}:=\{(\phi,A\phi)\mid\phi\in\mathcal{D}\}\subseteq\Gamma_{\infty}\). Thus, if we argue that \(\Gamma_{\infty}\) is closed, we have also \(\operatorname{Gr}(A)=\overline{\Gamma_{\mathcal{D}}}\subseteq\Gamma_{\infty}\). But closedness is a general property of any strong limit of subspaces by the following argument:
Let \(\{V_{n}\}_{n=1}^{\infty}\) be any sequence of subspaces of a Hilbert space \(\mathcal{H}\) and denote as usual its strong limit by \(V_{\infty}\). If we consider an arbitrary convergent sequence \(\{x_{k}\}_{k=1}^{\infty}\subseteq V_{\infty}\) with limit \(x_{0}\) then we only need to find a sequence \(\{\widetilde{x}_{n}\}_{n=1}^{\infty}\subseteq\mathcal{H}\) with \(\widetilde{x}_{n}\in V_{n}\) for all \(n\) such that \(\widetilde{x}_{n}\to x_{0}\) in order to obtain \(x_{0}\in V_{\infty}\) and hence prove that \(V_{\infty}\) is closed. We now construct such a sequence. Firstly we choose for each \(k\) a sequence \(\{x_{n}^{k}\}_{n=1}^{\infty}\) with \(x_{n}^{k}\in V_{n}\) for all \(n\) and \(x_{n}^{k}\to x_{k}\), and then we take natural numbers \(N_{1}<N_{2}<N_{3}<\cdots\) so that \(\|x_{n}^{k}-x_{k}\|<1/k\) for all \(n\geq N_{k}\). Defining \(\widetilde{x}_{n}:=x_{n}^{1}\) for \(n=1,2,\ldots,N_{2}-1\); \(\widetilde{x}_{n}:=x_{n}^{2}\) for \(n=N_{2},\ldots,N_{3}-1\) and generally \(\widetilde{x}_{n}:=x_{n}^{k}\) for \(n=N_{k},\ldots,N_{k+1}-1\) one can check using the triangle inequality that this is indeed a sequence with the properties we seek.
**Example 18**.: To make things even more concrete than requiring pointwise convergence of the \(A_{n}\)'s on a common core, we can ask what this means for differential operators like those in Example 1. To simplify things let us consider a sequence of Schrodinger operators - that is, the \(A_{n}\)'s are the closures of \(-\Delta+\Phi_{n}\) defined on \(C_{c}^{\infty}(\Omega)\subseteq L^{2}(\Omega)\) for some open set \(\Omega\subseteq\mathbb{R}^{d}\) and some potentials (i.e. real-valued, sufficiently regular functions) \(\Phi_{n}\) on this set. Hence, \(C_{c}^{\infty}(\Omega)\) is a common core for the \(A_{n}\)'s and also for \(A=-\Delta+\Phi\) if we define this in the same manner. Now, for any \(\phi\in C_{c}^{\infty}(\Omega)\),
\[\|A_{n}\phi-A\phi\|^{2}=\|\Phi_{n}\phi-\Phi\phi\|^{2}=\int_{\Omega}|\phi|^{2}| \Phi_{n}-\Phi|^{2}\,dx\leq\|\phi\|_{\infty}^{2}\int_{\operatorname{supp}\phi} |\Phi_{n}-\Phi|^{2}\,dx\]
where \(\|\cdot\|_{\infty}\) is the supremum norm. Now if \(\Phi_{n}\to\Phi\) in \(L^{2}_{\operatorname{loc}}(\Omega)\) then we conclude that \(A_{n}\phi\to A\phi\) for all \(\phi\in C_{c}^{\infty}(\Omega)\). If, on the other hand, we assume the latter, then we see that \(\Phi_{n}\to\Phi\) in \(L^{2}(K)\) for any compact subset \(K\subseteq\Omega\) by choosing \(\phi\equiv 1\) on \(K\), i.e. we get \(\Phi_{n}\to\Phi\) in \(L^{2}_{\operatorname{loc}}(\Omega)\). Being able to consider only local \(L^{2}\)-convergence is often desirable if one deals for example with potentials with singularities.
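For readers who want to experiment, the following is a minimal numerical sketch of the estimate above; the concrete potentials \(\Phi,\Phi_{n}\), the bump function and the discretization are illustrative assumptions, not taken from the text.

```python
# Minimal sketch (illustrative assumptions): discretize ||A_n phi - A phi|| =
# ||(Phi_n - Phi) phi||_{L^2} for a compactly supported bump phi and potentials
# Phi_n -> Phi in L^2_loc, and watch the norm go to zero.
import numpy as np

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

def bump(x):
    # smooth test function supported in (-1, 1), a stand-in for phi in C_c^infty
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

Phi = lambda t: t ** 2                                    # limiting potential (assumed)
Phi_n = lambda t, n: t ** 2 + np.sin(n * t) / np.sqrt(n)  # converges to Phi in L^2_loc

phi = bump(x)
for n in [1, 10, 100, 1000]:
    diff = (Phi_n(x, n) - Phi(x)) * phi
    norm = np.sqrt(np.sum(diff ** 2) * dx)                # ||(Phi_n - Phi) phi||_{L^2}
    print(n, norm)                                        # decreases like 1/sqrt(n)
```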
## 4 Acknowledgements
This work was supported in parts by the VILLUM Foundation grant no. 10059. I would like to thank my PhD advisor Jan Philip Solovej for initiating this cute little project as well as for his always committed and insightful guidance. Also, I thank Johannes Agerskov for proofreading and for his useful suggestions. |
2302.01963 | Formulas for Hitting Times and Cover Times for Random Walks on Groups | Using the results of Ding, Lee, Peres [3], we develop formulas to compute the
hitting times and cover times for random walks on groups. We develop an
explicit formula for hitting times in terms of the irreducible representations
of the group. We also give a way of computing cover times in terms of these
hitting times. This computation is based on a quantity we identified, which we
call the volume growth function, and which we believe is the right object to
study in order to understand the cover time. | Christopher Zhang | 2023-02-03T19:18:36Z | http://arxiv.org/abs/2302.01963v1 | # Formulas for Hitting Times and Cover Times for Random Walks on Groups
###### Abstract
Using the results of Ding, Lee, Peres [3], we develop formulas to compute the hitting times and cover times for random walks on groups. We develop an explicit formula for hitting times in terms of the irreducible representations of the group. We also give a way of computing cover times in terms of these hitting times. This computation is based on a quantity we identified, which we call the volume growth function, and which we believe is the right object to study in order to understand the cover time.
## 1 Introduction
The hitting time of a Markov chain is the time for the chain to reach a certain state. These quantities have long been studied, for instance in problems like gambler's ruin. More recently, there has been interest in the cover time, i.e. the time it takes for a finite Markov chain to hit all of its states. Computing cover times has applications in Markov chain algorithms such as PageRank. Here we are interested in the asymptotics of the cover times of a family of Markov chains, where the size of the state space goes to infinity.
Matthews [6] shows a relationship between cover times and a natural object of study, the maximum hitting time. For a Markov chain with state space \(\Omega\), the Matthews upper and lower bounds differ by a factor of \(\log|\Omega|\). Both the upper and lower Matthews bounds can be achieved for example with the cycle and the hypercube (see examples 4.14 and 4.16).
The paper of Ding, Lee, and Peres [3] was a big breakthrough for the theory, showing that the cover times for random walks on graphs are asymptotic to the expected maximum of an associated Gaussian process. Their results can be sharper than Matthews bounds as they encapsulate the information of all of the hitting times instead of only the maximum hitting time. However, in the general case, the maximum of this Gaussian process is difficult to compute. For example, it can be computed with Talagrand's theory of majorizing measures [4].
We consider a case in which there are many symmetries, a random walk on a group. This allows us to do explicit computations of hitting times and cover times. We derive an explicit formula for hitting times for random walks on a group in terms of its irreducible representations. The strength of this method is that it is exact and general. The formulas hold for any random walk on a group, while conventional hitting time calculations only hold for random walks where the transition probabilities are very carefully chosen. However, the drawback is that the formulas are often difficult to calculate or bound by hand, but they can be calculated with high precision on a computer.
In the case of cover times, the associated Gaussian process of Ding, Lee, Peres also has many symmetries. Formally, it is a stationary Gaussian process. In this case, simpler arguments can be used to calculate the maximum of the Gaussian process. Ultimately, this allows us to compute the cover time as an integral of an important quantity, which we call the volume growth function. We can compute this function with our formulas for hitting times. We believe the volume growth function is the right quantity to understand in order to understand cover times, and it is computed on a case-by-case basis. This method is extremely general, but it faces the difficulty of being very hard to compute since the behavior of all hitting times is needed to understand the volume growth function.
## 2 Background
We review the basics of random walks on groups and Fourier transforms on finite groups that we will later use for the following sections.
### Random Walks on Groups
These are very basic facts about random walks on groups that are needed for this paper. See [5] for a more in depth discussion.
**Definition 2.1**.: Let \(G\) be a group. Let \(p\) be a probability measure on \(G\). A _random walk on a group \(G\)_ generated by \(p\) is a Markov chain with state space \(G\) with the following transition probabilities. For \(g,h\in G\), the probability of going from \(g\) to \(h\) is
\[P(g,h)=p(hg^{-1})\]
We note the symmetry in this definition
\[P(g,hg)=P(e,h)=p(h) \tag{2.1}\]
The random walk is irreducible/reversible if it is an irreducible/reversible Markov chain. Note that the random walk is irreducible iff the support of \(p\) generates \(G\).
**Lemma 2.2**.: _The stationary distribution \(\pi\) of an irreducible random walk on a group \(G\) is the uniform distribution._
Proof.: For \(g\in G\),
\[\sum_{h\in G}P(h,g)\frac{1}{|G|}=\sum_{h\in G}P(h^{-1}g,g)\frac{1}{|G|}=\frac{ 1}{|G|}\sum_{h\in G}p(h)=\frac{1}{|G|}\]
**Definition 2.3**.: For a discrete Markov chain \((X_{t})_{t\geq 0}\), let
\[\tau_{x}=\min\{t\geq 0:X_{t}=x\}\]
be the _hitting time_. We will also call
\[\mathbf{E}_{y}[\tau_{x}]\]
hitting times. For \(\tau_{x}^{+}=\min\{t>0:X_{t}=x\}\), we call
\[\mathbf{E}_{x}[\tau_{x}^{+}]\]
the _first return time_.
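As a concrete illustration, here is a small Monte Carlo sketch estimating such a hitting time; the choice \(G=\mathbb{Z}_{n}\) with the nearest-neighbour step distribution is an assumption made for concreteness (it is the walk studied later in Example 3.7), not part of the definition.

```python
# Sketch: estimate E_e[tau_g] by simulation for the walk on Z_n with
# p(1) = p(n-1) = 1/2 (an illustrative choice of group and step distribution).
import random

def estimate_hitting_time(n, target, trials=20000):
    total = 0
    for _ in range(trials):
        state, steps = 0, 0
        while state != target:
            state = (state + random.choice([1, n - 1])) % n   # one step of the walk
            steps += 1
        total += steps
    return total / trials

# For n = 12 and target = 5 the estimate is close to 5 * (12 - 5) = 35,
# the value derived later in Example 3.7.
print(estimate_hitting_time(12, 5))
```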
**Corollary 2.4**.: _For an irreducible random walk on a group \(G\), the first return time \(\mathbf{E}_{e}[\tau_{e}^{+}]\) is \(|G|\)._
Proof.: If \(\pi\) is the stationary distribution, then
\[\mathbf{E}_{e}[\tau_{e}^{+}]=\frac{1}{\pi(e)}\]
(see [5] section 1.5).
**Lemma 2.5**.: _A random walk on a group \(G\) generated by distribution \(p\) is reversible iff \(p(g)=p(g^{-1})\) for all \(g\in G\)._
Proof.: Let \(\pi\) be the stationary distribution. A Markov chain is reversible iff for all \(g,h\)
\[\pi(g)P(g,h)=\pi(h)P(h,g)\]
By lemma 2.2, this is
\[\frac{1}{|G|}p(hg^{-1})=\frac{1}{|G|}p(gh^{-1})\]
which is true iff \(p(g)=p(g^{-1})\) for all \(g\in G\).
_Remark 2.6_.: A reversible random walk on a group \(G\) is a random walk on the Cayley graph with edge weights given by \(p\). (For random walks that are not reversible, the analogous statement holds with a directed Cayley graph.)
### Fourier Transform on Finite Groups
We review the basics of Fourier transforms on finite groups which will be used in the next section. Proofs will be omitted, but can be found in [2].
**Theorem 2.7** (Fourier Transform).: _Let \(G\) be a finite group and \(f:G\to\mathbb{R}_{\geq 0}\). Then we define the Fourier transform \(\hat{f}\) as a function that takes representations \(\rho\) of \(G\) and gives complex matrices according to the formula_
\[\hat{f}(\rho)=\sum_{g\in G}f(g)\rho(g)\]
**Definition 2.8**.: Let \(f_{1},f_{2}:G\to\mathbb{R}_{\geq 0}\) and let \(G\) be a finite group. Then the _convolution_ is
\[(f_{1}*f_{2})(h):=\sum_{g\in G}f_{1}(hg^{-1})f_{2}(g)\]
_Remark 2.9_.: We note that for a random walk on a group with transition probability \(P\) and initial distribution \(Q\). The distribution at time one is \(P*Q\).
_Remark 2.10_.: Convolution is non-commutative in general.
As in standard Fourier analysis, the Fourier transform takes convolutions to products.
**Lemma 2.11**.: \[\widehat{f_{1}*f_{2}}(\rho)=\hat{f}_{1}(\rho)\hat{f}_{2}(\rho)\]
**Theorem 2.12** (Fourier Inversion).: _Let \(f:G\to\mathbb{R}_{\geq 0}\) and \(G\) a finite group. Let \(\rho_{1},\ldots,\rho_{r}\) be the irreducible representations of \(G\). Let \(d_{i}\) be the degree of \(\rho_{i}\). Then for \(g\in G\),_
\[f(g)=\frac{1}{|G|}\sum_{i=1}^{r}d_{i}\operatorname{Tr}(\rho_{i}(g^{-1})\hat{f }(\rho_{i}))\]
_Remark 2.13_.: The Fourier inversion formula shows that \(f\) is determined by how \(\hat{f}\) acts on the irreducible representations.
**Theorem 2.14**.: _Consider a random walk on a group \(G\) generated by a probability distribution \(p\). Let \(P\) be the transition matrix. Then_
\[P=\phi^{-1}D^{*}\phi\]
_where \(D\) is the block diagonal matrix with blocks of the form \(\hat{p}(\rho_{i})\) and \(\phi\) are the change of basis matrices into the Fourier basis._
See section 3E of [2] for a more detailed statement of the theorem and a proof.
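As a sanity check of this statement in the simplest setting, the following sketch (with the assumed choice \(G=\mathbb{Z}_{n}\) and a nearest-neighbour step distribution) verifies numerically that the eigenvalues of the transition matrix are exactly the Fourier transforms \(\hat{p}(\rho_{j})\).

```python
# Sketch for the abelian case G = Z_n (illustrative choice): the transition
# matrix P(g, h) = p(h - g) is diagonalized by the Fourier basis, so its
# eigenvalues coincide with the one-dimensional blocks \hat p(rho_j).
import numpy as np

n = 8
p = np.zeros(n)
p[1] = p[n - 1] = 0.5                                    # step distribution

P = np.array([[p[(h - g) % n] for h in range(n)] for g in range(n)])

p_hat = np.array([sum(p[g] * np.exp(2j * np.pi * j * g / n) for g in range(n))
                  for j in range(n)])                    # \hat p(rho_j)

print(np.sort(np.linalg.eigvals(P).real))                # eigenvalues of P ...
print(np.sort(p_hat.real))                               # ... match \hat p(rho_j)
```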
The following is a lemma from representation theory that we will use in the next section. We state it here so as not to interrupt the flow of that section.
**Lemma 2.15**.: _Let \(\rho\) be a non-trivial irreducible representation of a finite group \(G\) on a vector space \(V\). Then,_
\[\sum_{g\in G}\rho(g)=0\]
Proof.: Let \(A=\sum_{g\in G}\rho(g)\) be an endomorphism on \(V\). Then \(\rho(g)A=A=A\rho(g)\), so by Schur's lemma, \(A=\lambda I\) is a multiple of the identity. \(\operatorname{Tr}A=\sum_{g\in G}\chi(g)=0\), by the orthogonality relations for characters, so \(A=0\).
## 3 Hitting Times
We will prove the following formula for the hitting times (definition 2.3) of random walks on a group using Fourier transforms. Our main idea is that the Fourier basis "almost diagonalizes" the translation-invariant matrices associated with the walk, and is thus the natural basis in which to compute the hitting times.
**Theorem 3.1**.: _Let \(G\) be an abelian group, and \(\rho_{1},\ldots,\rho_{r}\) be the irreducible representations with \(\rho_{1}\) the trivial representation. Let \(p\) a probability distribution on \(G\), and consider the random walk on \(G\) generated by \(p\). Then,_
\[\mathbf{E}_{e}[\tau_{g}]=\sum_{i=2}^{r}\frac{1-\rho_{i}(g^{-1})}{1-\hat{p}( \rho_{i})}\]
We will prove a more general formula for not necessarily abelian groups. Let \(G\) be a group with \(|G|=n\). Consider a random walk on \(G\) generated by a probability distribution \(p\).
**Lemma 3.2**.: _The hitting times satisfy the following system of linear equations_
\[\mathbf{E}_{e}[\tau_{e}] =0 \tag{3.1}\] \[\mathbf{E}_{e}[\tau_{g}] =1+\sum_{s\in G}\mathbf{E}_{s}[\tau_{g}]P(e,s),\quad g\neq e\]
Proof.: By definition \(\mathbf{E}_{e}[\tau_{e}]=0\). Let \((X_{t})_{t\geq 0}\) be a random walk on \(G\) generated by \(p\) s.t. \(X_{0}=e\) under a probability distribution \(\mathbf{P}\). Then
\[\mathbf{E}_{e}[\tau_{g}]=\mathbf{E}_{\mathbf{P}}[\tau_{g}]=\sum_{s\in G}P(e,s)\mathbf{E}_{\mathbf{P}}[\tau_{g}\mid X_{1}=s]=\sum_{s\in G}P(e,s)\left(1+\mathbf{E}_{s}[\tau_{g}]\right)=1+\sum_{s\in G}P(e,s)\mathbf{E}_{s}[\tau_{g}]\]
for \(g\neq e\), since conditionally on \(X_{1}=s\) the walk has already taken one step and still has to reach \(g\) starting from \(s\).
To analyze equation (3.1), we use the following simplifying notation. For \(g,h\in G\) and \(e\) the identity element, the hitting times are
\[h(g):=\mathbf{E}_{e}[\tau_{g}]=\mathbf{E}_{h}[\tau_{gh}]\]
Thus, we may view the hitting times as a vector indexed by \(G\) instead of as a matrix indexed by pairs of group elements.
Writing equation (3.1) in the above notation we get
\[h(e) =0 \tag{3.2}\] \[h(g) =1+\sum_{s\in G}h(gs^{-1})p(s),\quad g\neq e\]
This system of equations looks very similar to a convolution. In order to make this true, we rewrite the equation \(h(e)=0\). We calculate
\[1+\sum_{s\in G}h(es^{-1})p(s)\]
which is \(\mathbf{E}_{e}[\tau_{e}^{+}]\), the first return time. By Corollary 2.4,
\[\mathbf{E}_{e}[\tau_{e}^{+}]=\frac{1}{\pi(e)}=n\]
Now we can write equations (3.2) as
\[h(e) =1-n+\sum_{s\in G}h(es^{-1})p(s) \tag{3.3}\] \[h(g) =1+\sum_{s\in G}h(gs^{-1})p(s),\quad g\neq e\]
We take the constant terms in the above system, and create a function
\[k(g)=\left\{\begin{array}{ll}1-n,&g=e\\ 1,&\text{otherwise}\end{array}\right.\]
The equations (3.3) become
\[h(g)=k(g)+(h*p)(g) \tag{3.4}\]
Let \(\rho_{1},\ldots,\rho_{r}\) be the irreducible representations of \(G\). Now we may take the Fourier transform to get
\[\hat{h}(\rho_{i})=\hat{k}(\rho_{i})+\hat{h}(\rho_{i})\hat{p}(\rho_{i}) \tag{3.5}\]
or equivalently
\[\hat{h}(\rho_{i})(I-\hat{p}(\rho_{i}))=\hat{k}(\rho_{i}) \tag{3.6}\]
where \(I\) is the identity matrix of appropriate dimension.
We compute
\[1-\hat{p}(\rho_{1})=1-\sum_{g\in G}p(g)\rho_{1}(g)=1-\sum_{g\in G}p(g)=0\]
and thus we cannot solve for \(\hat{h}(\rho_{1})\) with this equation. However, we can solve for \(\hat{h}(\rho_{i})\) with this equation for all \(i\neq 1\).
**Lemma 3.3**.: _Consider the random walk on a group \(G\) generated by the distribution \(p\). If the random walk is irreducible, then \(I-\hat{p}(\rho_{i})\) is invertible for all nontrivial irreducible representations._
Proof.: Let \(P\) be the matrix of transition probabilities. The solutions \(v\) to \(v(I-P)=0\) are multiples of the stationary distribution. This distribution is unique, since the Markov chain is irreducible. Thus, the nullspace of \(I-P\) is one-dimensional, so the matrix has rank \(n-1\).
By Theorem 2.14, \(I-P=\phi^{-1}D^{*}\phi\), where \(D\) is now the block diagonal matrix with blocks \(I-\hat{p}(\rho_{i})\) along the diagonal, and where \(\phi\) is a nonsingular matrix. Since \(I-P\) has rank \(n-1\), \(D\) must have rank \(n-1\). Since \(1-\hat{p}(\rho_{1})=0\), the rest of the blocks must be nonsingular.
Thus from equation (3.6), we have
\[\hat{h}(\rho_{i})=\hat{k}(\rho_{i})(I-\hat{p}(\rho_{i}))^{-1},\quad i\neq 1 \tag{3.7}\]
**Lemma 3.4**.: _Let \(\rho_{i}\) be a non-trivial representation. Then,_
\[\hat{k}(\rho_{i})=-nI\]
Proof.: By lemma 2.15,
\[\hat{k}(\rho_{i})=\sum_{g\in G}k(g)\rho_{i}(g)=\sum\rho_{i}(g)-n\rho_{i}(e)=-nI\]
From equation (3.7), we get the equation
\[\hat{h}(\rho_{i})=-n(I-\hat{p}(\rho_{i}))^{-1},\quad i\neq 1 \tag{3.8}\]
We may now compute \(\hat{h}(\rho_{1})\). Using Fourier inversion, we have
\[0=h(e)=\frac{1}{n}\sum_{i=1}^{r}d_{i}\operatorname{Tr}(\hat{h}(\rho_{i}))\]
so
\[\hat{h}(\rho_{1})=-\sum_{i=2}^{r}d_{i}\operatorname{Tr}(\hat{h}(\rho_{i}))\]
We summarize our results in the following theorem.
**Theorem 3.5**.: _Consider an irreducible random walk on a finite group \(G\). Let \(h\) be the hitting times_
\[h(g):=\mathbf{E}_{e}[\tau_{g}]=\mathbf{E}_{h}[\tau_{gh}]\]
_We have a formula for the Fourier transform \(\hat{h}\), and thus we can compute \(h\) with Fourier inversion. Let \(\rho_{1},\dots,\rho_{r}\) be the irreducible representations of \(G\) with \(\rho_{1}\) the trivial representation. The formula is_
\[\hat{h}(\rho_{i}) =-n(I-\hat{p}(\rho_{i}))^{-1},\quad i\neq 1\] \[\hat{h}(\rho_{1}) =-\sum_{i=2}^{r}d_{i}\operatorname{Tr}(\hat{h}(\rho_{i}))\]
**Theorem 3.6**.: _Let \(G\) be an abelian group. Using the same notation from the above theorem,_
\[h(g)=\sum_{i=2}^{r}\frac{1-\rho_{i}(g^{-1})}{1-\hat{p}(\rho_{i})}\]
Proof.: Abelian groups have one-dimensional representations, so
\[\hat{h}(\rho_{1})=-\sum_{i=2}^{r}\hat{h}(\rho_{i})\]
where
\[\hat{h}(\rho_{i})=-\frac{n}{1-\hat{p}(\rho_{i})}\]
We get the desired equation by plugging into the Fourier inversion formula.
**Example 3.7** (Cycle).: We compute the hitting times for the random walk on \(\mathbb{Z}_{n}\), \(n\geq 3\), generated by the probability distribution \(p\) assigning probability \(1/2\) to each of \(1,n-1\in\mathbb{Z}_{n}\). (We may also define a random walk on \(\mathbb{Z}_{2}\) by assigning probability \(1\) to \(1\in\mathbb{Z}_{2}\). The formulas we derive will still hold in this case.) This is an irreducible Markov chain. We will show the hitting times are
\[h(k)=k(n-k)\]
For another proof of this fact, see [5]. The representations of \(\mathbb{Z}_{n}\) are for \(0\leq j\leq n-1\),
\[\rho_{j}(k)=e^{2\pi ijk/n}\]
So
\[\hat{p}(\rho_{j})=\sum_{k=1}^{n}\rho_{j}(k)p(k)=\frac{e^{\frac{2\pi ij}{n}}+e^{ \frac{-2\pi ij}{n}}}{2}=\cos\frac{2\pi j}{n}\]
By theorem 3.1,
\[h(k)=\sum_{j=1}^{n-1}\frac{1-e^{-2\pi ijk/n}}{1-\cos\frac{2\pi j}{n}}=\sum_{j=1}^{n-1}\frac{1-\cos\frac{2\pi jk}{n}+i\sin\frac{2\pi jk}{n}}{1-\cos\frac{2\pi j}{n}}\]
Since \(h(k)\) is real, we may ignore the imaginary components
\[h(k)=\sum_{j=1}^{n-1}\frac{1-\cos\frac{2\pi jk}{n}}{1-\cos\frac{2\pi j}{n}}\]
Now we simplify this formula.
**Lemma 3.8**.: \[\sum_{j=1}^{n-1}\frac{1-\cos\frac{2\pi jk}{n}}{1-\cos\frac{2\pi j}{n}}=k(n-k)\]
Proof.: We first note for \(k\) not a multiple of \(n\),
\[\sum_{j=1}^{n-1}e^{2\pi ijk/n}=-1\]
so taking the real part of this equation
\[\sum_{j=1}^{n-1}\cos(2\pi jk/n)=-1\]
Let \(z=e^{\pi i/n}\), then
\[\sum_{j=1}^{n-1}\frac{1-\cos\frac{2\pi jk}{n}}{1-\cos\frac{2\pi j}{n}} =\sum_{j=1}^{n-1}\frac{1-\frac{z^{2jk}+z^{-2jk}}{2}}{1-\frac{z^{2j}+z^{-2j}}{2}}=\sum_{j=1}^{n-1}\frac{(z^{jk}-z^{-jk})^{2}}{(z^{j}-z^{-j})^{2}}\] \[=\sum_{j=1}^{n-1}(z^{j(k-1)}+z^{j(k-3)}+\cdots+z^{-j(k-1)})^{2}\] \[=\sum_{j=1}^{n-1}(z^{2j(k-1)}+z^{-2j(k-1)}+2(z^{2j(k-2)}+z^{-2j(k-2)})+\cdots+(k-1)(z^{2j}+z^{-2j})+k)\] \[=\sum_{j=1}^{n-1}\left(2\cos\frac{2\pi j(k-1)}{n}+4\cos\frac{2\pi j(k-2)}{n}+\cdots+2(k-1)\cos\frac{2\pi j}{n}+k\right)\] \[=-2-4-\cdots-2(k-1)+k(n-1)=k(n-k)\]
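A quick numerical check of this example (a sketch; the value of \(n\) is arbitrary) confirms that the Fourier-side sum indeed reproduces \(h(k)=k(n-k)\):

```python
# Sketch: evaluate h(k) = sum_{j=1}^{n-1} (1 - cos(2 pi j k / n)) / (1 - cos(2 pi j / n))
# and compare with the closed form k(n - k) from Lemma 3.8.
import numpy as np

n = 10
for k in range(n):
    h = sum((1 - np.cos(2 * np.pi * j * k / n)) / (1 - np.cos(2 * np.pi * j / n))
            for j in range(1, n))
    print(k, round(h, 6), k * (n - k))   # the two columns agree
```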
**Example 3.9** (Hypercube).: Consider the random walk on the group \(\mathbb{Z}_{2}^{m}\) generated by the probability distribution assigning probability \(1/m\) to each of the elements \((0,\ldots,0,1,0,\ldots,0)\in\mathbb{Z}_{2}^{m}\). We will show the non-trivial hitting times are all of order \(2^{m}\).
The representations for this group are the Walsh functions, which are defined as follows. For an element \(x\in\mathbb{Z}_{2}^{m}\), we let \(x_{i}\) be the \(i\)-th component. For each \(k\in\mathbb{Z}_{2}^{m}\) we define a representation \(\rho_{k}\) on \(x\in\mathbb{Z}_{2}^{m}\) by
\[\rho_{k}(x)=(-1)^{\sum_{i=1}^{m}x_{i}k_{i}}\]
We note that for \(k=(0,\ldots,0)=:\mathbf{0}\), this is the trivial representation. For \(x\in\mathbb{Z}_{2}^{m}\), let \(|x|\) be the number of nonzero entries. We compute the Fourier transform
\[\hat{p}(\rho_{k})=\sum_{x\in\mathbb{Z}_{2}^{m}}p(x)\rho_{k}(x)=\frac{1}{m}\sum _{i=1}^{m}(-1)^{k_{i}}=\frac{m-2|k|}{m}\]
By theorem 3.1, the hitting time is
\[h(x)=\sum_{k\neq\mathbf{0}}\frac{1-(-1)^{-\sum_{i=1}^{m}x_{i}k_{i}}}{1-\frac{ m-2|k|}{m}}=m\sum_{k\neq\mathbf{0}}\frac{1-(-1)^{-\sum_{i=1}^{m}x_{i}k_{i}}}{2|k|}\]
Because the hypercube is symmetric up to relabeling of the coordinates, \(h(x)\) depends only on \(|x|\). Let \(|x|=j\). It suffices to calculate the hitting times for \(x=(1,\ldots,1,0,\ldots,0)\), the element with \(j\) copies of \(1\) followed by \(m-j\) copies of \(0\). When \(k\) has an odd number of \(1\)'s among its first \(j\) coordinates, \((-1)^{-\sum_{i=1}^{m}x_{i}k_{i}}=-1\). Thus let \(S\) be the subset of \(\mathbb{Z}_{2}^{m}\) of elements with an odd number of \(1\)'s among the first \(j\) components. Then,
\[h(x)=m\sum_{k\neq\mathbf{0}}\frac{1-(-1)^{-\sum_{i=1}^{m}x_{i}k_{i}}}{2|k|}=m \sum_{k\in S}\frac{1}{|k|}\]
We count the number of \(k\in S\). There must be an odd number \(i\) of \(1\)'s among the first \(j\) components of \(k\). For a fixed \(i\), there are \(\binom{j}{i}\) ways to pick these ones. If \(l\) denotes the number of \(1\)'s among the last \(m-j\) components, there are \(\binom{m-j}{l}\) ways to choose them for a fixed \(l\). Such an element of \(S\) has \(|k|=i+l\), and there are \(\binom{j}{i}\binom{m-j}{l}\) of these. Thus, the hitting time is
\[h(x)=m\sum_{i\text{ odd}}\sum_{l=0}^{m-j}\binom{j}{i}\binom{m-j}{l}\frac{1}{ l+i} \tag{3.9}\]
There are cases in which we can compute the exact hitting time. We will use the following lemma
**Lemma 3.10**.: \[\frac{2^{m+1}-1}{m+1}=\sum_{i=0}^{m}\frac{1}{i+1}\binom{m}{i}\]
Proof.: We have the binomial formula
\[(x+1)^{m}=\sum_{i=0}^{m}\binom{m}{i}x^{i}\]
Taking the integral \(\int_{0}^{x}\) to both sides, we get
\[\frac{1}{m+1}(x+1)^{m+1}-\frac{1}{m+1}=\sum_{i=0}^{m}\binom{m}{i}\frac{x^{i+1} }{i+1}\]
We plug in \(x=1\) to get the identity.
**Proposition 3.11**.: _Let \(x\in\mathbb{Z}_{2}^{m}\)._
_If \(|x|=1\),_
\[h(x)=2^{m}-1\]
_If \(|x|=2\),_
\[h(x)=\frac{m}{m-1}(2^{m}-2)\]
_(The \(|x|=1\) case is proven with a different method in [7].)_
Proof.: For \(|x|=1\), the formula is
\[h(x) =m\sum_{i\text{ odd}}\sum_{l=0}^{m-1}\binom{1}{i}\binom{m-1}{l} \frac{1}{l+i}\] \[=m\sum_{l=0}^{m-1}\binom{m-1}{l}\frac{1}{l+1}\] \[=2^{m}-1\]
In the last step we used Lemma 3.10. We can do a similar calculation for \(|x|=2\).
\[h(x) =m\sum_{i\text{ odd}}\sum_{l=0}^{m-2}\binom{2}{i}\binom{m-2}{l} \frac{1}{l+i}\] \[=2m\sum_{l=0}^{m-2}\binom{m-2}{l}\frac{1}{l+1}\] \[=\frac{m}{m-1}(2^{m}-2)\]
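The formula (3.9) and the two closed forms above are easy to check numerically; the following sketch (with an arbitrary choice of \(m\)) does so.

```python
# Sketch: evaluate the hypercube hitting-time formula (3.9) and compare with
# the closed forms of Proposition 3.11 for |x| = 1 and |x| = 2.
from math import comb

def h(m, j):
    # E_e[tau_x] for |x| = j, from equation (3.9)
    return m * sum(comb(j, i) * comb(m - j, l) / (l + i)
                   for i in range(1, j + 1, 2)
                   for l in range(m - j + 1))

m = 10
print(h(m, 1), 2 ** m - 1)                        # |x| = 1
print(h(m, 2), m / (m - 1) * (2 ** m - 2))        # |x| = 2
```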
Now we prove sharp bounds for the hitting times. We start with a lemma
**Lemma 3.12**.: \[\sum_{i+l=s}\binom{j}{i}\binom{m-j}{l}=\binom{m}{s}\]
_where the sum is over all pairs of non-negative integers \((i,l)\) s.t. \(i+l=s\)._
Proof.: The right hand side counts the number of ways to choose \(s\) marbles out of \(m\). We can count this in another way. Color \(j\) marbles red and \(m-j\) marbles blue. Then the number of ways to choose \(s\) marbles is the sum over all \(i+l=s\) of choosing \(i\) red marbles and \(l\) blue marbles.
**Proposition 3.13**.: _There are non-zero constants \(c,C\) such that for \(x\in\mathbb{Z}_{2}^{m}\),_
\[c2^{m}\leq h(x)\leq C2^{m}\]
_for all \(x\neq(0,0,\dots,0)\)._
Proof.: First we prove the upper bound.
\[h(x) =m\sum_{i\text{ odd}}\sum_{l=0}^{m-j}\binom{j}{i}\binom{m-j}{l}\frac{1}{l+i}\] \[\leq m\sum_{i=0}^{j}\sum_{l\leq m-j,\,i+l\geq 1}\binom{j}{i}\binom{m-j}{l}\frac{1}{l+i}\] \[=m\sum_{s=1}^{m}\sum_{i+l=s}\binom{j}{i}\binom{m-j}{l}\frac{1}{s}\] rearranging the sum \[=m\sum_{s=1}^{m}\binom{m}{s}\frac{1}{s}\] by Lemma 3.12 \[\leq 2m\sum_{s=1}^{m}\binom{m}{s}\frac{1}{s+1}\] \[\leq\frac{2m}{m+1}(2^{m+1}-1)\] by Lemma 3.10
Now we prove the lower bound.
**Lemma 3.14**.: _Let \(j\) be odd. Then_
\[3\sum_{i\text{ odd}}\binom{j}{i}\frac{1}{l+i}\geq\sum_{i=0}^{j}\binom{j}{i} \frac{1}{l+i}\]
_for \(l\geq 1\) and_
\[3\sum_{i\text{ odd}}\binom{j}{i}\frac{1}{i}\geq\sum_{i=1}^{j}\binom{j}{i} \frac{1}{i}\]
Proof.: First we note that for \(i<j/2\), \(i+l\geq 2\),
\[\binom{j}{i}\frac{1}{l+i}\geq\binom{j}{i-1}\frac{1}{l+i-1}\Leftrightarrow\frac {j-i+1}{i}\geq\frac{i+l}{i+l-1}\Leftrightarrow il+l+i-2\geq 0\]
Thus for \(l\geq 1\),
\[\sum_{i\text{ odd},i<j/2}\binom{j}{i}\frac{1}{i+l}\geq\sum_{i\text{ even},i<j/2}\binom{j}{i}\frac{1}{i+l} \tag{3.10}\]
Similarly for \(i<j/2\),
\[\binom{j}{i}\frac{1}{l+i}\geq\binom{j}{j-i}\frac{1}{l+j-i}\]
so
\[\sum_{i\text{ odd},i<j/2}\binom{j}{i}\frac{1}{i+l}\geq\sum_{i\text{ even},i\geq j/2}\binom{j}{i}\frac{1}{i+l} \tag{3.11}\]
Using equations (3.10) and (3.11), we have for \(j\) odd, \(l\geq 1\),
\[3\sum_{i\text{ odd},i<j/2}\binom{j}{i}\frac{1}{i+l}+\sum_{i\text{ odd},i>j/2}\binom{j}{i}\frac{1}{i+l}\geq\sum_{i=0}^{j}\binom{j}{i}\frac{1}{i+l}\]
The same argument holds for \(l=0\).
Thus by the lemma, we compute the lower bound of the hitting time.
\[h(x) =m\sum_{i\text{ odd}}\sum_{l=0}^{m-j}\binom{j}{i}\binom{m-j}{l}\frac{1}{l+i}\] \[\geq\frac{m}{3}\sum_{i=0}^{j}\sum_{l\leq m-j,\,i+l\geq 1}\binom{j}{i}\binom{m-j}{l}\frac{1}{l+i}\qquad\text{by Lemma 3.14}\] \[=\frac{m}{3}\sum_{s=1}^{m}\sum_{i+l=s}\binom{j}{i}\binom{m-j}{l}\frac{1}{s}\qquad\text{rearranging the sum}\] \[\geq\frac{m}{3}\sum_{s=1}^{m}\sum_{i+l=s}\binom{j}{i}\binom{m-j}{l}\frac{1}{s+1}\] \[=\frac{m}{3}\sum_{s=1}^{m}\binom{m}{s}\frac{1}{s+1}\qquad\text{by Lemma 3.12}\] \[=\frac{m}{3}\frac{2^{m+1}-1}{m+1}-\frac{m}{3}\qquad\text{by Lemma 3.10}\]
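A numerical sketch (same formula (3.9), arbitrary values of \(m\)) illustrates the content of Proposition 3.13: the ratios \(h(x)/2^{m}\) stay between fixed constants over all \(x\neq e\).

```python
# Sketch: check that h(x) / 2^m remains bounded above and below as m grows,
# over all |x| = j with 1 <= j <= m, as Proposition 3.13 asserts.
from math import comb

def h(m, j):
    return m * sum(comb(j, i) * comb(m - j, l) / (l + i)
                   for i in range(1, j + 1, 2)
                   for l in range(m - j + 1))

for m in [6, 10, 14, 18]:
    ratios = [h(m, j) / 2 ** m for j in range(1, m + 1)]
    print(m, round(min(ratios), 3), round(max(ratios), 3))
```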
## 4 Cover Times
In this section we would like to use our computations of hitting times to compute cover times defined as follows
\[t_{cov}:=\max_{h\in G}\mathbf{E}_{h}[\max_{g\in G}\tau_{g}]=\mathbf{E}_{e}[\max_{g\in G}\tau_{g}]\]
We will describe our asymptotic bounds with the following notation.
**Definition 4.1**.: We say
\[f(n)\lesssim g(n)\]
if there is a universal constant \(C\) s.t. \(f(n)\leq Cg(n)\), and similarly for \(\gtrsim\). If \(f(n)\lesssim g(n)\) and \(f(n)\gtrsim g(n)\), then we write
\[f(n)\asymp g(n)\]
We will use some results of Ding, Lee, Peres [3] to compute cover times, which we summarize here. Consider a random walk on a group \(G\) generated by a distribution \(p\). We define the following Gaussian process \((\eta_{g})_{g\in G}\) associated to the random walk, based on the commute time \(\kappa(g,h):=\mathbf{E}_{g}[\tau_{h}]+\mathbf{E}_{h}[\tau_{g}]\). It is the centered Gaussian process with \(\eta_{e}=0\), and
\[\mathbf{E}[|\eta_{g}-\eta_{h}|^{2}]=\kappa(g,h)\]
These statistics fully determine the Gaussian process.
**Theorem 4.2** (Ding, Lee, Peres).: _For a random walk on a group \(G\), viewed as a walk on its Cayley graph with vertex set \(V=G\),_
\[t_{cov}\asymp\left(\mathbf{E}\max_{v\in V}\eta_{v}\right)^{2}\]
_for the associated Gaussian processes \(\eta\) defined above._
_Remark 4.3_.: The results of Ding, Lee, Peres are presented in the more general setting of electrical networks, which we will not use in the rest of the paper. For an electrical network, let \(c_{gh}\) be the conductance from \(g\) to \(h\), and \(c_{g}=\sum_{h\in G}c_{gh}\). The total conductance is \(\mathcal{C}:=\sum_{g,h\in G}c_{gh}\). We define the transition probabilities by \(P(g,h)=\dfrac{c_{gh}}{c_{g}}\). Ding, Lee, Peres define \(\mathbf{E}[|\eta_{g}-\eta_{h}|^{2}]=R_{eff}=\dfrac{\kappa(g,h)}{\mathcal{C}}\) (see [2]). Then,
\[t_{cov}\asymp\mathcal{C}\left(\mathbf{E}\max_{v\in V}\eta_{v}\right)^{2}\]
Scaling all the conductances does not change the transition probabilities of the random walk, so \(\mathcal{C}\) changes but \(t_{cov}\) remains the same. We can also see this in the formula in Theorem 4.12 below.
In our case, we let the conductances be \(c(g,h)=\dfrac{1}{|G|}p(hg^{-1})\). The total conductance is \(\mathcal{C}:=\sum_{g,h\in G}c_{gh}=1\) for this network.
The case of a random walk on a group is much simpler than a general random walk on a graph. This will simplify calculations considerably.
**Definition 4.4**.: For a Gaussian process \((\eta_{t})_{t\in T}\), \(d(t,s)=\mathbf{E}[|\eta_{t}-\eta_{s}|^{2}]^{1/2}\) is a distance on \(T\) induced by this process.
In our setting of a random walk on a group with associated Gaussian process, we may check this distance is a metric. Commute time satisfies the triangle inequality because for all \(s,t,u\in G\),
\[\mathbf{E}_{t}[\tau_{s}]\leq\mathbf{E}_{t}[\tau_{u}]+\mathbf{E}_{u}[\tau_{s}]\]
Thus
\[d(t,s)=\mathbf{E}[|\eta_{t}-\eta_{s}|^{2}]^{1/2}=\sqrt{\kappa(t,s)}\leq\sqrt{ \kappa(t,u)+\kappa(u,s)}\leq\sqrt{\kappa(t,u)}+\sqrt{\kappa(u,s)}=d(t,u)+d(u,s)\]
**Definition 4.5**.: A Gaussian process \((\eta_{t})_{t\in T}\) is _stationary_, if there is a transitive group action \(G\curvearrowright T\) s.t. for \(g\in G\),
\[d(g\cdot t,g\cdot s)=d(t,s)\]
**Lemma 4.6**.: _Consider a random walk on a group \(G\). The associated Gaussian process is stationary. Namely, the associated metric \(d(g,h)\) is fixed by the action of right multiplication by an element of \(G\)._
Proof.: Recall the Gaussian process is defined by for \(t,s\in G\),
\[d(t,s)=\mathbf{E}[|\eta_{t}-\eta_{s}|^{2}]^{1/2}=\sqrt{\kappa(t,s)}\]
The group \(G\) has a transitive action on itself by right multiplication which fixes \(\kappa(t,s)\) since for \(g\in G\),
\[\kappa(g\cdot t,g\cdot s)=\mathbf{E}_{tg}[\tau_{sg}]+\mathbf{E}_{sg}[\tau_{tg }]=\mathbf{E}_{t}[\tau_{s}]+\mathbf{E}_{s}[\tau_{t}]=\kappa(t,s)\]
Thus \(d(t,s)\) is fixed by this action.
**Definition 4.7**.: Let \(d\) be any metric on a set \(T\). Define \(N(T,d,\epsilon)\) to be the maximum number of points \(t_{1},\ldots,t_{N}\in T\) s.t. \(d(t_{i},t_{j})>\epsilon\) for all \(i\neq j\).
**Theorem 4.8** (Fernique).: _Let \((\eta_{t})_{t\in T}\) be a stationary Gaussian process with finite index set. Then_
\[\mathbf{E}\left[\max_{t\in T}\eta_{t}\right]\asymp\int_{0}^{\infty}\sqrt{\log N(T,d,\epsilon)}d\epsilon\]
_(See [4] for a proof.)_
Consider a random walk on a group \(G\). Combining theorems 4.2 and 4.8, we have
\[t_{cov}\asymp\left(\int_{0}^{\infty}\sqrt{\log N(G,d,\epsilon)}d\epsilon\right)^ {2} \tag{4.1}\]
where \(d(g,h)=\sqrt{\kappa(g,h)}\). Now we would like to understand the quantity \(N(G,d,\epsilon)\).
**Lemma 4.9**.: _Let \(G\) be a group with metric \(d\). Let \(B_{g}(\epsilon)=\{h\in G:d(g,h)\leq\epsilon\}\) be the closed ball of radius \(\epsilon\) around \(g\). Then \(|B_{g}(\epsilon)|\) is independent of \(g\). We define the quantity \(V_{d}(\epsilon):=|B_{g}(\epsilon)|\) the volume growth function._
Proof.: We will show \(|B_{g}(\epsilon)|=|B_{h}(\epsilon)|\) for any \(g,h\in G\). Let \(k\in B_{g}(\epsilon)\). Then by lemma 4.6,
\[d(g,k)=d(h,kg^{-1}h)\leq\epsilon\]
Right multiplication by \(g^{-1}h\) is injective, so it maps \(B_{g}(\epsilon)\) into \(B_{h}(\epsilon)\). Similarly, right multiplication by \(h^{-1}g\) maps \(B_{h}(\epsilon)\) into \(B_{g}(\epsilon)\).
_Remark 4.10_.: The term volume growth function already exists in the literature [8]. Here we use the term in a more general context than the existing usage. [8] uses the term volume growth function to refer to the particular case where the distance \(d\) is the word metric. We will be studying a different distance on the group.
**Lemma 4.11**.: _Let \(G\) be a group with metric \(d\),_
\[N(G,d,\epsilon)V_{d}(\epsilon/3)\leq|G|\]
_and_
\[N(G,d,\epsilon)V_{d}(\epsilon)\geq|G|\]
Proof.: First we show the upper bound. Let \(S\subset G\) be a set with \(N(G,d,\epsilon)\) elements s.t. \(d(s,t)>\epsilon\) for distinct \(s,t\in S\). Then the balls \(B_{s}(\epsilon/3)\) for \(s\in S\) are disjoint, and since each contains \(V_{d}(\epsilon/3)\) elements, we get \(N(G,d,\epsilon)V_{d}(\epsilon/3)\leq|G|\).
Now we show the lower bound. Let \(S\) be as above. We claim the balls \(B_{s}(\epsilon)\) cover \(G\). If this is true, the inequality holds. Assume by contradiction that some \(g\in G\) is not in \(B_{s}(\epsilon)\) for any \(s\in S\); then \(d(g,s)>\epsilon\) for all \(s\in S\), so \(S^{\prime}=S\cup\{g\}\) satisfies \(d(s,t)>\epsilon\) for all distinct \(s,t\in S^{\prime}\) and \(|S^{\prime}|>N(G,d,\epsilon)\). However, this contradicts the definition of \(N(G,d,\epsilon)\).
Now, we can understand the cover time in terms of the volume growth function.
**Theorem 4.12**.: \[t_{cov}\asymp\left(\int_{0}^{\infty}\sqrt{\log\frac{|G|}{V_{d}(\epsilon)}}d \epsilon\right)^{2}\]
Proof.: From equation (4.1) and lemma 4.11,
\[t_{cov}\asymp\left(\int_{0}^{\infty}\sqrt{\log N(G,d,\epsilon)}d\epsilon\right) ^{2}\gtrsim\left(\int_{0}^{\infty}\sqrt{\log\frac{|G|}{V_{d}(\epsilon)}}d \epsilon\right)^{2}\]
Similarly, we can get the upper bound
\[t_{cov}\asymp\left(\int_{0}^{\infty}\sqrt{\log N(G,d,\epsilon)}d\epsilon\right) ^{2}\lesssim\left(\int_{0}^{\infty}\sqrt{\log\frac{|G|}{V_{d}(\epsilon/3)}}d \epsilon\right)^{2}=\left(3\int_{0}^{\infty}\sqrt{\log\frac{|G|}{V_{d}( \epsilon)}}d\epsilon\right)^{2}\]
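To make the volume growth function concrete, the following sketch computes \(V_{d}(\epsilon)\) and the integral above for the cycle \(\mathbb{Z}_{n}\) (an illustrative choice, using the commute times \(\kappa(0,k)=2k(n-k)\) from Example 3.7) and compares the result with \(n^{2}\); see also Example 4.14 below.

```python
# Sketch: volume growth function and cover-time integral for Z_n, using
# d(0, k) = sqrt(kappa(0, k)) = sqrt(2 k (n - k)).
import numpy as np

n = 200
dist = np.sqrt(np.array([2.0 * k * (n - k) for k in range(n)]))   # d(0, k)

def V(eps):
    return np.sum(dist <= eps)            # volume growth function V_d(eps)

eps_grid = np.linspace(0.0, dist.max(), 5000)
integrand = np.sqrt(np.log(n / np.array([V(e) for e in eps_grid])))
integral = float(np.sum(integrand) * (eps_grid[1] - eps_grid[0]))  # Riemann sum
print(integral ** 2, n ** 2)              # comparable up to constants
```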
**Corollary 4.13** (Matthew's Bounds [6]).: _Let \(M=\max_{g_{1},g_{2}\in G}\kappa(g_{1},g_{2})=\max_{g\in G}\kappa(e,g)\), then_
\[M\lesssim t_{cov}\lesssim M\log|G|\]
_We note \(M\asymp\max_{g\in G}\mathbf{E}_{e}[\tau_{g}]\) the maximum hitting time._
Proof.: The maximum distance is \(\max_{s,t\in G}d(s,t)=\sqrt{M}\). Thus, for \(\epsilon\geq\sqrt{M}\), \(V_{d}(\epsilon)=|G|\), so
\[\int_{0}^{\infty}\sqrt{\log\frac{|G|}{V_{d}(\epsilon)}}d\epsilon=\int_{0}^{\sqrt{M}}\sqrt{\log\frac{|G|}{V_{d}(\epsilon)}}d\epsilon\]
Since \(V_{d}(\epsilon)\geq 1\),
\[t_{cov}\lesssim\left(\int_{0}^{\sqrt{M}}\sqrt{\log\frac{|G|}{V_{d}(\epsilon)}} d\epsilon\right)^{2}\leq\left(\int_{0}^{\sqrt{M}}\sqrt{\log|G|}d\epsilon\right)^{2}=M \log|G|\]
For the lower bound, let \(s,t\) be s.t. \(d(s,t)=\sqrt{M}\). Then \(B_{s}(\sqrt{M}/3)\) and \(B_{t}(\sqrt{M}/3)\) are disjoint, so \(V_{d}(\sqrt{M}/3)\leq|G|/2\).
\[t_{cov}\asymp\left(\int_{0}^{\infty}\sqrt{\log\frac{|G|}{V_{d}(\epsilon)}}d \epsilon\right)^{2}\geq\left(\int_{0}^{\sqrt{M}/3}\sqrt{\log\frac{|G|}{V_{d}( \epsilon)}}d\epsilon\right)^{2}\geq\left(\int_{0}^{\sqrt{M}/3}\sqrt{\log 2}d \epsilon\right)^{2}\gtrsim M\]
The following examples show that both the upper and lower bounds may be achieved.
**Example 4.14** (Cycle).: We consider the random walk on \(\mathbb{Z}_{n}\) generated by the distribution that assigns probability \(1/2\) to each of \(\pm 1\). We will prove
\[t_{cov}\asymp n^{2}\]
This is the lower bound of the Matthews bounds, so we must only prove the upper bound. Using example 3.7, \(d(0,k)=\sqrt{2k(n-k)}\) for \(k\in\mathbb{Z}_{n}\). Thus for \(k<n/2\),
\[\sqrt{2k(n-k)}\leq\epsilon<\sqrt{2(k+1)(n-k-1)} \tag{4.2}\]
implies \(V_{d}(\epsilon)=2k+1\). The maximum value of \(d\) is achieved when \(k=\lfloor n/2\rfloor\). Thus for \(\epsilon\geq\sqrt{n^{2}/2}=n/\sqrt{2}\), \(V_{d}(\epsilon)=n\), so \(\log\frac{n}{V(\epsilon)}=0\). Thus,
\[t_{cov}\asymp\left(\int_{0}^{\infty}\sqrt{\log\frac{|G|}{V_{d}(\epsilon)}}d \epsilon\right)^{2}=\left(\int_{0}^{3\sqrt{n}}\sqrt{\log\frac{|G|}{V_{d}( \epsilon)}}d\epsilon+\int_{3\sqrt{n}}^{n/\sqrt{2}}\sqrt{\log\frac{|G|}{V_{d}( \epsilon)}}d\epsilon\right)^{2}\]
We bound the first summand
\[\int_{0}^{3\sqrt{n}}\sqrt{\log\frac{|G|}{V_{d}(\epsilon)}}d\epsilon\leq\int_{ 0}^{3\sqrt{n}}\sqrt{\log n}d\epsilon=3\sqrt{n\log n}\]
Thus, it suffices to show
\[\int_{3\sqrt{n}}^{n/\sqrt{2}}\sqrt{\log\frac{|G|}{V_{d}(\epsilon)}}d\epsilon\lesssim n\]
Rearranging equation (4.2), we have
\[2(k+1)\geq\frac{\epsilon^{2}}{n-k-1}\geq\frac{\epsilon^{2}}{3n}\]
Thus \(V_{d}(\epsilon)\geq\frac{\epsilon^{2}}{3n}-1\geq\frac{\epsilon^{2}}{6n}\) for \(3\sqrt{n}\leq\epsilon\leq\sqrt{2\lfloor n/2\rfloor^{2}}\), and we can manually check this inequality for \(\sqrt{2\lfloor n/2\rfloor^{2}}\leq\epsilon\leq n/\sqrt{2}\). Thus,
\[\int_{3\sqrt{n}}^{n/\sqrt{2}}\sqrt{\log\frac{|G|}{V_{d}(\epsilon )}}d\epsilon \leq\int_{3\sqrt{n}}^{n/\sqrt{2}}\sqrt{\log\frac{6n^{2}}{\epsilon^ {2}}}d\epsilon\leq\int_{3\sqrt{n}}^{n/\sqrt{2}}\log\frac{6n^{2}}{\epsilon^{2}}d\epsilon\] \[=(n/\sqrt{2}-3\sqrt{n})\log 6+2(n/\sqrt{2}-3\sqrt{n})\log n-2(x \log x-x)\Big{|}_{3\sqrt{n}}^{n/\sqrt{2}}\] \[=n\frac{\log 6}{\sqrt{2}}+\sqrt{2}n\log n-\sqrt{2}n\log(n/ \sqrt{2})+\sqrt{2}n+o(n)\asymp n\]
Thus,
\[t_{cov}\asymp n^{2}\]
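The asymptotics can also be observed directly by simulation; the sketch below estimates the cover time of \(\mathbb{Z}_{n}\) by Monte Carlo (the comparison value \(n(n-1)/2\) is the classical exact cover time of the cycle, quoted here as a known fact rather than derived in the text).

```python
# Sketch: Monte Carlo estimate of the cover time of the cycle Z_n; the
# estimates grow like n^2, consistent with Example 4.14.
import random

def cover_time(n):
    visited, state, steps = {0}, 0, 0
    while len(visited) < n:
        state = (state + random.choice([1, -1])) % n
        visited.add(state)
        steps += 1
    return steps

for n in [10, 20, 40]:
    est = sum(cover_time(n) for _ in range(2000)) / 2000
    print(n, round(est, 1), n * (n - 1) / 2)
```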
The cover times of our other examples can be computed with the following proposition.
**Proposition 4.15**.: _Let \(f\) be a fixed function. Consider a random walk on a group \(G\) with \(|G|=N\) s.t. \(h(g)\asymp f(N)\) for all \(g\neq e\). Then,_
\[t_{cov}\asymp f(N)\log N\]
Proof.: The maximum hitting time is \(\asymp f(N)\), so the upper bound is the Matthews upper bound. It suffices to prove a lower bound.
The minimum distance satisfies \(\alpha:=\min\limits_{g\neq e}d(e,g)\asymp\sqrt{f(N)}\). Then for \(\epsilon<\alpha\), \(V_{d}(\epsilon)=1\). Thus,
\[t_{cov}\asymp\left(\int_{0}^{\infty}\sqrt{\log\frac{|G|}{V_{d}(\epsilon)}}d\epsilon\right)^{2}\geq\left(\int_{0}^{\alpha}\sqrt{\log\frac{|G|}{V_{d}(\epsilon)}}d\epsilon\right)^{2}=\left(\int_{0}^{\alpha}\sqrt{\log|G|}d\epsilon\right)^{2}=\alpha^{2}\log|G|\asymp f(N)\log N\]
**Example 4.16** (Hypercube).: From proposition 3.13, \(\mathbf{E}_{e}[\tau_{g}]\asymp 2^{m}\) for all \(g\neq e\), so by the above proposition the cover time of the hypercube \(\mathbb{Z}_{2}^{m}\) is
\[t_{cov}\asymp m2^{m}\]
**Example 4.17** (Finite-Dimensional Torus).: We consider a random walk on a torus \(\mathbb{Z}_{n}^{m}\). For \(n=2\), we have the hypercube discussed in example 3.9. For \(n>2\), this random walk is generated by the distribution \(p\) that assigns probability \(1/2m\) to each element \(\{0,\ldots,0,\pm 1,0\ldots,0\}\). We consider the case when \(m\) is fixed and \(n\) grows. The hitting times for this random walk are computed in section 10.4 of Levin, Peres, Wilmer [5] with electrical network methods. They prove for \(m\geq 3\),
\[\mathbf{E}_{e}[\tau_{x}]\asymp n^{m}\]
Thus by proposition 4.15,
\[t_{cov}\asymp n^{m}\log n\]
## 5 Further Directions
Our method gives explicit formulas for hitting times for random walks on any finite group whose representations are understood, which includes all finite abelian groups. However, we were only able to analyze and understand these formulas for a few simple cases. A generalization of the cases considered would be the torus \(\mathbb{Z}_{n}^{m}\), letting both \(n\) and \(m\) grow. This specializes to both the examples of the hypercube (example 3.9) and the finite-dimensional torus (example 4.17). In the cases in this paper, it seems that for large \(m\), the hitting times are all of the same order, the order of the group. If this were the case, proposition 4.15 gives us that
\[t_{cov}\asymp mn^{m}\log n\]
for these cases. This conjecture can be generalized to random walks on a group \(G\) generated by a probability distribution \(p\). In the above case, we had that the support of \(p\) was large enough, and that \(p\) was uniform on the support. In these cases, will we also have the phenomenon that the hitting times are all on the order of \(|G|\)? When the support of \(p\) is large, intuitively it is easy for the random walk to get "lost" even if it starts very close to its destination, which is reason to believe the hitting times would all be of the same order.
A different direction may be the following. The representation theory methods we developed were for the expected values of specific hitting times \(\tau_{g}=\min\{t\geq 0:X_{t}=g\}\). Thus, the techniques may also generalize to certain settings for hitting times of the form \(\tau_{A}=\min\{t\geq 0:X_{t}\in A\}\) for appropriately chosen sets such as cosets. There is also freedom to choose the initial distribution \(\mu\) so that
\[\mathbf{E}_{\mu}[\tau_{A}]\]
may be easier to compute and understand. |
2305.02215 | Exploring Linguistic Properties of Monolingual BERTs with Typological
Classification among Languages | The impressive achievements of transformers force NLP researchers to delve
into how these models represent the underlying structure of natural language.
In this paper, we propose a novel standpoint to investigate the above issue:
using typological similarities among languages to observe how their respective
monolingual models encode structural information. We aim to layer-wise compare
transformers for typologically similar languages to observe whether these
similarities emerge for particular layers. For this investigation, we propose
to use Centered Kernel Alignment to measure similarity among weight matrices.
We found that syntactic typological similarity is consistent with the
similarity between the weights in the middle layers, which are the pretrained
BERT layers to which syntax encoding is generally attributed. Moreover, we
observe that a domain adaptation on semantically equivalent texts enhances this
similarity among weight matrices. | Elena Sofia Ruzzetti, Federico Ranaldi, Felicia Logozzo, Michele Mastromattei, Leonardo Ranaldi, Fabio Massimo Zanzotto | 2023-05-03T15:52:17Z | http://arxiv.org/abs/2305.02215v2 | # Exploring Linguistic Properties of Monolingual BERTs
###### Abstract
The overwhelming success of transformers is a real conundrum stimulating a compelling question: are these machines replicating some traditional linguistic models or discovering radically new theories? In this paper, we propose a novel standpoint to investigate this important question. Using typological similarities among languages, we aim to layer-wise compare transformers for different languages to observe whether these similarities emerge for particular layers. For this investigation, we propose to use Centered kernel alignment to measure similarity among weight matrices. We discovered that syntactic typological similarity is consistent with the similarity among weights in the middle layers. This finding confirms results obtained by syntactically probing BERT and, thus, gives an important confirmation that BERT is replicating traditional linguistic models.
## 1 Introduction
Natural language processing (NLP), as well as Artificial Intelligence in general, is in an era where a tantalizing question can be addressed: can linguistic models naturally emerge in brain-like neural networks? This is a fascinating question as it suggests the more profound issue of whether linguistic theories can explain brain activity.
The powerful resurgence of brain-like neural networks has fostered end-to-end models to solve downstream tasks and generate, apparently, theory-agnostic models of languages. Pre-trained transformers Peters et al. (2018); Devlin et al. (2019) are offered as versatile universal sentence/text encoders that contain whatever is needed to solve any downstream task. Pre-training is obtained with two general tasks: next-sentence prediction and masked language model Devlin et al. (2019). Results are astonishing: these models outperform all other models nearly consistently after fine-tuning or domain-adaptation Jin et al. (2022).
Decades of studies in NLP have created symbol-empowered architectures for interpreting natural language by implementing different levels of linguistic analysis: morphology, syntax, semantics, and pragmatics are a few sub-disciplines of linguistics that shaped how symbolic-based NLP has been conceived. Understanding whether these levels of linguistic modeling emerge in pre-trained transformer architectures is a compelling issue.
_Probing_ transformers has mainly been used to investigate whether these models capture classical linguistic properties of languages. Probing consists of preparing precise sets of examples - probe tasks - and then observing how these examples activate transformers. In this way, BERT contextual representations for English have been tested to assess their ability to model syntactic information Goldberg (2019); Hewitt and Manning (2019); Jawahar et al. (2019); Coenen et al. (2019) and morphology Edmiston (2020). Syntactic probing tasks have also been used to assess the similarity among different monolingual BERT models Nikolaev and Pado (2022); Otmakhova et al. (2022).
In this study, we take a different standpoint to investigate if traces of linguistic models are encoded in monolingual BERT models Devlin et al. (2019): using linguistically-motivated typological similarities among languages Dryer and Haspelmath (2013), we aim to layer-wise compare transformers for different languages to observe whether these similarities emerge between weight matrices for particular layers. For this investigation, we propose to use Centered Kernel Alignment to measure similarity among weight matrices Kornblith et al. (2019). We discovered that syntactic typological similarity is consistent with the similarity among weights in the middle layers. This finding confirms results obtained by syntactically probing BERT and, thus, gives an important confirmation that BERT is replicating traditional linguistic models.
## 2 Background and related work
Although different, natural languages show more or less evident similarities at all levels of analysis - phonetics, morphology, syntax, and semantics - due to the fact that "the human language capacity is a species-specific biological property, essentially unique to humans, invariant among human groups, and dissociated from other cognitive systems" (Chomsky, 2017). Hence, some properties are shared by all natural languages, called 'language universals', such as the fact that all spoken languages use vowels and all languages have nouns and verbs, and some are common among groups of languages, called 'universal tendencies', such as the fact that many languages have nasal consonants (Greenberg, 2005; Haspelmath, 2001a,b). In our study, universal tendencies are fairly important.
Languages are classified in two main ways: typological classification and genealogical classification. Language typology is the branch of linguistics that, by studying universal tendencies of languages (Comrie, 1989; Song, 2010), maps out "the variation space filled by the languages of the world" and finds "regular patterns and limits of variation" (Haspelmath, 2001a). These typological patterns may be used to classify all languages at different levels of linguistic analysis. From our perspective, typological classification is more interesting than the genealogical (or genetic) one, which aims to gather languages directly or indirectly stemming from a common ancestor (Lehmann, 2013; Dunn, 2015). Indeed, languages belonging to a genealogical family may or may not share specific typological properties. Likewise, languages that appear similar according to a specific typological category may or may not be genealogically linked. For example, Latin (Lat), Japanese (Jap) and Turkish (Tur) are verb-final languages, whereas Italian (Ita) is not, despite being directly derived from Latin (see some sample sentences in Tab. 1). Clearly, common linguistic properties among languages are important for our study and, thus, we use the typological classification of languages.
Typological similarities among languages have been investigated with probes in two ways: (1) by using syntactic probes to compare the behavior of different monolingual BERT models (Nikolaev and Pado, 2022; Otmakhova et al., 2022); and (2) by searching for shared syntax representations in multilingual BERT (mBERT) (Chi et al., 2020; Ravishankar et al., 2019; Singh et al., 2019). Chi et al. (2020) suggest that mBERT has a behavior similar to monolingual BERT since a shared syntax representation can be found in its middle layers. However, mBERT seems to separate representations for each language rather than using a shared space for all languages (Singh et al., 2019). Hence, mBERT is not exploiting typological similarities among languages to build better language models.
Another perspective to exploit similarity among languages is to compare activation matrices of transformers produced over probing examples. In this case, the underlying assumption is that parallel sentences should have similar activation matrices if languages are similar. To compare these activation matrices, Canonical Correlation Analysis (CCA) (Hardoon et al., 2004), Singular Vector CCA (SVCCA) (Raghu et al., 2017), and Projection Weighted CCA (PWCCA) (Morcos et al., 2018) have been widely used. More recently, Centered Kernel Alignment (CKA) (Kornblith et al., 2019) measure has been proposed to quantify representational similarity. Kornblith et al. (2019) also claims that CCA and SVCCA are less adequate for assessing similarity between representations because they are invariant to linear transformations. By using these metrics, Vulic et al. (2020) compared activation matrices derived using monolingual models for translation pairs of words: using CKA, they measured the similarity between the representation of words in context in different languages. They noticed that the similarity between the representations of words is higher for more similar languages. Similarities between mBERT activation matrices have also been used to derive a phylogenetic tree of languages (Singh et al., 2019; Rama et al., 2020).
\begin{table}
\begin{tabular}{l l} \hline \hline Lat: & Hostes (S – ‘Enemies’) castra (O – ‘the camp’) oppugnant (V – ‘attack’) \\ Jap: & Maria-san (S – ‘Mary’) wa (particle) sarada (O – ‘salad’) wo (particle) tabemasu (V – ‘eats’) \\ Tur: & Maria (S – ‘Mary’) elmayi (O – ‘the apple’) yiyor (V – ‘eats’) \\ Ita: & Maria (S – ‘Mary’) mangia (V – ‘eats’) la mela (O – ‘the apple’) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Syntactic property of languages: the basic order of constituents - subject (S), direct object (O) and verb (V) - in a clause
In our study, building on typological similarities among languages and on the measures to compare matrices, we aim to compare monolingual BERTs by directly evaluating layer-wise the similarities among parameters of BERTs for different languages. To the best of our knowledge, this is the first study using this idea to investigate whether transformers replicate linguistic models without probing tasks.
## 3 Method and Data
If languages are typologically similar with respect to syntactic or morphological properties, their BERTs should have similar weight matrices in some layers. This is our main intuition. In this section, we introduce a method to investigate it. Section 3.1 describes the selected typological classification of languages and how we use it to compute similarity between languages. Section 3.2 briefly introduces the subject of our study, BERT. Finally, Section 3.3 introduces a novel way, _biCKA_, to compare weight matrices.
### Selected typological similarity among languages
To compute morphological and syntactic similarity among languages, we use a typological classification scheme that, in line with lang2vec (Littell et al., 2017), is used to generate metric feature vectors for languages. These vectors are then used to compute similarity between languages.
The World Atlas of Language Structures (WALS) (Dryer and Haspelmath, 2013) aims at classifying the languages of the world on the basis of typological criteria at all levels of linguistic analysis. Languages in WALS are represented as categorical feature-value vectors. Each feature can assume two or more values. For instance, feature 20A Fusion of Selected Inflectional Formatives assumes 7 possible values: 1. Exclusively concatenative; 2. Exclusively isolating; 3. Exclusively tonal; 4. Tonal/isolating; 5. Tonal/concatenative; 6. Ablaut/concatenative; and 7. Isolating/concatenative. Feature 25B Zero Marking of A and P Arguments allows only 2 possibilities: 1. Zero-marking; 2. Non-zero marking. A complete description of each feature and its values can be found at [https://wals.info/feature](https://wals.info/feature). Since not all world languages are classified according to each feature (the list of languages analyzed varies from feature to feature), we had to complement WALS. As far as morphological features are concerned, we added Dutch, Romanian and Swedish values for all the features, except 26A, already filled in WALS for these languages; we also completed missing features for Italian, except 26A and 27A.
Specifically, we took into account all the 12 features classified under the "Morphology Area" and a selection of syntactic features of WALS (see Tab. 2). As far as syntactic features are concerned, we selected the most important ones relating to word order [23] (from 81A to 97A, Tab. 2) and some representative features pertaining to linguistic negation (143A, 143E, 143F, 144A). The last-mentioned features about negative morphemes and words have also been selected because they are fully specified for the target languages. Unlike morphological features - almost totally lacking values for Dutch, Romanian, Swedish, and Italian - syntactic features are almost fully specified for all target languages. We only completed 2 features with the following values: 84A (Italian, Russian, Romanian: 1 VOX, Persian: 3 XOV, Greek: 6 No dominant order); 93A (Italian and Dutch = 1 Initial interrogative phrase). The complete list of categorical vectors for the target languages is available in Appendix A; these vectors are extracted from WALS and, where necessary, completed by the two linguists.
\begin{table}
\begin{tabular}{l l} \hline \hline \multicolumn{1}{c}{_WALS ID_} & \multicolumn{1}{c}{_Feature_} \\ \hline \hline \multicolumn{2}{c}{_Morphological features_} \\ \hline
20A & Fusion of Selected Inflectional Formatives \\
21A & Exponence of Selected Inflectional Formatives \\
21B & Exponence of Tense-Aspect-Mood Inflection \\
22A & Inflectional Synthesis of the Verb \\
23A & Locus of Marking in the Clause \\
24A & Locus of Marking in Possessive Noun Phrases \\
25A & Locus of Marking: Whole-language Typology \\
25B & Zero Marking of A and P Arguments \\
26A & Prefixing vs. Suffixing in Inflectional Morphology \\
27A & Reduplication \\
28A & Case Syncretism \\
29A & Syncretism in Verbal Person/Number Marking \\ \hline \hline \end{tabular}
\end{table}
Table 2: Selected WALS features
In line with lang2vec (Littell et al., 2017), we used categorical WALS vectors for languages to generate metric vectors. Each syntactic and morphological pair of feature-value \((f:v)\) is one dimension of this space, which is 1 for a language L if L has the feature \(f\) equal to the value \(v\). Thus, the similarity \(\sigma(L_{1},L_{2})\) between two languages \(L_{1}\) and \(L_{2}\) can be computed as the similarity between the language vectors \(\vec{L_{1}}\) and \(\vec{L_{2}}\), using the cosine similarity (\(cos\)) as similarity measure:
\[\sigma(L_{1},L_{2})=cos(\vec{L_{1}},\vec{L_{2}})\]
This measure assesses the similarity between languages by counting the number of shared feature-value pairs; it has two versions, \(\sigma_{synt}(L_{1},L_{2})\) and \(\sigma_{morph}(L_{1},L_{2})\), that compute the similarity using, respectively, only syntactic and only morphological features.
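As an illustration, the sketch below builds such binary feature-value vectors and computes the cosine similarity for a few languages; the two WALS features and their values are placeholders, not the full feature set of Tab. 2.

```python
import numpy as np

# Illustrative WALS-style categorical vectors: {feature_id: value} per language.
wals = {
    "ITA": {"81A": "2 SVO", "87A": "2 Noun-Adjective"},
    "FRE": {"81A": "2 SVO", "87A": "2 Noun-Adjective"},
    "TUR": {"81A": "1 SOV", "87A": "1 Adjective-Noun"},
}

def metric_vector(features, all_pairs):
    # One binary dimension per (feature, value) pair, as in lang2vec.
    return np.array([1.0 if features.get(f) == v else 0.0 for f, v in all_pairs])

def sigma(l1, l2, wals):
    pairs = sorted({(f, v) for feats in wals.values() for f, v in feats.items()})
    v1, v2 = metric_vector(wals[l1], pairs), metric_vector(wals[l2], pairs)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

print(sigma("ITA", "FRE", wals), sigma("ITA", "TUR", wals))  # identical pairs give 1.0
```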
### BERT Model in brief
BERT (Devlin et al., 2019) is the subject of our study, and this section briefly describes how it is organized in order to give names to weight matrices at each layer, although it is a widely known model.
BERT is a layered transformer-based model with attention (Vaswani et al., 2017) that utilizes only the Encoder block. Each layer \(M_{i}\) of the encoder block is divided into two sub-layers. The first sub-layer, called the Attention Layer, relies on the attention mechanism. The second sub-layer (Feed Forward Layer) is simply a feed-forward neural network. Each sub-layer has its own weight matrices, referred to in the rest of the paper as follows:
\begin{tabular}{l l} Attention & \\ \hline query & \(Q_{i}\) \\ key & \(K_{i}\) \\ value & \(V_{i}\) \\ attention\_output\_dense & \(OA_{i}\) \\ \hline \hline Feed Forward Network & \\ \hline intermediate\_dense & \(DI_{i}\) \\ output\_dense & \(DO_{i}\) \\ \hline \end{tabular} Since the self-attention mechanism computes it, the matrix \(C_{i}=Q_{i}K_{i}^{T}\) is also taken into account. For each monolingual BERT for a language \(L\), the weight matrices \(M_{i}=\{Q_{i},K_{i},V_{i},OA_{i},C_{i},DI_{i},DO_{i}\}\) for \(i\in[0,\dots,11]\) should represent part of the linguistic model of \(L\) after pre-training.
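As a minimal sketch, the matrices named above can be collected from a pre-trained monolingual model with the transformers library as follows; the attribute paths are those of the standard Hugging Face BertModel implementation, bert-base-uncased stands in for any of the models used here, and \(C_{i}\) is computed directly from the stored weight tensors (which follow the output-by-input convention).

```python
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")  # any monolingual BERT of the study

def layer_matrices(model, i):
    """Collect M_i = {Q_i, K_i, V_i, OA_i, C_i, DI_i, DO_i} for encoder layer i."""
    layer = model.encoder.layer[i]
    Q = layer.attention.self.query.weight.detach()
    K = layer.attention.self.key.weight.detach()
    V = layer.attention.self.value.weight.detach()
    OA = layer.attention.output.dense.weight.detach()
    DI = layer.intermediate.dense.weight.detach()
    DO = layer.output.dense.weight.detach()
    C = Q @ K.T  # query-key product entering self-attention
    return {"Q": Q, "K": K, "V": V, "OA": OA, "C": C, "DI": DI, "DO": DO}

print({name: tuple(w.shape) for name, w in layer_matrices(model, 5).items()})
```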
### Bidimensional Centered Kernel Alignment for Comparing Weight Matrices
Centered kernel alignment (CKA) (Kornblith et al., 2019) is a metric used to compare similarities between representations, that is, activation matrices of the deep neural network. Linear CKA measure is defined as follows:
\[CKA(X,Y)=\frac{||Y^{T}X||_{F}^{2}}{||X^{T}X||_{F}||Y^{T}Y||_{F}} \tag{1}\]
where \(X\) and \(Y\) are \(n\times p\) matrices denoting the activations of \(p\) neurons for \(n\) examples. The key idea behind this metric is to determine the similarity between pairs of elements and then compare the similarity structures. As shown by Kornblith et al. (2019), the numerator in Equation 1 is proportional to the similarity between the estimated covariance matrices \(XX^{T}\) and \(YY^{T}\) of the mean-centered activation matrices \(X\) and \(Y\), where the similarity is the dot product between their linearized versions (referred to as \(vec\)):
\[vec(XX^{T})^{T}\cdot vec(YY^{T})=trace(XX^{T}YY^{T})=||Y^{T}X||_{F}^{2} \tag{2}\]
where \(vec(A)\) is the linearization of a matrix \(A\) and, thus, \(vec(XX^{T})^{T}\cdot vec(YY^{T})\) is equivalent to the Frobenius product between the two matrices. CKA is therefore suitable for comparing matrices of sentence representations, since it captures relations between the rows of the two matrices, which represent words.
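A minimal NumPy sketch of Equation 1, assuming the usual column-wise mean-centering of the CKA formulation:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two n x p matrices (Eq. 1), after mean-centering the columns."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
print(linear_cka(A, A))  # equals 1 for identical inputs
```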
Among the different methods, CKA is chosen because it allows one to compare matrices with \(p\geq n\). Other metrics that are invariant to invertible linear transformations, such as CCA and SVCCA, instead assign the same similarity value to any pair of matrices having \(p\geq n\) (Kornblith et al., 2019). Moreover, the columns of the analyzed weight matrices are characterized by high multicollinearity. If one feature is a linear combination of others, then the covariance matrix also has a row that is a linear combination of others and is hence rank deficient. This makes the use of PWCCA difficult in this setting, since it requires the computation of CCA and, consequently, the inversion of the covariance matrices.
However, in the case of weight matrices, CKA is not suitable as such, because both the rows and the columns of these matrices play an important role in determining the output of the network. Indeed, both the similarity of \(W_{1}W_{1}^{T}\) with respect to \(W_{2}W_{2}^{T}\) and the similarity of \(W_{1}^{T}W_{1}\) with respect to \(W_{2}^{T}W_{2}\) may play an important role in determining the similarity between the two weight matrices \(W_{1}\) and \(W_{2}\).
Hence, we introduce bidimensional CKA, \(biCKA\), a variant of CKA that compares matrices considering rows and columns. Firstly, given a weight matrix \(W\), we define the following block diagonal matrix \(F(W)\) as follows:
\[F(W)=\left[\begin{array}{c|c}W&0\\ \hline 0&W^{T}\end{array}\right]\]
Then \(biCKA\), our solution for computing the similarity between weight matrices, is defined as follows:

\[biCKA(W_{1},W_{2})=CKA(F(W_{1}),F(W_{2}))\]
Hence:
\[\begin{split}& biCKA(W_{1},W_{2})\propto\\ &\propto vec\left(\left[\begin{array}{c|c}W_{1}W_{1}^{T}&0\\ \hline 0&W_{1}^{T}W_{1}\end{array}\right]\right)^{T}\cdot vec\left(\left[ \begin{array}{c|c}W_{2}W_{2}^{T}&0\\ \hline 0&W_{2}^{T}W_{2}\end{array}\right]\right)\end{split}\]
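The following sketch implements this definition on top of the linear CKA of the previous snippet; it assumes the two weight matrices have the same shape (as is the case for BERTs sharing the same architecture), and it applies the centering step of CKA to \(F(W)\) as in the plain definition above.

```python
import numpy as np

def linear_cka(X, Y):
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    return np.linalg.norm(Y.T @ X, "fro") ** 2 / (
        np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

def F(W):
    # Block-diagonal embedding [[W, 0], [0, W^T]], so rows *and* columns are compared.
    n, p = W.shape
    out = np.zeros((n + p, p + n))
    out[:n, :p] = W
    out[n:, p:] = W.T
    return out

def bi_cka(W1, W2):
    return linear_cka(F(W1), F(W2))

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((300, 300)), rng.standard_normal((300, 300))
print(bi_cka(W1, W1), bi_cka(W1, W2))  # 1.0 for identical matrices
```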
## 4 Experiments
### Experimental set-up
The aim of the experiments is to use typological similarity between languages to determine whether some BERT layers encode morphological or syntactic information.
Our working hypothesis is the following: given a set of pairs of languages \((L_{1},L_{2})\), a particular matrix \(W\) of a BERT layer encodes syntactic or morphological information if the similarity \(biCKA(W_{L_{1}},W_{L_{2}})\) between that particular weight matrix of BERT\({}_{L_{1}}\) and BERT\({}_{L_{2}}\) correlates with the typological similarity \(\sigma_{synt}(L_{1},L_{2})\) or \(\sigma_{morph}(L_{1},L_{2})\), respectively. To evaluate the correlation, we compared the two lists of pairs ranked according to \(biCKA(W_{L_{1}},W_{L_{2}})\) and to \(\sigma(L_{1},L_{2})\) by using Spearman's correlation coefficient. Spearman's correlation coefficient takes values from \(-1\) to \(+1\): values around \(0\) indicate no correlation, a value over the threshold of \(0.5\) indicates a good correlation, while negative values indicate anticorrelation.
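In practice, the rank correlation between the two lists can be computed with SciPy, as in the sketch below; the similarity values are invented placeholders for a handful of language pairs.

```python
import numpy as np
from scipy.stats import spearmanr

# One entry per language pair, in the same order in both lists (placeholder values).
pairs = [("ITA", "FRE"), ("ITA", "TUR"), ("ENG", "GER"), ("ENG", "TUR"), ("SPA", "ROM")]
sigma_synt = np.array([0.85, 0.30, 0.70, 0.25, 0.80])  # typological similarities
bicka_oa5 = np.array([0.92, 0.41, 0.77, 0.35, 0.88])   # biCKA similarities for, e.g., OA_5

rho, pval = spearmanr(sigma_synt, bicka_oa5)
print(f"Spearman rho = {rho:.2f}, p = {pval:.3f}")
```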
To build the set of language pairs, we used Hugging Face1, which offers a considerable repository of pre-trained BERTs for a variety of languages. Hence, we had the possibility to select specific and interesting languages for our investigation. Languages are listed along with the respective pre-trained monolingual model (retrieved from the Hugging Face repository using the transformers library (Wolf et al., 2020)). We selected the following European languages, listed below according to genealogical criteria:
Footnote 1: [https://huggingface.co/models](https://huggingface.co/models)
* Italian (ITA), bert-base-italian-cased by Schweter (2020c); French (FRE), bert-base-french-europeana-cased by Schweter (2020b); Spanish (SPA), bert-base-spanish-wwm-cased by Canete et al. (2020); Romanian (ROM), bert-base-romanian-cased-v1 by Dumitrescu et al. (2020);
* English (ENG), bert-base-uncased by Devlin et al. (2019); Swedish (SWE), bert-base-swedish-cased by Malmsten et al. (2020); German (GER), bert-base-german-cased by Deepset (2019); Dutch (DUT), bert-base-dutch-cased by de Vries et al. (2019);
* Russian (RUS), rubert-base-cased by Kuratov and Arkhipov (2019);
* Greek (GRK), bert-base-greek-uncased-v1 by Koutsikakis et al. (2020)
* Non-Indo-European languages: Turkish (TUR), bert-base-turkish-cased by Schweter (2020a); and, Finnish (FIN), bert-base-finnish-cased-v1 by Virtanen et al. (2019).
In addition, we included Persian (or Farsi) (PRS), belonging to the Indo-Iranian branch with the corresponding model bert-fa-base-uncased by Farahani et al. (2020), which is one of the most representative Indo-European language spoken outside the European continent.
We performed two different sets of experiments: (1) using all language pairs in a single shot; (2) clustering languages and then selecting pairs across clusters. The first set of experiments shows where syntax and morphology are generally encoded. The second set of experiments is instead used to determine whether more fine-grained common features among languages are captured or not.
The experiments on clustering languages aim to be more specific in describing whether some weight matrices encode specific linguistic information. Indeed, despite being all European, the selected languages show different typological features. Hence, these languages may be clustered according to their WALS similarity on typological vectors. Then, we identified distinctive features among pairs of clusters by comparing the features' Gini impurity (Ceriani and Verme, 2012) in each cluster and in their union. We will refer to features that take entirely different values among clusters as _polarizing features_. Once those features are detected, to understand whether they are encoded by a given matrix, we computed ranked lists restricted to pairs of languages belonging to different clusters. The intuition is that the similarity between these pairs of languages, and, thus, the Spearman's correlation between the ranked lists, is mainly affected by the polarizing features.
## 5 Results and Discussion
This section reports the results of the two sets of experiments. Section 5.1 reports the experiments using all language pairs in a single shot. Section 5.2 describes the experiments on clustering languages and then selecting pairs inter-clusters.
### Correlation between matrices in all languages
Experiments using all languages in a single list of pairs yield relevant results, confirming some of the findings obtained using probe tasks on BERT. Indeed, both the morphological and the syntactic spaces in WALS correlate with specific layers in BERT (see Fig. 1). Only some of the correlation coefficients have values higher than the fixed threshold of \(0.5\).
Syntactic properties seem to emerge in middle layers (Fig. 1(a)) for the content matrices \(OA\) and \(V\) of the attention sub-layer. In fact, the matrices \(OA_{4}\), \(V_{5}\), \(OA_{5}\), \(OA_{6}\) achieve a Spearman's value higher than the fixed threshold, respectively, \(0.60\), \(0.52\), \(0.66\), and \(0.55\). The highest correlation value is \(0.66\) for \(OA_{5}\). All Spearman's coefficients are tested against the null hypothesis of the absence of correlation and are statistically significant with a p-value smaller than \(0.01\). These results align with the probing experiments that assessed the predominance of middle layers in the encoding of syntactic information.
Morphological properties instead show up in a single matrix \(V_{3}\). The correlation between typological space and the matrix is moderate, that is, \(0.55\). Moreover, the correlation seems to be moderately stable until the layer \(7\) on these matrices \(V\).
These results are new and important since they corroborate probing results (Nikolaev and Pado, 2022; Otmakhova et al., 2022; Chi et al., 2020; Ravishankar et al., 2019; Singh et al., 2019). Indeed, layers \(4\), \(5\), and \(6\) show a positive correlation with syntax, as shown by probing tasks.
### Extra-cluster Correlation
Experiments on correlations over ranked lists of language pairs drawn from different clusters aim to study whether specific linguistic
Figure 1: Spearman correlation coefficient over all language pairs ranked with typological features and with weight matrix similarities. Rows are the matrices types, and columns are the layers. The color scale: Values closer to \(+1\) are in red, closer to 0 are in black, and values closer to -1 are in blue. Statistically significant results with a \(p-value<0.01\) are labeled with \({}^{*}\).
features are related to specific weight matrices in BERT.
For this set of experiments, a K-means clustering of languages is performed (Figure 4) for both the syntactic and the morphological typological spaces. Four clusters emerged for the syntactic typological space of languages, and three clusters emerged for the morphological one. The clusters (\(S\)) generated from syntactic features are: \(S1=\{\texttt{ITA},\texttt{FRE},\texttt{SPA},\texttt{ROM}\}\), \(S2=\{\texttt{ENG},\texttt{FIN},\texttt{SWE},\texttt{RUS},\texttt{GRK}\}\), \(S3=\{\texttt{TUR},\texttt{PRS}\}\) and \(S4=\{\texttt{GER},\texttt{DUT}\}\). The clusters (\(M\)) generated from morphological space are: \(M1=\{\texttt{ITA},\texttt{FRE},\texttt{SPA},\texttt{ROM}\}\), \(M2=\{\texttt{ENG},\texttt{SWE},\texttt{GER},\texttt{DUT}\}\) and \(M3=\{\texttt{TUR},\texttt{FIN},\texttt{PRS},\texttt{RUS},\texttt{GRK}\}\).
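The clustering step can be sketched as follows; the binary vectors are random placeholders for the WALS feature-value vectors of Section 3.1, and the number of clusters is the one retained for the syntactic space.

```python
import numpy as np
from sklearn.cluster import KMeans

langs = ["ITA", "FRE", "SPA", "ROM", "ENG", "SWE", "GER", "DUT", "RUS", "GRK", "TUR", "FIN", "PRS"]
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(len(langs), 60)).astype(float)  # placeholder feature-value vectors

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print({lang: int(label) for lang, label in zip(langs, kmeans.labels_)})
```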
The Spearman's correlation is reported for each matrix in the different layers, for both syntactic (Fig. 2) and morphological (Figure 3) clustering. In each matrix, two different clusters are considered.
From the syntactic point of view (see Fig. 2), the two larger clusters \(S1\) and \(S2\) show an interesting pattern (Figure 2(a)). The threshold of \(0.5\) is exceeded by the matrices in layers from 4 to 8, with a peak of 0.77 in layer 5 on matrix \(V_{5}\). These values are statistically significant with a p-value lower than \(0.01\). These results confirm what has been observed in the previous section. Hence, these matrices may encode the polarizing features 87A Order of Adjective and Noun and 97A
Figure 3: Each matrix shows the Spearman’s correlation coefficients for extra-cluster morphological analysis, one matrix for each pair of clusters, \(M_{i}\) and \(M_{j}\). Values closer to \(+1\) are in red, values closer to -1 in blue. Statistically significant results with a p-value lower than \(0.01\) are labelled with \({}^{*}\).
Figure 2: Each matrix shows the Spearman’s correlation coefficients for extra-cluster syntactic analysis, one matrix for each pair of clusters, \(S_{i}\) and \(S_{j}\). Values closer to \(+1\) are in red, and values closer to -1 in blue. Statistically significant results with a p-value lower than \(0.01\) are labeled with \({}^{*}\).
Relationship between the Order of Object and Verb and the Order of Adjective and Noun.
For completeness, we also report results between syntactic clusters where one is a smaller cluster. Results here are very unstable. In fact, the Spearman's values that exceed the threshold of \(0.5\) fluctuate too much across different layers and matrices. Results are rarely statistically significant, with the exception of \(Q_{3}\) in Fig. 2(e). Hence, it is hard to assert that the polarizing features of these clusters are encoded at specific layers using Spearman's correlation coefficient.
From the morphological point of view (see Fig. 3), language pairs from \(M1\)-\(M3\) and from \(M2\)-\(M3\) lead to interesting results. Indeed, the ranked lists of pairs for these clusters show extra-cluster Spearman's coefficients that are above the threshold across multiple layers and statistically significant. The rankings of pairs generated from \(M1\)-\(M3\) and \(M2\)-\(M3\) have peaks at layer 0 (\(Q_{0}\) matrix) and layer 3 (\(V_{3}\) matrix), respectively. However, a high correlation can also be observed in other layers, with no clear descending trend. Some of the polarizing and nearly polarizing features shared by \(M1\)-\(M3\) and \(M2\)-\(M3\) are then important: 29A Syncretism in Verbal Person/Number Marking, 21A Exponence of Selected Inflectional Formatives, 23A Locus of Marking in the Clause.
Although these results are not conclusive, these experiments on clusters of similar languages establish a possible new way to investigate the linguistic properties of BERT.
## 6 Conclusions
Understanding whether large language models encode linguistic models is a challenging and important research question. Despite their flexibility and their SOTA performances on multiple NLP tasks, transformer-based models do not explicitly express what they learn about language constructions and, thus, it is hard to control their failures directly.
Our paper proposes a novel way to observe BERT's capability to encode linguistic models and makes two critical contributions: confirming syntactic probing results from another point of view and opening a new way to investigate specific linguistic features in layers. Differently from all previous approaches, our methodology is based on directly comparing layer-by-layer weight matrices of BERTs and evaluating whether typologically similar languages have similar matrices in specific layers. From a different standpoint, our experimental results confirm that layers 4, 5, and 6 encode syntactic linguistic information. Moreover, they also suggest that the attention's value matrix V and the attention's output layer are more important than other matrices in encoding linguistic models. The latter is an important and novel result.
In future work, our findings could be helpful: (1) for defining cross-language training procedures that consider similarities between languages and between models, and (2) for fostering ways to act on specific weight matrices of specific layers of BERT to change the undesired behavior of final BERT downstream systems. Moreover, this methodology could be used on other transformer architectures.
Figure 4: t-SNE plot of clustering based on syntactic (4a) and morphological (4b) features extracted from WALS.
### Limitations
To the best of our knowledge, this is the first attempt to directly compute similarities between weight matrices of BERT and to compare it with an external resource. Hence, it has many possible limitations.
The most important limitation may be due to the fact that transformers in general, and BERT in particular, could be mostly large memories of pre-training examples. Hence, comparing weight matrices at different layers could imply comparing the pre-training examples given to the different BERTs. However, this is not a limitation of our study alone. Indeed, it could be a limitation for any linguistic analysis of BERTs or other transformers.
The second limitation is the limited availability of monolingual BERTs for low-resource languages, which makes our analysis incomplete. The growing availability of monolingual BERTs can solve this issue and may require re-running the experiments.
The third limitation is the incompleteness of the World Atlas of Language Structures (WALS) (Dryer and Haspelmath, 2013). Indeed, as this is a growing linguistic resource, our results also depend on the quality of the resource at the time of download. For this reason, we selected languages that could be checked by our linguists.
|
2308.04698 | Collective deformation modes promote fibrous self-assembly in
protein-like particles | The self-assembly of particles into organized structures is a key feature of
living organisms and a major engineering challenge. While it may proceed
through the binding of perfectly matched, puzzle-pieces-like particles, many
other instances involve ill-fitting particles that must deform to fit together.
These include some pathological proteins, which have a known propensity to form
fibrous aggregates. Despite this observation, the general relationship between
the individual characteristics of the particles and the overall structure of
the aggregate is not understood. To elucidate it, we analytically and
numerically study the self-assembly of two-dimensional, deformable ill-fitting
particles. We find that moderately sticky particles tend to form equilibrium
self-limited aggregates whose size is set by an elastic boundary layer
associated with collective deformations that may extend over many particles.
Particles with a soft internal deformation mode thus give rise to large
aggregates. Besides, when the particles are incompressible, their aggregates
tend to be anisotropic and fiber-like. Our results are preserved in a more
complex particle model with randomly chosen elastic properties. This indicates
that generic protein characteristics such as allostery and incompressibility
could favor the formation of fibers in protein aggregation, and suggests design
principles for artificial self-assembling structures. | Hugo Le Roy, M. Mert Terzi, Martin lenz | 2023-08-09T04:21:50Z | http://arxiv.org/abs/2308.04698v2 | # Collective deformation modes promote fibrous self-assembly in protein-like particles
###### Abstract
The self-assembly of particles into organized structures is a key feature of living organisms and a major engineering challenge. While it may proceed through the binding of perfectly matched, puzzle-pieces-like particles, many other instances involve ill-fitting particles that must deform to fit together. These include some pathological proteins, which have a known propensity to form fibrous aggregates. Despite this observation, the general relationship between the individual characteristics of the particles and the overall structure of the aggregate is not understood. To elucidate it, we analytically and numerically study the self-assembly of two-dimensional, deformable ill-fitting particles. We find that moderately sticky particles tend to form equilibrium self-limited aggregates whose size is set by an elastic boundary layer associated with collective deformations that may extend over many particles. Particles with a soft internal deformation mode thus give rise to large aggregates. Besides, when the particles are incompressible, their aggregates tend to be anisotropic and fiber-like. Our results are preserved in a more complex particle model with randomly chosen elastic properties. This indicates that generic protein characteristics such as allostery and incompressibility could favor the formation of fibers in protein aggregation, and suggests design principles for artificial self-assembling structures.
Functional structures in living cells are often self-assembled from several copies of a single protein, from microtubules and clathrin cages to viral capsids in the shape of cylinders or spheres [1, 2, 3]. The radius of such assemblies is dictated by the curvatures of the individual particles that precisely fit together to form them. Similarly, artificial self-assembly often relies on fitting well-adjusted particles together to build structures with a controlled size [4, 5, 6, 7].
In other instances however, the shapes of the individual particles are ill-fitting and do not obviously dictate the structure of the aggregate. This is the case in the pathological aggregation of normally soluble proteins, _i.e._, of proteins not evolutionarily optimized to self-assemble into a well-defined structure [8, 9, 10, 11]. Despite the diversity of the shapes and interactions involved, the aggregation of these ill-fitting proteins produces fibrous structures with remarkable consistency. These fibers display varied widths and internal structures [12, 13, 14], and the proteins within are often significantly deformed in ways that depend on the assembly protocol [15]. Deformations are common in proteins, and many display physiologically relevant deformation modes that facilitate self-assembly [16, 17], perform a motor function [18], participate in their biochemical activity [19], or serve to mechanically transmit a signal, a function known as allostery [20, 21]. Nevertheless, the generic implications of the deformability of proteins on their ill-fitting aggregation is not understood.
Beyond proteins, particle deformations have long been suggested as a mechanism to regulate aggregate size in self-assembly [22]. In this picture, ill-fitting particles are forced to deform as they tightly bind to one another. As more and more particles are added to the aggregate, the distortions build up until they become so severe as to prevent any further assembly [23, 24]. The accumulation of stresses resulting from such distortions may govern the structure of DNA origami assemblies [25, 26] and prevent the indefinite bundling of preexisting protein fibers [27, 28, 29]. Beyond merely fixing the overall size of an aggregate, the accumulation of deformations can moreover dramatically alter its shape. This has been proposed to drive a transition from cylindrical to tape-like fiber bundles [30, 31]. Finally, it can also drive sticky, deformable particles to form anisotropic aggregates that grow into infinite one-dimensional structures reminiscent of pathological protein fibers [32]. The underlying mechanism and the nature of the particle properties that determine the dimensionality of the final aggregate however remain elusive.
In this paper, we provide a detailed analytical understanding of the emergence of self-limited and fibrous aggregates in a two-dimensional deformable particle system. We first introduce a minimal model based on highly symmetrical particles. It gives rise to an emergent elastic boundary layer length \(\ell\), allowing us to map it onto a continuum description in the limit of large \(\ell\). We use this description to compare the energies of several candidate structures and establish an aggregation phase diagram, which we then validate using numerical simulations. Finally, we introduce a much broader class of elastic particles, and demonstrate that the results derived in the idealized model still apply there, including in cases where the values of \(\ell\) are moderate.
## II Elastic aggregation model
To understand the interplay between particle deformations and aggregate structure, we first discuss a minimal one-dimensional example where ill-fitting particles deform upon aggregation. This leads to a deformation gradient from moderately deformed particles at the edge of the aggregate to highly deformed ones in its bulk. We then introduce a two-dimensional model that allows for a much wider diversity of aggregate structures, including fibers and planes. This model is analytically intractable in its general form, which leads us to design a continuum limit to enable further analysis.
### One-dimensional toy model
We consider a collection of identical isosceles trapezoids [Fig. 1(a)]. Each such particle can aggregate with its left and right neighbors by fusing its vertical sides with theirs. In the special case where the trapezoids are well-adjusted, _i.e._, if they are rectangles, such binding does not require any deformation. Conversely, particles whose top and bottom faces have different lengths are ill-fitting and must deform to bind. We model the energetic cost of this deformation using four springs: two representing the top and bottom faces of the particles (yellow and red) with rest lengths \(1\pm\epsilon\) and spring constant \(k\), and two connecting springs with spring constants \(k_{c}/2\) and rest lengths \(\epsilon\) that tend to center the top and bottom faces (blue). Summing the contributions of these four springs, we write the deformation energy of the central particle of Fig. 1(b) as
\[e_{d}^{(i)}= \frac{k}{2}[(x_{i+1}^{\uparrow}-x_{i}^{\uparrow})-(1+\epsilon)]^ {2}+\frac{k}{2}[(x_{i+1}^{\downarrow}-x_{i}^{\downarrow})-(1-\epsilon)]^{2}\] \[+\frac{k_{c}}{4}(x_{i+1}^{\uparrow}-x_{i+1}^{\downarrow}-\epsilon )^{2}+\frac{k_{c}}{4}(x_{i}^{\downarrow}-x_{i}^{\uparrow}-\epsilon)^{2}, \tag{1}\]
where the \(\{x_{i}^{\uparrow}\}\), \(\{x_{i}^{\downarrow}\}\) denote the coordinates of particle corners. This model does not involve any explicit prestresses; including some would not make any difference within the linear response regimes studied here and in the rest of this work.
Defining the shift between an upper and lower corner as \(\delta_{i}=x_{i}^{\uparrow}-x_{i}^{\downarrow}\), force balance dictates that inside the aggregate
\[k(\delta_{i+1}-2\delta_{i}+\delta_{i-1})=k_{c}\delta_{i}\quad \Rightarrow\quad\delta_{i}\propto\epsilon\sinh(i/\ell), \tag{2}\]
where we define \(i=0\) as the center of the aggregate and where
\[\ell =1/\ln\left[1+k_{c}/k+\sqrt{2k_{c}/k+(k_{c}/k)^{2}}\right]\] \[\underset{k_{c}\ll k}{\sim}\sqrt{k/2k_{c}}. \tag{3}\]
The full prefactor of the last expression of (2) is fixed through the force balance condition at the aggregate's left and right edges. In a large aggregate, it results in an exponential decay \(\delta_{i}\propto\epsilon\exp(-|i-i_{\text{edge}}|/\ell)\) close to these edges. The initially trapezoidal particles at the center of the aggregate are thus forced into a rectangular shape (\(\delta_{i}=0\)), in contrast with the particles that reside within an edge-associated elastic boundary layer of size \(\ell\).
In the limit \(k_{c}/k\to 0\), the boundary layer size diverges. This regime is characterized by very rigid yellow and red springs, implying that the yellow and red springs close to the edge of the aggregate are almost at their equilibrium lengths. Going deeper into the aggregate, each blue spring exerts a small compressive (tensile) force on the yellow (red) chain. These forces add up over long distances, implying a progressive change of the yellow and red strain over a length scale much larger than the particle size.
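These predictions are straightforward to verify numerically. The sketch below minimizes the total energy of Eq. (1) for a finite chain and compares the fitted decay length of \(\delta_{i}\) near an edge with Eq. (3); the parameter values are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

k, kc, eps, N = 1.0, 0.05, 0.1, 60          # stiff faces, soft connectors, mismatch, chain length

def energy(z):
    xu, xd = z[:N + 1], z[N + 1:]           # upper / lower corner coordinates
    e = 0.5 * k * np.sum((np.diff(xu) - (1 + eps)) ** 2)      # top (yellow) springs
    e += 0.5 * k * np.sum((np.diff(xd) - (1 - eps)) ** 2)     # bottom (red) springs
    e += 0.25 * kc * np.sum((xu[1:] - xd[1:] - eps) ** 2)     # right connector of each particle
    e += 0.25 * kc * np.sum((xd[:-1] - xu[:-1] - eps) ** 2)   # left connector of each particle
    return e

x0 = np.concatenate([np.arange(N + 1.0), np.arange(N + 1.0)])
res = minimize(energy, x0, method="L-BFGS-B")
xu, xd = res.x[:N + 1], res.x[N + 1:]
delta = xu - xd                             # corner shift delta_i of Eq. (2)

dist = np.arange(12)                        # distance from the right edge, in particle units
slope = np.polyfit(dist, np.log(np.abs(delta[::-1][:12])), 1)[0]
ell_fit = -1.0 / slope
ell_eq3 = 1.0 / np.log(1 + kc / k + np.sqrt(2 * kc / k + (kc / k) ** 2))
print(ell_fit, ell_eq3)                     # the two lengths should roughly agree
```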
### Two-dimensional particle model
One-dimensional aggregates are very simple geometrically, and are entirely characterized by the number of particles that they contain. Aggregation in higher dimension allows for a much broader variety of aggregate structures. To study the emergence of complex shapes there, we introduce a two-dimensional model based on hexagonal particles. Well-adjusted particles are represented by regular hexagons. Ill-fitting hexagons, by contrast, have alternating protruding (yellow) and withdrawn (red) corners [Fig. 2(a)]. The corners belonging to each category
Figure 1: The assembly of ill-fitting particles results in collective deformations, shown here in a minimal 1D model. (a) _left:_ Individual particle at rest, _right:_ colored springs schematizing the elasticity of the particles. In this 1D model, particles are allowed to aggregate only along the horizontal direction. Their corners then become colocalized (_black arrows_), which requires deforming at least some of the springs. (b) Schematic of a one-dimensional particle aggregate showing the state of the springs therein. While the yellow and red springs are able to assume different lengths in the vicinity of the edge of the aggregate, they are forced to have the same lengths in the bulk. The resulting energetic penalty hampers the formation of space-filling, bulky aggregates. In two or three dimensions, a similar penalty may result in the formation of fibrous aggregates.
form a yellow and a red equilateral triangle whose sides are springs with rest lengths \(1\pm\epsilon\) and a spring constant \(k\). The (blue) sides of the hexagon itself play the role of connecting springs with spring constant \(k_{c}\), and are at rest when the yellow and red equilateral triangles are themselves at rest and centered. These hexagonal particles are three-fold symmetric, which rules out an intrinsic, particle-level preference for forming one-dimensional fibers. Any fiber formed from their aggregation will thus be an emergent symmetry-broken structure. The response of these particles to shear and uniform compression cannot be independently varied while holding \(k_{c}/k\) (and therefore the boundary layer size) constant. To enable particles that range from fully compressible to incompressible, we thus additionally endow both yellow and red triangles with an areal rigidity. We implement it through an energy \(e_{\rm area}^{\uparrow/\downarrow}=k_{\rm area}(A^{\uparrow/\downarrow}-A_{0} ^{\uparrow/\downarrow})^{2}/2A_{0}^{\uparrow/\downarrow}\), where \(\uparrow\) and \(\downarrow\) respectively refer to the yellow and red triangle and \(A\) (\(A_{0}\)) are the associated triangle areas (rest areas).
The quadratic spring and areal energies introduced above all vanish in the particle's resting state. Any deformation away from this state implies an energetic cost, and such deformations are required to accommodate particle binding. In our model, two particles can bind along a blue side by merging one yellow and one red corner each. The merging of corners with different colors is not allowed. Each pair of bound sides is rewarded by an energy \(-g\) regardless of the particles' state of deformation, which defines a zero-range interaction between particles. These rules favor the assembly of hexagons into a triangular "particle lattice" [Fig. 2(b)] where all pairs of neighboring particles are bound, which we consider throughout. The aggregate topology, _i.e._, the specification of which particles bind to which others through which sides, can thus be entirely described by considering a triangular lattice and specifying a list of the lattice sites that are occupied by a particle. In the following we use the symbol \(\mathcal{T}\) to refer to this topology. Since the binding energy is fully determined by the number of bound particle sides, it only depends on \(\mathcal{T}\).
### Continuum formalism
Finding the most favorable aggregate in our 2D particle model requires two steps: to compute the optimal deformation energy for each fixed topology \(\mathcal{T}\), and then to determine which topology has the lowest optimal energy. Here we introduce a continuum approximation that renders the first step analytically tractable in several important cases. This approximation is formally valid in the limit where the 2D counterpart of the boundary layer size \(\ell\) is much larger than the particle size.
To define our continuum limit, we note that in large aggregates where all sites of the particle lattice discussed above are occupied (_i.e._, without holes), the yellow and red springs arrange into triangular spring lattices. In the regime \(k_{c}\ll k\) where connecting (blue) springs are much softer than triangle (yellow and red) springs, the strain within the yellow and red triangular spring lattices varies slowly over space. This is similar to the behavior of our one-dimensional model. As a result, we can assimilate each of these triangular spring lattices to a continuum sheet, giving rise to a continuum elastic energy
\[E_{d}= \iint\left[\frac{\lambda}{2}\left(\partial_{\alpha}u_{\alpha}^{\uparrow}-2\epsilon\right)^{2}+\mu\left(\frac{\partial_{\alpha}u_{\beta}^{\uparrow}+\partial_{\beta}u_{\alpha}^{\uparrow}}{2}-\epsilon\delta_{\alpha\beta}\right)^{2}\right]\,\mathrm{d}A\] \[+\iint\left[\frac{\lambda}{2}\left(\partial_{\alpha}u_{\alpha}^{\downarrow}+2\epsilon\right)^{2}+\mu\left(\frac{\partial_{\alpha}u_{\beta}^{\downarrow}+\partial_{\beta}u_{\alpha}^{\downarrow}}{2}+\epsilon\delta_{\alpha\beta}\right)^{2}\right]\,\mathrm{d}A\] \[+\iint\frac{\kappa_{c}}{2}(u_{\alpha}^{\uparrow}-u_{\alpha}^{\downarrow})^{2}\,\mathrm{d}A, \tag{4}\]
where the superscripts \({}^{\uparrow}\) and \({}^{\downarrow}\) refer to the yellow and red sheets respectively, and where the summation over repeated indices is implied while \(\delta\) denotes the Kronecker delta. The displacement fields \(\mathbf{u}^{\uparrow/\downarrow}(\mathbf{r})\) of either sheet are computed with respect to the infinite-aggregate, bulk state where all hexagons are regular, a state akin to a row of length-one rectangles in the one-dimensional model. Neither elastic sheet is at rest in this reference state, and \(\mathbf{r}\) is the position vector in this state. The displacement gradient \(\partial_{\alpha}u_{\alpha}^{\uparrow/\downarrow}\) thus plays the same role as the finite difference (\(x_{i+1}^{\uparrow/\downarrow}-x_{i}^{\uparrow/\downarrow}-1\)) of (1). The first integral of (4) is a two-dimensional generalization of the first term of (1), and gives the elastic energy of an isotropic elastic sheet with Lamé coefficients \(\lambda\) and \(\mu\) whose resting state is characterized by an isotropic strain \(\partial_{\alpha}u_{\beta}^{\uparrow}=\epsilon\delta_{\alpha\beta}\). As a reminder, \(\mu\) is the usual shear modulus and the sheet's Young modulus and Poisson ratio are \(Y=4\mu(\lambda+\mu)/(\lambda+2\mu)\) and \(\nu=\lambda/(\lambda+2\mu)\). The integration element \(\mathrm{d}A\) runs over the reference area \(A\). Finally, the last integral captures the energy of the connecting springs. It penalizes any shift between the centers of the yellow and red triangles of a given particle, and therefore any difference in the displacements of the two sheets.
The harmonic form of (4) is valid for small deformations, which can be obtained for any aggregate topology
Figure 2: 2D model of ill-fitting self-assembly. (a) Individual particles have two kind of vertices, which define three interconnected elastic networks. (b) As the particles are aggregated by matching the vertices with the same color, individual particles undergo deformations.
considered here given a small enough \(\epsilon\). Within this limit, the particles' binding energy can be described through
\[E_{b}=\gamma L, \tag{5}\]
where \(\gamma\) is a line tension and \(L\) is the total length of aggregate edge in the reference state, including any internal holes. The parameters of the continuum model are mapped onto those of the 2D particles in the _Supporting Information_, yielding
\[\mu = \frac{\sqrt{3}}{4}k \tag{6a}\] \[\nu = \frac{\sqrt{3}k+2k_{\text{area}}}{3\sqrt{3}k+2k_{\text{area}}}\] (6b) \[\kappa_{c} = 2\sqrt{3}k_{c}\] (6c) \[\gamma = \frac{4}{\sqrt{3}}g \tag{6d}\]
The binding energy of an aggregate depends only on topology \(\mathcal{T}\), which we denote as \(E_{b}(\mathcal{T})\). By contrast, the deformation energy \(E_{d}(\mathcal{T},\{\mathbf{u}^{\uparrow/\downarrow}(\mathbf{r})\})\) depends both on the topology and on the displacement fields \(\mathbf{u}^{\uparrow/\downarrow}(\mathbf{r})\). As described in the beginning of this section, the optimal energy associated with a given aggregate topology is thus
\[E(\mathcal{T})=E_{b}(\mathcal{T})+\min_{\{\mathbf{u}^{\uparrow/\downarrow} \}}E_{d}(\mathcal{T},\{\mathbf{u}^{\uparrow/\downarrow}\}). \tag{7}\]
Once this minimization is performed, finding the most favorable aggregate structure requires finding the topology \(\mathcal{T}\) that minimizes \(E(\mathcal{T})\).
## III Aggregation phase diagram
To establish an aggregation phase diagram, we consider a system with a fixed but large number of particles and ask which binding topology minimizes the total energy of the system. We first use our continuum formalism to compute the energies of an infinite bulk, an elongated fiber and a disk-like aggregate, thus offering a first comparison of the stability of two-, one- and zero-dimensional structures. We then numerically compare the energies of a wider range of putative aggregate structures in our discrete particle model. Finally, we confirm the converging results of these two approaches using numerical simulations devoid of _a priori_ constraints on the aggregate topology.
### Continuum phase diagram
In a 2D space-filling, infinite aggregate, all particles are forced into a regular hexagonal shape. The continuum energy of (4) is then minimal for \(\mathbf{u}^{\uparrow}=\mathbf{u}^{\downarrow}=\mathbf{0}\), yielding an optimal energy per unit reference surface
\[e_{\text{bulk}}=\mu\frac{1+\nu}{1-\nu}4\epsilon^{2}. \tag{8}\]
Denoting the reference area per particle by \(a\), the total energy of a set of \(N\) particles thus reads \(Nae_{\text{bulk}}\) in the large-\(N\) limit. The binding energy is proportional to the perimeter of the aggregate (\(E_{b}\propto\sqrt{N}\)), and is thus negligible.
To determine whether fiber formation is favored over bulk aggregation, we minimize (4) over \(\{\mathbf{u}^{\uparrow/\downarrow}(\mathbf{r})\}\) for an infinite strip of width \(W\) and find (_Supporting Information_)
\[\mathbf{u}^{\uparrow}(x)=-\mathbf{u}^{\downarrow}(x)=\ell\epsilon\left[(1+\nu )\frac{\sinh(x/\ell)}{\cosh(W/2\ell)}\right]\,\hat{\mathbf{x}}, \tag{9}\]
where \(x\) is the direction perpendicular to the fiber and where
\[\ell=\sqrt{\frac{\lambda+2\mu}{2\kappa_{c}}}. \tag{10}\]
The quantity \(\lambda+2\mu\) is known as the P-wave modulus of the sheet, and characterizes the cost of compressing it along one axis without allowing it to deform in the perpendicular direction. Similar to (1) and Fig. 1(b), the profile of (9) implies bulk-like, highly deformed particles in the center of the fiber, while close to the edges the red and yellow sheets gradually relax within a boundary layer of width \(\ell\). Defining the dimensionless line tension \(\Gamma=2\gamma/[(1+\nu)\ell e_{\text{bulk}}]\), the line tension cost associated with the fibers' edges reads \(E_{b}=2\gamma Na/W\) and the mean energy per unit surface reads
\[\frac{e_{\text{fiber}}(W)}{e_{\text{bulk}}}=1-(1+\nu)\frac{\tanh(W/2\ell)}{W/ \ell}+(1+\nu)\frac{\Gamma}{W/\ell}. \tag{11}\]
In the \(\epsilon\to 0\), small-particle-mismatch limit, all deformation energies scale as \(\epsilon^{2}\). Thus the parameter \(\Gamma\) encloses both the \(\gamma\) and the \(\epsilon\) dependence of all self-assembly outcomes studied in this paper. When \(\Gamma<1\), \(e_{\text{fiber}}(W)\) displays a minimum at a finite fiber width \(W^{*}\). This optimal width diverges in the limit \(\Gamma\to 1\), and the corresponding fiber is always more stable than the bulk [Fig. 3(a)]. To understand this stability, consider a semi-infinite aggregate that fills half of the plane. While its energy per unit surface far from its edge is equal to \(e_{\text{bulk}}\), the presence of the edge brings about two energetic contributions. The first is a bare line tension cost \(\gamma\) per unit edge length. The second is the deformation energy gain in the boundary layer, which is of the order of \(e_{\text{bulk}}\) per unit area. Since the width of the layer is \(\ell\), the resulting energy gain per unit edge length is of the order of \(\ell e_{\text{bulk}}\). For \(\gamma\lesssim\ell e_{\text{bulk}}\), forming a new edge thus results in a net energy gain. At the scaling analysis level, this is equivalent to \(\Gamma<1\). This argument implies that infinite bulks can lower their energy by breaking up into fibers in this regime. However, if these fibers are made so narrow that their widths become of order \(\ell\) or smaller, the boundary layers associated with their two edges start to overlap. Such narrow fibers can only claim a fraction of the deformation energy reduction described above. As a result, very narrow
fibers are penalized. This implies the existence of an optimal width \(W^{*}\) that is of order \(\ell\) when \(\Gamma\) is of order one but smaller than one.
In an aggregate whose resting shape is a disk of radius \(R\), the displacement field is given by (_Supporting Information_)
\[\mathbf{u}^{\uparrow}(r)=\ell\epsilon\frac{(1+\nu)I_{1}(r/\ell)}{I_{0}(R/\ell)+ I_{2}(R/\ell)+\nu[I_{0}(R/\ell)-I_{2}(R/\ell)]}\,\hat{\mathbf{r}} \tag{12}\]
and \(\mathbf{u}^{\downarrow}(r)=-\mathbf{u}^{\uparrow}(r)\). Here \(r\) denotes the radial coordinate and the \(I_{\alpha}\)s are the modified Bessel functions of the first kind. Just like Eqs. (2) and (9), (12) predicts an exponential decay of the boundary displacement at the edge of large aggregates. Again, the deformation cost is smaller at the aggregate edge than in its center, yielding an average energy per unit area
\[\frac{e_{\mathrm{disk}}(R)}{e_{\mathrm{bulk}}}=1 -\frac{I_{0}(R/\ell)-I_{2}(R/\ell)}{I_{0}(R/\ell)+I_{2}(R/\ell)+ \nu[I_{0}(R/\ell)-I_{2}(R/\ell)]}\] \[+(1+\nu)\frac{\Gamma}{R/\ell}. \tag{13}\]
This expression displays an optimal finite aggregate size \(R^{*}\) at low values of \(\Gamma\). As in the fiber case, this optimal disk is more stable than the bulk up to values of \(\Gamma\) of order one, although the exact criterion differs due to the curved geometry of the interface [Fig. 3(a)].
Our phase diagram indicates that fibers are more stable than disks for large Poisson ratios, _i.e._, they are favored in the aggregation of incompressible particles (characterized by \(\nu=1\) in 2D). To understand this, we compare a vertical fiber and a disk at \(\nu=1\). Symmetry forbids vertical (orthoradial) displacements in the fiber (disk), allowing only horizontal (radial) displacements. However, no such displacement is possible without violating the incompressibility condition of the yellow or red sheet. As a result, the sheets must remain in their resting states, implying displacements \(u_{x}^{\uparrow}=-u_{x}^{\downarrow}=2\epsilon x\) (\(u_{r}^{\uparrow}=-u_{r}^{\downarrow}=2\epsilon r\)) to lowest order in \(\epsilon\) with respect to the fictitious bulk reference state. All the deformation cost thus comes from the connecting springs. The resulting connecting spring energy per unit area reads \(8\kappa\epsilon^{2}x^{2}\) (\(8\kappa\epsilon^{2}r^{2}\)), proportional to the square of the distance from the center of the aggregate. This is where the difference between fibers and disks manifests itself. The edge of
Figure 3: Large binding energies favor bulk aggregation and incompressible particles tend to form fibrous aggregates. (a) Analytical phase diagram derived from Eqs. (8), (11) and (13). Fiber widths always diverge when approaching the transition to the bulk. Conversely, in disks the radius discontinuously jumps from a finite value to \(+\infty\) at the transition. The point where the three phases meet is \(\Gamma=1\), \(\nu=1/2\). (b) Phase diagram based on the numerical comparison of the energies of the aggregates shown in Fig. 4(a) for \(\ell=5\). The fiber region of the phase diagram is larger than in the continuum model, and the disk radius \(R\) again jumps discontinuously at the transition with the bulk. Smaller values of \(\ell\) lead to a very similar phase diagram, albeit with an extended “bulk with holes” region (_Supporting Information_). Monte-Carlo simulations for the conditions indicated by the small squares are shown as small panels, and are consistent with the phase diagram. The bottom line of snapshots uses 300 particles and a box of \(30\times 30\) sites, while the others use 200 particles and a box of \(60\times 60\) sites.
a fiber is just as long as its centerline, while the center of a disk is much smaller than its perimeter. As a result, a smaller proportion of connecting springs are highly extended in the fiber than in the disk, making the former energetically cheaper. Conversely, in the limit of low Poisson ratio \(\nu\to 0\), the gain per unit length of the straight, fiber-like and of the curved, disk-like boundary layers become identical. Forming such boundary layers is favorable overall for \(\Gamma<1\). Since the disk tends to have more boundary layer per unit area than the fiber, it is more favorable in this limit.
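The same numerical comparison extends to disks through Eq. (13), which yields the phase selection of Fig. 3(a) at any given \((\Gamma,\nu)\); the values below correspond to a single illustrative point of the phase diagram, and e_fiber repeats the function of the previous sketch.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import iv                    # modified Bessel functions I_alpha

nu, Gamma = 0.4, 0.5                            # illustrative point of the phase diagram

def e_fiber(w):                                 # Eq. (11), w = W / ell, in units of e_bulk
    return 1 - (1 + nu) * np.tanh(w / 2) / w + (1 + nu) * Gamma / w

def e_disk(r):                                  # Eq. (13), r = R / ell, in units of e_bulk
    i0, i2 = iv(0, r), iv(2, r)
    return 1 - (1 + nu) * (i0 - i2) / (i0 + i2 + nu * (i0 - i2)) + (1 + nu) * Gamma / r

fiber = minimize_scalar(e_fiber, bounds=(1e-3, 60), method="bounded")
disk = minimize_scalar(e_disk, bounds=(1e-3, 60), method="bounded")
energies = {"bulk": 1.0, "fiber": fiber.fun, "disk": disk.fun}
print(min(energies, key=energies.get), f"W*/ell={fiber.x:.1f}", f"R*/ell={disk.x:.1f}")
```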
### Comparison of pre-made discrete aggregate structures
The phase diagram of Fig. 3(a) focuses on continuum sheets, leaving open the question of whether the formation of holes within the aggregate or regimes where the particle size is comparable to the boundary layer thickness could result in different aggregation behaviors. To assess its robustness to these effects, we numerically implement our discrete particle model in a computer. We consider several aggregates with predetermined topologies including periodic bulks with and without holes, fibers of various widths as well as hexagonal aggregates approximating disks of different radii [Fig. 4(a)]. Isolated particles are taken into account as disks with one particle. We use a conjugate gradient algorithm to minimize the energy of each aggregate over the positions of all the particle corners, analogous to the minimization over the deformation field in (7). We build three different fiber structures by cutting the bulk along distinct directions. One of the fibers is left-right asymmetric and spontaneously curves, although that curvature vanishes in the large-\(W\) limit. In practice we obtain the energy of infinitely long fibers by extrapolating from long ones with increasing lengths.
To compute the aggregation phase diagram, we minimize the deformation energy of aggregates with width (radii) ranging from 1 to 35 (25) for \(\ell=5\) and for values of \(\nu\) ranging from 0 to 1. For each value of \(\Gamma\) in Fig. 3(b) we select the aggregate with the lowest total energy. Bulks with holes, which in our model have zero deformation energy, dominate the assembly only at very small surface tensions. The rest of the phase diagram is essentially identical to the continuum one, except for an expansion of the fiber region against both bulks and disks. This may be due to the increased stability of curved fibers, which are always more stable than their straight counterparts.
To assess the consistency of the morphology of finite-\(\ell\) aggregates with the continuum (\(\ell\to\infty\)) expectation of Fig. 3(a), we numerically determine the most favorable fiber width and disk radius for a range of \(\ell\) and \(\Gamma\) in Fig. 4(b). While the two approaches are guaranteed to agree only in the large-\(\ell\) limit, in practice the continuum approximation yields very accurate predictions all the way down to values of \(\ell\) equal to the particle size. This indicates that the boundary layer physics revealed by our continuum model remains an excellent qualitative and quantitative description of the aggregation process even when the stiffness of the connecting springs \(k_{c}\) is comparable to that of the others (\(k_{c}\lesssim k\Leftrightarrow\ell\gtrsim 1\)). The length \(\ell\) thus provides a robust tool to predict the typical number of particles in the cross-section of a fiber or a disk.
### Monte-Carlo validation of the phase diagram
As a final validation of our phase diagram, we remove any restriction on the aggregate's structure and use a Monte-Carlo algorithm to evolve its topology. We simulate a triangular lattice where each site can be empty or occupied by a particle. We start with randomly placed particles, and attempt Monte-Carlo moves where a
Figure 4: Comparison of the energies of a collection of aggregates with pre-determined topologies. (a) List of the aggregate topologies included in the trial: bulk with holes, three types of fibers obtained by piling the particles in three different ways, hexagonal aggregates (including single particles) and bulk. The white lines and shading in the bulk show the three different types of cuts used to produce the fibers shown on the left. In practice, the isolated particles and uncurved fibers are never the most stable structures. We compare a broad range of fiber widths and hexagonal aggregates, despite representing only a few here. (b) _left:_ When measured in units of number of particles, the optimal fiber width \(W^{*}\) (computed here for \(\nu=0.95\)) and disk radius \(R^{*}\) (\(\nu=0.2\)) strongly depend on the particles’ elastic properties through \(\ell\). _Right:_ Rescaling both lengths by \(\ell\) however leads to an excellent collapse with the analytical prediction (dashed red line), even for small \(\ell\).
randomly chosen particle is moved to a randomly chosen empty site. We compute the optimal energy of the resulting new topology using our conjugate gradient method. The move is accepted according to a Metropolis criterion with temperature \(T\). Since the deformation energy is optimized before the application of the Metropolis criterion, this temperature applies only to the system's topological degrees of freedom. To look for an approximation of the system's topological ground state, we perform a simulated annealing procedure whereby \(T\) is slowly lowered from a large value to zero over the course of the simulation.
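A skeletal version of this procedure is sketched below; for brevity, the elastic relaxation of each trial topology is replaced by a placeholder energy that only counts bound neighbor pairs, whereas the actual simulations minimize the full deformation energy by conjugate gradient at every move.

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, steps, g = 20, 60, 20000, 1.0
occ = np.zeros((L, L), dtype=bool)
occ.flat[rng.choice(L * L, size=N, replace=False)] = True
nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]   # triangular lattice on a sheared grid

def energy(o):
    # Stand-in for E(T) of Eq. (7): binding reward only, with periodic boundaries.
    return -g * sum(np.sum(o & np.roll(np.roll(o, di, 0), dj, 1)) for di, dj in nbrs) / 2

E = energy(occ)
for step in range(steps):
    T = 2.0 * (1 - step / steps) + 1e-3                       # linear annealing schedule
    src = tuple(rng.choice(np.argwhere(occ)))                 # random occupied site
    dst = tuple(rng.choice(np.argwhere(~occ)))                # random empty site
    occ[src], occ[dst] = False, True
    E_new = energy(occ)
    if E_new > E and rng.random() >= np.exp(-(E_new - E) / T):
        occ[src], occ[dst] = True, False                      # reject and undo the move
    else:
        E = E_new
print(E, int(occ.sum()))
```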
The computational cost and limited particle number in our simulation make it difficult to construct a full Monte-Carlo phase diagram. Instead, we simulate select parameter regimes to validate our main findings. Specifically, we perform three line scans at \(\ell=5\) whose final aggregation states are shown in the small panels of Fig. 3(b). First, a range of increasing \(\Gamma\) at \(\nu=0.9\) shows the dominance of fibers at large Poisson ratios, and the overall tendency of the fiber widths to increase with increasing line tension. A second horizontal scan at \(\nu=0.4\) shows disk-like aggregates whose radii increase with increasing \(\Gamma\). Finally, a vertical scan at fixed \(\Gamma=0.5\) shows a transition between disks and elongated aggregates at the predicted value of \(\nu\). The shapes of the aggregates resulting from the simulations are somewhat variable, as can be assessed from the \(\Gamma=0.5\), \(\nu=0.9\) condition which belongs to two different line scans and for which the outcomes of two independent simulations are shown in Fig. 3(b). Our results show that despite this limitation and the relatively small number of particles in our system, our phase diagram accurately predicts the overall outcome of an unconstrained assembly.
## IV Extension to random particles
The discrete model of Fig. 2 has been specifically designed with the continuum approximation in mind, making it unclear whether our results apply to more generic particle types. To test whether this is the case, we define a much broader class of hexagonal particles with the same shape at rest and the same binding rules but with a considerably more generic deformation energy. We parametrize the shape of each particle by nine distances \(d_{1}\), \(d_{2},\ldots,d_{9}\) defined in Fig. 5(a). The vector \(\mathbf{d}\) of these lengths completely characterizes a particle's shape, and we denote its value at rest by \(\mathbf{d}_{0}\). The deformation energy of one particle is an arbitrary quadratic form of the deviation from that state:
\[e_{d}=\frac{1}{2}(\mathbf{d}-\mathbf{d}_{0})^{T}\cdot\mathbf{M}\cdot(\mathbf{d }-\mathbf{d}_{0}). \tag{14}\]
In the following we draw the matrix \(\mathbf{M}\) from a random distribution that ensures that it is positive semi-definite and three-fold symmetric (_Supporting Information_). Such particles allow not only for differences in the elastic constants of the colored springs of Fig. 2(a), but also for new couplings between them. For instance, one of these new couplings dictates that compressing a yellow spring makes the red spring to its right shrink or extend. It also opens regimes where the coupling springs are not much softer than the others.
We first determine whether the length \(\ell\) in a random particle is determined by the same ratio of uniaxial-compression to triangle-shifting moduli as in (10). To access a wide range of \(\ell\) values, we randomly draw a large number (\(3\times 10^{7}\)) of instances of matrix \(\mathbf{M}\). For each instance, we operationally define \(\ell\) as the thickness of the elastic boundary layer in a simulation of a semi-infinite aggregate [Fig. 5(b)]. We then relate this measured value to the properties of a single particle, by numerically measuring random-particle proxies of the two moduli \(\lambda+2\mu\)
and \(\kappa_{c}\) and using (10) to compute a predicted boundary layer thickness \(\ell^{\text{pred}}\) (_Supporting Information_). As shown in Fig. 5(c), we find a good correlation between \(\ell\) and \(\ell^{\text{pred}}\) for a randomly selected subsample of \(10^{5}\) matrices. This indicates that frustration at the edge of an aggregate relaxes in similar ways in simple and random particles, with the two subtriangles of the particles progressively shifting relative to each other. Similar to the one-dimensional model of Fig. 1, this shift causes a restoring force (denoted \(k_{c}\delta_{i}\) in the 1D model) which is balanced by the stiffness of the particles' other degrees of freedom (associated with the stiffness \(k\) in the 1D model). Equation (10), which we use to compute \(\ell^{\text{pred}}\), uses the P-wave modulus \(\lambda+2\mu\) as a proxy for this stiffness. While this value is an upper bound for the cost of deforming the particles, in practice our random particles may deform in more complex, less costly ways. Such deformations would result in a lower effective stiffness, and could explain why \(\ell^{\text{pred}}\) tends to overestimate the actual value of \(\ell\).

Figure 5: The aggregation behavior of particles with random elasticity is accurately captured by our continuum theory. (a) The shape of a particle is fully characterized by a tuple of nine distances \((d_{1},\ldots,d_{9})\). (b) To numerically compute the boundary layer thickness \(\ell\), we simulate a half-plane full of particles (colored here by absolute deformation energy). We then perform an exponential fit of the excess elastic energy relative to the bulk as a function of distance from the edge, shown here as a line on a lin-log plot. Finally we define \(\ell\) as the associated decay length. (c) The random particle boundary layer thickness \(\ell\) measured using this protocol is well predicted by the value \(\ell^{\mathrm{pred}}\) inferred from an extrapolation of (10) (_red line_). (d) Radius and width of the best disk and fibers obtained for random particles. The continuum predictions are shown as red lines (the position of the line does not depend on \(\nu\) for fibers).
To determine whether the boundary layer thickness \(\ell\) controls the aggregate size in the same way as in our simple model, we study a randomly selected subsample of \(10^{3}\) matrices \(\mathbf{M}\) from our total sample. We use the same discrete aggregate procedures as in Fig. 4(b) to determine which fiber/disk width is the most favorable for \(\Gamma\) ranging from \(0\) to \(\Gamma_{\text{max}}\), where we define \(\Gamma_{\text{max}}\) as the critical line tension where the bulk becomes the most favorable structure. Both fibers and disks tend to become larger with increasing \(\Gamma\) despite a large dispersion of the data similar to that observed in Fig. 4(b) (_Supporting Information_). We however show in Fig. 5(d) that just as in the case of simple particles, this dispersion is largely abolished by rescaling the aggregate size by the boundary layer size \(\ell\). This demonstrates that the boundary layer still controls the physics of the assembly in the random particle case. Moreover, the resulting radii and width distribution are clearly centered around the values of \(W^{*}\) and \(R^{*}\) predicted by the continuum limit.
We finally assess the applicability of the phase diagrams of Fig. 3 to random particles by correlating the most favorable aggregate type with a suitable measure of the "individual sheet Poisson ratio" of the analytical theory. We define this measure by decoupling the "yellow" and "red" subtriangles of our new particles from each other (_i.e._, set \(M_{ij}=0\) for all \((i,j)\notin[1,3]^{2}\cup[4,6]^{2}\)), composing a lattice out of each and numerically computing their separate Poisson ratios. We use the average of these two values as our \(\nu\). We then segregate our \(3\times 10^{7}\) instances of the matrix \(M\) into three groups with \(\ell<1\), \(1<\ell<1.5\) and \(1.5<\ell\). We randomly select a few thousand particles within each group to obtain three near-uniform distributions of Poisson ratios. Fig. 6 shows the type and width of the aggregates obtained in each case. Fibers form at large \(\nu\) and \(\Gamma\) in all three groups, thus demonstrating the robust influence of these parameters. Disks also form in the expected parameter regimes, although they tend to be replaced by bulks with holes for the smallest values of \(\ell\). Finally, fibers are even more predominant here than in our initial model, illustrating the broad relevance of our fiber formation mechanism upon ill-fitting self-assembly.
## Discussion
Protein aggregation in disease involves the self-assembly of deformable, ill-fitting sticky particles. Compact aggregates of such particles display a deformation gradient between a relatively unconstrained edge and a strongly deformed, frustrated core. This deformed core is energetically costly, while the aggregate's surface tension implies that its edge is also costly. As a result, the cheapest part of the aggregate lies in between the core and the edge, in the shallow bulk region. The minimization of the aggregate's energy thus requires its structure to comprise as much of this shallow bulk as possible while keeping both core and surface small. Here we show that for a rather general model of deformable particles, this results in aggregate size limitation and emergent anisotropy. This geometrically nontrivial optimization is _a priori_ strongly dependent on the elastic properties of the particles. We nevertheless identify two surprisingly simple particle-level predictors of its outcome, namely an elastic screening length \(\ell\) and particle incompressibility.
In our model, the shallow bulk manifests as an elastic boundary layer with size \(\ell\). This is reminiscent of the size limitation mechanisms at work in ribbons self-assembled out of planar materials with an intrinsic negative Gaussian curvature [33] or specially designed warped jigsaw puzzle particles [24]. Particles with a frustrated continuum spin-like degree of freedom and coupled incommensurate lattices in the absence of phase slips also give rise to a boundary layer [34; 35; 36]. In contrast with these specific particle geometries however, here this behavior emerges in a wide variety of particles with randomly chosen elastic properties. This suggests that the mechanism may also apply in packings of complex ill-fitting proteins. Our observation of a connection between soft deformation modes at the particle level, large values of \(\ell\) and consequently large self-limited aggregates thus suggests an analogy with allosteric proteins, which mechanically transmit a signal by undergoing a concerted conformational change along a soft deformation mode [37; 38; 39].
While extended boundary layers appear to require some particle deformation modes to be much softer than others [Fig. 5(c)], not all particle-level soft modes result in one (Fig. S5). Indeed, once embedded in an aggregate, an initially soft deformation mode may couple to and be stiffened by the presence of neighboring particles. While our six-vertices, two-dimensional particles comprise only relatively simple soft modes compatible with collective deformations [namely the triangle-shifts illustrated in Fig. S4(a-b)], more complex objects are likely to allow many more. For instance, generalizations of our model in three dimensions, where the qualitative physics
highlighted here still applies, could additionally involve frustration and soft modes associated with chiral particle twisting. Further investigations are required to identify the geometrical requirements for a soft mode to be compatible with collective deformations, which in turn allows the build-up of a thick boundary layer. These requirements will shed light on the specific elastic parameters that most influence the particles' aggregation behavior, which could include the formation of clusters, fibers or sheets in three dimensions. In the language of proteins, this would allow us to assess the likelihood for a given allosteric mode to dictate the size of a frustrated protein aggregate. This would constitute a very general tool for predicting the structure of an aggregate from the individual properties of its proteins.
The three-fold symmetry of our model particles implies that their propensity to form fibers is an emergent property as opposed to an intrinsic preference for uniaxial aggregation. This breaking of symmetry is reminiscent of the strain-induced, elongated structures formed during frustrated epitaxial growth [40; 41; 42]. Both our most simple model and our generic, random-elasticity particles indicate that fiber formation is most advantageous in incompressible particles. This behavior is not specific to hexagonal particles, and we show in the _Supporting Information_ that an alternative model based on triangular objects displays an essentially identical phase diagram. These results suggest that our mechanism is more relevant for dense globular proteins, which tend to be largely incompressible, than for proteins with fluffy intrinsically disordered domains. Globular proteins are indeed known to form fibers in, _e.g._, sickle cell anemia and amyotrophic lateral sclerosis [10; 8; 11]. The binding free energies involved in these assemblies are typically at least one order of magnitude larger than the thermal energy \(k_{B}T\), justifying the analogy with our zero-temperature model. The particle deformations there moreover remain modest, consistent with the analytically tractable, small-frustration (\(\epsilon\ll 1\)) regime studied here. In more asymmetric particles than those studied here, the mechanism outlined in our work could be complemented by fiber-formation mechanisms based, _e.g._, on the presence of two specific binding sites on either side of the particle. Kinetic effects such as diffusion-limited aggregation, which hampers the formation of bulky aggregates, may also favor fibers.
Beyond proteins, the principles outlined here could be harnessed to control the assembly of artificial nano-objects. In DNA origami, soft deformation modes can be engineered to control aggregate size in a simple one-dimensional chain [25]. Two- or three-dimensional extensions of such designs should be prone to frustration-induced fibrillation. Fibrous morphologies also emerge in DNA origami systems into which this feature is not intentionally designed [43]. While systems involving rigid nanoparticles are less straightforwardly mapped onto an elastic continuum than DNA origami, fibrous morphologies have been observed in packings of tetrahedral particles [44] and successfully rationalized with an elastic model [45] based on an elastic frustration originating from a metric incompatibility. Our study does not rely on this very strong, somewhat specialized type of frustration and thus demonstrates that it is not required for frustration-induced fiber formation. Rigid colloids with short-range attractive and long-range repulsive interactions also display frustration-induced fibrous structures [46]. In this case, the importance of the distance dependence of the particle interaction profile falls outside of the scope of our small-frustration (\(\epsilon\ll 1\)) formalism. More generally, frustration build-up in the presence of nonlinear elasticity and strain-induced particle unbinding can lead to the emergence of new aggregate patterns that have only begun to be explored [47] and could play a crucial role in the physical implementation of the principles described here.

Figure 6: Random particle aggregation diagram showing good agreement with the analytical results of Fig. 3. Color coding is as in Fig. 3(b). Each horizontal line in the diagrams corresponds to an instance of the elasticity matrix \(\mathbf{M}\). Although the boundaries between the different regions of the diagram fluctuate due to the random origin of \(\mathbf{M}\), together they outline very consistent regions where bulk with holes, disks, fibers and bulks dominate. The diagrams with larger values of \(\ell\) cover a more restricted range of \(\nu\) due to the relative scarcity of particles with both large \(\ell\) and small \(\nu\) in matrices \(\mathbf{M}\) produced by our random generation procedure. See _Supporting Information_ for details.
###### Acknowledgements.
ML was supported by Marie Curie Integration Grant PCIG12-GA-2012-334053, "Investissements d'Avenir" LabEx PALM (ANR-10-LABX-0039-PALM), ANR grants ANR-15-CE13-0004-03, ANR-21-CE11-0004-02, ANR-22-ERCC-0004-01 and ANR-22-CE30-0024-01, as well as ERC Starting Grant 677532. ML's group belongs to the CNRS consortium AQV. M. M.
* Hagan and Grason [2021]M. F. Hagan and G. M. Grason, Equilibrium mechanisms of self-limiting assembly, Reviews of Modern Physics **93**, 025008 (2021).
* Spivack _et al._ [2022]I. R. Spivack, D. M. Hall, and G. M. Grason, Stress accumulation versus shape flattening in frustrated, warped-jigsaw particle assemblies, New Journal of Physics **24**, 063023 (2022).
* Berengut _et al._ [2020]J. F. Berengut, C. K. Wong, J. C. Berengut, J. P. K. Doye, T. E. Ouldridge, and L. K. Lee, Self-limiting polymerization of DNA origami subunits with strain accumulation, ACS Nano **14**, 17428 (2020).
* Videbaek _et al._ [2022]T. E. Videbaek, H. Fang, D. Hayakawa, B. Tyukodi, M. F. Hagan, and W. B. Rogers, Tiling a tubule: how increasing complexity improves the yield of self-limited assembly, J. Phys. Condens. Matter **34** (2022).
* Claessens _et al._ [2008]M. M. A. E. Claessens, C. Semmrich, L. Ramos, and A. R. Bausch, Helical twist controls the thickness of F-actin bundles, Proc. Natl. Acad. Sci. U.S.A. **105**, 8819 (2008).
* Adamcik _et al._ [2010]J. Adamcik, J.-M. Jung, J. Flakowski, P. De Los Rios, G. Dietler, and R. Mezzenga, Understanding amyloid aggregation by statistical analysis of atomic force microscopy images, Nat. Nanotechnol. (2010).
* Brown _et al._ [2014]A. I. Brown, L. Kreplak, and A. D. Rutenberg, An equilibrium double-twist model for the radial structure of collagen fibrils, Soft Matter **10**, 8500 (2014).
* Hall _et al._ [2016]D. M. Hall, I. R. Bruss, J. R. Barone, and G. M. Grason, Morphology selection via geometric frustration in chiral filament bundles, Nat. Mater. **15**, 727 (2016).
* Hall and Grason [2017]D. M. Hall and G. M. Grason, How geometric frustration shapes twisted fibres, inside and out: Competing morphologies of chiral filament assembly, Interface Focus **7**, 20160140 (2017).
* Lenz and Witten [2017]M. Lenz and T. A. Witten, Geometrical frustration yields fibre formation in self-assembly, Nat. Phys. **13**, 1100 (2017).
* Aggeli _et al._ [2001]A. Aggeli, I. A. Nyrkova, M. Bell, R. Harding, L. Carrick, T. C. McLeish, A. N. Semenov, and N. Boden, Hierarchical self-assembly of chiral rod-like molecules as a model for peptide beta-sheet tapes, ribbons, fibrils, and fibers, Proc. Natl. Acad. Sci. U.S.A. **98**, 11857 (2001).
* Meiri and Efrati [2022]S. Meiri and E. Efrati, Cumulative geometric frustration and superextensive energy scaling in a nonlinear classical \(xy\)-spin model, Phys. Rev. E **105**, 024703 (2022).
* Hackney _et al._ [2023]N. W. Hackney, C. Amey, and G. M. Grason, Dispersed, condensed and self-limiting states of geometrically frustrated assembly, arXiv, 2303.02121 (2023).
* Bak [1982]P. Bak, Commensurate phases, incommensurate phases and the devil's staircase, Rep Prog Phys **45**, 587 (1982).
* Thirumalai _et al._ [2019]D. Thirumalai, C. Hyeon, P. I. Zhuravlev, and G. H. Lorimer, Symmetry, Rigidity, and Allosteric Signaling: From Monomeric Proteins to Molecular Machines, Chemical Reviews **119**, 6788 (2019).
* Bray and Duke [2004]D. Bray and T. Duke, Conformational spread: the propagation of allosteric states in large multiprotein complexes, Annu Rev. Biophys. Biomol. Struct. **33**, 53 (2004).
* Bray [2013]D. Bray, The propagation of allosteric states in large multiprotein complexes, J. Mol. Biol. **425**, 1410 (2013).
* Tersoff and Tromp [1993]J. Tersoff and R. M. Tromp, Shape transition in growth of strained islands: Spontaneous formation of quantum wires, Physical Review Letters **70**, 2782 (1993).
* Spencer _et al._ [1991]B. J. Spencer, P. W. Voorhees, and S. H. Davis, Morphological instability in epitaxially strained dislocation-free solid films, Physical Review Letters **67**, 3696 (1991).
* Eaglesham and Cerullo [1990]D. J. Eaglesham and M. Cerullo, Dislocation-free Stranski-Krastanow growth of Ge on Si(100), Physical Review Letters **64**, 1943 (1990).
* Tikhomirov _et al._ [2017]G. Tikhomirov, P. Petersen, and L. Qian, Programmable disorder in random DNA tilings, Nat. Nanotechnol. **12**, 251 (2017).
* Yan _et al._ [2020]J. Yan, W. Feng, J.-Y. Kim, J. Lu, P. Kumar, Z. Mu, X. Wu, X. Mao, and N. A. Kotov, Self-assembly of chiral nanoparticles into semiconductor helices with tunable near-infrared optical activity, Chem. Mater. **32**, 476 (2020).
* Serafin _et al._ [2021]F. Serafin, J. Lu, N. Kotov, K. Sun, and X. Mao, Frustrated self-assembly of non-euclidean crystals of nanoparticles, Nat. Commun. **12**, 4925 (2021).
* Sciortino _et al._ [2005]F. Sciortino, P. Tartaglia, and E. Zaccarelli, One-Dimensional Cluster Growth and Branching Gels in Colloidal Systems with Short-Range Depletion Attraction and Screened Electrostatic Repulsion, The Journal of Physical Chemistry B **109**, 21942 (2005).
* Hall _et al._ [2023]D. M. Hall, M. J. Stevens, and G. M. Grason, Building blocks of non-Euclidean ribbons: size-controlled self-assembly via discrete frustrated particles, Soft Matter **19**, 858 (2023).
# Supplementary material
Collective deformation modes promote fibrous self-assembly in protein-like particles
Hugo Le Roy
[email protected] Universite Paris-Saclay, CNRS, LPTMS, 91405, Orsay, France Institute of Physics, Ecole Polytechnique Federale de Lausanne--EPFL, 1015 Lausanne, Switzerland
M. Mert Terzi
Universite Paris-Saclay, CNRS, LPTMS, 91405, Orsay, France PMMH, CNRS, ESPCI Paris, PSL University, Sorbonne Universite, Universite de Paris, F-75005, Paris, France
Martin Lenz
[email protected] Universite Paris-Saclay, CNRS, LPTMS, 91405, Orsay, France PMMH, CNRS, ESPCI Paris, PSL University, Sorbonne Universite, Universite de Paris, F-75005, Paris, France
## S1 Mapping between discrete and continuous parameters
In this section, we give the relationship between the parameters of our simple (non-random) discrete particle model and the continuum model. The former has three parameters: two spring constants \(k\) and \(k_{c}\), and an area stiffness \(k_{\rm area}\). The continuum limit of this model comprises two coupled elastic sheets corresponding to the colors yellow and red in Fig. 2 of the main text, which we respectively denote by the \({}^{\uparrow}\) and \({}^{\downarrow}\) symbols. We represent the elasticity of each sheet by a shear modulus \(\mu\) and a Poisson ratio \(\nu\). The elastic coupling between the sheets is parametrized by the coupling constant \(\kappa_{c}\). Here we determine \(\mu\), \(\nu\) and \(\kappa_{c}\) in terms of \(k\), \(k_{c}\) and \(k_{\rm area}\).
We first map the energy of a single triangular spring network in the discrete particle model onto the energy of a single sheet in the continuum model. The corresponding continuum sheet energy density reads
\[f_{\rm sheet}(\mu,\nu,\{u_{\alpha\beta}\})=\mu\left(\frac{\nu}{1-\nu}u_{ \alpha\alpha}^{2}+u_{\alpha\beta}u_{\alpha\beta}\right),\] (S1)
where \(u_{\alpha\beta}\) denotes the linearized strain tensor and summation over repeated indices is implied. This strain is expressed with respect to the resting configuration of the sheet of interest, and not with respect to the bulk configuration as in Eq. (4) of the main text. The connection between the two conventions is given by the substitutions
\[u_{\alpha\beta} =\frac{\partial_{\alpha}u_{\beta}^{\uparrow}+\partial_{\beta}u_{ \alpha}^{\uparrow}}{2}-\epsilon\delta_{\alpha\beta} \qquad\text{for the yellow sheet}\] (S2a) \[u_{\alpha\beta} =\frac{\partial_{\alpha}u_{\beta}^{\downarrow}+\partial_{\beta}u_ {\alpha}^{\downarrow}}{2}+\epsilon\delta_{\alpha\beta} \qquad\text{for the red sheet}.\] (S2b)
To lowest order in \(\epsilon\), the expression of the energy density of (S1) does not depend on whether it is defined as the energy per unit surface of the sheet in its own resting state or in the bulk state defined in the main text.
The energy of the discrete triangular spring network is the sum of two parts: the springs and areal energies. The springs part yields the following contribution to the sheet energy density [1]
\[f_{\rm spring}(\{u_{\alpha\beta}\})=f_{\rm sheet}(\mu_{0},\nu_{0},\{u_{ \alpha\beta}\})\qquad\text{with}\qquad\mu_{0}=\frac{\sqrt{3}}{4}k,\qquad\nu_ {0}=\frac{1}{3}.\] (S3)
Now focusing on the areal stiffness part, we find that its contribution to the total sheet energy reads
\[F_{\rm area}=\frac{1}{2}k_{\rm area}\sum_{\rm triangles}\frac{(A-A_{0})^{2}}{ A_{0}}\approx\frac{k_{\rm area}A_{0}}{2}\sum_{\rm triangles}(u_{\alpha\alpha})^{2},\] (S4)
where \(A_{0}\) is the area of a triangle and where we have used the small-deformation approximation of the relative area change as the trace of the strain tensor: \((A-A_{0})/A_{0}\approx u_{\alpha\alpha}\). The corresponding energy density in the continuum limit then reads
\[f_{\rm area}=\frac{k_{\rm area}}{4}(u_{\alpha\alpha})^{2},\] (S5)
where the additional factor of \(1/2\) comes from the fact that only half of the triangles are endowed with an areal stiffness in our model. Adding the two contributions of Eqs. (S3) and (S5), we find a total sheet energy density of the form (S1) with
\[\mu=\frac{\sqrt{3}}{4}k,\qquad\nu=\frac{\sqrt{3}k+2k_{\text{area}}}{3\sqrt{3}k+2 k_{\text{area}}}.\] (S6)
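As a quick consistency check (our own remark, not part of the original derivation), the two limits of the areal stiffness in (S6) behave as expected:

\[\nu\big|_{k_{\text{area}}=0}=\frac{\sqrt{3}k}{3\sqrt{3}k}=\frac{1}{3}=\nu_{0},\qquad\lim_{k_{\text{area}}\to\infty}\nu=\lim_{k_{\text{area}}\to\infty}\frac{\sqrt{3}k+2k_{\text{area}}}{3\sqrt{3}k+2k_{\text{area}}}=1,\]

so the bare triangular spring network of (S3) is recovered for vanishing areal stiffness, while a large areal stiffness drives each sheet towards incompressibility.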
The second contribution to the total energy stems from the coupling springs. In the discrete particle model, the total coupling energy reads
\[F_{c}=\frac{k_{c}}{2}\sum_{\text{coupling springs}}(r-r_{0})^{2},\] (S7)
where \(r\) and \(r_{0}\) are the deformed and rest length of the coupling springs, respectively. To lowest order in displacement, the change of length of a spring whose direction is given by the unit vector \(\hat{\mathbf{s}}\) reads
\[r-r_{0}\sim\left(\mathbf{u}^{\uparrow}-\mathbf{u}^{\downarrow}\right)\cdot \hat{s}=|\mathbf{u}^{\uparrow}-\mathbf{u}^{\downarrow}|\,\cos(\theta_{u}- \theta_{s}),\] (S8)
where \(\theta_{u}\) and \(\theta_{s}\) are the angles that the vectors \(\mathbf{u}^{\uparrow}-\mathbf{u}^{\downarrow}\) and \(\hat{s}\) respectively make with the horizontal axis. As required for the continuum limit approach, we assume that the displacement fields \(\mathbf{u}^{\uparrow}\), \(\mathbf{u}^{\downarrow}\) are homogeneous. This yields a total energy per spring
\[F_{c}=\frac{k_{c}}{2}\sum_{\text{springs }s}\left(\mathbf{u}^{\uparrow}-\mathbf{u}^{\downarrow}\right)^{2}\cos^{2}(\theta_{u}-\theta_{s})=\frac{N_{s}k_{c}}{2}\left(\mathbf{u}^{\uparrow}-\mathbf{u}^{\downarrow}\right)^{2}\left\langle\cos^{2}(\theta_{u}-\theta_{s})\right\rangle,\] (S9)
where \(N_{s}\) is the total number of springs in the system and where the average \(\left\langle\cdot\right\rangle\) is performed over all six possible orientations of the coupling springs, namely \(\theta_{s}=i\pi/3\) with \(i=1..6\). This averaging of the square cosine yields a result that is independent of \(\theta_{u}\). Finally, dividing \(F_{c}\) by the total area of the system and noting that there are six coupling springs per hexagon, we find a coupling energy per unit area
\[f_{c}=\frac{2\sqrt{3}k_{c}}{2}\left(\mathbf{u}^{\uparrow}-\mathbf{u}^{ \downarrow}\right)^{2}.\] (S10)
Identifying this expression to the last term of Eq. (4) of the main text, we thus find a continuum coupling constant
\[\kappa_{c}=2\sqrt{3}k_{c}.\] (S11)
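The angular average quoted above can be made explicit (a short check we add for completeness): with \(\theta_{s}=i\pi/3\),

\[\left\langle\cos^{2}(\theta_{u}-\theta_{s})\right\rangle=\frac{1}{6}\sum_{i=1}^{6}\cos^{2}\!\Big(\theta_{u}-\frac{i\pi}{3}\Big)=\frac{1}{2}+\frac{1}{12}\sum_{i=1}^{6}\cos\!\Big(2\theta_{u}-\frac{2i\pi}{3}\Big)=\frac{1}{2},\]

since the six phases \(2i\pi/3\) make the second sum vanish; this is the \(\theta_{u}\)-independent factor \(1/2\) that enters (S10).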
In the main text, we use Eqs. (S6) and (S11) to compute values of \(\nu\), \(\Gamma\) and \(\ell\) associated with discrete particles and compare our numerical results to continuum predictions in Figs. 3 and 4.
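For convenience, this mapping can be gathered into a few lines of code. The helper below is our own illustrative sketch (the names are ours, and the use of (S27) for \(\ell\) anticipates Sec. S7; it is an assumption consistent with Eq. (10) of the main text).

```python
import math

def continuum_parameters(k, k_c, k_area):
    """Map the discrete spring constants onto the continuum parameters.

    Returns the shear modulus mu, Poisson ratio nu and coupling constant
    kappa_c of Eqs. (S6) and (S11), plus the boundary-layer thickness ell
    evaluated from (S27).
    """
    mu = math.sqrt(3) / 4 * k                                              # (S6)
    nu = (math.sqrt(3) * k + 2 * k_area) / (3 * math.sqrt(3) * k + 2 * k_area)
    kappa_c = 2 * math.sqrt(3) * k_c                                       # (S11)
    lam = mu * nu / (1 - nu)                                               # first Lame coefficient
    ell = math.sqrt((lam + 2 * mu) / (2 * kappa_c))                        # (S27)
    return mu, nu, kappa_c, ell

# Example: soft coupling springs give a thick boundary layer.
print(continuum_parameters(k=1.0, k_c=0.01, k_area=0.0))
```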
## S2 Continuum limit computations
Here we derive the displacement fields of Eqs. (9) and (12) of the main text from the energy functional presented in Eq. (4) of the main text. In Sec. S2.1 we derive the general form of the force balance equations associated with the yellow and red sheets. In Sec. S2.2, we solve the force balance equations in a fiber geometry. Then we solve them in a disk geometry in Sec. S2.3. The expressions of the aggregate energies are computed by inserting these results into Eq. (4) of the main text and performing the integration. The bulk elastic energy is trivially derived from either one of the resulting expressions by taking the infinite-size limit.
### Force balance equations
We differentiate the sheet energy density of (S1) with respect to the linearized strain tensor to obtain the constitutive equations of the yellow and red sheets
\[\sigma^{\uparrow}_{\alpha\beta}=\lambda(\partial_{\gamma}u^{\uparrow}_{\gamma}-2\epsilon)\delta_{\alpha\beta}+\mu(\partial_{\alpha}u^{\uparrow}_{\beta}+\partial_{\beta}u^{\uparrow}_{\alpha}-2\epsilon\delta_{\alpha\beta})\] (S12a) \[\sigma^{\downarrow}_{\alpha\beta}=\lambda(\partial_{\gamma}u^{\downarrow}_{\gamma}+2\epsilon)\delta_{\alpha\beta}+\mu(\partial_{\alpha}u^{\downarrow}_{\beta}+\partial_{\beta}u^{\downarrow}_{\alpha}+2\epsilon\delta_{\alpha\beta}),\] (S12b)
where \(\lambda=\mu\nu/(1-\nu)\) is the first Lame coefficient and where we have used the strain notation of (S2). Differentiating Eq. (4) of the main text with respect to the yellow and red sheet displacements \(u_{\alpha}^{\uparrow}\) and \(u_{\alpha}^{\downarrow}\) respectively yields the force balance equations for the yellow and red sheets:
\[\partial_{\beta}\sigma_{\alpha\beta}^{\uparrow} =-\kappa_{c}(u_{\alpha}^{\downarrow}-u_{\alpha}^{\uparrow})\] (S13a) \[\partial_{\beta}\sigma_{\alpha\beta}^{\downarrow} =-\kappa_{c}(u_{\alpha}^{\uparrow}-u_{\alpha}^{\downarrow}),\] (S13b)
whose right-hand sides represent the areal densities of external forces exerted by one sheet onto the other through the coupling springs.
We parametrize all displacements and stresses by the bulk position vector \(\mathbf{r}\). This vector is defined as the position of a point in the bulk state, _i.e._, in the state characterized by \(\mathbf{u}^{\uparrow}=\mathbf{u}^{\downarrow}=0\). In other words, the actual position of any point of the yellow sheet is given by \(\mathbf{r}+\mathbf{u}^{\uparrow}(\mathbf{r})\), and that of a point of the red sheet is given by \(\mathbf{r}+\mathbf{u}^{\downarrow}(\mathbf{r})\). In the following, we endeavor to solve the system of equations Eqs. (S12-S13) for the displacement fields \(\mathbf{u}^{\uparrow}(\mathbf{r})\), \(\mathbf{u}^{\downarrow}(\mathbf{r})\) on a two-dimensional domain \(\Omega\) with the stress-free boundary condition
\[\forall\mathbf{r}\in\partial\Omega\quad n_{\alpha}(\mathbf{r})\sigma_{\alpha \beta}^{\uparrow}(\mathbf{r})=n_{\alpha}(\mathbf{r})\sigma_{\alpha\beta}^{ \downarrow}(\mathbf{r})=0,\] (S14)
where \(\mathbf{n}(\mathbf{r})\) denotes the normal to the domain at a point \(\mathbf{r}\) of the domain boundary \(\partial\Omega\). We use linear elasticity throughout.
### Fiber
To study the elastic energy of an infinitely long, straight fiber, we write the position vector \(\mathbf{r}=(x,y)\) in cartesian coordinates. We then solve the force balance equations over the domain \((x,y)\in\Omega=[-W/2,W/2]\times\mathds{R}\), where \(W\) denotes the width of the fiber.
By translational symmetry, the displacement field gradients \(\partial_{\alpha}u_{\beta}^{\uparrow/\downarrow}\) may only depend on the horizontal coordinate \(x\). This implies that the vertical displacements \(u_{y}^{\uparrow}\) and \(u_{y}^{\downarrow}\) are affine functions of the vertical coordinate \(y\). To prevent a divergence in the coupling spring energy, the slopes of these functions must moreover be identical, and we thus write
\[u_{y}^{\uparrow}=u_{y}^{\downarrow}=\phi y,\] (S15)
where \(\phi\) is an undetermined constant that cannot depend on \(x\) lest diverging strains appear in regions of large \(y\) in either or both sheets. The horizontal displacements \(u_{x}^{\uparrow}\), \(u_{x}^{\downarrow}\) must be independent of \(y\) for the same reason. Additive constants on the right-hand-side of (S15) can be ignored without loss of generality through suitable choices of the origins of \(y\) and \(\mathbf{u}^{\uparrow/\downarrow}\). As a consequence of (S15), the vertical component of the force exerted by the coupling springs vanishes everywhere.
Inserting these results into Eqs. (S12-S14) yields a system of coupled equations for two functions of one variable, namely \(u_{x}^{\uparrow}(x)\) and \(u_{x}^{\downarrow}(x)\):
\[\partial_{x}^{2}u_{x}^{\uparrow} =\frac{\kappa_{c}}{\lambda+2\mu}(u_{x}^{\uparrow}-u_{x}^{ \downarrow})\] (S16a) \[\partial_{x}^{2}u_{x}^{\downarrow} =\frac{\kappa_{c}}{\lambda+2\mu}(u_{x}^{\downarrow}-u_{x}^{ \uparrow}),\] (S16b)
with boundary conditions
\[\partial_{x}u_{x}^{\uparrow}(\pm W/2) =\frac{(\lambda+\mu)2\epsilon}{\lambda+2\mu}-\frac{\lambda\phi} {\lambda+2\mu}\] (S17a) \[\partial_{x}u_{x}^{\downarrow}(\pm W/2) =-\frac{(\lambda+\mu)2\epsilon}{\lambda+2\mu}-\frac{\lambda\phi} {\lambda+2\mu},\] (S17b)
which is a continuum version of the one-dimensional toy model of the main text, except for the unknown constant \(\phi\).
To determine the value of \(\phi\), we note that the fiber as a whole is not subjected to any external force. This implies that its total vertical tension must be constant. Due to the no-stress boundary condition at the \(y=\pm\infty\) ends of the fiber, this constant is moreover equal to zero:
\[0=\int_{-W/2}^{W/2}\left[\sigma_{yy}^{\uparrow}(x)+\sigma_{yy}^{\downarrow}(x)\right]\,\mathrm{d}x=2(\lambda+2\mu)W\phi+\lambda U(W/2)-\lambda U(-W/2),\] (S18)
where we have defined \(U(x)=u_{x}^{\uparrow}(x)+u_{x}^{\downarrow}(x)\) and where the last equality was obtained by using (S12) and performing the integration. We finally combine Eqs. (S16-S17) to obtain a simple differential equation for \(U(x)\):
\[\partial_{x}^{2}U=0\qquad\text{with}\qquad\partial_{x}U(\pm W/2)=-\frac{2 \lambda\phi}{\lambda+2\mu},\] (S19)
which implies \(U(x)=-2\lambda\phi x/(\lambda+2\mu)\). Inserting this result into (S18) yields \(\phi=0\).
Inserting the condition \(\phi=0\) into the system Eqs. (S16-S17), we find a linear system of differential equations without any unknown parameters. This system thus has a single solution, which can easily be verified to be Eq. (9) of the main text.
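Although Eq. (9) of the main text is not reproduced in this supplement, the explicit profile follows in two lines (this summary is ours). Writing \(D(x)=u_{x}^{\uparrow}(x)-u_{x}^{\downarrow}(x)\) and \(\ell^{2}=(\lambda+2\mu)/2\kappa_{c}\), the difference of Eqs. (S16a) and (S16b) with \(\phi=0\) gives

\[\partial_{x}^{2}D=\frac{D}{\ell^{2}},\qquad\partial_{x}D(\pm W/2)=\frac{4(\lambda+\mu)\epsilon}{\lambda+2\mu}\quad\Rightarrow\quad D(x)=\frac{4(\lambda+\mu)\epsilon\,\ell}{\lambda+2\mu}\,\frac{\sinh(x/\ell)}{\cosh(W/2\ell)},\]

so the two sheets shift antisymmetrically over the length \(\ell\), with \(u_{x}^{\uparrow}=-u_{x}^{\downarrow}=D/2\) since \(U(x)=0\).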
### Disk
To study the elastic energy of a disk, we write the position vector \(\mathbf{r}=(r,\theta)\) in polar coordinates. We then solve the force balance equations over the domain \((r,\theta)\in\Omega=[0,R]\times[0,2\pi)\), where \(R\) denotes the radius of the disk.
The rotational invariance of the problem imposes \(u_{\theta}^{\uparrow}=-u_{\theta}^{\downarrow}=0\), and implies that the radial displacement depends only on the radial coordinate. We must thus solve for two scalar functions of one variable, namely \(u_{r}^{\uparrow}(r)\) and \(u_{r}^{\downarrow}(r)\). Combining Eqs. (S12-S14), we obtain
\[\partial_{r}^{2}u_{r}^{\uparrow}+\frac{\partial_{r}u_{r}^{ \uparrow}}{r}-\frac{u_{r}^{\uparrow}}{r^{2}} =-\frac{\kappa_{c}}{\lambda+2\mu}(u_{r}^{\downarrow}-u_{r}^{ \uparrow})\] (S20a) \[\partial_{r}^{2}u_{r}^{\downarrow}+\frac{\partial_{r}u_{r}^{ \downarrow}}{r}-\frac{u_{r}^{\downarrow}}{r^{2}} =-\frac{\kappa_{c}}{\lambda+2\mu}(u_{r}^{\uparrow}-u_{r}^{ \downarrow})\] (S20b)
with boundary conditions
\[u_{r}^{\uparrow}(0)=u_{r}^{\downarrow}(0)=0\] (S21a) \[\sigma_{rr}^{\uparrow}(R)=(\lambda+2\mu)\partial_{r}u_{r}^{\uparrow}(R)+\lambda\frac{u_{r}^{\uparrow}(R)-\epsilon}{R}=0\] (S21b) \[\sigma_{rr}^{\downarrow}(R)=(\lambda+2\mu)\partial_{r}u_{r}^{\downarrow}(R)+\lambda\frac{u_{r}^{\downarrow}(R)+\epsilon}{R}=0.\] (S21c)
By summing these equations two by two, we find a system of equations for \(U(r)=u_{r}^{\uparrow}(r)+u_{r}^{\downarrow}(r)\), namely
\[\partial_{r}^{2}U+\frac{\partial_{r}U}{r}-\frac{U}{r^{2}}=0\quad\text{with}\quad U(0)=0\quad\text{and}\quad(\lambda+2\mu)\partial_{r}U(R)+\lambda\frac{U(R)}{R}=0.\] (S22)
This implies \(U(r)=0\) and therefore \(u_{r}^{\uparrow}(r)=-u_{r}^{\downarrow}(r)\). Plugging this condition into (S20a) yields
\[\partial_{r}^{2}u_{r}^{\uparrow}+\frac{\partial_{r}u_{r}^{\uparrow}}{r}-\frac {u_{r}^{\uparrow}}{r^{2}}=\frac{2\kappa_{c}}{\lambda+2\mu}u_{r}^{\uparrow},\] (S23)
which alongside the boundary conditions on \(u_{r}^{\uparrow}\) comprised in (S21) form a fully specified second-order linear differential equation, for which Eq. (12) of the main text is the unique solution.
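For reference (our own remark, since Eq. (12) of the main text is not reproduced here): in the variable \(r/\ell\), with \(\ell^{2}=(\lambda+2\mu)/2\kappa_{c}\), (S23) is the modified Bessel equation of order one, and the solution that is regular at the origin and satisfies the edge condition (S21b) reads

\[u_{r}^{\uparrow}(r)=-u_{r}^{\downarrow}(r)=C\,I_{1}(r/\ell),\qquad C=\frac{\lambda\epsilon/R}{(\lambda+2\mu)I_{1}'(R/\ell)/\ell+\lambda I_{1}(R/\ell)/R},\]

where \(I_{1}\) is the modified Bessel function of the first kind.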
## S3 Smaller \(\ell\) phase diagram
Figure 3 of the main text presents the phase diagram of our non-random discrete particles for boundary layer thicknesses \(\ell=+\infty\) [Fig. 3(a), continuum theory] and \(\ell=5\) [Fig. 3(b), numerical procedure described in the main text]. To further illustrate the influence of the boundary layer thickness \(\ell\), in Fig. 1 we follow the same procedure used to compute the diagram of Fig. 3(b) for a smaller value \(\ell=2.5\). Consistent with the results obtained for random particles in Fig. 6 of the main text, we observe that smaller values of \(\ell\) induce a loss of disks to the benefit of bulks with holes, but no dramatic changes in the fiber and bulk regions of the diagram.
## S4 Procedure to Draw Random Matrix
Here we describe the procedure we use to generate the random instances of the matrix \(\mathbf{M}\) introduced in Eq. (14) of the main text. The form of the energy chosen in Eq. (14) is very generic, as it boils down to a small-displacement Taylor expansion of any energy function of the 12 vertex coordinates under the constraints of translational and rotational invariance. Here we discuss the way in which we enforce the additional symmetries of the matrix \(\mathbf{M}\).
As discussed at the end of the main text, we demand that the elasticity of our particles be three-fold symmetric, _i.e._, invariant under the permutation:
\[d_{1} \to d_{2}\] \[d_{2} \to d_{3}\] \[d_{3} \to d_{1}\] \[d_{4} \to d_{5}\] \[d_{5} \to d_{6}\] \[d_{6} \to d_{4}\] \[d_{7} \to d_{8}\] \[d_{8} \to d_{9}\] \[d_{9} \to d_{7},\]
where the distances \(d_{i}\) are defined in Fig. 5(a) of the main text and the permutation above applies simultaneously to the vectors \(\mathbf{d}\) and \(\mathbf{d}_{0}\) of actual and resting positions. To enforce this condition, we first draw all entries of a \(9\times 9\) matrix \(\mathbf{M}_{0}\) as independent identically distributed variables from the normal distribution \(\mathcal{N}(0,1)\). We then define the \(9\times 9\) block matrix \(\mathbf{\Omega}\) that enforces the aforementioned permutation as:
\[\mathbf{\Omega}=\begin{pmatrix}\omega&0&0\\ 0&\omega&0\\ 0&0&\omega\end{pmatrix},\qquad\text{where the block $\omega$ is given by}\qquad\omega= \begin{pmatrix}0&0&1\\ 1&0&0\\ 0&1&0\end{pmatrix}.\] (S24)
We then apply permutations to the symmetry-less matrix \(\mathbf{M}_{0}\) to obtain
\[\mathbf{M}_{1}=\frac{1}{3}\left(\mathbf{M}_{0}+\mathbf{\Omega}\mathbf{M}_{0} \mathbf{\Omega}^{-1}+\mathbf{\Omega}^{2}\mathbf{M}_{0}\mathbf{\Omega}^{-2} \right),\] (S25)
which has the required three-fold symmetry property.

Figure 1: Phase diagram based on the numerical comparison of the energies of the aggregates shown in Fig. 4(a) of the main text for \(\ell=2.5\). The color code is as in Fig. 3 of the main text; black: bulk with holes, red: disk, blue: fiber, purple: bulk.
Our second and last requirement for our elasticity matrix is that it be positive semi-definite, which prevents the ground state of our individual particles from being mechanically unstable. We enforce this condition by defining
\[\mathbf{M}=\mathbf{M}_{1}\mathbf{M}_{1}^{T},\] (S26)
where \({}^{T}\) denotes the usual matrix transposition. By combining Eqs. (S25) and (S26) and realizing that \(\mathbf{\Omega}^{3}=\mathbf{I}\) it is easy to show that the energy of Eq. (14) of the main text then satisfies the three-fold symmetry condition \(e_{d}(\mathbf{\Omega}\mathbf{d})=e_{d}(\mathbf{d})\). All elasticity matrices used in Figs. 5 and 6 of the main text are obtained through the procedure described here.
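The generation procedure is compact enough to sketch in code; the snippet below is our own illustration (array conventions and function names are ours, not those of the original implementation).

```python
import numpy as np

def random_elasticity_matrix(rng=np.random.default_rng()):
    """Draw a 9x9 elasticity matrix M that is positive semi-definite and
    invariant under the three-fold permutation of the distances d_1...d_9."""
    omega_block = np.array([[0, 0, 1],
                            [1, 0, 0],
                            [0, 1, 0]])
    # Block-diagonal permutation matrix Omega of (S24); it is orthogonal,
    # so its inverse equals its transpose.
    Omega = np.kron(np.eye(3, dtype=int), omega_block)
    M0 = rng.normal(size=(9, 9))  # symmetry-less draw of i.i.d. N(0, 1) entries
    # Three-fold symmetrization of Eq. (S25).
    M1 = (M0 + Omega @ M0 @ Omega.T + Omega @ Omega @ M0 @ Omega.T @ Omega.T) / 3
    return M1 @ M1.T              # Eq. (S26): positive semi-definite by construction

def deformation_energy(M, d, d0):
    """Particle deformation energy of Eq. (14) of the main text."""
    delta = np.asarray(d, dtype=float) - np.asarray(d0, dtype=float)
    return 0.5 * delta @ M @ delta
```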
## S5 Self-limited aggregate sizes scale like the boundary layer thickness \(\ell\) in random particles
Randomly drawn matrices \(\mathbf{M}\) display heterogeneous elastic properties, which result in a wide distribution of aggregate sizes. In Fig. 2 we show that the value of the surface tension \(\Gamma/\Gamma_{\text{max}}\) is not sufficient to accurately predict the size of the aggregates resulting from a matrix \(\mathbf{M}\). By contrast, we show in Fig. 5(d) of the main text that rescaling the aggregate sizes by the boundary layer thickness \(\ell\) leads to a collapse of the aggregate sizes, consistent with our continuum theory.
## S6 Selection of the random particles used in the phase diagrams
To generate the random particle aggregation diagrams of Fig. 6 of the main text, we first draw \(3\times 10^{7}\) random matrices and compute the boundary layer length of the associated particles. This large sample size is required to obtain a sufficient number of particles with relatively high values of \(\ell\), as is apparent from the fast decay of the probability density of Fig. 3(a) as \(\ell\) increases.
We next divide the range of accessible boundary layer lengths into three intervals: \(\ell\in[0,1)\), \(\ell\in[1,1.5)\) and \(\ell\in[1.5,\infty)\). We randomly select batches of \(10^{6}\) particles from the first two intervals and use the whole third batch, which contains only \(1.25\times 10^{5}\) particles. We then compute the Poisson ratio (see Sec. S7 for the procedure) for all particles in the three batches and further select a few thousand particles from each to obtain quasi-uniform distributions of Poisson ratios as represented in Fig. 3(b). Finally, for each particle, we construct an aggregation diagram by numerically determining the best aggregate upon varying the surface tension. The outcome of this procedure is Fig. 6 of the main text.
Figure 2: Equilibrium width \(W^{*}\) and radius \(R^{*}\) of random particle aggregates in units of number of particles. The data shown is identical to that of Fig. 5(d) of the main text, only without rescaling by the boundary layer thickness \(\ell\).

## S7 Predicting the boundary layer thickness from the elastic properties of random particles

In our continuum model, the thickness of the boundary layer that marks the transition between strongly constrained bulk particles and relatively unconstrained aggregate-edge particles is directly tied to the ease with which the two
subtriangles that constitute the particles can be shifted relative to each other. Here we tentatively apply a similar reasoning to random particles, and construct the ultimately successful estimate \(\ell^{\text{pred}}\) of the boundary layer thickness presented in Fig. 5(c) of the main text. In our continuum model the boundary layer thickness \(\ell\) is constructed from a ratio of elastic moduli, namely
\[\ell^{2}=\frac{\lambda+2\mu}{2\kappa_{c}}=\frac{K}{(1+\nu)\kappa_{c}}.\] (S27)
where \(\kappa_{c}\) denotes the inter-sheet coupling constant, \(K=\lambda+\mu\) is the intra-sheet bulk modulus and \(\nu=\lambda/(\lambda+2\mu)\) is the intra-sheet Poisson ratio. These parameters are not rigorously well-defined in a bulk aggregate of random particles characterized by an elasticity matrix \(\mathbf{M}\) [Eq. (14) of the main text], which does not in general exactly map onto the continuum energy of Eq. (4) of the main text. To nonetheless derive our estimate \(\ell^{\text{pred}}\), here we set out to compute proxies for each of these three parameters. We base our procedure on the computation of pseudo-moduli associated with specific deformations illustrated in Fig. 4. We thus define the pseudo-modulus \(\mathcal{K}\) associated with a deformation vector \(\delta\mathbf{d}=\mathbf{d}-\mathbf{d}_{0}\) as
\[\mathcal{K}[\delta\mathbf{d}]=\frac{\delta\mathbf{d}^{T}\cdot\mathbf{M}\cdot \delta\mathbf{d}}{A_{0}^{\text{hex}}},\] (S28)
where \(A_{0}^{\text{hex}}=\sqrt{3}/2\) is the resting area of a hexagonal particle to zeroth order in \(\epsilon\). In the following, we choose the normalization of \(\delta\mathbf{d}\) so that the definition of (S28) coincides with the moduli discussed in Eq. (4) of the main text when applied to our simple particle model (Fig. 2 of the main text). In that specific case, the deformations \(\delta\mathbf{d}\) used below are eigenvectors of \(\mathbf{M}\).
We first estimate the sheet-coupling modulus \(\kappa_{c}\) as the pseudo-modulus associated with a relative displacement between the two subtriangles of a particle. We picture two possible protocols for such a displacement in Fig. 4(a) and (b), namely a horizontal or a vertical shift of the two triangles, although any intermediate direction is also allowed. Due to the three-fold symmetry of the particle, all these shifting protocols are associated with the same modulus. Using the deformation mode of Fig. 4(a), we write
\[\kappa_{c}^{\text{pred}}=\mathcal{K}\left[\left(0,0,0,0,0,0,-\sqrt{3}/2,0,\sqrt {3}/2\right)\right].\] (S29)
We next estimate the intra-sheet bulk modulus \(K\), which characterises the stiffness of each sheet with respect to an isotropic expansion or compression, as the pseudo-modulus associated with the expansion-compression deformation illustrated in Fig. 4(c), namely
\[K^{\text{pred}}=\mathcal{K}\left[\left(-2^{-3/2},-2^{-3/2},-2^{-3/2},2^{-3/2},2^{-3/2},2^{-3/2},0,0,0\right)\right].\] (S30)
Figure 3: Distribution of random particle properties resulting from the procedure of Sec. S4. (a) Distribution of boundary layer lengths within the initial draw of \(3\times 10^{7}\) elasticity matrices \(\mathbf{M}\). (b) Distribution of Poisson ratio within each of the three batches of particles. We respectively pick 3400, 3500 and 2400 particles out of the three batches to obtain the quasi-uniform distributions of Poisson ratios materialized by the black lines.

Our third step is to estimate the Poisson ratio as the average
\[\nu^{\text{pred}}=\frac{\nu^{\uparrow}+\nu^{\downarrow}}{2}\] (S31)
of the individual pseudo-Poisson ratios \(\nu^{\uparrow}\) and \(\nu^{\downarrow}\) of the yellow and red sublattices (and thus of the yellow and red sheets in the continuum limit). We estimate each of these sheet-specific Poisson ratios as
\[\nu^{\uparrow/\downarrow}=\frac{K^{\uparrow/\downarrow}-\mu^{\uparrow/ \downarrow}}{K^{\uparrow/\downarrow}+\mu^{\uparrow/\downarrow}},\] (S32)
where we define the sheet-specific bulk and shear pseudo-moduli through the four deformations illustrated in Fig. 4(d-e), namely
\[K^{\uparrow} =\mathcal{K}\left[(1/2,1/2,1/2,0,0,0,0,0,0)\right]\] (S33a) \[\mu^{\uparrow} =\mathcal{K}\left[\left(0,-\sqrt{3}/4,\sqrt{3}/4,0,0,0,0,0,0\right)\right]\] (S33b) \[K^{\downarrow} =\mathcal{K}\left[(0,0,0,1/2,1/2,1/2,0,0,0)\right]\] (S33c) \[\mu^{\downarrow} =\mathcal{K}\left[\left(0,0,0,\sqrt{3}/4,0,-\sqrt{3}/4,0,0,0\right) \right].\] (S33d)
We finally combine Eqs. (S29-S31) by inserting them into (S27), which yields the values of \(\ell^{\mathrm{pred}}\) displayed in Fig. 5(c) of the main text. We also use the pseudo-Poisson ratio defined in (S31) as the vertical coordinate of the phase diagrams of Fig. 6 of the main text.
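The full recipe of this section can be condensed into a short script. The sketch below is our own illustration (names are ours); it assumes the nine-component deformation vectors written in (S29), (S30) and (S33).

```python
import numpy as np

A0_HEX = np.sqrt(3) / 2  # resting area of a hexagonal particle, to zeroth order in epsilon

def pseudo_modulus(M, delta_d):
    """Pseudo-modulus of (S28) for a prescribed deformation vector."""
    delta_d = np.asarray(delta_d, dtype=float)
    return delta_d @ M @ delta_d / A0_HEX

def predicted_boundary_layer(M):
    """Estimate ell_pred from an elasticity matrix M via (S27)-(S33)."""
    s = np.sqrt(3)
    kappa_c = pseudo_modulus(M, [0, 0, 0, 0, 0, 0, -s/2, 0, s/2])        # (S29)
    K = pseudo_modulus(M, [-2**-1.5]*3 + [2**-1.5]*3 + [0, 0, 0])        # (S30)
    K_up  = pseudo_modulus(M, [0.5, 0.5, 0.5, 0, 0, 0, 0, 0, 0])         # (S33a)
    mu_up = pseudo_modulus(M, [0, -s/4, s/4, 0, 0, 0, 0, 0, 0])          # (S33b)
    K_dn  = pseudo_modulus(M, [0, 0, 0, 0.5, 0.5, 0.5, 0, 0, 0])         # (S33c)
    mu_dn = pseudo_modulus(M, [0, 0, 0, s/4, 0, -s/4, 0, 0, 0])          # (S33d)
    nu_up = (K_up - mu_up) / (K_up + mu_up)                              # (S32)
    nu_dn = (K_dn - mu_dn) / (K_dn + mu_dn)
    nu = 0.5 * (nu_up + nu_dn)                                           # (S31)
    return np.sqrt(K / ((1 + nu) * kappa_c))                             # (S27)
```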
## S8 Unsuccessful alternative predictor of the boundary layer thickness in random particles
The successful method described in Sec. S7 is only one of many possible extensions of our continuum theory to random particles. Here we discuss a different, unsuccessful approach and draw conclusions from its failure.
Figure 4: Deformation protocols used to estimate the boundary layer thickness and Poisson modulus of a collection of random particles. These estimates are made to zeroth order in \(\epsilon\). We thus set \(\epsilon\) to zero in this figure and throughout the procedure. The black arrows denote externally imposed node displacements. (a) Displacement protocol used to generate the estimate \(\kappa_{\mathrm{c}}^{\mathrm{pred}}\) of the coupling modulus in (S29). The definitions of the elements of the distance vector \(\mathbf{d}\) are recalled here for convenience. (b) Another displacement protocol equivalent to that of the previous panel. (c) Simultaneous isotropic bulk deformation of the two triangles used to estimate the sheet bulk modulus \(K^{\mathrm{pred}}\). In the case of the simple particles of Fig. 2 of the main text, this mixture of expansion and compression imposes bulk deformations on both sub-triangles while leaving the coupling springs lengths unchanged to lowest order in deformation. We thus use it to extract the value of the bulk modulus independently from the coupling modulus in (S30). (d) Bulk and shear deformation of the yellow triangle used to compute \(\nu^{\uparrow}\) through \(K^{\uparrow}\) and \(\mu^{\uparrow}\) in (S32). The grey nodes are assumed to be free to move in such a fashion that the length of the grey segments remains unchanged during the deformation. This is equivalent to decoupling the black triangle from the rest of the particle, as mentioned in the main text. (e) Bulk and shear deformation of the red triangle used to compute \(\nu^{\downarrow}\) through \(K^{\downarrow}\) and \(\mu^{\downarrow}\) in (S32). The meaning of the grey nodes is the same as in the previous panel.
The foundation of our continuum approach is the existence of an emergent length scale in our deterministic 2D particle model that diverges in the limit where one elastic constant of the particles (\(k_{c}\)) becomes much smaller than another (\(k\)). Mechanistically, this large mismatch in elastic constants means that the restoring forces from the softer deformation mode of the particle _slowly_ accumulate from the edge of the aggregate to the bulk as discussed in the main text for the one-dimensional model of Fig. 1. Mathematically, any deformation of the particle may be decomposed into a linear combination of the eigenvectors of the matrix \(\mathbf{M}\) defined in Eq. (14) of the main text. These deformation eigenmodes do not couple to each other in an isolated particle, and each has its own stiffness associated with the corresponding eigenvalue of \(\mathbf{M}\). In our deterministic 2D particles, the length \(\ell\) is inversely proportional to the square root of the smallest of these eigenvalues.
This leads us to hypothesize that any soft mode of deformation of the particle may be able to play the same role that the shifting mode of Fig. 4(a-b) plays in our deterministic 2D model: to produce slowly accumulating stresses that take the particle from a more relaxed edge configuration to the bulk configuration. If several soft modes exist within the particle, the softer one should correspond to the thickest boundary layer and thus should dominate the decay of the elastic deformation far enough from the aggregate edge. This reasoning thus suggests the following proxy for the boundary layer thickness:
\[\ell^{\text{eigen}}=\frac{1}{\sqrt{\min_{i}\lambda_{i}}},\] (S34)
where the \(\lambda_{i}\) denote the eigenvalues of \(\mathbf{M}\). We test the accuracy of this predictor in Fig. 5 using the same plotting convention as in Fig. 5(c) of the main text, and find that it does not significantly correlate with \(\ell\). As detailed in the discussion section of the main text, we conclude that not all soft modes of our particles are compatible with the mechanism of stress accumulation over large length scales described above.
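For completeness, this naive predictor is a one-liner; the sketch below is our own illustration, not code from the original work.

```python
import numpy as np

def eigen_predictor(M):
    """Naive estimate of (S34): inverse square root of the softest eigenvalue of M."""
    return 1.0 / np.sqrt(np.linalg.eigvalsh(M).min())
```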
Figure 5: The predictor \(\ell^{\text{eigen}}\) of (S34) does not correlate with the measured boundary layer thickness \(\ell\). Each of the two side panels is a close-up of a region of the central panel, as indicated by the black rectangles.

## S9 Triangular particles

To demonstrate the robustness of the results of the main text to a change in particle design, here we introduce a model of triangular deterministic particles. Similar to the main text, the model particles comprise two triangles made of hard \(k\) springs and a set of softer (six in this case) \(k_{c}\) coupling springs. The particle design is illustrated in Fig. 6(a), and aggregates thereof are shown in Fig. 6(b) and (c). This design gives rise to the same continuum theory as the model of the main text, and we conduct comparisons of pre-made discrete aggregate structures as well as Monte-Carlo simulations using the same methodology as in the main text. These results are shown in Fig. 6(d). The aggregate designs used for the former type of analysis are shown in Fig. 6(e). By contrast with the model described in the main text, the bulk with hole is never advantageous because sticking two triangular particles together has a non-zero elastic energy cost. As a result, a gas of isolated single particles is favored at very low tensions. In addition, unlike in Fig. 3(b) of the main text, the frontier between fiber and bulk is perfectly vertical. This further confirms that the extended region of stability of the fibers observed for the hexagonal model of the main text is due to the curvature of the fibers. Apart from these nuances, the results obtained with this model are very similar to those of the deterministic 2D hexagonal particle model detailed in the main text, thus confirming the robustness of our continuum approach.
|
2307.03772 | Effects of anisotropy on the high field magnetoresistance of Weyl
semimetals | We study the effects of anisotropy on the magnetoresistance of Weyl
semimetals (WSMs) in the ultraquantum regime. We utilize the fact that many
Weyl semimetals are approximately axially anisotropic. We find that anisotropy
manifests itself in the strong dependence of the magnetoresistance on the polar
and azimuthal angles determining the orientation of the anisotropy axis with
respect to the applied magnetic field and electric current. We also predict
that the ratio of magnetoresistances in the geometries, where the magnetic
field and anisotropy axes are aligned and where they are orthogonal, scales as
$(v_\bot/v_\parallel)^2$ where $v_\bot$ and $v_\parallel$ are the corresponding
Fermi velocities. | A. S. Dotdaev, Ya. I. Rodionov, K. I. Kugel, B. A. Aronzon | 2023-07-07T18:00:02Z | http://arxiv.org/abs/2307.03772v1 | # Effects of anisotropy on the high field magnetoresistance of Weyl semimetals
###### Abstract
We study the effects of anisotropy on the magnetoresistance of Weyl semimetals (WSMs) in the ultraquantum regime. We utilize the fact that many Weyl semimetals are approximately axially anisotropic. We find that anisotropy manifests itself in the strong dependence of the magnetoresistance on the polar and azimuthal angles determining the orientation of the anisotropy axis with respect to the applied magnetic field and electric current. We also predict that the ratio of magnetoresistances in the geometries, where the magnetic field and anisotropy axes are aligned and where they are orthogonal, scales as \((v_{\perp}/v_{\parallel})^{2}\) where \(v_{\perp}\) and \(v_{\parallel}\) are the corresponding Fermi velocities.
pacs: 72.10.-d, 72.15.Gd, 71.55.Ak, 72.80.-r
## I Introduction
Weyl [1; 2; 3; 4] and Dirac [5; 6; 7; 8] semimetals have attracted intense interest in recent years. Due to their _relativistic_ 3D Hamiltonian with the Fermi velocity playing the role of the speed of light, they exhibit intriguing transport properties, e.g., disorder-driven phase transitions [9; 10], unusual topological phenomena, such as the existence of Fermi arcs, surface states open in momentum space that connect Weyl fermions of opposite chiralities [11], and, finally, pronounced QED-type phenomena such as the chiral anomaly [12; 13; 14; 15]. To some extent, these materials are essentially a solid-state realization of QED physics.
Of particular interest are the transport properties of WSMs in a magnetic field perpendicular to the transport voltage (transverse magnetoresistance). Recent experiments undertaken in the ultraquantum regime (at which temperature and chemical potential are much less than the energy gap between the zeroth and the first Landau levels (LLs)) reveal unsaturated magnetoresistance [16; 17; 18; 19], linear in the magnetic field \(H\) (\(\rho_{xx}\propto H\)). At first glance, this behavior seems surprising since the usual relaxation-time arguments predict the saturation of the magnetoresistance at high magnetic fields. However, the transverse magnetoresistance of a compound with a massless Dirac spectrum in the ultraquantum regime was theoretically studied by A. Abrikosov [20] back in 1998. He assumed the principal source of the disorder in the compound to be Coulomb impurities. He found that the magnetoresistance obeys a linear law as a function of the magnetic field \(H\). In his work A. Abrikosov addressed the simplest isotropic gapless semiconductor with a linear spectrum identical to that of a Dirac semimetal.
Actual WSMs are highly anisotropic compounds. Fortunately for the theoretical analysis, some of the most popular ones, such as Cd\({}_{3}\)As\({}_{2}\)[5] or Na\({}_{3}\)Bi [21], are approximately axially anisotropic with similar Fermi velocity ratios, \(\xi=v_{\perp}/v_{\parallel}\approx 4\), and untilted Weyl cones. Naturally, the anisotropy of the materials substantially complicates the theoretical study. Most theoretical works so far addressed the anisotropy in WSMs caused by a possible tilt of the Weyl node, the so-called type-II WSMs [22; 23]. In the meantime, the anisotropy of WSMs with untilted Weyl cones is expected to have a dramatic effect on the experimental study of transport phenomena. Indeed, active experimental interest in the implications of anisotropy of WSMs with untilted Weyl cones has recently arisen [24; 25; 19].
The effect of anisotropy of the untilted Weyl cone on transport properties of WSMs with the Coulomb disorder has not been studied theoretically yet. We note the comprehensive work [26], where the effects of chemical potential and temperature on magnetoresistance in an isotropic WSM with the Coulomb disorder were (although mostly numerically) addressed. Also of note is the exhaustive study of magnetoresistance of isotropic WSMs with \(\delta\)-correlated disorder [27]. The effect of strong Coulomb disorder on the transverse magnetoresistance was addressed in Ref. [28]. The effect of anisotropy on the transport of WSMs with long-range disorder without a magnetic field was studied in Ref. [29].

Figure 1: Geometry of the problem. The anisotropy axis (\(\mathbf{n}_{0}\)) is inclined by polar angle \(\Theta\) and azimuth angle \(\Phi\). The voltage is applied along the \(x\) axis.
In this paper, we compute the magnetoconductivity and magnetoresistance of a WSM with an axially anisotropic untilted Weyl cone in the ultraquantum regime. We obtain the magnetoresistance as a function of the magnetic field, and of the polar and azimuthal angles of the anisotropy axis (see Fig. 1 for the actual geometry). We analyze the scaling of the conductivity tensor components with the anisotropy parameter \(\xi=v_{\perp}/v_{\parallel}\).
The paper is organized as follows. In Sec. II, we introduce the anisotropic WSM Hamiltonian and discuss the transformation properties of conductivity necessary for the computation. Sec. III addresses the computation of the magnetoconductivity. In Sec. IV, we deal with the magnetoresistance and analyze its \(\xi\) and angular dependence. We summarize the results of the paper in Sec. V and discuss the regime in which they are applicable.
## II Formulation of the model
### Hamiltonian
We start with the standard anisotropic Hamiltonian for electrons in the Coulomb disorder potential
\[H =H_{0}+H_{\text{imp}},\] \[H_{0} =\sum_{i=\perp,\parallel}\int\psi^{\dagger}(\mathbf{r})\mathbf{\sigma}_{i}\left(v_{i}\Big{[}\mathbf{p}-\frac{e}{c}\mathbf{A}\Big{]}_{i}\right)\psi(\mathbf{r})d\mathbf{r}, \tag{1}\] \[H_{\text{imp}} =\int\psi^{\dagger}(\mathbf{r})u(\mathbf{r})\psi(\mathbf{r})d\mathbf{r}, \tag{2}\]
where \(H_{0}\) is the anisotropic Hamiltonian of noninteracting Weyl fermions, \(\psi(\mathbf{r})\) and \(\psi^{\dagger}(\mathbf{r})\) are the fermion annihilation and creation operators, \(\mathbf{\sigma}\) are the Pauli matrices with \(\mathbf{\sigma}_{\parallel}=(\mathbf{\sigma}\cdot\mathbf{n}_{0})\mathbf{n}_{0}\) and \(\mathbf{\sigma}_{\perp}=\mathbf{\sigma}-\mathbf{n}_{0}(\mathbf{\sigma}\cdot\mathbf{n}_{0})\), \(v_{\parallel}\) and \(v_{\perp}\) are the Fermi velocities, and \(\mathbf{n}_{0}\) is the unit vector determining the direction of the anisotropy axis (see Fig. 1). The term \(H_{\text{imp}}\) is responsible for the interaction between electrons and Coulomb impurities. For reference, we present the details of the derivation of Hamiltonian (1) in Appendix A.
As is well known, the Nielsen-Ninomiya theorem [30] states that the Weyl nodes should appear in pairs within the Brillouin zone. However, due to the smoothness of the disorder potential (see the details in the Discussion section), we discard the charge carrier scattering between the nodes. Therefore, to determine the full conductivity, one simply multiplies the result from a single Weyl node by the number of nodes in the Brillouin zone of the WSM. Throughout the paper, we set \(\hbar=1\) and introduce the variable \(\Omega\), related to the magnetic field (the distance between the zeroth and the first LL) and the magnetic length \(l_{H}\)
\[\Omega^{2}=\frac{2eHv_{\parallel}}{c},\quad l_{H}^{2}=\frac{c}{eH}. \tag{3}\]
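For orientation, the two scales in Eq. (3) are easy to evaluate numerically once \(\hbar\) is restored; the short sketch below works in SI units and uses an assumed field of 1 T and a Fermi velocity of \(10^{6}\) m/s purely for illustration (these numbers are not taken from the paper).

```python
import numpy as np

hbar = 1.0545718e-34  # J s
e = 1.602176634e-19   # C

def landau_scales(B, v_par):
    """Magnetic length and the 0 -> 1 Landau-level spacing Omega (hbar restored, SI units)."""
    l_H = np.sqrt(hbar / (e * B))               # magnetic length, m
    Omega = v_par * np.sqrt(2 * hbar * e * B)   # level spacing, J
    return l_H, Omega

# Illustrative inputs (assumptions): B = 1 T, v_par = 1e6 m/s
l_H, Omega = landau_scales(B=1.0, v_par=1e6)
print(f"l_H = {l_H*1e9:.1f} nm, Omega = {Omega/e*1e3:.1f} meV")  # roughly 26 nm and 36 meV
```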
### Disorder potential
The screened disorder potential reads
\[u(\mathbf{k})=\frac{4\pi e^{2}}{\epsilon}\frac{1}{k^{2}-\frac{4\pi e^{2}}{ \epsilon}\Pi(k^{2})}, \tag{4}\]
where \(\epsilon\) is the dielectric constant and \(\Pi(k)\) is the Fermi gas polarization operator taken in the static limit (frequency is set to zero): \(\Pi(k^{2})\equiv\Pi(\omega,k^{2})|_{\omega=0}\). In the situation of the ordinary Fermi liquid, the momentum transferred by the static disorder potential to a charge carrier is much smaller than the Fermi momentum \(k\ll k_{\text{F}}\). This entails the possibility to expand \(\Pi(k^{2})\) in terms of \(k/k_{\text{F}}\ll 1\) in Eq. (4), and keep the first term only: \(\Pi(k)=\Pi(0)+k^{2}\partial_{k^{2}}\Pi(0)+...\), where \(\Pi(0)=-dn/d\mu\) is the thermodynamic density of states. This leads to the standard static screening of the Coulomb interaction.
In our problem, as we will see in the course of calculations, the situation is more subtle. The role of Fermi momentum is assumed by the inverse magnetic length \(l_{H}^{-1}\). We may write the expression for the exact polarization operator in the following suitable form
\[\Pi(k^{2}) =-\frac{dn}{d\mu}(1+c_{1}(kl_{H})^{2}+c_{2}(kl_{H})^{4}+...) \tag{5}\] \[=-\frac{dn}{d\mu}[1+k^{2}l_{H}^{2}f(k^{2}l_{H}^{2})],\]
where \(f(0)\neq 0\) and \(f(x)\) is some dimensionless function measuring a deviation of the polarization operator from its value at zero momentum. At low temperatures, only the zeroth Landau level is occupied and \(dn/d\mu\) is easily calculated (see e.g. Ref. [31]) yielding \(dn/d\mu=(2\pi^{2}vl_{H}^{2})^{-1}\). Using Eq. (4), we write the following expression for the screened disorder potential
\[u(\mathbf{k})=\frac{4\pi e^{2}}{\epsilon}\frac{1}{k^{2}[1+\frac{2\alpha}{\pi} f(k^{2}l_{H}^{2})]+\frac{2\alpha}{\pi l_{H}^{2}}}, \tag{6}\]
where
\[\alpha=\frac{e^{2}}{\epsilon\hbar v_{\parallel}}, \tag{7}\]
is the so called fine structure constant for WSM.
We will see that the main contribution to the conductivity related to the disorder potential comes from the momentum range \(k\lesssim l_{H}^{-1}\), where \(l_{H}\) is defined in Eq. (3) (see Appendix C for the details). As a result, in contrast to the Fermi-liquid theory, the argument of the function \(f\) entering the denominator of Eq. (6) is of the order of unity, i.e. \(f(k^{2}l_{H}^{2})\sim\mathcal{O}(1)\) in our problem.
However, as is well known, a typical WSM like Cd\({}_{3}\)As\({}_{2}\) has an additional small parameter \(\alpha\ll 1\), which for Cd\({}_{3}\)As\({}_{2}\) equals \(\alpha\approx 0.05\)[32; 5]. This drastically simplifies our analysis. Taking into account the exact \(\Pi(k^{2})\) in the Coulomb disorder (4) instead of \(\Pi(0)\) is equivalent to keeping the term with the function \(f\) in expression (6). However, as one sees from (6), \(f\) enters with the small prefactor \(\alpha\) in the renormalization of the Coulomb field.
Thus, keeping the term containing \(f\) in Coulomb interaction (6) yields small (of the order of \(\alpha\)) corrections to the observables. We, on our part, will keep only the terms of the order of \(\mathcal{O}(\ln\alpha)\) and \(\mathcal{O}(1)\). Therefore, we will substitute \(\Pi(k^{2})\) by \(\Pi(0)\) in disorder potential (4) and will use the standard Lindhard expression for the renormalized Coulomb potential
\[u(\mathbf{k})=\frac{4\pi e^{2}}{\epsilon}\frac{1}{k^{2}+\kappa^{2}}, \tag{8}\]
where, from now on, \(\kappa^{2}=2\alpha\pi^{-1}l_{H}^{-2}\ll l_{H}^{-2}\) is the inverse Debye screening length squared.
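As a minimal sketch of Eq. (8), the screened potential can be coded directly; the overall factor \(4\pi e^{2}/\epsilon\) is kept as a single symbolic prefactor, momenta are measured in units of \(1/l_{H}\), and the numerical values below are assumptions used only for illustration.

```python
import numpy as np

def kappa_sq(alpha, l_H):
    """Inverse Debye screening length squared: kappa^2 = 2*alpha/(pi*l_H^2)."""
    return 2.0 * alpha / (np.pi * l_H**2)

def u_screened(k, alpha, l_H, coulomb_prefactor=1.0):
    """Screened disorder potential of Eq. (8); coulomb_prefactor stands for 4*pi*e^2/eps."""
    return coulomb_prefactor / (k**2 + kappa_sq(alpha, l_H))

# alpha = 0.05 (Cd3As2-like); momenta in units of 1/l_H
k = np.linspace(0.0, 3.0, 4)
print(u_screened(k, alpha=0.05, l_H=1.0))
```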
### Transformation of the conductivity tensor
Before we proceed any further, it is convenient to introduce the rescaling which makes the spectrum isotropic
\[r_{\parallel}=r_{s,\parallel},\ \ r_{\perp}=\xi r_{s,\perp},\ \ \psi( \mathbf{r})=\frac{1}{\xi}\psi_{s}(\mathbf{r}_{s}),\ \ v_{\perp}=\xi v_{\parallel}. \tag{9}\]
Transformation (9) makes the disorder-free part of the Hamiltonian isotropic
\[\begin{split} H_{s,0}&=-iv_{\parallel}\int\psi_{s}^ {\dagger}(\mathbf{r}_{s})\mathbf{\sigma}\left(\nabla_{\mathbf{r}_{s}}-i\frac{e}{c} \mathbf{A}\right)\psi_{s}(\mathbf{r}_{s})d\mathbf{r}_{s},\\ H^{\prime}_{\text{imp}}&=\int\psi_{s}^{\dagger} (\mathbf{r}_{s})u(\mathbf{r}_{s,\parallel}+\xi\mathbf{r}_{s,\perp})\psi( \mathbf{r}_{s})d\mathbf{r}_{s}.\end{split} \tag{10}\]
The transformation is performed in three steps. First, we rotate the coordinate system so that the new \(z^{\prime}\) axis becomes parallel to the anisotropy axis. We rotate it by angle \(\Phi\) about axis \(z\) and then by angle \(\Theta\) about the transformed \(y\) axis (see Fig. 1).
\[\sigma=R\sigma^{\prime}R^{-1}, \tag{11}\]
where matrix \(R\) is presented in Appendix A, Eq. (10). Second, we perform the rescaling. We denote the rescaled conductivity tensor in the rotated basis as \(\sigma_{s}^{\prime}\). The correct transformation rule is not immediately obvious. The details are summarized in Appendix B. The transformation rule has the form
\[\sigma^{\prime}=S_{1}\sigma_{s}^{\prime}S^{-1}, \tag{12}\]
where matrices \(S\) and \(S_{1}\) are defined by Eqs. (11) and (12).
The scaling transformation also changes the components of the magnetic field vector \(\mathbf{H}\). The transformation law is derived in Appendix B, (see Eq. (13))
\[\mathbf{H}_{s}=H(-\xi\sin\Theta,\ 0,\ \xi^{2}\cos\Theta). \tag{13}\]
We see that the magnetic field changes according to the law
\[H_{s}=\xi\eta H,\ \ \eta=\sqrt{\xi^{2}\cos^{2}\Theta+\sin^{2}\Theta}. \tag{14}\]
It is important to note that Eq. (13) also entails the change of the inclination angle of the vector with respect to the scaled basis (though the direction of the vector itself, of course, stays unchanged; to accentuate the fact that the measured angle changes, we draw vector \(\mathbf{H}_{s}\) in a slightly different direction in Fig. 2 for illustrative purposes). As is seen in the figure, the rescaled magnetic field vector is inclined by an angle
\[\gamma=-\arctan\left(\frac{1}{\xi}\tan\Theta\right) \tag{15}\]
in the rotated (\(x^{\prime}z^{\prime}\)) plane with respect to the \(z^{\prime}\) axis. To make the calculation for the conductivity easier, we need to switch to the coordinate system in which the \(z\) axis is aligned along the magnetic field vector. Therefore, we need to perform the reversed rotation by the angle \(\gamma\) in the (\(x^{\prime}z^{\prime}\)) plane (we denote the corresponding rotation matrix as \(R_{\gamma}\)). Let us denote the new conductivity tensor in the once more rotated basis as \(\sigma^{\prime}_{s}\)
Figure 2: (a) Rotated basis \(x^{\prime},y^{\prime},z^{\prime}\). Axis \(z^{\prime}\) is oriented along the anisotropy vector \(\mathbf{n}_{0}\). (b) Position of the rescaled magnetic field vector \(\mathbf{H}_{s}\) after the rescaling. We can see that it remains in the \(z^{\prime}y^{\prime}\) plane, but is rotated by angle \(\gamma\) about the \(y^{\prime}\) axis.
\[\sigma_{s}=R_{\gamma}\sigma^{\prime}_{s}R_{\gamma}^{-1}, \tag{16}\]
where the \(R_{\gamma}\) matrix is identical to matrix \(R\) from Eq. (14) up to the change \(\Theta\rightarrow\gamma,\ \Phi\to 0\). As a result, the initial conductivity tensor and the rescaled and rotated one are related by the following transform
\[\sigma=RS_{1}R_{\gamma}\sigma^{\prime}_{s}R_{\gamma}^{-1}S^{-1}R^{-1}. \tag{17}\]
This results in the following final expression for relating components of the conductivity tensor in the rotated rescaled basis and the initial one
\[\sigma_{xx}=\frac{1}{\xi^{2}}\big{[}\eta^{2}\sigma^{\prime}_{s,xx}\cos^{2} \Phi+\xi^{2}\sigma^{\prime}_{s,yy}\sin^{2}\Phi\big{]}, \tag{18}\]
where \(\eta\) is defined in Eq. (14).
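Equations (14), (15) and (18) are simple enough to transcribe directly; the sketch below treats the rescaled components \(\sigma^{\prime}_{s,xx}\) and \(\sigma^{\prime}_{s,yy}\) as known inputs and only evaluates the geometric factors (the numerical values are illustrative assumptions).

```python
import numpy as np

def eta(theta, xi):
    """Eq. (14): eta = sqrt(xi^2 cos^2(Theta) + sin^2(Theta))."""
    return np.sqrt(xi**2 * np.cos(theta)**2 + np.sin(theta)**2)

def gamma_angle(theta, xi):
    """Eq. (15): inclination of the rescaled field vector in the (x'z') plane."""
    return -np.arctan(np.tan(theta) / xi)

def sigma_xx_lab(sxx_s, syy_s, theta, phi, xi):
    """Eq. (18): lab-frame sigma_xx from the rescaled, rotated components."""
    e = eta(theta, xi)
    return (e**2 * sxx_s * np.cos(phi)**2 + xi**2 * syy_s * np.sin(phi)**2) / xi**2

# At Theta = 0 the result reduces to sigma'_{s,xx} for any Phi (in-plane isotropy)
print(sigma_xx_lab(1.0, 1.0, theta=0.0, phi=0.7, xi=4.0))   # -> 1.0
print(np.degrees(gamma_angle(np.pi/3, 4.0)))                # tilt angle gamma in degrees
```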
We note here that the anisotropy axis is now inclined by the polar angle \(\gamma\) in the \((x^{\prime}z^{\prime})\) plane. The latter means that the axis's azimuthal angle is zero. The components of the conductivity tensor, in general, should depend on the Euler angles.
Next, we realize that the conductivity tensor components \(\sigma^{\prime}_{s,xx}\) and \(\sigma^{\prime}_{s,yy}\) ought to depend on the component of the vector determining the direction of the anisotropy axis (anisotropy vector). However, the only geometric difference between tensor components \(\sigma^{\prime}_{s,xx}\) and \(\sigma^{\prime}_{s,yy}\) is the orientation of the anisotropy vector with respect to \(x^{\prime}y^{\prime}\) plane. Therefore, we have \(\sigma^{\prime}_{s,yy}(\varphi)=\sigma^{\prime}_{s,xx}(\pi/2-\varphi)\), where \(\varphi\) is the azimuthal angle.
Also, we should pay attention to the behavior of conductivity (18) at \(\Theta=0\). In this case, the anisotropy axis coincides with the direction of the magnetic field. In such a situation, the azimuthal angle \(\Phi\) is, strictly speaking, undefined. Therefore, the conductivity tensor is supposed to be independent of \(\Phi\) at \(\Theta=0\). As will be proven in the next section, at \(\Theta=0\) (anisotropy axis aligned along the direction of the magnetic field), we obtain \(\sigma^{\prime}_{s,xx}=\sigma^{\prime}_{s,yy}\), and Eq. (18) implies the relation \(\sigma_{xx}=\sigma^{\prime}_{s,xx}\). The latter is quite natural since in this case the system effectively becomes isotropic in the \((xy)\) plane.
### Debye screening
The inverse Debye screening length is determined according to the standard equation
\[\kappa^{2}=\frac{4\pi e^{2}}{\epsilon}\frac{dn(H)}{d\mu}, \tag{19}\]
where \(n\) is the particle density determined by the chemical potential \(\mu\). The easiest way to compute the particle density at the applied magnetic field is to switch to the rescaled rotated basis. The rescaled density is related to the initial one via the transform: \(n_{s}=\xi^{2}n\) (see discussion of Eq. (12)). As a result, the Debye screening is determined as
\[\kappa^{2}=\frac{1}{\xi^{2}}\frac{dn_{s}(H_{s})}{d\mu}\equiv\frac{1}{\xi^{2}} \frac{2\alpha}{\pi l_{H_{s}}^{2}}, \tag{20}\]
where \(l_{H_{s}}^{2}=c/(eH_{s})\equiv c/(eH\xi\eta)\) is the squared magnetic length in the rescaled coordinate system.
The rescaled disorder potential leads to the modified disorder correlation function
\[g(p)=\frac{16\pi^{2}n_{\rm imp}\xi^{2}\alpha^{2}v_{\parallel}^{2}}{(\xi^{2}p_ {\parallel}^{2}+p_{\perp}^{2}+\xi^{2}\kappa^{2})^{2}}. \tag{21}\]
Now, we are ready to compute the conductivity. To this end, we are going to employ the Kubo formalism. As usual, the conductivity contains two distinct contributions: the one, which comes from the separate averaging of Green's functions, and the vertex correction.
## III Conductivity \(\sigma_{xx}\)
### Kubo expressions and Green's functions
The expression for conductivity is given by the standard Kubo formula (see e.g. Ref. [31])
\[\begin{split}\sigma_{xx}&=2e^{2}v_{\parallel}^{2} \int\frac{d\varepsilon d\mathbf{p}dx^{\prime}}{(2\pi)^{3}}\frac{df(\varepsilon )}{d\varepsilon}\text{Tr}\bigg{[}\langle\text{Im}G_{11}^{R}(x,x^{\prime}; \varepsilon,\mathbf{p})\text{Im}G_{22}^{R}(x^{\prime},x;\varepsilon,\mathbf{p })\rangle+\langle\text{Im}G_{22}^{R}(x,x^{\prime};\varepsilon,\mathbf{p}) \text{Im}G_{11}^{R}(x,x^{\prime};\varepsilon,\mathbf{p})\rangle\\ &-\frac{1}{4}\langle\big{[}G_{12}^{R}(x,x^{\prime};\varepsilon, \mathbf{p})-G_{12}^{A}(x,x^{\prime};\varepsilon,\mathbf{p})\big{]}\big{[}G_{1 2}^{R}(x^{\prime},x;\varepsilon,\mathbf{p})-G_{12}^{A}(x^{\prime},x; \varepsilon,\mathbf{p})\big{]}\rangle\\ &-\frac{1}{4}\langle\big{[}G_{21}^{R}(x,x^{\prime};\varepsilon, \mathbf{p})-G_{21}^{A}(x,x^{\prime};\varepsilon,\mathbf{p})\big{]}\big{[}G_{2 1}^{R}(x^{\prime},x;\varepsilon,\mathbf{p})-G_{21}^{A}(x^{\prime},x; \varepsilon,\mathbf{p})\big{]}\rangle\bigg{]}.\end{split} \tag{22}\]
Here, angular brackets denote the disorder averaging, and \(f(\varepsilon)\) is the Fermi distribution function. The integration over momentum \(\mathbf{p}\) is performed in the \((p_{y},p_{z})\) plane. The last two lines in Eq. (22) (usually absent in standard analysis) appear owing to the disorder vertex corrections and, as we will see below, do not vanish only in the anisotropic case.
The Green's functions entering (22) are defined as follows
\[\begin{split}& G^{R}(x,x^{\prime};\varepsilon,\mathbf{p})=\sum_{n=0}^ {\infty}S_{n}(x_{p_{y}})G_{n}^{R}(\varepsilon,\mathbf{p}_{n})S_{n}^{\dagger}( x_{p_{y}}^{\prime}),\\ & S_{n}(s)=\begin{pmatrix}\chi_{n}\big{(}s\big{)}&0\\ 0&\chi_{n-1}\big{(}s\big{)}\end{pmatrix},\\ & G_{n}^{R}(\varepsilon,p_{z})=\frac{\varepsilon+v\boldsymbol{\sigma} \cdot\mathbf{p}_{n}}{(\varepsilon+i0)^{2}-\varepsilon_{n}^{2}},\\ & x_{p_{y}}=x-p_{y}l_{H}^{2}.\end{split} \tag{23}\]
Here, \(\chi_{n}\big{(}s\big{)}\) is the normalized oscillator wave function of the \(n\)th state and
\[\mathbf{p}_{n}=(0,\sqrt{2n}/l_{H},p_{z}) \tag{24}\]
is the effective 2D momentum.
### Summation of diagrams
In the ultraquantum limit (\(T\to 0\)), the Fermi function derivative can be substituted by the \(\delta\) function, \(\partial_{\varepsilon}f(\varepsilon)=-\delta(\varepsilon-\mu)\), and the integration over the energy can be explicitly performed. We will be interested in the small chemical potential limit, \(\mu\ll\Omega\). As a result, we discard \(\mu\) in the further computation of \(\sigma_{xx}\) (but we will keep it for the computation of \(\sigma_{xy}\) to obtain a nonvanishing result). As in Abrikosov's study [20], only the zeroth and the first Landau levels contribute to the conductivity.
We need to sum up the diagrams shown in Fig. 3. The details of the derivation are presented in Appendix C.
In the leading log approximation (the precision of Abrikosov's calculation [20]), the conductivity \(\sigma_{xx}\) has the form
\[\sigma_{xx}=\frac{\alpha^{3}}{\Omega^{2}}v_{\parallel}^{3}n_{\text{imp}}\big{[} \cos^{2}\Theta+\xi^{-2}\sin^{2}\Theta\big{]}\ln\frac{1}{\alpha}. \tag{25}\]
As we see, the anisotropy manifests itself in the \(\Theta\) dependence of Eq. (25). At \(\xi=1\), the \(\Theta\) dependence drops out, and Eq. (25) reproduces the famous Abrikosov result for the isotropic WSM [20]. However, conductivity (25) still does not exhibit the \(\Phi\) dependence. This is due to the insufficient precision of the log approximation. The result can be improved by a more accurate computation of the corresponding integrals.
After a simple but rather cumbersome analysis, we arrive at the following expression (see Appendix C)
\[\begin{split}&\sigma_{xx}=\frac{\alpha^{3}}{2\pi\Omega^{2}}v_{ \parallel}^{3}n_{\text{imp}}\big{[}\cos^{2}\Theta+\xi^{-2}\sin^{2}\Theta\big{]} \\ \times&\Bigg{[}\ln\frac{4\pi\xi^{2}}{\alpha e^{C}( \xi+\eta)^{2}}-2\frac{\xi\cos^{2}\Phi+\eta\sin^{2}\Phi}{\xi+\eta}\Bigg{]}, \end{split} \tag{26}\]
where \(\eta\) is defined in Eq. (14) and \(C\) is the Euler-Mascheroni constant.
Expression (26) is the \(\alpha\) expansion of the integrals entering Kubo formula (22), where the disorder averaging is performed with correlation function (21). The omitted terms in the computation of the integrals entering Kubo expression (22) are of the order of \(\mathcal{O}(\alpha\ln\alpha)\). This is exactly the precision with which we computed the polarization operator in Coulomb potential (8), which makes the whole derivation self-consistent. The plots with the \(\Theta\) and \(\Phi\) dependence of the magnetoconductivity are presented in Figs. 4 and 5.
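For concreteness, Eq. (26) (together with Eq. (14) for \(\eta\)) can be evaluated with a few lines of code; the sketch below uses dimensionless, illustrative parameter values that are assumptions rather than material data.

```python
import numpy as np

EULER_C = 0.5772156649015329   # Euler-Mascheroni constant

def sigma_xx(theta, phi, xi, alpha, n_imp=1.0, v_par=1.0, Omega=1.0):
    """Transverse conductivity of Eq. (26), beyond the leading-log approximation."""
    eta = np.sqrt(xi**2 * np.cos(theta)**2 + np.sin(theta)**2)        # Eq. (14)
    angular = np.cos(theta)**2 + np.sin(theta)**2 / xi**2
    log_term = np.log(4*np.pi*xi**2 / (alpha * np.exp(EULER_C) * (xi + eta)**2))
    sub_log = 2 * (xi*np.cos(phi)**2 + eta*np.sin(phi)**2) / (xi + eta)
    return alpha**3 * v_par**3 * n_imp / (2*np.pi*Omega**2) * angular * (log_term - sub_log)

# Illustrative evaluation at xi = 4, alpha = 0.05 (dimensionless units)
print(sigma_xx(theta=np.pi/3, phi=0.0, xi=4.0, alpha=0.05))
```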
Figure 4: \(\Phi\) dependence of the conductivity \(\sigma_{xx}\) at different values of the polar angle \(\Theta\). Here, \(\Theta\) diminishes in \(\pi/24\) steps (see curves from 1 to 5); \(\Theta_{n}=\frac{\pi}{2}-(n-1)\frac{\pi}{24}\). The plots are drawn at the realistic values of \(\xi=4\) (Cd\({}_{3}\)As\({}_{2}\)) and of the fine structure constant \(\alpha=0.05\).
Figure 3: First-order contributions to the conductivity \(\sigma_{xx}\).
## IV Magnetoresistance
The expression for magnetoresistance reads
\[\rho_{xx}=\frac{\sigma_{xx}}{\sigma_{xx}^{2}+\sigma_{xy}^{2}}. \tag{27}\]
Two terms entering the denominator of Eq. (27) are not of the same order: \(\sigma_{xx}\) is proportional to the disorder strength, while the first term in disorder expansion of \(\sigma_{xy}\) is disorder independent. We are going to see that for not very highly compensated WSMs (see the exact condition below), the condition \(\sigma_{xx}\ll\sigma_{xy}\) is always satisfied.
### Hall conductivity \(\sigma_{xy}\)
The Hall conductivity includes the anomalous and normal contributions. The full conductivity is disorder independent in the lowest order of the perturbation theory [20; 27]. The expression, relating the Hall conductivity in the initial and rotated and rescaled basis follows from Eq. (17) and reads
\[\sigma_{xy}=\frac{1}{\xi}\sigma_{xy,sc}^{\prime}\sqrt{\xi^{2}\cos^{2}\Theta+ \sin^{2}\Theta}. \tag{28}\]
The expression for the Hall conductivity in the ultraquantum limit in the isotropic system can be taken from, e.g. Ref. [20]. We have
\[\sigma_{xy}=\sqrt{\cos^{2}\Theta+\xi^{-2}\sin^{2}\Theta}\frac{\alpha\mu}{4\pi^ {2}}. \tag{29}\]
### Computation of the magnetoresistance
We need to express the Hall conductivity (29) via the charge carrier density. In the scaled rotated basis it is given by the standard expression [20]: \(n_{s}=\Omega_{s}^{2}\mu/(4\pi^{2}v_{\parallel})\), where \(\Omega_{s}^{2}=2eH_{s}v_{\parallel}/c\) is defined through the magnetic field in the rescaled coordinate basis.
Using the relation between magnetic fields (14), we obtain the following relation for the charge carrier density
\[n_{0}=\frac{\Omega^{2}\mu}{4\pi^{2}v_{\parallel}}\sqrt{\cos^{2}\Theta+\xi^{-2 }\sin^{2}\Theta}. \tag{30}\]
We see that the condition \(\sigma_{xx}\ll\sigma_{xy}\) is met as long as \(\alpha^{2}n_{\rm imp}\ll n_{0}\). In the typical situation, the electroneutrality condition entails \(n_{\rm imp}\sim n_{0}\); therefore \(\sigma_{xx}\ll\sigma_{xy}\) is always satisfied.
Finally, plugging in (30) into (29) and (27), we obtain the following expression for the magnetoresistance
\[\begin{split}\rho_{xx}=\frac{\Omega^{2}n_{\rm imp}v_{\parallel }}{n_{0}^{2}}\big{[}\cos^{2}\Theta+\xi^{-2}\sin^{2}\Theta\big{]}\\ \times\Bigg{[}\ln\frac{4\pi\xi^{2}}{\alpha e^{C}(\xi+\eta)^{2}}-2 \frac{\xi\cos^{2}\Phi+\eta\sin^{2}\Phi}{\xi+\eta}\Bigg{]}.\end{split} \tag{31}\]
We see that the anisotropy is clearly pronounced in realistic WSMs (like Cd\({}_{3}\)As\({}_{2}\), where \(\xi^{2}\approx 16\gg 1\)). The ratio of the resistances for the anisotropy axis oriented parallel and perpendicular to the magnetic field \(\mathbf{H}\) scales as \(\xi^{2}\)
\[\frac{\rho_{xx}(\mathbf{H}\parallel\mathbf{n}_{0})}{\rho_{xx}(\mathbf{H} \perp\mathbf{n}_{0})}=\xi^{2}+\mathcal{O}\Big{(}\frac{1}{\ln\alpha}\Big{)}. \tag{32}\]
Expressions (26), (31), and (32) are the main results of our paper.
It is quite interesting to point out once more that the azimuthal angle \(\Phi\) dependence of the resistance manifests itself only in the term subleading to the main logarithm. This is a consequence of averaging over the long-range Coulomb disorder.
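A short numerical sketch of Eqs. (31) and (32) is given below; the unit values of \(\Omega\), \(v_{\parallel}\), \(n_{\rm imp}\) and \(n_{0}\) are inserted purely for illustration. It shows how the parallel/perpendicular resistance ratio approaches \(\xi^{2}\) up to the \(\mathcal{O}(1/\ln\alpha)\) corrections.

```python
import numpy as np

EULER_C = 0.5772156649015329

def rho_xx(theta, phi, xi, alpha, n_imp=1.0, n0=1.0, v_par=1.0, Omega=1.0):
    """Magnetoresistance of Eq. (31)."""
    eta = np.sqrt(xi**2 * np.cos(theta)**2 + np.sin(theta)**2)
    angular = np.cos(theta)**2 + np.sin(theta)**2 / xi**2
    bracket = (np.log(4*np.pi*xi**2 / (alpha * np.exp(EULER_C) * (xi + eta)**2))
               - 2 * (xi*np.cos(phi)**2 + eta*np.sin(phi)**2) / (xi + eta))
    return Omega**2 * n_imp * v_par / n0**2 * angular * bracket

# Eq. (32): H parallel to n0 (Theta = 0) versus H perpendicular to n0 (Theta = pi/2)
xi, alpha = 4.0, 0.05
ratio = rho_xx(0.0, 0.0, xi, alpha) / rho_xx(np.pi/2, 0.0, xi, alpha)
print(ratio, "vs xi^2 =", xi**2)     # equal up to O(1/ln(alpha)) corrections
```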
## V Discussion
We studied the magnetoresistance of a WSM with an axial anisotropy. We found that the magnetoresistance is strongly renormalized as a function of the polar and azimuthal angles between the anisotropy axis and the applied voltage plane. Some remarks are relevant here. First, we computed the contribution to conductivity from a single Weyl node. If the internodal scattering by disorder can be discarded, the total conductivity can be found by multiplying our Eq. (31) by the number of Weyl nodes in the Brillouin zone. The internodal scattering can be neglected if the momentum transferred by disorder is much smaller than the distance between the adjacent Weyl nodes in momentum space. For Cd\({}_{3}\)As\({}_{2}\) this distance reads [33]: \(2k_{0}=0.012\) Å\({}^{-1}\), while for TaAs [34] it is \(2k_{0}=0.0183\) Å\({}^{-1}\). On the other hand, the inverse Debye length in a typical magnetotransport experiment with field \(H\sim 1\) T is \(\kappa\sim 10^{-4}\) Å\({}^{-1}\) (see Eq. (8) and the comment below it). Therefore, indeed, we have \(2k_{0}\gg\kappa\) and the internodal scattering can be safely neglected.
As was argued in Ref. [35], at temperatures \(T\gtrsim n_{\rm imp}^{1/3}v\) the electron-electron interaction starts to dominate the transport in WSMs. For a typical magnetotransport experiment [36], the charge carrier density is \(n\sim 10^{18}\) cm\({}^{-3}\), which yields the limiting temperature \(T\lesssim 360\) K for the transport to be dominated by the Coulomb impurity scattering. Therefore, we conclude that our findings should be valid in the majority of experiments dealing with magnetoresistance measurements in WSMs.
###### Acknowledgements.
The work was supported by the Russian Science Foundation (project No. 21-12-00254, [https://rscf.ru/en/project/21-12-00254/](https://rscf.ru/en/project/21-12-00254/)). The work of A.S. Dotdaev in the part concerning the numerical calculations was supported by Grant No. K2-2022-025 in the framework of the Increase Competitiveness Program of NUST MISIS and by the Foundation for the Advancement of Theoretical Physics and Mathematics "Basis" (project No. 22-1-1-24-1).
## Appendix A Derivation of the anisotropic Hamiltonian
In the rotated coordinate system, in which the \(z^{\prime}\) axis is aligned along the anisotropy axis \({\bf n}_{0}\), we have the self-explanatory expression for the Hamiltonian
\[H_{0} = \int d{\bf r}^{\prime}\psi^{\prime\dagger}({\bf r}^{\prime})h^{\prime}(p^{\prime})\psi^{\prime}({\bf r}^{\prime}),\] \[h^{\prime}(p^{\prime}) = \left[v_{\perp}(\sigma_{x}p_{x^{\prime}}+\sigma_{y}p_{y^{\prime}})+v_{\parallel}\sigma_{z}p_{z^{\prime}}\right], \tag{10}\]
where \(\psi^{\prime}({\bf r}^{\prime})\) are spinors in the rotated basis. We want to get back to the _laboratory_ system, in which the anisotropy axis \(z^{\prime}\) (\({\bf n}_{0}\)) is inclined at Euler angles \((\Phi,\Theta)\). The change from the laboratory system to the inclined is achieved by the rotation about the \(z\) axis by angle \(\Phi\) and about the new \(y\) axis by \(\Theta\). It is the standard Euler matrix relating vectors via \(p=Rp^{\prime}\), where
\[R=\left(\begin{array}{ccc}\cos\Theta\cos\Phi&-\sin\Phi&\sin\Theta\cos\Phi\\ \cos\Theta\sin\Phi&\cos\Phi&\sin\Theta\sin\Phi\\ -\sin\Theta&0&\cos\Theta\end{array}\right). \tag{11}\]
Therefore, the transformed Hamiltonian is rewritten as \(h(p^{\prime})\equiv h(R^{-1}p)\). We also need to transform the spinors according to the 2D representation of the rotation group: \(\psi^{\prime}=U\psi\), where the unitary matrix \(U=U_{y}(\Theta)U_{z}(\Phi)\) is the product of unitary rotation matrices \(U_{\bf n}(\varphi)=\exp[i\mathbf{\sigma}\cdot{\bf n}\varphi/2]\) by angle \(\Phi\) about \(z\) and by \(\Theta\) about the new \(y\) axis
\[U=\left(\begin{array}{ccc}e^{\frac{i\Phi}{2}}\cos\frac{\Theta}{2}&e^{-\frac{ i\Phi}{2}}\sin\frac{\Theta}{2}\\ -e^{\frac{i\Phi}{2}}\sin\frac{\Theta}{2}&e^{-\frac{i\Phi}{2}}\cos\frac{\Theta}{ 2}\end{array}\right). \tag{12}\]
Taking into account the fact that the Jacobian of the rotation is equal to unity (\(d{\bf r}^{\prime}=d{\bf r}\)), we see that the transformed Hamiltonian (10) becomes
\[H_{0} = \int d{\bf r}\,\psi^{\dagger}({\bf r})h(p)\psi({\bf r}), \tag{13}\] \[h(p) = U^{\dagger}h^{\prime}(R^{-1}p)U.\]
Taking expression (10) for the Hamiltonian \(2\times 2\) matrix \(h^{\prime}(p^{\prime})\) and performing direct substitution of \(R\) from (11) and multiplication by matrices (12), we arrive at expression (1).
## Appendix B Transformation of the conductivity tensor
As was pointed out in Section II.3, the transformation due to rotation of the conductivity tensor is achieved via transform (11) with matrix (11). The rescaling of coordinates \(z=z^{\prime},\ (x,y)=\xi(x^{\prime},y^{\prime})\) leads to the volume measure transform: \(d{\bf r}=\xi^{2}d{\bf r}^{\prime}\). We require that the particle number be given by the same expression
\[N=\int\psi^{\prime\dagger}({\bf r}^{\prime})\psi^{\prime}({\bf r}^{\prime})d{ \bf r}^{\prime} \tag{14}\]
as before the rescaling. Hence, we postulate the scaling of the \(\psi\) in such a way that expression (14) remains invariant
\[\int d{\bf r}\psi^{\dagger}({\bf r})\psi({\bf r})=\int\psi^{\dagger}(Rr^{ \prime})\psi(Rr^{\prime})\xi^{2}d{\bf r}^{\prime}, \tag{15}\]
which entails the \(\psi\)-operator scaling law (9).
To understand how the conductivity tensor is transformed under the rescaling, we need to write the transformations for the electric field and current density. The electric field transformation law can be found from the requirement that the part of the Hamiltonian responsible for the coupling to the external electromagnetic field should remain unchanged (since the very definition of the conductivity is the response of the current to the external potential).
The corresponding potential affected by the rescaling enters the transversal part of canonical momentum and reads
\[-i\Big{(}\frac{\partial}{\partial{\bf r}_{\perp}}-\frac{ie}{c}{\bf A}_{\perp} \Big{)}=-\frac{i}{\xi}\Big{(}\frac{\partial}{\partial{\bf r}_{s,\perp}}-i \frac{e}{c}\xi{\bf A}_{\perp}\Big{)}. \tag{16}\]
Looking at (16), we immediately establish the scaling transformation for the vector potential
\[{\bf A}_{s,\perp}=\xi{\bf A}_{\perp},\quad{\bf A}_{s,\parallel}={\bf A}_{ \parallel}. \tag{17}\]
Using definition \({\bf E}=-c^{-1}\partial_{t}{\bf A},\ {\bf H}=\nabla\times{\bf A}\), we find transformation scaling rules for the electric and magnetic fields
\[{\bf E}_{s,\perp}=\xi{\bf E}_{\perp},\quad{\bf E}_{s,\parallel}={\bf E}_{\parallel}. \tag{18}\] \[{\bf H}_{s,\perp}=\xi{\bf H}_{\perp},\quad{\bf H}_{s,\parallel}= \xi^{2}{\bf H}_{\parallel}.\]
Similarly, for the electric field, we find from (101):
\[E=SE_{s},\ S\equiv\text{diag}(\xi^{-1},\xi^{-1},1). \tag{102}\]
Finally, we determine the current density transformation law from its definition: \(\mathbf{j}=n\mathbf{v}\). Recalling operator definition of the density and using (9), we write \(n_{s}=\psi_{s}^{\dagger}\psi_{s}=\xi^{2}n\). For the current density, we obtain
\[\begin{split}\mathbf{j}_{s,\perp}=\xi\mathbf{j}_{\perp},& \quad\mathbf{j}_{s,\parallel}=\xi^{2}\mathbf{j}_{\parallel}\ \Rightarrow\\ j=S_{1}j_{s},&\quad S_{1}=\text{diag}(\xi^{-1}, \xi^{-1},\xi^{-2}).\end{split} \tag{103}\]
Now, from the definition of the conductivity tensor: \(j_{i}=\sigma_{ik}E_{k}\) with the help of Eqs. (101) and (103), we determine the transformation law relating initial and scaled conductivities
\[S_{1}j_{s}=\sigma SE_{s}\ \Rightarrow\sigma_{s}=S_{1}^{-1}\sigma S\ \Rightarrow\sigma=S_{1}\sigma_{s}S^{-1}. \tag{104}\]
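The transformation law above is a plain matrix relation and can be checked mechanically; a short numerical sketch (with an arbitrary sample tensor \(\sigma_{s}\) chosen only for illustration) is:

```python
import numpy as np

xi = 4.0
S  = np.diag([1/xi, 1/xi, 1.0])       # electric-field scaling, E = S E_s
S1 = np.diag([1/xi, 1/xi, 1/xi**2])   # current-density scaling, j = S1 j_s

sigma_s = np.array([[2.0,  0.5, 0.0],
                    [-0.5, 2.0, 0.0],
                    [0.0,  0.0, 3.0]])            # an arbitrary rescaled tensor
sigma = S1 @ sigma_s @ np.linalg.inv(S)           # sigma = S1 sigma_s S^{-1}
print(sigma)
```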
## Appendix C Analytical expressions for diagrams, beyond the log approximation
### Expression for the conductivity
In this section, the parameters \(l_{H}\) and \(\Omega^{2}\) refer to the rescaled quantities. Using the orthogonality relations for the Hermite polynomials, one easily convinces oneself that only the zeroth and the first LLs yield non-exponentially suppressed expressions corresponding to diagrams (a)-(d) represented in Fig. 3
\[\begin{split}&(a):\quad\int\frac{dp_{z}}{2\pi}\text{Im}G_{0,11}(p_{z })G_{1,21}^{R}(p_{z})G_{1,12}^{R}(p_{z})\int\frac{d\mathbf{q}}{(2\pi)^{3}} \text{Im}G_{0,11}^{R}(p_{z}+q_{z})S_{0}(\mathbf{q})g(\mathbf{q}),\\ & S_{0}(\mathbf{q})=\int\frac{dp_{y}dx_{1}dx_{2}dx^{\prime}}{2 \pi}e^{iq_{x}(x_{1}-x_{2})}\chi_{0}^{2}(x_{p_{y}})\chi_{0}^{2}(x_{p_{y}}^{ \prime})\chi_{1}(x_{1,p_{y}})\chi_{0}(x_{1,p_{y}+q_{y}})\chi_{0}(x_{2,p_{y}+q_ {y}})\chi_{1}(x_{2,p_{y}}).\end{split} \tag{105}\]
The diagrams in Figs. 3(a) and 3(b) lead to identical expressions. The diagrams in Figs. 3(c) and 3(d) read
\[\begin{split}&(c):\quad\int\frac{dp_{z}}{2\pi}\int\frac{d\mathbf{q }}{(2\pi)^{3}}\text{Im}G_{0,11}(p_{z})\text{Im}G_{0,11}^{R}(p_{z}+q_{z})G_{1, 12}(p_{z}+q_{z})G_{1,12}(p_{z})S_{1}(\mathbf{q})g(\mathbf{q}),\\ & S_{1}(\mathbf{q})=\!\!\int\frac{dp_{y}dx_{1}dx_{2}dx^{\prime}}{2 \pi}\chi_{0}^{2}(x_{p_{y}})\chi_{0}^{2}(x_{p_{y}+q_{y}}^{\prime})\chi_{0}(x_{1,p_{y}})\chi_{1}(x_{1,p_{y}+q_{y}})e^{iq_{x}(x_{1}-x_{2})}\chi_{0}(x_{2,p_{y}+q _{y}})\chi_{1}(x_{2,p_{y}}).\end{split} \tag{106}\]
\[\begin{split}&(d):\quad\int\frac{dp_{z}}{2\pi}\int\frac{d\mathbf{q }}{(2\pi)^{3}}\text{Im}G_{0,11}(p_{z})\text{Im}G_{0,11}^{R}(p_{z}+q_{z})G_{1, 21}(p_{z}+q_{z})G_{1,21}(p_{z})S_{2}(\mathbf{q})g(\mathbf{q}),\\ & S_{2}(\mathbf{q})=\!\!\int\frac{dp_{y}dx_{1}dx_{2}dx^{\prime}}{2 \pi}\chi_{0}^{2}(x_{p_{y}}^{\prime})\chi_{0}^{2}(x_{p_{y}+q_{y}})\chi_{0}(x_{1,p_{y}})\chi_{1}(x_{1,p_{y}+q_{y}})e^{-iq_{x}(x_{1}-x_{2})}\chi_{0}(x_{2,p_{y}+ q_{y}})\chi_{1}(x_{2,p_{y}}).\end{split} \tag{107}\]
The expressions for form-factors \(S_{0,1,2}(\mathbf{q})\) are easily computed using the relations for the Hermite polynomials. We have
\[S_{0}(\mathbf{q})=\frac{1}{4\pi}e^{-q_{\perp}^{2}l_{H}^{2}/2}q_{\perp}^{2}, \quad S_{1,2}(\mathbf{q})=\frac{1}{4\pi}e^{-q_{\perp}^{2}l_{H}^{2}/2\pm 2i\varphi}q_{ \perp}^{2}, \tag{108}\]
where \(\mathbf{q}_{\perp}=(q_{x},q_{y})\) and \(\varphi\) is its direction in the \(xy\) plane.
We also see that due to the presence of \(\text{Im}G_{0,11}(p_{z})=-\pi\delta(v_{\parallel}p_{z})\) and \(\text{Im}G_{0,11}(p_{z}+q_{z})\), the integration over momenta \(p_{z}\) and \(q_{z}\) is trivial, leading effectively \(p_{z}=q_{z}=0\). As a result, the expressions for diagrams (a)-(d) are simplified
\[(a+b+c+d)=\frac{1}{4\Omega^{2}}\frac{1}{4\pi}\int\frac{d\mathbf{q}_{\perp}}{(2 \pi)^{2}}g(\mathbf{q}_{\perp})e^{-q_{\perp}^{2}l_{H}^{2}/2}q_{\perp}^{2}\big{(} 2-e^{2i\varphi}-e^{-2i\varphi}\big{)}. \tag{109}\]
Here, \(g(\mathbf{q}_{\perp})\) is the potential correlation function taken at momentum \(q_{z}=0\). Using Eq. (21), we write
\[g(\mathbf{q}_{\perp})=\frac{16\pi^{2}n_{\text{imp}}\xi^{2}\alpha^{2}v_{ \parallel}^{2}}{\big{(}q_{\perp}^{2}[(\xi^{2}-1)\sin^{2}\gamma\cos^{2}(\varphi- \varphi_{0})+1]+\xi^{2}\kappa^{2}\big{)}^{2}}. \tag{110}\]
Here, \(\varphi_{0}\) is the azimuthal angle of the anisotropy axis. We see that this angle is in fact equal to zero in the rotated coordinate basis, since it belongs to the \(x^{\prime}z^{\prime}\) plane. However, we will keep it arbitrary since we are going to need it for the computation of \(\sigma_{yy}\) component.
Next, we are able to perform the integrations over \(\varphi\) and over \(q_{\perp}\) exactly.
To go beyond the leading log approximation, we need to perform exact integration over \(\varphi\) in (100). We use the following suitable integrals
\[\int\limits_{-\pi}^{\pi}\frac{d\varphi}{(\cos^{2}\varphi+a^{2})^{2}}=\frac{ \pi}{a^{3}}\frac{2a^{2}+1}{(a^{2}+1)^{3/2}},\quad\int\limits_{-\pi}^{\pi}\frac {d\varphi\cos 2\varphi}{(\cos^{2}\varphi+a^{2})^{2}}=-\frac{\pi}{a^{3}}\frac{1}{(a ^{2}+1)^{3/2}},\quad a^{2}=\frac{q_{\perp}^{2}+\xi^{2}\kappa^{2}}{q_{\perp}^{2 }(\xi^{2}-1)\sin^{2}\gamma}. \tag{101}\]
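These two angular integrals can be verified numerically in a few lines (the value of \(a\) below is an arbitrary test value, not a physical parameter):

```python
import numpy as np
from scipy.integrate import quad

a = 1.7  # arbitrary test value

num1, _ = quad(lambda phi: 1.0 / (np.cos(phi)**2 + a**2)**2, -np.pi, np.pi)
ana1 = np.pi / a**3 * (2*a**2 + 1) / (a**2 + 1)**1.5

num2, _ = quad(lambda phi: np.cos(2*phi) / (np.cos(phi)**2 + a**2)**2, -np.pi, np.pi)
ana2 = -np.pi / a**3 / (a**2 + 1)**1.5

print(num1, ana1)  # the two values agree
print(num2, ana2)  # the two values agree
```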
Then, we have for the (100)
\[(a+b+c+d)=\frac{n_{\rm imp}\xi^{2}\alpha^{2}}{\Omega^{2}(1+(\xi^{2}-1)\sin^{ 2}\gamma)^{3/2}}\int\limits_{0}^{\infty}q_{\perp}^{3}dq_{\perp}e^{-q_{\perp}^ {2}l_{H}^{2}/2}\frac{q_{\perp}^{2}\big{[}1+(\xi^{2}-1)\sin^{2}\gamma\cos^{2} \varphi_{0}\big{]}+\xi^{2}\kappa^{2}}{(q_{\perp}^{2}+\kappa^{2}\xi^{2})^{3/2} \big{(}q_{\perp}^{2}+\frac{\kappa^{2}\xi^{2}}{1+(\xi^{2}-1)\sin^{2}\gamma} \big{)}^{3/2}}. \tag{102}\]
Let us introduce a new integration variable: \(q_{\perp}^{2}=q_{0}^{2}s\), \(q_{0}^{2}=\frac{\xi^{2}\kappa^{2}}{1+(\xi^{2}-1)\sin^{2}\gamma}\). Using the relation \(\kappa^{2}l_{H}^{2}=2\alpha/(\pi\xi^{2})\) (in the exponential function), and the handy relation \(1+(\xi^{2}-1)\sin^{2}\gamma=\xi^{2}/\eta^{2}\), we arrive at the following dimensionless integral, convenient for analysis
\[(a+b+c+d)=\frac{n_{\rm imp}\eta^{3}\alpha^{2}}{2\xi\Omega^{2}}I(\varphi_{0}), \quad I(\varphi_{0})=\int\limits_{0}^{\infty}\frac{s[1+(\xi^{2}-1)\sin^{2} \gamma\cos^{2}\varphi_{0}]+\xi^{2}/\eta^{2}}{(s+1)^{3/2}(s+\xi^{2}/\eta^{2})^{ 3/2}}se^{-\alpha\eta^{2}s/(\pi\xi^{2})}\,ds. \tag{103}\]
The integral in (103) can be computed for any value of \(\varphi_{0}\). However, we are going to need it at only two values: \(\varphi_{0}=0\) (for \(\sigma^{\prime}_{s,xx}\)) and \(\varphi_{0}=\pi/2\) (for \(\sigma^{\prime}_{s,yy}\)). For brevity, let us denote \(a=\xi/\eta\geq 1\). We are going to estimate them beyond log accuracy using the fact that \(\alpha\ll 1\)
\[I(0)=a^{2}\int\limits_{0}^{\infty}\frac{s}{(s+1)^{1/2}(s+a^{2})^{3/2}}e^{- \alpha\eta^{2}s/(\pi\xi^{2})}\,ds,\quad I\Big{(}\frac{\pi}{2}\Big{)}=\int \limits_{0}^{\infty}\frac{s}{(s+1)^{3/2}(s+a^{2})^{1/2}}e^{-\alpha\eta^{2}s/( \pi\xi^{2})}\,ds. \tag{104}\]
For the conductivity, we have the following suitable expression
\[\sigma_{xx}=\frac{1}{a^{2}}\sigma^{\prime}_{s,xx}\cos^{2}\Phi+\sigma^{\prime} _{s,yy}\sin^{2}\Phi=\frac{\alpha^{3}v_{\parallel}^{3}}{2\pi}\frac{n_{\rm imp} \eta^{3}}{\xi\Omega^{2}}\Big{[}\frac{1}{a^{2}}I(0)\cos^{2}\Phi+I\Big{(}\frac{ \pi}{2}\Big{)}\sin^{2}\Phi\Big{]}. \tag{105}\]
### Computation of the integrals
In this case, both integrals entering (101) can be represented by the following expansion in \(\alpha\): \(\ln\frac{1}{\alpha}+{\rm const}+\mathcal{O}(\alpha)\). The integral accumulates its value over the span \(s\lesssim\alpha^{-1}\), which means the momentum is \(q\lesssim l_{H}^{-1}\). We are not interested in the \(\mathcal{O}(\alpha)\) terms. However, we will extract the const terms in both integrals since they carry the information on the \(\Phi\)-dependence of the conductivity. Both integrals can be represented as
\[\begin{split}& I(0)=a^{2}\big{[}J-a^{2}I_{0}(\alpha)\big{]},\quad I \Big{(}\frac{\pi}{2}\Big{)}=J-I_{\pi/2}(\alpha),\quad J=\int\limits_{0}^{ \infty}\frac{1}{(s+1)^{1/2}(s+a^{2})^{1/2}}e^{-\alpha\eta^{2}s/(\pi\xi^{2})} \,ds,\\ & I_{0}(\alpha)=\int\limits_{0}^{\infty}\frac{1}{(s+1)^{1/2}(s+a^ {2})^{3/2}}e^{-\alpha\eta^{2}s/(\pi\xi^{2})}\,ds,\quad I_{\pi/2}(\alpha)=\int \limits_{0}^{\infty}\frac{1}{(s+1)^{3/2}(s+a^{2})^{1/2}}e^{-\alpha\eta^{2}s/( \pi\xi^{2})}\,ds.\end{split} \tag{106}\]
Both integrals \(I_{0}(\alpha)\) and \(I_{\pi/2}(\alpha)\) have regular limits at \(\alpha\to 0\). Since we are not interested in \(\mathcal{O}(\alpha)\) terms, we may set \(\alpha=0\) in them. We immediately obtain
\[I_{0}(0)=\frac{2}{a(a+1)},\quad I_{\pi/2}(0)=\frac{2}{(a+1)}. \tag{107}\]
It is convenient to transform integral \(J\) as
\[J\equiv\int\limits_{0}^{\infty}\Big{(}\frac{1}{(s+1)^{1/2}(s+a^{2})^{1/2}}-\frac{ 1}{s+1}\Big{)}e^{-\alpha\eta^{2}s/(\pi\xi^{2})}\,ds+\int\limits_{0}^{\infty} \frac{1}{s+1}e^{-\alpha\eta^{2}s/(\pi\xi^{2})}\,ds. \tag{100}\]
The first term in (101) is the convergent one, and one may set \(\alpha=0\) in it; this gives \(\ln\big{(}4(a+1)^{-2}\big{)}\). The second integral is easily computed using integration by parts. We obtain
\[J=\ln\frac{4\pi\xi^{2}}{(a+1)^{2}\alpha\eta^{2}e^{C}}+\mathcal{O}(\alpha), \tag{101}\]
where \(C\) is the Euler-Mascheroni constant.
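The \(\alpha\to 0\) limits of \(I_{0}\) and \(I_{\pi/2}\) and the logarithmic asymptotics of \(J\) quoted above are easy to cross-check numerically; the sketch below uses \(\xi=4\), \(\alpha=0.05\) and an arbitrary polar angle as illustrative inputs.

```python
import numpy as np
from scipy.integrate import quad

EULER_C = 0.5772156649015329
xi, alpha, theta = 4.0, 0.05, np.pi / 3          # illustrative inputs
eta = np.sqrt(xi**2 * np.cos(theta)**2 + np.sin(theta)**2)
a = xi / eta
eps = alpha * eta**2 / (np.pi * xi**2)           # coefficient in the exponent

I0, _   = quad(lambda s: np.exp(-eps*s) / ((s+1)**0.5 * (s+a**2)**1.5), 0, np.inf, limit=200)
Ipi2, _ = quad(lambda s: np.exp(-eps*s) / ((s+1)**1.5 * (s+a**2)**0.5), 0, np.inf, limit=200)
J, _    = quad(lambda s: np.exp(-eps*s) / np.sqrt((s+1) * (s+a**2)), 0, np.inf, limit=200)

print(I0,   2/(a*(a+1)))                         # alpha -> 0 limit
print(Ipi2, 2/(a+1))                             # alpha -> 0 limit
print(J, np.log(4*np.pi*xi**2 / ((a+1)**2 * alpha * eta**2 * np.exp(EULER_C))))
```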
Finally, let us deal with the conductivity tensor. Using expression (18) from the main body of the paper, and changing the rescaled \(\Omega^{2}\to\Omega^{2}\xi\eta\), we write
\[\sigma_{xx}=\frac{\alpha^{3}v_{\parallel}^{3}}{2\pi}\frac{n_{\rm imp}\eta^{2}} {\xi^{2}\Omega^{2}}\Big{[}(J-a^{2}I_{0}(0))\cos^{2}\Phi+(J-I_{\pi/2}(0)\sin^{2 }\Phi)\Big{]}. \tag{102}\]
Plugging (100) into (102), we obtain expression (26) for the conductivity.
|
2303.00954 | Large Deviations for Accelerating Neural Networks Training | Artificial neural networks (ANNs) require tremendous amount of data to train
on. However, in classification models, most data features are often similar
which can lead to increase in training time without significant improvement in
the performance. Thus, we hypothesize that there could be a more efficient way
to train an ANN using a better representative sample. For this, we propose the
LAD Improved Iterative Training (LIIT), a novel training approach for ANN using
large deviations principle to generate and iteratively update training samples
in a fast and efficient setting. This is exploratory work with extensive
opportunities for future work. The thesis presents this ongoing research work
with the following contributions from this study: (1) We propose a novel ANN
training method, LIIT, based on the large deviations theory where additional
dimensionality reduction is not needed to study high dimensional data. (2) The
LIIT approach uses a Modified Training Sample (MTS) that is generated and
iteratively updated using a LAD anomaly score based sampling strategy. (3) The
MTS sample is designed to be well representative of the training data by
including most anomalous of the observations in each class. This ensures
distinct patterns and features are learnt with smaller samples. (4) We study
the classification performance of the LIIT trained ANNs with traditional batch
trained counterparts. | Sreelekha Guggilam, Varun Chandola, Abani Patra | 2023-03-02T04:14:05Z | http://arxiv.org/abs/2303.00954v1 | # Large Deviations for Accelerating Neural Networks Training
###### Abstract
Artificial neural networks (ANNs) require tremendous amount of data to train on. However, in classification models, most data features are often similar which can lead to increase in training time without significant improvement in the performance. Thus, we hypothesize that there could be a more efficient way to train an ANN using a better representative sample. For this, we propose the LAD Improved Iterative Training (LIIT), a novel training approach for ANN using large deviations principle to generate and iteratively update training samples in a fast and efficient setting. This is exploratory work with extensive opportunities for future work. The thesis presents this ongoing research work with the following contributions from this study: (1) We propose a novel ANN training method, LIIT, based on the large deviations theory where additional dimensionality reduction is not needed to study high dimensional data. (2) The LIIT approach uses a Modified Training Sample (MTS) that is generated and iteratively updated using a LAD anomaly score based sampling strategy. (3) The MTS sample is designed to be well representative of the training data by including most anomalous of the observations in each class. This ensures distinct patterns and features are learnt with smaller samples. (4) We study the classification performance of the LIIT trained ANNs with traditional batch trained counterparts.
Large deviations, anomaly detection, high-dimensional data, multivariate time series
2. We present four LAD score based sampling strategies to design the MTS. Obtaining the LAD score based on a large deviations principle is computationally inexpensive. Therefore, one can analyze large and high dimensional datasets without additional dimensionality reduction procedures allowing more accurate and cost effective scoring schema.
3. The use of MTS which is a smaller training sample reduces the cost of computational time significantly for large datasets.
4. We perform an empirical study on publicly available classification benchmark datasets to analyze the performance of the proposed method.
The work presented here is limited to simple classification based neural networks. Future work will include extending it to more complex ANNs.
## 2 Related Work
In this section, we provide a brief overview of sensitivity to training samples and speed of neural network training.
Artificial neural networks are powerful for general classification. However, their excellent performance often depends largely on a huge training set. A large body of research exists that studies the impact of training data size on neural network learning [6, 2]. In particular, it is evident that smaller training data leads to less efficient models. However, the vast computational expense associated with training on large sets of data makes the need to improve training practices essential, especially for online or real-time models.
Many methods exist that try to make neural network training faster. For instance, Wang et al. [7] use batch normalization in deep neural networks to improve the convergence rates. Zhong et al. [8] work on image classification using their agile convolutional neural network SatCNN, designed for quick and effective learning with small convolutional kernels and deep convolutional layers. However, these works are limited to their domain problems and cannot be easily scaled to other data types.
Another way to improve the training speed is to modify the training samples. For instance, studies like Shanker et al. [5] look at the effect of standardization of data on the learning of the neural network. Kavzoglu [3] emphasizes the characteristics of training samples and uses representative training data to improve the classification. These methods, however, fail to study the impact of smaller data on model performance and efficiency.
In this part of the thesis, we propose a novel training strategy that can be generalized across domains. The method is used to replicate the true representation of the training features in a smaller sample which can be in turn used for faster training and convergence. Due to the proper representation of even the most extreme observations, this method ensures faster learning with competitive performance.
## 3 Methodology
The most important aspect of classification models is the adequacy of the representative training samples for each class. Although the size of the training data is of considerable importance, acquiring a large number of representative training data may be impractical where a large number of classes are involved. In particular, since most observations within each true class have similar features, multiple samples add low value in terms of novel information/pattern. In this section, we describe the traditional batch training approach in brief followed by the LAD Improved Iterative Training approach. We present 4 sampling strategies used in the LIIT training and their respective algorithms.
### Definitions and Terminology
Before describing the detailed methodology, we list out the terminology and corresponding definitions that are used for this study.
**Definition 1**.: **LAD Score** is the Large deviations Anomaly Detection (LAD) generated anomaly score for each observation in the data.
**Definition 2**.: **Full-Training Data** is the available complete training dataset for the ANN. It must be noted that only a subset of the Full Training Data might be used to train the ANN in the LIIT approach. Hence we present a different terminology to differentiate it from the training data.
**Definition 3**.: **Batch Training** is the traditional ANN training method using mini-batches of training data.
**Definition 4**.: **Modified Training Sample (MTS)** is a smaller sample generated from the training data using a specific sampling algorithm.
**Definition 5**.: **LAD Improved Iterative Training (LIIT)** is the novel improved batch training approach to train the ANN.
### Classification Neural Network
For this analysis, we look at a basic classification algorithm. Figure 1 shows the architecture of a simple three layer dense neural network.
The model is trained using the full training samples with the convergence criterion set to zero validation loss for 5 epochs, with the maximum number of epochs set to 180. Three different activation functions (ReLU, Tanh, and Softmax) are used for the three consecutive dense layers, respectively. A simple model was chosen to study the proof of concept of the representative sampling strategy presented in this part of the thesis. Further studies are needed to understand the relation between the model choice and training sampling techniques.
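A minimal Keras sketch of such a network is given below; the layer widths, optimizer, and the 10-input/3-class shapes follow the illustration in Figure 1 and are otherwise assumptions rather than the exact configuration used in this study.

```python
import tensorflow as tf

def build_classifier(input_dim=10, n_classes=3):
    """Three dense layers with ReLU, Tanh and Softmax activations, as in Figure 1."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="tanh"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier()
model.summary()
```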
### LAD Improved Iterative Training of The Neural Network
Traditionally, in batch training, the full training data is divided into smaller samples or batches. The ANN learns from each batch sequentially till all the observations from the full training data are exhausted, as demonstrated in Figure 2. In the LIIT training, we iteratively design and update the modified training samples, MTS, from the full training data. At each iteration, we train the ANN using batch training on the MTS till convergence. This partially trained model is then tested on the full training data to identify potential learning flaws. Since the current work is limited to classification models, the learning flaws are the misclassified observations. The misclassified data is then used to derive the updated MTS, which is used to retrain the ANN. The process is illustrated in Figure 3. This is inspired by boosting techniques [4], where the subset creation depends on the previous model. However, unlike in the boosting setting, we retrain the same ANN.
Figure 1: Simple Classification Neural Network: The figure illustrates a dense neural network to classify data into 3 classes. The network takes an input of 10 dimensions and returns scores used to assign each class.
To determine and extract the MTS sample, any sampling algorithm can be used. However, to ensure a good representation, we designed four LAD score based sampling algorithms along with the random sampling approach which is used as a baseline. The following are the sampling strategies used in our analysis:
1. **LAD Anomaly only (Repeated Entry)**: Observations with the highest anomaly scores in each true class are added to the training batch. Multiple copies of the observation can be added over iterations when the model fails to classify them after numerous re-training. See Algorithm 1.
2. **LAD Anomaly + Normal (Unique Entry)**: Equal parts of the high and low anomaly score observations are sampled for each true class. The final training batch contains a unique set of observations with no duplicate entries. See Algorithm 2.
3. **LAD Anomaly only (Unique Entry)**: This is similar to the **LAD Anomaly only (Repeated Entry)** approach. Observations with the highest anomaly scores in each true class are added to the training batch. However, the final training batch contains a unique set of observations with no duplicate entries. See Algorithm 3.
4. **LAD Quantile Samples (Repeated Entry)**: The observations are sampled using different quantiles of the anomaly score for each true class. Multiple copies are maintained in the training batch to ensure weighting from under-represented latent classes within each known true class. See Algorithm 4.
5. **Random**: In this model, we use random sampling from the available data. See Algorithm 5.
For this part, we sample \(\sim 5-6\%\) of the full training data at each iteration that is later added to the modified training sample. We ensure equal weights for all true classes for the analysis. The LIIT approach is implemented with 6 iterations (1 initial and 5 updates) which brings to \(\sim 30\%\) of the full training sample used in the LIIT approach.
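For reference, a compact sketch of the LIIT loop with the anomaly-only sampling of Algorithm 1 is given below; `lad_score` is a hypothetical placeholder for the LAD anomaly scorer, and `c_size`, the epoch counts and the Keras-style `fit`/`predict` interface are assumptions for illustration, not the exact implementation used here.

```python
import numpy as np

def liit_train(model, x_train, y_train, lad_score, n_updates=5, c_size=50,
               epochs_per_iter=30):
    """LIIT sketch: seed the MTS with the least anomalous points of each class, then
    repeatedly retrain and add the most anomalous misclassified points per class."""
    scores = lad_score(x_train, y_train)            # one LAD anomaly score per point
    classes = np.unique(y_train)

    mts = []                                        # indices forming the MTS
    for k in classes:                               # initial MTS (least anomalous points)
        idx_k = np.where(y_train == k)[0]
        mts.extend(idx_k[np.argsort(scores[idx_k])[:c_size]])

    for _ in range(n_updates + 1):                  # 1 initial fit + n_updates re-fits
        sel = np.asarray(mts)
        model.fit(x_train[sel], y_train[sel], epochs=epochs_per_iter, verbose=0)
        y_pred = np.argmax(model.predict(x_train, verbose=0), axis=1)
        wrong = np.where(y_pred != y_train)[0]      # "learning flaws": misclassified points
        for k in classes:                           # add most anomalous errors per class
            idx_k = wrong[y_train[wrong] == k]
            mts.extend(idx_k[np.argsort(scores[idx_k])[-c_size:]])
    return model
```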
## 4 Experiments
In this section, we evaluate the classification performance of the simple neural networks on real data when trained using LAD sub-sampled data. We focus on the performance of the neural networks under different training and sampling settings.
The following experiments have been conducted to study the model:
Figure 2: Mini-Batch Training Algorithm
1. Computational Expense: The LIIT trained ANN model's ability to train on a smaller set of training samples and converge faster is compared to the fully trained model.
2. Classification Performance: The overall performance of the sub-sampled models on multiple benchmark datasets is studied. For this analysis, we consider Area Under the Curve (AUC) as the performance metric to study classification.
3. Stability to Perturbations: Perturbations upto 8% are added to the test data which is used to study the change in performance in all models.
To maintain a fair comparison, the number of epochs is fixed to a maximum count of 180 for the ANN model trained on the full training data (a.k.a. the full model) and 30 per iteration for all the LIIT trained ANNs (totaling 180 epochs for complete training). For each trained ANN, we evaluate performance on 5 independent reruns. The average results are presented for all evaluations.
Figure 3: LIIT Training Algorithm
```
0: Dataset \(X\) of size \((n,d)\), number of iterations \(N_{iter}\), threshold \(th\), number of true classes in the data \(K\), sample size from each class \(c_{size}\), number of iterations \(i_{iter}\), ANN classification model \(model_{iiti}\). Initialization: Split data into \(x_{train},x_{test},x_{val},y_{train},y_{test},y_{val}\) (train, test and validation) Derive LAD score \(ana_{score}\) for all observations in training data i.e. \(ana_{score}=LAD(x_{train},y_{train})\)
1:\(MTS=[]\) (create empty MTS sample indices list)
2:for each class \(k\)do
3: Generate list of indices of all observations in class \(k\), \(ind_{k}\)
4: Subset anomaly scores for each class \[ana_{score_{k}}=ana_{score}[ind_{k}]\]
5: Identify top \(c_{size}\) observations with least anomaly scores and add them samples to the \(MTS\) sample i.e. (most non-anomalous observations)
6:for each iteration \(i\leq i_{iter}\)do
7: Fit the ANN on \(MTS\) using batch training, \[model_{iiti}.fit(x_{train}[MTS],y_{train}[MTS])\]
8: Predict model classification on \(x_{train}\), \(z_{pred}=model_{iiti}.predict(x_{train})\)
9: Identify all miss-classified observations' indices in training data \[err_{inds}=np.where(z_{pred}!=y_{train})\]
10:for each class \(k\)do
11: Identify all miss-classified observations \(ind_{err_{k}}\)
12: Subset anomaly scores for miss-classified data in class \(k\) \[ana_{err_{k}}=ana_{score}[ind_{err_{k}}]\]
13: Identify \(c_{size}\) observations with highest anomaly scores from \(ind_{err_{k}}\) i.e. (most anomalous observations) and add them to \(MTS\) sample.
14:endfor
15:endfor
16:endfor
```
**Algorithm 1** LAD Anomaly only (Repeated Entry)
### 4.1 Datasets
We consider a variety of publicly available benchmark data sets from the UCI-ML repository [1] (see Table 1) for the experimental evaluation. For training, test and validation, the data was randomly split into 80%, 10% and 10% of the data respectively.
#### Computational Time
In this section, we look at the time taken by each ANN to train on the datasets. Since the LIIT trained ANNs use only one-third of the full training data, the training time is evidently lower compared to training the full model. This can be clearly seen in Figure 4.
```
0: Dataset \(X\) of size \((n,d)\), number of iterations \(N_{iter}\), threshold \(th\), number of true classes in the data \(K\), sample size from each class \(c_{size}\), number of iterations \(i_{iter}\), ANN classification model \(model_{limit}\). Initialization: Split data into \(x_{train},x_{test},x_{val},y_{train},y_{test},y_{val}\) (train, test and validation) Derive LAD score \(ana_{score}\) for all observations in training data i.e. \(ana_{score}=LAD(x_{train},y_{train})\)
1:\(MTS=[]\) (create empty MTS sample indices list)
2:for each class \(k\)do
3: Generate list of indices of all observations in class \(k\), \(ind_{k}\)
4: Subset anomaly scores for each class \[ana_{score_{k}}=ana_{score}[ind_{k}]\]
5: Identify top \(c_{size}\) observations with least anomaly scores and add them samples to the \(MTS\) sample i.e. (most non-anomalous observations)
6:for each iteration \(i\leq i_{iter}\)do
7: Fit the ANN on \(MTS\) using batch training, \[model_{lift}.fit(x_{train}[MTS],y_{train}[MTS])\]
8: Predict model classification on \(x_{train}\), \(z_{pred}=model_{limit}.predict(x_{train})\)
9: Identify all miss-classified observations' indices in training data \[err_{inds}=np.where(z_{pred}!=y_{train})\]
10:for each class \(k\)do
11: Identify all miss-classified observations \(ind_{err_{k}}\)
12: Subset anomaly scores for miss-classified data in class \(k\) \[ana_{err_{k}}=ana_{score}[ind_{err_{k}}]\]
13: Identify \(c_{size}/2\) observations each for the lowest and highest anomaly scores from \(ind_{err_{k}}\) i.e. (most anomalous as well as least anomalous observations) and add them to the \(MTS\) sample indices.
14:endfor
15: Remove repeated indices in the updated modified training sample, \[MTS=unique(MTS)\]
16:endfor
17:endfor
```
**Algorithm 2** LAD Anomaly + Normal (Unique Entry)
```
0: Dataset \(X\) of size \((n,d)\), number of iterations \(N_{iter}\), threshold \(th\), number of true classes in the data \(K\), sample size from each class \(c_{size}\), number of iterations \(i_{iter}\), ANN classification model \(model_{limit}\). Initialization: Split data into \(x_{train},x_{test},x_{val},y_{train},y_{test},y_{val}\) (train, test and validation) Derive LAD score \(ana_{score}\) for all observations in training data i.e. \(ana_{score}=LAD(x_{train},y_{train})\)
1:\(MTS=[]\) (create empty MTS sample indices list)
2:for each class \(k\)do
3: Generate list of indices of all observations in class \(k\), \(ind_{k}\)
4: Subset anomaly scores for each class \[ana_{score_{k}}=ana_{score}[ind_{k}]\]
5: Identify top \(c_{size}\) observations with least anomaly scores and add them samples to the \(MTS\) sample i.e. (most non-anomalous observations)
6:for each iteration \(i\leq i_{iter}\)do
7: Fit the ANN on \(MTS\) using batch training, \[model_{lift}.fit(x_{train}[MTS],y_{train}[MTS])\]
8: Predict model classification on \(x_{train}\), \(z_{pred}=model_{limit}.predict(x_{train})\)
9: Identify all miss-classified observations' indices in training data \[err_{inds}=np.where(z_{pred}!=y_{train})\]
10:for each class \(k\)do
11: Identify all miss-classified observations \(ind_{err_{k}}\)
12: Subset anomaly scores for miss-classified data in class \(k\) \[ana_{err_{k}}=ana_{score}[ind_{err_{k}}]\]
13: Identify \(c_{size}\) observations with highest anomaly scores from \(ind_{err_{k}}\) i.e. (most anomalous observations) and add them to \(MTS\) sample.
14:endfor
15: Remove repeated indices in the updated modified training sample, \[MTS=unique(MTS)\]
16:endfor
17:endfor
```
**Algorithm 3** LAD Anomaly only (Unique Entry)
of the models. It is discernible that the Quantile Sampling along with LIIT trained ANN model is on par with the fully trained model.
#### 4.1.3 Stability to Perturbations
Since the training samples have a significant influence on the model's learning and performance, we try to look at the stability of the model to various perturbations in the test data. For this, random noise is sampled from a multivariate normal distribution whose mean and variance equal \(0-8\%\) of the training data mean and variance, and is added to all the observations in the test data. Each ANN's performance is evaluated in these settings for all benchmark datasets. The final classification performances are seen in Figure 5.
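One way to realize this perturbation (reading the noise as a diagonal-covariance Gaussian whose mean and variance are a fraction of the training-data mean and variance) is sketched below; this is an interpretation under that assumption, not the authors' exact procedure.

```python
import numpy as np

def perturb_test_set(x_test, x_train, frac, seed=0):
    """Add multivariate normal noise whose mean and (diagonal) covariance are a
    fraction `frac` (between 0.0 and 0.08) of the training-data mean and variance."""
    rng = np.random.default_rng(seed)
    mean = frac * x_train.mean(axis=0)
    cov = np.diag(frac * x_train.var(axis=0))
    noise = rng.multivariate_normal(mean, cov, size=len(x_test))
    return x_test + noise
```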
Now, to see the individual changes in performance under perturbations, we look at the raw change in AUC values due to the added perturbations for all models. Figure 6 shows the change in performance for different datasets. In particular, Figures 6a and 6b show a group of datasets that perform better using Quantile (Repeated) sampling, while Figures 6c-6e show datasets where the Anomaly (Unique), Anomaly + Normal (Repeated), and Anomaly (Repeated) sampling approaches respectively outperform.
It can be seen that the Quantile Sample Trained Model has a higher mean AUC as well as lower deviation in AUC than the fully trained model in most datasets.
Here, we can see that different LIIT models outperform on different datasets. We hypothesize that the data distribution and heterogeneity play an important role in the overall performance and stability. We intend to continue studying this hypothesis in future research.
## 5 Conclusion
We present a new training strategy for enhancing the learning speed of a neural network whilst maintaining the performance of the model. We present the LAD Improved Iterative Training (LIIT) which is an improved iterative training version of the traditional batch training approach. The LIIT approach uses a modified training sample (MTS) generated and updated using a LAD score based sampling approach that ensures enough representation of extreme and rare behaviours. In particular, the LAD score based Quantile Sampling approach allows ample heterogeneity within the sample data. We study the classification performance of the LIIT trained ANN in comparison with ANN trained on full training data on real benchmark datasets. Though the current research is limited to simple classification neural networks, the work has immense research potential. The LIIT training approach combined with specific LAD sampling methodology might draw out the best performance in a dataset based on the data characteristics. Future studies might help understand the impact of data heterogeneity and sampling method on the performance of ANN.
\begin{table}
\begin{tabular}{|l|r|r|r|} \hline Name & \(N\) & \(d\) & \(c\) \\ \hline Ecoli & 336 & 7 & 8 \\ Imgseg & 2310 & 18 & 7 \\ Skin & 245057 & 4 & 2 \\ Shuttle & 58000 & 10 & 2 \\ Wisc & 699 & 9 & 2 \\ Iono & 351 & 33 & 2 \\ Zoo & 101 & 16 & 7 \\ Letter & 20000 & 16 & 26 \\ Comm And Crime & 1994 & 102 & 2 \\ Vowel & 990 & 10 & 11 \\ Fault & 1941 & 28 & 2 \\ Sonar & 208 & 60 & 2 \\ Balance-Scale & 625 & 4 & 3 \\ Pageb & 5473 & 11 & 2 \\ Spambase & 4601 & 58 & 2 \\ Wave & 5000 & 22 & 2 \\ Tae & 151 & 3 & 3 \\ Thy & 215 & 5 & 3 \\ Opt Digits & 5620 & 63 & 2 \\ Concrete & 1030 & 9 & 2 \\ \hline \end{tabular}
\end{table}
Table 1: Classification Benchmark Datasets: Description of the benchmark data sets used to evaluate the classification capabilities of the proposed model. \(N\) - number of instances, \(d\) - number of attributes, \(c\) - number of true classes in the data set.
Figure 4: Computation time for different datasets: The figures illustrate the computation time for different LIIT trained ANN models in comparison to the ANN trained on full training data (Full model).
|
2303.11681 | DiffuMask: Synthesizing Images with Pixel-level Annotations for Semantic
Segmentation Using Diffusion Models | Collecting and annotating images with pixel-wise labels is time-consuming and
laborious. In contrast, synthetic data can be freely available using a
generative model (e.g., DALL-E, Stable Diffusion). In this paper, we show that
it is possible to automatically obtain accurate semantic masks of synthetic
images generated by the Off-the-shelf Stable Diffusion model, which uses only
text-image pairs during training. Our approach, called DiffuMask, exploits the
potential of the cross-attention map between text and image, which is natural
and seamless to extend the text-driven image synthesis to semantic mask
generation. DiffuMask uses text-guided cross-attention information to localize
class/word-specific regions, which are combined with practical techniques to
create a novel high-resolution and class-discriminative pixel-wise mask. The
methods help to reduce data collection and annotation costs obviously.
Experiments demonstrate that the existing segmentation methods trained on
synthetic data of DiffuMask can achieve a competitive performance over the
counterpart of real data (VOC 2012, Cityscapes). For some classes (e.g., bird),
DiffuMask presents promising performance, close to the state-of-the-art result
of real data (within 3% mIoU gap). Moreover, in the open-vocabulary
segmentation (zero-shot) setting, DiffuMask achieves a new SOTA result on
Unseen class of VOC 2012. The project website can be found at
https://weijiawu.github.io/DiffusionMask/. | Weijia Wu, Yuzhong Zhao, Mike Zheng Shou, Hong Zhou, Chunhua Shen | 2023-03-21T08:43:15Z | http://arxiv.org/abs/2303.11681v4 | DiffuMask: Synthesizing Images with Pixel-level Annotations for Semantic Segmentation Using Diffusion Models
###### Abstract
Collecting and annotating images with pixel-wise labels is time-consuming and laborious. In contrast, synthetic data can be freely available using a generative model (e.g., DALL-E, Stable Diffusion). In this paper, we show that it is possible to automatically obtain accurate semantic masks of synthetic images generated by the Off-the-shelf Stable Diffusion model, which uses only text-image pairs during training. Our approach, called DiffuMask, exploits the potential of the cross-attention map between text and image, which is natural and seamless to extend the text-driven image synthesis to semantic mask generation. _DiffuMask_ uses text-guided cross-attention information to localize class/word-specific regions, which are combined with practical techniques to create a novel high-resolution and class-discriminative pixel-wise mask. The methods help to reduce data collection and annotation costs obviously. Experiments demonstrate that the existing segmentation methods trained on synthetic data of _DiffuMask_ can achieve a competitive performance over the counterpart of real data (VOC 2012, Cityscapes). For some classes (e.g., bird), _DiffuMask_ presents promising performance, close to the state-of-the-art result of real data (within **3%** mIoU gap). Moreover, in the open-vocabulary segmentation (zero-shot) setting, _DiffuMask_ achieves a new SOTA result on _Unseen class of VOC 2012. The project website can be found at _DiffuMask_.
+
Footnote †: Corresponding author
## 1 Introduction
Semantic segmentation is a fundamental task in vision, and existing data-hungry semantic segmentation models
usually require a large amount of data with pixel-level annotations to achieve significant progress. Unfortunately, pixel-wise mask annotation is a labor-intensive and expensive process. For example, labeling a single semantic urban image in Cityscapes [14] can take up to 60 minutes, underscoring the level of difficulty involved in this task. Additionally, in some cases, it may be challenging or even impossible to collect images due to privacy and copyright restrictions. To reduce the cost of annotation, weakly-supervised learning has become a popular approach in recent years. This approach involves training strong segmentation models using weak or cheap labels, such as image-level labels [2, 33, 60, 62, 51, 52], points [3], scribbles [37, 64], and bounding boxes [34]. Although these methods are free of pixel-level annotations, they still suffer from several disadvantages, including low accuracy, complex training strategies, indispensable extra annotation cost (_e.g._, edges), and image collection cost.
With the great development of computer graphics (_e.g._, generative model), an alternative way is to utilize synthetic data, which is largely available from the virtual world, and the pixel-level ground truth can be freely and automatically generated. DatasetGAN [66] firstly exploits the feature space of a trained GAN and trains a shallow decoder to produce pixel-level labeling. BigDatasetGAN [35] extends DatasetGAN to handle the large class diversity of ImageNet. However, both methods suffer from certain drawbacks, the need for a small number of **pixel-level** labeled examples to generalize to the rest of the latent space and suboptimal performance due to imprecise generative masks.
Recently, large-scale language-image generation (LLIG) models, such as DALL-E [48], and Stable Diffusion [49], have shown phenomenal generative semantic and compositional power, as shown in Fig. 1. Given one language description, the text-conditioned image generation model can create corresponding semantic things and stuff, where visual and textual embedding are fused using spatial cross-attention. We dive deep into the cross-attention layers and explore how they affect the generative semantic object and structure of the image. We find that cross-attention maps are the core, which binds visual pixels and text tokens of the prompt text. Also, the cross-attention maps contain rich class (text token) discriminative spatial localization information, which critically affects the generated image.
**Can the attention map be used as mask annotation?** Consider semantic segmentation [19, 14] - a 'good' pixel-level semantic mask annotation should satisfy two conditions: (a) class-discriminative (_i.e._, localize and distinguish the categories in the image); (b) high-resolution, precise mask (_i.e._, capture fine-grained detail). Fig. 1(b) presents a visualization of cross attention map between text token and vision. \(8\times 8\), \(16\times 16\), \(32\times 32\), and \(64\times 64\), as four different resolutions, are extracted from different layers of the U-Net of Stable Diffusion [49]. \(8\times 8\) feature map is the lowest resolution, including obvious class-discriminative location. \(32\times 32\) and \(64\times 64\) feature maps include high-resolution and highlight fine-grained details. The average map shows the possibility for us to use for semantic segmentation, where it is class-discriminative and fine-grained. To further validate the potential of the attention map of the generative task, we convert the probability map to a binary map with fixed thresholds \(\gamma\), and refine them with Dense CRF [31], as shown in Fig. 1(c). With the \(0.35\) threshold, the mask presents a wonderful precision on fine-grained details (_e.g._, foot, ear of the '_horse_').
Based on the above observation, we present DiffuMask, an automatic procedure to generate a massive high-quality image with a pixel-level semantic mask. Unlike DatasetGAN [66] and BigDatasetGAN [35], DiffuMask does not require any pixel-level annotations. This approach takes full advantage of powerful zero-shot text-to-image generative models such as Stable Diffusion [49], which are trained on web-scale image-text pairs. DiffuMask mainly includes two advantages for two challenges: 1) _Precise Mask_. An adaptive threshold of binarization is proposed to convert the probability map (attention map) to a binary map, as the mask annotation. Besides, noise learning [44, 56] is used to filter noisy labels. 2) _Domain Gap:_ retrieval-based prompt (various and verisimilar prompt guidance) and data augmentations (_e.g._, Splicing [7]), as two effective solutions, are designed to reduce the domain gap via enhancing the diversity of data. With the above advantages, DiffuMask can generate infinite images with pixel-level annotation for any class without human effort. These synthetic data can then be used for training any semantic segmentation architecture (_e.g._, mask2former [11]), replacing real data.
To summarize, our contributions are three-folds:
Figure 2: **Cross-attention maps of a text-conditioned diffusion model (_i.e._, Stable Diffusion [49]). Prompt language: ‘a horse on the grass’.**
* We show a novel insight that it is possible to automatically obtain the synthetic image and mask annotation from a text-supervised pre-trained diffusion model.
* We present DiffuMask, an automatic procedure to generate massive image and pixel-level semantic annotation _without_ human effort and any manual mask annotation, which exploits the potential of the cross-attention map between text and image.
* Experiments demonstrate that segmentation methods trained on DiffuMask perform competitively on real data, _e.g._, VOC 2012. For some classes, _e.g._, dog, the performance is close to that of training with real data (within **3%** gap). Moreover, in the open-vocabulary segmentation (zero-shot) setting, DiffuMask achieves a new SOTA result on Unseen class of VOC 2012.
## 2 Related Work
**Reducing Annotation Cost.** Various ways can be explored to reduce the segmentation data cost, including interactive human-in-the-loop annotation [1, 39], nearest-neighbor mask transfer [25], or weak/cheap mask annotation supervision in different levels, such as image-level labels [2, 33, 60, 62, 51, 52], points [3], scribbles [37, 64], and bounding boxes [34, 9, 32]. Among the above-related works, image-level label supervised learning [51, 52] presents the lowest cost, and its performance is unacceptable. Bounding boxes [9, 32] annotation usually shows a competitive performance than pixel-wise supervised methods, but its annotation cost is the most expensive. By comparison, synthetic data presents many advantages, including lower data cost without image collection, and infinite availability for enhancing the diversity of data.
**Image Generation.** Image generation is a basic and challenging task in computer vision. There are several mainstream methods for the task, including Generative Adversarial Networks (GAN) [23], Variational Autoencoders (VAE) [30], flow-based models [18], and Diffusion Probabilistic Models (DM) [55, 49]. Recently, diffusion models have drawn a lot of attention due to their excellent performance. GLIDE [43] used a pre-trained language model (CLIP [47]) and a cascaded diffusion structure for text-to-image generation. Similarly, DALL-E 2 [48] of OpenAI and Imagen [53] obtain the corresponding text embeddings with CLIP and adopt a similar hierarchical structure to generate images. To increase accessibility and reduce resource consumption, Stable Diffusion [49] of Stability AI introduced a novel direction in which the model diffuses in the VAE latent space instead of pixel space.
**Synthetic Dataset Generation.** Prior works [28, 16] for dataset synthesis mainly utilize 3D scene graphs to render images and their labels. 2D methods, _i.e._, Generative Adversarial Networks (GAN) [23], are mainly used to solve the domain adaptation task [13], leveraging image-to-image translation to reduce the domain gap. Recently, inspired by the success of generative models (_e.g._, DALL-E 2, Stable Diffusion), some works further explore the potential of synthetic data to replace real data as the training data in many downstream tasks, including image classification [27, 6], object detection [61, 42, 21, 20], image segmentation [35, 66, 36], and 3D rendering [65, 46]. DatasetGAN [66] utilized a few labeled real images to train a segmentation mask decoder, leading to an infinite synthetic image and mask generator. Based on DatasetGAN, BigDatasetGAN [46] scales the class diversity to ImageNet size, generating 1k classes with 5 manually annotated images per class. With Stable Diffusion and Mask R-CNN pretrained on the COCO dataset, Li _et al._[36] design and train a grounding module to generate images and segmentation masks. Different from the above methods, we go one step further and synthesize accurate semantic labels by exploiting the potential of the cross-attention map between text and image. One significant advantage of DiffuMask is that it does not require any manual localization annotations (_i.e._, box and mask) and relies only on _text supervision_.
## 3 Methodology
In this paper, we explore simultaneously generating images and the semantic masks described in the text prompt with an existing pre-trained diffusion model. The synthetic data are then used to train existing segmentation methods, which are applied to real images.
The core is to exploit the potential of the _cross-attention map_ in the generative model and the _domain gap_ between synthetic and real data, providing corresponding new insights, solutions, and analysis. We introduce the preliminaries of cross-attention in Sec. 3.1, mask generation and refinement with the cross-attention map in text-conditioned diffusion models in Sec. 3.2, noise learning in Sec. 3.3, data diversity enhancement with prompt engineering in Sec. 3.4, and data augmentation in Sec. 3.5.
### Cross-Attention of Text-Image
Text-guided generative models (_e.g._, Imagen [53], Stable Diffusion [49]) use a text prompt \(\mathcal{P}\) to guide the content-related image \(\mathcal{I}\) generation from a random Gaussian image noise \(z\), where the visual and textual embeddings are fused using spatial cross-attention. Specifically, Stable Diffusion [49] consists of a text encoder, a variational autoencoder (VAE), and a U-shaped network [50]. The interaction between text and vision occurs in the U-Net for the latent vectors at each time step, where cross-attention layers are used to fuse the embeddings of the visual and textual features and produce spatial attention maps for each textual token. Formally, for step \(t\), the visual features of the noisy image \(\varphi(z_{t})\in\mathbb{R}^{H\times W\times C}\) are flattened and linearly projected into a Query vector \(Q=\ell_{Q}(\varphi(z_{t}))\). The text prompt \(\mathcal{P}\) is projected into the textual embedding \(\tau_{\theta}(\mathcal{P})\in\mathbb{R}^{N\times d}\) (\(N\)
refers to the sequence length of text tokens and \(d\) is the latent projection dimension) with the text encoder \(\tau_{\theta}\), then is mapped into a Key matrix \(K=\ell_{K}(\tau_{\theta}(\mathcal{P}))\) and a Value matrix \(V=\ell_{V}(\tau_{\theta}(\mathcal{P}))\), via learned projections \(\ell_{Q},\ell_{K},\ell_{V}\). The _cross attention maps_ can be calculated by:
\[\mathcal{A}=\text{Softmax}\left(\frac{QK^{T}}{\sqrt{d}}\right), \tag{1}\]
where \(\mathcal{A}\in\mathbb{R}^{H\times W\times N}\) (after reshaping). For the \(j\)-th text token, _e.g._, _horse_ in Fig. 1(a), the corresponding weight \(\mathcal{A}_{j}\in\mathbb{R}^{H\times W}\) on the visual map \(\varphi(z_{t})\) can be obtained. Finally, the output of the cross-attention can be obtained with \(\widehat{\varphi}\left(z_{t}\right)=\mathcal{A}V\), which is then used to update the spatial features \(\varphi(z_{t})\).
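As a concrete illustration of Eq. (1), the following self-contained PyTorch sketch computes cross-attention maps from random tensors; the sizes and the randomly initialized projections are assumptions and do not reproduce Stable Diffusion's actual layers.

```python
import math
import torch

torch.manual_seed(0)
H, W, C, N, d = 16, 16, 320, 8, 64               # toy sizes (assumptions)

phi_z = torch.randn(H * W, C)                    # flattened visual features phi(z_t)
tau_P = torch.randn(N, d)                        # text embeddings tau_theta(P)

ell_Q = torch.nn.Linear(C, d, bias=False)        # learned projections (random here)
ell_K = torch.nn.Linear(d, d, bias=False)
ell_V = torch.nn.Linear(d, d, bias=False)

Q, K, V = ell_Q(phi_z), ell_K(tau_P), ell_V(tau_P)
A = torch.softmax(Q @ K.T / math.sqrt(d), dim=-1)     # (H*W, N) cross-attention maps
out = A @ V                                           # updated visual features

j = 2                                                 # index of a text token, e.g. "horse"
A_j = A[:, j].reshape(H, W)                           # per-token spatial attention map
print(A_j.shape, out.shape)
```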
### Mask Generation and Refinement
Based on Equ. 1, we can obtain the corresponding cross-attention map \(\mathcal{A}_{j}^{s,t}\), where \(s\) denotes the \(s\)-th layer of the U-Net, corresponding to four different resolutions, _i.e_., \(8\times 8\), \(16\times 16\), \(32\times 32\), and \(64\times 64\), as shown in Fig. 1(b), and \(t\) denotes the \(t\)-th diffusion step (time). Then the average cross-attention map can be calculated by aggregating the multi-layer and multi-time attention maps as follows:
\[\hat{\mathcal{A}}_{j}=\frac{1}{S\cdot T}\sum_{s\in S,t\in T}\frac{\mathcal{A} _{j}^{s,t}}{\text{max}(\mathcal{A}_{j}^{s,t})}, \tag{2}\]
where \(T\) and \(S\) refer to the total diffusion steps and the number of layers (_i.e_., four for the U-Net), respectively. Normalization is necessary because the values of the attention map from the output of the Softmax are not probabilities between 0 and 1.
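A minimal sketch of the aggregation in Eq. (2): per-layer, per-step maps are max-normalized, upsampled to a common resolution, and averaged. The random maps stand in for real cross-attention outputs.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
T, resolutions, out_res = 4, [8, 16, 32, 64], 64       # toy steps / layer resolutions

# Placeholder attention maps A_j^{s,t} for one text token j.
maps = [[torch.rand(r, r) for r in resolutions] for _ in range(T)]

acc = torch.zeros(out_res, out_res)
for per_step in maps:
    for a in per_step:
        a = a / a.max()                                # per-map max normalization
        a = F.interpolate(a[None, None], size=(out_res, out_res),
                          mode="bilinear", align_corners=False)[0, 0]
        acc += a
A_hat = acc / (T * len(resolutions))                   # averaged map, roughly in [0, 1]
print(A_hat.shape, float(A_hat.min()), float(A_hat.max()))
```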
#### 3.2.1 Standard Binarization
Given an average attention map (a probability map) \(M\in\mathbb{R}^{H\times W}\) for \(j\)-th text token produced by the cross attention in Equ. (1), it is essential to convert it to a binary map, where pixels with \(1\) as the foreground region (_e.g._, 'horse'). Usually, as shown in Fig. 1(c), the simplest solution for the binarization process is using a fixed threshold value \(\gamma\), and refining with DenseCRF [31] (local relationship defined by color and distance of pixels) as follows:
\[B=\text{DenseCRF}\big{(}\big{[}\gamma;\hat{\mathcal{A}}_{j}\big{]}_{\text{ argmax}}\big{)}. \tag{3}\]
The above method is neither practical nor effective, since the _optimal threshold_ of each image and each category is not exactly the same. To explore the relationship between the threshold and binary mask quality, we set up a simple analysis experiment. Stable Diffusion [49] is used to generate 1k images and corresponding attention maps for each class. The prediction of Mask2former [11] pre-trained on Pascal-VOC 2012 is adopted as the ground truth to calculate the mask quality (mIoU), as shown in Fig. 3. The optimal thresholds of different classes are usually different, _e.g._, around \(0.48\) for the 'Bottle' class, different from that (_i.e_., around \(0.39\)) of the 'Dog' class. To achieve the best mask quality, an _adaptive threshold_ is a feasible solution for the varying binarization of each image and class.
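The following sketch illustrates the fixed-threshold binarization of Eq. (3) (with the DenseCRF refinement omitted) and the threshold sweep behind Fig. 3; the attention map and reference mask are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
A_hat = rng.random((64, 64))            # averaged attention map (placeholder)
gt = rng.random((64, 64)) > 0.6         # stand-in for a reference mask

def iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / max(union, 1)

# Eq. (3) without the DenseCRF step: a single fixed threshold gamma.
gamma = 0.35
B = A_hat >= gamma

# Sweep thresholds to see how mask quality depends on gamma (cf. Fig. 3).
for g in np.arange(0.2, 0.65, 0.05):
    print(f"gamma={g:.2f}  IoU={iou(A_hat >= g, gt):.3f}")
```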
#### 3.2.2 Adaptive Threshold for Binarization
It is challenging to determine the optimal threshold for binarizing the probability maps because of the variation in shape and region for each object class. The image generation relies on **text supervision**, which does not provide a precise definition of the shape and region of object classes. For example, given the mask with \(0.45\)\(\gamma\) and that with \(0.35\)\(\gamma\) in Fig. 1(c), the model cannot judge which one is better, since no location information is provided by human effort as supervision or reference.
Looking deeper at the challenge, pixels with a middle confidence score cause uncertainty, while those with high or low scores usually represent the true foreground and the background. To address the challenge, semantic affinity learning (_i.e_., AffinityNet [2]) is used to give an estimation for those pixels with a middle confidence score. Thus we can obtain a definition of the global prototype, _i.e_., _which semantic mask among different thresholds \(\gamma\) is suitable to represent the whole prototype_. AffinityNet aims to predict the semantic affinity between a pair of adjacent coordinates. During the training phase, those pixels in the middle score range are considered _neutral_. If one of the adjacent coordinates is _neutral_, the network simply ignores the pair during training. Without _neutral_ pixels, the affinity label of two coordinates is set to \(1\) (positive pair) if their classes are the same, and 0 (negative pair) otherwise. During the inference phase, a coarse affinity map \(\hat{B}\in\mathbb{R}^{H\times W}\) can be predicted by AffinityNet for each class of each image. \(\hat{B}\) is used to search for a suitable threshold \(\hat{\gamma}\) during a search
Figure 3: Relationship between mask quality (IoU) and threshold for various categories. \(1000\) generative images are used for each class from Stable Diffusion [49]. Mask2former [11] pre-trained on Pascal-VOC 2012 [19] is used to generate the corresponding ground truth. The optimal threshold of different classes usually is different.
space \(\Omega=\{\gamma_{i}\}_{i=1}^{L}\) as follows:
\[\hat{\gamma}=\operatorname*{arg\,max}_{\gamma\in\Omega}\sum\mathcal{L}_{\rm match }(\hat{B},B_{\gamma}), \tag{4}\]
where \(\mathcal{L}_{\rm match}(\hat{B},B_{\gamma})\) is a pair-wise _matching cost_ of IoU between affinity map \(\hat{B}\) and a binary map from attention map with threshold \(\gamma\). As a result, an adaptive threshold \(\hat{\gamma}\) can be obtained for each image of each class. The red points in Fig. 3 represent the corresponding threshold from matching with the affinity map. They are usually close to the optimal threshold.
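A minimal sketch of the search in Eq. (4): among candidate thresholds, pick the one whose binarized attention map best matches the coarse affinity map by IoU. The affinity map here is a random placeholder rather than an AffinityNet prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
A_hat = rng.random((64, 64))                 # averaged attention map (placeholder)
B_aff = rng.random((64, 64)) > 0.5           # coarse affinity map \hat{B} (placeholder)

def iou(a, b):
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / max(union, 1)

search_space = np.arange(0.2, 0.7, 0.02)     # Omega = {gamma_i}
gamma_hat = max(search_space, key=lambda g: iou(A_hat >= g, B_aff))
B = A_hat >= gamma_hat                       # adaptive-threshold binary mask
print(f"selected gamma: {gamma_hat:.2f}")
```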
### Noise Learning
Although the refined mask \(B_{\hat{\gamma}}\) presents a competitive result, there still exist noisy labels with low precision. Fig. 5 provides the probability density distribution of IoU for the 'Horse' and 'Bird' classes. The masks with IoU under \(80\%\) account for a non-negligible proportion and may cause a significant performance drop. Inspired by noise learning [44, 56, 10] for the classification task, we design a simple yet effective noise learning (NL) strategy to prune the noisy labels for the segmentation task.
NL improves the data quality by identifying and filtering noisy labels. The main procedure (see Fig. 4) comprises two steps: (1) **Count**: estimating the distribution of label noise \(Q_{B_{\hat{\gamma}},B^{*}}\) to characterize pixel-level label noise, where \(B^{*}\) refers to the prediction of the model. (2) **Rank and Prune**: filter out noisy examples and train with the errors removed from the data. Formally, given massive generative images and annotations \(\{(\mathcal{I},B_{\hat{\gamma}})\}\), a segmentation model \(\mathbf{\theta}\) (_e.g._, Mask2former [11], Mask-RCNN [26]) is used to predict out-of-sample probabilities of the segmentation result \(\mathbf{\theta}:\mathcal{I}\rightarrow\mathbf{M}_{c}(B_{\hat{\gamma}};\mathcal{I},\mathbf{\theta})\) by cross-validation. Then we can estimate the joint distribution of noisy labels \(B_{\hat{\gamma}}\) and true labels, \(Q^{c}_{B_{\hat{\gamma}},B^{*}}=\Phi_{\rm IoU}(B_{\hat{\gamma}},B^{*})\), where \(c\) denotes the \(c\)-th class. With \(Q^{c}_{B_{\hat{\gamma}},B^{*}}\), some interpretable and explainable ranking methods, such as loss reweighting [22, 41], can be used by NL to find label errors. In this paper, we adopt a simple and effective modularized rank-and-prune method, _i.e._, _Prune by Class_, which decouples the model and the data cleaning procedure. For each class, we select and prune the \(\alpha\%\) of examples with the lowest self-confidence \(Q^{c}_{B_{\hat{\gamma}},B^{*}}\) as noisy data, and train the model \(\mathbf{\theta}\) with the remaining clean data. When \(\alpha\%\) is set to \(50\%\), the probability density distribution of IoU from the remaining clean data is presented in Fig. 5 (yellow). NL brings an obvious gain in mask precision, which further taps the potential of the attention map as mask annotation.
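The "Prune by Class" step can be sketched as follows: rank the generated annotations by their IoU-based self-confidence within each class and drop the lowest \(\alpha\%\). The masks below are random placeholders for the cross-validated model predictions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
classes = rng.integers(0, 3, n)                        # class of each synthetic image
gen_masks = rng.random((n, 32, 32)) > 0.5              # generated annotations (placeholder)
pred_masks = rng.random((n, 32, 32)) > 0.5             # out-of-sample predictions (placeholder)

def iou(a, b):
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / max(union, 1)

alpha = 0.5                                            # fraction pruned per class
conf = np.array([iou(g, p) for g, p in zip(gen_masks, pred_masks)])

keep = []
for k in np.unique(classes):
    ind_k = np.where(classes == k)[0]
    ranked = ind_k[np.argsort(conf[ind_k])]            # lowest self-confidence first
    keep.extend(ranked[int(alpha * len(ranked)):])     # drop the lowest alpha%
print(f"kept {len(keep)} of {n} examples")
```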
Figure 4: Pipeline for DiffuMask with a given prompt: ‘Photo of a [sub-class] car in the street’. DiffuMask mainly includes three steps: 1) Prompt engineering is used to enhance the diversity and reality of prompt language (Sec. 3.4). 2) Image and mask generation and refinement with adaptive threshold from AffinityNet (Sec. 3.2). 3) Noise learning is designed to further improve the quality of data via filtering the noisy label (Sec. 3.3).
Figure 5: **Effect of Noise Learning (NL).** 30k generative images are used for each class. NL prunes \(70\%\) images on the basis of the rank of IoU. Mask2former [11] pre-trained on VOC 2012 [19] is used to generate the ground truth. NL brings obvious improvement in mask quality by pruning data.
### Prompt Engineering
Previous works [42, 59] have shown the effectiveness of prompt engineering on diversity enhancement of generative data. These studies utilize a variety of prompt modifiers to influence the generated images, _e.g._, GPT3 used by ImaginaryNet [42]. Unlike generation-based or modification-based prompts, we design two practical, reality-based prompt strategies.
**Prompt with Sub-Classes.** Simple text prompts, such as 'Photo of a bird', often result in monotonous generative images; as depicted in Fig. 6 (upper), they fail to capture the diverse range of objects and scenes found in the real world. To address this challenge, we incorporate 'sub-classes' for each category to improve diversity. To achieve this, we select \(K\) sub-classes for each category from Wiki1 and integrate this information into the prompt templates. Fig. 6 (down) presents an example for the 'bird' category. Given \(K\) sub-classes, _i.e._, Golden Bullul, Crane, this allows us to obtain \(K\) corresponding text prompts 'Photo of a [sub-class] bird', denoted by \(\{\hat{\mathcal{P}}_{1},\hat{\mathcal{P}}_{2},...,\hat{\mathcal{P}}_{K}\}\).
Footnote 1: [https://en.wikipedia.org/wiki/Main_Page](https://en.wikipedia.org/wiki/Main_Page)
**Retrieval-based Prompt.** The prompt \(\hat{\mathcal{P}}\) is still a handcrafted sentence template, and we expect to develop it into a real language prompt from the human community. One feasible solution for that is prompt retrieval [5, 47]. As shown in Fig. 4, given a prompt \(\hat{\mathcal{P}}\), _i.e._, 'Photo of a [sub-class] car in the street', Clip-retrieval [5] pre-trained on Lain5B [54] is used to retrieve the top \(N\) real images and captions, where the captions serve as the final prompt set. Using this approach, we can collect a total of \(K\times N\) text prompts, denoted by \(\sum_{i=1}^{K\times N}\hat{\mathcal{P}}_{i}\), for our synthetic data. During inference, we randomly sample a prompt from this set to generate each image.
### Data Augmentation
To further reduce the domain gap between the generated images and real-world images in terms of size, blur, and occlusion, data augmentations \(\Phi(\cdot)\) (_e.g._, Splicing [7]) are used as effective strategies, as shown in Fig. 7. **Splicing.** Synthetic images usually present a normal size for the foreground (object), _i.e._, objects typically occupy the majority of the image. However, real-world images often contain objects of varying resolutions, including small objects in datasets such as Cityscapes [15]. To address this issue, we use the Splicing augmentation. Fig. 7 (a) presents one example of image splicing (\(2\times 2\)). In the experiment, six scales of image splicing are used, _i.e._, \(1\times 2\), \(2\times 1\), \(2\times 2\), \(3\times 3\), \(5\times 5\), and \(8\times 8\), and the images are sampled randomly from the train set. **Gaussian Blur.** Synthetic images typically exhibit a uniform level of blur, whereas real images exhibit varying degrees of blur due to motion, focus, and artifact issues. Gaussian Blur [40] is used to increase the diversity of blur, where the length of the Gaussian kernel is randomly sampled from a range of \(6\) to \(22\). **Occlusion.** Similar to CutMix [63], to make the model focus on discriminative parts of objects, patches of another image are cut and pasted among training images, where the corresponding labels are also mixed proportionally to the area of the patches. **Perspective Transform.** Similar to the above augmentations, a perspective transform is used to improve the diversity of the generated images by simulating different viewpoints.
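A small sketch of the splicing augmentation: \(k\times k\) image/mask pairs are tiled into a single training example so that objects appear at smaller scales (the spliced example would typically be resized back to the training resolution). The arrays are random placeholders, and the remaining augmentations can be composed in a similar way.

```python
import numpy as np

rng = np.random.default_rng(0)

def splice(images, masks, grid=2):
    """Tile grid*grid (image, mask) pairs into one larger training example."""
    rows_img, rows_msk = [], []
    for r in range(grid):
        idx = rng.integers(0, len(images), grid)       # sample from the train set
        rows_img.append(np.concatenate([images[i] for i in idx], axis=1))
        rows_msk.append(np.concatenate([masks[i] for i in idx], axis=1))
    return np.concatenate(rows_img, axis=0), np.concatenate(rows_msk, axis=0)

images = rng.random((10, 512, 512, 3))                 # synthetic images (placeholder)
masks = rng.integers(0, 2, (10, 512, 512))             # semantic masks (placeholder)
big_img, big_msk = splice(images, masks, grid=2)
print(big_img.shape, big_msk.shape)                    # (1024, 1024, 3), (1024, 1024)
```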
## 4 Experiments
### Experimental Setups
**Datasets and Task.**_Datasets._ Following the previous works [11, 36] for semantic segmentation, Pascal-VOC 2012 [19] (20 classes), ADE20k [67], and Cityscapes [15] are used to evaluate the quality of our synthetic data. _Tasks._ To evaluate DiffuMask, we adopt three tasks in our experiment, _i.e._, semantic segmentation, open-vocabulary segmentation, and domain generalization.
**Implementation Details.** The pre-trained Stable Diffusion [49], the text encoder of CLIP [47], and AffinityNet [2] are adopted as the base components. We do not finetune Stable Diffusion and only train AffinityNet for each category. The corresponding parameter optimization and settings (_e.g._, initialization, data augmentation, batch size,
Figure 6: **Prompt for diversity in sub-class for the _bird_ class.**\(100\) sub-classes for _bird_ class in total for our experiment. The same prompt strategy is used for other classes, _e.g._, cat, car.
Figure 7: **Data Augmentation.** Four data augmentations are used to reduce the domain gap.
learning rate) all are similar to that of the original paper. _Synthetic data for training_. For each category on Pascal-VOC 2012 [19], we generate \(10k\) images and set \(\alpha\) of noise learning to \(0.7\) to filter \(7k\) images. As a result, we collect \(60k\) synthetic data for \(20\) classes as the final training set, and the spatial resolution is \(512\times 512\). For Cityscapes [14], we only evaluate \(2\) important classes, _i.e._, 'Human' and 'Vehicle', including six sub-classes, person, rider, car, bus, truck, train, and generate \(30k\) images for each sub-category, where \(10k\) images are selected as the final training data by noise learning. Considering the relationship between rider and motorbike/bicycle, we set the two classes to be ignored, while evaluating the 'Human' class on Tab. 2 and Tab. 5. In our experiment, only a single object for an image is considered. Multi-categories generation [36] usually causes the unstable quality of the images, limited by the generation ability of Stable Diffusion. Mask2Former [11] is used as the baseline to evaluate the dataset, and all settings are similar to the official code and paper. 8 Tesla V100 GPUs are used for all experiments.
### Protocol-I: Semantic Segmentation
**VOC 2012.** Tab. 1 presents the results of semantic segmentation on VOC 2012. Existing segmentation methods trained on synthetic data (DiffuMask) can achieve a competitive performance, _i.e._, \(70.6\%\) vs. \(84.3\%\) mIoU with the Swin-B backbone. A point worth emphasizing is that our synthetic data does not need any manual localization and mask annotation, while real data requires humans to perform pixel-wise mask annotation. For some categories, _i.e._, bird, cat, cow, horse, and sheep, DiffuMask presents a powerful performance, which is quite close to that of training on real data (within a \(5\%\) gap). Besides, after fine-tuning on a few real data, the results can be improved further and exceed those of training on the full real data, _e.g._, \(84.9\%\) mIoU fine-tuned on \(5.0\)k real images vs. \(83.4\%\) mIoU training on the full real data (\(11.5\)k).
**Cityscapes.** Tab. 2 presents the results on Cityscapes. Urban street scenes of Cityscapes are more challenging, including a mass of small objects and complex backgrounds. We only evaluate two classes, _i.e._, Vehicle and Human, which are the two most important categories in the driving scene. Compared with training on real images, DiffuMask presents a competitive result, _i.e._, \(79.6\%\)\(vs.\)\(90.8\%\) mIoU.
### Protocol-II: Open-vocabulary Segmentation
As shown in Fig. 1, it is natural and seamless to extend the text-driven synthetic data (our DiffuMask) to the open-vocabulary (zero-shot) task. As shown in Tab. 3, compared
\begin{table}
\begin{tabular}{l|c|c|c c c c c c c c c c c c c} \hline \multirow{2}{*}{Train Set} & \multirow{2}{*}{Number} & \multirow{2}{*}{Backbone} & \multicolumn{6}{c|}{Semantic Segmentation (IoU) for Selected Classes\%} \\ & & & & bird & boat & bus & car & cat & chair & cow & dog & horse & person & sheep & sofa & mIoU \\ \hline \multicolumn{13}{l}{_Train with Pure Real Data_} \\ & & R: 11.5k (all) & R50 & 87.5 & 94.4 & 70.6 & 95.5 & 87.7 & 92.2 & 44.0 & 85.4 & 89.1 & 82.1 & 89.2 & 80.6 & 53.6 & 77.3 \\ VOC & R: 11.5k (all) & Swin-B & 97.0 & 93.7 & 71.5 & 91.7 & 89.6 & 96.5 & 57.5 & 95.9 & 96.8 & 94.4 & 92.5 & 95.1 & 65.6 & 84.3 \\ & R: 5.0k & Swin-B & 95.5 & 87.7 & 77.1 & 96.1 & 91.2 & 95.2 & 47.3 & 90.3 & 92.8 & 94.6 & 90.9 & 93.7 & 61.4 & 83.4 \\ \hline \multicolumn{13}{l}{_Train with Pure Synthetic Data_} \\ & & S: 60.0k & R50 & 80.7 & 86.7 & 56.9 & 81.2 & 74.2 & 79.3 & 14.7 & 63.4 & 65.1 & 64.6 & 71.0 & 64.7 & 27.8 & 57.4 \\ \multicolumn{13}{l}{**DiffuMask**} \\ & S: 60.0k & Swin-B & 90.8 & 92.9 & 67.4 & 88.3 & 82.9 & 92.5 & 27.2 & 92.2 & 86.0 & 89.0 & 76.5 & 92.2 & 49.8 & 70.6 \\ \hline \multicolumn{13}{l}{_Finetune on Real Data_} \\ & S: 60.0k + R: 5.0k & R50 & 85.4 & 92.8 & 74.1 & 92.9 & 83.7 & 91.7 & 38.4 & 86.5 & 86.2 & 82.5 & 87.5 & 81.2 & 39.8 & 77.6 \\ & S: 60.0k + R: 5.0k & Swin-B & 95.6 & 94.4 & 72.3 & 96.9 & 92.9 & 96.6 & 51.5 & 96.7 & 95.5 & 96.1 & 91.5 & 96.4 & 70.2 & 84.9 \\ \hline \end{tabular}
\end{table}
Table 1: **Result of Semantic Segmentation on the VOC 2012 val. _mIoU_ is for \(20\) classes. ‘S’ and ‘R’ refer to ‘Synthetic’ and ‘Real’.**
\begin{table}
\begin{tabular}{l|c|c|c|c} & \multicolumn{2}{c|}{Train Set/\%} & \multicolumn{2}{c}{mIoU/\%} \\ Methods & Type & Categories & \multicolumn{2}{c}{Seen} & \multicolumn{2}{c}{Unseen} & \multicolumn{2}{c}{Harmonic} \\ \hline _Manual Mask Supervision_ & & & & & \\ ZS3 [8] & real & 15 & 78.0 & 21.2 & 33.3 \\ CaGNet [24] & real & 15 & 78.6 & 30.3 & 43.7 \\ Joint [4] & real & 15 & 77.7 & 32.5 & 45.9 \\ STRICT [45] & real & 15 & 82.7 & 35.6 & 49.8 \\ SIGN [12] & real & 15 & 83.5 & 41.3 & 55.3 \\ ZegFormer [17] & real & 15 & **86.4** & 63.6 & **73.3** \\ \hline _Pseudo Mask Supervision from Model pre-trained on COCO [38]_ & & & & & \\ Li _et al._[36] (ResNet01) & synthetic & 15+5 & 62.8 & 50.0 & 55.7 \\ \hline _Text(Prompt)_ & Supervision & & & & & \\ DiffuMask (ResNet00) & synthetic & 15+5 & 60.8 & 50.4 & 55.1 \\ DiffuMask (ResNet101) & synthetic & 15+5 & 62.1 & 50.5 & 55.7 \\ DiffuMask (Swin-B) & synthetic & 15+5 & 71.4 & **65.0** & 68.1 \\ \hline \end{tabular}
\end{table}
Table 3: **Performance for Zero-Shot Semantic Segmentation Task on PASCAL VOC. ‘Seen’, ‘Unseen’, and ‘Harmonic’ denote mIoU of seen, unseen categories, and their harmonic mean. Priors are trained with real data and masks.**
with prior methods trained on real images with manually annotated masks, DiffuMask can achieve a SOTA result on Unseen classes. It is worth mentioning that DiffuMask uses purely synthetic data supervised by text, while all prior methods require real images and corresponding manual mask annotations. Li _et al._, as one contemporaneous work, use a segmentation model pre-trained on COCO [38] to predict pseudo labels for the synthetic images, which is costly.
### Protocol-III: Domain Generalization
Tab. 5 presents the results for cross-dataset validation, which evaluates the generalization of the data. Compared with real data, DiffuMask shows powerful effectiveness on domain generalization, _e.g._, \(69.5\%\) with DiffuMask vs. \(68.0\%\) with ADE20K [67] on VOC 2012 val. The domain gap [58] between real datasets is sometimes bigger than that between synthetic and real data. For the Motorbike class, a model trained on Cityscapes only achieves \(28.9\%\) mIoU, while that of DiffuMask is \(63.2\%\) mIoU. We argue that the main reason is the domain shift in the foreground and background domains, _i.e._, Cityscapes contains images of city roads, with the majority of Motorbike objects being small in size, whereas VOC 2012 is an open-set scenario, where Motorbike objects vary greatly in size and include close-up shots.
### Ablation Study
**Compared with Attention Map.** Tab. 3(a) presents the comparison with the attention map and the impact of the binarization threshold \(\gamma\). It is clear that the optimal threshold for different categories is different, and even varies for different images of the same category. Sometimes it is sensitive for some categories, such as Dog: the mIoU with \(0.4\)\(\gamma\) is better than that with \(0.6\)\(\gamma\) by around \(40\%\) mIoU, which is not negligible. By contrast, our adaptive threshold is robust. Fig. 3 also shows that it is close to the optimal threshold.
**Prompt Engineering.** Tab. 3(b) provides the related ablation study for prompt strategies. Both the retrieval-based and sub-class prompts bring an obvious gain. For dog, the \(10\) sub-classes prompt brings a \(7.7\%\) mIoU improvement, which is quite significant. This is reasonable, as fine-grained prompts can directly enhance the diversity of generative images, as shown in Fig. 6.
**Noise Learning.** Tab. 3(c) presents the impact of prune threshold \(\alpha\). \(10k\) synthetic images for each class are used in this experiment. The gain is considerable while \(\alpha\) changes from \(0.3\) to \(0.5\). In other experiments, we set the \(\alpha\) to \(0.7\) for each category.
**Data Augmentation.** The ablation study for the four augmentations is shown in Tab. 3(d). Compared with the other three augmentations, the gain of image splicing is the biggest. One main reason is that the synthetic images are all of \(512\times 512\) resolution and the object size is usually normal, so image splicing can enhance the diversity of scale.
## 5 Conclusion
A new insight is presented in this paper, demonstrating that accurate semantic masks of generative images can be automatically obtained through the use of a text-driven diffusion model. To achieve this goal, we present DiffuMask, an automatic procedure to generate images with pixel-level semantic annotations. Existing segmentation methods trained on the synthetic data of DiffuMask can achieve a competitive performance over the counterpart trained on real data. Besides, DiffuMask shows powerful performance for open-vocabulary segmentation, achieving promising results on Unseen categories. We hope DiffuMask can bring new insights and inspiration for bridging generative data and real-world data in the community.
\begin{table}
\end{table}
Table 4: **DiffuMask ablations. We perform ablations on VOC 2012 val. \(\gamma\) and ‘AT’ denotes the ‘Threshold’ and ‘Adaptive Threshold’, respectively. \(\alpha\) refers to the proportion of data pruning. \(\Phi_{1}\), \(\Phi_{2}\), \(\Phi_{3}\) and \(\Phi_{4}\) refer to ‘Splicing’, ‘Gaussian Blur’, ‘Occlusion’, and ‘Perspective Transform’, respectively. ‘Retri.’ and ‘Sub-C’ denotes ‘retrieval-based’ and ‘Sub-Class’, respectively. Mask2former with Swin-B is adopted as the baseline.**
\begin{table}
\end{table}
Table 5: **Performance for Domain Generalization between different datasets. Mask2former [11] with ResNet50 is used as the baseline. Person and Rider classes of Cityscapes [14] are consider as the same class, _i.e._, Person in the experiment.**
## Appendix A More Details
**Evaluation Metrics.**_Mean intersection-over-union (mIoU)_[19, 11], as the common metric of semantic segmentation, is used to evaluate the performance. For open-vocabulary segmentation, following the prior [17, 12], the mIoU averaged on seen classes, unseen classes, and their _harmonic mean_ are used.
**Mask Smoothness.** The mask \(B_{\hat{\gamma}}\) generated by the Dense CRF often contains jagged edges and numerous small regions that do not correspond to distinct objects in the image. To address these issues, we trained a segmentation model \(\mathbf{\theta}\) (_i.e._, Mask2Former), using the mask \(B_{\hat{\gamma}}\) generated by the Dense CRF as input. We then used this model to predict the pseudo labels for the training set of synthetic data, resulting in the final semantic mask annotation.
**Cross Validation for Noise Learning.** In the experiment, we performed three-fold cross-validation for each class. The \(k\)-fold cross-validation (CV) is a process in which all data is randomly split into \(k\) folds, in our case \(k=3\); the model is then trained on \(k-1\) folds, while one fold is left out to test the quality.
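A brief scikit-learn sketch of obtaining out-of-fold predictions with \(k=3\); the toy classifier and random data stand in for the segmentation model and synthetic dataset.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
y = rng.integers(0, 2, 120)

oof_prob = np.zeros(len(y))                        # out-of-fold probabilities
for train_idx, test_idx in KFold(n_splits=3, shuffle=True, random_state=0).split(X):
    clf = LogisticRegression(max_iter=500).fit(X[train_idx], y[train_idx])
    oof_prob[test_idx] = clf.predict_proba(X[test_idx])[:, 1]
print(oof_prob[:5])
```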
## Appendix B More Ablation Study
**What causes the performance gap between synthetic and real data.** The domain gap and mask precision are the main reasons for the performance gap between synthetic and real data. Tab. 7 is set up to further explore this problem. Li _et al._[36] show that the pseudo mask of the synthetic image from Mask2former [11] pre-trained on VOC 2012 is quite accurate and can serve as the ground truth. Thus, we also use the pseudo labels from the pre-trained Mask2former to train the model, where we argue that the pseudo label is accurate. As shown in Tab. 7, mask precision causes a \(6.4\%\) mIoU gap, and the domain gap of images causes a \(4.5\%\) mIoU gap. Notably, for the bird class, the use of synthetic data with pseudo labels resulted in better results than the corresponding real images. This observation suggests that there may be no domain gap for the bird class in the VOC
\begin{table}
\begin{tabular}{l|l|c|c c c|c} & & & & \multicolumn{3}{c|}{Category/\%} \\ Train Set & Number & Backbone & bus & car & person & mIoU \\ \hline \multicolumn{6}{l}{_Train with Pure Real Data_} \\ & R: 20.2k & R50 & 87.9 & 82.5 & 79.4 & 83.3 \\ ADE20K & R: 20.2k & Swin-B & 93.6 & 86.1 & 84.0 & 87.9 \\ \hline \multicolumn{6}{l}{_Train with Pure Synthetic Data_} \\ & S: 6.0k & R50 & 43.4 & 67.3 & 60.2 & 57.0 \\ \multicolumn{6}{l}{**DiffuMask**} & S: 6.0k & Swin-B & 72.8 & 73.4 & 62.6 & 69.6 \\ \end{tabular}
\end{table}
Table 6: **The mIoU (%) of Semantic Segmentation on the ADE20K val.**
\begin{table}
\begin{tabular}{l|c c c|c} Annotation & Bird & Dog & Person & Sofa & \(mIoU\) \\ \hline Real Image, Manual Label & 93.7 & 96.8 & 92.5 & 65.6 & 87.2 \\ Synthetic Image, Pseudo Label & 95.2 & 86.2 & 89.9 & 59.5 & 82.7 \\ Synthetic Image, DiffuMask & 92.9 & 86.0 & 76.5 & 49.8 & 76.3 \\ \end{tabular}
\end{table}
Table 7: **Impact of Mask Precision and Domain Gap on VOC 2012 val.** Mask2former [11] with Swin-B is used as the baseline. ‘Pseudo’ denotes pseudo mask annotation from Mask2former [11] pre-trained on VOC 2012.
\begin{table}
\begin{tabular}{l|c c|c} Attention Map \(\mathcal{A}\) & Bird & Dog & \(mIoU\) \\ \hline \(8\times 8\) & 40.5 & 46.0 & 43.3 \\ \(16\times 16\) & 58.8 & 69.9 & 64.4 \\ \(32\times 32\) & 86.2 & 82.3 & 84.3 \\ \(64\times 64\) & 45.2 & 41.1 & 43.2 \\ \(16\times 16.32\times 32\),\(32\times 32\) & 89.9 & 84.2 & 87.1 \\ Average & 92.9 & 86.0 & 89.5 \\ \end{tabular}
\end{table}
Table 8: **Impact of different attention maps from different layers.** Mask2former [11] with Swin-B is used as the baseline.
Figure 8: **Gradient from Text Tokens for Stable Diffusion.** Prompt language: ‘a horse on the grass’.
Figure 9: **Impact of Backbone.** Stronger backbone is robust for classification, False Negative, and mask precision.
\begin{table}
\begin{tabular}{l|c c c c|c} Backbone & Bird & Dog & Sheep & Horse & Person & \(mIoU\) \\ \hline ResNet 50 & 86.7 & 65.1 & 64.7 & 64.6 & 71.0 & 70.3 \\ ResNet 101 & 86.7 & 66.8 & 65.3 & 63.4 & 70.2 & 70.5 \\ Swin-B & 92.9 & 86.0 & 92.2 & 89.0 & 76.5 & 87.3 \\ Swin-L & 92.8 & 86.4 & 92.3 & 88.3 & 77.3 & 87.4 \\ \end{tabular}
\end{table}
Table 9: **Impact of Backbone on VOC 2012 val.** Mask2former [11] is used as the baseline.
2012 dataset.
**Backbone.** Tab. 9 presents the ablation study for the backbone. For some classes, _e.g._ sheep, a stronger backbone can bring obvious gains, _i.e._ Swin-B achieves a \(27.5\%\) mIoU improvement over ResNet 50. The mIoU over all classes with Swin-B achieves a \(19.2\%\) improvement. It is an interesting and novel insight that a stronger backbone can reduce the domain gap between synthetic and real data. To give further analysis of this, we present some visual result comparisons in Fig. 9. Swin-B brings an obvious improvement in classification, False Negatives, and mask precision. Compared with the gain between different backbones, different versions (sizes) of the same backbone do not seem to obtain an effective gain, _e.g._ ResNet101 only obtains a \(0.5\%\) mIoU improvement over ResNet50.
**Attention Maps of different resolutions.** Table 8 shows the results of an ablation study conducted on cross attention maps with varying resolutions from different layers. The performance of both high resolution (\(64\times 64\)) and low resolution (\(8\times 8\)) maps was found to be unsatisfactory. This can be attributed to the lack of detail in low-resolution maps and the presence of noise in high-resolution maps. On the other hand, integrating (by averaging) all attention maps produced the best performance.
## Appendix C Experiment on ADE20K
ADE20K, a more challenging dataset, is also used to evaluate DiffuMask. Tab. 6 presents the results of three categories (bus, car, person) on ADE20K. With fewer synthetic images (6k), we achieve a performance competitive with that of a large amount of real images (\(20.2\)k). Compared with the other two categories, the car class achieves the best performance, with \(73.4\%\) mIoU.
## Appendix D Visual explanation with gradients.
The gradient is another way to provide an excellent visual explanation of the generative model; Fig. 8 presents the corresponding gradient visualization for different text tokens. Given a text prompt \(\mathcal{P}\), _i.e._, 'a horse on the grass', and a random Gaussian image noise \(z\), the text-guided generative model is in principle capable of modeling conditional distributions of the form \(\mathcal{I}:=p(z|\tau_{\theta}(\mathcal{P}))\), where \(\tau_{\theta}(\mathcal{P})\in\mathbb{R}^{N\times d}\) and \(\tau_{\theta}\) refers to the text encoder [47]. For the \(k\)-th word \(t_{k}\) (_e.g._, '_horse_' in Fig. 8) from \(\mathcal{P}\), we can compute the corresponding gradient as follows: \(\alpha_{k}=\frac{\partial\mathcal{I}}{\partial t_{k}}\), where \(\alpha_{k}\) is the gradient weight of the \(k\)-th word \(t_{k}\). The corresponding gradient weight can be computed by adding a small perturbation (that is, numbers close to zero) to \(t_{k}\). For convenience, we add a small perturbation \(\triangle\beta\in\mathbb{R}^{d}\) (\(\triangle\beta=\mu\mathbf{1}_{d}\), where \(\mathbf{1}_{d}\) is the all-ones vector and \(\mu\) is a small weight) to the text feature map \(\tau_{\theta}(t_{k})\) and obtain the corresponding gradient visualization, as shown in Fig. 8. The gradient visualization is highly class-discriminative (_i.e._ the 'horse' explanation exclusively highlights the '_horse_' regions).
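The finite-difference view described above can be sketched with a toy model: perturb the feature of one text token by a small \(\triangle\beta\) and measure the per-pixel change of the output. The random linear "generator" below is only a stand-in for the diffusion model.

```python
import torch

torch.manual_seed(0)
N, d, H, W = 6, 32, 16, 16                       # toy prompt length / sizes

text_feat = torch.randn(N, d)                    # tau_theta(P) (placeholder)
generator = torch.nn.Linear(N * d, H * W)        # stand-in for the generative model

def generate(feat):
    return generator(feat.flatten()).reshape(H, W)

k, mu = 2, 1e-3                                  # token index (e.g. "horse"), step size
delta = mu * torch.ones(d)                       # Delta beta = mu * 1_d
perturbed = text_feat.clone()
perturbed[k] += delta

with torch.no_grad():
    sensitivity = (generate(perturbed) - generate(text_feat)).abs() / mu
print(sensitivity.shape)                         # per-pixel sensitivity map for token k
```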
## Appendix E Limitation.
DiffuMask mainly has two limitations: 1) The inference speed of the text-to-image diffusion model is relatively slow. With 8 Tesla V100 GPUs, generating \(10k\) images usually takes around 8 hours. Therefore, scaling up the synthetic dataset to the million level is difficult for some institutions, and it is the main reason why we do not provide more experiments for other datasets with rich categories, _e.g._ ADE20K or COCO. Similarly, we cannot scale up the synthetic data to the million level due to the limitation of time and computational cost. But we argue the cost can be reduced by adopting advanced faster sampling for diffusion models [57, 29]. 2) There are still obvious result gaps for some classes, _e.g._ person on VOC 2012. The main reason is the obvious domain gap for these classes. The synthetic images usually present a simple foreground and background, while the real images contain more examples with multiple views, multiple scales, blur, and occlusion. Even so, our DiffuMask, as the first work to synthesize images and mask annotations using an image-text pre-trained diffusion model, provides promising performance and many new insights. We verify the feasibility of training with text-driven synthetic data and its application in the real world, where it is worth mentioning that the diffusion model is trained with only language-image pairs.
|
2305.15769 | MERGE: Fast Private Text Generation | The drastic increase in language models' parameters has led to a new trend of
deploying models in cloud servers, raising growing concerns about private
inference for Transformer-based models. Existing two-party privacy-preserving
techniques, however, only take into account natural language understanding
(NLU) scenarios. Private inference in natural language generation (NLG),
crucial for applications like translation and code completion, remains
underexplored.In addition, previous privacy-preserving techniques suffer from
convergence issues during model training and exhibit poor inference speed when
used with NLG models due to the neglect of time-consuming operations in
auto-regressive generations. To address these issues, we propose a fast private
text generation framework for Transformer-based language models, namely
MERGE.MERGE reuses the output hidden state as the word embedding to bypass the
embedding computation and reorganize the linear operations in the Transformer
module to accelerate the forward procedure. Extensive experiments show that
MERGE achieves a 26.5x speedup to the vanilla encrypted model under the
sequence length 512, and reduces 80\% communication cost, with an up to 10x
speedup to state-of-the-art approximated models. | Zi Liang, Pinghui Wang, Ruofei Zhang, Nuo Xu, Lifeng Xing, Shuo Zhang | 2023-05-25T06:27:19Z | http://arxiv.org/abs/2305.15769v3 | # MERGE: Fast Private Text Generation
###### Abstract
Recent years have seen increasing concerns about the private inference of NLP services and Transformer models. However, existing two-party privacy-preserving methods solely consider NLU scenarios, while the private inference of text generation such as translation, dialogue, and code completion remains unsolved. Besides, when migrated to NLG models, existing privacy-preserving methods perform poorly in terms of inference speed, and suffer from the convergence problem during the training stage. To address these issues, we propose MERGE, a fast private text generation framework for Transformer-based language models. Specifically, MERGE reuses the output hidden state as the word embedding to bypass the embedding computation, and reorganizes the linear operations in the Transformer module to accelerate the forward procedure. Based on these two optimizations, extensive experiments show that MERGE can achieve a 26.5x speedup under the sequence length 512, and reduce 80% of communication bytes, with an up to 10x speedup over existing state-of-the-art models.
## 1 Introduction
Recently, pre-trained language models (PLMs) based on Transformer Vaswani et al. (2017) have attracted significant attention because of their exceptional performance in downstream tasks. However, the deployment of such PLM-based services in real-world situations raises concerns about privacy. For example, existing NLP services like Copilot1 and ChatGPT2 require users to send their text queries to servers, where the information contained, such as source code, the medical information, and personal preferences, may be sensitive to users.
Footnote 1: [https://github.com/features/copilot](https://github.com/features/copilot)
Footnote 2: [https://chat.openai.com](https://chat.openai.com)
To alleviate the privacy problem, some of the recent works Hao et al. (2022), Chen et al. (2022) have developed 2-party secure inference services for PLMs by secure Multi-Party Computation (MPC). MPC ensures privacy by encrypting user data and model weights and sharing them secretly. However, PLMs inference under MPC is considerably slow compared to the plain-text version, which limits its application in real-world services. To address this issue, some works have attempted to simplify the bottleneck operation such as activation functions and softmax in the Transformer model. For instance, Mishra et al. (2020) uses Neural Architecture Search (NAS) to replace the activation functions with linear layers, while Li et al. (2022) approximates the exponential operation with polynomial functions.
Though designed for Transformer, these works Hao et al. (2022), Chen et al. (2022), Li et al. (2022) solely explore the scenario of NLU inference (e.g. GLUE Wang et al. (2019)), and our experiments suggest that they have **no** significant improvements in text generation tasks (Figure 2). By illustrating the inference bottleneck of NLU and NLG inference procedure, our experiments show that auto
easoning_ (i.e. GenTime), which slows down the whole inference procedure heavily.
In this paper, we explore to accelerate the generation procedure of language models. To this end, we propose _MERGE_3, a fast and easy-to-adopt framework for private text generation. _MERGE_ is compatible with previous MPC-based works (e.g. MPCformer, THE-X, and IRON) and mainstream PLMs (e.g. GPT-2 Radford et al. (2019), T5 Raffel et al. (2020), and Bart Lewis et al. (2020)). In _MERGE_, we first put forward a strategy called **embedding resending**, which directly uses the output hidden state as the new input token embedding. Embedding resending helps to bypass the _embedding table query_ operation and decouple the computation between _forward representation learning_ and _next token sampling_. Besides, following the recent research Hassid et al. (2022) in attention mechanism, we approximate _self-attention_ with _constant attention_ matrices and merge tensor computations in the Transformer module before inference. These two methods are challenging because: 1) PLMs are usually sensitive to input embeddings, while there are some unavoidable errors in the generated embeddings; 2) constant attention in our **merge module** might hurt the performance of PLMs. To address these issues, we first propose an embedding alignment and augmentation task to enhance the robustness of PLMs to input embeddings. Besides, we employed a weighted distillation training task for approximation models, which allowed us to overcome the negative effects of constant attention. Our empirical experiments on popular text generation tasks such as E2E Dusek et al. (2018), Multiwoz 2.1 Eric et al. (2020), and DailyDialog Li et al. (2017) demonstrate the effectiveness of _MERGE_. Specifically, it can achieve a considerable speedup of 7.75x to GPT-2 and 10.89x to T5 under the sequence length 128, while maintaining an acceptable performance with losses in BERTscore Zhang et al. (2020), BARTscore Yuan et al. (2021), and Rouge-L Lin (2004) of only 0.02 (under 0.92), 0.14 (under -2.90), and 0.03 (under 0.44), respectively.
Footnote 3: MPC-based **E**mbedding **R**esending **GE**neration with layer **MERGE**
## 2 Related Work
**Secure Multi-Party Computation.** The goal of MPC is to enable private computations among multiple parties. In general, an MPC system may employ various secure techniques, including garbled circuits Yao (1986), Goldreich et al. (2019), fully homomorphic encryption (FHE) Gentry (2009), and homomorphic secret sharing (HSS) Boyle et al. (2016). Thanks to the rich support of existing MPC methods, it is practicable to implement the private inference of Transformer models. Therefore, rather than building a new MPC system, this paper focuses on accelerating the private generation procedure for Transformer-based language models. As a result, our method _MERGE_ can provide much faster text generation which will offer significant benefits to existing mainstream MPC implementations. We detail the MPC system used in this paper in Appendix A.
**MPC-oriented Approximations.** Although existing MPC techniques can provide secure inference for neural networks, they usually suffer from prohibitively high communication delays and computation costs. This is primarily due to the critical nonlinear operations within neural networks. Therefore, some works aim to approximate these bottleneck operations in neural networks. For instance, Chen et al. (2022) replaces the GLU activation function in the Transformer with ReLU, and Hao et al. (2022) reformulate the \(Tanh(\cdot)\) function in GeLU based on optimized exponential operations. Besides, Mishra et al. (2020) approximates the ReLU function with linear layers,
Figure 1: Inference Time Comparison among NLU and NLG models.
and thus it can replace the MPC method used for ReLU, i.e. the garbled circuits, with secret sharing and Beaver triples. Similarly, Li et al. (2022) approximates GeLU with ReLU and quadratic functions. For the softmax operation in the attention mechanism, Li et al. (2022) approximates it by \(softmax(x)\approx\frac{ReLU(x)}{\sum ReLU(x)}\) or \(softmax(x)\approx\frac{(x+c)^{2}}{\sum(x+c)^{2}}\). However, these approximations were designed for the "one-time" inference such as NLU models (e.g. BERT), and are not optimized for auto-regressive generative models (e.g. GPT-series) that execute the forward inference multiple times.
## 3 Preliminary
### Text Generation with Language Models
The text generation task (e.g. dialogue) aims to generate the desired sequence \(y\) (e.g. the response of the chatbot) under the given prefix text \(p\) (e.g. the dialogue context) with the language model \(p_{\theta}(y|p)\). Typically, existing language models generate \(y\) in an _auto-regressive_ manner, i.e.
\[p(y|p)=\prod_{t=1}^{|y|}p(x_{t}^{y}|p,x_{<t}^{y}), \tag{1}\]
where the \(x_{t}^{y}\) denotes the \(t\)-th generated token of \(y\), and \(x_{<t}^{y}\) denotes the generated sequence of \(y\) at step \(t\).
In Equation 1, if we denote the one-hot representation of \((p,x_{<t}^{y})\) as \(\textbf{x}_{t}\) with text length \(N_{t}\), then the generation procedure can be divided into the following three stages:
**a) Embedding table query**, i.e. \(\textbf{E}_{t}=f_{e}(\textbf{x}_{t})\), where \(f_{e}(\textbf{x}):\mathbb{R}^{N_{t}\times V}\rightarrow\mathbb{R}^{N_{t} \times d}\) is the embedding layer that maps the \(V\)-length index representation into a \(d\)-dimension semantic space;
**b) Representation learning**, i.e. \(\textbf{h}_{t}^{n_{l}}=\textbf{f}_{tr}(\textbf{E}_{t}^{\prime})\), where \(\textbf{f}_{tr}:\mathbb{R}^{N_{t}\times d}\rightarrow\mathbb{R}^{N_{t}\times d}\) is a \(n_{l}\)-layer transformer model, \(\textbf{h}_{t}^{n_{l}}\) is the output hidden state, and \(\textbf{E}_{t}^{\prime}\) is the combination of positional embeddings, token embeddings \(\textbf{E}_{t}\), and others.
**c) Next token sampling**, i.e. \(x_{t}^{y}\sim f_{cls}(\textbf{h}_{t}^{n_{l}})[N_{t}]\), where \(f_{cls}(\textbf{h}_{t}^{n_{l}}):\mathbb{R}^{N_{t}\times d}\rightarrow\mathbb{ R}^{N_{t}\times V}\) is the linear head, \(f_{cls}(\textbf{h}_{t}^{n_{l}})[N_{t}]\) denote the \(N_{t}\)-th item of \(f_{cls}(\textbf{h}_{t}^{n_{l}})\), and \(\sim\) denotes the sampling strategy (e.g. greedy search) in a generation.
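To make the three stages concrete, the following is a minimal sketch of the vanilla auto-regressive loop; the vocabulary size, embedding table, classifier head, and the trivial stand-in for \(\textbf{f}_{tr}\) are toy placeholders rather than parts of the actual models used in this paper, and greedy decoding stands in for the sampling strategy.

```python
# Minimal sketch of the three-stage auto-regressive loop (toy sizes,
# random placeholder weights, greedy decoding).
import numpy as np

rng = np.random.default_rng(0)
V, d = 50, 16                       # toy vocabulary and hidden sizes
E_table = rng.normal(size=(V, d))   # embedding table f_e
W_cls = rng.normal(size=(d, V))     # linear head f_cls

def f_tr(E):
    # stand-in for the n_l-layer Transformer f_tr (identity map here)
    return E

tokens = [3, 7, 11]                 # token ids of the prefix p
for _ in range(5):                  # generate 5 tokens
    E_t = E_table[tokens]           # a) embedding table query
    h = f_tr(E_t)                   # b) representation learning
    logits = h[-1] @ W_cls          # c) next token sampling (greedy)
    tokens.append(int(np.argmax(logits)))
print(tokens)
```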
### Transformer Module
In Section 3.1 **b)** the Transformer model \(\textbf{f}_{tr}\) can be seen as a stack of transformer modules. Specifically, each transformer module \(f_{tr}^{n}:\mathbb{R}^{N_{t}\times d}\rightarrow\mathbb{R}^{N_{t}\times d}\) consists of the following three components:
**a) Projection**, i.e. \(\textbf{Q}^{n},\textbf{K}^{n},\textbf{V}^{n}=W_{Q^{n}}^{T}\textbf{h}^{n-1},W_ {K^{n}}^{T}\textbf{h}^{n-1},W_{V^{n}}^{T}\textbf{h}^{n-1}\), where \(W_{Q^{n}},W_{K^{n}},W_{V^{n}}\in\mathbb{R}^{d\times(d/N_{h})\times N_{h}}\) are \(N_{h}\)-head projection matrices. Particularly, \(\textbf{h}^{0}=\textbf{E}_{t}^{\prime}\).
Figure 2: An Overview of Private Text Generation Framework of our MERGE method.
**b) Attention4**, i.e. \(\textbf{x}_{att}^{n}=f_{ln}(f_{dr}(W_{d^{n}}^{T}.(\text{Concat}(A^{n}.\textbf{V}^{n} ))+b_{d^{n}})+\textbf{h}^{n-1})\), where \(A^{n}\in\mathbb{R}^{N_{h}\times N_{t}\times N_{t}}\) is the \(N_{h}\)-head attention matrix that can be calculated by \(A=f_{dr}(softmax(\textbf{Q}^{n}\cdot\textbf{K}^{nT}/\sqrt{d_{k}}))\), \(d_{k}=d/N_{h}\), \(W_{d^{n}}\in\mathbb{R}^{d\times d}\) is the weight matrix, \(b_{d^{n}}\in\mathbb{R}^{d}\) is the bias, f\({}_{dr}\) denotes the dropout operation Srivastava et al. (2014), and \(f_{ln}\) is the layer normalization Ba et al. (2016) layer.
Footnote 4: Note that there are some slight differences for cross attention, e.g. in cross attention **K** and **V** are calculated with the output hidden state of the encoder. Since this has no impact on our method in Section 4, we simply discuss the situation of self-attention.
Here the layer normalization is defined as
\[f_{ln}(\textbf{x})=\frac{\textbf{x}-E[\textbf{x}]}{\sqrt{Var[\textbf{x}]+ \epsilon}}\odot\gamma+\beta, \tag{2}\]
in which \(\epsilon\) is a tiny number, \(\odot\) denotes the element-wise product, and \(E[\textbf{x}]\) and \(Var[\textbf{x}]\) denote the mean and variance of **x**, respectively.
**c) Feed forward**, i.e. \(\textbf{h}^{n}=f_{ln}(f_{dr}(W_{O}^{nT}\cdot\text{Act}(W_{I}^{nT}\cdot\textbf{x}_{att}^{n}+b_{I}^{n})+b_{O}^{n})+\textbf{x}_{att}^{n})\), where \(W_{I}^{n}\in\mathbb{R}^{d\times d_{I}}\) and \(W_{O}^{n}\in\mathbb{R}^{d_{I}\times d}\) are weighted matrices, \(b_{I}^{n}\in\mathbb{R}^{d_{I}}\) and \(b_{O}^{n}\in\mathbb{R}^{d}\) are bias vectors, \(d_{I}\) is the dimension of intermediate hidden states, and Act(\(\cdot\)) denotes the activation functions such as ReLU Agarap (2018) and GeLU Hendrycks and Gimpel (2016).
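The module above can be summarized in a few lines of numpy. The sketch below is a simplified single-head version with dropout omitted and random placeholder weights, written with the row-vector convention \(hW\) rather than \(W^{T}h\); it is only an illustration of the computation pattern, not the implementation used in our experiments.

```python
# Simplified single-head Transformer module (dropout omitted,
# random placeholder weights, row-vector convention h @ W).
import numpy as np

rng = np.random.default_rng(1)
N, d, d_I = 4, 8, 16
h_prev = rng.normal(size=(N, d))                    # h^{n-1}

W_Q = rng.normal(size=(d, d))
W_K = rng.normal(size=(d, d))
W_V = rng.normal(size=(d, d))
W_d, b_d = rng.normal(size=(d, d)), np.zeros(d)
W_I, b_I = rng.normal(size=(d, d_I)), np.zeros(d_I)
W_O, b_O = rng.normal(size=(d_I, d)), np.zeros(d)
gamma, beta = np.ones(d), np.zeros(d)

def layer_norm(x):
    mu, var = x.mean(-1, keepdims=True), x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + 1e-5) * gamma + beta

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

# a) projection
Q, K, Vv = h_prev @ W_Q, h_prev @ W_K, h_prev @ W_V
# b) attention + residual connection + layer normalization
A = softmax(Q @ K.T / np.sqrt(d))
x_att = layer_norm(A @ Vv @ W_d + b_d + h_prev)
# c) feed forward + residual connection + layer normalization
ff = np.maximum(x_att @ W_I + b_I, 0.0) @ W_O + b_O  # ReLU activation
h_new = layer_norm(ff + x_att)
print(h_new.shape)                                   # (4, 8)
```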
## 4 Merge
In this section, we present _MERGE_, a fast text generation framework for private inference. Illustrated in Figure 3, _MERGE_ consists of two independent optimizations, the _embedding resending_ (ER) strategy, and a new architecture built upon the _merge module_ (MM).
### Embedding Resending
As shown in Figure 3, the ER strategy aims to speed up the generation process by avoiding time-consuming operations (e.g. _embedding table query_ in Section 3.1**a)**) and decoupling the computation between _representation learning_ (Section 3.1**b)**) and _token sampling_ (Section 3.1**c)**). In detail, ER simply sets the newly added token embedding \(\textbf{E}_{t}[N_{t}]\) to the generated hidden state at the last step (\(\textbf{h}_{t-1}^{n_{l}}[N_{t-1}]\)), i.e.
\[\textbf{E}_{t}=[\textbf{E}_{t-1};\textbf{h}_{t-1}^{n_{l}}[N_{t-1}]]=[\textbf{E}_{0};\textbf{h}_{t-1}^{n_{l}}], \tag{3}\]
where \(\textbf{E}_{0}\) denotes the token embeddings of the prefix \(p\) and "\(;\)" denotes the concatenation operation.
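A toy sketch of the resending loop is given below; the random embedding table, classifier head, and identity stand-in for the Transformer are hypothetical placeholders, and only the update of \(\textbf{E}_{t}\) in the last line of the loop reflects Equation 3.

```python
# Toy sketch of embedding resending (Eq. 3): the last hidden state is
# appended as the next token embedding, so no embedding table query is
# needed inside the loop.
import numpy as np

rng = np.random.default_rng(0)
V, d = 50, 16
E_table = rng.normal(size=(V, d))   # used once, for the prefix only
W_cls = rng.normal(size=(d, V))

def f_tr(E):
    # stand-in for the Transformer f_tr
    return E

tokens = [3, 7, 11]
E = E_table[tokens]                 # E_0: token embeddings of the prefix p
for _ in range(5):
    h = f_tr(E)
    tokens.append(int(np.argmax(h[-1] @ W_cls)))  # token sampling (output only)
    E = np.vstack([E, h[-1]])       # resend h_{t-1}[N_{t-1}] as E_t[N_t]
print(tokens)
```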
Intuitively, Equation 3 regards _Embedding table query_ (Section 3.1**a)**) as the inverse procedure of _next token sampling_ (Section 3.1**c)**), which implies that hidden states and token embeddings are in the same representation space, and the embedding layer \(f_{e}\) is the inverse function of \(f_{cls}\). Therefore,
Figure 3: Generation Procedure and Architecture of MERGE.
to align the representation between \(\textbf{h}_{t-1}^{n_{t}}[N_{t-1}]\) and \(\textbf{E}_{t}[N_{t}]\), we design a training task that maximizes the cosine similarity between these vectors, i.e.
\[\mathcal{L}_{cos}=\frac{1}{N_{tr}\cdot N}\sum_{i}^{N_{tr}}\sum_{t=1}^{N}1-cosine (\textbf{h}_{i,t-1}^{n_{l}}[N_{t-1}],\textbf{E}_{i,t}[N_{t}]), \tag{4}\]
where \(cosine(a,b)=\frac{a\cdot b}{||a||\cdot||b||}\) is the cosine similarity, \(N_{tr}\) is the size of the training set, and \(N\) denotes the sequence length.
In Equation 4 we select the cosine similarity instead of mean square error (MSE) because the inner product (e.g. _self-attention_ in Section 3.2**b**)) plays a key role in the Transformer module.
Besides, we observe that the error of token embeddings significantly impacts the performance of the Transformer model \(\textbf{f}_{tr}\) and leads to nonsensical sentence generation once the MSE value exceeds \(10^{-3}\) (Section 3.2). To enhance the robustness of \(\textbf{f}_{tr}\), we introduce an _embedding augmentation_ method that first masks each element \(e_{t}\) in \(\textbf{E}_{t}\) with a rate \(p\), and then adds uniform noise sampled from the small interval \((-\epsilon,\epsilon)\), i.e.
\[\tilde{e}_{t}=m_{t}\cdot(e_{t}+n_{t}), \tag{5}\]
where \(m_{t}\sim\text{Bernoulli}(1-p)\) and \(n_{t}\sim\text{Uniform}(-\epsilon,\epsilon)\).
Thus the cross-entropy loss can be formulated as
\[\mathcal{L}_{ce}=-\frac{1}{N_{tr}\cdot N}\sum_{i}^{N_{tr}}\sum_{t=1}^{N}\textbf {x}_{t}[N_{t}]\cdot\text{log}f_{cls}(\textbf{f}_{tr}(\mathbf{\tilde{E}}_{t}^ {\prime}))[N_{t}], \tag{6}\]
where \(\mathbf{\tilde{E}}_{t}^{\prime}\) is the combination of noised token embedding \(\mathbf{\tilde{E}}_{t}\) and others.
The overall training loss can be formulated as
\[\mathcal{L}=\lambda\mathcal{L}_{cos}+(1-\lambda)\mathcal{L}_{ce}, \tag{7}\]
where \(\lambda\in[0,1]\) is the weighting factor.
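The following sketch illustrates how the pieces of Equations (4)-(7) fit together for a single sequence. The hidden states, embeddings, targets, and the linear map standing in for \(f_{cls}(\textbf{f}_{tr}(\cdot))\) are random placeholders, so this is only a shape-level illustration of the objective, not our training code.

```python
# Shape-level sketch of the training objective, Eqs. (4)-(7).
import numpy as np

rng = np.random.default_rng(2)
N, d, V = 6, 16, 50
lam, p, eps = 0.5, 0.1, 0.01

h_last = rng.normal(size=(N, d))    # h_{t-1}^{n_l}[N_{t-1}], t = 1..N
E_next = rng.normal(size=(N, d))    # E_t[N_t], t = 1..N

# Eq. (4): cosine alignment between resent hidden states and embeddings
cos = np.sum(h_last * E_next, -1) / (
    np.linalg.norm(h_last, axis=-1) * np.linalg.norm(E_next, axis=-1))
L_cos = np.mean(1.0 - cos)

# Eq. (5): embedding augmentation (uniform noise, then masking)
m = rng.binomial(1, 1 - p, size=E_next.shape)
n_noise = rng.uniform(-eps, eps, size=E_next.shape)
E_aug = m * (E_next + n_noise)

# Eq. (6): cross entropy; a random linear map stands in for f_cls(f_tr(.))
W_toy = rng.normal(size=(d, V))
logits = E_aug @ W_toy
targets = rng.integers(0, V, size=N)
z = logits - logits.max(-1, keepdims=True)
log_probs = z - np.log(np.exp(z).sum(-1, keepdims=True))
L_ce = -np.mean(log_probs[np.arange(N), targets])

# Eq. (7): weighted combination
L = lam * L_cos + (1 - lam) * L_ce
print(L_cos, L_ce, L)
```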
### Layer Merging
In this subsection, we focus on designing an efficient approximation of the Transformer module \(f_{tr}\) (Section 3.2), i.e. the merge module \(f_{mer}\), to accelerate the inference in the _linear computation_ and _softmax_ function.
Following recent research Hassid et al. (2022), we first replace the dynamic self-attention matrix \(A^{n}\) with a constant attention matrix \(C^{n}\in\mathbb{R}^{N_{h}\times N_{t}\times N_{t}}\). We initialize \(C^{n}\) with the average of \(A^{n}\) over the training set, i.e.
\[C^{n}=\frac{1}{N_{tr}}\sum_{i}^{N_{tr}}A_{i}^{n} \tag{8}\]
Besides, we approximate the layer normalization \(f_{ln}\) in Section 3.2**b**) with a simple element-wise multiplication \(f_{ln}^{\prime}(\textbf{x})=\textbf{x}\odot\gamma+\beta\), inspired by the previous work Chen et al. (2022). Consequently, the attention procedure presented in Section 3.2**b**) can now be approximated as
\[\textbf{x}_{att}^{n}=f_{ln}^{\prime}(f_{dr}(W_{d}^{nT}\cdot(\text{Concat}(C^{n }\cdot\textbf{V}^{n}))+b_{d}^{n})+\textbf{h}^{n-1}). \tag{9}\]
Based on Equation 9, we can simplify the whole computation procedure by reorganizing matrix computations in \(f_{tr}\) and merging intermediate linear operations. Specifically, we can merge the projection operation \(W_{V}^{n}\), the linear map \(W_{d}^{n}\), the approximated layer normalization function \(f_{ln}^{\prime}\), as well as the first linear map in the feed-forward network \(W_{I}^{n}\) into a single linear layer, i.e. a weighted matrix \(M_{u}^{n}\in\mathbb{R}^{d\times d_{I}}\) and a bias term \(b_{M_{u}}^{n}\in\mathbb{R}^{d_{I}}\), which can be formulated as:
\[\begin{split} M_{u}^{n}&=(W_{V^{n}}\cdot W_{d}^{n}+\textbf{1} )\odot\gamma\cdot W_{I}^{n},\\ b_{M_{u}}^{n}&=W_{I}^{nT}\odot\gamma\cdot b_{d}^{n}+W _{I}^{nT}\cdot\beta+b_{I}^{n},\end{split} \tag{10}\]
where \(\textbf{1}\in\mathbb{R}^{d\times d}\) denotes the identity matrix accounting for the residual connection in the attention module (Section 3.2 **b**)).
Equation 10 shows that there are **no** parameters dependent on input token embeddings \(\textbf{E}_{t}^{\prime}\). Hence, we can compute \(M_{u}\) and \(b_{M_{u}}\) before the inference stage, thus reducing the computation during model execution. As a result, we can simplify the entire Transformer module into only three tensor multiplications, i.e.
\[\textbf{x}_{o}^{n}=f_{mer}(\textbf{h}^{n-1})=f_{ln}(W_{O}^{nT}\cdot\text{Act}( M_{u}^{nT}\cdot C^{n}\cdot\textbf{h}^{n-1}+b_{M_{u}}^{n})+b_{O}^{n}) \tag{11}\]
Although it may appear possible to merge \(M_{u}^{n}\) with the previous linear matrix \(W_{O}^{n-1}\) in Equation 11 by approximating the layer normalization \(f_{ln}\) with \(f_{ln}^{\prime}\), we choose to keep them separate for the following two reasons. Firstly, the merged matrix \(W_{O}^{n-1}\cdot M_{u}^{n}\in\mathbb{R}^{d_{I}\times d_{I}}\) has significantly more parameters than \(W_{O}\) plus \(M_{u}\), since \(d_{I}\) is typically larger than \(d\). Secondly, removing \(f_{ln}\) in Equation 11 will hurt the convergence of the merge module heavily during training (detailed in Section 5.4).
In addition, to derive Equation 10 and Equation 11, we need to swap \(W_{V}^{n}\) and \(C^{n}\), which requires verifying that the matrix multiplications applied to the tensor \(\textbf{h}^{n-1}\) along different dimensions obey the **commutative law**. Proofs of this assertion are available in Appendix B.
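As a quick sanity check of this property (an illustration only, not a replacement for the proof in Appendix B), note that the constant attention acts on the sequence dimension while the value projection acts on the feature dimension, so the two products can be exchanged; the small numpy sketch below, with random stand-in tensors, confirms this numerically.

```python
# Numerical check: the constant attention C (sequence dimension) and the
# value projection W_V (feature dimension) can be applied in either order.
import numpy as np

rng = np.random.default_rng(3)
N, d = 5, 8
h = rng.normal(size=(N, d))      # h^{n-1}
C = rng.normal(size=(N, N))      # constant attention matrix
W_V = rng.normal(size=(d, d))    # value projection

lhs = C @ (h @ W_V)              # project first, then attend
rhs = (C @ h) @ W_V              # attend first, then project
print(np.allclose(lhs, rhs))     # True (associativity of matrix products)
```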
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{Time / Communication Time} & \multirow{2}{*}{Total Time} & \multirow{2}{*}{Speedup} \\ \cline{2-4} & EmbedTime & LinearTime & SoftmaxTime & & \\ \hline \multicolumn{6}{c}{_GPT2-base (124M)_} \\ \hline CrypTen & 321.44/52.33 & 251.93/74.21 & 454.61/113.96 & 1328.26 & 1x \\ MPCformer (sm2relu) & 316.75/51.55 & 253.57/76.56 & 181.14/45.59 & 1001.41 & 1.33x \\ MPCformer (sm2quad) & 318.16/50.88 & 253.30/75.16 & 158.45/57.40 & 972.50 & 1.36x \\ THE-X & 329.29/58.30 & 258.08/80.21 & 87.71/19.28 & 965.79 & 1.37x \\ MERGE (ours) & **5.17/0.87** & **157.50/53.97** & **0.00/0.00** & **171.38** & **7.75x** \\ MERGE (only ER) & 5.41/0.95 & 260.36/80.00 & 477.76/124.83 & 834.13 & 1.59x \\ MERGE (only MM) & 320.84/50.92 & 250.98/81.57 & 0.00/0.00 & 747.45 & 1.78x \\ \hline \multicolumn{6}{c}{_T5 (138M)_} \\ \hline CrypTen & 323.46/53.36 & 328.09/96.08 & 693.73/175.57 & 1569.41 & 1x \\ MPCformer (sm2relu) & 327.51/55.36 & 328.61/96.80 & 284.65/75.17 & 1207.63 & 1.30x \\ MPCformer (sm2quad) & 324.81/52.03 & 325.97/92.89 & 235.54/58.47 & 1149.07 & 1.37x \\ THE-X & 316.16/48.58 & 321.90/90.82 & 126.73/725.51 & 1050.28 & 1.49x \\ MERGE (ours) & **7.62/1.27** & **131.31/44.11** & **0.00/0.00** & **144.02** & **10.89x** \\ MERGE (only ER) & 8.24/1.58 & 211.57/65.19 & 596.74/166.50 & 874.36 & 1.79x \\ MERGE (only MM) & 322.38/51.35 & 221.57/69.22 & 0.00/0.00 & 693.30 & 2.26x \\ \hline \hline \end{tabular}
\end{table}
Table 1: Inference Time Comparison of Private Text Generation Models.
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline Model & EmbedBytes & LinearBytes & SoftmaxBytes & TotalBytes & Fraction \\ \hline \multicolumn{6}{c}{_GPT2-base (124M)_} \\ \hline CrypTen & 71.41GB & 159.36GB & 1.62GB & 322.54GB & 100.00\% \\ MPCformer (sm2relu) & 71.41GB & 135.54GB & 0.54GB & 317.20GB & 98.34\% \\ MPCformer (sm2quad) & 71.41GB & 135.54GB & 0.07GB & 316.73GB & 98.20\% \\ THE-X & 71.41GB & 135.54GB & 0.50GB & 319.14GB & 98.95\% \\ MERGE (ours) & **1.15GB** & **119.99GB** & **0.00GB** & **121.76GB** & **37.75\%** \\ MERGE (only ER) & 1.15GB & 160.63GB & 1.62GB & 168.51GB & 52.24\% \\ MERGE (only MM) & 71.41GB & 119.89GB & 0.00GB & 281.88GB & 87.39\% \\ \hline \multicolumn{6}{c}{_T5 (138M)_} \\ \hline CrypTen & 147.14GB & 199.97GB & 7.72GB & 380.45GB & 100.00\% \\ MPCformer (sm2relu) & 147.14GB & 199.97GB & 2.73GB & 364.74GB & 95.87\% \\ MPCformer (sm2quad) & 147.14GB & 199.97GB & 0.33GB & 362.33GB & 95.24\% \\ THE-X & 147.14GB & 199.97GB & 2.97GB & 369.73GB & 97.18\% \\ MERGE (ours) & **1.73GB** & **95.66GB** & **0.00GB** & **98.03GB** & **25.77\%** \\ MERGE (only ER) & 1.73GB & 120.17GB & 7.56GB & 132.44GB & 34.81\% \\ MERGE (only MM) & 73.72GB & 95.66GB & 0.00GB & 257.89GB & 67.79\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Averaged Communication Bytes for Private Text Generation.
## 5 Experiments
### Settings
**Datasets.** We evaluate _MERGE_ on three representative text generation tasks, including Multiwoz Eric et al. (2020), a human-human multi-turn task-oriented dialogue corpus, DailyDialog Li et al. (2017), a multi-turn chitchat dataset, and CommonGen Lin et al. (2020), a hard-constrained controlled text generation benchmark.
**Baselines.** We compare _MERGE_ with several state-of-the-art private inference models and frameworks, including:
\(\bullet\)**THE-X** Chen et al. (2022), one of the first approximation architectures for Transformer models;
\(\bullet\)**MPCformer**Li et al. (2022), the approximated model that aims to accelerate the inference procedure of Transformer;
\(\bullet\)**CrypTen** Knott et al. (2021), one of the MPC implementations for PyTorch.
**Evaluation Metrics.** We evaluate _MERGE_ in two dimensions: inference speed, and the effectiveness of approximation models. For inference speed, we record both the computation time and the communication bytes for each method. For the effectiveness of PLMs, we use Meteor Banerjee and Lavie (2005), CHRF++ Popovic (2017), NIST Lin and Och (2004), ROUGE family Lin (2004), BERTscore Zhang et al. (2020), and BARTscore Yuan et al. (2021) as the metrics.
### Implementation Details
We use GPT-2 (124M) Radford et al. (2019), T5-small Raffel et al. (2020), and Bart-base Lewis et al. (2020) as the basic evaluation backbones, with max sequence length \(128\). We train all models with learning rate \(3\times 10^{-5}\) and batch size \(4\) for \(3\) epochs, based on the implementation of huggingface Transformers Wolf et al. (2020). As for the distillation of the approximated models, we train our baselines under the same hyperparameter settings as in their source code, and train _MERGE_ for \(50000\) steps under the learning rate \(8\times 10^{-5}\). All experiments above are run on a single 32 GB Nvidia Tesla V100 GPU. Following previous works Li et al. (2022), for the experiments of private inference, we use two 32 GB Nvidia Tesla V100 GPUs to simulate the client and the server, with 10 GbE Ethernet bandwidth. We implement the whole MPC system based on CrypTen Knott et al. (2021), a semi-honest MPC framework built on PyTorch. The implementation details can be seen in Appendix A.
### Speed Evaluation
We evaluate the inference speed under two mainstream NLG architectures, i.e. decoder-only models represented by GPT-2, and encoder-decoder models represented by T5. We evaluate these two architectures with the sequence length 128, and record the total inference time as well as the time cost of each operation. As shown in Table 1, our method _MERGE_ can obtain a 7.75x speedup over the encrypted GPT-2, and 5.8x over MPCformer. Besides, the vanilla encrypted GPT-2 with our embedding resending (MERGE only ER) obtains a 59x speedup on _embedding table query_, and our merge module helps GPT-2 and T5 reduce half of the linear inference time and achieve zero time cost in the softmax of attentions. Another phenomenon is that MERGE achieves a higher speedup on T5 than on GPT-2, which is because in T5 every self-attention module is followed by a cross-attention module.
Under the same settings as Table 1, we also record the communication bytes between the client and the server, shown in Table 2. We can see that existing methods reduce the communication volume only slightly (less than 2% in GPT-2), while our method reduces the communication bytes by 62%, with reductions of 98% and 25% on _embedding table query_ and _linear operations_, respectively.
### Performance Evaluation
Based on the improvements of inference speed, we focus on the inference performance between our _MERGE_ method and other MPC frameworks. Table 3 shows the effectiveness of our methods and baselines, where the BERTscore of our _MERGE_ method is lower than MPCformer with ReLU approximation (MPCformer (sf2relu)) by 0.017, 0.017, and 0.001 in MultiWoz, CommonGen, and
DailyDialog, respectively. This demonstrates that our methods maintain comparable results to these baselines. Besides, Table 3 indicates that some acceleration methods designed for NLU models are not suitable for text generation models, i.e. they suffer from the convergence problem during training. For instance, THE-X replaces all _layer normalization_ operations with the approximate normalization, which we observed leads to the **out of time (OOT)** issue. Similarly, MPCformer that replaces the softmax function with quadratic functions (MPCformer (sf2quad)) faces the same problem, even though we train it with an elaborate layer-wise knowledge distillation.
## 6 Analysis
### Varying Sequence Lengths and Model Parameters
In this section, we dive to explore the effectiveness of our _MERGE_ method under longer sequence length and larger model parameters. For sequence length, we set it from 64 to 512, and record the
\begin{table}
\begin{tabular}{l|c c c c c c} \hline \hline Model & BERTscore & BARTscore & NIST & Rouge-L & METEOR & CHRF++ \\ \hline \multicolumn{6}{c}{_MultiWoz NLG_ Eric et al. [2020]} \\ \hline GPT-2 (124M) & 0.9237 & -2.9020 & 4.7907 & 0.4424 & 0.4900 & 43.2777 \\ +ER (no train) & 0.6860 & -5.0660 & 0.2325 & 0.0707 & 0.0425 & 3.9721 \\ \hline +MPCformer (sf2relu) & 0.9287 & -2.5377 & 5.7248 & 0.4806 & 0.5792 & 48.8241 \\ +MPCformer (sf2quad) & OOT & OOT & OOT & OOT & OOT & OOT \\ +THE-X & OOT & OOT & OOT & OOT & OOT & OOT \\ \hline +MERGE (Ours) & 0.8984 & -3.1464 & 3.7444 & 0.3970 & 0.4302 & 36.6983 \\ +MERGE only ER & 0.9155 & -2.8057 & 5.0812 & 0.4339 & 0.5102 & 44.2484 \\ +MERGE only MM & 0.9268 & -2.6277 & 5.6524 & 0.4778 & 0.5647 & 47.7262 \\ \hline TS-small (60M) & 0.9140 & -2.8916 & 4.245 & 0.4216 & 0.5225 & 45.0229 \\ +ER (no train) & 0.0 & -5.0347 & - & 0.0 & 0.0 & 0.0 \\ \hline +MPCformer (sf2relu) & 0.9126 & -2.7133 & 4.2952 & 0.4053 & 0.5354 & 45.5565 \\ +MPCformer (sf2quad) & OOT & OOT & OOT & OOT & OOT & OOT \\ +THE-X & OOT & OOT & OOT & OOT & OOT & OOT \\ \hline +MERGE (ours) & - & - & - & - & - & - \\ +MERGE only ER & 0.9053 & -3.1444 & 4.3608 & 0.3789 & 0.4379 & 38.2502 \\ +MERGE only MM & 0.9123 & -2.8744 & 4.6270 & 0.4176 & 0.4879 & 42.6995 \\ \hline Bart-base & 0.9301 & -2.5284 & 5.8325 & 0.4889 & 0.5823 & 49.1391 \\ +ER (no train) & 0.0491 & -5.0379 & - & 0.0038 & 0.0009 & 0.0507 \\ \hline +MPCformer (sf2relu) & 0.8318 & -4.1432 & 1.3971 & 0.1956 & 0.2157 & 19.2337 \\ +MPCformer (sf2quad) & OOT & OOT & OOT & OOT & OOT & OOT \\ +THE-X & OOT & OOT & OOT & OOT & OOT & OOT \\ \hline +MERGE (ours) & 0.8231 & -4.3357 & 0.5926 & 0.1974 & 0.2124 & 15.5329 \\ +MERGE only ER & 0.9305 & -2.4158 & 6.8489 & 0.5329 & 0.6070 & 52.5836 \\ +MERGE only MM & 0.8868 & -3.6204 & 3.5688 & 0.3022 & 0.3662 & 31.6465 \\ \hline \multicolumn{6}{c}{_CommonGen_ Lin et al. [2020]} \\ \hline GPT-2 (124M) & 0.9336 & -3.4710 & 3.7840 & 0.2744 & 0.3012 & 27.7038 \\ +ER (no train) & 0.5999 & -4.9864 & 0.0701 & 0.0192 & 0.0066 & 0.9470 \\ \hline +MPCformer (sf2relu) & 0.8943 & -4.1436 & 2.1301 & 0.1861 & 0.2691 & 27.6167 \\ +MPCformer (sf2quad) & OOT & OOT & OOT & OOT & OOT & OOT \\ +THE-X & OOT & OOT & OOT & OOT & OOT & OOT \\ \hline +MERGE (ours) & 0.8821 & -4.2479 & 0.6639 & 0.2025 & 0.1538 & 16.0573 \\ +MERGE only ER & 0.8953 & -3.8979 & 1.6796 & 0.2430 & 0.2110 & 20.8878 \\ +MERGE only MM & 0.9083 & -4.0885 & 2.2687 & 0.2026 & 0.2058 & 20.9888 \\ \hline \multicolumn{6}{c}{_DailyDialog_ Li et al. [2017]} \\ \hline GPT-2 (124M) & 0.8404 & -6.6387 & 0.5429 & 0.1142 & 0.1042 & 11.5089 \\ +ER (no train) & 0.7518 & -6.8820 & 0.1287 & 0.0566 & 0.0526 & 6.8067 \\ \hline +MPCformer (sf2relu) & 0.8161 & -6.3494 & 1.1102 & 0.1322 & 0.1261 & 12.0713 \\ +MPCformer (sf2quad) & OOT & OOT & OOT & OOT & OOT & OOT \\ +THE-X & OOT & OOT & OOT & OOT & OOT & OOT \\ \hline +MERGE (ours) & 0.8213 & -6.2384 & 0.3674 & 0.1233 & 0.0955 & 7.8091 \\ +MERGE only ER & 0.8205 & -6.5515 & 0.1069 & 0.1301 & 0.0833 & 6.5819 \\ +MERGE only MM & 0.8343 & -6.5800 & 1.0499 & 0.1525 & 0.1364 & 14.9039 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance Experiments of Private Text Generation.
averaged score as well as the minimum and maximum score for each point. Illustrated by Figure 4, we can see that, relative to the baselines, the inference time cost as well as the communication volume of _MERGE_ decreases as the sequence length grows. In detail, our _MERGE_ method can obtain a 26.5x speedup over the vanilla model and 11.8x over the existing state-of-the-art model THE-X under sequence length 512, and reduces almost 80% of the communication bytes. Besides, we can see that our embedding resending (ER) strategy obtains a **constant** embedding inference time, which is because ER bypasses the _embedding table query_, and thus its embedding time is only related to the generation prefix of samples.
For model parameters, we also evaluate _MERGE_ under different model sizes from 82M to 391M, and set the sequence length to 128. Different from Figure 4, Figure 5 demonstrates that there is no significant improvement of the speedup as the model size increases, but our _MERGE_ method still obtains an obvious speedup (\(\sim\)10x) over existing methods. Besides, the gap between our method and the baselines exhibits a conspicuous positive correlation with the model parameter size, particularly in linear time and communication volume, which demonstrates the effectiveness of _MERGE_.
### Robustness of Word Embedding for Language Models
Illustrated in Figure 6, we add random noise to the embeddings of Transformer models, and evaluate the degradation of text generation quality for different generation strategies. Concretely, Figure 6 demonstrates that the quality of vanilla auto-regressive generation declines abruptly once the MSE error reaches 0.08, while our method resists this degradation of generation quality.
## 7 Conclusion
In this paper, we address the problem of private text generation, and propose MERGE, a novel framework to accelerate the inference procedure of existing generative language models. MERGE consists of two optimizations, embedding resending and the merge module. The former speeds up the auto-regressive generation by bypassing the embedding table query of vanilla Transformer models, and the latter optimizes and merges the computation of Transformer modules. Extensive experiments demonstrate the superiority of our method both in inference speed and the generation quality. In the future, we plan to design a fast and plug-and-play MPC framework for existing language models.
Figure 4: Experimental Results varying Sequence Lengths.
Figure 5: Experimental Results varying Model Parameters. |
2306.14127 | Laplacian eigenvalue distribution and diameter of graphs
that if $2\le d\le n-2$, there are at most $n-d$ Laplacian eigenvalues in the
interval $[n-d+2, n]$. In this paper, we show that if $1\le d\le n-3$, there
are at most $n-d+1$ Laplacian eigenvalues in the interval $[n-d+1, n]$.
Moreover, we try to identify the connected graphs on $n$ vertices with diameter
$d$, where $2\le d\le n-3$, such that there are at most $n-d$ Laplacian
eigenvalues in the interval $[n-d+1, n]$. | Leyou Xu, Bo Zhou | 2023-06-25T05:07:31Z | http://arxiv.org/abs/2306.14127v1 | # Laplacian eigenvalue distribution and diameter of graphs
###### Abstract
Let \(G\) be a connected graph on \(n\) vertices with diameter \(d\). It is known that if \(2\leq d\leq n-2\), there are at most \(n-d\) Laplacian eigenvalues in the interval \([n-d+2,n]\). In this paper, we show that if \(1\leq d\leq n-3\), there are at most \(n-d+1\) Laplacian eigenvalues in the interval \([n-d+1,n]\). Moreover, we try to identify the connected graphs on \(n\) vertices with diameter \(d\), where \(2\leq d\leq n-3\), such that there are at most \(n-d\) Laplacian eigenvalues in the interval \([n-d+1,n]\).
**Keywords:** Laplacian spectrum, multiplicity of Laplacian eigenvalues, diameter
## 1 Introduction
Let \(G\) be a graph of order \(n\). The Laplacian matrix of \(G\) is \(L(G)=D(G)-A(G)\), where \(D(G)\) is the diagonal degree matrix and \(A(G)\) is the adjacency matrix of \(G\). The eigenvalues of \(L(G)\) are known as the Laplacian eigenvalues of \(G\). The Laplacian eigenvalues of \(G\) lie in the interval \([0,n]\)[14, 16]. The distribution of Laplacian eigenvalues in \([0,n]\) is a natural problem, which is relevant due to the many applications related to Laplacian matrices [12, 14,
17]. There do exist results that bound the number of Laplacian eigenvalues in subintervals of \([0,n]\), see, e.g., [6, 7, 9, 12, 18, 21]. Generally, how the Laplacian eigenvalues are distributed in the interval \([0,n]\) is a hard problem [13] that is not well understood.
For a graph \(G\) and a Laplacian eigenvalue \(\mu\) of \(G\) the multiplicity of \(\mu\) is denoted by \(m_{G}(\mu)\). For a graph \(G\) of order \(n\) and an interval \(I\subseteq[0,n]\), the number of Laplacian eigenvalues of \(G\) in \(I\) is denoted by \(m_{G}I\). Evidently, \(m_{G}[0,n]=n\) for a graph \(G\) of order \(n\). As pointed out by Jacobs, Oliveira and Trevisan in [13], it is also a hard problem because little is known about how the Laplacian eigenvalues are distributed in the interval \([0,n]\). It is of interest to explore the relation between the distribution of Laplacian eigenvalues and the diameter of a graph. The following results are known.
**Theorem 1**.: _[_8_]_ _For any \(n\)-vertex connected graph \(G\) with diameter \(d\geq 1\), \(m_{G}(2,n]\geq\lceil\frac{d}{2}\rceil\)._
**Theorem 2**.: _[_1_]_ _For any \(n\)-vertex connected graph \(G\) with diameter \(d\), where \(d\geq 4\), \(m_{G}(n-d+3,n]\leq n-d-1\)._
**Theorem 3**.: _[_22_]_ _For any \(n\)-vertex connected graph \(G\) with diameter \(d\), where \(2\leq d\leq n-2\), \(m_{G}[n-d+2,n]\leq n-d\)._
Note that Theorem 3 was conjectured by Ahanjideh et al. [1].
In this paper, we show the following theorem.
**Theorem 4**.: _For any \(n\)-vertex connected graph \(G\) with diameter \(d\), where \(1\leq d\leq n-3\), \(m_{G}[n-d+1,n]\leq n-d+1\)._
Denote by \(\mathfrak{G}(n,d)\) the class of graphs \(G\) on \(n\) vertices with diameter \(d\) such that \(m_{G}[n-d+1,n]\leq n-d\). Note that \(\mu_{j}(P_{n})=4\sin^{2}\frac{(n-j)\pi}{2n}\) for \(j=1,\ldots,n\)[2, p. 145]. So, if \(n\geq 4\), then \(\mu_{2}(P_{n})\geq 2\), implying that \(m_{P_{n}}[2,n]\geq 2\). If \(n\geq 10\), then for any \(n\)-vertex connected graph \(G\) with diameter \(n-2\), \(m_{G}[3,n]\geq 3\). This is because
\[\mu_{3}(G)\geq\mu_{3}(P_{n-1})=4\sin^{2}\frac{(n-4)\pi}{2(n-1)}\geq 3\]
by Lemma 3 below. So, to determine the graph class \(\mathfrak{G}(n,d)\), it is necessary that \(1\leq d\leq n-3\). Evidently, \(m_{K_{n}}(n)=n-1\) if \(n\geq 2\) and so \(\mathfrak{G}(n,1)=\{K_{n}\}\). We show that \(\mathfrak{G}(n,d)\) contains all graphs with \(d=2,3\), and those
graphs with some diametral path (a diametral path of a graph is a shortest path whose length is equal to the diameter of the graph) \(P\) such that there are at least two vertices outside \(P\) with at most two neighbors on \(P\), where \(4\leq d\leq n-3\). We also construct a class of \(n\)-vertex graphs with diameter \(d\) that is not in \(\mathfrak{G}(n,d)\) for \(4\leq d\leq n-3\).
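The explicit path eigenvalues quoted above are easy to confirm numerically; the following small script (an illustration only, not part of any argument, with an arbitrarily chosen \(n\)) checks the closed form \(\mu_{j}(P_{n})=4\sin^{2}\frac{(n-j)\pi}{2n}\).

```python
# Numerical check of mu_j(P_n) = 4 sin^2((n-j)pi/(2n)) for one n.
import numpy as np

n = 8
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # adjacency of P_n
L = np.diag(A.sum(axis=1)) - A                                 # Laplacian of P_n
mu = np.sort(np.linalg.eigvalsh(L))[::-1]                      # mu_1 >= ... >= mu_n
closed = 4 * np.sin((n - np.arange(1, n + 1)) * np.pi / (2 * n)) ** 2
print(np.allclose(mu, closed))                                 # True
```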
## 2 Preliminaries
For a graph \(G\), we denote by \(V(G)\) the vertex set and by \(E(G)\) the edge set. Let \(H_{1}\cup H_{2}\) be the disjoint union of graphs \(H_{1}\) and \(H_{2}\). The disjoint union of \(k\) copies of a graph \(G\) is denoted by \(kG\). As usual, denote by \(P_{n}\) the path, and \(K_{n}\) the complete graph, of order \(n\).
For a graph \(G\) with \(E_{1}\subseteq E(G)\), denote by \(G-E_{1}\) the subgraph of \(G\) obtained from \(G\) by deleting all edges in \(E_{1}\). Particularly, if \(E_{1}=\{e\}\), then we write \(G-e\) for \(G-\{e\}\). If \(G^{\prime}=G-E_{1}\) with \(E_{1}\subseteq E(G)\), then we write \(G=G^{\prime}+E_{1}\), and write it as \(G^{\prime}+e\) if \(E_{1}=\{e\}\).
For \(v\in V(G)\), denote by \(N_{G}(v)\) the neighborhood of \(v\) in \(G\), and \(\delta_{G}(v):=|N_{G}(v)|\) denotes the degree of \(v\) in \(G\). For a subgraph \(H\) of \(G\) and \(u\in V(G)\setminus V(H)\), let \(\delta_{H}(u)=|N_{G}(u)\cap V(H)|\).
**Lemma 1**.: _[_10_, Theorem 4.3.28]_ _If \(M\) is a Hermitian matrix of order \(n\) and \(B\) is its principal submatrix of order \(p\), then \(\rho_{n-p+i}(M)\leq\rho_{i}(B)\leq\rho_{i}(M)\) for \(i=1,\ldots,p\)._
**Lemma 2**.: _[_19_, Theorem 1.3]_ _Let \(A\) and \(B\) be Hermitian matrices of order \(n\). Then for \(1\leq i,j\leq n\) with \(i+j-1\leq n\), \(\rho_{i+j-1}(A+B)\leq\rho_{i}(A)+\rho_{j}(B)\) with equality if and only if there exists a nonzero vector \(\mathbf{x}\) such that \(\rho_{i+j-1}(A+B)\mathbf{x}=(A+B)\mathbf{x}\), \(\rho_{i}(A)\mathbf{x}=A\mathbf{x}\) and \(\rho_{j}(B)\mathbf{x}=B\mathbf{x}\)._
**Lemma 3**.: _[_16_, Theorem 3.2]_ _If \(G\) is a graph with \(e\in E(G)\), then_
\[\mu_{1}(G)\geq\mu_{1}(G-e)\geq\mu_{2}(G)\geq\cdots\geq\mu_{n-1}(G-e)\geq\mu_{ n}(G)=\mu_{n}(G-e)=0.\]
Denote by \(\overline{G}\) the complement of a graph \(G\).
**Lemma 4**.: _[_16_, Theorem 3.6]_ _If \(G\) is a graph of order \(n\geq 2\), then \(\mu_{i}(G)=n-\mu_{n-i}(\overline{G})\) for \(i=1,\ldots,n-1\)._
It is known that the number of components of a graph is equal to the multiplicity of \(0\) as a Laplacian eigenvalue [14].
For an \(n\times n\) Hermitian matrix \(M\), \(\rho_{i}(M)\) denotes its \(i\)-th largest eigenvalue of \(M\) and \(\sigma(M)=\{\rho_{i}(M):i=1,\ldots,n\}\) is the spectrum of \(M\). For convenience, if \(\rho\) is an eigenvalue of \(M\) with multiplicity \(s\geq 2\), then we write it as \(\rho^{[s]}\) in \(\sigma(M)\). For a graph \(G\), let \(\sigma_{L}(G)=\sigma(L(G))\).
Let \(G\) be a graph with a partition \(\pi\colon\, V(G)=V_{1}\cup\cdots\cup V_{m}\). Then \(L(G)\) may be partitioned into a block matrix, where \(L_{ij}\) denotes the block with rows corresponding to vertices in \(V_{i}\) and columns corresponding to vertices in \(V_{j}\) for \(1\leq i,j\leq m\). Denote by \(b_{ij}\) the average row sums of \(L_{ij}\) for \(1\leq i,j\leq m\). Then the \(m\times m\) matrix \(B=(b_{ij})\) is called the quotient matrix of \(L(G)\) with respect to \(\pi\). If \(L_{ij}\) has constant row sum for all \(1\leq i,j\leq m\), then we say \(\pi\) is an equitable partition. The following lemma is an immediate result of [4, Lemma 2.3.1].
**Lemma 5**.: _For a graph \(G\), if \(B\) is the quotient matrix of \(L(G)\) with respect to an equitable partition, then \(\sigma(B)\) is contained in \(\sigma_{L}(G)\)._
A double star \(D_{n,a}\) is a tree with diameter \(3\) in which there are \(a\) and \(n-2-a\) pendant edges at its non-pendant vertices, where \(1\leq a\leq n-3\).
We need a result due to Doob stating that for a tree \(T\) on \(n\) vertices with diameter \(d\geq 1\), \(\mu_{n-1}(T)\leq 2\left(1-\cos\frac{\pi}{d+1}\right)\), see [5, p. 187] or [8, Corollary 4.4].
**Lemma 6**.: _Let \(G=D_{n,a}\), where \(n\geq 5\). Then \(\mu_{2}(G)>2\) and \(\mu_{3}(G)=1\)._
Proof.: As \(D_{5,1}\) is an edge induced subgraph, we have by Lemma 3 that \(\mu_{2}(G)\geq\mu_{2}(D_{5,1})\approx 2.311>2\).
As \(I_{n}-L(G)\) has \(a\) and \(n-2-a\) equal rows, respectively, \(1\) is a Laplacian eigenvalue of \(G\) with multiplicity at least \((a-1)+(n-2-a-1)=n-4\). By the result due to Doob as mentioned above, \(\mu_{n-1}(G)\leq 2\left(1-\cos\frac{\pi}{4}\right)<1\). It follows that \(\mu_{3}(G)=1\).
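Lemma 6 can be spot-checked numerically; the sketch below (an illustration only, not part of the proof, with an ad hoc helper and arbitrarily chosen parameters) builds the Laplacian of \(D_{n,a}\) and prints \(\mu_{2}\) and \(\mu_{3}\).

```python
# Numerical spot check of Lemma 6 for two double stars D_{n,a}.
import numpy as np

def double_star_laplacian(n, a):
    # vertices 0 and 1 are the centers; 2..a+1 are pendant at 0,
    # a+2..n-1 are pendant at 1
    A = np.zeros((n, n))
    A[0, 1] = A[1, 0] = 1
    for v in range(2, a + 2):
        A[0, v] = A[v, 0] = 1
    for v in range(a + 2, n):
        A[1, v] = A[v, 1] = 1
    return np.diag(A.sum(axis=1)) - A

for (n, a) in [(6, 2), (9, 3)]:
    mu = np.sort(np.linalg.eigvalsh(double_star_laplacian(n, a)))[::-1]
    print(n, a, round(mu[1], 4), round(mu[2], 4))  # mu_2 > 2 and mu_3 = 1
```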
For integers \(n\), \(d\) and \(t\) with \(2\leq d\leq n-2\) and \(2\leq t\leq d\), \(G_{n,d,t}\) denotes the graph obtained from a path \(P_{d+1}:=v_{1}\ldots v_{d+1}\) and a complete graph \(K_{n-d-1}\) such that they are vertex disjoint by adding all edges connecting vertices of \(K_{n-d-1}\) and vertices \(v_{t-1}\), \(v_{t}\) and \(v_{t+1}\).
Two matrices \(A\) and \(B\) are said to be permutationally similar if \(B=P^{\top}AP\) for some permutation matrix \(P\). That is, \(B\) is obtainable from \(A\) by simultaneous permutations of its rows and columns.
## 3 Proof of Theorem 4
Proof of Theorem 4.: The claim is trivial for \(d=1\), so suppose in the following that \(d\geq 2\). Let \(P:=v_{1}\ldots v_{d+1}\) be a diametral path of \(G\). As \(d\leq n-3\), there are at least two vertices outside \(P\), say \(u\) and \(v\). Let \(H\) be the subgraph induced by \(V(P)\cup\{u,v\}\) and \(B\) the principal submatrix of \(L(G)\) corresponding to vertices of \(H\). Let \(B=L(H)+M\). Then \(M\) is a diagonal matrix whose diagonal entry corresponding to vertex \(z\) is \(\delta_{G}(z)-\delta_{H}(z)\) for \(z\in V(H)\). Obviously, \(\rho_{1}(M)\leq n-|V(H)|=n-d-3\). If \(\mu_{5}(H)<4\), then one gets by Lemmas 1 and 2 that
\[\mu_{n-d+2}(G)=\rho_{n-(d+3)+5}(L(G))\leq\rho_{5}(B)\leq\mu_{5}(H)+\rho_{1}(M) <n-d+1,\]
so \(m_{G}[n-d+1,n]\leq n-d+1\). Thus, it suffices to show that \(\mu_{5}(H)<4\).
As \(P\) is the diametral path, \(u\) (\(v\), respectively) is adjacent to at most three consecutive vertices on \(P\). If \(\delta_{P}(u)\leq 2\) (\(\delta_{P}(v)\leq 2\), respectively), \(3-\delta_{P}(u)\) (\(3-\delta_{P}(v)\), respectively) edges may be added to join \(u\) (\(v\), respectively) and \(3-\delta_{P}(u)\) (\(3-\delta_{P}(v)\), respectively) vertices such that \(u\) (\(v\), respectively) is adjacent to exactly three consecutive vertices on \(P\). By Lemma 3, we may assume that \(\delta_{P}(u)=\delta_{P}(v)=3\). Let \(v_{t-1},v_{t},v_{t+1}\) (\(v_{r-1},v_{r},v_{r+1}\), respectively) be three neighbors of \(u\) (\(v\), respectively). Assume that \(t\leq r\).
**Case 1.**\(t=r\).
Let \(H^{\prime}=H+uv\) if \(uv\notin E(G)\) and \(H^{\prime}=H\) otherwise. Then
\[L(H^{\prime})=\begin{pmatrix}L(P)&O_{(d+1)\times 2}\\ O_{2\times(d+1)}&L(P_{2})\end{pmatrix}+M,\]
where \(M=(m_{ij})_{(d+3)\times(d+3)}\) with
\[m_{ij}=\begin{cases}3&\text{if }i=j\in\{d+2,d+3\},\\ 2&\text{if }i=j\in\{t-1,t,t+1\},\\ -1&\text{if }\{i,j\}\in\{\{p,q\}:p=t-1,t,t+1,q=d+2,d+3\},\\ 0&\text{otherwise}.\end{cases}\]
Let \(R=L(H^{\prime})-M\). Then \(\rho_{1}(R)=\mu_{1}(P)\). As \(M\) is permutationally similar to \(L(K_{2,3}\cup(d-2)K_{1})\), one gets \(\rho_{5}(M)=0\). By Lemma 2, we have
\[\mu_{5}(H^{\prime})\leq\rho_{1}(R)+\rho_{5}(M)=\mu_{1}(P)<4,\]
so \(\mu_{5}(H)\leq\mu_{5}(H^{\prime})<4\) by Lemma 3.
**Case 2.**\(t=r-1\).
Let \(H^{\prime}=H+uv\) if \(uv\notin E(G)\) and \(H^{\prime}=H\) otherwise. It suffices to show that \(\mu_{5}(H^{\prime})<4\) by Lemma 3. Let \(u_{i}=v_{i}\) for \(i=1,\ldots,t+1\), \(u_{t+2}=v\), \(u_{i+1}=v_{i}\) for \(i=t+2,\ldots,d+1\), and \(u_{d+3}=u\). It is easy to see that \(H^{\prime}\) is obtainable from \(G_{d+3,d+1,t}\) by adding edges \(u_{t}u_{t+2}\), \(u_{t+1}u_{t+3}\) and \(u_{t+2}u_{d+3}\). Then, under the ordering \(u_{1},\ldots,u_{d+3}\),
\[L(H^{\prime})=L(G_{d+3,d+1,t})+M,\]
where \(M=(m_{ij})_{(d+3)\times(d+3)}\) with
\[m_{ij}=\begin{cases}2&\text{ if }i=j=t+2,\\ 1&\text{ if }i=j\in\{t,t+1,t+3,d+3\},\\ -1&\text{ if }\{i,j\}\in\{\{t,t+2\},\{t+1,t+3\},\{t+2,d+3\}\},\\ 0&\text{ otherwise.}\end{cases}\]
As \(M\) is permutationally similar to \(L(P_{3}\cup P_{2}\cup(d-2)K_{1})\), one gets \(\rho_{4}(M)=0\). By [22, Lemma 2.6], \(\mu_{2}(G_{d+3,d+1,t})=4\). By Lemma 2,
\[\mu_{5}(H^{\prime})\leq\mu_{2}(G_{d+3,d+1,t})+\rho_{4}(M)=\mu_{2}(G_{d+3,d+1,t })=4.\]
Suppose that \(\mu_{5}(H^{\prime})=4\). By Lemma 2, there exists a nonzero vector \(\mathbf{x}\) such that \(M\mathbf{x}=0\) and \(L(H^{\prime})\mathbf{x}=4\mathbf{x}=L(G_{d+3,d+1,t})\mathbf{x}\). Let \(x_{i}=x_{u_{i}}\) for \(i=1,\ldots,d+3\). From \(M\mathbf{x}=0\) at \(u_{t}\), \(u_{t+1}\) and \(u_{t+2}\), we have \(x_{t}=x_{t+2}=x_{d+3}\) and \(x_{t+1}=x_{t+3}\). From \(L(H^{\prime})\mathbf{x}=4\mathbf{x}\) at \(u_{t+2}\), one gets
\[2x_{t+2}-2x_{t+1}=4x_{t+2},\]
so \(x_{t+1}=-x_{t+2}=-x_{t}\).
From \(L(H^{\prime})\mathbf{x}=4\mathbf{x}\) at \(u_{1}\), we have \(x_{1}-x_{2}=4x_{1}\), so \(x_{2}=-3x_{1}\). Suppose that \(x_{j}=(-1)^{j-1}(2j-1)x_{1}\) for each \(j\leq i\) with \(i=2,\ldots,t-2\). From \(L(H^{\prime})\mathbf{x}=4\mathbf{x}\) at \(u_{i}\), we have \(2x_{i}-x_{i-1}-x_{i+1}=4x_{i}\), so \(x_{i+1}=-2x_{i}-x_{i-1}=(-1)^{i}(2i+1)x_{1}\). This shows that
\[x_{i}=(-1)^{i-1}(2i-1)x_{1}\text{ for }i=2,\ldots,t-1.\]
From \(L(H^{\prime})\mathbf{x}=4\mathbf{x}\) at \(u_{t-1}\), we have \(3x_{t-1}-2x_{t}-x_{t-2}=4x_{t-1}\), so \(x_{t}=-(x_{t-1}+x_{t-2})/2=(-1)^{t}x_{1}\).
Similarly, we have \(x_{i}=(-1)^{d+2-i}(2(d+3-i)-1)x_{d+2}\) for \(i=t+1,\ldots,d+2\). From \(x_{t+1}=x_{t+3}\), one gets \((-1)^{d+1-t}(2(d-t+2)-1)x_{d+2}=(-1)^{d-1-t}(2(d-t)-1)x_{d+2}\), so \(x_{d+2}=0\). Thus \(x_{t}=-x_{t+1}=0\), implying that \(x_{1}=(-1)^{t}x_{t}=0\). It thus follows that \(\mathbf{x}=0\), a contradiction. Therefore \(\mu_{5}(H^{\prime})<4\).
**Case 3.**\(t=r-2\).
As \(P\) is a diametral path, \(u\) is not adjacent to \(v\). Let \(u_{i}=v_{i}\) for \(i=1,\ldots,t\), \(u_{t+1}=u\), \(u_{t+2}=v_{t+1}\), \(u_{t+3}=v\) and \(u_{i+2}=v_{i}\) for \(i=t+2,\ldots,d+1\). Evidently, \(H\) is obtainable from the path \(u_{1}\ldots u_{d+3}\) by adding edges \(u_{j}u_{j+2}\) for \(j=t-1,t,t+2,t+3\). Then
\[L(H)=L(P_{d+3})+M,\]
where \(M=(m_{ij})_{(d+3)\times(d+3)}\) with
\[m_{ij}=\begin{cases}2&\text{ if }i=j=t+2,\\ 1&\text{ if }i=j\in\{t-1,t,t+1,t+3,t+4,t+5\},\\ -1&\text{ if }\{i,j\}\in\{\{p,p+2\}:p=t-1,t,t+2,t+3\},\\ 0&\text{ otherwise.}\end{cases}\]
As \(M\) is permutationally similar to \(L(P_{3}\cup 2P_{2}\cup(d-4)K_{1})\), one gets \(\rho_{5}(M)=0\). By Lemma 2,
\[\mu_{5}(H)\leq\mu_{1}(P_{d+3})+\rho_{5}(M)=\mu_{1}(P_{d+3})<4,\]
as desired.
**Case 4.**\(t\leq r-3\).
As \(P\) is a diametral path, \(u\) is not adjacent to \(v\). Let \(u_{i}=v_{i}\) for \(i=1,\ldots,t\), \(u_{t+1}=u\), \(u_{i+1}=v_{i}\) for \(i=t+1,\ldots,r\), \(u_{r+2}=v\), \(u_{i+2}=v_{i}\) for \(i=r+1,\ldots,d+1\). Evidently, \(H\) is obtainable from \(u_{1}\ldots u_{d+3}\) by adding edges \(u_{j}u_{j+2}\) for \(j=t-1,t,r,r+1\). Then
\[L(H)=L(P_{d+3})+M,\]
where \(M=(m_{ij})_{(d+3)\times(d+3)}\) with
\[m_{ij}=\begin{cases}1&\text{ if }i=j\in\{t-1,t,t+1,t+2,r,r+1,r+2,r+3\},\\ -1&\text{ if }\{i,j\}\in\{\{p,p+2\}:p=t-1,t,r,r+1\},\\ 0&\text{ otherwise.}\end{cases}\]
As \(M\) is permutationally similar to \(L(4P_{2}\cup(d-5)K_{1})\), one gets \(\rho_{5}(M)=0\). By Lemma 2,
\[\mu_{5}(H)\leq\mu_{1}(P_{d+3})+\rho_{5}(M)<4,\]
as desired.
## 4 Graph class \(\mathfrak{G}(n,d)\)
In this section, we show that \(\mathfrak{G}(n,d)\) is actually the class of all \(n\)-vertex graphs with diameter \(d\) and determine the graphs \(G\) in \(\mathfrak{G}(n,d)\) such that \(m_{G}[n-d+1,n]=n-d\) if \(d=2,3\). We give a sufficient condition such that \(G\in\mathfrak{G}(n,d)\) and construct a class of \(n\)-vertex graphs with diameter \(d\) that is not in \(\mathfrak{G}(n,d)\) for \(4\leq d\leq n-3\).
**Theorem 5**.: _Let \(G\) be an \(n\)-vertex graph with diameter two. Then \(m_{G}[n-1,n]\leq n-2\) with equality if and only if \(G\cong K_{n}-\{vv_{i}:i=1,\ldots,s\}\) for some \(s=1,\ldots,n-2\), where \(v,v_{1},\ldots,v_{s}\in V(K_{n})\)._
Proof.: If \(G\cong K_{n}-\{vv_{i}:i=1,\ldots,s\}\), then \(\overline{G}\cong K_{1,s}\cup(n-s-1)K_{1}\) with \(\sigma_{L}(\overline{G})=\{s+1,1^{[s-1]},0^{[n-s]}\}\), so by Lemma 4, we have \(\sigma_{L}(G)=\{n^{[n-s-1]},(n-1)^{[s-1]},n-s-1,0\}\), implying that \(m_{G}[n-1,n]=n-2\).
Suppose that \(G\ncong K_{n}-\{vv_{i}:i=1,\ldots,s\}\) for any \(s=1,\ldots,n-2\). It suffices to show that \(m_{G}[n-1,n]<n-2\), or equivalently, \(\mu_{n-2}(G)<n-1\). As \(G\ncong K_{n}-\{vv_{i}:i=1,\ldots,s\}\) for any \(s=1,\ldots,n-2\), \(G\) is a spanning subgraph of \(H:=K_{n}-vv_{1}-e\) for some \(e\in E(K_{n}-v-v_{1})\). As \(\overline{H}\cong 2K_{2}\cup(n-4)K_{1}\) with \(\sigma_{L}(\overline{H})=\{2^{[2]},0^{[n-2]}\}\), we have \(\sigma_{L}(H)=\{n^{[n-3]},(n-2)^{[2]},0\}\) by Lemma 4, so \(\mu_{n-2}(H)=n-2\). Now, by Lemma 3, \(\mu_{n-2}(G)\leq\mu_{n-2}(H)=n-2<n-1\).
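The spectrum computed in the first part of the proof can also be confirmed numerically; the following sketch (an illustration only, not part of the proof, with arbitrarily chosen \(n\) and \(s\)) builds \(K_{n}-\{vv_{i}:i=1,\ldots,s\}\) and prints its Laplacian eigenvalues.

```python
# Numerical illustration of the equality case in Theorem 5.
import numpy as np

n, s = 7, 3
A = np.ones((n, n)) - np.eye(n)        # K_n; vertex 0 plays the role of v
for i in range(1, s + 1):              # delete the edges v v_1, ..., v v_s
    A[0, i] = A[i, 0] = 0
L = np.diag(A.sum(axis=1)) - A
mu = np.sort(np.linalg.eigvalsh(L))[::-1]
print(np.round(mu, 6))
# expected: n^{[n-s-1]}, (n-1)^{[s-1]}, n-s-1, 0 -> 7, 7, 7, 6, 6, 3, 0
```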
For \(n\geq 5\), let \(G_{n,3}=G_{n,3,3}\).
**Lemma 7**.: _Let \(G=G_{n,3}\) with a diametral path \(P=v_{1}v_{2}v_{3}v_{4}\). Let \(u\) and \(v\) be two distinct vertices outside \(P\). Then \(\mu_{n-3}(G-v_{3}u)<n-2\), \(\mu_{n-3}(G-v_{2}u-v_{4}u)<n-2\) for \(u\in V(G)\setminus V(P)\), and \(\mu_{n-3}(G-v_{2}u-v_{4}v)<n-2\) for \(\{u,v\}\subseteq V(G)\setminus V(P)\)._
Proof.: If \(n=5\), it follows by a direct calculation that \(\mu_{2}(G-v_{3}u)\approx 2.6889<3\) and \(\mu_{2}(G-v_{2}u-v_{4}u)\approx 2.311<3\).
Suppose next that \(n\geq 6\). Let \(G_{1}=\overline{G-v_{3}u}\), \(G_{2}=\overline{G-v_{2}u-v_{4}u}\) and \(G_{3}=\overline{G-v_{2}u-v_{4}v}\). It suffices to show that \(\mu_{3}(G_{i})>2\) by Lemma 4.
Note that in \(G_{1}\), \(\{uv_{1},v_{1}v_{3},uv_{3},v_{2}v_{4},v_{4}v_{1}\}\) induces a graph \(G_{1}^{\prime}\) obtained from \(K_{3}\) and \(K_{2}\) by adding an edge between them. Lemma 3 implies that \(\mu_{3}(G_{1})\geq\mu_{3}(G_{1}^{\prime})\approx 2.311>2\).
Note that in \(G_{2}\), \(\{uv_{2},uv_{4},uv_{1},v_{2}v_{4},v_{1}v_{3},v_{1}v_{4}\}\) induces \(G_{2}^{\prime}\). Lemma 3 implies that \(\mu_{3}(G_{2})\geq\mu_{3}(G_{2}^{\prime})\approx 2.689>2\).
Note that in \(G_{3}\), \(\{uv_{2},uv_{1},v_{2}v_{4},v_{1}v_{4},v_{1}v,v_{4}v\}\) induces \(\overline{P_{5}}\). Lemma 3 implies that \(\mu_{3}(G_{3})\geq\mu_{3}(\overline{P_{5}})=5-4\sin^{2}\frac{3\pi}{10}>2\).
For positive integers \(s\leq n-4\), denote by \(G_{n,3}^{(2,s)}\) (\(G_{n,3}^{(4,s)}\), respectively) be the graph obtained from \(G_{n,3}\) by removing \(s\) edges joining \(v_{2}\) (\(v_{4}\), respectively) and vertices outside the diametral path \(v_{1}\ldots v_{4}\), where \(\delta_{G}(v_{1})=1\).
**Lemma 8**.: _(i) \(m_{G_{n,3}}[n-2,n]=n-3\). (ii) For \(1\leq s\leq n-4\), \(m_{G_{n,3}^{(2,s)}}[n-2,n]=m_{G_{n,3}^{(4,s)}}[n-2,n]=n-3\)._
Proof.: By Lemmas 4 and 6, one gets \(\mu_{n-2}(G_{n,3})<n-2\) and \(\mu_{n-3}(G_{n,3})=n-1\), so \(m_{G_{n,3}}[n-2,n]=n-3\). This proves Item (i).
Next, we prove Item (ii). Denote by \(S\) the set of vertices such that the removal of all edges \(v_{2}u\) (\(v_{4}u\), respectively) with \(u\in S\) from \(G_{n,3}\) with diametral path \(v_{1}\ldots v_{4}\) yields \(G_{n,3}^{(2,s)}\) (\(G_{n,3}^{(4,s)}\), respectively). Then \(|S|=s\). Let \(S^{\prime}=V(G_{n,3})\setminus(S\cup\{v_{1},v_{2},v_{4}\})\). By Lemma 3, one has \(\mu_{n-2}(G_{n,3}^{(2,s)}),\mu_{n-2}(G_{n,3}^{(4,s)})\leq\mu_{n-2}(G_{n,3})<n-2\). So it suffices to show that \(\mu_{n-3}(H)\geq n-2\) for \(H=G_{n,3}^{(2,s)},G_{n,3}^{(4,s)}\).
Let \(H_{1}=\overline{G_{n,3}^{(2,s)}}\) and \(H_{2}=\overline{G_{n,3}^{(4,s)}}\). Let \(V(H_{1})=\{v_{2}\}\cup(S\cup\{v_{4}\})\cup\{v_{1}\}\cup S^{\prime}\) and \(V(H_{2})=\{v_{2}\}\cup\{v_{4}\}\cup S\cup\{v_{1}\}\cup S^{\prime}\). With respect to the above partitions, \(L(H_{1})\) and \(L(H_{2})\) have quotient matrices \(B_{1}\) and \(B_{2}\), respectively, where
\[B_{1}=\begin{pmatrix}s+1&-s-1&0&0\\ -1&2&-1&0\\ 0&-s-1&n-2&-n+s+3\\ 0&0&-1&1\end{pmatrix},\]
and
\[B_{2}=\begin{pmatrix}1&-1&0&0&0\\ -1&s+2&-s&-1&0\\ 0&-1&2&-1&0\\ 0&-1&-s&n-2&-n+s+3\\ 0&0&0&-1&1\end{pmatrix}.\]
As both \(B_{1}\) and \(B_{2}\) have all row sums equal to \(0\), \(0\) is an eigenvalue, so we may assume that \(\det(xI_{4}-B_{1})=xf(x)\) and \(\det(xI_{5}-B_{2})=xg(x)\). Note that the partitions are equitable. So the roots of \(f(x)=0\) (\(g(x)=0\), respectively) are Laplacian eigenvalues of \(H_{1}\) (\(H_{2}\), respectively). By direct calculations,
\[f(x)=x^{3}-(n+2+s)x^{2}+((s+3)n-2)x-(s+1)n\]
and
\[g(x)=x^{4}-(n+4+s)x^{3}+((s+5)n+s+2)x^{2}-((2s+7)n-s-4)x+(s+2)n.\]
As \(f(0)=-(s+1)n<0\) and \(f(1)=n-s-3>0\), \(g(0)=(s+2)n>0\), \(g(1)=-n+s+3<0\) and \(g(2)=2s(n-2)>0\), \(f(x)\) has a root \(a\) with \(0<a<1\), and \(g(x)\) has roots \(b\) and \(c\) with \(0<b<1\) and \(1<c<2\). By Lemmas 5 and 4, \(n-a\) is a Laplacian eigenvalue of \(G_{n,3}^{(2,s)}\), \(n-b\) and \(n-c\) are Laplacian eigenvalues of \(G_{n,3}^{(4,s)}\).
Note that \((n-1)I_{n}-L(G_{n,3}^{(2,s)})\) and \((n-2)I_{n}-L(G_{n,3}^{(2,s)})\) have \(n-s-3\) and \(s+1\) equal rows, respectively. Then \(n-1\) and \(n-2\) are eigenvalues of \(L(G_{n,3}^{(2,s)})\) with multiplicity \(n-s-4\) and \(s\), respectively. Recall that \(n-a\) is a Laplacian eigenvalue of \(G_{n,3}^{(2,s)}\). So \(\mu_{n-3}(G_{n,3}^{(2,s)})\geq n-2\).
As \((n-1)I_{n}-L(G_{n,3}^{(4,s)})\) and \((n-2)I_{n}-L(G_{n,3}^{(4,s)})\) have \(n-s-3\) and \(s\) equal rows, \(n-1\) and \(n-2\) are Laplacian eigenvalues of \(G_{n,3}^{(4,s)}\) with multiplicity \(n-s-4\) and \(s-1\), respectively. So \(\mu_{n-3}(G_{n,3}^{(4,s)})\geq n-2\).
Let \(n\geq 6\) and \(a\) be an integer with \(1\leq a\leq n-5\). Let \(G^{n,a}\) be a graph obtained from a path \(P_{4}:=v_{1}v_{2}v_{3}v_{4}\) and a complete graph \(K_{n-4}\) by adding edges connecting vertices of \(K_{n-4}\) and vertices \(v_{2},v_{3}\), and adding edges between \(a\) vertices of \(K_{n-4}\) and \(v_{1}\) and the remaining \(n-4-a\) vertices of \(K_{n-4}\) and \(v_{4}\).
**Lemma 9**.: _Let \(n\) and \(a\) be integers with \(1\leq a\leq\frac{n}{2}-2\). Let \(G=G^{n,a}\) with a diametral path \(P=v_{1}\dots v_{4}\). Let \(u\) (\(w\), respectively) be a neighbor of \(v_{1}\) (\(v_{4}\), respectively) outside \(P\) in \(G\). Then \(\mu_{n-3}(G-v_{i}u)<n-2\) and \(\mu_{n-3}(G-v_{i}w)<n-2\) for \(i=2,3\)._
Proof.: Let \(H_{1}=\overline{G-v_{2}u}\) and \(H_{2}=\overline{G-v_{2}w}\).
Note that \(H_{1}\) contains \(H_{1}^{\prime}\) as an edge induced subgraph consisting of a \(C_{3}=uv_{2}v_{4}u\) and a star with 3 edges \(v_{1}v_{4},v_{1}v_{3},v_{1}w\). By Lemma 3, \(\mu_{3}(H_{1})\geq\mu_{3}(H_{1}^{\prime})=3>2\). So \(\mu_{n-3}(G-v_{2}u)<n-2\). Similarly, \(\mu_{n-3}(G-v_{3}w)<n-2\).
Note that in \(H_{2}\), \(\{v_{2}v_{4},v_{1}v_{4},wv_{2},wv_{1},uv_{4},v_{1}v_{3}\}\) induces the graph \(H_{2}^{\prime}\) consisting of a \(C_{4}\) with a pendant edge attached at each of two adjacent vertices. By Lemma 3, \(\mu_{3}(H_{2})\geq\mu_{3}(H_{2}^{\prime})\approx 2.529>2\). So \(\mu_{n-3}(G-v_{2}w)<n-2\). Similarly, \(\mu_{n-3}(G-v_{3}u)<n-2\).
Let \(a\), \(b\) and \(n\) be integers with \(a,b\geq 1\) and \(a+b\leq n-4\). Let \(G^{n,a,b}\) be the graph obtained from a path \(P_{4}=v_{1}v_{2}v_{3}v_{4}\) and a complete graph \(K_{n-4}\) by adding edges connecting vertices of \(K_{n-4}\) and vertices \(v_{2}\), \(v_{3}\), adding edges between \(a\) vertices of \(K_{n-4}\) and \(v_{1}\) and other \(b\) vertices of \(K_{n-4}\) and \(v_{4}\).
**Lemma 10**.: _(i) For \(1\leq a\leq\frac{n}{2}-2\), \(m_{G^{n,a}}[n-2,n]=n-3\). (ii) For \(1\leq a\leq b\) and \(a+b\leq n-5\), \(m_{G^{n,a,b}}[n-2,n]=n-3\)._
Proof.: By Lemmas 4 and 6, \(\mu_{n-2}(G^{n,a})<n-2\) and \(\mu_{n-3}(G^{n,a})=n-1\), so Item (i) follows.
Next, we prove Item (ii). Let \(H=\overline{G^{n,a,b}}\).
As \(H\) contains \(H^{\prime}\) as an edge induced subgraph, obtained by attaching two pendant edges at each of two adjacent vertices of a \(C_{3}\), we have by Lemma 3 that \(\mu_{2}(H)\geq\mu_{2}(H^{\prime})\approx 4.4142>4\).
As \(I_{n}-L(H)\) has \(a+1\) and \(b+1\) equal rows, \(1\) is a Laplacian eigenvalue of \(H\) with multiplicity at least \(a+b\). Similarly, \(2\) is a Laplacian eigenvalue of \(H\) with multiplicity at least \(n-5-(a+b)\).
Let \(U_{1}=\{y\in N_{H}(v_{1}):\delta_{H}(y)=1\}\) and \(U_{2}=\{y\in N_{H}(v_{4}):\delta_{H}(y)=1\}\). Then \(|U_{1}|=a+1\) and \(|U_{2}|=b+1\). Let \(U_{3}=V(H)\setminus(U_{1}\cup U_{2}\cup\{v_{1},v_{4}\})\). The quotient matrix \(B\) of \(L(H)\) with respect to the partition \(V(H)=U_{1}\cup\{v_{1}\}\cup U_{3}\cup\{v_{4}\}\cup U_{2}\) is
\[B=\begin{pmatrix}1&-1&0&0&0\\ -(b+1)&n-a-2&-(n-4-a-b)&-1&0\\ 0&-1&2&-1&0\\ 0&-1&-(n-4-a-b)&n-b-2&-(a+1)\\ 0&0&0&-1&1\end{pmatrix}.\]
Note that this partition is equitable. So the roots of \(f(x)=0\) are Laplacian eigenvalues of \(H\), where \(\det(xI_{5}-B)=xf(x)\). As
\[f(0)=(n-4-a-b)n>0,\ f(1)=-(a+1)(b+1)<0,\]
and
\[f(2)=(n-4-a-b)(n-2)>0,\]
\(f(x)\) has roots \(\alpha\) and \(\beta\) with \(0<\alpha<1\) and \(1<\beta<2\).
Therefore, if \(a+b=n-5\), then \(\mu_{2}(H)>4\) and \(1<\mu_{3}(H)=\beta<2\), and if \(a+b<n-5\), then \(\mu_{2}(H)>4\) and \(\mu_{3}(H)=2\). By Lemma 4, \(\mu_{n-2}(G^{n,a,b})<n-4<n-2\) and \(\mu_{n-3}(G^{n,a,b})\geq n-2\), so the result follows.
**Theorem 6**.: _Let \(G\) be an \(n\)-vertex graph with diameter three, where \(n\geq 5\). Then \(m_{G}[n-2,n]\leq n-3\) with equality if and only if \(G\cong G_{n,3}\), \(G_{n,3}^{(2,s)}\) with \(1\leq s\leq n-4\), \(G_{n,3}^{(4,s)}\) with \(1\leq s\leq n-4\), \(G^{n,a}\) with \(1\leq a\leq\frac{n}{2}-2\), or \(G^{n,a,b}\) with \(1\leq a\leq b\) and \(a+b\leq n-5\)._
Proof.: If \(G\cong G_{n,3}\), \(G_{n,3}^{(2,s)}\) with \(1\leq s\leq n-4\), \(G_{n,3}^{(4,s)}\) with \(1\leq s\leq n-4\), \(G^{n,a}\) with \(1\leq a\leq\frac{n}{2}-2\), or \(G^{n,a,b}\) with \(1\leq a\leq b\) and \(a+b\leq n-5\), then we have by Lemmas 8 and 10 that \(m_{G}[n-2,n]=n-3\).
Suppose in the following that \(G\ncong G_{n,3}\), \(G_{n,3}^{(2,s)}\) with \(1\leq s\leq n-4\), \(G_{n,3}^{(4,s)}\) with \(1\leq s\leq n-4\), \(G^{n,a}\) with \(1\leq a\leq\frac{n}{2}-2\), and \(G^{n,a,b}\) with \(1\leq a\leq b\) and \(a+b\leq n-5\). It suffices to show that \(m_{G}[n-2,n]<n-3\).
Let \(P=v_{1}v_{2}v_{3}v_{4}\) be a diametral path of \(G\). Assume that \(\delta_{G}(v_{1})\leq\delta_{G}(v_{4})\).
Suppose first that \(\delta_{G}(v_{1})=1\). Then \(G\) is a spanning subgraph of \(G_{n,3}\). As \(G\ncong G_{n,3}\), \(G\) is a spanning subgraph of \(G_{n,3}-e\) for some \(e\in E(G_{n,3})\). Let \(G^{\prime}=G_{n,3}\). If \(e\) joins two vertices outside \(P\), then \(G^{\prime}-e\cong G^{\prime}-v_{3}u\) for some \(u\in V(G)\setminus V(P)\). If \(e=v_{3}v_{i}\) for \(i=2,4\), then \(G^{\prime}-e\cong G^{\prime}-v_{i}u\) for some \(u\in V(G)\setminus V(P)\). Thus, we may assume that \(e\) joins a vertex outside \(P\) and \(v_{i}\) with \(i=2,3,4\). If \(e=v_{3}u\) for \(u\in V(G^{\prime})\setminus V(P)\), then we have by Lemmas 3 and 7 that \(\mu_{n-3}(G)\leq\mu_{n-3}(G^{\prime}-e)<n-2\), so \(m_{G}[n-2,n]<n-3\). Assume that \(e=v_{2}u\) or \(e=v_{4}u\) for some \(u\) outside \(P\). Correspondingly, \(G\) is a spanning subgraph of \(G_{n,3}^{(2,1)}\) or \(G_{n,3}^{(4,1)}\). Note that \(G\ncong G_{n,3}^{(2,s)},G_{n,3}^{(4,s)}\) for \(1\leq s\leq n-4\). So \(G\) is a spanning subgraph of \(G_{n,3}^{(2,1)}-f\) for some edge \(f\) not incident to \(v_{2}\) or a spanning subgraph of \(G_{n,3}^{(4,1)}-f\) for some edge \(f\) not incident to \(v_{4}\). By similar argument as above, we may assume that \(f\) joins \(v_{i}\) and a vertex outside \(P\) for \(i=3,4\) in the former case and \(i=2,3\) in the latter case. Then Lemmas 3 and 7 imply that \(\mu_{n-3}(G)\leq\mu_{n-3}(G^{\prime}-e-f)<n-2\), so \(m_{G}[n-2,n]<n-3\).
Suppose next that \(\delta_{G}(v_{1})\geq 2\). Note that \(v_{1}\) and \(v_{4}\) share no common neighbors. So \(\delta_{G}(v_{1})-1+\delta_{G}(v_{4})-1\leq n-4\), implying that \(a:=\delta_{G}(v_{1})-1\leq\frac{n}{2}-2\). It follows that \(G\) is a spanning subgraph of \(G^{n,a}\). As \(G\ncong G^{n,a}\), \(G\) is a spanning subgraph of \(G^{n,a}-e\) for some \(e\in E(G^{n,a})\). Let \(G^{*}=G^{n,a}\). If both ends of \(e=wz\) lie outside \(P\) and \(z\in N_{G^{*}}(v_{1})\), then \(G^{*}-e\cong G^{*}-v_{2}w\). If both ends of \(e=wz\) lie outside \(P\) and \(z\in N_{G^{*}}(v_{4})\), then \(G^{*}-e\cong G^{*}-v_{3}w\). If \(e=v_{1}v_{2}\), then \(G^{*}-e\cong G^{*}-v_{1}u\) for some \(u\in N_{G^{*}}(v_{1})\setminus V(P)\). If \(e=v_{2}v_{3}\), then \(G^{*}-e\cong G^{*}-v_{2}u\) for some \(u\in N_{G^{*}}(v_{4})\setminus V(P)\). If \(e=v_{3}v_{4}\), then \(G^{*}-e\cong G^{*}-v_{4}u\) for some \(u\in N_{G^{*}}(v_{4})\setminus V(P)\). So we may assume that \(e\) joins a vertex outside \(P\) and \(v_{i}\) with \(i=1,2,3,4\).
If \(e\) is incident to \(v_{2}\) or \(v_{3}\), then Lemma 9 together with Lemma 3 implies \(\mu_{n-3}(G)<n-2\), so \(m_{G}[n-2,n]<n-3\).
Assume that \(e=v_{1}u\) or \(e=v_{4}u\) for some \(u\) outside \(P\). Note that \(\delta_{G}(v_{1})\geq 2\). Then \(G^{*}-e\cong G^{n,a-1,n-4-a}\) with \(a\geq 2\) or \(G^{*}-e\cong G^{n,a,n-5-a}\). As \(G\ncong G^{n,r,s}\) with \(1\leq r\leq s\) and \(r+s\leq n-5\), \(G\) is a spanning subgraph
of \(G^{*}-e-f\) for some \(f\in E(G^{n,a})\), where \(f\) is incident to \(v_{2}\) or \(v_{3}\), or \(f\) joins two vertices outside \(P\). By similar argument as above, we may assume that \(f\) joins \(v_{i}\) with \(i=2,3\) and a vertex \(u\) outside \(P\). So \(\mu_{n-3}(G)\leq\mu_{n-3}(G^{*}-e-f)\leq\mu_{n-3}(G^{*}-f)<n-2\) by Lemmas 3 and 9, implying that \(m_{G}[n-2,n]<n-3\).
**Proposition 1**.: \(\mu_{n-3}(G_{n,4,3})>n-3\) _and so \(m_{G_{n,4,3}}[n-3,n]>n-4\)._
Proof.: Let \(G=G_{n,4,3}\) and \(P=v_{1}v_{2}v_{3}v_{4}v_{5}\) be a diametral path of \(G\). It suffices to prove that \(\mu_{n-3}(G)>n-3\). As the rows of \((n-2)I_{n}-L(G)\) corresponding to vertices in \(S:=V(G)\setminus\{v_{1},v_{2},v_{4},v_{5}\}\) are equal, \(n-2\) is a Laplacian eigenvalue of \(G\) with multiplicity at least \(n-5\). Note that \(L(G)\) has a quotient matrix \(B\) with respect to the partition \(V(G)=\{v_{1}\}\cup\{v_{2}\}\cup S\cup\{v_{4}\}\cup\{v_{5}\}\), where
\[B=\begin{pmatrix}1&-1&0&0&0\\ -1&n-3&-(n-4)&0&0\\ 0&-1&2&-1&0\\ 0&0&-(n-4)&n-3&-1\\ 0&0&0&-1&1\end{pmatrix}.\]
Let \(\det(xI_{5}-B)=xf(x)\). As
\[f(n-1)=2n-5>0,\ f(n-2)=-(n-4)^{2}<0\ \text{and}\ f(n-3)=2n-9>0,\]
\(f(x)\) has roots \(\alpha\) and \(\beta\) with \(n-2<\alpha<n-1\) and \(n-3<\beta<n-2\). So by Lemma 5, \(\alpha\) and \(\beta\) are Laplacian eigenvalues of \(G\) and so \(\mu_{n-3}(G)>n-3\), as desired.
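The interval claims for \(\alpha\) and \(\beta\), and the resulting bound \(\mu_{n-3}(G_{n,4,3})>n-3\), are easy to check numerically for small \(n\). The sketch below uses a reconstruction of \(G_{n,4,3}\) that is consistent with the equal rows of \((n-2)I_{n}-L(G)\) and with the quotient matrix \(B\) above (pendant edges \(v_{1}v_{2}\) and \(v_{4}v_{5}\), with \(S\) inducing a clique whose vertices are all joined to \(v_{2}\) and \(v_{4}\)); the formal definition of \(G_{n,4,3}\) is given earlier in the paper.

```python
import numpy as np

def quotient_matrix(n):
    # The matrix B from the proof of Proposition 1.
    return np.array([[1, -1, 0, 0, 0],
                     [-1, n - 3, -(n - 4), 0, 0],
                     [0, -1, 2, -1, 0],
                     [0, 0, -(n - 4), n - 3, -1],
                     [0, 0, 0, -1, 1]], dtype=float)

def laplacian_G_n43(n):
    # Reconstruction of G_{n,4,3}: pendant edges v1v2 and v4v5; S (= v3 together
    # with the n-5 vertices off P) induces a clique, and every vertex of S is
    # joined to v2 and v4.
    A = np.zeros((n, n))
    S = [2] + list(range(5, n))                      # 0-indexed vertex labels
    edges = [(0, 1), (3, 4)]
    edges += [(1, s) for s in S] + [(3, s) for s in S]
    edges += [(S[i], S[j]) for i in range(len(S)) for j in range(i + 1, len(S))]
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    return np.diag(A.sum(axis=1)) - A

for n in range(7, 13):
    ev_B = np.sort(np.linalg.eigvals(quotient_matrix(n)).real)
    mu = np.sort(np.linalg.eigvalsh(laplacian_G_n43(n)))[::-1]   # mu_1 >= ... >= mu_n
    print(n,
          bool(np.any((ev_B > n - 3) & (ev_B < n - 2))),         # beta in (n-3, n-2)
          bool(np.any((ev_B > n - 2) & (ev_B < n - 1))),         # alpha in (n-2, n-1)
          bool(mu[n - 4] > n - 3))                               # mu_{n-3}(G) > n-3
```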
**Proposition 2**.: _For \(d=4,\ldots,n-3\) and \(t=3,\ldots,d-1\), \(\mu_{n-d+1}(G_{n,d,t})>n-d+1\) and so \(m_{G_{n,d,t}}[n-d+1,n]>n-d\)._
Proof.: By Proposition 1, it suffices to prove that \(\mu_{n-d+1}(G_{n,d,t})>n-d+1\) for \(d\geq 5\). Let \(G^{\prime}=G_{n,d,t}-\{v_{i}v_{i+1}:i=1,\ldots,t-3,t+2,\ldots,d\}\). Then \(G^{\prime}=G_{n-d+4,4,3}\cup(d-4)K_{1}\). By Lemma 3 and Proposition 1,
\[\mu_{n-d+1}(G_{n,d,t})\geq\mu_{n-d+1}(G^{\prime})=\mu_{n-d+1}(G_{n-d+4,4,3})>n -d+1,\]
as desired.
For integers \(n\) and \(t\) with \(1\leq t\leq n-3\), we denote by \(P_{n,t}^{++}\) the graph obtained from \(P_{n-1}=v_{1}\ldots v_{n-1}\) by adding a new vertex \(u\) and two new edges \(uv_{t}\) and \(uv_{t+2}\).
**Theorem 7**.: _Let \(G\) be an \(n\)-vertex graph with a diametral path \(P=v_{1}\ldots v_{d+1}\), where \(d\leq n-3\). If there are at least two vertices outside \(P\) having at most two neighbors on \(P\), then \(m_{G}[n-d+1,n]\leq n-d\)._
Proof.: Let \(w,z\) be the two vertices outside \(P\) with at most two neighbors on \(P\). Let \(H\) be the subgraph induced by \(V(P)\cup\{w,z\}\) and \(B\) the principal submatrix of \(L(G)\) corresponding to vertices of \(H\). If \(\mu_{4}(H)<4\), then by Lemmas 1 and 2,
\[\mu_{n-d+1}(G)=\rho_{n-(d+3)+4}(L(G))\leq\rho_{4}(B)\leq\mu_{4}(H)+n-d-3<n-d+1,\]
so \(m_{G}[n-d+1,n]\leq n-d\). Thus, it suffices to show that \(\mu_{4}(H)<4\).
If \(\delta_{P}(w)<2\) (\(\delta_{P}(z)<2\), respectively), \(2-\delta_{P}(w)\) (\(2-\delta_{P}(z)\), respectively) edges may be added to join \(w\) (\(z\), respectively) and \(2-\delta_{P}(w)\) (\(2-\delta_{P}(z)\), respectively) vertices such that \(w\) (\(z\), respectively) is adjacent to exactly two vertices on \(P\). By Lemma 3, we may assume that \(\delta_{P}(w)=\delta_{P}(z)=2\). Let \(v_{p},v_{q}\) (\(v_{r},v_{s}\), respectively) be two neighbors of \(w\) (\(z\), respectively). As \(P\) is a diametral path, \(w\) (\(z\), respectively) is adjacent to at most three consecutive vertices and so \(d_{P}(v_{p},v_{q})\leq 2\) and \(d_{P}(v_{r},v_{s})\leq 2\).
**Case 1.**\(\{v_{p},v_{q}\}=\{v_{r},v_{s}\}\).
Let \(H^{\prime}=H+wz\) if \(wz\notin E(G)\) and \(H^{\prime}=H\) otherwise. Then
\[L(H^{\prime})=\begin{pmatrix}L(P)&O_{(d+1)\times 2}\\ O_{2\times(d+1)}&L(P_{2})\end{pmatrix}+M,\]
where \(M=(m_{ij})_{(d+3)\times(d+3)}\) with
\[m_{ij}=\begin{cases}2&\text{ if }i=j\in\{p,q,d+2,d+3\},\\ -1&\text{ if }\{i,j\}\in\{\{x,y\}:x=p,q,y=d+2,d+3\},\\ 0&\text{ otherwise.}\end{cases}\]
As \(M\) is permutationally similar to \(L(K_{2,2}\cup(d-1)K_{1})\), one gets \(\rho_{4}(M)=0\). By Lemma 2, we have
\[\mu_{4}(H^{\prime})\leq\rho_{1}(L(H^{\prime})-M)+\rho_{4}(M)=\mu_{1}(P)<4,\]
so \(\mu_{4}(H)\leq\mu_{4}(H^{\prime})<4\) by Lemma 3.
**Case 2.**\(|\{v_{p},v_{q}\}\cap\{v_{r},v_{s}\}|=1\).
Assume that \(p<q=r<s\). Suppose that \(q-p=2\) or \(s-r=2\), say \(q-p=2\). As the diameter of \(G\) is \(d\), \(wz\notin E(G)\). Then
\[L(H)=\begin{pmatrix}L(P_{d+2,p}^{++})&O_{(d+2)\times 1}\\ O_{1\times(d+2)}&0\end{pmatrix}+M,\]
where \(M=(m_{ij})_{(d+3)\times(d+3)}\) with
\[m_{ij}=\begin{cases}2&\text{if }i=j=d+3,\\ 1&\text{if }i=j=r,s,\\ -1&\text{if }\{i,j\}=\{r,d+3\},\{s,d+3\},\\ 0&\text{otherwise.}\end{cases}\]
By [22, Lemma 4.3], \(\mu_{2}(P_{d+2,p}^{++})<4\). As \(M\) is permutationally similar to \(L(P_{3}\cup dK_{1})\), one gets \(\rho_{3}(M)=0\). By Lemma 2,
\[\mu_{4}(H)\leq\rho_{2}(L(H)-M)+\rho_{3}(M)=\mu_{2}(P_{d+2,p}^{++})<4,\]
as desired. Suppose next that \(q-p=s-r=1\). Then \(q=r=p+1\) and \(s=p+2\). Let \(H^{\prime}\) be the graph defined in Case 1. Then \(H^{\prime}-wz-v_{p}v_{q}-v_{r}v_{s}\cong P_{d+3}\). Let \(u_{i}=v_{i}\) for \(i=1,\ldots,p\), \(u_{p+1}=w\), \(u_{p+2}=v_{q}\), \(u_{p+3}=z\) and \(u_{i+2}=v_{i}\) for \(i=s,\ldots,d+1\). Then \(L(H^{\prime})=L(P_{d+3})+S\), where \(S=(s_{ij})_{(d+3)\times(d+3)}\) with
\[s_{ij}=\begin{cases}2&\text{if }i=j=p+2,\\ 1&\text{if }i=j=p,p+1,p+3,p+4,\\ -1&\text{if }\{i,j\}=\{p,p+2\},\{p+2,p+4\},\{p+1,p+3\},\\ 0&\text{otherwise.}\end{cases}\]
As \(S\) is permutationally similar to \(L(P_{3}\cup P_{2}\cup(d-2)K_{1})\), one gets \(\rho_{4}(S)=0\). So by Lemma 2,
\[\mu_{4}(H^{\prime})\leq\mu_{1}(P_{d+3})+\rho_{4}(S)<4,\]
and therefore \(\mu_{4}(H)\leq\mu_{4}(H^{\prime})<4\) by Lemma 3, as desired.
**Case 3.**\(\{v_{p},v_{q}\}\cap\{v_{r},v_{s}\}=\emptyset\).
Assume that \(p<q<r<s\). If \(q-p=2\) or \(s-r=2\), then we have \(\mu_{4}(H)<4\) by similar argument as Case 2. Suppose next that \(q-p=1\) and
\(s-r=1\). Let \(H^{\prime}\) be defined as in Case 1. Note that \(H^{\prime}-v_{p}v_{q}-v_{r}v_{s}-wz\cong P_{d+3}\). It hence follows from Lemma 3 that
\[\mu_{4}(H)\leq\mu_{4}(H^{\prime})\leq\mu_{1}(H^{\prime}-v_{p}v_{q}-v_{r}v_{s}-wz )=\mu_{1}(P_{d+3})<4,\]
as desired.
By Theorems 5, 6 and 7, all trees on \(n\) vertices with diameter \(d\) belong to \(\mathfrak{G}(n,d)\) for \(1\leq d\leq n-3\).
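A quick empirical sanity check of this consequence is sketched below, assuming, as the statements of Theorems 5-7 indicate, that membership in \(\mathfrak{G}(n,d)\) amounts to \(m_{G}[n-d+1,n]\leq n-d\) for a graph with diameter \(d\leq n-3\); random labeled trees are generated from Prüfer sequences and the Laplacian eigenvalues falling in \([n-d+1,n]\) are counted.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
for _ in range(200):
    n = int(rng.integers(5, 30))
    prufer = [int(x) for x in rng.integers(0, n, size=n - 2)]
    T = nx.from_prufer_sequence(prufer)          # a random labeled tree on n vertices
    d = nx.diameter(T)
    if not 1 <= d <= n - 3:
        continue                                 # the theorems assume d <= n-3
    mu = np.linalg.eigvalsh(nx.laplacian_matrix(T).toarray().astype(float))
    m = int(np.sum(mu >= n - d + 1 - 1e-9))      # number of eigenvalues in [n-d+1, n]
    assert m <= n - d, (n, d, m)
print("no counterexample found among 200 random trees")
```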
|
2306.13648 | Spatially resolved dielectric loss at the Si/SiO$_2$ interface | The Si/SiO$_2$ interface is populated by isolated trap states which modify
its electronic properties. These traps are of critical interest for the
development of semiconductor-based quantum sensors and computers, as well as
nanoelectronic devices. Here, we study the electric susceptibility of the
Si/SiO$_2$ interface with nm spatial resolution using frequency-modulated
atomic force microscopy to measure a patterned dopant delta-layer buried 2 nm
beneath the silicon native oxide interface. We show that surface charge
organization timescales, which range from 1-150 ns, increase significantly
around interfacial states. We conclude that dielectric loss under time-varying
gate biases at MHz and sub-MHz frequencies in metal-insulator-semiconductor
capacitor device architectures is highly spatially heterogeneous over nm length
scales.
Supplemental GIFs can be found at
https://doi.org/10.6084/m9.figshare.25546687 | Megan Cowie, Taylor J. Z. Stock, Procopios C. Constantinou, Neil Curson, Peter Grütter | 2023-06-23T17:55:35Z | http://arxiv.org/abs/2306.13648v2 | # Spatially resolved dielectric loss at the Si/SiO\({}_{2}\) interface
###### Abstract
The Si/SiO\({}_{2}\) interface is populated by isolated trap states which modify its electronic properties. These traps are of critical interest for the development of semiconductor-based quantum sensors and computers, as well as nanoelectronic devices. Here, we study the electric susceptibility of the Si/SiO\({}_{2}\) interface with nm spatial resolution using frequency-modulated atomic force microscopy to measure a patterned dopant delta-layer buried 2 nm beneath the silicon native oxide interface. We show that surface charge organization timescales, which range from \(1-150\) ns, increase significantly around interfacial states. We conclude that dielectric loss under time-varying gate biases at MHz and sub-MHz frequencies in metal-insulator-semiconductor capacitor device architectures is highly spatially heterogeneous over nm length scales.
Semiconductors are emerging as a promising platform for spin-based quantum sensing and computation, with a clear path to scalability and long coherence times. In one widely adopted architecture, single dopant atoms are buried some nanometers beneath the semiconductor surface, where they are electronically accessed by means of an applied gate voltage[1; 2; 3; 4]. Silicon is a promising host lattice, in large part because existing Si microfabrication technologies, which have been refined over the past 50 years, are unparalleled for any other material[2; 4]. Despite these engineering accomplishments, however, it is impossible to fabricate a Si surface which is entirely homogeneous. In particular, if the surface has a SiO\({}_{2}\) overlayer, a variety of defects such as \(P_{\mathrm{b}0}\) and \(P_{\mathrm{b}1}\) centres[5] populate the Si surface. These states modify the electronic environment near the surface resulting in, for example, random telegraph fluctuations (1/f noise)[6; 7] or threshold voltage shifts[8].
The stability and robustness of buried semiconductor qubits depends on the dielectric dispersion of the native lattice (i.e. electronic bath)[9]. In particular, as semiconductor quantum devices increase in scale (that is, as the number of qubits increases) it is becoming increasingly important to understand inhomogeneities of the susceptibility of the Si surface[10]. The lateral spacing of buried spin qubits is on the order of nanometers[1; 4], so dielectric dispersion must be measured at this scale. Classical computation is also not immune to defect states at the Si surface; indeed, as circuit components shrink to nanometer dimensions, surface effects play an increasingly dominant role in device function[11]. It is thus important to understand the origin of inhomogeneity in the electronic properties of nanoscale devices.
In this work, we measure the spatial inhomogeneity of dielectric dispersion at a Si/SiO\({}_{2}\) interface with nanometer spatial resolution using frequency-modulated atomic force microscopy (fm-AFM)[12; 13]. The sample studied here is a patterned n-type Si surface that is terminated with 1 nm of native oxide. Figure 1 shows the spatial variability of dielectric loss measured near two different Si/SiO\({}_{2}\) trap sites observed in this surface, at variable tip-substrate gate bias \(V_{g}\). The dielectric dispersion is bias-dependent and highly sensitive to spatially localized interfacial states. We find that the relaxation times at the Si surface range between \(1-150\) ns, where there is a significant increase in this timescale around isolated trap states. The measurement methodology will now be briefly discussed, before being applied to study the sample described above.
At large (\(>10\) nm) tip-sample separations in non-magnetic systems, where the tip-sample force is predominantly electrostatic, the fm-AFM tip-sample junction can be described as a metal-insulator-semiconductor (MIS)
Figure 1: **Interfacial trap states at the Si/SiO\({}_{2}\) surface**. Spatially inhomogeneous dielectric dispersion around a donor-like (a-c) and acceptor-like (d-f) interfacial state, measured at constant height and variable bias. The colour scale, identical for each image, is \(F_{d}=0:550\) meV/cycle. **a-c** and **d-f** were acquired simultaneously (by multipass scanning) in UHV (\(\sim 10^{-10}\) mbar) at room temperature (\(\sim 300\) K).
capacitor[14; 15; 16; 17; 18; 19; 20]. In this work, the MIS capacitor is comprised of a metallic tip, an insulating vacuum gap of thickness \(z_{ins}\) (composed of the \(\sim 10\) nm tip-sample vacuum gap plus a 1 nm layer of SiO\({}_{2}\)), and an n-type Si(100) substrate, as illustrated in Figure 2a. The total capacitance of this system (\(C_{tot}\)) is made up of the insulator (oxide and vacuum, \(C_{ins}\)) and interfacial (\(C_{int}\)) capacitances in series:
\[\frac{1}{C_{tot}}=\frac{1}{C_{ins}}+\frac{1}{C_{int}} \tag{1}\]
where \(C_{int}\) describes the space-charge organization (i.e. band bending) at the silicon-oxide interface. For the low-frequency MIS capacitor, loss occurs because each capacitance has a non-zero charging / discharging timescale \(\tau\). Assuming that there are no mobile carriers in the oxide, \(\tau\) is limited by the charging and discharging characteristics of \(C_{int}\) (i.e. \(\tau\) is the time required to establish a surface potential \(V_{S}\), which is non-zero due to finite carrier mobility). In other words, \(\tau\) corresponds to the Debye relaxation timescale of the interfacial capacitor.
fm-AFM is a dynamic microscopy, which is why it can be used to characterize dielectric dispersion[21; 22; 23; 24; 25]. In fm-AFM, a cantilever-mounted tip is driven using a self-oscillation loop on the cantilever resonance \(\omega\) at a constant oscillation amplitude \(A\) above a sample surface. This means that over every oscillation cycle, the insulator thickness \(z_{ins}\) varies in time, as is shown in Figure 2b. Consequently, the surface charge organization (i.e. \(V_{S}\), band bending) varies in time (Figure 2c). The tip-sample force \(\vec{F}_{ts}\), which is related to the semiconductor surface charge density[26], therefore also varies in time (Figure 2d). \(\vec{F}_{ts}\) leads to a shift in \(\omega\) with respect to the free natural resonance \(\omega_{o}\). Assuming harmonic oscillation where \(z_{ins}(t)=A\cos(\omega t)\), the frequency shift \(\Delta\omega\) and drive amplitude \(F_{d}\) are[25; 27; 28]:
\[\Delta\omega=\omega-\omega_{o} =\frac{-\omega_{o}}{2kA}\frac{\omega_{o}}{\pi}\int_{0}^{2\pi/ \omega}\partial t\ F_{ts}(t)\cos(\omega t) \tag{2a}\] \[F_{d} =\frac{kA}{Q}-\frac{\omega_{o}}{\pi}\int_{0}^{2\pi/\omega} \partial t\ F_{ts}(t)\sin(\omega t) \tag{2b}\]
where \(k\) and \(Q\) are the spring constant and Q-factor of the cantilever. Equation 2 shows that \(\Delta\omega\) is related to the components of \(\vec{F}_{ts}(t)\) which are in-phase with \(z_{ins}(t)\), and \(F_{d}\) depends on the out-of-phase \(\vec{F}_{ts}(t)\) components. Therefore, a non-zero surface charge organization timescale \(\tau\) manifests as an increase in \(F_{d}\)[14]. By fitting \(F_{d}\) measurements to modelled \(F_{d}\) (given Equation 2b, where \(F_{ts}(t)\) is the time-dependent force between the plates of a one-dimensional MIS capacitor[29]), the best-fit \(\tau\) can be determined.
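As a rough numerical illustration of this decomposition (not the MIS-capacitor force model of [29]; the exponential distance dependence and the gap, amplitude and decay-length values below are arbitrary stand-ins), one can evaluate the in-phase and out-of-phase Fourier components of a force that responds to the oscillating gap with a delay \(\tau\): the out-of-phase component, which is the part entering \(F_{d}\) in Equation 2b, vanishes for \(\tau=0\) and grows in magnitude with \(\tau\).

```python
import numpy as np

# Toy illustration: a force evaluated at the delayed gap z_ins(t - tau) acquires a
# component out of phase with z_ins(t). All force parameters are illustrative
# stand-ins, not the MIS-capacitor force used in the paper.
f0 = 310e3                                   # cantilever resonance [Hz]
w0 = 2 * np.pi * f0
A, z0, lam = 6e-9, 12e-9, 2e-9               # amplitude, mean gap, toy decay length [m]
t = np.linspace(0.0, 2 * np.pi / w0, 4096, endpoint=False)
dt = t[1] - t[0]

def fourier_components(tau):
    z_delayed = z0 + A * np.cos(w0 * (t - tau))   # gap 'seen' by the lagging surface charge
    F = -np.exp(-z_delayed / lam)                 # attractive toy force (arbitrary units)
    in_phase = (w0 / np.pi) * np.sum(F * np.cos(w0 * t)) * dt    # the part entering Delta-omega
    out_phase = (w0 / np.pi) * np.sum(F * np.sin(w0 * t)) * dt   # the part entering F_d
    return in_phase, out_phase

for tau in [0.0, 1e-9, 50e-9, 150e-9]:
    ip, op = fourier_components(tau)
    print(f"tau = {tau * 1e9:5.1f} ns   in-phase = {ip:+.3e}   out-of-phase = {op:+.3e}")
```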
If \(\tau\) is small with respect to the cantilever oscillation period, the system can be approximated as having a delta function response at \(t-t^{\prime}=\tau\):
\[\vec{P}_{int}(t) =\epsilon_{o}\int_{-\infty}^{t}\partial t^{\prime}\ \chi_{e}^{(1)}(t-t^{\prime})\vec{E}_{S}(t^{\prime}) \tag{3a}\] \[=\epsilon_{o}\ \chi_{e}^{(1)}\ \int_{-\infty}^{\infty}\partial t^{ \prime}\ \delta((t-t^{\prime})-\tau)\vec{E}_{S}(t^{\prime})\] (3b) \[=\epsilon_{o}\ \chi_{e}^{(1)}\ \vec{E}_{S}(t-\tau) \tag{3c}\]
where \(\vec{P}_{int}\) is the interfacial polarization (i.e. band bending), \(\chi_{e}^{(1)}\) is the time-dependent low-frequency susceptibility (\(\chi_{e}^{(1)}=\epsilon-1\))[11], and \(\vec{E}_{S}\) is the electric field at the surface. In the quasi-static limit, \(\vec{P}_{int}\) and \(\vec{E}_{S}\) are known from the MIS capacitor model, so once the best-fit \(\tau\) is known, the experimental \(\chi_{e}^{(1)}\) (and \(\epsilon\)) can be determined. If \(\tau\) is nonzero, the Fourier transform of \(\epsilon\) given in Equation 3 is complex, and can be used to determine the loss tangent (\(\tan(\delta)\)) according to:
\[\tan(\delta)=\frac{\mathrm{im}\left[\tilde{\epsilon}\right]}{\mathrm{re}\left[ \tilde{\epsilon}\right]} \tag{4}\]
where \(\tilde{\epsilon}\) is the frequency-dependent permittivity. \(\tan(\delta)\) is the sum of high-frequency losses (\(\tan(\delta_{i})\) due to atomic and electronic polarizations (i.e. bound charges) and low-frequency losses \(\tan(\delta_{c})\) due to finite sample conductivity (i.e. free carriers)[30; 31; 32]:
\[\tan(\delta)=\tan(\delta_{i})+\tan(\delta_{c}) \tag{5}\]
The results shown in this work were measured in the low-frequency regime
Figure 2: **Experimental setup.** (a) fm-AFM tip-sample junction, composed of a metallic tip, an insulating gap comprised of a vacuum gap and a \(1\ nm\) SiO\({}_{2}\) overlayer, with total thickness \(z_{ins}(t)\), and an n-doped Si(100) sample. An MIS capacitor band diagram (not drawn to scale) is overlaid. \(V_{g}\) is applied to the tip, and the sample is grounded. (b-d) The time dependencies of \(z_{ins}(t)\), \(V_{S}(t)\), and \(\vec{F}_{ts}(t)\) are shown for \(V_{g}=-3\ V\) over two cantilever oscillation periods. \(\tau\) is exaggerated by several orders of magnitude for illustrative purposes. (The values of \(\tau\) discussed in this work, on the order of \(ns\), would be indistinguishable here if drawn to scale.)
(\(f=\omega/2\pi\approx 310\) kHz), where free carriers dominate loss, such that \(\tan(\delta)\approx\tan(\delta_{c})\). An increase in \(\tan(\delta)\), therefore, indicates an increase in the equivalent series resistance of \(C_{int}\), meaning that more energy is dissipated by Ohmic loss[21, 33].
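As a minimal numerical sketch of Equations 3-4 (using the measurement frequency quoted above and the static Si permittivity \(\epsilon=11.7\) listed in the Methods; the single-relaxation-time form is the one implied by the delta-function response), the delayed response translates into a complex permittivity \(\tilde{\epsilon}(\omega)=1+\chi_{e}^{(1)}e^{-i\omega\tau}\), from which the loss tangent follows for the range of \(\tau\) reported here:

```python
import numpy as np

f = 310e3                       # gate-modulation (cantilever) frequency [Hz]
omega = 2 * np.pi * f
chi = 11.7 - 1.0                # low-frequency susceptibility chi = eps - 1 (Si, from Methods)

def loss_tangent(tau):
    eps = 1 + chi * np.exp(-1j * omega * tau)   # Fourier transform of the delayed response, Eq. (3)
    return abs(eps.imag) / eps.real             # Eq. (4), magnitude of the imaginary part

for tau in [1e-9, 10e-9, 50e-9, 150e-9]:
    print(f"tau = {tau * 1e9:5.0f} ns  ->  tan(delta) ~ {loss_tangent(tau):.3f}")
```

With these inputs, \(\tau\) in the tens of nanoseconds already gives loss tangents of order 0.1, consistent with the room-temperature values reported below.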
## Results
The sample measured in this work contains patterned squares of variable two-dimensional dopant density, up to a maximum of \(1.6e14/\)nm\({}^{2}\)[34], on a background substrate doping of \(9.0e14/\)cm\({}^{3}\). The un-patterned background is bulk doped with phosphorous, while the patterned squares are delta-doped with arsenic with a dopant layer thickness of approximately 2 nm. The entire wafer is capped by 3 nm of epitaxial Si, the surface of which has subsequently formed 1 nm of native SiO\({}_{2}\), as determined by secondary mass ion spectroscopy[34]. The results shown in Figures 1 and 3 were measured in the background (lowest dopant density) region.
**Spatial inhomogeneity \(-\)** The Si/SiO\({}_{2}\) interface is prone to trap states which modify the electronic properties of the MIS capacitor[6, 29, 35]. In particular, interfacial traps (such as P\({}_{\rm b0}\) and P\({}_{\rm b1}\) centers) which have energy levels within the band gap interact significantly with the Si surface charge[35, 36]. Donor-like traps (e.g. Figure 1a-c), which have energies in the lower half of the band gap, can become positively charged via emission of an electron to the valence band. Acceptor-like traps (e.g. Figure 1d-f), which have energies in the upper half of the band gap, can become negatively charged via capture of an electron from the conduction band[29, 36]. The interface state energies are fixed with respect to the band edges[6]. This means that the interface state occupancy depends on \(V_{S}\) (and therefore, \(V_{g}\)), since capture or emission into a trap state depends on its energy with respect to the Fermi level \(E_{f}\).
Figure 3(a-f) shows band diagrams at various biases at the bottom (maximum \(\vec{E}_{S}(t)\), solid lines) and top (minimum \(\vec{E}_{S}(t)\), dashed lines) of the cantilever oscillation. Two trap state energies, corresponding to a donor-like trap (orange) and an acceptor-like trap (blue), are shown at the bottom and top of the cantilever oscillation. An empty circle indicates an unoccupied state, and a filled circle an occupied state. The donor-like trap is unoccupied at high negative voltage, but as \(|V_{g}|\) decreases and the bands flatten and bend downward, the donor trap energy lowers below \(E_{f}\), and it becomes occupied. The acceptor-like trap is unoccupied from negative biases up to positive biases in the accumulation regime, where the trap energy lowers below \(E_{f}\) and it becomes occupied. The trap state energies found here (0.17 eV above the valence band for the donor-like trap and 0.65 eV above the valence band for the acceptor-like trap) are in agreement
Figure 3: **Interfacial state occupancy and loss**. (a-f) Band diagrams at the bottom (i.e. closest \(z_{ins}\), solid) and top (i.e. farthest \(z_{ins}\), dashed) of the cantilever oscillation at variable \(V_{g}\). (The dashed and solid curves nearly overlap at positive biases.) Donor-like (orange) and acceptor-like (blue) states are also shown at the bottom and top of the oscillation, with respective energies 0.17 eV and 0.65 eV above the valence band maximum. The state occupancy is indicated by full or empty circles. (g) \(V_{g}\)-dependent energy of the donor-like and acceptor-like states and the Fermi energy \(E_{f}\). The \(V_{g}\) values corresponding to the crossing points are shown with vertical lines. (h) \(\tan(\delta)\) measured above a donor-like trap (orange), acceptor-like trap (blue), and far from either trap (grey). 10 curves (measured at constant height above a \(-3\) Hz setpoint) are shown for each spectrum, with their average overlaid. In (h), the uncertainty diverges to infinity as \(V_{g}\) approaches the \(V_{fb}\), so this region is not shown.
with accepted levels for \(P_{b0}\) states[37; 38].
At certain biases, the trap level shifts above and below the Fermi energy during every oscillation cycle: Figure 3b shows a scenario where the donor-like trap is unoccupied at the bottom of the cantilever oscillation and occupied at the top; Figure 3e shows an acceptor-like trap which is occupied at the bottom and unoccupied at the top. The complete bias dependence of the occupation of each trap is shown in Figure 3g. The \(V_{g}\) at which the state energy falls below \(E_{f}\) is the crossing point, where the state occupancy switches from unoccupied to occupied. Figure 3h shows the bias-dependent \(\tan(\delta)\) above the trap states, as well as a region devoid of either type of trap (grey). At biases within the crossing points, there is a significant increase in \(\tan(\delta)\) as compared to the trap-free spectrum. Capture and re-emission out of a trap state demands a re-organization of the surface charge density, and so when this occurs over every oscillation cycle, there is an additional \(\vec{P}_{int}\) equilibration timescale which leads to an increase in \(\tau\), and therefore \(\tan(\delta)\). The donor-like peak is wider than the acceptor-like peak because there is a larger difference in the donor-like crossing points at the top and bottom of the cantilever oscillation. Additionally, the peak width is related to the oscillation amplitude (which for a static capacitor configuration corresponds to increasing the AC gate bias amplitude): If the oscillation amplitude increases, the crossing point difference also increases, and the peak broadens. (See the Supplementary Materials.)
This bias-dependent spatial inhomogeneity of \(\tan(\delta)\) manifests as the ring-like \(F_{d}\) features shown in Figure 1 and Figure 4[39]. Any spatially localized process which exhibits a peak in a bias spectrum in fm-AFM manifests as a ring when imaged spatially at constant height[40]. Specifically, the ring shape is due to the spatial localization of the top gate (tip), which introduces circularly symmetric equipotential lines at the sample surface. As the tip moves in \(x\), \(y\), or \(z\) away from a trap state, the peak shifts to more extreme biases. The \(z_{ins}\) dependence of the donor-like trap bias spectrum, showing a peak shift toward negative \(V_{g}\) as \(z_{ins}\) increases, is shown in the Supplementary Materials.
**Dopant density dependence \(-\)** Three "delta-doped" patterned squares of this sample can be seen in Figure 4. The two left-most squares have the highest dopant density, the right-most square has an intermediate dopant density, and the un-patterned background has the lowest dopant density. Figure 4 shows an overall trend where, as the dopant density increases, \(F_{d}\) decreases. This is due to the increased Si metallicity in the patterned regions: As the dopant density increases, the resistivity decreases, meaning that the equivalent series resistance decreases, and so does \(\tan(\delta)\).
There is additionally a dopant density dependence in the ring density. Figure 4 shows that there are many donor-like rings at negative biases in the background region, with a density on the order of 10 rings/100 nm\({}^{2}\). (The acceptor-like rings are much sparser: Measurements at positive bias - not shown - exhibited fewer than 5 rings over the area shown in Figure 4.) In the case of both donor-like and acceptor-like traps, the ring density decreases as dopant density increases. This can also be understood as being due to the increased metallicity of the patterned squares: Interfacial states are still expected to be present, but as the dopant density increases, band bending decreases, meaning that the effect described in Figure 3 occurs at much larger biases.
**Bias dependence \(-\)** The nature of the charge organization at the semiconductor surface is highly bias-dependent. This can be seen in the band diagrams in Figure 3(a-f), where \(V_{S}\) is larger at negative biases than positive biases. Even in the absence of interfacial states, the surface charge density continually re-organizes (i.e. there is a change in \(V_{S}\)) over every cantilever oscillation cycle, meaning that \(\tan(\delta)\) is non-zero. The magnitude of \(\tan(\delta)\) depends on \(\Delta V_{S}\) over every oscillation cycle: If \(\Delta V_{S}\) is large, \(\tan(\delta)\) is large. This bias dependence might be equivalently understood in terms of the operation regimes of the MIS capacitor, which are shown in Figure 5a (where s- strong inversion; w- weak inversion; d- depletion; a- accumulation). At some biases,
Figure 4: **Spatial inhomogeneity of the Si/SiO\({}_{2}\) interfacial dielectric dispersion.** fm-AFM drive signal \(F_{d}\) of a patterned Si/SiO\({}_{2}\) surface at variable \(V_{g}\). The tip-sample separation was constant above a \(-3\) Hz setpoint frequency in the background region. a-c were acquired simultaneously (by multipass scanning). The vertical scale bar for each is \(F_{d}=0:500\) meV/cycle.
the capacitor switches between different charge distribution regimes over every oscillation cycle. For example, around \(-6\) V, the capacitor is under strong inversion at the bottom of its oscillation and weak inversion at the top (i.e. the 'sw' regime). In the ww, wd, and dd regimes, the charge re-organization occurs predominantly in the depletion region, and there is a significant change in the depletion width \(z_{d}\) over every oscillation cycle. In these regimes, \(\Delta V_{S}\) is large, and consequently \(\tau\) (Figure 5b) and \(\tan(\delta)\) (Figure 5c) are maximized. In the ss and aa regimes, \(z_{d}\) is approximately constant, and most of the charge re-organization occurs immediately at the semiconductor surface. In these regimes, \(\Delta V_{S}\) is small, meaning that there is very little surface charge re-organization, so \(\tau\) and \(\tan(\delta)\) decrease.
## Conclusions
We show that the magnitude of dielectric losses at the Si/SiO\({}_{2}\) interface is highly inhomogeneous[10, 20, 39], dopant density-dependent, and gate bias-dependent. In particular, interfacial trap states lead to a dramatic increase in dielectric loss at particular biases corresponding to the trap state energy. Increasing the AC gate bias amplitude increases the width of this peak. Increasing the distance between the tip and the trap (which for a static capacitor corresponds to the system geometry or, if a qubit is acting as a noise spectrometer of the trap state[9], the distance between the qubit and the trap) leads to a peak shift toward larger voltages.
These results were measured far below the resonance frequency of the interfacial capacitance. This is necessarily the case, as the MIS capacitor model used to interpret these results is quasi-static. However, the values of \(\tau\) that were measured range between \(1-150\) ns (depending on \(V_{g}\) and the proximity to a trap state), which corresponds to an interfacial capacitor resonance frequency between \(1-150\) MHz. This frequency range encompasses typical Rabi frequencies of buried spin qubits, which are between \(1-10\) MHz[41, 42]. As a result, the effective amplitude and phase of the potential at the location of a qubit will be a function of the temporal structure of the applied bias pulse sequence and the local position and energy level of defect states.
Note that the values of \(V_{g}\) shown in these results are much greater than the typical values (\(\mu V-\mathrm{mV}\)) used for spin qubit readout. The characteristics of the MIS charge organization depend on the capacitor geometry. Here, the closest tip-sample separation \(z_{ins}\) is \(12\) nm, but for the same capacitor with a \(1\) nm insulator thickness (e.g. for a metal-oxide-semiconductor structure with a \(1\) nm oxide thickness), the peak which occurs at \(\sim-4\) V in Figure 3h can be expected to occur closer to \(-500\) mV.
Finally, it can be expected that the \(\sim 0.1\) room temperature loss tangents measured here (which are similar to values reported elsewhere[31, 32, 43, 44]) will be several orders of magnitude smaller at cryogenic temperatures[30, 45], as carrier concentrations decrease and various mobility-limiting phonon scattering mechanisms are reduced[44]. Yet, as long as the mobility is non-infinite, these bias-dependent (and trap-dependent) dielectric losses occur under any time-varying electric field, and so should be taken into consideration for the continued development of semiconductor-based quantum sensors and computers, as well as nanoelectronic devices.
## Methods
**Experimental setup** - All experimental results presented in this work were collected with Nanosensors platinum-iridium coated silicon tips (PPP-NCHPt) with approximately \(310\) kHz resonant frequency, spring constant \(42\) N/m, and a Q-factor of approximately \(18000\). Experiments were conducted at room temperature, assumed to be \(300\) K, in UHV (\(\sim 10^{-10}\) mbar). The oscillation amplitude, unless otherwise stated in amplitude-dependence experiments, was \(6\) nm. For all modelling and measurements, the bias was applied to the tip and the sample was grounded.
**Bias spectroscopy specifications** - Each bias spectrum includes the forward (positive to negative \(V_{g}\)) and backward curve superimposed, showing that there is negligible
Figure 5: **Bias dependencies of an MIS capacitor**. (a) Experimental \(F_{d}\) bias spectrum (colour) and null (\(z_{ins}\sim 1\)\(\mu m\), grey). Modelled \(F_{d}\) for various \(\tau\) (between \(1-100\)\(ns\)) are also shown. Six MIS bias regimes are identified, indicating the charge distribution regime at the bottom and top of the cantilever oscillation. (b) \(\tau\) and (c) \(\tan(\delta)\), corresponding to the experimental data in (a). With this method, as \(V_{g}\) approaches the flatband voltage (\(\sim 600\) mV, where all of the \(F_{d}\) curves overlap), the uncertainty in \(\tau\) and \(\tan(\delta)\) increases.
hysteresis with bias. Each forward and backward curve is the average of 10 individual sweeps, which took about 30 \(s\) each to acquire.
**Imaging specifications-** The images shown in this work were measured by multi-pass imaging. In the first pass, the tip followed the topography defined by \(V_{g}=0\) V at a setpoint frequency shift \(\Delta f=-3\) Hz. In subsequent passes, the tip followed this same topography (1 nm tip lifted), but \(V_{g}\) was set to the displayed values, and \(\Delta f\) and \(F_{d}\) varied. The positions, radii, and quantity of the rings shown here were stable over the course of several weeks of measurement.
**Sample fabrication -** The phosphorus-doped (9.0e14/cm\({}^{3}\)) Si(001) substrate is 300 \(\mu m\) thick. The delta-doped regions at variable arsenic-dopant density were fabricated using scanning tunnelling microscopy-based hydrogen resist lithography and encapsulated with a 3 nm epitaxial intrinsic silicon layer plus a 1 nm native oxide.
**MIS model parameters -** A MIS capacitor model[29] was used to study the capacitances of this system. The input parameters were: A closest tip-sample separation (\(z_{ins}\)) of 12 nm, tip radius 5 nm, \(\epsilon=11.7\), electron affinity 4.05 eV, tip work function 4.75 eV, electron and hole effective masses 1.08 and 0.56, n-type dopant density 5e17/cm\({}^{3}\), and band gap energy 0.7 eV. The 0.7 eV band gap was a fit parameter, and is smaller than the \(\sim 1.1\) eV expected for bulk Si. This discrepancy could be due to surface bandgap narrowing due to the presence of the large surface state density, as in [46; 47].
## Acknowledgements
This research was supported by Natural Sciences and Engineering Research Council of Canada (NSERC) Alliance Grants - Canada-UK Quantum Technologies, an NSERC Discovery Grant, and Fonds de recherche du Quebec - Nature et technologies, as well as the Engineering and Physical Sciences Research Council [grants EP/R034540/1, EP/V027700/1, and EP/W000520/1] and Innovate UK [grant 75574]. The authors would also like to thank Kirk Bevan and Hong Guo for stimulating discussions.
|
2304.14986 | Interpreting Vision and Language Generative Models with Semantic Visual
Priors | When applied to Image-to-text models, interpretability methods often provide
token-by-token explanations namely, they compute a visual explanation for each
token of the generated sequence. Those explanations are expensive to compute
and unable to comprehensively explain the model's output. Therefore, these
models often require some sort of approximation that eventually leads to
misleading explanations. We develop a framework based on SHAP, that allows for
generating comprehensive, meaningful explanations leveraging the meaning
representation of the output sequence as a whole. Moreover, by exploiting
semantic priors in the visual backbone, we extract an arbitrary number of
features that allows the efficient computation of Shapley values on large-scale
models, generating at the same time highly meaningful visual explanations. We
demonstrate that our method generates semantically more expressive explanations
than traditional methods at a lower compute cost and that it can be generalized
over other explainability methods. | Michele Cafagna, Lina M. Rojas-Barahona, Kees van Deemter, Albert Gatt | 2023-04-28T17:10:08Z | http://arxiv.org/abs/2304.14986v2 | # Interpreting Vision and Language Generative Models with Semantic Visual Priors
###### Abstract
When applied to Image-to-text models, interpretability methods often provide token-by-token explanations namely, they compute a visual explanation for each token of the generated sequence. Those explanations are expensive to compute and unable to comprehensively explain the model's output. Therefore, these models often require some sort of approximation that eventually leads to misleading explanations.
We develop a framework based on SHAP, that allows for generating comprehensive, meaningful explanations leveraging the meaning representation of the output sequence as a whole. Moreover, by exploiting semantic priors in the visual backbone, we extract an arbitrary number of features that allows the efficient computation of Shapley values on large-scale models, generating at the same time highly meaningful visual explanations.
We demonstrate that our method generates semantically more expressive explanations than traditional methods at a lower compute cost and that it can be generalized over other explainability methods.
vision and language, multimodality, explainability, image captioning, visual question answering, natural language generation
## 1 Introduction
Multimodal learning research has witnessed a surge of effort leading to substantial improvements, in algorithms involving the integration of vision and language (V&L ), for tasks such as image captioning (Lin et al., 2014; Hossain et al., 2019; Sharma et al., 2020) and visual question answering (Antol et al., 2015; Zhu et al., 2016; Srivastava et al., 2021). The need has arisen to create more challenging tasks and benchmarks requiring higher fine-grained linguistic capabilities (Parcalabescu et al., 2022; Thrush et al., 2022) and semantic and temporal understanding (Yu et al., 2016; Park et al., 2020).
In this context, the role of interpretability methods has become central to assessing the models' grounding capabilities. However, such tools are often designed for specific classes of tasks or models. To overcome this limitation, model-agnostic interpretability methods, such as SHAP-based methods (Lundberg and Lee, 2017), are often preferred over others, since they rely on a solid theory and benefit from desirable properties not available in other methods.
When such methods are applied to V&L generative tasks, like image-captioning, the goal is to explain the textual output with reference to the visual input. However, the text generation process happens token-by-token, and as a result, most of the interpretability methods applied in this context tend to produce local token-by-token explanations. Moreover, for most applications, current methods build the explanation on top of arbitrary regions of the visual input.
Such explanations are hard to interpret as they are token-specific, and they are costly to compute since the number of models' evaluations grows exponentially with the number of features used in each explanation. To mitigate these issues, approximation techniques, like sampling, and input feature reduction are usually applied. However, this produces inaccurate explanations which lack detail and are hard to interpret.
In this work, we address these issues by proposing:
1. a modular framework to create a new family of tools to generate explanations in V&L generative settings;
2. a method to generate sentence-based explanations for vision-to-text generative tasks, as opposed to token-by-token explanations, showing that such explanations can efficiently be generated with SHAP by exploiting semantic knowledge from the two modalities;
3. a method to reduce the number of visual input features by exploiting the semantics embedded in the models' visual backbone. We extend this method to a number of different architectures, performing a systematic comparative study and we propose an alternative for the architectures not suitable for this method;
4. a human evaluation designed to assess key user-centric properties of our explanations.
## 2 Related Work
### Interpretable Machine Learning
Interpretable machine learning is a multidisciplinary field encompassing efforts from computer science, human-computer interaction, and social science, aiming to design user-oriented and human-friendly explanations for machine learning models. It plays an important role in the field for a series of reasons: it increases trust, confidence, and acceptance of machine learning models by users, and enables verification, validation, and debugging of machine learning models. Techniques for deep neural networks (DNN) can be grouped into two main categories: _white-box_ methods which exploit the knowledge of the internal structure of the model to generate the explanation and _black-box_ methods, also called model-agnostic, which operate only on the inputs and the outputs (Loyola-Gonzalez, 2019).
_White-box methods._ There exist two types of white-box methods: attention-based and gradient-based methods. _Attention-based_ methods (e.g. Ahmed et al., 2021; Zheng et al., 2022) exploit the model's attention activations to identify the part of the input attended by the model during the prediction. They can be used to explain predictions in diverse tasks, like image recognition (Li et al., 2021), authorship verification (Boenninghoff et al., 2019), gender bias identification (Boenninghoff et al., 2019), etc. On the other hand, _gradient-based_ methods (Springenberg et al., 2014; Selvaraju et al., 2017) compute feature attributions by manipulating the gradients computed in the backward step with respect to the original inputs (Shrikumar et al., 2016), or with respect to a specific baseline (Sundararajan et al., 2017; Simonyan et al., 2013).
_Black-box methods_ do not make any assumptions regarding the underlying model. For example, Permutation Feature Importance (Breiman, 2001), initially designed for random forests and later extended into a model-agnostic version by Fisher et al. (2019), consists in randomly shuffling the input features and evaluating the model's output variations. Ribeiro et al. (2016) proposed LIME for Local Interpretable Model-Agnostic Explanation. This method uses a surrogate linear model to approximate the black-box model locally, that is, in the neighborhood of any prediction. LOCO (Lei et al., 2018) is another popular technique for generating local explanation models. It can provide insight into the importance of individual variables in explaining a specific prediction. SHAP (Lundberg and Lee, 2017) is a framework considered by many to be the gold standard for local explanations, thanks to its solid theoretical background. SHAP leverages the concept of Shapley values, first introduced by (Shapley et al., 1953), used to measure the contribution of players in a cooperative game. This was later extended by (Lundberg and Lee, 2017) for the purpose of explaining a machine learning model.
In this work, we propose a flexible hybrid framework based on SHAP, which benefits from properties typical of _black-box_ methods, since it can be applied in a completely model-agnostic way. At the same time, our method shares some properties with _white-box_ approaches since, when possible, it takes advantage of certain internal components of the model. In particular, the framework we propose for Vision-Language generative models can be leveraged to exploit architectural features of a model's visual backbone to generate more semantically meaningful explanations.
### Background on SHAP
In the context of machine learning, the cooperative framework introduced by Shapley et al. (1953) can be framed as a game where each input feature is a player and the outcome is determined by the model's prediction. Shapley values measure the contribution of each player to the final outcome, or in other words, the input features' importance. Shapley redistributed the total outcome value among all the features, based on their marginal contribution across the possible coalitions of players, i.e. combinations of input features. The outcome of the game, namely the prediction of the model, is redistributed across the features, in the form of contributions that have three desirable properties:
* _Efficiency_: all the Shapley values add to the final outcome of the game;
* _Symmetry_: all the features generating the same outcome in the game have the same Shapley value, thus the same contribution;
* _Dummy_: if adding a feature to a coalition (i.e. set of features) does not change the outcome of the game, its Shapley value is zero.
Furthermore, Lundberg and Lee (2017) contribute by formulating a variety of methods to efficiently approximate Shapley values in different conditions:
1. KernelSHAP: derived from LIME and totally model agnostic, hence the slowest within the framework;
2. LinearSHAP: designed specifically for Linear models;
3. DeepSHAP: adapted from DeepLift (Shrikumar et al., 2017) for neural networks, which is faster than KernelSHAP, but makes assumptions about the model's compositional nature.
Later on, the framework was extended with further methods tailored to specific settings; Mosca et al. (2022a) provide a thorough overview of the SHAP family of methods.
It is important to note that all these methods work under the so-called _feature independence assumption_, which is fundamental for the theoretical resolution of the problem. However, in order to deal with real-life scenarios, this constraint is relaxed to some extent. For instance, in Natural Language Processing tasks each token of a textual sequence is considered an independent feature (Kokalj et al., 2021) whereas, in Computer Vision, the image is usually split into squared patches also considered independent of each other (Jeyakumar et al., 2020). In both of these cases, the independence assumption is a simplification. For example, language tokens are often mutually dependent in context (and this is indeed the property leveraged by self-attention in Transformer language models). Similarly, pixels in neighboring patches in an image may well belong to the same semantically relevant region (and this is indeed the property exploited by many neural architectures suited for computer vision tasks, such as convolutional networks). Properties of tokens in context and those of pixels in image regions have been taken into account in some adaptations of SHAP which consider the hierarchical structure of the feature space, such as HEDGE for text (Chen et al., 2020) and h-SHAP for images (Teneggi et al., 2022).
Our framework relies on KernelSHAP as is it totally model-agnostic. We address both the efficiency issue and the strict independence assumption of the method by generating semantic input features (more details in Section 3.3) and optimizing the approximation through sampling (full details in Section 3.1.1).
### Explainability for Vision and Language
One way to characterize the scope of V&L models is with respect to the types of tasks they are designed to address. On the one hand, tasks like image captioning (Fisch et al., 2020; Zhang et al., 2021; Anderson et al., 2018; Mokady et al., 2021; Li et al., 2022), image-text retrieval (Cao et al., 2022; Radford et al., 2021), and visual question answering (Antol et al., 2015) require a strong focus on the recognition of objects in images. More recently, research has begun to explore the capabilities of models in tasks that require some further reasoning or inference over the image contexts, such as understanding analogies (Zhang et al., 2019), describing actions and rationales (Cafagna et al., 2023) and inferring temporal relations (Park et al., 2020).
The need to understand how V&L models ground their predictions has become essential, leading to the emergence of Explainable Artificial Intelligence (XAI) for multimodal settings (Zellers et al., 2019). Visual explanations can help humans to know what triggered the system's output and how the system attended to the image. To this purpose, feature attribution methods are often preferred as they can provide a visual explanation of the prediction. Most of the XAI methods introduced for unimodal tasks can be adapted to V&L tasks.
Some popular _white-box_ methods use gradients to generate saliency maps to highlight the pixels corresponding to highly contributing regions. These methods include Grad-CAM (Selvaraju et al., 2017; Shrikumar et al., 2016) or Layer-wise Relevance Propagation (LRP) (Binder et al., 2016) where the contribution is computed with respect to an intermediate layer instead of the input layer. These methods can produce fine-grained pixel-level explanations. However, their outcomes can be noisy and require many evaluations to converge to a stable explanation.
_Black-box_ approaches are mostly perturbation-based, that is, they compute attributions based on the difference observed in the model's prediction by altering the input. Such methods include occlusion sensitivity, RISE (Petsiuk et al., 2018), and LIME (Ribeiro et al., 2016). Other approaches are task-agnostic, like MM-SHAP (Parcalabescu and Frank, 2022), where a SHAP-based method is used to measure the contribution of the two modalities in V&L models independently of the task performance. Although these methods make few assumptions about the underlying model, their explanations are computationally
expensive, as the number of model evaluations required grows exponentially with the number of features. To overcome this limitation, the number of features is usually reduced by partitioning the image into patches called superpixels, which discretize the input into a smaller number of features. However, this approach can lead to coarser and not very informative explanations.
#### 2.3.1 Generative tasks
Explanations for V&L generative tasks like image captioning incur even more complexity, as the prediction of the model is now a textual sequence. A popular solution, which is in keeping with the autoregressive nature of neural language decoders, is to break down the caption generation process into a series of steps where each token is explained separately with respect to the image and the previously generated sequence. This requires generating a single visual explanation for each generation step. However, the meaning of the sentence is not only determined by the meaning of the single words it is composed of, but also by the way these words are combined and arranged together. Therefore, a global meaningful explanation must take into account the whole textual sequence and not just part of it, as only in this way can the explanation take into account the whole textual context.
A popular solution is to generate the token-level explanations using Integrated Gradients (Sundararajan et al., 2017), providing region-level visualizations or using the attention activation scores to visualize the model's attended regions (Cornia et al., 2022; Zhang et al., 2019). However, these methods are white-box approaches as they make assumptions about the inner workings of the model, thus they need to be specifically re-adapted to new systems. Furthermore, they focus on token-level explanations but do not allow a comprehensive global explanation of the textual output.
To the best of our knowledge, our work is the first attempt to bring together a model-agnostic framework like SHAP, in the image-to-text task, with the aim of providing a comprehensive explanation of the generated textual output as a whole, rather than on a token-by-token level.
## 3 Method
### Kernel Shap
The core method of our framework is Kernel Shap. We base our approach on the formulation by Lundberg and Lee (2017), which provides an accurate regression-based, model-agnostic estimation of Shapley values. The computation is performed by estimating the parameters of an explanation model \(g(x^{\prime})\) which matches the original model \(f(x)\), namely:
\[f(x)=g(x^{\prime})=\phi_{0}+\sum_{i=1}^{M}\phi_{i}x^{\prime}_{i} \tag{1}\]
where \(M\) is the number of input features (or players) and \(x^{\prime}_{i}\) is a player of the game. \(g(x^{\prime})\) is approximated by performing a weighted linear regression using the Shapley kernel:
\[\pi_{x^{\prime}}(z^{\prime})=\frac{M-1}{\binom{M}{|z^{\prime}|}|z^{\prime}|(M-|z^{\prime}|)} \tag{2}\]
where \(z^{\prime}\) is the subset of non-zero entries, namely a binary representation of the coalition of players. The Shapley kernel, in other words, is a function assigning a weight to each coalition. The number of coalitions
needed to approximate the Shapley values corresponds to all the possible combinations of players, i.e. \(2^{M}\) coalitions. This makes Kernel SHAP extremely expensive to compute (and slow in practice) when \(M\) is large.
#### 3.1.1 Kernel SHAP Sampling
Kernel SHAP is model-agnostic, meaning that it cannot make any assumption on the model to explain. For this reason, it is also among the slowest in the SHAP family of XAI methods (Mosca et al., 2022). This issue is addressed by performing Monte Carlo sampling over the pool of coalitions, which, under certain conditions, allows a reasonably accurate approximation of the Shapley values to be computed, even for large models or on low-resource hardware.
Taking inspiration from Molnar (2020), we implement a deterministic sampling strategy, where we prioritize the high-weight coalitions, whose weight is computed by Eq. 2, when applying a certain sampling budget. This is achieved by generating the coalitions in decreasing weight order and selecting the first \(k\) coalitions, where \(k\) corresponds to the sampling budget. In Figure 1 we compare the plots of the coalition's weights computed using the standard Kernel SHAP (on the left) and prioritizing high-weight coalitions (on the right); as observed, our sampling strategy with priority (on the right) ensures selecting the high-weight coalitions first, providing an optimal ordering among samples.
Sampling with priority offers two main advantages:
1. higher accuracy of the Shapley values estimate;
2. a deterministic sampling strategy.
In Figure 2 we report the approximation error of the Shapley values when applying Kernel SHAP, using Monte Carlo (orange) and the high-weight priority (blue) as sampling strategy, for different sampling sizes. The error is computed over \(10\) runs, using the Mean Squared Error (MSE) with respect to the Shapley values computed with Kernel SHAP using all the \(2^{M}\) coalitions. Our sampling with priority approximates Shapley values with errors that are orders of magnitude smaller than Monte Carlo sampling, consistently, for different sampling sizes.
#### 3.1.2 How to adapt Kernel SHAP to Vision and Language Generative Tasks
In the image captioning scenario, we can set up a cooperative game, where we want to compute the contributions of the players, i.e. the pixels of the image, with respect to the outcome, i.e. the caption. In
Figure 1: Standard Kernel SHAP (left) and modified Kernel SHAP with priority for high-weight coalitions (right). The y-axis corresponds to weight whereas the x-axis is the iteration in which a particular coalition is generated.
this section, we identify two shortcomings of the standard way in which this is performed and discuss our contributions to overcome these shortcomings, which were pointed out in Section 2 above.
The first problem is related to the definition of coalitions in the visual input. The number of coalitions to be computed grows exponentially with the number of players. This makes the computation of the Shapley values intractable for images and makes any sampling strategy completely inaccurate. In order to overcome this limitation, the image is typically partitioned into a grid composed of _superpixels_, namely groups of pixels, each of which represents a single player. This reduces the total number of players in the game, making computation of the Shapley values more feasible, but at the same time, it reduces the degree of the detail of the explanation. Moreover, we argue that breaking the image into a grid of square superpixels breaks the semantics underlying the image, resulting in potentially under-informative explanations. In particular, there is no guarantee that the pixels grouped together in this manner correspond to semantically meaningful image regions.
The second problem is related to the comprehensiveness of explanations. In order to measure the variations of the outcome of the function needed to run Kernel SHAP, the caption generation process is usually broken down into token generation steps. Each step produces logits that can be used to compute a numerical outcome. However, this forces us to consider each generation step as a separate cooperative game, meaning that we need to run a separate instance of Kernel SHAP for each generated token, further increasing the time and compute cost needed to explain an image-caption pair. Moreover, such explanations refer to single tokens and do not provide an explanation for the whole output of the model, namely the caption.
In the following sections, we address these issues, proposing alternative solutions. Specifically, we first address the second shortcoming in Section 3.2, before turning to a proposal for semantically meaningful and sparse priors in Section 3.3.
Figure 2: Mean Squared Error (MSE) of the Shapley values estimated using Monte Carlo sampling (orange) and sampling coalitions with priority (blue), for various sampling sizes. All the values on the x-axis are exponentials (\(2^{M-1},2^{M-2},2^{M-3}\)) where \(M\) corresponds to the number of features. The MSE is computed with respect to the Shapley values computed using all the \(2^{M}\) coalitions available in the sampling space.
### Towards Sentence-based explanations
As explained in Section 3.1, Shapley values are computed with respect to a numerical value representing the outcome of the function we are interested in explaining. With generative models, this is typically done on a token-by-token basis. For example, in an image captioning scenario, this is achieved by measuring variations in the logit of the generated tokens for the caption, at each time step. Such token-based explanations are useful to assess the grounding of a specific part of speech - like nouns and verbs - but fail in providing a global explanation that takes into account the meaning of the whole sentence.
On the other hand, with a whole caption, we lack the numerical output against which to compute the attributions. In order to adapt Kernel SHAP to generate global explanations for the caption, we measure variations of the caption meaning representation when perturbations are applied to the image in input. This allows us to numerically quantify the meaning variation of the whole caption. Formally, given an image-captioning model \(f\) and an image \(x\) we generate a caption \(c=f(x)\) and we compute:
\[e_{ref}=E(c) \tag{3}\]
where \(e_{ref}\) is the embedding representation of \(c\) that we consider the _reference embedding_ of the caption, and \(E()\) is a function used to extract such a representation.
For each perturbed image \(x^{\prime}\) and its corresponding caption we extract, analogously, an embedding \(e^{\prime}\), then we compute:
\[s=cos(e_{ref},e^{\prime}) \tag{4}\]
where \(s\) is the variation in the embedding representation, computed as the cosine distance \(cos(\cdot)\) between the reference embedding \(e_{ref}\) and the embedding \(e^{\prime}\) of the caption of the perturbed image.
In other words, we use the cosine distance between the semantic representation of the reference caption and the caption generated upon input perturbation, to measure the model's variations; a schematic representation of the method is shown in Figure 3. Re-framing the problem as described allows us to apply Kernel SHAP to compute feature attributions taking into account the semantic variation of the whole caption in a single cooperative game instance.
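To make the value function concrete, the sketch below shows one way the quantities in Equations (3) and (4) could be computed for a single perturbed image; `generate_caption` and `embed` are placeholders for the captioning model \(f\) and the embedding extractor \(E(\cdot)\), which are not tied to a specific implementation here.

```python
import numpy as np

def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine distance between two embedding vectors."""
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def sentence_value(perturbed_image, generate_caption, embed, e_ref) -> float:
    """Value of one perturbation: semantic variation of the generated caption
    with respect to the reference caption (Eq. 4)."""
    caption = generate_caption(perturbed_image)   # placeholder for the V&L model f
    return cosine_distance(e_ref, embed(caption))

# Usage sketch:
#   e_ref = embed(generate_caption(original_image))                      # Eq. (3)
#   s = sentence_value(perturbed_image, generate_caption, embed, e_ref)  # Eq. (4)
```

The scalar \(s\) is the outcome that Kernel SHAP regresses against the sampled coalitions, so a single game suffices for the whole caption.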
### Exploiting Semantic Visual Priors
Partitioning the image into a grid of superpixels is a straightforward way to reduce the number of input features in the image. We argue that, although convenient, superpixels do not guarantee the preservation of semantic information depicting the visual content, as they shatter the image into equally sized patches regardless of the content represented. We address this issue by proposing a semantically guided approach, that selects the input features according to semantics-preserving visual concepts arising from the visual backbone of the V&L model. This not only allows for generating more meaningful explanations but explicitly focuses explanations of the model's generative choices on the output of the model's own visual backbone.
We generate input features leveraging the Deep Feature Factorization (DFF) method (Collins et al., 2018). DFF is an unsupervised method allowing concept discovery from the feature space of CNN-based models. We refer to such concepts as 'semantic priors', that is, the knowledge or assumptions learned by the visual backbone, for a given domain or task. We use them to craft input features that produce semantically informed visual explanations.
Formally, following Collins et al. (2018)'s notation, given the activation tensor for an image \(I\):
\[A\in\mathbb{R}^{h\times w\times c} \tag{5}\]
where \(h,w,c\) correspond respectively to the height and width, and the number of channels of the visual backbone's last activation layer, we perform a non-negative matrix factorization (NMF) of \(A\):
\[NMF(A,k)=\operatorname*{argmin}_{\hat{A}_{k}}\ \|A-\hat{A}_{k}\|_{F}^{2},\quad\text{subject to}\ \hat{A}_{k}=HW,\quad\forall i,j:\ H_{ij},W_{ij}\geq 0, \tag{6}\]
where \(A\) is reshaped into an \(n\times m\) matrix with \(n=h\cdot w\) spatial positions and \(m=c\) channels, and \(H\in\mathbb{R}^{n\times k}\) and \(W\in\mathbb{R}^{k\times m}\) enforce the dimensionality reduction to rank \(k\).
Each of the \(k\) columns \(H_{j}\) can be reshaped into a heatmap of dimensions \(h\times w\), which highlights the region that the factor \(W_{j}\) corresponds to. The heatmaps are then upsampled to match the original image size with bilinear interpolation and converted to binary masks, each of which corresponds to an input feature. In this way we obtain \(k\) input features, where \(k\) is the number of concepts extracted. A schematic example of input feature extraction performed by DFF is shown in Figure 4.
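A minimal sketch of this extraction step is given below, using scikit-learn's NMF and OpenCV's bilinear resizing; the binarization threshold of 0.5 is an assumption, as the exact thresholding value is not stated here.

```python
import cv2
import numpy as np
from sklearn.decomposition import NMF

def dff_feature_masks(activations, k, out_hw, threshold=0.5):
    """DFF-style input features (Eq. 5-6).
    activations: (h, w, c) tensor from the backbone's last activation layer.
    out_hw: (H, W) size of the original image. Returns k binary masks and heatmaps."""
    h, w, c = activations.shape
    A = np.clip(activations.reshape(h * w, c), 0, None)   # NMF needs non-negative input
    H = NMF(n_components=k, init="nndsvda", max_iter=400).fit_transform(A)  # (h*w, k)
    masks, heatmaps = [], []
    for j in range(k):
        hm = H[:, j].reshape(h, w).astype(np.float32)
        hm /= hm.max() + 1e-8                              # normalize to [0, 1]
        hm = cv2.resize(hm, (out_hw[1], out_hw[0]), interpolation=cv2.INTER_LINEAR)
        heatmaps.append(hm)
        masks.append((hm >= threshold).astype(np.uint8))   # binarize into a feature mask
    return masks, heatmaps
```

The heatmaps are returned alongside the binary masks because they are reused by the intensity-preserving explanations of Section 3.3.2.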
In our method, the regions identified via DFF are the features for which attributions are computed. The key intuition is that these features correspond to meaningful sub-parts of the input image according to the V&L model's visual backbone. They do not necessarily reflect humans' visual expectations of the image; rather they represent the visual priors learned by the vision model after training.
Figure 3: Example of the sentence-based explanation. 1) We compute the reference embedding (red) from the caption generated by the model when the input has no perturbation. For each perturbation applied, we compute the embedding (orange, blue) of the resulting caption and use the cosine distance between the reference and the current embedding, to measure the semantic variation of the caption.
To create a coalition we sum up multiple masks, then apply them to the original image which will contain only pixels belonging to input features in the selected coalition.
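A sketch of this masking step, assuming binary masks and that hidden pixels are replaced with a constant baseline value (the fill value is an assumption):

```python
import numpy as np

def apply_coalition(image, masks, coalition, fill_value=0):
    """Keep only the pixels belonging to the features in the coalition;
    all remaining pixels are replaced by a baseline value."""
    union = np.zeros(image.shape[:2], dtype=bool)
    for j in coalition:                      # indices of the "playing" features
        union |= masks[j].astype(bool)
    out = np.full_like(image, fill_value)
    out[union] = image[union]
    return out
```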
NMF can be seen as an unsupervised clustering algorithm, allowing control over the number of clusters or concepts to find. \(k\) can be considered a hyperparameter of the method, which we show can be kept small to achieve a good level of semantic detail and low compute cost.
#### 3.3.1 Non-partitioning features
DFF generates semantic masks reflecting the activations of the model's visual backbone. The whole process is unsupervised and produces masks that do not constitute partitions of the image, meaning that it is not guaranteed that the sum of all the extracted masks will match the total size of the image. In order to account for this issue, we create an additional _leftover_ mask covering the remaining area and include it in the SHAP cooperative game; this allows the game to take into account the whole visual information represented by the image.
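In code, the leftover mask is simply the complement of the union of the extracted masks; a minimal sketch:

```python
import numpy as np

def leftover_mask(masks):
    """Extra input feature covering every pixel not claimed by any DFF mask."""
    covered = np.zeros(masks[0].shape, dtype=bool)
    for m in masks:
        covered |= m.astype(bool)
    return (~covered).astype(np.uint8)
```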
As noted in Section 2.2, the computation of Shapley values is based on a feature independence assumption. Since our features may be non-partitioning, this assumption is relaxed in our approach. We explore the consequences of this in more detail in Section 4.4.2.
#### 3.3.2 Intensity-preserving explanations
SHAP-based methods relying on superpixels assume that each pixel in a patch contributes equally; thus, all the pixels in a patch are assigned the same Shapley value. In DFF, instead, each binary feature mask is derived from a heatmap of the same size. We therefore multiply the Shapley value of a feature by the heatmap corresponding to its binary mask, which scales the contribution of each pixel according to the intensity of the feature signal.
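A sketch of this rescaling, assuming the upsampled heatmaps are normalized to \([0,1]\) and that contributions are summed where heatmaps overlap (the aggregation rule is our assumption):

```python
import numpy as np

def intensity_scaled_attribution(shapley_values, heatmaps):
    """Pixel-level attribution map: each feature's Shapley value is spread over
    its DFF heatmap instead of being assigned uniformly to a binary mask."""
    attribution = np.zeros_like(heatmaps[0], dtype=float)
    for phi, hm in zip(shapley_values, heatmaps):
        attribution += phi * hm
    return attribution
```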
## 4 Experiments
The methodology described in the previous section raises two questions which we now address experimentally:
1. What are the pros and cons of our method based on visual semantic priors in comparison with standard feature selection methods used in V&L, based on superpixels?
2. What are the benefits and potential limits of our method for human users, in terms of relevant dimensions such as intensity, detail, efficiency, and flexibility?
Figure 4: Schematic example of input features extraction using DFF. Through thresholding, we convert the heatmaps into binary masks that we use to create semantically meaningful features.
### Data
We validate the method presented in the previous section with experiments using the HL image caption dataset. The HL dataset (Cafagna et al., 2023) contains 15k images extracted from COCO (Lin et al., 2014). The dataset pairs images with captions that describe the visual contents along three different high-level dimensions, namely _scenes, actions_ and _rationales_ for the actions. These are additionally paired with the original COCO captions, which provide a more low-level, object-centric description. The annotations were collected by asking annotators three questions related to each of the three high-level dimensions. The systematic alignment of the object-centric and abstract captions provides us with a suitable test bed to compare the efficacy of our method in delivering global explanations in both captioning and visual question-answering scenarios. An example pairing the three high-level captions and the original COCO caption from the HL Dataset is shown in Table 1.
### Model
For our experiments, we focus on one V&L model, since our goal is to evaluate the quality of explanations. Our choice is motivated by two considerations: first, a model should ideally have good performance in zero-shot settings; second, it should exhibit state-of-the-art performance on generative tasks. OFA (Wang et al., 2022) is a large pre-trained multimodal model trained using a task-agnostic and modality-agnostic framework. OFA is able to perform a diverse set of cross-modal and unimodal tasks, like image captioning, visual question answering, image generation, image classification, etc. OFA is trained on a relatively small amount of data (20M image-text pairs) with instruction-based learning and a simple sequence-to-sequence architecture. Nevertheless, on downstream tasks, it outperforms or is on par with larger models trained on a larger amount of data. OFA is effectively able to transfer to unseen tasks and domains in zero-shot settings, proving to be well grounded also in out-of-domain tasks.
This makes OFA an excellent candidate to test our explainability framework in a real-world scenario, namely a large pre-trained generative model with SOTA performance on downstream tasks in zero-shot conditions.
### DFF vs Superpixel
In this Section, we focus on the comparison between the global visual explanations produced using superpixel or DFF input features. We focus on the capability of the two methods to adapt to different
\begin{table}
\begin{tabular}{l|l}
**Axis** & **Caption** \\ \hline
scene & at a sport field \\
action & they are playing a sport \\
rationale & they are having fun \\ \hline
object-centric (COCO) & A woman has fallen on the ground in a field. \\ \end{tabular}
\end{table}
Table 1: Example of high-level captions. For each of the three axes collected (_scene, action, rationale_), one of the three available captions is shown, together with the object-centric caption from COCO.
semantic aspects of the explanation; we specifically address this in Section 4.3.1, leveraging the VQA task.
All the experiments are performed in zero-shot by using the _OFA-large_ model in its original implementation.1 In order to ensure a fair comparison, we extract a similar number of features for both methods, namely \(12\) for superpixel and \(11\) for DFF. This number allows us to execute the experiments in a reasonable amount of time. In fact, we recall that the number of features has an exponential impact on the number of model evaluations needed to generate the explanations. Reducing the number of features mitigates the efficiency issue, but does not solve it. An in-depth discussion of the efficiency issue is provided in Section 4.3.2.
Footnote 1: [https://github.com/OFA-Sys/OFA](https://github.com/OFA-Sys/OFA)
As an initial illustration, Figure 5 shows a direct comparison between the two kinds of input features for the caption "drinking", generated using sentence-based Kernel SHAP. Both methods assign a positive contribution to the region corresponding to the glass, with some important differences:
* **Detail**: The DFF features succeed in capturing the key visual semantics of the image, i.e. the glass, in a single input feature (with some noise), producing a more detailed explanation than superpixel, where the region corresponding to the glass is shared across different patches (i.e. different features).
* **Intensity**: DFF scales the contributions according to the magnitude of the feature signal (as described in Section 3.3.2), providing additional information regarding the importance of a specific area within the same input feature region.
#### 4.3.1 Semantic visual features improve the quality of the explanations
We compare DFF and superpixel explanations on the VQA task. We select images and questions for the three axes in the HL dataset, i.e. action, scenes, and rationales, and we generate visual explanations for the
Figure 5: Global visual explanation for the question "What is the subject doing?", and corresponding model's answer "drinking", generated using Kernel SHAP. The explanation using DFF input features (on the left) provides a detailed positive (blue) area. We use \(11\) DFF features and \(12\) superpixel features, respectively. The explanation generated by superpixel input features (on the right), although covering a similar region, i.e. the glass, does not provide the same level of detail.
answers. This allows us to compare how the two methods handle semantically different aspects highlighted in the visual content.
We expect to see that the positive contribution assignment (in blue) changes for the same image for different captions, corresponding to different kinds of questions. In response to different questions about location, rationale, or action, the model's output should depend on different regions of the image. For instance, we expect to observe a wider positive area highlighted in the picture for the _where_ question and a more specific, detailed area for the _what_ question. As shown in Figure 6, the DFF-based method (middle row) succeeds in highlighting in significant detail the semantic areas contributing to the output. On the other hand, superpixels provide coarser detail, as they are limited by the size of the patches. This suggests that the DFF-generated explanations could lead to a visible advantage in terms of comprehensiveness and completeness; we further test these hypotheses by running a human evaluation in Section 5.
#### 4.3.2 Semantics-guided explanations are efficient
In order for superpixel-based explanations to achieve a level of detail comparable to DFF, we need to significantly increase the number of patches. However, this causes an exponential surge in computing cost, which makes it unfeasible to run, especially if we are testing large models. This issue can be mitigated by performing Kernel SHAP sampling (as described in Section 3.1.1). The combination of the exponential growth of the sample space, and the limited sampling budget can easily lead to unreliable explanations. An example is shown in Figure 7 where we perform Kernel SHAP sampling with a budget of \(2048\) samples, which is the same budget used to compute the DFF explanation in Figure 5.
On the other hand, DFF does not suffer from this issue. In fact, there is no clear advantage in increasing the number of features, because the main semantic content is already embedded in a small number of features. In our experiments, we establish that a small number of DFF features is sufficient to achieve a good level of semantic detail. This small
Figure 6: Examples of explanations for the VQA task from the HL Dataset for the _scene_ and _action_ axes. In the top row are shown the questions (Q) and the generated answers (A). The middle and the bottom row, show visual explanation generated respectively with DFF and superpixel input features, with comparable compute cost.
number of features keeps the computational cost low, allowing us to compute full Kernel SHAP, or Kernel SHAP sampling with very high accuracy. We provide full details in the supplemental material.
### Semantic Features Analysis
The semantic features extracted by DFF are drastically different from superpixel features in many key aspects related to the visual content captured. Moreover, DFF is unsupervised and dynamically exploits the visual backbone's priors. In this Section, we focus on analyzing the benefits and limitations characterizing the semantic features generated by DFF. We discuss in detail key aspects like the kind of semantic content captured along with possible theoretical implications and how it can be generalized over different visual backbones.
Figure 8: Binary feature masks extracted using DFF with \(k=10\). The \(11^{th}\) feature is the _leftover_ mask. The original image is the same shown in Figure 5 and Figure 7.
Figure 7: Example of explanations generated with superpixel with an increasing number of features, namely \(16,64,256\) features (respectively 7a, 7b, 7c), obtained with Kernel SHAP sampling using a fixed sampling budget of \(2048\) samples.
#### 4.4.1 What kind of semantics do DFF features capture?
DFF features capture semantic concepts learned by the model's visual backbone. These do not necessarily follow human visual expectations. In Figure 8 we show an example: features 1, 2, and 8 can be associated with three main **semantic objects and entities** of the image, namely _face, glass_ and _shirt_. However, we observe **several geometrical patterns** in the remaining features, which highlight the edges and corners of the picture. This pattern is recurrent in the features extracted by DFF, independently of the visual content. We believe this is partially due to the capability of CNNs to capture spatial configuration (Zeiler and Fergus, 2014) and the effectiveness of DFF in factorizing together model activations with similar characteristics.
#### 4.4.2 Relaxing the feature independence assumption
As described in Section 2.2, SHAP in the cooperative game formulation assumes the _feature independence principle_, namely that each feature is independent of all the others. However, this assumption does not hold for image data, since each pixel is inherently dependent on the other pixels, especially those in its vicinity, in representing the visual content. Therefore, in order to work with visual data, this constraint needs to be relaxed. This kind of relaxation is typically applied in computer vision by graphical models like Conditional Random Fields (CRFs). CRFs relax the strong dependence assumption on the observations (the pixels of the image) by modeling the joint distribution of observations, usually intractable, as a conditional distribution (Li et al., 2022b).
Along the same lines, superpixel features relax this constraint: the image is partitioned into patches that, given the underlying semantics depicted in the visual content, are not truly independent of each other.
This issue is mitigated by the DFF features, as they tend to cover semantically related regions of the image, preserving the underlying visual semantics. On the other hand, as pointed out in Section 3.3.1, DFF features are not disjoint, meaning that to some extent, the contribution of overlapping regions is subject to contamination from other regions. In this section, we analyse the consequences of this in more detail. Our analysis follows two steps:
1. We measure the DFF feature overlap over a sample of 1000 images. We find that the amount of overlap among the feature masks corresponds to \(0.77\%\) of the pixels in the image, with a standard deviation
Figure 9: Example of overlap (highlighted in red) between two feature masks (Figure 9a) and comparison between visual explanations generated given the question ”What is the subject doing?” and the model’s answer ”drinking”, with regular DFF features (Figure 9b) and disjoint DFF features (Figure 9c). Although the masks overlap only to a small extent, the explanation is visibly affected.
of \(0.63\) and an average maximum peak of \(2.04\%\). This suggests that this phenomenon is present to a limited extent.
2. We compare visual explanations generated by disjoint and non-disjoint features. In order to generate disjoint features, we post-process the extracted feature masks by checking all possible pairs of feature masks and assigning any overlapping region to only one of the two compared features (both steps are sketched below). An example is shown in Figure 9a, where the overlapping regions (highlighted in red) between two feature masks are randomly assigned to one of the features (either blue or green).
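The sketch below illustrates both steps: measuring the share of pixels claimed by more than one mask, and enforcing disjointness. Here overlapping pixels are reassigned individually and at random, which is a simplification of the pairwise reassignment described above.

```python
import numpy as np

def overlap_percentage(masks):
    """Step 1: percentage of image pixels claimed by more than one feature mask."""
    counts = np.stack([m.astype(np.uint8) for m in masks]).sum(axis=0)
    return 100.0 * float((counts > 1).mean())

def make_disjoint(masks, seed=0):
    """Step 2: assign every overlapping pixel to exactly one of its claiming masks."""
    rng = np.random.default_rng(seed)
    masks = [m.astype(bool).copy() for m in masks]
    counts = np.stack(masks).sum(axis=0)
    for y, x in zip(*np.where(counts > 1)):
        owners = [j for j, m in enumerate(masks) if m[y, x]]
        keep = rng.choice(owners)
        for j in owners:
            masks[j][y, x] = (j == keep)
    return [m.astype(np.uint8) for m in masks]
```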
Enforcing the features' disjointness leads to results similar to those of their non-disjoint counterparts. However, in some cases, the re-allocation of the overlapping region impacts the Shapley value of the feature, causing unpredictable results. This suggests that manually changing the feature masks can disruptively affect the visual semantics captured by the feature, leading to misleading visual explanations. A cherry-picked example is shown in Figure 9, where using the disjoint features (Figure 9c) causes a meaningful change in the visual explanation.
In conclusion, we observed that **the phenomenon of the non-disjoint features is present to a small extent** and overall **it does not invalidate the visual explanations** as it can be considered a relaxation of the feature independence assumption. Moreover, as empirically observed, relaxing this assumption is unlikely to invalidate the method, as the explanation is consistent with the ones generated by superpixel features. On the other hand, we observed that **forcing the feature masks' disjointness harms their capability to preserve the visual semantics, leading to misleading visual explanations**.
#### 4.4.3 Does the feature size matter?
Differently from superpixel patches, DFF semantic features can have different sizes, depending on the semantic role of the highlighted region. We ask to what extent the size of a visual feature could affect the final contribution in the SHAP cooperative game. In order to test for that, we normalize the Shapley values according to the size of the corresponding feature masks and compare the normalized values with the
Figure 10: RBO scores computed between normalized and unnormalized Shapley values, for positive (blue), negative (orange), and all (green) features.
unnormalized ones. To normalize a Shapley value we compute:
\[r_{i}=\frac{\sum_{j=1}^{|M_{i}|}m_{j}}{|M_{i}|} \tag{7a}\] \[\hat{a}_{i}=\frac{a_{i}}{r_{i}} \tag{7b}\]
where \(m_{j}\) is the \(j\)-th element of the binary mask \(M_{i}\), \(|M_{i}|\) is the total number of entries in mask \(i\), and \(r_{i}\) indicates the proportion of the image covered by the mask. \(r_{i}\) is then used to discount the magnitude of the Shapley value \(a_{i}\), obtaining the normalized value \(\hat{a}_{i}\).
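In code, the normalization of Equation (7) amounts to dividing each Shapley value by the fraction of image pixels covered by its binary mask; a minimal sketch:

```python
import numpy as np

def normalize_by_mask_size(shapley_values, masks):
    """Eq. (7): discount each Shapley value a_i by the proportion r_i of the
    image covered by its feature mask."""
    normalized = []
    for a_i, mask in zip(shapley_values, masks):
        r_i = float(mask.sum()) / mask.size    # Eq. (7a)
        normalized.append(a_i / r_i)           # Eq. (7b)
    return np.array(normalized)
```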
In the normalization process, the feature contribution's magnitude is obviously re-scaled. However, we are interested in measuring to what extent the normalization has affected the features' importance in relative terms. Therefore, we use the Rank Biased Overlap (RBO) (Webber et al., 2010), a similarity metric for ranked lists, to measure the difference in the feature attribution ranking after normalization for a sample of 100 DFF-based explanations. A significant change in feature ranking would entail a positive correlation between size and feature importance.
In Figure 10 we show the results of this experiment: the RBO is overall at the ceiling, with a minimum value of \(0.9\) (in a range where \(1\) is identical ranking and \(0\) is totally different). The positive contributions, which are the most informative to understand the explanations, are the most stable in terms of ranking. This suggests that **the size of the features extracted using DFF, does not significantly affect the final contribution of the semantic feature and does not harm the visual explanations.**
### Does DFF adapt to other visual backbones?
DFF is designed to perform concept discovery in CNN-based visual backbones. However, the vision encoders of current pre-trained V&L models often rely on different architectures, such as Vision Transformers (ViT) (Dosovitskiy et al., 2020), FasterRCNNs (FRCNN) (Ren et al., 2015), or their variants. In this section, we show how DFF can be adapted to these architectures. Moreover, we provide an alternative solution to perform model-agnostic semantic feature extraction, which is applicable to any architecture.
#### 4.5.1 Vision Transformers
In order to apply DFF to ViT encodings, we need to take into account two substantial differences with respect to CNNs: (1) firstly, ViT splits the image into a grid of patches and generates an embedding vector for each patch. To obtain an activation matrix, the embedding vectors are stacked together and a special vector is added in position \(0\) to indicate the beginning of the sequence. Differently from CNNs, the
Figure 11: Schematic example of how to generate semantic features with DFF from a ViT visual backbone. The index of the highlighted band in the heatmap is used to select the patches to create the feature.
spatial information related to the patch is lost in the encoding process and added later on, by concatenating a positional embedding to the embedding vectors. (2) Secondly, ViT activations contain both positive and negative values, differently from CNNs, which generate only non-negative activations.
As described in Section 3.3, DFF requires a non-negative activation matrix, as it is based on NMF; therefore, in order to address (2), we normalize the ViT features to values between \(0\) and \(1\).
As a consequence of (1) above, when we apply DFF to the normalized ViT activations, we obtain binary masks with vertical bands, where each band corresponds to a patch of the image. We use the index of the highlighted vectors in the binary mask to select the patches to be grouped together in the semantic features. In this way, **we obtain feature masks by grouping together semantically related patches**. A schematic example is depicted in Figure 11.
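A sketch of the adaptation; it assumes the special position-0 vector has already been removed, that the patches form a regular grid, and a 0.5 binarization threshold (the last two are assumptions):

```python
import numpy as np
from sklearn.decomposition import NMF

def vit_dff_feature_masks(patch_embeddings, k, grid, patch_px, threshold=0.5):
    """DFF on a ViT backbone.
    patch_embeddings: (n_patches, d) activations without the position-0 token.
    grid: (rows, cols) of the patch grid; patch_px: patch size in pixels."""
    A = patch_embeddings
    A = (A - A.min()) / (A.max() - A.min() + 1e-8)          # rescale to [0, 1] for NMF
    H = NMF(n_components=k, init="nndsvda", max_iter=400).fit_transform(A)  # (n_patches, k)
    rows, cols = grid
    masks = []
    for j in range(k):
        band = H[:, j] / (H[:, j].max() + 1e-8)
        selected = (band >= threshold).reshape(rows, cols)   # patches grouped in concept j
        # blow each selected patch up to its pixel footprint
        masks.append(np.kron(selected.astype(np.uint8),
                             np.ones((patch_px, patch_px), dtype=np.uint8)))
    return masks
```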
#### 4.5.2 FasterRCNNs
FRCNNs are often used as feature extractors in V&L models (Zhang et al., 2021; Anderson et al., 2018; Tan and Bansal, 2019). They extract feature vectors representing bounding boxes of salient objects identified in the image. Similarly to ViT, the FRCNN's activation matrix is a stack of feature vectors; therefore, we can extract semantic features following the method described in Section 4.5.1. However, FRCNNs tend to extract highly overlapping bounding boxes, which results in massively redundant semantic features. This prevents the features from effectively selecting specific semantic content, as they often end up sharing most of the selected area. A schematic example is shown in Figure 12, where, although DFF manages to cluster semantically related boxes (like _collar, man, neck, sleeve_), it ends up selecting a large portion of the image in a single input feature.
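The same recipe can be transposed to region features by painting whole bounding boxes into each concept's mask; the sketch below (again with an assumed 0.5 threshold) also makes the problem described above visible, since heavily overlapping boxes produce largely redundant masks.

```python
import numpy as np
from sklearn.decomposition import NMF

def frcnn_dff_feature_masks(box_features, boxes, k, image_hw, threshold=0.5):
    """DFF on FasterRCNN region features.
    box_features: (n_boxes, d); boxes: (n_boxes, 4) as (x1, y1, x2, y2) in pixels."""
    A = box_features
    A = (A - A.min()) / (A.max() - A.min() + 1e-8)           # rescale to [0, 1] for NMF
    H = NMF(n_components=k, init="nndsvda", max_iter=400).fit_transform(A)  # (n_boxes, k)
    masks = []
    for j in range(k):
        scores = H[:, j] / (H[:, j].max() + 1e-8)
        mask = np.zeros(image_hw, dtype=np.uint8)
        for (x1, y1, x2, y2), s in zip(boxes.astype(int), scores):
            if s >= threshold:                    # box assigned to concept j
                mask[y1:y2, x1:x2] = 1            # overlapping boxes quickly fill the mask
        masks.append(mask)
    return masks
```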
An excessive amount of overlap among the features affects their capability to identify specific semantic concepts; thus, we conclude that **DFF can be adapted to FRCNN's features but does not produce the desired results of capturing enough fine-grained semantic concepts to support informative explanations**. In the following subsection, we describe an alternative route towards obtaining semantically meaningful visual regions that can act as features for explaining V&L models, in cases where the visual backbone does not permit the application of bottom-up, unsupervised methods such as DFF.
#### 4.5.3 Beyond DFF: a model-agnostic semantic feature extraction
As shown in the previous sections:
Figure 12: Schematic example of how to generate semantic features with DFF from a FRCNN visual backbone. The index of the highlighted band in the binary mask is used to select the bounding boxes corresponding to objects that compose the input features. However, the bounding boxes highly overlap with each other and cover the majority of the pixels in the image.
* the full potential of DFF is evident with CNN models;
* it can be adapted to extract features from ViT models, though they are less detailed due to the initial discretization of the image into patches operated by the model;
* it does not produce satisfactory results on FRCNN activations, because of the redundancy of the bounding boxes extracted by the model.
In order to address limitations coming from the visual backbone's architecture (e.g. in the case of FRCNNs), we propose to use STEGO (Hamilton et al., 2022),2 a state-of-the-art segmentation model, to extract semantic feature masks. It is unsupervised, meaning that it does not require ground truth labels. As a consequence, the number of features extracted cannot be controlled, though in our experiments we observe that it extracts a small number of semantic masks (usually fewer than \(10\)). This keeps the cost of the Shapley value computation low but could limit the number of semantic concepts captured, differently from DFF, where the number of features is a controllable hyperparameter.
Footnote 2: At the time of this work, STEGO was the state-of-the-art model for semantic segmentation. However, the approach proposed here is agnostic as to the segmentation model used. For example, Segment Anything (Kirillov et al., 2023), a more recent model proposed after the present experiments were completed, could yield better results.
The biggest advantage of using an off-the-shelf segmentation model is that **it supports the generation of visual explanations, independently of the visual backbone's architecture**. On the other hand, we have the downside of **no longer relying on the visual backbone's priors, embedded in the captioning model**.
In Figure 13 we directly compare the visual explanations generated by all methods: DFF on CNN and ViT (Figures 13a and 13b), STEGO (Figure 13c), and superpixel (Figure 13d). All the explanations are generated with similar compute costs, apart from STEGO, which uses a smaller number of features (\(6\)). As expected, the explanations generated with STEGO's semantic features are more fine-grained than the others, as the model is trained on the semantic segmentation task. However, they come from an external model and do not necessarily reflect the visual priors of the V&L model itself. Nevertheless, this provides a flexible solution to adapt the explanation of V&L models with visual priors to any visual backbone. Furthermore, any segmentation model can in principle be used.
Figure 13: Direct comparison of explanations generated for the caption "riding a dirt bike" from different visual backbones and methods. The first two, starting from the left, are generated from features extracted using DFF and the activations of different visual backbones, namely a CNN (Figure 13a) and a ViT (Figure 13b). Figure 13c uses semantic masks extracted by a segmentation model (STEGO), and Figure 13d uses superpixel features. All the explanations have comparable compute costs, apart from Figure 13c, where only \(6\) features are used.
## 5 Human Evaluation
The experiments in the previous section made direct comparisons between our method and superpixel-based explanations for V&L generative models. In this section, we report on an evaluation with human participants. Evaluating XAI techniques is a notoriously challenging task (e.g. Nauta et al., 2023; Adebayo et al., 2022). Here, we take inspiration from the work of Hoffman et al. (2018) and compare the judgments of participants on three qualities, namely _detail, satisfaction_ and _completeness_ of explanations generated using the two methods under consideration.
### Participants
For the purposes of this study, it is important to source judgments from participants who are knowledgeable about machine learning and explainable AI. Relying on crowd-sourcing is a risky strategy, as there is no guarantee that participants will be in a position to evaluate _explanations_ rather than, say, the quality of model outputs. We, therefore, recruited 14 researchers (9 male, 5 female; 9 aged 18-30, 4 aged 31-40, 1 aged 41-50) from our own network. All are researchers in AI-related fields and are familiar with XAI methods. Two of these are senior researchers who obtained their PhD more than 5 years ago; all the others were doctoral students at the time the experiment was run. Six participants are native speakers of English; the remainder are fluent or near-fluent speakers.
### Design and materials
We randomly selected 40 images from the HL dataset, for which we generated the corresponding answers to questions. In order to create a more challenging scenario, we framed the task as visual question answering: for each image, we selected one of the available questions and generated the corresponding caption. Moreover, for each image-caption pair, we generated visual explanations using both DFF and superpixel features.
Each participant was shown the question, the generated answer, the original image, and the visual explanation which can be either generated by DFF or by superpixel. In order to counterbalance the experimental materials, we divided images randomly into two groups, and further assigned participants randomly to two groups. We rotated items through a 2 (participant group) \(\times\) 2 (image group) Latin square,
Figure 14: Distribution of the Likert scores obtained in the human evaluation for _detail, completeness_ and _satisfaction_, for both DFF (in orange) and superpixel (in blue). The lower the score, the higher the rating.
such that participants in any experimental group evaluated all images, but each image was always seen once and evaluated in only one condition (DFF or superpixel).3
Footnote 3: In the end, the experiment was completed by 8 participants in one group, and 6 in the other.
The participants were asked to judge explanations based on their agreement to each of the following statements:
* _Detail_: the areas highlighted in the explanation are detailed enough to understand how the model generated the caption;
* _Completeness_: the highlighted areas cover all the regions relevant to the caption;
* _Satisfaction_: based on the areas highlighted in the explanation I feel that I understand how the system explained makes its decisions.
Responses to each dimension were given on a Likert scale from \(1\) to \(5\), where \(1\) corresponds to total agreement and \(5\) to total disagreement. The full evaluation form is reproduced in the appendix.
### Results
As shown in Figure 14, DFF-based explanations (in orange) are considered on par with superpixel-based explanations (in blue) in terms of completeness, but at the same time, they are considered more detailed and more satisfactory by human judges. Indeed, the score distributions are more skewed towards lower (better) scores for the detail and satisfaction criteria. More detailed statistics are reported in Table 2.
Although the superpixel and DFF explanations differ in the judged level of detail, they yield attributions which are similarly located in the input image. This is in part due to the fact that in both cases we are using the same feature attribution method, namely Kernel SHAP. However, in some cases, we observe a certain degree of divergence between the visual explanations. In Figure 15 we show an example where we generate explanations for the question "Where is the picture taken?" and the generated caption "on a dirty road". The DFF-based explanation (on the right) broadly assigns a positive attribution to the background of the picture, depicting the road, and negative contributions to the subjects, namely the person and the animals. However, the superpixel-based explanation (on the left) assigns attributions to patches that are, at least partially, in contrast with the DFF-based explanation. This is probably due to the particular configuration of features selected by the two methods, which in some instances might select insufficiently detailed regions, preventing the method from highlighting the semantically relevant areas of the image.
In order to quantify this phenomenon we manually inspected the 40 samples used in the human evaluation. We found that around \(10\%\) of the explanations diverged to some extent between the two feature selection
\begin{table}
\begin{tabular}{|c|l|l|l|l|} \hline
**Type** & **Metric** & **Mean** & **Std** & **Median** \\ \hline
\multirow{3}{*}{SP} & completeness & 2.48 & 1.38 & 2.0 \\ \cline{2-5}
 & detail & 2.46 & 1.42 & 2.0 \\ \cline{2-5}
 & satisfaction & 2.51 & 1.51 & 2.0 \\ \hline
\multirow{3}{*}{DFF} & completeness & 2.50 & 1.45 & 2.0 \\ \cline{2-5}
 & detail & 2.18 & 1.41 & 2.0 \\ \cline{2-5}
 & satisfaction & 2.32 & 1.48 & 2.0 \\ \hline \end{tabular}
\end{table}
Table 2: Results of the human evaluation, for superpixel-based (SP) and DFF-based (DFF) visual explanation. We report the mean, the standard deviation (std), and the median of the Likert scores. The lower the score the more positive the rating.
methods. We analyzed this sub-sample of divergent explanations separately. We find that the average scores for this subset are overall slightly worse (higher) than the full results reported in Table 2. Nevertheless, the trends observed in relation to Figure 14 for the three evaluation criteria still hold. This suggests that this phenomenon does not significantly affect the participants' judgments, except for a slight drop in the perceived quality of the explanations.
Moreover, in their qualitative feedback, some participants declared that their assessment was occasionally affected by the correctness of the caption, which in some cases they considered wrong or partially inaccurate. We quantified the inaccuracy of the captions by computing their lexical and semantic similarity with respect to the reference captions, using BLEU (Papineni et al., 2002) and Sentence-BERT (Reimers and Gurevych, 2019), respectively. We computed the Pearson correlation (Cohen et al., 2009) between the Likert scores and the lexical and semantic similarities previously computed. We find that the Likert scores show a slight but not significant positive correlation with both lexical and semantic similarity (\(\rho=-0.023\) for lexical similarity and \(\rho=-0.004\) for semantic similarity)4. This suggests that, although participants did note the quality of the captions, this did not significantly affect their judgments of the explanations.
Footnote 4: Note that since in the Likert score, 1 is the maximum agreement and 5 the minimum, a positive correlation corresponds to a negative \(\rho\).
In conclusion, we found that assessing visual explanations is a hard task even for specialists in the field. We observed a relatively low inter-annotator agreement for both groups in the Likert judgments (Krippendorff's \(\alpha=0.23\); Krippendorff, 2004). However, besides possible confounding factors, like inaccuracies in the captions and divergent explanations, the DFF-based explanations are consistently perceived as higher-quality explanations than superpixel-based ones.
## 6 Conclusions
In this work, we proposed an explainability framework to bridge the gap between multimodality and explainability in image-to-text generative tasks exploiting textual and visual semantics.
Our method is developed around SHAP, as it provides a model-agnostic solution with solid theory and desirable properties. We design our approach to address certain crucial limitations of current approaches.
Figure 15: Comparison of divergent explanations for the question: ”Where is the picture taken?” and generated caption: ”on a dirty road”, obtained from superpixel features (on the left) and DFF features (on the right).
First, SHAP-based methods are rarely employed to explain large models as they are extremely expensive to compute. Our solution is efficient and allows an accurate approximation of the Shapley values.
Second, we overcome the limitations of current token-by-token explanations by proposing sentence-based explanations exploiting semantic textual variations which are also more efficient to compute.
Finally, based on the rationale that a model's generative outputs should be explained with reference to the knowledge encoded by the visual backbone, we propose an unsupervised method to extract semantically informative visual features. Using these features rather than superpixels means that we obtain explanations which are cheaper (insofar as more can be gleaned from fewer features) but also more intuitive, especially when compared to superpixel-based approaches. We show that our method can be employed with different visual-backbones architectures like CNN and Vision Transformers. In the case of visual backbones for which desirable results cannot be produced, such as FasterRCNN-based models, we propose an alternative solution based on a semantic segmentation model, to generate semantic input features.
Through a human evaluation, we show that using semantic priors improves the perceived quality of the explanation, resulting in more detailed and satisfactory explanations than superpixels though matching the same level of completeness.
Moreover, our framework is totally modular and it can co-exist with a wide range of possible configurations for all of its components. It allows the computation of sentence-based or token-by-token explanations. The core method, Kernel SHAP, can be replaced with another SHAP-based method, and the visual features can be extracted with one of the proposed methods or with any other method of choice.
## Funding
Contribution from the ITN project NL4XAI (_Natural Language for Explainable AI_). This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 860621. This document reflects the views of the author(s) and does not necessarily reflect the views or policy of the European Commission. The REA cannot be held responsible for any use that may be made of the information this document contains.
|
2306.04334 | Echoes from Alexandria: A Large Resource for Multilingual Book
Summarization | In recent years, research in text summarization has mainly focused on the
news domain, where texts are typically short and have strong layout features.
The task of full-book summarization presents additional challenges which are
hard to tackle with current resources, due to their limited size and
availability in English only. To overcome these limitations, we present "Echoes
from Alexandria", or in shortened form, "Echoes", a large resource for
multilingual book summarization. Echoes features three novel datasets: i)
Echo-Wiki, for multilingual book summarization, ii) Echo-XSum, for
extremely-compressive multilingual book summarization, and iii) Echo-FairySum,
for extractive book summarization. To the best of our knowledge, Echoes, with
its thousands of books and summaries, is the largest resource, and the first to
be multilingual, featuring 5 languages and 25 language pairs. In addition to
Echoes, we also introduce a new extractive-then-abstractive baseline, and,
supported by our experimental results and manual analysis of the summaries
generated, we argue that this baseline is more suitable for book summarization
than purely-abstractive approaches. We release our resource and software at
https://github.com/Babelscape/echoes-from-alexandria in the hope of fostering
innovative research in multilingual book summarization. | Alessandro Scirè, Simone Conia, Simone Ciciliano, Roberto Navigli | 2023-06-07T11:01:39Z | http://arxiv.org/abs/2306.04334v1 | # Echoes from Alexandria: A Large Resource for Multilingual Book Summarization
###### Abstract
In recent years, research in text summarization has mainly focused on the news domain, where texts are typically short and have strong layout features. The task of full-book summarization presents additional challenges which are hard to tackle with current resources, due to their limited size and availability in English only. To overcome these limitations, we present "Echoes from Alexandria", or in shortened form, "Echoes", a large resource for multilingual book summarization. Echoes features three novel datasets: i) Echo-Wiki, for multilingual book summarization, ii) Echo-XSum, for extremely-compressive multilingual book summarization, and iii) Echo-FairySum, for extractive book summarization. To the best of our knowledge, Echoes - with its thousands of books and summaries - is the largest resource, and the first to be multilingual, featuring 5 languages and 25 language pairs. In addition to Echoes, we also introduce a new extractive-then-abstractive baseline, and, supported by our experimental results and manual analysis of the summaries generated, we argue that this baseline is more suitable for book summarization than purely-abstractive approaches. We release our resource and software at [https://github.com/Babelscape/echoes-from-alexandria](https://github.com/Babelscape/echoes-from-alexandria) in the hope of fostering innovative research in multilingual book summarization.
## 1 Introduction
Recent research in Automatic Text Summarization - the task of shortening a text while preserving its meaning - has mainly focused on news stories. News texts are usually short documents; for example, 99.3% and 98.6% of the articles in XSum Narayan et al. (2018) and CNN/DailyMail Nallapati et al. (2016), respectively, are shorter than 2048 tokens. Additionally, news stories are characterized by strong layout features, such as the "lead bias", in which the first sentences usually contain the most relevant information for a summary. Accordingly, the Lead-3 baseline, which uses the first three sentences of a news item as its summary, performs competitively on news summarization benchmarks Gehrmann et al. (2018); Zhu et al. (2019). Although recent approaches have achieved high performance, it is still unclear how they behave on longer documents and whether they can generalize across domains and genres. For this reason, the research community has been shifting toward more challenging settings, which include interviews Zhu et al. (2021) and scientific articles Gupta et al. (2021); Cohan et al. (2018).
One setting that has been attracting growing attention is full-book summarization Kryscinski et al. (2021), i.e., the task of producing the plot of a book from its full text. Summarizing a book is hard not only because of its average text length - currently not processable in a single forward pass even by architectures for long-form text processing Beltagy et al. (2020); Guo et al. (2022) - but also due to other critical aspects, such as the presence of dialogues, rich discourse structures, parallel and non-linear lines of plot, and long-distance dependencies between entities, among others. Therefore, we deem book summarization a complex testbed to challenge current approaches and investigate their capabilities and limitations.
Although the first small-scale datasets for the task were introduced several years ago Mihalcea and Ceylan (2007), the area has recently regained traction thanks to larger-scale resources, such as BookSum Kryscinski et al. (2021) and NarrativeQA Kocisky et al. (2017). However, despite this recent progress, current resources for book summarization are still, i) limited in size, making them difficult to use for proper training and evaluation, and ii) monolingual (usually English-only).
To overcome these issues, we introduce "Echoes from Alexandria" (Echoes), the largest resource to date for book summarization and the first one providing books and summaries in multiple languages. We use Echoes to investigate how current summarization approaches perform on a large-scale multilingual summarization dataset, concluding that current purely-abstractive approaches still struggle in our setting. We additionally devise a new baseline, showing that the extractive-then-abstractive paradigm represents a promising direction for future research.
The main contributions of our work are the following:
* We introduce Echoes, the first multilingual resource for book summarization, with thousands of texts and plots in 5 languages, for a total of 25 language pairs. Echoes is also the largest resource among current English datasets for full-book summarization.
* We release the three datasets of Echoes: i) Echo-Wiki, for multilingual abstractive summarization, ii) Echo-XSum, for extremely-compressive multilingual book summarization, and iii) Echo-FairySum, an English dataset for evaluating extractive book summarization.
* We leverage BookSum and Echoes to evaluate state-of-the-art systems, both in zero-shot and fine-tuning settings, bringing to light their inadequate generalization capabilities in book summarization.
* Our experiments demonstrate that an _extractive-then-abstractive_ baseline outperforms the purely-abstractive counterpart on our datasets while achieving state-of-the-art results on BookSum.
* We provide a comprehensive manual evaluation of the automatically generated summaries and release the dataset with our human judgments.
We hope our work will foster research in multilingual long document understanding and summarization. We release Echoes and our software for research purposes at [https://github.com/Babelscape/echoes-from-alexandria](https://github.com/Babelscape/echoes-from-alexandria).
## 2 Related Work
Resources for summarization. Research efforts to create summarization resources have steadily increased in number over recent years. For the news domain, XSum Narayan et al. (2018) and CNN/DailyMail Nallapati et al. (2016) are the _de facto_ standard datasets for training and evaluating summarization systems. XSum comprises 226k news articles accompanied by a one-sentence abstractive summary. In CNN/DailyMail, the authors retrieved 93k articles from CNN1 and 220k articles from DailyMail2 newspapers. Both publishers supplement their articles with a list of bullet points containing the main information of the news text.
Footnote 1: [https://www.edition.cnn.com/](https://www.edition.cnn.com/)
Footnote 2: [https://www.dailymail.co.uk/](https://www.dailymail.co.uk/)
More recently, summarization resources have been shifting towards more challenging scenarios, i.e., where the documents of interest are longer and belong to different domains. Notably, Cohan et al. (2018) released two large-scale datasets of long and structured scientific papers obtained from arXiv3 and PubMed4. In these datasets, paper abstracts are used as ground truth summaries. Another relevant example is MediaSum Zhu et al. (2021), a collection of interview transcriptions from National Public Radio (NPR)5 and CNN, where overview and topic descriptions are employed as summaries.
Footnote 3: [https://arxiv.org/](https://arxiv.org/)
Footnote 4: [https://pubmed.ncbi.nlm.nih.gov/](https://pubmed.ncbi.nlm.nih.gov/)
Footnote 5: [https://www.npr.org/](https://www.npr.org/)
In long-form text summarization research, a task that is attracting growing attention is book summarization. Although this task was originally introduced several years ago by Mihalcea and Ceylan (2007), who released the first small-scale evaluation resource, book summarization regained traction thanks to a few notable endeavors. The most important example is BookSum Kryscinski et al. (2021), which provides a collection of resources for book summarization at three levels of granularity: paragraph, chapter, and full book. Book texts are collected from Project Gutenberg, while summaries are obtained from the Web Archive.6 BookSum features 222 unique book titles with a total of 6,987 book chapters and 142,753 paragraphs. Relatedly, NarrativeQA Kocisky et al. (2017) is a collection of 1572 stories retrieved from Project Gutenberg (783 books and 789 movie scripts) associated with summaries from Wikipedia. The annotators were required to generate questions and answers based
on the summaries. Even though NarrativeQA is primarily intended for Question Answering, it can also be used for book summarization. Due to their limited size, however, BookSum (in the full-book setting) and NarrativeQA are more useful for evaluating models on the task than for training purposes. It is also worth noting that these resources are monolingual, i.e., English-only, limiting their usefulness for researchers seeking to evaluate multilingual summarization models. Despite the great work carried out so far, we argue that there is still ample room to improve book summarization resources.
Approaches to book summarization. Kryscinski et al. (2021) conducted experiments on full-book summarization using a generate&rank strategy. This approach involves training a system to generate paragraph-level summaries, which are then sorted by perplexity and concatenated to form a full-book summary. More recently, Wu et al. (2021) proposed an approach where passages are recursively summarized and concatenated to form a full summary. However, generated summaries are affected by the errors accumulated from previous stages Wu et al. (2021). Recursively generating a summary is a paradigm that has also been used by other works for long-document summarization Zhang et al. (2021); Gidiotis and Tsoumakas (2020). Another family of approaches is that of _extractive-then-abstractive_ approaches. This family of approaches first extracts key sentences from the input document and then uses such sentences as input to an abstractive model, which is tasked with generating a summary that captures the main ideas and themes of the source. While it was successfully employed in previous works for short Li et al. (2021) and long-form summarization Chen and Bansal (2018), this paradigm has never been explored for summarizing books. In this paper, we aim to fill this gap by presenting a new, simple extractive-then-abstractive model and showing its effectiveness for book summarization.
## 3 Echoes
Echoes is the first collection of resources for book summarization in 5 languages: English, French, German, Italian, and Spanish. With Echoes, we introduce the following three novel datasets:
* **Echo-Wiki**, in which we pair book texts with plots retrieved from a hand-curated list of Wikipedia page sections.
* **Echo-XSum**, in which we pair book texts with extremely-compressive summaries, manually created starting from the lead section of Wikipedia pages.
* **Echo-FairySum**, an evaluation dataset for extractive summarization of short stories and fairy tales, composed of 197 English manually-annotated extractive summaries.
We provide an overview of the main differences between Echoes and existing resources in Table 1.
### Text collection
We collect the book texts that comprise Echoes from two main sources: Project Gutenberg and Wikisource. Project Gutenberg is a digital library that provides free access to public-domain books and features over 60k texts. We collect all the available books from Project Gutenberg by following their robot-access policies.7 While often considered one of the most reliable sources of copyright-free books, Project Gutenberg provides only very limited coverage of non-English books and non-English translations of English books. This is one of the reasons why we also rely on Wikisource. Part of the Wikimedia Foundation, Wikisource contains a huge number of texts from a wide range of domains, e.g., books, and legal and historical documents, in various languages. Therefore, for Echoes, we rely on Wikisource in English, French, German, Spanish, and Italian to retrieve other book texts and expand the coverage of books already available from Project Gutenberg.8 We call this set of full-text books \(B\). We note that Wikisource can also be used to expand Echoes to other languages. Given the limited amount of work in multilingual summarization, we focus on the five above high-resource languages. We defer the expansion of Echoes to future work.
Footnote 7: [https://www.gutenberg.org/help/mirroring.html](https://www.gutenberg.org/help/mirroring.html)
Footnote 8: Wikisource dumps are freely available to download at [https://dumps.wikimedia.org/](https://dumps.wikimedia.org/)<l>wikisource/ where <l> \(\in\) { EN, FR, DE, ES, IT}. Last accessed: July 1, 2022.
While Project Gutenberg has already been used as a source of books in previous resources, such as BookSum and NarrativeQA, the use of Wikisource is what enables Echoes to become the largest resource for book summarization in English and the first resource for multilingual book summarization.
### Pairing books with Wikipedia summaries
Book summaries from Wikipedia follow a standard set of guidelines9 and are often of remarkable quality, as they are continuously refined over time by the Wikipedia community. Therefore, once we have collected our set of full-book texts (see Section 3.1), we iterate over the Wikipedia dumps10 in English, French, German, Italian, and Spanish. Given our set \(B\) of full-book texts, and \(W\), the set of Wikipedia pages, our objective is to uniquely associate a book \(b\in B\) to a page \(w\in W\), such that \(w\) is the Wikipedia page of book \(b\). We obtain a set of potential matches by finding Wikipedia pages whose contents contain a hyperlink to a book in \(B\). To improve the accuracy of our mapping, we first apply a string distance metric11 to compare the titles of the books and their associated Wikipedia pages. We then check if the lead section of the Wikipedia page in question mentions the surname of the author of the associated book. This additional step helps us further refine and ensure the validity of our associations.
Footnote 9: [https://en.wikipedia.org/wiki/Wikipedia](https://en.wikipedia.org/wiki/Wikipedia): How_to_write_a_plot_summary
Footnote 10: Wikipedia dumps are freely available to download at [https://dumps.wikimedia.org/](https://dumps.wikimedia.org/)<b>wiki/ where <b> \(\in\) { EN, FR, DE, ES, IT}. Last accessed: July 1, 2022.
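A sketch of the refinement step described above; the specific string-distance metric and the similarity threshold are assumptions, since they are not spelled out at this point in the text.

```python
from difflib import SequenceMatcher

def is_valid_match(book_title, author_surname, page_title, page_lead,
                   min_title_similarity=0.8):
    """Keep a candidate (book, Wikipedia page) pair only if the titles are similar
    and the lead section mentions the author's surname."""
    title_sim = SequenceMatcher(None, book_title.lower(), page_title.lower()).ratio()
    return title_sim >= min_title_similarity and author_surname.lower() in page_lead.lower()
```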
After our matching process, we manually inspect the cases in which books are associated with multiple Wikipedia pages. We discover that the pages in excess refer to adaptations of the book in other mediums, such as movies and theatrical plays. To resolve this ambiguity, we utilize the mapping between Wikipedia pages and Wikidata nodes to obtain metadata about the medium, e.g., _book, movie, play_, and retain only the Wikipedia page that corresponds to the book.
At this point, given the Wikipedia page content, our goal is to extract only the book summary and discard other information, such as the biography of the author, historical background, prizes and accolades, and critical reception, among others. To achieve this, we employ native speakers to manually identify a list of section names that, in the different languages, only contain plot information, aiming for high precision rather than coverage. We use the content of these identified sections as summaries and provide our list of section names in Appendix A for reference. We name the resulting set of (Wikipedia summary, full-text book) pairs **Echo-Wiki**.
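Conceptually, the plot extraction then reduces to filtering page sections against a per-language allow-list; the section names below are illustrative placeholders only, as the actual hand-curated list is the one reported in Appendix A.

```python
# Illustrative allow-list; the actual per-language list is given in Appendix A.
PLOT_SECTIONS = {
    "en": {"plot", "plot summary", "synopsis"},
    "fr": {"résumé"},
    "de": {"handlung"},
    "es": {"argumento"},
    "it": {"trama"},
}

def extract_plot_summary(page_sections, lang):
    """Concatenate the sections whose title appears on the plot allow-list."""
    keep = PLOT_SECTIONS.get(lang, set())
    return "\n".join(text for title, text in page_sections
                     if title.strip().lower() in keep)
```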
We note that the Wikipedia pages we select for the Echo-Wiki dataset have a large average number of unique editors (220.6) and revisions (421.4), and an average creation year of 2008: this indicates that their book summaries have been curated over time and suggests that they are of high quality. Table 1 shows how Echo-Wiki compares against BookSum, the previous largest existing dataset for book summarization, to the best of our knowledge. Besides being multilingual, it is worth noticing that Echo-Wiki is about 12 times larger than BookSum (5,001 vs. 405 books) while still featuring a similar compression ratio (103.7 vs. 126.2).
### Enabling extreme summarization of books
Inspired by the work of Narayan et al. (2018) on the news domain with XSum, which showcases the capabilities of highly-abstractive summarization, we
| **Dataset** | **Languages** | **# Documents** | **Coverage** | **Density** | **C. Ratio** | **Avg. # Tokens (Source)** | **Avg. # Tokens (Summary)** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| XSum | EN | 226,677 | 0.66 | 1.09 | 19.3 | 438.4 | 23.9 |
| CNN/DailyMail | EN | 311,971 | 0.85 | 3.47 | 14.9 | 803.7 | 59.7 |
| ArXiv/PubMed | EN | 346,187 | 0.87 | 3.94 | 31.2 | 5,179.2 | 257.4 |
| MediaSum | EN | 463,596 | 0.80 | 1.86 | 116.3 | 1,925.8 | 16.6 |
| BookSum (full) | EN | 405 | 0.89 | 1.83 | 126.2 | 112,885.2 | 1,167.2 |
| Echo-Wiki | EN, FR, DE, ES, IT | 5,001 | 0.79 | 2.08 | 103.7 | 75,600.9 | 729.4 |
| Echo-Wiki\({}_{EN}\) | EN | 2,375 | 0.84 | 2.24 | 117.1 | 83,724.1 | 678.0 |
| Echo-XSum | EN, FR, DE, ES, IT | 3,383 | 0.78 | 1.67 | 1624.0 | 86,040.0 | 53.0 |
| Echo-XSum\({}_{EN}\) | EN | 1,828 | 0.81 | 1.78 | 1706.1 | 90,971.9 | 53.0 |
| Echo-FairySum | EN | 197 | 1.00 | 1.00 | 2.8 | 4,438.8 | 1,506.2 |

Table 1: Comparison of Echoes (Echo-Wiki, Echo-XSum, and Echo-FairySum) with existing resources for summarization. **Coverage and density:** measures of the “extractiveness” of a summary. **Compression Ratio:** micro-average ratio between the lengths of the source and the summary.
introduce **Echo-XSum**, a new dataset for training and evaluating systems for extreme summarization of books. In Echo-XSum, we pair full-text books with very short summaries. These summaries contain the minimum number of sentences required to provide an overview of the main contents of a book, typically one to three sentences. The main challenge posed by Echo-XSum is dealing with the great disparity between the size of the input and the size of the output. Indeed, as we can observe in Table 1, the compression ratio of Echo-XSum (1624.0) is unprecedented in the field of summarization, being an order of magnitude greater than those of Echo-Wiki (103.7) and BookSum (126.2).
The extreme summaries in Echo-XSum are the result of a manual annotation process, which involved an expert linguist who is a fluent speaker in all 5 languages of Echoes. The annotator was explicitly contracted for this task. Given a book and its previously-identified Wikipedia page (see Section 3.1), the annotator was tasked with extracting portions of text from the introduction that described the essential plot of a book. An excerpt of a book text with the corresponding multilingual summaries from Echo-XSum can be found in Appendix B. Notice that the portions of text extracted by the annotator are not necessarily contiguous, as long as the extracted text can be read independently of its original text. As a rule of thumb for the annotation process, the linguist followed the definitions of Consistency, Relevance, Fluency, and Coherence of a summary (Fabbri et al., 2021). The annotator spent an average of 5 minutes per sample. We provide an example of the annotations produced in Appendix C. At the end of the manual creation of our extreme summaries, the resulting Echo-XSum is still about 8 times larger than BookSum (3,383 vs. 405 books).12
Footnote 12: Echo-XSum includes fewer book/summary pairs than Echo-Wiki because the annotator was not able to find an extreme summary in the Wikipedia pages of some books.
### Classifying books into genres
Differently from existing resources, such as BookSum, which is limited by its relatively small size, the thousands of books in Echoes give us the opportunity to investigate book summarization more in-depth. Indeed, books in Echoes cover a wide range of genres, including novels, theatrical plays, and poems, among others. We argue that developing a strategy to automatically identify book genres provides valuable insights into the dataset and enables a fine-grained evaluation of current and future summarization approaches. An analysis by genre can help us determine which genres are the most challenging to summarize.
Similarly to what was described in Section 3.2, we rely on a graph-based heuristic on the knowledge graph of Wikidata to identify genres. More specifically, given a Wikipedia article of a book, we retrieve its corresponding Wikidata node, and analyze its relations (e.g., _genre_ and _form_of_creative_work_) with its neighboring nodes. This process is able to distinguish between 7 main genres: novels, plays, poems, epic poems, short stories, fairy tales, and essays. Note that our heuristic may assign more than one genre to a single book. Figure 1 illustrates the distribution of the genres in the English partition of Echo-Wiki, showing that novels are the most represented genre, followed by short stories and plays.
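The following sketch illustrates this genre heuristic; `wikidata_neighbors` stands in for whatever client retrieves the labels of the nodes reached from a book's Wikidata item via genre-like relations (e.g., _genre_ and _form_of_creative_work_), and the keyword table is an illustrative assumption rather than the exact mapping used for Echoes.

```python
# Keywords are checked from most to least specific so that, e.g., an
# "epic poem" label is not also counted as a plain "poem".
GENRE_KEYWORDS = [
    ("epic poem", "epic poem"),
    ("short story", "short story"),
    ("fairy tale", "fairy tale"),
    ("novel", "novel"),
    ("play", "play"),
    ("poem", "poem"),
    ("essay", "essay"),
]

def classify_genres(book_qid, wikidata_neighbors):
    """Map the labels of a book's Wikidata neighbors to the 7 target genres;
    a book may receive more than one genre."""
    genres = set()
    for label in wikidata_neighbors(book_qid):
        for keyword, genre in GENRE_KEYWORDS:
            if keyword in label.lower():
                genres.add(genre)
                break
    return genres
```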
### Digging up extractive summarization
Over the past few years, the attention of the research community has gradually shifted from extractive to abstractive summarization, especially thanks to the advent of flexible sequence-to-sequence models, which have proven effective for summarizing short documents. Thanks to genre classification (see Section 3.4), we are able to perform a small-scale investigation of extractive book summarization on two genres in Echoes. More specifically, we construct **Echo-FairySum**, the first evaluation dataset for extractive summarization of fairy tales and short stories.
To create extractive summaries for Echo-FairySum, we set up the following manual annotation process: given the text of a book, and its
Figure 1: Distribution of the genres – novels, short stories, play, poems, essays, fairy tales, and epic poems – in the English partition of Echo-Wiki.
abstractive summary from Wikipedia (Section 3.2), annotators are required to extract relevant sentences from the book text. A sentence is relevant if it provides a piece of information that is also contained in the abstractive summary. The annotators were asked to adhere as closely as possible to the concepts of Consistency, Relevance, and Coherence defined by Fabbri et al. (2021). The annotators were drawn from a pool of fifty-eight Master-level students from the 'Narrative Understanding and Storytelling' minicourse held at the Sapienza University of Rome by the last co-author, as part of the AI and Robotics degree. The selected students carried out the task as part of their course assignments. On average, each student annotated 3 texts, resulting in multiple annotations for each text. The annotation agreement was measured using Cohen's Kappa coefficient, which indicated substantial agreement (0.71). A subset of annotations was further validated by our contracted annotator to ensure that the students were adhering to the guidelines. Overall, Echo-FairySum provides extractive summaries for 197 documents, about 4 times the size of the test set of BookSum.
### Aggregating books across versions and languages
A book can be published in various editions after its original publication. Perhaps most importantly, the same version of a book can also be translated into multiple languages. Given the potentially large variety of versions and translations of a book, we argue that it is important to aggregate those versions. Indeed, aggregating books across versions and translations can allow Echoes to also be employed for machine translation, cross-lingual sentence alignment, and cross-lingual summarization.
To achieve this objective, we leverage two characteristics of Wikipedia. First, we aggregate all those book texts aligned to the same Wikipedia page (see Section 3.2). We increase the accuracy of this step by taking into account the information found on some Wikisource pages, which list the editions available for some books. Second, we navigate the Wikipedia interlanguage links, which connect pages that refer to the same concept/entity in different languages, to aggregate different translations and summaries (in different languages) of the same book. Figure 2 presents the number of _book-summary_ and the _version-summary_ pairs for all the language pairs in Echo-Wiki obtained after our aggregation process.
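A compact sketch of this aggregation step is given below, assuming that `book_to_page` maps each collected book text to its matched Wikipedia page and that `page_to_item` maps a page to a cross-lingual identifier shared by interlanguage-linked pages (e.g., a Wikidata item); both mappings are assumptions for illustration.

```python
from collections import defaultdict

def aggregate_versions(book_to_page, page_to_item):
    """Group book texts that share a Wikipedia page, then merge groups whose
    pages are interlanguage-linked (i.e., resolve to the same cross-lingual
    identifier). Returns a mapping from identifier to its list of versions."""
    groups = defaultdict(list)
    for book, page in book_to_page.items():
        key = page_to_item.get(page, page)  # fall back to the page itself
        groups[key].append(book)
    return dict(groups)
```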
## 4 Experiments and Results
In recent years, two promising paradigms have emerged from previous work on long-document summarization: _recursive-abstractive_ and _extractive-then-abstractive_. In this section, we evaluate and analyze their effectiveness on Echoes.
### Recursive-abstractive approaches
Recursive-abstractive approaches consist in dividing the source document into smaller segments, referred to as chunks, and then using an abstractive summarization model to summarize each segment. If the concatenated output summaries are still larger
Figure 2: Number of _book-summary_ (left) and _version-summary_ pairs (right) for all language pairs in Echo-Wiki. Best seen in color.
than a single chunk, the recursive-abstractive approach repeats the process by treating the concatenation as a new source document and summarizing it in the same way. The recursive process continues until the concatenated output summaries are short enough to be considered as the final summary, i.e., until their size is shorter than the maximum size of a single chunk.
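A minimal sketch of this recursive loop follows; the whitespace tokenization, chunk size, and round limit are simplifying assumptions (in practice the chunk size is dictated by the model's tokenizer and context window), and `summarize_chunk` stands in for any trained abstractive summarizer such as BART, LED, or LongT5.

```python
def recursive_abstractive_summary(text, summarize_chunk,
                                  chunk_tokens=1024, max_rounds=10):
    """Repeatedly split `text` into chunks of at most `chunk_tokens` tokens,
    summarize each chunk, and concatenate the outputs; stop once the
    concatenation fits into a single chunk (or after `max_rounds` passes)."""
    tokens = text.split()
    for _ in range(max_rounds):
        if len(tokens) <= chunk_tokens:
            break
        chunks = [" ".join(tokens[i:i + chunk_tokens])
                  for i in range(0, len(tokens), chunk_tokens)]
        text = " ".join(summarize_chunk(chunk) for chunk in chunks)
        tokens = text.split()
    return text
```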
Experimental setting.In its simplest form, a recursive-abstractive approach requires a model trained on a standard summarization dataset; this model is then employed recursively, as described above. For our experiments, we consider three sequence-to-sequence Transformer-based models - BART-large Lewis et al. (2020), LED-base Beltagy et al. (2020), and LongT5-base Guo et al. (2022) - and train them on XSum (short documents, news) and MediaSum (long documents, interviews). Then, we evaluate our trained models on the test set of Echo-XSum,13 whose summaries feature an average length similar to that of the summaries in XSum and MediaSum but belong to a different genre (books). For the evaluation, we adopt standard summarization metrics, such as ROUGE-1, ROUGE-2, ROUGE-L, and BERTScore Zhang et al. (2019).
Footnote 13: We split Echo-Wiki and Echo-XSum into train/dev/test sets using the standard 80/10/10 split.
Results.Table 2 (top) provides an overview of the results obtained by our recursive-abstractive baseline using different language models and trained on different summarization datasets. Overall, we can observe that, independently of the language model and training dataset employed, the baseline does not achieve good results on Echo-XSum. Indeed, the best configuration (LED\({}_{XSum}\)) obtains only 14.83 points in ROUGE-L on Echo-XSum. By comparison, the same configuration achieves 30.24 points on XSum. Therefore, i) Echo-XSum is empirically more challenging than XSum, ii) a simple recursive-abstractive approach is not sufficient to obtain acceptable results on Echo-XSum, and, iii) different pretrained language models and different summarization datasets (from different genres/domains) do not significantly affect the results of a recursive-abstractive approach on our book summarization dataset.
### Extractive-then-abstractive approaches
Since recursive-abstractive approaches yield unsatisfying results on Echo-XSum (see Table 2), we propose a simple, novel baseline based on the extractive-then-abstractive paradigm. Our model is composed of two submodules: the _extractor_ extracts key sentences from the input text, while the _abstractor_ uses the concatenation of these key sentences to generate an abstractive plot of the book. Given an input text \(T=(s_{1},s_{2},\dots,s_{|T|})\) where each \(s_{i}\) is a sentence, the extractor produces a score in \([0.0,1.0]\) for each \(s_{i}\), quantifying its degree of importance for the target summary. More formally:
\[\mathbf{e}_{i}^{s}=\textsc{SentenceEncoder}(s_{i}),\qquad\textsc{Score}(s_{i})=\sigma(W\mathbf{e}_{i}^{s}+\mathbf{b})\]
where \(\mathbf{e}_{i}^{s}\) is the sentence representation of \(s_{i}\) from a sentenceEncoder.14 Then, the abstractor takes the subset \(T^{*}\) composed of the \(k\) sentences with higher scores according to the extractor, and uses \(T^{*}\) to generate the final summary. To make the abstractor aware of the relative importance of each sentence, we multiply the embedding of each token by the score of its sentence, as follows:
Footnote 14: We adopt a SentenceTransformer based on DistilRoBERTa from [https://www.sbert.net/](https://www.sbert.net/).
\[\mathbf{e}_{i,j}^{t}=\textsc{Score}(s_{i})\;\cdot\;\textsc{Embedding}(t_{i,j})\]
where \(\mathbf{e}_{i,j}^{t}\) is the encoding of the \(j\)-th token of the \(i\)-th sentence, for each sentence in \(T^{*}\).
The model is trained in an end-to-end fashion, i.e., the extractor and abstractor are trained jointly, by minimizing the cross-entropy loss between the reference summary and the generated summary.
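A minimal PyTorch sketch of this architecture is shown below; the sentence encoder, the abstractor (any seq2seq model whose encoder accepts precomputed token embeddings), and the value of \(k\) are placeholders, so this is an illustration of the interface rather than the exact implementation used in our experiments.

```python
import torch
import torch.nn as nn

class ExtractThenAbstract(nn.Module):
    """Extractor scores sentences with sigma(W e_i + b); the top-k sentences,
    with token embeddings rescaled by their sentence scores, are passed to the
    abstractor to generate the summary."""

    def __init__(self, sentence_encoder, abstractor, hidden_dim, k=32):
        super().__init__()
        self.sentence_encoder = sentence_encoder  # sentences -> (n, hidden_dim)
        self.abstractor = abstractor              # seq2seq over token embeddings
        self.scorer = nn.Linear(hidden_dim, 1)
        self.k = k

    def forward(self, sentences, token_embeddings, labels=None):
        # token_embeddings[i]: (len_i, hidden_dim) embeddings of sentence i
        e = self.sentence_encoder(sentences)                 # (n, hidden_dim)
        scores = torch.sigmoid(self.scorer(e)).squeeze(-1)   # (n,)
        k = min(self.k, len(sentences))
        top = torch.topk(scores, k).indices.sort().values    # keep document order
        weighted = [scores[i] * token_embeddings[i] for i in top.tolist()]
        inputs_embeds = torch.cat(weighted, dim=0).unsqueeze(0)
        # Training end to end: the loss on the reference summary back-propagates
        # through the sentence scores into the extractor.
        return self.abstractor(inputs_embeds=inputs_embeds, labels=labels)
```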
Experimental setting.We follow the experimental setting we used for our recursive-abstractive approach. We train and evaluate 3 models - BART-large, LED-base, and LongT5-base - on Echo-XSum. Since pretraining on XSum results in
| | **Model** | **R-1** | **R-2** | **R-L** | **BERTScore** |
| --- | --- | --- | --- | --- | --- |
| Recursive-abstractive | BART\({}_{XSum}\) | 18.02 | 2.91 | 13.81 | 0.438 |
| | BART\({}_{MediaSum}\) | 13.95 | 5.11 | 12.72 | 0.416 |
| | LED\({}_{XSum}\) | 18.86 | 2.99 | 14.83 | 0.440 |
| | LED\({}_{MediaSum}\) | 14.69 | 4.26 | 12.79 | 0.421 |
| | LongT5\({}_{XSum}\) | 14.53 | 2.31 | 12.05 | 0.413 |
| | LongT5\({}_{MediaSum}\) | 16.54 | 5.47 | 14.35 | 0.429 |
| Extractive-then-abstractive | BART | 30.44 | 12.41 | 25.76 | 0.557 |
| | BART\({}_{XSum}\) | **30.78** | 13.44 | **26.73** | 0.558 |
| | LED | 30.18 | 12.73 | 25.79 | 0.558 |
| | LED\({}_{XSum}\) | 30.22 | 13.05 | 26.28 | **0.560** |
| | LongT5 | 30.05 | **13.52** | 26.02 | **0.560** |
| | LongT5\({}_{XSum}\) | 29.42 | 13.35 | 26.00 | 0.557 |

Table 2: Automatic evaluation of recursive-abstractive and extractive-then-abstractive approaches on Echo-XSum.
slightly improved performance for the recursive-abstractive approach, we also evaluate how pre-training on XSum affects the performance of our extractive-then-abstractive approach. Finally, we also train and evaluate our approach on Echo-Wiki and on BookSum (the latter to directly compare performance with the current state of the art).
Results.Table 2 (bottom) provides an overview of the results obtained by our extractive-then-abstractive approach on Echo-XSum. We can immediately notice that each configuration significantly outperforms the recursive-abstractive baselines by a large margin. For example, the best extractive-then-abstractive model (BART\({}_{XSum}\)) improves over the best recursive-abstractive model (LED\({}_{XSum}\)) by 11.90 points in ROUGE-L (26.73 vs. 14.83), and this is true for all the metrics we consider (ROUGE-1, ROUGE-2, ROUGE-L, and BERTScore). It is interesting to note that, while there is little difference in the results on Echo-XSum of different model configurations, there is a significant difference between BART, LED, and LongT5 when evaluated on Echo-Wiki, as shown in Table 3. We hypothesize that such a variance in performance is due to several factors, but the inadequacy of current non-semantic metrics plays a large role, as supported by our human evaluation (see Section 5).
Finally, we further assess the effectiveness of our extractive-then-abstractive approach on the standard test set of BookSum (Table 6). In particular, our approach outperforms the system of Kryscinski et al. (2021) using 33% of its parameters, and is competitive with the system of Wu et al. (2021) using only 0.1% of its parameters.
## 5 Analysis and Discussion
Human evaluation.Following common practice in the field of summarization, we set up a human evaluation process to assess the quality of the system-generated summaries. The annotation task, performed by an expert English speaker, consists of reading the source text and rating the summaries using a Likert scale for Consistency, Relevance, Fluency, and Coherence, as outlined in Fabbri et al. (2021). To make this experiment feasible in terms of time and resources, we focus our evaluation on fairy tales and short stories, which can be read by a human in a short time. Interestingly, but not surprisingly (Fabbri et al., 2021), the results of our human evaluation experiment tell a story that is different from ROUGE, as shown in Tables 4 and 5. However, the evaluation still highlights the effectiveness of our extractive-then-abstractive model compared to the recursive-abstractive baseline. It is clear, however, that future work should focus in particular on improving the Consistency and Relevance of the summaries generated.
Challenges.Echoes opens the door to several other analyses and experiments that were not possible with previous datasets. For example, we can leverage Echo-FairySum to perform an analysis of the behavior of the extractor submodule of our extractive-then-abstractive approach, as we show in Appendix D. In Section 3.4, we examined the different book genres in Echoes; LongT5 model performances are detailed for each genre in Figure 3. We notice that epic poems are the hardest to summarize in this setting, while our model performs reasonably well on fairy tales.
| | **Model** | **Cons.** | **Fluency** | **Rel.** | **Coher.** |
| --- | --- | --- | --- | --- | --- |
| Recursive-abstractive | BART\({}_{XSum}\) | 2.19 | 3.81 | 1.62 | 3.58 |
| | LED\({}_{XSum}\) | 1.65 | 3.96 | 1.31 | 2.92 |
| | LongT5\({}_{XSum}\) | 1.23 | 2.88 | 1.19 | 2.34 |
| | BART\({}_{MediaSum}\) | 1.73 | 2.46 | 1.62 | 2.19 |
| | LED\({}_{MediaSum}\) | 1.61 | 2.23 | 1.46 | 1.92 |
| | LongT5\({}_{MediaSum}\) | 1.11 | 1.38 | 1.12 | 1.38 |
| Extractive-then-abstractive | BART | 1.69 | 4.38 | 1.76 | 4.42 |
| | BART\({}_{XSum}\) | 1.61 | 3.06 | 1.35 | 2.71 |
| | LED | 1.84 | 4.34 | 1.84 | 4.23 |
| | LED\({}_{XSum}\) | 1.72 | 3.97 | 1.55 | 3.66 |
| | LongT5 | **2.73** | **4.50** | **2.73** | **4.62** |
| | LongT5\({}_{XSum}\) | 2.04 | 3.85 | 1.74 | 3.52 |

Table 4: Human evaluation of recursive-abstractive and extractive-then-abstractive approaches on Echo-XSum.
| **Model** | **Cons.** | **Fluency** | **Rel.** | **Coher.** |
| --- | --- | --- | --- | --- |
| BART | 2.06 | **3.73** | 1.65 | **3.08** |
| LED | 2.02 | 3.63 | 1.61 | 3.07 |
| LongT5 | **2.15** | 3.62 | **1.72** | 3.06 |

Table 5: Human evaluation of extractive-then-abstractive approaches on Echo-Wiki.
Cross-lingual book summarization.Additionally, Echoes can be employed as a multilingual and cross-lingual summarization benchmark, thanks to its coverage of 5 languages and 25 language pairs. In particular, we argue that cross-lingual book summarization is a very interesting challenge, as it requires a model to compress vast amounts of information while transferring knowledge across languages. Moreover, enabling cross-lingual book summarization is fundamental for all those cases in which we do not have the source text available in the language of interest, i.e., its translation may still be under copyright or may not exist at all. To move the first step in this direction, we propose a _summarize-then-translate_ approach, a simple baseline for cross-lingual book summarization on Echo-XSum. As the name implies, our approach works by employing a monolingual model to produce a summary in the same language as the source text, and then it translates the summary from the source language to the desired target language. We report the results of this baseline in Table 7. While this is a strong baseline, it is still affected by two main issues: i) it requires two systems, a summarizer and a translator; ii) machine translation usually fails to translate language-specific items, e.g., character names may not be exact translations.
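A sketch of this two-stage pipeline follows, where `summarize` and `translate` are placeholders for a monolingual summarizer (e.g., LongT5 fine-tuned on Echo-XSum) and any machine-translation system.

```python
def summarize_then_translate(book_text, summarize, translate, src_lang, tgt_lang):
    """Cross-lingual baseline: summarize the book in its source language,
    then translate the summary into the target language."""
    summary = summarize(book_text)  # summary in src_lang
    if src_lang == tgt_lang:
        return summary
    return translate(summary, source=src_lang, target=tgt_lang)
```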
## 6 Conclusion
In this paper, we introduced Echoes, the first multilingual resource for book summarization and the largest among the English datasets. Echoes features three novel datasets, namely, Echo-Wiki, Echo-XSum, and Echo-FairySum, which address several limitations of existing book summarization resources, such as BookSum. Indeed, previous datasets for full-text book summarization are, i) limited in size, and, ii) monolingual, i.e., usually covering English only.
In addition, we leveraged Echoes to bring to light the unsatisfying capabilities of current approaches to generalize to book summarization. Finally, to mitigate this issue, we proposed a new _extractive-then-abstractive_ baseline for book summarization, which outperforms its purely-abstractive counterpart on Echo-Wiki and Echo-XSum, achieving results on the standard BookSum test set that are comparable with the current state of the art while using a number of parameters that is only 0.1% compared to the best-performing method.
We believe that Echoes will foster future work on long-document summarization, especially in the multilingual and cross-lingual setting.
| **Approach** | **R-1** | **R-2** | **R-L** | **# Params.** |
| --- | --- | --- | --- | --- |
| Kryscinski et al. (2021) | 39.87 | 8.01 | 13.99 | 737M |
| Wu et al. (2021) | 43.19 | 10.63 | 17.10 | 175,000M |
| **Ours** (LED/extractive-abs.) | 42.13 | 10.53 | 16.75 | 243M |

Table 6: Results of our approach compared to the state of the art on the BookSum test set.
| **Language** | **# Examples** | **R-1** | **R-2** | **R-L** | **BERTScore** |
| --- | --- | --- | --- | --- | --- |
| de | 24 | 21.219 | 6.808 | 17.742 | 0.641 |
| fr | 33 | 21.602 | 7.681 | 17.721 | 0.622 |
| es | 45 | 24.509 | 8.966 | 19.554 | 0.634 |
| it | 37 | 25.174 | 10.446 | 22.343 | 0.633 |

Table 7: _Summarize-then-translate_ experiment. We translate the summaries generated by the LongT5\({}_{base}\) model, fine-tuned on Echo-XSum, and compare them against gold standard references.
Figure 3: Genre-specific evaluation of LongT5\({}_{base}\) model fine-tuned on Echo-XSum. Best seen in color.
### Limitations
Despite the multilinguality of our resource, there is still a strong bias towards the English language, as the majority of books are in English and many translations are from English. As a result, the dataset may mostly reflect the values of English-language literature, which can differ from those of other cultures; summarizing literature from other cultures and regions may therefore not be fully accurate, as every region has had its own historical development.
Language models used in the experiments can inherit biases from the training data and the tools, such as the ones used for preprocessing, and have limitations that have not been fully evaluated and could impact the results of this study.
This study includes the use of Web data, which - while marked as public domain - may be subject to copyright laws. The data used in this study was collected for research purposes and was not intended for any other use. Additionally, it is worth noting that the majority of books used in our resource are copyright-free, and therefore, old. While this allowed us to include a large number of texts in our dataset, it also means that our resource may not fully capture contemporary literature and may not be representative of current linguistic trends and cultural values.
## Acknowledgements
The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487 under the European Union's Horizon 2020 research.
The last author gratefully acknowledges the support of the PNRR MUR project PE0000013-FAIR. This work was carried out while Alessandro Scire was enrolled in the Italian National Doctorate on Artificial Intelligence run by Sapienza University of Rome. We would like to express our gratitude to Luigi Procopio and Edoardo Barba for their valuable insights on extractive-then-abstractive architectures, as well as to Fabrizio Brignone (Babelscape) for his exceptional support with the adaptation and use of Babelscape's keyword and phrase annotation interface.
|
2303.01492 | An Improved Classical Singular Value Transformation for Quantum Machine
Learning | We study quantum speedups in quantum machine learning (QML) by analyzing the
quantum singular value transformation (QSVT) framework. QSVT, introduced by
[GSLW, STOC'19, arXiv:1806.01838], unifies all major types of quantum speedup;
in particular, a wide variety of QML proposals are applications of QSVT on
low-rank classical data. We challenge these proposals by providing a classical
algorithm that matches the performance of QSVT in this regime up to a small
polynomial overhead.
We show that, given a matrix $A \in \mathbb{C}^{m\times n}$, a vector $b \in
\mathbb{C}^{n}$, a bounded degree-$d$ polynomial $p$, and linear-time
pre-processing, we can output a description of a vector $v$ such that $\|v -
p(A) b\| \leq \varepsilon\|b\|$ in $\widetilde{\mathcal{O}}(d^{11}
\|A\|_{\mathrm{F}}^4 / (\varepsilon^2 \|A\|^4 ))$ time. This improves upon the
best known classical algorithm [CGLLTW, STOC'20, arXiv:1910.06151], which
requires $\widetilde{\mathcal{O}}(d^{22} \|A\|_{\mathrm{F}}^6 /(\varepsilon^6
\|A\|^6 ) )$ time, and narrows the gap with QSVT, which, after linear-time
pre-processing to load input into a quantum-accessible memory, can estimate the
magnitude of an entry $p(A)b$ to $\varepsilon\|b\|$ error in
$\widetilde{\mathcal{O}}(d\|A\|_{\mathrm{F}}/(\varepsilon \|A\|))$ time.
Our key insight is to combine the Clenshaw recurrence, an iterative method
for computing matrix polynomials, with sketching techniques to simulate QSVT
classically. We introduce several new classical techniques in this work,
including (a) a non-oblivious matrix sketch for approximately preserving
bi-linear forms, (b) a new stability analysis for the Clenshaw recurrence, and
(c) a new technique to bound arithmetic progressions of the coefficients
appearing in the Chebyshev series expansion of bounded functions, each of which
may be of independent interest. | Ainesh Bakshi, Ewin Tang | 2023-03-02T18:53:03Z | http://arxiv.org/abs/2303.01492v4 | # An Improved Classical Singular Value Transformation for Quantum Machine Learning
###### Abstract
Quantum machine learning (QML) has shown great potential to produce large quantum speedups for computationally intensive linear algebra tasks. The quantum singular value transformation (QSVT), introduced by Gilyen, Su, Low and Wiebe [1], is a unifying framework to obtain QML algorithms. We provide a classical algorithm that matches the performance of QSVT on low-rank inputs, up to a small polynomial overhead. Under efficient quantum-accessible memory assumptions, given a bounded matrix \(A\in\mathbb{C}^{m\times n}\), a vector \(b\in\mathbb{C}^{n}\), and a bounded degree-\(d\) polynomial \(p\), QSVT can output a measurement from the state \(|p(A)b\rangle\) in \(\mathcal{O}(d\|A\|_{F})\) time after linear-time pre-processing. We show that, in the same setting, for any \(\varepsilon>0\), we can output a vector \(v\) such that \(\|v-p(A)b\|\leqslant\varepsilon\|b\|\) in \(\mathcal{O}(d^{9}\|A\|_{F}^{4}/\varepsilon^{2})\) time after linear-time pre-processing. This improves upon the best known classical algorithm [1], which requires \(\mathcal{O}(d^{22}\|A\|_{F}^{6}/\varepsilon^{6})\) time.
Instantiating the aforementioned algorithm with different polynomials, we obtain fast _quantum-inspired_ algorithms for regression, recommendation systems, and Hamiltonian simulation. We improve in numerous parameter settings on prior work for these problems, including work that uses problem-specialized approaches.
###### Contents
* 1 Introduction
* 2 Our Results
* 2.1 Applications: Dequantizing QML
* 3 Technical overview
* 4 Related work
* 5 Preliminaries
* 5.1 Linear algebra
* 5.2 Polynomials and the Chebyshev Basis
* 5.3 Sampling and query access
* 6 Extending the Sketching Toolkit
* 6.1 The Bi-Linear Entry-wise Sampling Transform
* 6.2 Approximate Matrix Product via \(\ell_{2}^{2}\) Sampling
* 7 Sums of Chebyshev coefficients
* 8 Properties of the Clenshaw recursion
* 8.1 Deriving the Clenshaw recursions
* 8.2 Evaluating even and odd polynomials
* 9 Stability of the scalar Clenshaw recursion
* 9.1 Analyzing error propagation
* 9.2 Bounding the iterates of the Clenshaw recurrence
* 10 Computing matrix polynomials
* 10.1 Computing odd matrix polynomials
* 10.2 Generalizing to even polynomials
* 10.3 Bounding iterates of singular value transformation
* 11 Decoupling algorithms
* 11.1 Recommendation systems
* 11.2 Linear regression
* 11.3 Hamiltonian simulation
## 1 Introduction
Quantum machine learning (QML) has rapidly developed into a highly active field of study, with numerous proposals for speeding up machine learning tasks on quantum computers ([12, 13, 14, 15, 16, 17], and see [14, 13] for a survey). These proposals include quantum algorithms for several basic tasks in machine learning, including regression, principal component analysis, support vector machines, recommendation systems, Hamiltonian simulation and semi-definite programming. The central goal in QML is to demonstrate a problem on which quantum computers obtain a substantial practical speedup over classical computers. A successful resolution of this goal would provide compelling motivation to further invest resources into developing scalable quantum computers (i.e. be a _killer app_ for quantum computing).
The quantum singular value transformation (QSVT) framework [11, 12, 13] uses ideas from signal processing to present a unified approach to design algorithms for quantum linear algebra and, by extension, QML. This framework is known to capture essentially all linear algebraic QML techniques [13] (and, more generally, the most prominent examples of quantum advantage), so it will be the focus of our investigation in this work. Informally, the QSVT framework defines a central primitive known as the _block-encoding_, and shows that, given a matrix \(A\in\mathbb{C}^{m\times n}\) with bounded operator norm as a block-encoding, one can generate a block-encoding for a degree-\(d\) polynomial applied to that matrix, \(p(A)\), defined in the appropriate sense, with only \(O(d\log(mn))\) overhead in gate complexity.1 Efficient block-encodings do not exist in general, but they do exist for two broad classes of matrices, assuming appropriately strong forms of coherent access: matrices with low sparsity (a typical setting for quantum simulation) and matrices with low stable rank (a typical setting for quantum machine learning). We treat the latter case; specifically, we assume that the classical input data is in a quantum-accessible data structure, which allows for efficient (as in low-depth) preparation of a block-encoding of \(A/\|A\|_{F}\), where \(\|A\|_{F}=\big(\sum_{i,j}|A_{i,j}|^{2}\big)^{1/2}\) denotes the Frobenius norm of \(A\).
Footnote 1: We will generally not concern ourselves with \(\log(mn)\) factors, since quantum algorithms typically count bit complexity where classical algorithms count word RAM complexity, which muddles any comparison of such \(\log(mn)\) and \(\log\frac{1}{\varepsilon}\) factors.
This type of block-encoding is the one commonly used for quantum linear algebra algorithms on classical data [11, 12], since it works for arbitrary matrices and vectors, paying only a \(\|A\|_{F}/\|A\|\) in sub-normalization. The main consequence of the QSVT framework for QML is that, given a matrix \(A\in\mathbb{C}^{m\times n}\) with bounded operator norm, a vector \(b\in\mathbb{C}^{n}\) of unit norm, and a degree-\(d\) polynomial \(p\) with \(|p(x)|\leqslant 1\) for \(x\in[-1,1]\), we can output a sample2 from the state \(|p(\frac{A}{\|A\|_{F}})b\rangle\) in time \(O(d\log(mn))\). Since in applications we care about applying polynomials to the singular values of \(A\), which in \(\frac{A}{\|A\|_{F}}\) lie in the range \([0,\frac{\|A\|}{\|A\|_{F}}]\), we need to pay an additional overhead of \(\frac{\|A\|_{F}}{\|A\|}\) to amplify these singular values [13, Theorem 30] to the full region \([0,1]\), making the gate complexity to produce a state \(\varepsilon\)-close to \(|p(A)b\rangle\), \(\mathcal{O}(d\frac{\|A\|_{F}}{\|A\|}\log(mn)\log(1/\varepsilon))\). Note that, though the quantum algorithm has only logarithmic dependence on \(\varepsilon\), if we wish to determine a property of the output state (say, whether its norm is large or whether it has high overlap with another state), this incurs an \(\mathcal{O}(1/\varepsilon)\), since distinguishing a state from one \(\varepsilon\)-far in trace distance requires \(\Omega(1/\varepsilon)\) additional overhead, even when given an oracle efficiently
preparing that state. Therefore, the effective running time for QSVT is \(\Omega\big(\frac{d\|A\|_{F}}{\|A\|\varepsilon}\log(mn)\big)\).
At first glance, it appears that QSVT obtains an exponential speed-up over classical algorithms, since it scales logarithmically with input size. However, as first argued in the work of [19], to obtain a fair comparison, the classical algorithms should be equipped with a \(\ell_{2}^{2}\) sampling data structure to access the input (see Section 5.3 for the precise model). Given access to such a data structure, and an accuracy parameter \(\varepsilon>0\), Chia, Gilyen, Li, Lin, Tang and Wang [20] showed that there exists a classical algorithm that outputs a vector \(v\) such that \(\|p(A)b-v\|\leqslant\varepsilon\), in \(\mathcal{O}(d^{22}\|A\|_{F}^{6}/\varepsilon^{6})\) time. As a result, they obtain _quantum-inspired_ algorithms for quantum-inspired versions of several fundamental problems in machine learning, albeit with a running time that is a large polynomial in the degree of the polynomial \(p\), the Frobenius norm of \(A\), and the accuracy parameter \(\varepsilon\).
Multiple papers [14, 15] have conjectured that the large polynomial running time (significantly larger than quartic) for classical algorithms may be inherent, and thus can demonstrate a practical speedup for several problems in QML. This is borne out in prior work on quantum-inspired algorithms, which is dominated by the cost of computing a singular value decomposition of a matrix with \(\Omega\big(\big(\frac{\|A\|_{F}}{\|A\|\varepsilon}\big)^{2}\big)\) rows and columns, immediately incurring a power-six dependence in \(\varepsilon\) and the Frobenius norm of \(A\). Therefore, the central question we address in this paper is as follows:
_Is the large polynomial running time for dequantizing QSVT inherent?_
## 2 Our Results
We answer the central question above in the negative for all parameters except polynomial degree, and show that there are indeed better classical algorithms that simulate QSVT. Our main result is as follows:
**Theorem 2.1** (Classical Singular Value Transformation, informal Theorem 10.1).: _Suppose we are given \(A\in\mathbb{C}^{m\times n}\) with \(\|A\|\leqslant 1\) and \(b\in\mathbb{C}^{n}\), and an accuracy parameter \(\varepsilon\). Then, after \(O(\operatorname{nnz}(A)+\operatorname{nnz}(b))\) preprocessing time to create a data structure3, for an even or odd degree-\(d\) polynomial \(p\) such that \(|p(x)|\leqslant 1\) for \(x\in[-1,1]\), we can output a description of a vector \(y\in\mathbb{C}^{n}\) such that with probability at least 0.9, \(\|y-p(A)b\|\leqslant\varepsilon\|b\|\) in \(O\big(d^{9}\log^{4}(d)\|A\|_{F}^{4}\log(n)/\varepsilon^{2}\big)\) time. This description gives \(y\) as either \(Ax\) or \(A^{\dagger}x\) for a sparse vector \(x\), depending on the parity of \(p\), and allows us to compute entries of \(y\) in \(\tilde{O}\big(d^{4}\|A\|_{F}^{2}/\varepsilon^{2}\big)\) time or obtain an \(\ell_{2}^{2}\) sample from \(y\) in \(\tilde{O}\big(d^{6}\|A\|_{F}^{4}/(\varepsilon^{2}\|y\|^{2})\big)\) time._
Footnote 3: If we are already given \(A\) and \(b\) in the QRAM data structures needed to prepare a block-encoding of \(A/\|A\|_{F}\) and a quantum state of \(b/\|b\|\), this preprocessing can be done in \(O(d^{8}\log^{8}(d)\log^{2}(n)\|A\|_{F}^{4}/\varepsilon^{4})\) time.
**Remark 2.2** (No large speedup for low-degree QSVT circuits).: Recall the setting of QSVT for QML, where we have \(A\) with bounded operator norm in a quantum-accessible data structure, can apply a QSVT circuit, and wish to learn a simple linear algebraic property of the output state (say, the inner product \(\langle v|p(A)b\rangle\) for a given vector \(v\)). As discussed, the quantum gate complexity of this QSVT algorithm4 is \(\Omega(d\|A\|_{F}/\varepsilon)\).
Classically, we can use Theorem 2.1 to compute a description for \(p(A)b\) in \(O(d^{9}\log^{4}(d)\|A\|_{F}^{4}\log(n)/\varepsilon^{2})\) time, which we can then use to compute an entry or estimate an overlap \(\langle v|p(A/\|A\|_{F})b\rangle\). The runtime is dominated by the cost of the initial algorithm, and so the gap between quantum and classical is 1-to-9 for the degree \(d\), 1-to-4 for \(\|A\|_{F}\) (which we think of as square root of stable rank \(\frac{\|A\|_{F}}{\|A\|}\)), and 1-to-2 for \(\varepsilon\). So, considering constant degree, the gap is merely quartic, which is the type of speedup that prior work suggests might just suffice to achieve a quantum advantage for intermediate-term quantum computers [14].
**Remark 2.3** (Comparison to [14, 15, 20]).: There are three papers that get similar results about "dequantizing the quantum singular value transformation". The work of Chia, Gilyen, Li, Lin, Tang, and Wang [14] gives a runtime of \(\widetilde{O}(d^{22}\|A\|_{F}^{6}/\varepsilon^{6})\) (after renormalizing so that \(\|A\|=1\) instead of \(\|A\|_{F}=1\)). We improve in all parameters over this work: degree of the polynomial \(p\), Frobenius norm of \(A\), and accuracy parameter \(\varepsilon\).
The work of Jethwani, Le Gall, and Singh [15] provides two algorithms for applying \(f(A)b\) for Lipschitz-continuous \(f\) and \(A\) with condition number \(\kappa\). (Standard results in function approximation state that such functions can be approximated by polynomials of degree \(O(L)\), where \(L\) is the Lipschitz constant of \(f\) and the tail is either polynomial or exponential depending on how smooth \(f\) is [16].) In particular, they achieve a runtime of \(O(\|A\|_{F}^{6}\kappa^{20}(d^{2}+\kappa)^{6}/\varepsilon^{6})\) to apply a degree-\(d\) polynomial. Again, we improve in all parameters over this work: degree of the polynomial \(p\), Frobenius norm of \(A\), and accuracy parameter \(\varepsilon\). We also do not incur condition number dependence.
Finally, the work of Gharibian and Le Gall [15] considers QSVT when the input matrices are sparse, which is the relevant regime for quantum chemistry and other problems in many-body systems. Here, the matrices of interest are local Hamiltonians, which are row- and column-sparse. Their main contribution is distinguishing between constant-degree QSVT circuits, which can be simulated in polynomial time classically, and polynomial-size QSVT circuits (in the number of qubits), which can solve BQP-complete problems. We deal with a different setting, where all circuits can be simulated efficiently in polynomial time.
Next, we describe the implications of Theorem 2.1 to specific problems in QML and show that we obtain faster _quantum-inspired_ algorithms for several such problems.
### Applications: Dequantizing QML
We begin by stating the de-quantization result we obtain for regression.
**Corollary 2.4** (De-quantizing Regression, informal Corollary 11.5).: _Given an \(\ell_{2}^{2}\)-sampling oracle for \(A\in\mathbb{C}^{m\times n}\) and \(b\in\mathbb{C}^{n}\) such that \(\|A\|,\|b\|\leqslant 1\), a parameter \(0<\sigma<1\), and an accuracy parameter \(\varepsilon>0\), we can output the representation of a vector \(y\in\mathbb{C}^{n}\) such that with probability at least \(0.9\), \(\left\|y-A_{\geq\sigma}^{+}b\right\|\leqslant\varepsilon\), where \(A_{\geq\sigma}^{+}\) denotes the function on \(A\) that is the inverse for singular values that are \(\geq\sigma\) and smoothly thresholds away all singular values below \(\sigma\). This algorithm runs in \(\tilde{O}\Big{(}\|A\|_{F}^{4}/(\varepsilon^{2}\sigma^{11})\Big{)}\) time, where \(\sigma\) is the chosen threshold for \(A\). We can also output a sample from \(y\) in the same running time._
**Remark 2.5** (Comparison with [14, 15, 21]).: We note that Chia et. al. [14] get a running time of \(O\Big{(}\|A\|_{F}^{6}/(\varepsilon^{6}\sigma^{28})\Big{)}\). We improve over this result in all parameters.
Chepurko et. al. [14] get a running time of \(\tilde{O}(\|A\|_{F}^{4}\log(d)/(\sigma^{8}\varepsilon^{4}))\), where \(\sigma\) is the minimum singular value of \(A\), assuming that we perform regression with some sizable regularization
\(\lambda>0\). We improve over their work in \(\varepsilon\) dependence, and match it in \(\|A\|_{F}\) dependence. As for the \(\sigma\) dependence, their result achieves \(\varepsilon\|A\|\|b\|\) error, which is worse than the \(\varepsilon\|x^{*}\|\) bound that we actually achieve; making the conversion costs an additional \(1/\sigma^{4}\) in overhead. So, for this low-error setting, we achieve better \(\sigma\) dependence, but if only worse error is required, their algorithm performs better.
Shao and Montanaro [21] get a running time of \(O\big(\|A\|_{F}^{6}/(\sigma^{8}\varepsilon^{2})\big)\), which matches our \(\varepsilon\)-dependence, obtains a better \(\sigma\)-dependence, and has a worse Frobenius norm dependence. Additionally, they require that \(b\) be in the image of \(A\).
In the context of recommendation systems, the goal is to output a sample from the rows of a low-rank approximation to \(A\), denoted by \(A_{\geqslant\sigma}\), where we zero out all the singular values smaller than \(\sigma\) (see Section 11.1 for a formal problem description). We then obtain the following corollary:
**Corollary 2.6** (De-quantizing Recommendation Systems, informal Corollary 11.3).: _Given a matrix \(A\in\mathbb{C}^{m\times n}\) such that \(\|A\|\leqslant 1\), an accuracy parameter \(\varepsilon\), and an \(i\in[n]\), we can produce a data structure in \(O(\operatorname{nnz}(A))\) time such that we can compute a vector \(x_{i}\) such that with probability at least \(0.9\), \(\big\|A^{\dagger}x_{i}-[A_{\geqslant\sigma}]_{i,*}\big\|\leqslant\varepsilon\|A_{i,*}\|\) in \(\tilde{O}\big(\|A\|_{F}^{4}/\left(\sigma^{9}\varepsilon^{2}\right)\big)\) time. Further, we can \(\ell_{2}^{2}\)-sample from \(x_{i}\) in \(\tilde{O}\big(\|A\|_{F}^{4}/\big(\sigma^{6}\varepsilon^{2}\big\|A^{\dagger}x_{i}\big\|^{2}\big)\big)\) time._
**Remark 2.7** (Comparison to [20, 21]).: Chia et. al. [20] achieve a runtime of \(\tilde{O}\big{(}\frac{\|A\|_{F}^{6}}{\sigma^{16}\varepsilon^{6}\varepsilon^{ 4}}\big{)}\). We improve upon it in every parameter, including error \(\varepsilon\), the threshold \(\sigma\), and the Frobenius norm \(\|A\|_{F}\).
Chepurko et al. [20, Theorem 26] achieve a runtime that is at least \(\Omega(k^{3}/\varepsilon^{6})\) to get a low-rank approximation of an input matrix \(M\) with the guarantee that \(\|A-M\|_{F}^{2}\leqslant(1+\varepsilon)\|A-A_{k}\|_{F}^{2}\). The authors use that \(\ell_{2}^{2}\) importance sampling sketches oversample the ridge leverage score sketch in certain parameter regimes, so this requires certain additional assumptions on the size of \(\|A_{k}\|_{F}\) and the residual \(\|A-A_{k}\|_{F}\). Work on quantum recommendation systems [21, 22] requires a singular value threshold \(\sigma\) instead of a rank threshold \(k\), and the standard way to convert between this "sketching"-style error bound and the "QSVT"-style error bound is to bound \(k\leqslant\|A\|_{F}^{2}/\sigma^{2}\). Upon doing this, we see that the runtime is \(\tilde{O}\big(\frac{\|A\|_{F}^{6}}{\sigma^{6}\varepsilon^{6}}\big)\). Our work improves upon this in the \(\|A\|_{F}\) and \(\varepsilon\) parameters, but we lose a factor of \(\sigma^{3}\).
Next, we state the de-quantization result we obtain for Hamiltonian simulation:
**Corollary 2.8** (Hamiltonian Simulation, informal Corollary 11.8).: _Given a symmetric Hamiltonian \(H\in\mathbb{C}^{n\times n}\) with \(\|H\|\leqslant 1\) and a vector \(b\in\mathbb{C}^{n}\), we can output a description of a vector \(v\) such that, with probability \(\geqslant 0.9\), \(\big\|v-e^{iHt}b\big\|\leqslant\varepsilon\|b\|\), in \(\tilde{O}(t^{9}\|H\|_{F}^{4}/\varepsilon^{2})\) time._
We note that the only prior work [20] we are aware of in the low-rank regime obtains a running time \(O\Big{(}t^{16}\|H\|_{F}^{6}/\varepsilon^{6}\Big{)}\), and we improve upon it in every parameter.
## 3 Technical overview
In this section, we describe our classical framework for simulating QSVT and provide an overview of our key new contributions. In brief: our main conceptual contribution, using an iterative method (the Clenshaw recurrence) instead of a pure sketching algorithm, is enough to achieve \(O(d^{13}\|A\|_{F}^{4}/\varepsilon^{4})\), corresponding to \(d\) iterations of matrix-vector products of size \(\|A\|_{F}^{2}/(\varepsilon/d^{3})^{2}\) by \(\|A\|_{F}^{2}/(\varepsilon/d^{3})^{2}\). With insights into the stability of the Clenshaw recurrence
and sums of Chebyshev coefficients, we improve the \(\varepsilon/d^{3}\) to an \(\varepsilon/(d^{2}\log^{2}(d))\). In the worst case, we may need to rescale \(\varepsilon\) to \(\varepsilon/d^{2}\); in other words, our stability analysis is tight up to \(\log(d)\) factors. With insights into matrix sparsification, we improve the \(\varepsilon^{4}\) to an \(\varepsilon^{2}\), which is clearly tight. Together,5 this gives the final runtime of \(O(d^{9}\log^{4}(d)\|A\|_{F}^{4}/\varepsilon^{2})\). We begin by providing an informal description of the input access model.
Footnote 5: It is natural to wonder here why the complexity is not something like \(d\|A\|_{F}^{4}/(\varepsilon/(d^{2}\log^{2}(d)))^{2}=d^{5}\log^{4}(d)\|A\|_{F}^{4}/\varepsilon^{2}\). Such a runtime is conceivable, but our analysis essentially replaces two factors of \(1/\varepsilon\) with factors of \(1/d^{2}\), so our sketching ideas do not save any factors of \(d\).
Input model: Oversampling and query access.Our goal is to give a classical version of the quantum singular value transformation. In the quantum setting, we have a matrix \(A\in\mathbb{C}^{m\times n}\) with \(\|A\|\leqslant 1\) (given as a _block-encoding_) and a vector \(b\) with \(\|b\|=1\) (given encoded into the amplitudes of a quantum state), and wish to compute \(f(A)b\), where \(f:[-1,1]\to\mathbb{R}\) is an appropriately smooth function. The block-encoding access model is inherently quantum, but there are standard ways to construct a block-encoding for a matrix from classical data, including by assuming the matrix is sparse with efficiently computable entries [1, Lemma 48] and given as a quantum circuit [1, Definition 44]. Both of these support universal quantum computation, and therefore cannot be simulated classically unless BPP=BQP.
However, block-encodings can also be achieved given an arbitrary matrix in a data structure placed in quantum random access memory [1, Lemma 50], a proposal for a quantum hardware architecture theorized to give the ability for quantum computers to efficiently access stored memory in superposition [1]. As noted in prior work, this model incurs a square root of stable rank, i.e. \(O(\|A\|_{F}/\|A\|)\) overhead, allowing it to be simulated classically with only polynomial overhead with sketching algorithms [1].
Chia et. al. [1] proceed by introducing the access model of _oversampling and query access_, which can be interpreted as the classical analogue of the block-encoding and amplitude-encoded quantum state in the quantum setting. In particular, given a vector \(v\in\mathbb{C}^{n}\), _oversampling and query access_ corresponds to: (1) given an index \(i\in[n]\), outputting \(v_{i}\), (2) sampling an index \(j\) with probability \(\left|v(j)\right|^{2}/\|v\|^{2}\), and (3) outputting \(\|v\|^{2}\). Similarly for a matrix \(A\in\mathbb{C}^{m\times n}\), _oversampling and query access_ corresponds to having _oversampling and query access_ to all rows of A, as well as the vector of the row norms. We point the reader to [1] for an explanation of why this model is the right classical analogue to benchmark quantum algorithms against. In short, this model has closure properties very similar to that of the block-encoding, and can be achieved whenever quantum states and generic block-encodings can be prepared efficiently.
Computing polynomials of matrices.The main primitive of QSVT is to take a block-encoding of a matrix \(A\) and give a block-encoding of an even or odd polynomial of that matrix, \(p(A)\), with an overhead of \(\deg(p)\). When \(A\) is asymmetric, we can interpret QSVT as applying the matrix function that applies \(p\) to each singular value of \(A\) (Definition 5.1). In this way, quantum linear algebra algorithms can apply generic functions to the singular values of a matrix, provided that they are smooth enough to be approximated well by a low-degree polynomial.
In order to simulate QSVT classically, given a matrix \(A\in\mathbb{C}^{m\times n}\), a vector \(b\in\mathbb{C}^{n}\), and a polynomial \(p:[-1,1]\to\mathbb{R}\), our goal is to compute some description of \(p(A)b\). Specifically, we aim for our algorithm to run in \(\operatorname{poly}(\|A\|_{F},\frac{1}{\varepsilon},d,\log(mn))\) time after \(O(\operatorname{nnz}(A)+\operatorname{nnz}(b))\) preprocessing, and to output sampling and query access to \(x\), where \(\|x-p(A)b\|\leqslant\)
\(\varepsilon\|p\|_{\sup}\|b\|\), where \(\|p\|_{\sup}=\max_{x\in[-1,1]}|p(x)|\). We note that as a byproduct, we also obtain a \(O(\operatorname{nnz}(A)+\operatorname{nnz}(b)+\operatorname{poly}(\|A\|_{F},1/\varepsilon,d,\log(mn)))\) algorithm for the task of outputting \(x\).
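For concreteness, here is a minimal numpy sketch of what sampling and query access to a vector looks like (the oversampling-and-query access to a matrix described above keeps one such structure per row, plus one for the vector of row norms); the cumulative-sum array below stands in for the usual tree data structure, so construction takes linear time and each \(\ell_{2}^{2}\) sample costs \(O(\log n)\).

```python
import numpy as np

class SampleQueryAccess:
    """Sampling and query access to a vector v: entry queries, the squared
    norm, and samples of an index j drawn with probability |v_j|^2 / ||v||^2."""

    def __init__(self, v):
        self.v = np.asarray(v)
        self.cumulative = np.cumsum(np.abs(self.v) ** 2)

    def query(self, i):
        return self.v[i]

    def squared_norm(self):
        return self.cumulative[-1]

    def sample(self, rng=None):
        rng = rng or np.random.default_rng()
        u = rng.uniform(0.0, self.squared_norm())
        idx = int(np.searchsorted(self.cumulative, u, side="right"))
        return min(idx, len(self.v) - 1)
```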
We require the running time of the algorithm to be independent of input dimension (after the preprocessing) and therefore are compelled to create sketches of \(A\) and \(b\) and work with these sketches. We note that prior work [19, 19, 20] stops here, and directly computes a SVD of the sketches, and applies the relevant polynomial \(p\) to the singular values of the sketch. As noted in the previous section, this approach loses large polynomial factors in the relevant parameters.
Combining sketches with iterative algorithms.Our main conceptual insight is to run iterative algorithms on the resulting sketches of \(A\) and \(b\) in order to approximate matrix polynomials. In the canonical numerical linear algebra regime (working with matrices directly, instead of sketches), there are two standard methods to achieve this goal: (1) compute the polynomial explicitly through something like a Clenshaw recurrence [21], or (2) use the Lanczos method [19] to create a roughly-orthogonal basis for the Krylov subspace \(\{b,Ab,A^{2}b,\ldots\}\) and then apply the function exactly to the matrix in this basis, in which \(A\) is tridiagonal, implicitly using a polynomial approximation in the analysis. We note that in a world where we are allowed to pay \(O(\operatorname{nnz}(A))\) (or even \(O(m+n)\)) time per-iteration, we can simply run either of these algorithms and call it a day. The main challenge in the setting we consider is that each iterative step must run in time that is dimension-independent.
Given that we sketch each iterate down to a size that is dimension-independent, we introduce additive error at each step. So, in essence, we must perform a stability analysis of an iterative method, where the error introduced in each iteration is from truncating the iterates. Finite-precision/stability analysis of Clenshaw and Lanczos iteration are well-understood [21, 22, 23], and therefore one might expect to use these analysis in a black-box manner. However, unlike typical finite-precision analyses, which is concerned with error either at the granularity of "number of bits to maintain", and so is fine with polynomial loss, our final runtime depends polynomially on the quality of our error analysis. This constraint requires us to form a more intimate understanding of these iterative methods.
Folklore intuition suggests that Lanczos is a stabler algorithm for applying matrix functions, but state-of-the-art analyses of it rely on the stability of the Clenshaw recurrence as a subroutine (see [24]), and therefore gives strictly worse error-accumulation bounds than the Clenshaw recurrence. In particular, if we wish to compute a generic \(p(A)b\) to \(\varepsilon\) error in the regime where every matrix-vector product \(Ax\) incurs an error of \(\varepsilon\|A\|\|x\|\) using Lanczos, the stability analysis of Musco, Musco, and Sidford suggests that the error of the output is \(O(d^{5.5}\varepsilon)\), which would introduce a \(d^{11}\) in our setting [24].6 To incur less error, we do not use Lanczos and analyze the Clenshaw recurrence directly.
Footnote 6: This computation arises from taking Lemma 9 of [24] to evaluate a degree-\(d\) polynomial, say, bounded by \(1\) in \([-1,1]\). A generic example of such a polynomial is only bounded by a constant in \([-1-\eta,1+\eta]\) when \(\eta=O(1/d^{2})\) (Lemma 5.4), and has Chebyshev coefficients bounded only by a constant, without decaying. Thus, the bound from Lemma 9 becomes \(O(d^{5}\|E\|)\). Continuing the analysis into [22], \(E\) is the matrix whose \(i\)th column is the error incurred in the \(i\)th iteration; each column has norm \(\varepsilon\|A\|\|v_{j+1}\|=\varepsilon\) in our setting where we only incur error in matrix-vector products, since \(\|A\|=1\) by normalizing and \(\|v_{j+1}\|=1\) because the algorithm normalizes it to unit norm, and we assume that scalar addition and multiplication can be performed exactly. We have no control over the direction of error, so we can at best bound \(\|E\|\) with the column-wise bound, giving \(\|E\|\leqslant\|E\|_{F}\leqslant\sqrt{k}\varepsilon\). So, our version of [24, Equation 16] gives \(\|E\|\leqslant\sqrt{d}\varepsilon\), which gives us the asserted \(O(d^{5.5}\varepsilon)\) bound.
Stability of the scalar Clenshaw recurrence.The Chebyshev polynomials \(\{T_{\ell}(x)\}_{\ell}\) form a basis and therefore any degree-\(d\) polynomial \(p(x)\) can be written as a linear combination of Chebyshev polynomials, i.e. \(p(x)=\sum_{\ell=0}^{d}a_{\ell}T_{\ell}(x)\). The Clenshaw recurrence computes \(p(x)\) through the iteration computing \(q_{d}\) through to \(q_{0}\):
\[q_{d+1},q_{d+2} \coloneqq 0;\] \[q_{k} \coloneqq 2xq_{k+1}-q_{k+2}+a_{k};\] \[p(x) =\tfrac{1}{2}(a_{0}+q_{0}-q_{2}).\]
For example, in the randomized numerical linear algebra (RNLA) literature, this is often applied in the case where \(a_{d}=1\) and \(a_{k}=0\) otherwise, to evaluate a degree-\(d\) Chebyshev polynomial \(T_{d}(x)\). Note that by the Markov brothers' inequality, a bounded polynomial has derivative bounded by \(d^{2}\)[11]. This bound is attained by the Chebyshev polynomial \(p(x)=T_{d}(x)\). So, if our error were only in changing \(x\) to some value in \((x-\varepsilon,x+\varepsilon)\), this error can cascade to a \(O(d^{2}\varepsilon)\) worst-case error in the output. This rough argument suggests that, in some sense, a \(O(d^{2})\) overhead is the best we could hope for.
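To make the recurrence concrete, here is a minimal NumPy sketch of the scalar Clenshaw evaluation (exact arithmetic, no sketching); the function name and the check against NumPy's chebval are ours and purely illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

def clenshaw(a, x):
    """Evaluate p(x) = sum_k a[k] * T_k(x) via the Clenshaw recurrence
    q_k = 2*x*q_{k+1} - q_{k+2} + a_k, returning (a_0 + q_0 - q_2) / 2."""
    d = len(a) - 1
    q = np.zeros(d + 3)              # q[d+1] = q[d+2] = 0
    for k in range(d, -1, -1):
        q[k] = 2 * x * q[k + 1] - q[k + 2] + a[k]
    return 0.5 * (a[0] + q[0] - q[2])

rng = np.random.default_rng(0)
a = rng.standard_normal(9)           # Chebyshev coefficients of a degree-8 polynomial
for x in np.linspace(-1, 1, 5):
    assert np.isclose(clenshaw(a, x), cheb.chebval(x, a))
```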
Our first technical contribution is an analysis showing that the Clenshaw algorithm achieves this optimal overhead, up to a logarithmic factor. This proceeds by showing that the overhead can be upper bounded by the size of the largest Clenshaw iterate \(|q_{k}|\) (see Proposition 9.1), and then bounding the size of the iterates \(|q_{k}|\) by \(O(d\log(d)\|p\|_{\text{sup}})\) (see Theorem 9.4). The main lemma we prove states that for a bounded polynomial \(p(x)=\sum_{\ell=0}^{d}a_{\ell}T_{\ell}(x)\), sums of the form \(a_{\ell}+a_{\ell+2}+\cdots\) are all bounded by \(O(\log(\ell)\|p\|_{\text{sup}})\) (Fact 7.3). This statement follows from facts in Fourier analysis (in particular, this \(\log(\ell)\) is the same \(\log\) as the one that occurs when bounding the \(L^{1}\) norm of the Dirichlet kernel). Finally, we note that just using individual bounds on coefficients does not suffice, since this would only give a bound of \(O(d)\) on these sums, which can be exponentially worse than the bound we obtain, and would give \(O(d^{3})\) overhead (and a \(d^{13}\) in the final runtime instead of a \(d^{9}\)).
We note that we are not aware of any prior work where such a sharp analysis appears in the literature. The standard literature either considers an additive error (where, unlike usual models like floating-point arithmetic, each multiplication incurs identical error regardless of magnitude) [10, 11, 12] or eventually boils down to bounding \(|a_{i}|\) (since their main concern is dependence on \(x\)) [13, 14], which is insufficient to get our \(d^{2}\log(d)\) stability bound. The modern work we are aware of shows a \(O(d^{2})\) bound only for Chebyshev polynomials [1], sometimes used to give a \(O(d^{3})\) bound for computing generic bounded polynomials [13], since a degree-\(d\) polynomial can be written as a linear combination of \(T_{k}(x)\) with bounded coefficients.
Extending Clenshaw to compute matrix polynomials.So far, we have only considered the Clenshaw recurrence for a scalar \(x\). Generalizing this to matrices is fairly straightforward: for a matrix \(A\) (not necessarily square), we wish to apply an even or odd polynomial to it, and then apply the result to a vector \(b\). Then, we can use a corresponding variant of the Clenshaw recurrence to compute it. For example, the matrix Clenshaw recurrence for computing \(p(x)=\sum_{\ell=0}^{d}a_{2\ell+1}T_{2\ell+1}(x)\) is
\[u_{d+1},u_{d+2} \coloneqq 0;\] \[u_{k} \coloneqq 2(2AA^{\dagger}-I)u_{k+1}-u_{k+2}+2a_{2k+1}Ab;\] \[p(A)b =u \coloneqq\tfrac{1}{2}(u_{0}-u_{1}).\]
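A small NumPy sketch of this odd matrix Clenshaw recurrence (still exact, before any sketching), checked against applying \(p\) directly to the singular values as in Definition 5.1; the helper names are ours.

```python
import numpy as np

def odd_matrix_clenshaw(A, b, a_odd):
    """Compute p(A) b for p(x) = sum_k a_odd[k] * T_{2k+1}(x) via
    u_k = 2(2 A A^† - I) u_{k+1} - u_{k+2} + 2 a_odd[k] A b, output (u_0 - u_1)/2."""
    d = len(a_odd) - 1
    Ab = A @ b
    u = [np.zeros(A.shape[0]) for _ in range(d + 3)]
    for k in range(d, -1, -1):
        u[k] = (2 * (2 * (A @ (A.conj().T @ u[k + 1])) - u[k + 1])
                - u[k + 2] + 2 * a_odd[k] * Ab)
    return 0.5 * (u[0] - u[1])

def odd_poly_via_svd(A, b, a_odd):
    """Reference: apply p to the singular values directly (odd case of Definition 5.1)."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    ps = sum(c * np.cos((2 * k + 1) * np.arccos(s)) for k, c in enumerate(a_odd))
    return U @ (ps * (Vh @ b))

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 20)); A /= 1.1 * np.linalg.norm(A, 2)   # ensure ||A|| < 1
b = rng.standard_normal(20)
a_odd = np.array([0.3, -0.5, 0.2, 0.1])    # p = 0.3 T_1 - 0.5 T_3 + 0.2 T_5 + 0.1 T_7
assert np.allclose(odd_matrix_clenshaw(A, b, a_odd), odd_poly_via_svd(A, b, a_odd))
```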
The question becomes how to perform this iteration efficiently and stably, in a dimension-independent regime. We begin by sketching down our matrix and vector: we show that it suffices to maintain a sparse description of \(u_{k}\) of the form \(u_{k}=Av_{k}\) where \(v_{k}\) is sparse. In particular, we produce sketches \(S\in\mathbb{C}^{n\times s}\) and \(T\in\mathbb{C}^{t\times m}\) such that
1. \(\|AS(AS)^{\dagger}-AA^{\dagger}\|\leqslant\epsilon\),
2. \(\|ASS^{\dagger}b-Ab\|\leqslant\epsilon\),
3. \(\|(TAS)^{\dagger}(TAS)-(AS)^{\dagger}(AS)\|\leqslant\epsilon\).
Sketches that satisfy the above type of guarantees are called _approximate matrix product_ (AMP) sketches, and are standard in the quantum-inspired algorithms literature [10]. We observe that if we have sampling and query access to \(A\) and \(b\), then we can produce these sketches of size \(s,t=O(\frac{\|A\|_{F}^{2}}{\epsilon^{2}}\log(n)\log\frac{1}{\delta})\), and then compute \(TAS\) in \(O(st)\) time. We also note that the first two guarantees follow from observing that AMP sketches oversample the symmetric approximate matrix product sketch by a factor of \(2\), and thus both guarantees hold simultaneously. The third guarantee is straight-forward and does not require asymmetry. Using these guarantees we can sketch the iterates as follows:
\[u_{k} =2(2AA^{\dagger}-I)u_{k+1}-u_{k+2}+2a_{2k+1}Ab\] \[=4AA^{\dagger}Av_{k+1}-2Av_{k+1}-Av_{k+2}+2a_{2k+1}Ab\] \[\approx AS[4(TAS)^{\dagger}(TAS)v_{k+1}-2v_{k+1}-v_{k+2}+2a_{2k+1} S^{\dagger}b].\]
Therefore, we can interpret Clenshaw iteration as the recursion on the dimension-independent term \(v_{k}\approx 4(TAS)^{\dagger}(TAS)v_{k+1}-2v_{k+1}-v_{k+2}+2a_{2k+1}S^{\dagger}b\), and then applying \(AS\) on the left to lift it back to \(m\)-dimensional space. We can then analyze the sketched recursion to obtain an algorithm that runs in \(O(\|A\|_{F}^{4}/\epsilon^{4})\) time per-iteration, not including the loss from rescaling \(\epsilon\), whereas we wish to achieve a \(O(1/\epsilon^{2})\) dependence per-iteration.
Though so far we have only used a very limited subset of the sketching toolkit--namely, \(\ell_{2}^{2}\) importance sampling--we remark that it's not clear how, for example, oblivious sketches [13] or the connection between importance sampling and leverage score sampling [10] help us, since our choices of sketches are optimal up to log factors for the guarantees we desire. To get the additional improvement, we need a new sketching technique.
Improving the \(\epsilon\)-dependence.A natural next step to improve per-iteration runtime is to sparsify the matrix \(TAS\), in order to make matrix-vector products more efficient. If we can sparsify \(TAS\) to \(O(1/\epsilon^{2})\) non-zero entries, then we get the desired quadratic savings in per-iteration cost.
There is significant literature on sparsifying the entries of a matrix [1, 1, 1]. However, it does not suffice for us to use these as a black box. For example, consider the sketch given by Drineas and Zouzias [11]: for a matrix \(M\in\mathbb{R}^{n\times n}\), zero out every entry smaller than \(\frac{\epsilon}{2n}\), then sample entries proportional to their \(\ell_{2}^{2}\) magnitude, and consider the corresponding unbiased estimator of \(M\), denoted \(\tilde{M}\). The guarantee is that the operator norms are close, \(\|M-\tilde{M}\|\leqslant\epsilon\), and the sparsity is \(O(n\log(n)\frac{\|A\|_{F}^{2}}{\epsilon^{2}})\). This is precisely the guarantee we need and the sketch can be performed efficiently; however, this does not sparsify the matrix for us, since in our setting, our matrices \(TAS\) have dimension \(\|A\|_{F}^{2}/\epsilon^{2}\), so the sparsity guarantee is only \(O(\|A\|_{F}^{4}/\epsilon^{4}\ln(n))=O(st\ln(n))\). In other words, this sketch gives us no improvement on sparsity!
Bi-linear entry-wise sampling transform.Our second main technical contribution is to bypass this barrier by noticing that we don't need a guarantee as strong as a spectral norm bound. Instead, we only need to achieve approximations of the form
\[\left\|ASMx_{k}-AS\bar{M}x_{k}\right\|\leqslant\varepsilon, \tag{1}\]
for various different choices of \(x_{k}\). So, we only need our sketch \(\bar{M}\) to approximate \(M\) in some directions with good probability, instead of approximating it in all directions. We define a simple sketch, which we call the Bilinear Entry Sampling Transform (best7), which is an unbiased estimator of \(A\) itself (rather than of the product \(A^{\dagger}A\)) and achieves the aforementioned guarantee (Equation (1)). This sketch is sampled from the same distribution as the one of Drineas and Zouzias from above, but without zeroing out the small entries. In particular, we define
Footnote 7: That is, we pronounce best\((A)\) as “best of \(A\)”. We make no normative claim about our sketch vis-a-vis other sketches.
\[M^{(k)}=\frac{1}{p_{i,j}}A_{i,j}e_{i}e_{j}^{\dagger}\quad\text{ with probability }p_{i,j}=\frac{\left|A_{i,j}\right|^{2}}{\left\|A\right\|_{F}^{2}}.\]
Then, to get the sketch with sparsity \(r\), we take the average of \(r\) independent copies of the above random matrix:
\[\textsc{best}(A)=\frac{1}{r}\sum_{k\in[r]}M^{(k)}.\]
This definition is not new: for example, one of the earliest papers on matrix sparsification for linear algebra algorithms briefly considers this sketch [1], and this sketch has been used implicitly in prior work on dequantizing QML algorithms [13]. However, as far as we know, our analysis of this sketch for preserving bi-linear forms and saving a factor of \(1/\varepsilon^{2}\) is novel.
We show that best\((M)\) satisfies the following guarantees: taking \(r=\Omega(\|M\|_{F}^{2}/\varepsilon^{2})\), this sketch preserves the bilinear form \(u^{\dagger}Mv\) to \(\varepsilon\|u\|\|v\|\) error with probability \(\geqslant 0.9\) (Lemma 6.2). Second, taking \(r=\Omega(\|M\|_{F}^{2}n)\), this sketch preserves the norm \(\|Mv\|\) to \(0.1\|v\|\) error with probability \(\geqslant 0.9\). So, by taking \(r=\Omega(\|M\|_{F}^{2}(n+\frac{1}{\varepsilon^{2}}))\), we can get both properties. However, with this choice of \(r\), best\((M)\) does not preserve the spectral norm \(\|M\|\), even to constant error.8 This sketch can be interpreted as a relaxation of the approximate matrix product and the sparsification sketches above, since we use it in regimes where \(\|\textsc{best}(M)\|\) is unbounded and \(\|\textsc{best}(M)b\|\) is not \(\varepsilon\)-close to \(\|Mb\|\). However, if we use it in the "interior" of a matrix product expression, as a proxy for approximate matrix product, it can be used successfully to improve the \(n/\varepsilon^{2}\) dependence of spectral norm entry sparsification to something like \(n+1/\varepsilon^{2}\). In our setting, this corresponds to an improvement of \(1/\varepsilon^{4}\) to \(1/\varepsilon^{2}\).
Footnote 8: At this point, one might wonder whether one can simply zero out small entries to get these guarantees with the additional spectral norm bound. This is not the case; if we only zero out entries smaller than \(\varepsilon/(2n)\), this threshold is too small to improve the matrix Bernstein tail bound, whereas if we zero out entries smaller than, say, \(1/(100n)\), the matrix Bernstein tail bound goes through, but the bilinear form is not \(\varepsilon\)-preserved.
Applying best to Clenshaw Iteration.Returning to our odd Clenshaw iteration, we had our approximate iterate
\[u_{k}\approx AS[4(TAS)^{\dagger}(TAS)v_{k+1}-2v_{k+1}-v_{k+2}+2a_{2k+1}S^{ \dagger}b].\]
We can approximate this by taking \(B=\textsc{best}(TAS)\) and \(B_{\dagger}=\textsc{best}((TAS)^{\dagger})\) with sparsity \(r=\Theta(\|A\|_{F}^{4}/\varepsilon^{2})\) to get that the bilinear forms like \([AS]_{i,*}(TAS)^{\dagger}((TAS)v_{k+1})\) are preserved. This allows us to successfully approximate with sparsified matrices,
\[\approx AS[4B_{\dagger}Bv_{k+1}-2v_{k+1}-v_{k+2}+2a_{2k+1}S^{\dagger}b].\]
\[\text{so }v_{k}\approx 4B_{\dagger}Bv_{k+1}-2v_{k+1}-v_{k+2}+2a_{2k+1}S^{ \dagger}b.\]
This recurrence in \(v_{k}\) will be our algorithm (Algorithm 10.4): compute \(S\) and \(T\) in the preprocessing phase, then compute the iteration of \(v_{k}\)'s, pulling fresh copies of \(B\) and \(B_{\dagger}\) each time, and output the final iterates as the description of the output \(u\approx p(A)b\). What remains is the error analysis, which, as in the scalar case, requires bounding the size of the iterates \(\|v_{k}\|\). We use the \(\varepsilon\)-approximation of the bilinear form to show that the sparsifications \(B\) and \(B_{\dagger}\) successfully approximate \(u_{k}\), and we use the constant-error approximation of the norm to show that the \(\|v_{k}\|\)'s are bounded.
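The following is a schematic, unoptimized NumPy sketch of this pipeline: \(\ell_{2}^{2}\)-sampled \(S\) and \(T\), fresh best sketches in each iteration, and the final lift by \(AS\). The sketch sizes, helper names, and the error printout are ours and purely illustrative; they are not the tuned parameters or the precise procedure of Algorithm 10.4.

```python
import numpy as np

rng = np.random.default_rng(2)

def l22_cols(M, size):
    """Sample `size` columns of M proportionally to squared column norms; return the
    rescaled sampling matrix S (shape M.shape[1] x size), so that M S (M S)^T ~ M M^T."""
    p = np.linalg.norm(M, axis=0) ** 2
    p = p / p.sum()
    idx = rng.choice(M.shape[1], size=size, p=p)
    S = np.zeros((M.shape[1], size))
    S[idx, np.arange(size)] = 1.0 / np.sqrt(size * p[idx])
    return S

def best(M, r):
    """Bi-linear entry-wise sampling transform: average of r one-sparse unbiased
    estimators of M, entries drawn proportionally to their squared magnitude."""
    p = np.abs(M) ** 2
    p = p / p.sum()
    i, j = np.unravel_index(rng.choice(M.size, size=r, p=p.ravel()), M.shape)
    out = np.zeros_like(M)
    np.add.at(out, (i, j), M[i, j] / (r * p[i, j]))
    return out

def sketched_odd_clenshaw(A, b, a_odd, s, t, r):
    S = l22_cols(A, s)                        # S: n x s, so AS is m x s
    AS = A @ S
    T = l22_cols(AS.T, t).T                   # T: t x m (rows of AS, squared-norm sampling)
    TAS, Sb = T @ AS, S.T @ b
    d = len(a_odd) - 1
    v = [np.zeros(s) for _ in range(d + 3)]
    for k in range(d, -1, -1):                # fresh best sketches in every iteration
        B, Bd = best(TAS, r), best(TAS.T, r)
        v[k] = 4 * (Bd @ (B @ v[k + 1])) - 2 * v[k + 1] - v[k + 2] + 2 * a_odd[k] * Sb
    return AS @ (0.5 * (v[0] - v[1]))         # lift the dimension-independent iterate

def exact_odd_poly(A, b, a_odd):
    U, sv, Vh = np.linalg.svd(A, full_matrices=False)
    ps = sum(c * np.cos((2 * k + 1) * np.arccos(sv)) for k, c in enumerate(a_odd))
    return U @ (ps * (Vh @ b))

m, n, rank = 400, 300, 3
A = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))
A /= 1.1 * np.linalg.norm(A, 2)               # spectral norm < 1, small Frobenius norm
b = rng.standard_normal(n); b /= np.linalg.norm(b)
a_odd = np.array([0.4, -0.3, 0.2])            # p = 0.4 T_1 - 0.3 T_3 + 0.2 T_5
u = sketched_odd_clenshaw(A, b, a_odd, s=500, t=500, r=50_000)
ref = exact_odd_poly(A, b, a_odd)
print(np.linalg.norm(u - ref) / np.linalg.norm(ref))  # relative error; shrinks as s, t, r grow
```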
Challenges in extending finite-precision Clenshaw iteration.We note here that the above error does not directly follow from the error of the sketches along with the finite-precision scalar Clenshaw iteration. The error we incur in iteration \(k\) is not \(\varepsilon\|u_{k+1}\|\), but \(\varepsilon\|v_{k+1}\|\), so our error analysis requires understanding both the original Clenshaw iteration along with the "dual" Clenshaw iteration \(v_{k}\), which requires a separate error analysis.
Sums of Chebyshev coefficients.One final technical wrinkle remains, which is to bound the error accumulation of the matrix Clenshaw recurrences. In the same way that the original scalar Clenshaw algorithm required bounding arithmetic progressions of Chebyshev coefficients with step size two, \(a_{\ell}+a_{\ell+2}+\cdots\) by the norm of the corresponding polynomial \(p=\sum_{\ell=0}^{d}a_{\ell}T_{\ell}(x)\), to prove stability for the even and odd versions, we need to bound arithmetic progressions with step size four. Surprisingly, this is significantly more challenging.
We give a thorough explanation for this in Remark 7.8, but in brief, some arithmetic progressions of Chebyshev coefficients arise naturally as linear combinations of polynomial evaluations. For example, \(\sum_{k\geqslant 0}a_{2k}=\frac{1}{2}(p(-1)+p(1))\), so we can conclude that \(\big{|}\sum_{k\geqslant 0}a_{2k}\big{|}\leqslant\|p\|_{\sup}\). Through Fourier analysis, this can be generalized to progressions with different step sizes, but breaks for progressions with certain offsets. The sum \(\sum_{k\geqslant 0}a_{4k+1}\) is one such example, and this is a quantity we need to bound to give a \(\tilde{O}(d^{2})\) bound on the iterates of odd matrix Clenshaw. A naive bound of \(O(d)\) follows from bounding each coefficient separately, but this results in a significantly worse running time down the line.
In Lemma 7.4, we show that we can bound the above sum \(\sum_{k\geqslant 0}a_{4k+1}\) by \(\mathcal{O}(\log^{2}(d)\|p\|_{\sup})\). This shows that this sum has near-total cancellation: despite our guarantee on each coefficient being merely that it is bounded by a constant,9 the sum of \(O(d)\) of these coefficients is only poly-logarithmic in magnitude. The proof proceeds by considering many different polynomial evaluations \(p(x_{0}),p(x_{1}),\ldots,p(x_{d})\), and trying to write the arithmetic progression as a linear combination of these evaluations \(\sum c_{k}p(x_{k})\). This can be thought of as a linear system in the \(c_{k}\)'s, which we can then prove has a solution, and then bound the \(\ell_{1}\) norm of this solution. To do this, we argue that we can bound its solution \(A^{-1}b\) by the solution of a different system, \(C^{-1}b\), where \(C\) is a matrix that is entrywise smaller than \(A\). Since this is not true generally, we use strong properties of the particular matrix \(A\) in the linear system at hand to prove this.
Footnote 9: In fact, for any degree \(d\), there are polynomials of that degree which are bounded and yet \(\sum_{\ell=0}^{d}|a_{\ell}|=\Omega(d)\)[17, Theorems 8.2 and 8.3].
## 4 Related work
Quantum machine learning.Starting with the breakthrough work of Harrow, Hassidim and Lloyd [11] for solving sparse linear systems, QML has seen a growing number of proposals.
Several works assume access to _quantum random access memory_ and provide quantum algorithms for Principal Component Analysis [11], support vector machines [12], k-means clustering [11], quantum recommendation systems [13], and neural networks [14, 15]. We refer the reader to [16] for a comprehensive survey. In particular, we note that the current proposals that resist de-quantization and potentially obtain a super-polynomial quantum speedup include Zhao, Fitzsimons, and Fitzsimons on Gaussian process regression [17] and Lloyd, Garnerone, and Zanardi on topological data analysis [18].
Quantum-inspired algorithms.Since the breakthrough work of Tang [13] for recommendation systems, there has been a flurry of work obtaining _quantum-inspired_ classical algorithms for various problems in machine learning and numerical linear algebra. The works most closely related to ours are the quantum-inspired framework for QSVT by Chia, Gilyen, Li, Lin, Tang and Wang [15], and a de-quantization of QSVT when the input matrices are sparse by Gharibian and Le Gall [16]. A series of papers also focus on dequantizing specific problems, such as regression [14, 15, 17, 18, 19, 20], recommendation systems [13, 14, 15, 16] and principal component analysis [13]. Classical algorithms for quantum simulation include Van den Nest's work on simulating restricted classes of quantum circuits using probabilistic methods [10] and Rudi, Wossnig, Ciliberto, Rocchetto, Pontil, and Severini's work on using the Nystrom method to simulate a sparse Hamiltonian \(H\) on a sparse input state [15].
Randomized numerical linear algebra.Our work draws upon several ideas from the randomized numerical linear algebra literature and we refer the reader to the surveys [10, 14] for the relevant background. The asymmetric approximate matrix product sketch we introduce is closely related to the approximate matrix product sketches considered in [10, 11, 14] (non-oblivious) and [12] (oblivious). We note that in our setting, we cannot afford to use oblivious sketches, since we lose the ability to maintain sampling and query access to the resulting vectors. Cohen, Nelson, and Woodruff obtain an AMP guarantee in the symmetric case using _ridge leverage score_ sampling, whereas we work with asymmetric matrices and would need to pay condition-number-dependent oversampling factors to sample from the ridge leverage score distribution. Magdon-Ismail [10] and Magen and Zouzias [11] construct asymmetric approximate matrix products, but require sampling from \(\Big{\{}\big{(}\left\|A_{i}\right\|^{2}+\left\|B_{i}\right\|^{2}\big{)}/\big{(}\sum_{i\in[n]}\left\|A_{i}\right\|^{2}+\left\|B_{i}\right\|^{2}\big{)}\Big{\}}\) and \(\Big{\{}\left\|A_{i}\right\|\left\|B_{i}\right\|/\sum_{i\in[n]}\left\|A_{i}\right\|\left\|B_{i}\right\|\Big{\}}\) respectively. We note that we do not have efficient access to these distributions in the _sampling and query access_ model. Drineas, Kannan and Mahoney [14] use \(\ell_{2}^{2}\) sampling to get the asymmetric approximate matrix product guarantee, but under the Frobenius norm instead of the spectral norm, which yet again does not suffice for our purposes. Regarding the bi-linear entry-wise sampling transform, sketches created by sampling entries proportional to their squared magnitude are well-known in the literature but are typically used to bound the operator norm of the matrix rather than any particular bi-linear form. Bounding the operator norm directly happens to be too restrictive, as we discussed in the technical overview. Regardless, entry-wise sampling sketches were introduced by Achlioptas and McSherry [1] to speed up low-rank approximation. Arora, Hazan and Kale used them in the context of faster algorithms for SDP solving [1]. Since then, a series of works have provided sharper guarantees for such sketches [13, 14, 15, 16]. Finally, there has been a recent flurry of work on sublinear time algorithms for low-rank approximation under various structural assumptions on the
input [17, 18, 19, 20], however all such algorithms still incur at least a linear dependence on the number of rows and columns of the input matrix.
In addition to the usual sketching and sampling toolkit, we also use ideas from iterative algorithms in the numerical linear algebra literature. Iterative methods, such as power iteration, Krylov subspace methods, Golub-Kahan bidiagonalization, Arnoldi iteration, and the Lanczos iteration are ubiquitous in scientific computing and are used for matrix inversion, solving linear systems, linear programming, low-rank approximation, and numerous other fundamental linear algebra primitives [14, 15]. Our work is closely related to iterative algorithms that use sparsification sketches and exploit singular value gaps, such as [1, 1, 2, 2]; however, these results incur dimension-dependent factors, which are crucial to avoid in our setting.
## 5 Preliminaries
We use the notation \(f\lesssim g\) to denote the ordering \(f=\mathcal{O}(g)\) (and respectively for \(\gtrsim\) and \(\approx\)).
### Linear algebra
For vectors \(v\in\mathbb{C}^{n}\), \(\|v\|\) denotes standard Euclidean norm (so \(\|v\|\coloneqq(\sum_{i=1}^{n}|v_{i}|^{2})^{1/2}\)). For a matrix \(A\in\mathbb{C}^{m\times n}\), the _Frobenius norm_ of \(A\) is \(\|A\|_{F}\coloneqq(\sum_{i=1}^{m}\sum_{j=1}^{n}\big{|}A_{i,j}\big{|}^{2})^{1/2}\) and the _spectral norm_ of \(A\) is \(\|A\|\coloneqq\sup_{x\in\mathbb{C}^{n},\|x\|=1}\|Ax\|\).
A _singular value decomposition_ (SVD) of \(A\) is a representation \(A=UDV^{\dagger}\), where for \(N\coloneqq\min(m,n)\), \(U\in\mathbb{C}^{m\times N}\) and \(V\in\mathbb{C}^{n\times N}\) are isometries and \(D\in\mathbb{R}^{N\times N}\) is diagonal with \(\sigma_{i}\coloneqq D_{i,i}\) and \(\sigma_{1}\geq\sigma_{2}\geq\cdots\geq\sigma_{N}\geq 0\). We can also write this decomposition as \(A=\sum_{i=1}^{N}\sigma_{i}u_{i}v_{i}^{\dagger}\), where \(u_{i}\coloneqq U_{*,i}\) and \(v_{i}\coloneqq V_{*,i}\).
We denote the set of singular values of \(A\) by \(\operatorname{Spec}(A)\coloneqq\{\sigma_{k}\}_{k\in[N]}\). For a Hermitian matrix \(A\in\mathbb{C}^{n\times n}\) and a function \(f:\mathbb{R}\to\mathbb{C}\), \(f(A)\) refers to applying \(f\) to the eigenvalues of \(A\).
### Polynomials and the Chebyshev Basis
We consider polynomials with complex coefficients, \(p\in\mathbb{C}[x]\). For a Hermitian matrix \(A\), \(p(A)\) refers to evaluating the polynomial with \(x\) replacing \(A\); this is equivalent to applying \(p\) to the eigenvalues of \(A\). The right definition for applying \(p\) to a general non-square matrix is subtle; as done in QSVT, we restrict to settings where the matrix formed by evaluating \(p\) on the singular values of \(A\) coincides with the evaluation of a corresponding polynomial in \(A\).
**Definition 5.1** (Definition 6.1 of [2]).: For a matrix \(A\in\mathbb{C}^{m\times n}\) and degree-\(d\) polynomial \(p(x)\in\mathbb{C}[x]\) of parity-\(d\) (i.e., even if \(d\) is even and odd if \(d\) is odd), we define the notation \(p(A)\) in the following way:
1. If \(p\) is _even_, meaning that we can express \(p(x)=q(x^{2})\) for some polynomial \(q(x)\), then \[p(A)\coloneqq q(A^{\dagger}A)=p(\sqrt{A^{\dagger}A}).\]
2. If \(p\) is _odd_, meaning that we can express \(p(x)=x\cdot q(x^{2})\) for some polynomial \(q(x)\), then \[p(A)\coloneqq A\cdot q(A^{\dagger}A).\]
For example, if \(p(x)=x^{2}+1\), then \(p(A)=A^{\dagger}A+I\), and if \(p(x)=x^{3}+x\), then \(p(A)=AA^{\dagger}A+A\). Looking at a singular value decomposition \(A=\sum\sigma_{i}u_{i}v_{i}^{\dagger}\), \(p(A)=\sum p(\sigma_{i})u_{i}v_{i}^{\dagger}\) when \(p\) is odd and \(p(A)=\sum p(\sigma_{i})v_{i}v_{i}^{\dagger}\) when \(p\) is even, thus making this definition coincide with the singular value transformation as given in [11, Definition 16].
We work in the Chebyshev basis of polynomials throughout. Let \(T_{\ell}(x)\) and \(U_{\ell}(x)\) denote Chebyshev polynomials of the first and second kind, respectively. They can be defined on \([-1,1]\) via
\[T_{k}(\cos(\theta)) =\cos(k\theta) \tag{2}\] \[U_{k}(\cos(\theta)) =\sin((k+1)\theta)/\sin(\theta), \tag{3}\]
but we will give attention to their recursive definitions, since we will use them for computation.
\[T_{0}(x) =1 U_{0}(x) =1\] \[T_{1}(x) =x U_{1}(x) =2x \tag{4}\] \[T_{k}(x) =2x\cdot T_{k-1}(x)-T_{k-2}(x) U_{k}(x) =2x\cdot U_{k-1}(x)-U_{k-2}(x)\]
For a function \(f:[-1,1]\to\mathbb{R}\), we denote \(\|f\|_{\sup}\coloneqq\sup_{x\in[-1,1]}|f(x)|\). In this norm, the Chebyshev polynomials satisfy \(\|T_{k}(x)\|_{\sup}=1\) and \(\|U_{k}(x)\|_{\sup}=k+1\).
Any Lipschitz continuous function10\(f:[-1,1]\to\mathbb{R}\) can be written as a (unique) linear combination of Chebyshev polynomials, \(f(x)=\sum_{\ell}a_{\ell}T_{\ell}(x)\) (where we interpret \(T_{\ell}(x)\equiv 0\) for negative \(\ell\)). When \(f\) is a degree-\(d\) polynomial, then \(a_{\ell}=0\) for all \(\ell>d\). A common way to approximate a function is by truncating the polynomial expansion; we denote this operation by \(f_{k}(x)=\sum_{\ell=0}^{k}a_{\ell}T_{\ell}(x)\), and we denote the remainder to be \(\bar{f}_{k}(x)=f(x)-f_{k}(x)=\sum_{\ell=k+1}^{\infty}a_{\ell}T_{\ell}(x)\).
Footnote 10: We call a function \(f:[-1,1]\to\mathbb{R}\) Lipschitz continuous if there exists a constant \(C\) such that \(|f(x)-f(y)|\leqslant C|x-y|\) for \(x,y\in[-1,1]\).
We use the following well-known properties of Chebyshev polynomials from Mason and Handscomb [14].
\[T_{i}(x) =\tfrac{1}{2}(U_{i}(x)-U_{i-2}(x)) \tag{5}\] \[U_{i}(x) =\sum_{j\geqslant 0}T_{i-2j}(x)(1+\llbracket i-2j\neq 0 \rrbracket)\] (6) \[T_{jk}(x) =T_{j}(T_{k}(x))\] (7) \[U_{2k+1}(x) =U_{k}(T_{2}(x))U_{1}(x)=U_{k}(T_{2}(x))2x\] (8) \[\tfrac{d}{dx}T_{k}(x) =kU_{k-1}(x) \tag{9}\]
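As a quick sanity check, the following snippet (ours) verifies identities (5), (7) and (8) numerically, computing \(T_{k}\) and \(U_{k}\) via the recurrences in Eq. (4).

```python
import numpy as np

def cheb_T(k, x):
    """T_k(x) via T_k = 2x T_{k-1} - T_{k-2} (Eq. 4)."""
    t_prev, t = np.ones_like(x), np.asarray(x, dtype=float)
    if k == 0:
        return t_prev
    for _ in range(k - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

def cheb_U(k, x):
    """U_k(x) via U_k = 2x U_{k-1} - U_{k-2} (Eq. 4)."""
    u_prev, u = np.ones_like(x), 2 * np.asarray(x, dtype=float)
    if k == 0:
        return u_prev
    for _ in range(k - 1):
        u_prev, u = u, 2 * x * u - u_prev
    return u

x = np.linspace(-1, 1, 101)
assert np.allclose(cheb_T(7, x), 0.5 * (cheb_U(7, x) - cheb_U(5, x)))      # Eq. (5)
assert np.allclose(cheb_T(12, x), cheb_T(3, cheb_T(4, x)))                 # Eq. (7)
assert np.allclose(cheb_U(9, x), cheb_U(4, cheb_T(2, x)) * 2 * x)          # Eq. (8)
```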
We recommend the book by Trefethen, and rely on its results to prove our bounds. We use a couple of standard results about Chebyshev polynomials.
**Lemma 5.2** (Coefficient bound, consequence of [16, Eq. (3.12)]).: _Let \(f:[-1,1]\to\mathbb{R}\) be a Lipschitz continuous function. Then all its Chebyshev coefficients \(a_{k}\) are bounded: \(|a_{k}|\leqslant 2\|f\|_{\sup}\)._
**Lemma 5.3** ([16, Theorems 8.1 and 8.2]).: _Let a function \(f\) analytic in \([-1,1]\) be analytically continuable to the open Bernstein ellipse \(E_{\rho}=\{\tfrac{z+z^{-1}}{2}\mid|z|<\rho\}\), where it satisfies \(|f(x)|\leqslant M\) for some \(M\). Then its Chebyshev coefficients satisfy \(|a_{0}|\leqslant M\) and_
\[|a_{k}|\leqslant 2M\rho^{-k},\quad k\geqslant 1.\]
_Consequently,_
\[\|\bar{f}_{n}\|_{\sup}\leqslant\frac{2M\rho^{-n}}{\rho-1}.\]
**Lemma 5.4**.: _For a degree-\(d\) polynomial \(p\), and \(\delta=\frac{1}{4d^{2}}\),_
\[\sup_{x\in[-1-\delta,1+\delta]}|p(x)|\leqslant e\|p\|_{\sup}.\]
Proof.: Without loss of generality, take \(\|p\|_{\sup}=1\). By Proposition 2.4 in Sachdeva and Vishnoi [14], and basic properties of Chebyshev polynomials,
\[\sup_{x\in[-1-\delta,1+\delta]}|p(x)|\leqslant\sup_{x\in[-1-\delta,1+\delta] }|T_{d}(x)|=T_{d}(1+\delta).\]
Further, by Proposition 2.5 in [14], we can evaluate \(T_{d}(1+\delta)\) via the formula
\[T_{d}(x) =\frac{1}{2}\Big{(}x+\sqrt{x^{2}-1}\Big{)}^{d}+\frac{1}{2}\Big{(} x-\sqrt{x^{2}-1}\Big{)}^{d}\] \[T_{d}(1+\delta) =\frac{1}{2}\Big{(}1+\delta+\sqrt{2\delta+\delta^{2}}\Big{)}^{d} +\frac{1}{2}\Big{(}1+\delta-\sqrt{2\delta+\delta^{2}}\Big{)}^{d}\] \[\leqslant\exp\Big{(}d(\delta+\sqrt{2\delta+\delta^{2}})\Big{)}\] \[\leqslant\exp\Big{(}\tfrac{1}{4d}+\sqrt{\tfrac{1}{2}+\tfrac{1}{16 d^{2}}}\Big{)}\leqslant e\]
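A quick numerical check of this bound using the closed form above (our snippet):

```python
import numpy as np

def T_closed(d, x):
    """T_d(x) for x >= 1 via T_d(x) = ((x + sqrt(x^2-1))^d + (x - sqrt(x^2-1))^d) / 2."""
    r = np.sqrt(x * x - 1.0)
    return 0.5 * ((x + r) ** d + (x - r) ** d)

for d in [1, 2, 5, 10, 100, 1000]:
    delta = 1.0 / (4 * d * d)
    assert T_closed(d, 1.0 + delta) <= np.e
```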
### Sampling and query access
We now introduce the "quantum-inspired" access model, following prior exposition [13]. We refer the reader there for a more thorough investigation of this access model. From a sketching perspective, this model encompasses "the set of algorithms that can be performed in time independent of input dimension, using only \(\ell_{2}^{2}\) sampling", and is a decent classical analogue for the input given to quantum machine learning algorithms operating on classical data.
**Definition 5.5** (Sampling and query access to a vector, [13, Definition 3.2]).: For a vector \(v\in\mathbb{C}^{n}\), we have \(\operatorname{SQ}(v)\), _sampling and query access_ to \(v\), if we can:
1. query for entries of \(v\);
2. obtain independent samples \(i\in[n]\) where we see \(i\) with probability \(|v(i)|^{2}/\|v\|^{2}\);
3. query for \(\|v\|\).
These samples are called something like "\(\ell_{2}^{2}\) importance samples" in the randomized numerical linear algebra literature. Quantumly, these can be considered to be simulated measurements of the quantum state \(|v\rangle\coloneqq\frac{1}{\|v\|}\sum v_{i}|i\rangle\) in the computational basis. Sampling and query access is closed under taking linear combinations, once we introduce slack in the form of oversampling.
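For intuition, here is a toy sketch (ours) of \(\operatorname{SQ}(v)\) for an explicitly stored vector; it is backed by a plain array and NumPy's sampler rather than the alias-method data structure discussed in Remark 5.10, so it is illustrative only.

```python
import numpy as np

class SQVector:
    """Toy SQ(v): (1) query entries, (2) sample i with probability |v_i|^2/||v||^2,
    (3) query ||v||.  Backed by a dense array; see Remark 5.10 for efficient variants."""
    def __init__(self, v, seed=0):
        self.v = np.asarray(v)
        self._sq_norm = float(np.vdot(self.v, self.v).real)
        self._p = np.abs(self.v) ** 2 / self._sq_norm
        self._p = self._p / self._p.sum()
        self._rng = np.random.default_rng(seed)

    def query(self, i):
        return self.v[i]

    def sample(self):
        return self._rng.choice(len(self.v), p=self._p)

    def norm(self):
        return np.sqrt(self._sq_norm)

sq = SQVector([3.0, 0.0, 4.0])
counts = np.bincount([sq.sample() for _ in range(10_000)], minlength=3)
print(counts / 10_000)   # roughly [9/25, 0, 16/25]
```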
**Definition 5.6** (Oversampling and query access, [13, Definition 3.4]).: For \(v\in\mathbb{C}^{n}\) and \(\phi\geqslant 1\), we have \(\operatorname{SQ}_{\phi}(v)\), \(\phi\)_-oversampling and query access_ to \(v\), if we have the ability to query \(v\) and \(\operatorname{SQ}(\tilde{v})\) for \(\tilde{v}\in\mathbb{C}^{n}\) a vector satisfying \(\|\tilde{v}\|^{2}=\phi\|v\|^{2}\) and \(|\tilde{v}(i)|^{2}\geqslant|v(i)|^{2}\) for all \(i\in[n]\).
The \(\ell_{2}^{2}\) distribution over \(\tilde{v}\)\(\phi\)-oversamples the distribution over \(v\):
\[\frac{\left|\tilde{v}_{i}\right|^{2}}{\left\|\tilde{v}\right\|^{2}}=\frac{\left| \tilde{v}_{i}\right|^{2}}{\phi\|v\|^{2}}\geqslant\frac{1}{\phi}\frac{\left|v_{ i}\right|^{2}}{\left\|v\right\|^{2}}.\]
Intuitively speaking, estimators that use \(\mathcal{D}_{v}\) can also use \(\mathcal{D}_{\tilde{v}}\) via rejection sampling at the expense of a factor \(\phi\) increase in the number of utilized samples. From this observation we can prove that oversampling access implies an approximate version of the usual sampling access:
**Lemma 5.7** (Oversampling to sampling, [1, Lemma 3.5]).: _Suppose we are given \(\operatorname{SQ}_{\phi}(v)\) and some \(\delta\in(0,1]\). We can sample from \(\mathcal{D}_{v}\) with probability \(\geqslant 1-\delta\) in \(\mathcal{O}\big{(}\phi\log\frac{1}{\delta}\big{)}\) queries to \(\operatorname{SQ}_{\phi}(v)\). We can also estimate \(\|v\|\) to \(\nu\) multiplicative error for \(\nu\in(0,1]\) with probability \(\geqslant 1-\delta\) in \(\mathcal{O}\Big{(}\frac{\phi}{\nu^{2}}\log\frac{1}{\delta}\Big{)}\) queries. Both of these algorithms take linear time in the number of queries._
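The rejection-sampling idea behind the first claim can be sketched as follows (our code): draw \(i\) from the \(\ell_{2}^{2}\) distribution of \(\tilde{v}\) and accept it with probability \(|v_{i}|^{2}/|\tilde{v}_{i}|^{2}\); the accepted indices follow \(\mathcal{D}_{v}\), and the expected acceptance rate is \(\|v\|^{2}/\|\tilde{v}\|^{2}=1/\phi\).

```python
import numpy as np

rng = np.random.default_rng(3)

def samples_of_v(v, v_tilde, n_samples):
    """Rejection sampling: turn l_2^2 samples of v_tilde into l_2^2 samples of v,
    using the SQ_phi promise |v_tilde[i]| >= |v[i]| entrywise."""
    p_tilde = np.abs(v_tilde) ** 2
    p_tilde = p_tilde / p_tilde.sum()
    out = []
    while len(out) < n_samples:
        i = rng.choice(len(v_tilde), p=p_tilde)                    # sample from D_{v_tilde}
        if rng.random() < np.abs(v[i]) ** 2 / np.abs(v_tilde[i]) ** 2:
            out.append(i)                                          # accept: i ~ D_v
    return np.array(out)

v       = np.array([1.0, 0.5, 0.0, 2.0])
v_tilde = np.array([1.5, 1.0, 0.7, 2.0])                           # oversamples v entrywise
freq = np.bincount(samples_of_v(v, v_tilde, 20_000), minlength=4) / 20_000
print(freq)        # roughly |v_i|^2/||v||^2 = [0.19, 0.05, 0.0, 0.76]
```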
Generally, compared to a quantum algorithm that can output (and measure) a desired vector \(\left|v\right\rangle\), our algorithms will output \(\operatorname{SQ}_{\phi}(u)\) such that \(\left\|u-v\right\|\) is small. So, to dequantize a quantum algorithm, we will need to bound \(\phi\) to show that we can output samples from \(\left|v\right\rangle\). As for error, bounds on \(\left\|u-v\right\|\) imply that measurements from \(u\) and \(v\) follow distributions that are close in total variation distance [16, Lemma 4.1]. Now, we show that oversampling and query access of vectors is closed under taking small linear combinations.
**Lemma 5.8** (Linear combinations, [1, Lemma 3.6]).: _Given \(\operatorname{SQ}_{\phi_{t}}(v_{t})\in\mathbb{C}^{n}\) and \(\lambda_{t}\in\mathbb{C}\) for all \(t\in[\tau]\), we have \(\operatorname{SQ}_{\phi}(\sum_{t=1}^{\tau}\lambda_{t}v_{t})\) for \(\phi=\tau\frac{\sum_{t}\phi_{t}\left\|\lambda_{t}v_{t}\right\|^{2}}{\left\|\sum_{t}\lambda_{t}v_{t}\right\|^{2}}\). After paying the pre-processing cost of querying for each of the norms of the \(\tilde{v}_{t}\)'s, the cost of any query is equal to the cost of sampling from any of the \(v_{t}\)'s plus the cost of querying an entry from all of the \(v_{t}\)'s._
**Definition 5.9** (Oversampling and query access to a matrix, [1, Definition 3.7]).: For a matrix \(A\in\mathbb{C}^{m\times n}\), we have \(\operatorname{SQ}(A)\) if we have \(\operatorname{SQ}(A(i,\cdot))\) for all \(i\in[m]\) and \(\operatorname{SQ}(a)\) for \(a\in\mathbb{R}^{m}\) the vector of row norms (\(a(i)\coloneqq\left\|A(i,\cdot)\right\|\)).
We have \(\operatorname{SQ}_{\phi}(A)\) if we have \(\operatorname{Q}(A)\) and \(\operatorname{SQ}(\bar{A})\) for \(\bar{A}\in\mathbb{C}^{m\times n}\) satisfying \(\left\|\bar{A}\right\|_{F}^{2}=\phi\left\|A\right\|_{F}^{2}\) and \(\left|\bar{A}(i,j)\right|^{2}\geqslant\left|A(i,j)\right|^{2}\) for all \((i,j)\in[m]\times[n]\).
**Remark 5.10**.: QML algorithms achieve their largest speedups over classical algorithms when given state preparation in \(\operatorname{polylog}(mn)\) time. So, we concern ourselves with this setting.
We can get \(\operatorname{SQ}\) access to input matrices and vectors in input-sparsity time. Given \(v\in\mathbb{C}^{n}\) in the standard RAM model, the alias method [11] takes \(\Theta(\operatorname{nnz}(v))\) pre-processing time to output a data structure that uses \(\Theta(\operatorname{nnz}(v))\) space and can sample from \(v\) in \(\Theta(1)\) time. In other words, we can get \(\operatorname{SQ}(v)\) with constant-time queries in \(\mathcal{O}(\operatorname{nnz}(v))\) time, and by extension, for a matrix \(A\in\mathbb{C}^{m\times n}\), \(\operatorname{SQ}(A)\) with constant-time queries in \(\mathcal{O}(\operatorname{nnz}(A))\) time.11
Footnote 11: This holds in the word-RAM model, with an additional \(\log\) overhead when considering bit complexity.
Therefore, the quantum-inspired setting can be directly translated to a basic randomized numerical linear algebra algorithm. More precisely, with this data structure, a fast quantum-inspired algorithm (say, one running in time \(\mathcal{O}(T\operatorname{\mathbf{sq}}(A))\) for \(T\) independent of input size) implies an algorithm in the standard computational model (running in \(\mathcal{O}(\operatorname{nnz}(A)+T)\) time).
**Corollary 5.11**.: _Suppose we are given sampling and query access to a matrix \(A\in\mathbb{C}^{m\times n}\) and a vector \(b\in\mathbb{C}^{n}\), where we can respond to queries in \(O(1)\) time. Further suppose we have a vector
implicitly represented by \(v\in\mathbb{C}^{m}\) and \(\eta\), with \(u=A^{\dagger}v+\eta b\). Then by Lemma 5.8, we have \(\mathsf{SQ}_{\phi}(u)\) for_
\[\phi=(\|v\|_{0}+1)\frac{\sum_{k}\|v_{k}A_{k}\|^{2}+\|\eta b\|^{2}}{\|u\|^{2}}\]
_and a query cost of \(O(\|v\|_{0})\). In particular, by Lemma 5.7 we can draw one sample from \(u\) with probability \(\geqslant 1-\delta\) in \(O(\|v\|_{0}\phi\log\frac{1}{\delta})\) time._
This factor is only large when the linear combination has significantly smaller norm than the components \(v_{t}\) in the sum suggest. Usually, in our applications, we can intuitively think about this overhead being small when the desired output vector mostly lies in a subspace spanned by singular vectors with large singular values in our low-rank input. Quantum algorithms also have the same kind of overhead. Namely, the QSVT framework encodes this in the subnormalization constant \(\alpha\) of block-encodings, and the overhead from the subnormalization appears during post-selection [10]. When this cancellation is not too large, the resulting overhead typically does not affect too badly the runtime of our applications.
## 6 Extending the Sketching Toolkit
In this section, we show how to extend the modern sketching toolkit (see e.g. [20]) in two ways: (a) we provide a sub-sampling sketch that preserves bi-linear forms with only an inverse quadratic dependence on \(\varepsilon\) and (b) a non-oblivious, \(\ell_{2}^{2}\) sampling based asymmetric approximate matrix product sketch.
### The Bi-Linear Entry-wise Sampling Transform
**Definition 6.1** (Bi-linear Entry-wise Sparsifying Transform).: For a matrix \(A\in\mathbb{C}^{m\times n}\), the best of \(A\) with parameter \(T\) is a matrix sampled as follows: for all \(k\in[T]\),
\[M^{(k)}=\frac{1}{p_{i,j}}A_{i,j}e_{i}e_{j}^{\dagger}\quad\text{ with probability }p_{i,j}=\frac{\left|A_{i,j}\right|^{2}}{\|A\|_{F}^{2}}\]
Then,
\[\textsc{best}(A)=\frac{1}{T}\sum_{k\in[T]}M^{(k)}.\]
**Lemma 6.2** (Basic Properties of the Bi-Linear Entry-wise Sparsifying Transform).: _For a matrix \(A\in\mathbb{C}^{m\times n}\), let \(M=\textsc{best}(A)\) with parameter \(T\). Then, for \(X\in\mathbb{C}^{m\times m}\), \(u\in\mathbb{C}^{m}\) and \(v\in\mathbb{C}^{n}\), we have_
\[\operatorname{nnz}(M) \leqslant T \tag{10}\] \[\operatorname{\mathbf{E}}\left[M\right] =A\] (11) \[\operatorname{\mathbf{E}}\left[M^{\dagger}XM-A^{\dagger}XA\right] =\frac{1}{T}\Big{(}\operatorname{Tr}(X)\|A\|_{F}^{2}I-A^{\dagger }XA\Big{)}\] (12) \[\operatorname{\mathbf{E}}\left[\left(u^{\dagger}Mv-\operatorname{ \mathbf{E}}\left[u^{\dagger}Mv\right]\right)^{2}\right] =\frac{1}{T}\Big{(}\|A\|_{F}^{2}\|u\|^{2}\|v\|^{2}-(u^{\dagger} Av)^{2}\Big{)} \tag{13}\]
Proof.: Observe, since \(M\) is an average of \(T\) sub-samples, each of which are \(1\)-sparse, \(M\) has at most \(T\) non-zero entries. Next,
\[\operatorname{\mathbf{E}}\left[M\right]=\frac{1}{T}\sum_{k\in T} \operatorname{\mathbf{E}}\left[M^{(k)}\right]=\sum_{i\in[m]}\sum_{j\in[n]}p_{ i,j}\frac{A_{ij}}{p_{i,j}}e_{i}e_{j}^{\dagger}=A \tag{14}\]
Similarly,
\[\begin{split}\mathbf{E}\left[M^{\dagger}XM\right]&=\frac{1}{T^{2}}\,\mathbf{E}\left[\left(\sum_{k\in[T]}M^{(k)}\right)^{\dagger}X\left(\sum_{k\in[T]}M^{(k)}\right)\right]\\ &=\frac{1}{T^{2}}\,\mathbf{E}\left[\left(\sum_{k,k^{\prime}\in[T]}\left(M^{(k)}\right)^{\dagger}XM^{(k^{\prime})}\right)\right]\\ &=\frac{1}{T^{2}}\left(\left(\sum_{k\neq k^{\prime}\in[T]}\mathbf{E}\left[M^{(k)}\right]^{\dagger}\cdot X\cdot\mathbf{E}\left[M^{(k^{\prime})}\right]\right)+\left(\sum_{k\in[T]}\mathbf{E}\left[\left(M^{(k)}\right)^{\dagger}XM^{(k)}\right]\right)\right)\\ &=\left(1-\frac{1}{T}\right)A^{\dagger}XA+\frac{1}{T}\sum_{i\in[m],j\in[n]}p_{i,j}\frac{A_{i,j}^{2}}{p_{i,j}^{2}}e_{j}e_{i}^{\dagger}Xe_{i}e_{j}^{\dagger}\\ &=\left(1-\frac{1}{T}\right)A^{\dagger}XA+\frac{\left\|A\right\|_{F}^{2}}{T}\sum_{i\in[m],j\in[n]}X_{i,i}e_{j}e_{j}^{\dagger}\\ &=\left(1-\frac{1}{T}\right)A^{\dagger}XA+\frac{\left\|A\right\|_{F}^{2}\operatorname{Tr}(X)}{T}I.\end{split} \tag{15}\]
Finally, observe, for \(i,i^{\prime}\in[m]\) and \(j,j^{\prime}\in[n]\),
\[M_{i,j}^{(k)}M_{i^{\prime},i^{\prime}}^{(k)}=\begin{cases}M_{i,j}^{2}&\text{ if }(i,j)=(i^{\prime},j^{\prime})\\ 0&\text{otherwise}\end{cases},\]
and therefore,
\[\begin{split}\mathbf{E}\left[\left(u^{\dagger}Mv-\mathbf{E}\left[u^{\dagger}Mv\right]\right)^{2}\right]&=\mathbf{E}\left[\left(u^{\dagger}(M-A)v\right)^{2}\right]\\ &=\mathbf{E}\left[\sum_{i,i^{\prime}\in[m],\,j,j^{\prime}\in[n]}(M-A)_{i,j}(M-A)_{i^{\prime},j^{\prime}}u_{i}v_{j}u_{i^{\prime}}v_{j^{\prime}}\right]\\ &=\mathbf{E}\left[\sum_{i\in[m],j\in[n]}(M)_{i,j}^{2}u_{i}^{2}v_{j}^{2}-\left(u^{\dagger}Av\right)^{2}\right]\\ &=\frac{1}{T}\bigg{(}\|A\|_{F}^{2}\|u\|^{2}\|v\|^{2}-\left(u^{\dagger}Av\right)^{2}\bigg{)}\end{split} \tag{16}\]
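As a quick empirical sanity check of these formulas (our code; the moments are estimated by Monte Carlo, so agreement is only approximate):

```python
import numpy as np

rng = np.random.default_rng(4)

def best(A, T):
    """best(A) with parameter T, as in Definition 6.1 (entries sampled by squared magnitude)."""
    p = np.abs(A) ** 2
    p = p / p.sum()
    i, j = np.unravel_index(rng.choice(A.size, size=T, p=p.ravel()), A.shape)
    M = np.zeros_like(A)
    np.add.at(M, (i, j), A[i, j] / (T * p[i, j]))
    return M

A = rng.standard_normal((6, 5))
u, v = rng.standard_normal(6), rng.standard_normal(5)
T = 50
vals = np.array([u @ best(A, T) @ v for _ in range(20_000)])
var_predicted = (np.linalg.norm(A, 'fro') ** 2 * (u @ u) * (v @ v) - (u @ A @ v) ** 2) / T
print(vals.mean(), u @ A @ v)       # unbiasedness, Eq. (11)
print(vals.var(), var_predicted)    # variance of the bilinear form, Eq. (13)
```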
We list a simple consequence of these bounds that we use later.
**Corollary 6.3**.: _For a matrix \(A\in\mathbb{C}^{m\times n}\), let \(M=\textsc{best}(A)\) with parameter \(T\). Then, for matrices \(X\in\mathbb{C}^{\ell\times m}\) and \(Y\in\mathbb{C}^{n\times d}\),_
\[\mathbf{Pr}\left[\|XMY\|_{F}\geqslant\|XAY\|_{F}+\frac{\|X\|_{F}\|A\|_{F}\|Y\| _{F}}{\sqrt{\delta T}}\right]\leqslant\delta\]
### Approximate Matrix Product via \(\ell_{2}^{2}\) Sampling
In this subsection, we extend the well-known approximate matrix product (see for instance [20]) to the setting where we have an \(\ell_{2}^{2}\)-sampling oracle. Typically, the approximate matrix product guarantee is achieved in the oblivious sketching model; however, we cannot extend oblivious sketching to the quantum-inspired setting. Quantum-inspired algorithms instead follow the row-subsampling literature, which can be implemented in this setting. Formally, we show the following:
**Definition 6.4**.: Given two matrices \(A\in\mathbf{C}^{m\times n}\) and \(B\in\mathbf{C}^{n\times d}\), along with a probability distribution \(p\in\mathbb{R}_{\geqslant 0}^{n}\), we define the _Asymmetric Approximate Matrix Product_ of sketch size \(s\), denoted \(\operatorname{AMP}_{s}(A,B,p)\), to be the \(n\times s\) matrix whose columns are i.i.d. sampled according to the law
\[[\operatorname{AMP}_{s}(A,B,p)]_{*,j}=\frac{e_{k}}{\sqrt{s\cdot p_{k}}}\text{ with probability }p_{k}\]
For an \(S=\operatorname{AMP}_{s}(A,B,p)\), we will typically consider the expression \(ASS^{\dagger}B\), which can be written as the sum of independent rank-one matrices \(\frac{1}{s\cdot p_{k}}A_{*,k}B_{k,*}\).
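As a minimal illustration (our code and parameter choices, not the paper's), the following constructs such a sketch with the mixed \(\ell_{2}^{2}\) sampling probabilities used in Theorem 6.5 below, and shows the spectral error of \(ASS^{\dagger}B\) shrinking as \(s\) grows.

```python
import numpy as np

rng = np.random.default_rng(5)

def amp(A, B, s):
    """AMP_s(A, B, p) with p_k = ||A_{*,k}||^2/(2||A||_F^2) + ||B_{k,*}||^2/(2||B||_F^2)."""
    p = (np.linalg.norm(A, axis=0) ** 2 / (2 * np.linalg.norm(A, 'fro') ** 2)
         + np.linalg.norm(B, axis=1) ** 2 / (2 * np.linalg.norm(B, 'fro') ** 2))
    p = p / p.sum()
    idx = rng.choice(A.shape[1], size=s, p=p)
    S = np.zeros((A.shape[1], s))
    S[idx, np.arange(s)] = 1.0 / np.sqrt(s * p[idx])
    return S

A = rng.standard_normal((40, 500)); A /= np.linalg.norm(A, 2)
B = rng.standard_normal((500, 30)); B /= np.linalg.norm(B, 2)
for s in [100, 1_000, 10_000]:
    S = amp(A, B, s)
    print(s, np.linalg.norm(A @ S @ S.T @ B - A @ B, 2))   # spectral error shrinks ~ 1/sqrt(s)
```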
**Theorem 6.5** (Asymmetric Approximate Matrix Multiplication).: _Given matrices \(A\in\mathbb{R}^{m\times n}\) and \(B\in\mathbb{R}^{n\times d}\) such that \(\|A\|=1,\|B\|=1\), let \(\operatorname{AMP}(A,B)\in\mathbb{R}^{m\times d}\) denote a sketch, where \(\operatorname{AMP}(A,B)=\frac{1}{k}\sum_{i\in[k]}x_{i}\otimes y_{i}\) such that \(x_{i}\otimes y_{i}=\frac{1}{p_{i}}a_{i}\otimes b_{i}\) with probability \(p_{i}\geqslant c\|a_{i}\|^{2}/\left(2\|A\|_{F}^{2}\right)+\|b_{i}\|^{2}/\left( 2\|B\|_{F}^{2}\right)\), for a fixed constant \(c\). Then, with probability at least \(1-\delta\)_
\[\|\bar{A}B-AB\|\leqslant\sqrt{\frac{c^{\prime}\log(n)\log(1/\delta)\big{(}\|A \|_{F}^{2}+\|B\|_{F}^{2}\big{)}}{k}},\]
_for a fixed universal constant \(c^{\prime}\)._
To obtain this result, we prove the following key lemma:
**Lemma 6.6** (Concentration of Asymmetric Random Outer Products).: _Let \((x,y)\) be a tuple of random vectors, where \(x\in\mathbb{R}^{m}\) and \(y\in\mathbb{R}^{d}\) such that_
\[\max_{i\in[n]}\|x_{i}\|^{2}\,\Big{\|}\mathbb{E}\left[yy^{\top}\right]\Big{\|}+\max_{i\in[n]}\|y_{i}\|^{2}\,\Big{\|}\mathbb{E}\left[xx^{\top}\right]\Big{\|}\leqslant M^{2}.\]
_Let \(\{(x_{i},y_{i})\}_{i\in[n]}\) be independent copies of the random variable \((x,y)\). Then, for any \(t\in(0,1)\),_
\[\mathbf{Pr}\left[\left\|\frac{1}{n}\sum_{i\in[n]}x_{i}\otimes y_{i}-\mathbb{E }\left[x\otimes y\right]\right\|\geqslant t\right]\leqslant(m+d)\exp\left(- \frac{ct^{2}n}{M^{2}+\max_{i\in[n]}\|x_{i}\otimes y_{i}\|t}\right)\]
Proof.: For \(i\in[n]\), let \(Z_{i}=\frac{1}{n}(x_{i}\otimes y_{i}-\mathbb{E}\left[x\otimes y\right])\). Further, \(\|Z_{i}\|\leqslant\frac{2}{n}\|x_{i}\otimes y_{i}\|\leqslant\frac{2M}{n}\). Next, we bound the variance :
\[\sigma^{2}=\max\left(\underbrace{\left\|\sum_{i\in[n]}\mathbb{E}\left[Z_{i}Z_ {i}^{\top}\right]\right\|}_{(i)}\underbrace{\left\|\sum_{i\in[n]}\mathbb{E} \left[Z_{i}^{\top}Z_{i}\right]\right\|}_{(ii)}\right)\]
We bound term \((i)\) as follows:
\[\left\|\sum_{i\in[n]}\mathbf{E}\left[Z_{i}Z_{i}^{\top}\right]\right\| =\left\|\frac{1}{n}\,\mathbf{E}\left[\left(x_{i}y_{i}^{\top}- \mathbf{E}\left[x\otimes y\right]\right)\left(x_{i}y_{i}^{\top}-\mathbf{E} \left[x\otimes y\right]\right)^{\top}\right]\right\| \tag{17}\] \[\leqslant\frac{\max_{i\in[n]}\|y_{i}\|^{2}}{n}\,\mathbf{E}\left[ xx^{\top}\right]\]
We bound term \((ii)\) as follows:
\[\left\|\sum_{i\in[n]}\mathbf{E}\left[Z_{i}^{\top}Z_{i}\right]\right\| =\left\|\frac{1}{n}\,\mathbf{E}\left[\left(x_{i}y_{i}^{\top}- \mathbf{E}\left[x\otimes y\right]\right)^{\top}\left(x_{i}y_{i}^{\top}- \mathbf{E}\left[x\otimes y\right]\right)\right]\right\| \tag{18}\] \[\leqslant\frac{2}{n}\Big{\|}\mathbf{E}\left[\|x_{i}\|^{2}y_{i}y_ {i}^{\top}\right]\Big{\|}\] \[\leqslant\frac{\max_{i\in[n]}\|x_{i}\|^{2}}{n}\,\mathbf{E}\left[ yy^{\top}\right]\]
Let \(M^{2}\geqslant\max_{i\in[n]}\|x_{i}\|^{2}\big{\|}\mathbf{E}\left[yy^{\top}\right]\big{\|}+\max_{i\in[n]}\|y_{i}\|^{2}\big{\|}\mathbf{E}\left[xx^{\top}\right]\big{\|}\). Applying Matrix Bernstein (see Fact 6.7),
\[\mathbf{Pr}\left[\left\|\sum_{i\in[n]}Z_{i}\right\|\geqslant t\right] =\mathbf{Pr}\left[\left\|\frac{1}{n}\sum_{i\in[n]}x_{i}\otimes y_ {i}-\mathbf{E}\left[x\otimes y\right]\right\|\geqslant t\right]\] \[\leqslant(m+d)\exp\left(-\frac{ct^{2}n}{M^{2}+\max_{i\in[n]}\|x_{i }\otimes y_{i}\|t}\right),\]
for a fixed constant \(c\geqslant 1\).
**Fact 6.7** (Matrix Bernstein, Theorem 1.6[12]).: _Given a sequence of independent \(d_{1}\times d_{2}\) random matrices \(\{\,Z_{i}\,\}_{i\in[k]}\) such that for all \(i\in[k]\), \(\mathbf{E}\left[Z_{i}\right]=0\) and \(\|Z_{i}\|\leqslant L\) almost surely, let_
\[\sigma^{2}=\max\left(\left\|\sum_{i\in[k]}\mathbf{E}\left[Z_{i}Z_{i}^{\top}\right]\right\|,\ \left\|\sum_{i\in[k]}\mathbf{E}\left[Z_{i}^{\top}Z_{i}\right]\right\|\right).\]
_Then, for any \(t\geqslant 0\),_
\[\mathbf{Pr}\left[\left\|\sum_{i\in[k]}Z_{i}\right\|\geqslant t\right]\leqslant (d_{1}+d_{2})\exp\left(-\frac{t^{2}/2}{\sigma^{2}+Lt/3}\right)\]
It is now straight-forward to prove Theorem 6.5 using the aforementioned lemma:
Proof of Theorem 6.5.: Setting \(x_{i}=\frac{1}{\sqrt{p_{i}}}\cdot a_{i}\) and \(y_{i}=\frac{1}{\sqrt{p_{i}}}\cdot b_{i}\), where \(p_{i}=\frac{\|a_{i}\|^{2}}{2\|A\|_{F}^{2}}+\frac{\|b_{i}\|^{2}}{2\|B\|_{F}^{2}}\), we have
\[\|x_{i}\| \leqslant 2\|A\|_{F}\] \[\|y_{i}\| \leqslant 2\|B\|_{F}\] \[\|x_{i}\otimes y_{i}\|^{1/2} \leqslant\left\|\frac{\|A\|_{F}\|B\|_{F}}{\|a_{i}\|\|b_{i}\|}a_{i} \otimes b_{i}\right\|^{1/2}\leqslant\sqrt{\|A\|_{F}\cdot\|B\|_{F}}.\]
Further,
\[\begin{split}\operatorname{\mathbf{E}}\left[x\otimes x\right]&=\sum_{i\in[n]}\frac{1}{2}\Bigg{(}\frac{\left\|a_{i}\right\|^{2}}{\left\|A\right\|_{F}^{2}}+\frac{\left\|b_{i}\right\|^{2}}{\left\|B\right\|_{F}^{2}}\Bigg{)}\cdot\frac{a_{i}\otimes a_{i}}{p_{i}}\\ &=AA^{\top},\end{split} \tag{19}\]
and similarly
\[\begin{split}\operatorname{\mathbf{E}}\left[y\otimes y\right]&=\sum_{i\in[n]}\frac{1}{2}\Bigg{(}\frac{\left\|a_{i}\right\|^{2}}{\left\|A\right\|_{F}^{2}}+\frac{\left\|b_{i}\right\|^{2}}{\left\|B\right\|_{F}^{2}}\Bigg{)}\cdot\frac{b_{i}\otimes b_{i}}{p_{i}}\\ &=BB^{\top}.\end{split} \tag{20}\]
Observe, setting \(M^{2}=2\Big{(}\left\|A\right\|_{F}^{2}+\left\|B\right\|_{F}^{2}\Big{)}\) suffices, and applying Matrix Bernstein, we have
\[\operatorname{\mathbf{Pr}}\left[\left\|AB-\bar{A}\bar{B}\right\|\geqslant t \right]\leqslant(m+d)\exp\Bigg{(}-\frac{ct^{2}k}{\left\|A\right\|_{F}^{2}+ \left\|B\right\|_{F}^{2}}\Bigg{)}. \tag{21}\]
Setting \(t=\sqrt{\frac{c\log(1/\delta)\log(m+d)\big{(}\left\|A\right\|_{F}^{2}+\left\| B\right\|_{F}^{2}\big{)}}{k}}\), we know that with probability at least \(1-\delta\)
\[\left\|AB-\bar{A}\bar{B}\right\|\leqslant\sqrt{\frac{c\log(1/\delta)\log(m+d) \Big{(}\left\|A\right\|_{F}^{2}+\left\|B\right\|_{F}^{2}\Big{)}}{k}},\]
as desired.
## 7 Sums of Chebyshev coefficients
To give improved stability bounds for the Clenshaw recurrence, we need to bound various sums of Chebyshev coefficients. Since we aim to give bounds that hold for all degree-\(d\) polynomials, we use no property of the function beyond that it has a unique Chebyshev expansion; of course, for any particular choice of function \(f\), the bounds in this section can be improved by explicitly computing its Chebyshev coefficients, or in some cases, by using smoothness properties of the function [17, Theorems 7.2 and 8.2].
Let \(f:[-1,1]\to\mathbb{R}\) be a Lipschitz continuous function. Then it can be expressed uniquely as a linear combination of Chebyshev polynomials \(f(x)=\sum_{i=0}^{\infty}a_{i}T_{i}(x)\). A broad topic of interest in approximation theory is bounds for linear combinations of these coefficients, \(\sum a_{i}c_{i}\), in terms of \(\left\|f\right\|_{\sup}\); this was one motivation of Vladimir Markov in proving the Markov brothers' inequality [12, p575]. Our goal for this section will be to investigate this question in the case where these sums are arithmetic progressions of step four. This will be necessary for later stability analyses, and is one of the first non-trivial progressions to bound. We begin with some straightforward assertions (see [17] for background).
**Fact 7.1**.: _Let \(f:[-1,1]\to\mathbb{R}\) be a Lipschitz continuous function. Then its Chebyshev coefficients
_satisfy_
\[\Big{|}\sum_{\ell}a_{\ell}\Big{|} =|f(1)|\leqslant\|f\|_{\sup}\] \[\Big{|}\sum_{\ell}(-1)^{\ell}a_{\ell}\Big{|} =|f(-1)|\leqslant\|f\|_{\sup}\] \[\Big{|}\sum_{\ell}a_{\ell}[\ell\text{ is even}]\Big{|} =\Big{|}\sum_{\ell}a_{\ell}\frac{1}{2}(1+(-1)^{\ell})\Big{|}\leqslant \|f\|_{\sup}\] \[\Big{|}\sum_{\ell}a_{\ell}[\ell\text{ is odd}]\Big{|} =\Big{|}\sum_{\ell}a_{\ell}\frac{1}{2}(1-(-1)^{\ell})\Big{|}\leqslant \|f\|_{\sup}\]
We use the following result on Lebesgue constants to bound truncations of the Chebyshev coefficient sums.
**Lemma 7.2** ([19, Theorem 15.3]).: _Let \(f:[-1,1]\to\mathbb{R}\) be a Lipschitz continuous function, let \(f_{k}(x)=\sum_{\ell=0}^{k}a_{\ell}T_{\ell}(x)\), and let optimal degree-\(k\) approximating polynomial to \(f\) be denoted \(f_{k}^{*}\). Then_
\[\|f-f_{k}\|_{\sup} \leqslant\Big{(}4+\frac{4}{\pi^{2}}\log(k+1)\Big{)}\|f-f_{k}^{*} \|_{\sup}\] \[\leqslant\Big{(}4+\frac{4}{\pi^{2}}\log(k+1)\Big{)}\|f\|_{\sup}.\]
_Similarly,_
\[\|f_{k}\|_{\sup}\leqslant\|f-f_{k}\|_{\sup}+\|f\|_{\sup}\leqslant\Big{(}5+ \frac{4}{\pi^{2}}\log(k+1)\Big{)}\|f\|_{\sup}.\]
This implies bounds on sums of coefficients.
**Fact 7.3**.: _Consider a function \(f(x)=\sum_{\ell}a_{\ell}T_{\ell}(x)\). Then_
\[\Big{|}\sum_{\ell=k}^{\infty}a_{\ell}[\ell-k\text{ is even}]\Big{|}\leqslant\|f- f_{k-2}\|_{\sup}\leqslant\Big{(}4+\frac{4}{\pi^{2}}\log(k-1)\Big{)}\|f\|_{ \sup},\]
_where the inequalities follow from Fact 7.1 and Lemma 7.2. When \(k=0,1\), then the sum is bounded by \(\|f\|_{\sup}\), as shown in Fact 7.1._
Now, we prove similar bounds in the case that \(f(x)\) is an odd function. In particular, we want to obtain a bound on alternating signed sums of the Chebyshev coefficients and we incur a blowup that scales logarithmically in the degree.
**Lemma 7.4**.: _Let \(f:[-1,1]\to\mathbb{R}\) be an odd Lipschitz continuous function with Chebyshev coefficients \(\{a_{\ell}\}_{\ell}\), so that \(a_{k}=0\) for all even \(k\). Then the Chebyshev coefficient sum is bounded as_
\[\Big{|}\sum_{\ell=0}^{d}(-1)^{\ell}a_{2\ell+1}\Big{|} \leqslant(\ln(d)+2)\max_{0\leqslant k\leqslant 2d+1}\|f_{k}\|_{ \sup}\] \[\leqslant(\ln(d)+2)\Big{(}5+\frac{4}{\pi^{2}}\ln(2d+2)\Big{)}\|f \|_{\sup}\] \[\leqslant\Big{(}16+4\ln^{2}(d+1)\Big{)}\|f\|_{\sup}.\]
We first state the following relatively straight-forward corollary:
**Corollary 7.5**.: _Lemma 7.4 gives bounds on arithmetic progressions with step size four. Let \(f:[-1,1]\to\mathbb{R}\) be a Lipschitz continuous function, and consider nonnegative integers \(c\leqslant d\). Then_
\[\Big{|}\sum_{\ell=c}^{d}a_{\ell}[\ell-c\equiv 0\ (\mathrm{mod}\ 4)]\Big{|} \leqslant(32+8\ln^{2}(d+1))\|f\|_{\mathrm{sup}}\]
Proof.: Define \(f^{\mathrm{odd}}\coloneqq\frac{1}{2}(f(x)-f(-x))\) and \(f^{\mathrm{even}}\coloneqq\frac{1}{2}(f(x)+f(-x))\) to be the odd and even parts of \(f\) respectively. Triangle inequality implies that \(\|f^{\mathrm{odd}}\|_{\mathrm{sup}},\|f^{\mathrm{even}}\|_{\mathrm{sup}} \leqslant\|f\|_{\mathrm{sup}}\). Suppose \(c,d\) are odd. Then
\[\Big{|}\sum_{\ell=c}^{d}a_{\ell}[\ell-c\equiv 0\ (\mathrm{mod}\ 4)] \Big{|} =\frac{1}{2}\Big{|}\sum_{\ell=0}^{\lfloor(d-c)/2\rfloor}a_{c+2 \ell}(1\pm(-1)^{\ell})\Big{|}\] \[\leqslant\frac{1}{2}\Big{(}\Big{|}\sum_{\ell=0}^{\lfloor(d-c)/2 \rfloor}a_{c+2\ell}\Big{|}+\Big{|}\sum_{\ell=0}^{\lfloor(d-c)/2\rfloor}(-1)^{ \ell}a_{c+2\ell}\Big{|}\Big{)}\] \[\leqslant\frac{1}{2}\Big{(}\|f_{c}^{\mathrm{odd}}\|_{\mathrm{sup }}+\|f_{d}^{\mathrm{odd}}\|_{\mathrm{sup}}+2(\ln(d)+2)\max_{0\leqslant k\leqslant d }\|f_{k}^{\mathrm{odd}}\|_{\mathrm{sup}}\Big{)}\] \[\leqslant(32+8\ln^{2}(d+1))\|f^{\mathrm{odd}}\|_{\mathrm{sup}}\] \[\leqslant(32+8\ln^{2}(d+1))\|f\|_{\mathrm{sup}}\]
The case when \(c\) is even is easy, and it follows from Fact 7.3: indeed, we know that
\[\Big{\|}\sum_{\ell}a_{2\ell}T_{\ell}(x)\Big{\|}_{\mathrm{sup}}=\Big{\|}\sum_{ \ell}a_{2\ell}T_{\ell}(T_{2}(x))\Big{\|}_{\mathrm{sup}}=\Big{\|}\sum_{\ell}a_ {2\ell}T_{2\ell}(x)\Big{\|}_{\mathrm{sup}}=\Big{\|}f^{\mathrm{even}}(x)\Big{\|} _{\mathrm{sup}}\leqslant\|f\|_{\mathrm{sup}},\]
so
\[\Big{|}\sum_{\ell>c}a_{\ell}[\ell-c\equiv 0\ (\mathrm{mod}\ 4)] \Big{|} =\Big{|}\sum_{\ell>c/2}a_{2\ell}[\ell-c/2\ \text{is even}] \Big{|}\] \[\leqslant\Big{(}4+\frac{4}{\pi^{2}}\log(c/2-1)\Big{)}\Big{\|} \sum_{\ell}a_{2\ell}T_{\ell}(x)\Big{\|}_{\mathrm{sup}}\] \[\leqslant\Big{(}4+\frac{4}{\pi^{2}}\log(c/2-1)\Big{)}\|f\|_{ \mathrm{sup}},\]
and combining these two bounds for \(c\) and \(d\) gives the desired statement.
We note that Lemma 7.4 will be significantly harder to prove. See Remark 7.8 for an intuitive explanation why. We begin with two structural lemmas on how the solution to a unitriangular linear system behaves, which might be of independent interest.
**Lemma 7.6** (An entry-wise positive solution).: _Suppose that \(A\in\mathbb{R}^{d\times d}\) is an upper unitriangular matrix such that, for all \(i\leqslant j\), \(a_{ij}>0\), \(a_{ij}>a_{i-1,j}\). Then \(A^{-1}\overline{1}\) is a vector with positive entries._
_The same result holds when \(A\) is a lower unitriangular matrix such that, for all \(i\geqslant j\), \(a_{ij}>0\), \(a_{ij}>a_{i+1,j}\)._
Proof.: Let \(x=A^{-1}\overline{1}\). Then \(x_{d}=1\geqslant 0\). The result follows by induction:
\[x_{i} =1-\sum_{j=i+1}^{d}A_{ij}x_{j}\] \[=\sum_{j=i+1}^{d}(A_{i+1,j}-A_{ij})x_{j}+1-\sum_{j=i+1}^{d}A_{i+1,j}x_{j}\]
\[=\sum_{j=i+1}^{d}(A_{i+1,j}-A_{ij})x_{j}+1-[Ax]_{i+1}\] \[=\sum_{j=i+1}^{d}(A_{i+1,j}-A_{ij})x_{j}\] \[>0\]
For lower unitriangular matrices, the same argument follows. The inverse satisfies \(x_{1}=1\) and
\[x_{i} =1-\sum_{j=1}^{i-1}A_{ij}x_{j}\] \[=\sum_{j=1}^{i-1}(A_{i-1,j}-A_{ij})x_{j}+1-\sum_{j=1}^{i-1}A_{i-1,j}x_{j}>0\]
Next, we characterize how the solution to a unitriangular linear system behaves when we consider a partial ordering on the matrices.
**Lemma 7.7**.: _Let \(A\) be a nonnegative upper unitriangular matrix such that \(A_{ij}>A_{i-1,j}\) and \(A_{ij}>A_{i,j+1}\) for all \(i\leqslant j\). Let \(B\) be a matrix with the same properties, such that \(A\geqslant B\) entrywise. By Lemma 7.6, \(x^{(A)}=A^{-1}\overline{1}\) and \(x^{(B)}=B^{-1}\overline{1}\) are nonnegative. It further holds that \(\sum_{i=1}^{d}[A^{-1}\overline{1}]_{i}\leqslant\sum_{i=1}^{d}[B^{-1}\overline {1}]_{i}\)._
Proof.: We consider the line between \(A\) and \(B\), \(A(t)=A(1-t)+Bt\) for \(t\in[0,1]\). Let \(x(t)=A(t)^{-1}\overline{1}\); we will prove that \(\overline{1}^{T}x(t)\) is monotonically increasing in \(t\). The gradient of \(x(t)\) has a simple form [13]:
\[A(t)x(t) =\overline{1}\] \[\partial[A(t)x(t)] =\partial_{t}[\overline{1}]\] \[(B-A)x(t)+A(t)\partial_{t}x(t) =0\] \[\partial_{t}x(t) =A^{-1}(t)(A-B)x(t).\]
So,
\[\overline{1}^{T}\partial_{t}x(t) =\overline{1}^{T}A^{-1}(t)(A-B)A^{-1}(t)\overline{1}\] \[=[A^{-T}(t)\overline{1}]^{T}(A-B)[A^{-1}(t)\overline{1}].\]
Since \(A\) and \(B\) satisfy the entry constraints, so does every matrix along the line. Consequently, the column constraints in Lemma 7.6 are satisfied for both \(A(t)\) and \(A(t)^{T}\), so both \(A^{-T}(t)\overline{1}\) and \(A^{-1}(t)\overline{1}\) are positive vectors. Since \(A\geqslant B\) entrywise, this means that \(\overline{1}^{T}\partial_{t}x(t)\) is positive, as desired.
Proof of Lemma 7.4.: We first observe that the following sorts of sums are bounded. Let \(x_{k}:=\cos(\frac{\pi}{2}(1-\frac{1}{2k+1}))\). Then, using that \(T_{\ell}(\cos(x))=\cos(\ell x)\),
\[f_{2k+1}(x_{k}) =\sum_{\ell=0}^{2k+1}a_{\ell}T_{\ell}(x_{k})\] \[=\sum_{\ell=0}^{k}a_{2\ell+1}T_{2\ell+1}(x_{k})\]
\[=\sum_{\ell=0}^{k}a_{2\ell+1}\cos\Big{(}\frac{\pi}{2}\Big{(}2\ell+1 -\frac{2\ell+1}{2k+1}\Big{)}\Big{)}\] \[=\sum_{\ell=0}^{k}(-1)^{\ell}a_{2\ell+1}\sin\Big{(}\frac{\pi}{2} \frac{2\ell+1}{2k+1}\Big{)}.\]
We have just shown that
\[\Big{|}\sum_{\ell=0}^{k}a_{2\ell+1}(-1)^{\ell}\sin\Big{(}\frac{\pi}{2}\frac{2 \ell+1}{2k+1}\Big{)}\Big{|}\leqslant\|f_{2k+1}\|_{\text{sup}}. \tag{22}\]
We now claim that there exist non-negative \(c_{k}\) for \(k\in\{0,1,\dots,d\}\) such that
\[\sum_{\ell=0}^{d}(-1)^{\ell}a_{2\ell+1}=\sum_{k=0}^{d}c_{k}f_{2k+1}(x_{k}). \tag{23}\]
The \(f_{2k+1}(x_{k})\)'s can be bounded using Lemma 7.2. The rest of the proof will consist of showing that the \(c_{k}\)'s exist, and then bounding them.
To do this, we consider the coefficient of each \(a_{2\ell+1}\) separately; let \(A^{(k)}\in[0,1]^{d+1}\) (index starting at zero) be the vector of coefficients associated with \(f_{2k+1}(x_{k})\):
\[A^{(k)}_{\ell}=\sin\Big{(}\frac{\pi}{2}\frac{2\ell+1}{2k+1}\Big{)}\text{ for }0\leqslant\ell\leqslant k\text{, }0\text{ otherwise} \tag{24}\]
Note that the \(A^{(k)}_{\ell}\) is always non-negative and increasing with \(\ell\) up to \(A^{(k)}_{k}=1\). Then Eq. (23) holds if and only if
\[c_{0}A^{(0)}+\dots+c_{d}A^{(d)}=\vec{1},\]
or in other words, the equation \(Ac=\vec{1}\) is satisfied, where \(A\) is the matrix with columns \(A^{(k)}\) and \(c\) is the vector of \(c_{\ell}\)'s. Since \(A\) is upper triangular (in fact, with unit diagonal), this can be solved via backwards substitution: \(c_{d}=1\), then \(c_{d-1}\) can be deduced from \(c_{d}\), and so on. More formally, the \(s\)th row gives the following constraint that can be rewritten as a recurrence.
\[\sum_{t=s}^{d}\sin\Big{(}\frac{\pi}{2}\frac{2s+1}{2t+1}\Big{)}c_{t}=1 \tag{25}\] \[c_{s}=1-\sum_{t=s+1}^{d}\sin\Big{(}\frac{\pi}{2}\frac{2s+1}{2t+ 1}\Big{)}c_{t} \tag{26}\]
Because the entries of \(A\) increase down each column, Lemma 7.6 applies, and the \(c_{\ell}\)'s are all positive.
Invoking Lemma 7.6 with the matrix A establishes that such \(c_{s}\) exist; our goal now is to bound them. Doing so is not as straightforward as it might appear: since the recurrence Eq. (26) _subtracts_ by \(c_{t}\)'s, an upper bound on \(c_{t}\) for \(t\in[s+1,d]\) does not give an upper bound on \(c_{s}\); it gives a lower bound. So, an induction argument to show bounds for \(c_{s}\)'s fails. Further, we were unable to find any closed form for this recurrence. However, since all we need to know is the sum of the \(c_{s}\)'s, we show that we _can_ bound this via a generic upper bound on the recurrence.
Here, we apply Lemma 7.7 to \(A\) as previously defined, and the bounding matrix is (for \(i\leqslant j\))
\[B_{ij}=\frac{i}{j}\leqslant\frac{2i+1}{2j+1}\leqslant\sin\Big{(}\frac{\pi}{2} \frac{2i+1}{2j+1}\Big{)}=A_{ij},\]
using that \(\sin(\frac{\pi}{2}x)\geqslant x\) for \(x\in[0,1]\). Let \(\hat{c}=B^{-1}\overline{1}\). Then \(\hat{c}_{i}=\frac{1}{i+1}\) for \(i\neq d\) and \(\hat{c}_{d}=1\).
\[[B\hat{c}]_{i}=\sum_{j=i}^{d}B_{ij}\hat{c}_{j}=\sum_{j=i}^{d-1}\frac{i}{j}\frac{1}{j+1}+\frac{i}{d}=i\sum_{j=i}^{d-1}\Big{(}\frac{1}{j}-\frac{1}{j+1}\Big{)}+\frac{i}{d}=i\Big{(}\frac{1}{i}-\frac{1}{d}\Big{)}+\frac{i}{d}=1\]
By Lemma 7.7, \(\sum_{i}c_{i}\leqslant\sum_{i}\hat{c}_{i}\leqslant\ln(d)+2\). So, altogether, we have
\[\Big{|}\sum_{\ell=0}^{d}(-1)^{\ell}a_{2\ell+1}\Big{|} =\Big{|}\sum_{k=0}^{d}c_{k}f_{2k+1}(x_{k})\Big{|}\] \[\leqslant\sum_{k=0}^{d}c_{k}\|f_{2k+1}\|_{\sup}\] \[\leqslant\Big{(}\sum_{k=0}^{d}c_{k}\Big{)}\max_{0\leqslant k \leqslant d}\|f_{2k+1}\|_{\sup}\] \[\leqslant\Big{(}\sum_{k=0}^{d}c_{k}\Big{)}\max_{0\leqslant k\leqslant 2d+1 }\|f_{k}\|_{\sup}\] \[\leqslant(\ln(d)+2)\max_{0\leqslant k\leqslant 2d+1}\|f_{k}\|_{\sup}\]
**Remark 7.8**.: A curious reader will (rightly) wonder whether this proof requires this level of difficulty. Intuition from the similar Fourier analysis setting suggests that arithmetic progressions of any step size at any offset are easily bounded. We can lift to the Fourier setting by considering, for an \(f:[-1,1]\to\mathbb{R}\), a corresponding \(2\pi\)-periodic \(g:[0,2\pi]\to\mathbb{R}\) such that
\[g(\theta)\coloneqq f(\cos(\theta))=\sum_{k=0}^{\infty}a_{k}T_{k}(\cos(\theta) )=\sum_{k=0}^{\infty}a_{k}\cos(k\theta)=\sum_{k=0}^{\infty}a_{k}\frac{e^{ik \theta}+e^{-ik\theta}}{2}\]
This function has the property that \(|g(\theta)|\leqslant\|f\|_{\sup}\) and \(\widehat{g}(k)=a_{|k|}/2\) (except \(\widehat{g}(0)=a_{0}\)). Consequently,
\[\frac{1}{t}\sum_{j=0}^{t-1}f\Big{(}\cos(\frac{2\pi j}{t})\Big{)} =\frac{1}{t}\sum_{j=0}^{t-1}g\Big{(}\frac{2\pi j}{t}\Big{)}=\frac{1}{t}\sum_{ j=0}^{t-1}\sum_{k=-\infty}^{\infty}\widehat{g}(k)e^{2\pi ijk/t}=\sum_{k=- \infty}^{\infty}\widehat{g}(k)\sum_{j=0}^{t-1}\frac{1}{t}e^{2\pi ijk/t}\] \[=\sum_{k=-\infty}^{\infty}\widehat{g}(k)[\text{$k$ is divisible by $t$}]=\sum_{k=-\infty}^{\infty}\widehat{g}(kt),\]
so we can bound arithmetic progressions \(|\sum_{k}\widehat{g}(kt)|\leqslant\|f\|_{\sup}\), and this generalizes to other offsets, to bound \(|\sum_{k}\widehat{g}(kt+o)|\) for some \(o\in[t-1]\). Notably, though, this approach does not say anything about sums like \(\sum_{k}a_{4k+1}\). The corresponding progression of Fourier coefficients doesn't give it, for example, since we pick up unwanted terms from the negative Fourier coefficients.12
Footnote 12: These sums are related to the Chebyshev coefficients one gets from interpolating a function at Chebyshev points [19, Theorem 4.2].
\[\sum_{k}\widehat{g}(4k+1) =(\widehat{g}(1)+\widehat{g}(5)+\widehat{g}(9)+\cdots)+(\widehat {g}(-3)+\widehat{g}(-7)+\widehat{g}(-11)+\cdots)\] \[=\frac{1}{2}(a_{1}+a_{5}+a_{9}+\cdots)+\frac{1}{2}(a_{3}+a_{7}+a_{ 11}+\cdots)=\sum_{k\geqslant 0}a_{2k+1}.\]
In fact, by inspection of the distribution13\(D(x)=\sum_{k=0}^{\infty}T_{4k+1}(x)\), it appears that this arithmetic progression cannot be written as a linear combination of evaluations of \(f(x)\). Since the shape of the distribution appears to have \(1/x\) behavior near \(x=0\), we conjecture that our analysis losing a log factor is, in some respect, necessary.
Footnote 13: This is the functional to integrate against to compute the sum, \(\frac{2}{\pi}\int_{-1}^{1}f(x)D(x)/\sqrt{1-x^{2}}=\sum a_{4k+1}\). The distribution is not a function, but can be thought of as the limit object of \(D_{n}(x)=\sum_{k=0}^{n}T_{4k+1}(x)\) as \(n\to\infty\), analogous to Dirichlet kernels and the Dirac delta distribution.
**Conjecture 7.9**.: _For any step size \(t>1\) and offset \(o\in[t-1]\) such that \(o\neq t/2\), there exists a function \(f:[-1,1]\to\mathbb{R}\) such that \(\|f\|_{\sup}=1\) but \(|\sum_{k=0}^{n}a_{tk+o}|=\Omega(\log(n))\)._
## 8 Properties of the Clenshaw recursion
### Deriving the Clenshaw recursions
Suppose we are given as input a degree-\(d\) polynomial as a linear combination of Chebyshev polynomials:
\[p(x)=\sum_{k=0}^{d}a_{k}T_{k}(x). \tag{27}\]
Then this can be computed with the _Clenshaw algorithm_, which gives the following recurrence.
\[\begin{split} q_{d+1}&=q_{d+2}=0\\ q_{k}&=2xq_{k+1}-q_{k+2}+a_{k}\\ \tilde{p}&=\tfrac{1}{2}(a_{0}+q_{0}-q_{2})\end{split}\] (Clenshaw)
**Lemma 8.1**.: _The recursion in Eq. (Clenshaw) computes \(p(x)\). That is, in exact arithmetic, \(\tilde{p}=p(x)\). In particular,_
\[q_{k}=\sum_{i=k}^{d}a_{i}\mathrm{U}_{i-k}(x). \tag{28}\]
Proof.: We show Eq. (28) by induction.
\[\begin{split} q_{k}&=2xq_{k+1}-q_{k+2}+a_{k}\\ &=2x\Big{(}\sum_{i=k+1}^{d}a_{i}\mathrm{U}_{i-k-1}(x)\Big{)}- \Big{(}\sum_{i=k+2}^{d}a_{i}\mathrm{U}_{i-k-2}(x)\Big{)}+a_{k}\\ &=a_{k}+2xa_{k+1}\mathrm{U}_{0}(x)+\sum_{i=k+2}^{d}a_{i}(2x \mathrm{U}_{i-k-1}(x)-\mathrm{U}_{i-k-2}(x))\\ &=\sum_{i=k}^{d}a_{i}\mathrm{U}_{i-k}(x).\end{split} \tag{29}\]
Consequently, we have
\[\frac{1}{2}(a_{0}+q_{0}-q_{2}) =\frac{1}{2}\left(a_{0}+\sum_{i=0}^{d}a_{i}U_{i}(x)-\sum_{i=2}^{d}a_{ i}U_{i-2}(x)\right)\] \[=a_{0}+a_{1}x+\sum_{i=2}^{d}\frac{a_{i}}{2}(U_{i}(x)-U_{i-2}(x))\] \[=\sum_{i=0}^{d}a_{i}T_{i}(x).\]
**Remark 8.2**.: Though the aforementioned discussion is specialized to the scalar setting, it extends to the matrix setting almost entirely syntactically: consider a Hermitian \(A\in\mathbb{C}^{n\times n}\) and \(b\in\mathbb{C}^{n}\) with \(\|A\|,\|b\|\leqslant 1\). Then \(p(A)b\) can be computed in the following way:
\[u_{d+1} =\vec{0}\] \[u_{d} =a_{d}b\] \[u_{k} =2Au_{k+1}-u_{k+2}+a_{k}b \tag{30}\] \[u:=p(A)b =\frac{1}{2}(a_{0}b+u_{0}-u_{2})\]
The proof that this truly computes \(p(A)b\) is the same as the proof of correctness for Clenshaw's algorithm shown above.
### Evaluating even and odd polynomials
We will be considering evaluating odd and even polynomials. We again focus on the scalar setting and note that this extends to the matrix setting in the obvious way. The previous recurrence Eq. (Clenshaw) can work in this setting, but it'll be helpful for our analysis if the recursion multiplies by \(x^{2}\) each time, instead of \(x\)[12, Chapter 2, Problem 7]. So, in the case where the degree-\((2d+1)\) polynomial \(p(x)\) is _odd_ (so \(a_{2k}=0\) for every \(k\)), it can be computed with the iteration
\[q_{d+1} =q_{d+2}=0\] \[q_{k} =2T_{2}(x)q_{k+1}-q_{k+2}+a_{2k+1}U_{1}(x)\] (Odd Clenshaw) \[\bar{p} =\tfrac{1}{2}(q_{0}-q_{1})\]
When \(p(x)\) is a degree-\((2d)\)_even_ polynomial (so \(a_{2k+1}=0\) for every \(k\)), it can be computed via the same recurrence, replacing \(a_{2k+1}U_{1}(x)\) with \(a_{2k}\). However, we will use an alternative form that's more convenient for us (since we can reuse the analysis of the odd case).
\[\bar{a}_{2k} \coloneqq a_{2k}-a_{2k+2}+a_{2k+4}-\cdots\pm a_{2d} \tag{31}\] \[q_{d+1} =q_{d+2}=0\] \[q_{k} =2T_{2}(x)q_{k+1}-q_{k+2}+\bar{a}_{2k+2}U_{1}(x)^{2}\] (Even Clenshaw) \[\bar{p} =\bar{a}_{0}+\tfrac{1}{2}(q_{0}-q_{1})\]
That these recurrences correctly compute \(p\) follows from an analysis similar to that of the standard Clenshaw algorithm, formalized below.
**Lemma 8.3**.: _The recursions in Eq._ (Odd Clenshaw) _and Eq._ (Even Clenshaw) _correctly compute \(p(x)\) for odd and even polynomials, respectively. That is, in exact arithmetic, \(\tilde{p}=p(x)\). In particular, for the odd recursion,_

\[q_{k}=\sum_{i=k}^{d}a_{2i+1}U_{2(i-k)+1}(x). \tag{32}\]
Proof.: We can prove these statements by applying Eq. (28). In the odd case, Eq. (Odd Clenshaw) is identical to Eq. (Clenshaw) except that \(x\) is replaced by \(T_{2}(x)\) and \(a_{k}\) is replaced by \(a_{2k+1}U_{1}(x)\), so by making the corresponding changes in the iterate, we get that
\[q_{k} =\sum_{i=k}^{d}a_{2i+1}U_{1}(x)U_{i-k}(T_{2}(x))=\sum_{i=k}^{d}a_ {2i+1}U_{2(i-k)+1}(x) \text{by Eq.} \tag{8}\] \[\tilde{p} =\tfrac{1}{2}(q_{0}-q_{1})=\sum_{i=0}^{d}\frac{a_{2i+1}}{2} \Big{(}U_{2i+1}(x)-U_{2i-1}(x)\Big{)}=p(x). \text{by Eq.} \tag{5}\]
Similarly, in the even case, Eq. (Even Clenshaw) is identical to Eq. (Clenshaw) except that \(x\) is replaced by \(T_{2}(x)\) and \(a_{k}\) is replaced by \(\bar{a}_{2k+2}U_{1}(x)^{2}=4\bar{a}_{2k+2}x^{2}\) (see Eq. (31)), so that
\[q_{k} =\sum_{i=k}^{d}\tilde{a}_{2i+2}U_{1}(x)^{2}U_{i-k}(T_{2}(x)) \tag{33}\] \[=\sum_{i=k}^{d}\tilde{a}_{2i+2}U_{1}(x)U_{2(i-k)+1}(x) \text{by Eq.}\] (8) \[=\sum_{i=k}^{d}\tilde{a}_{2i+2}(U_{2(i-k)}(x)+U_{2(i-k+1)}(x)) \text{by Eq.}\] (4) \[=\sum_{i=k}^{d+1}\tilde{a}_{2i+2}U_{2(i-k)}(x)+\sum_{i=k+1}^{d+1} \tilde{a}_{2i}U_{2(i-k)}(x) \text{noticing that }\tilde{a}_{2d+2}=0\] \[=\tilde{a}_{2k+2}+\sum_{i=k+1}^{d+1}(\tilde{a}_{2i}+\tilde{a}_{2 i+2})U_{2(i-k)}(x)\] (34) \[=\tilde{a}_{2k+2}+\sum_{i=k+1}^{d}a_{2i}U_{2(i-k)}(x)\] (35) \[=-\tilde{a}_{2k}+\sum_{i=k}^{d}a_{2i}U_{2(i-k)}(x) \tag{36}\]
Finally, observe
\[\tilde{a}_{0}+\frac{1}{2}(q_{0}-q_{1})=\tilde{a}_{0}+\frac{1}{2}(a_{0}-\tilde {a}_{0}+\tilde{a}_{2})+\sum_{i=1}^{d}\frac{a_{2i}}{2}(U_{2i}(x)-U_{2i-2}(x))=p (x). \tag{37}\]
## 9 Stability of the scalar Clenshaw recursion
Before we move to the matrix setting, we warmup with a stability analysis of the scalar Clenshaw recursion. Suppose we perform Eq. (Clenshaw) to compute a degree-\(d\) polynomial \(p\), except every addition, subtraction, and multiplication incurs \(\varepsilon\) relative error. Typically, this has been
analyzed in the finite precision setting, where the errors are caused by truncation. These standard analyses show that this finite precision recursion gives \(p(x)\) to \(d^{2}(\sum\lvert a_{i}\rvert)\varepsilon=O(d^{3}\lVert p\rVert_{\sup}\varepsilon)\) error. A use of Parseval's formula [13, Theorem 5.3]\(\sum\lvert a_{i}\rvert\leqslant\sqrt{d}\sqrt{\sum a_{i}^{2}}=O(\sqrt{d}\lVert p \rVert_{\sup})\) reduces this by a factor of \(\sqrt{d}\).
However, this bound on \(\sum\lvert a_{i}\rvert\) is essentially tight, and moreover, it is tight for a relevant class of polynomials: truncations of well-behaved functions. Such polynomials generally have the property that \(\lvert a_{k}\rvert=\Theta((1-\log(1/\varepsilon)/d)^{-k})\) (Lemma 5.2), so most of the coefficients are constant-sized, eventually decaying to \(\varepsilon\).
We improve on prior stability analyses to show that the Clenshaw recurrence for Chebyshev polynomials only incurs an error overhead of \(d^{2}\log(d)\). This is tight up to a logarithmic factor. This, for example, could be used to improve the bound in [12, Lemma 9] from \(k^{3}\) to \(k^{2}\log(k)\) (where in that paper, \(k\) denotes degree).
### Analyzing error propagation
The following is a simple analysis of Clenshaw, with some rough resemblance to an analysis of Oliver [14].
**Proposition 9.1** (Stability Analysis for Scalar Clenshaw).: _Consider a degree-\(d\) polynomial \(p:[-1,1]\to\mathbb{R}\) with Chebyshev coefficients \(p(x)=\sum_{k=0}^{d}a_{k}T_{k}(x)\). Let \(\oplus,\ominus,\odot:\mathbb{C}\times\mathbb{C}\to\mathbb{C}\) be binary operations representing addition, subtraction, and multiplication to \(\mu\varepsilon\) relative error, for \(0<\varepsilon<1\):_
\[\lvert(x\oplus y)-(x+y)\rvert \leqslant\mu\varepsilon(\lvert x\rvert+\lvert y\rvert)\] \[\lvert(x\ominus y)-(x-y)\rvert \leqslant\mu\varepsilon(\lvert x\rvert+\lvert y\rvert)\] \[\lvert x\odot y-x\cdot y\rvert \leqslant\mu\varepsilon\lvert x\rvert\lvert y\rvert=\mu \varepsilon\lvert xy\rvert.\]
_Given an \(x\in[-1,1]\), consider performing the Clenshaw recursion with these noisy operations:_
\[\tilde{q}_{d+1} =\tilde{q}_{d+2}=0\] \[\tilde{q}_{k} =(2\odot x)\odot\tilde{q}_{k+1}\ominus(\tilde{q}_{k+2}\ominus a_{ k})\] (Finite-Precision Clenshaw) \[\vec{p} =\tfrac{1}{2}\odot((a_{0}\oplus q_{0})\ominus q_{2})\]
_Then Eq. (Finite-Precision Clenshaw) outputs \(p(x)\) up to \(50\varepsilon\lVert p\rVert_{\sup}\) error14, provided that \(\mu>0\) satisfies the following three criteria._
Footnote 14: We did not attempt to optimize the constants for this analysis.
* \(\mu\varepsilon\leqslant\tfrac{1}{50(d+2)^{2}}\)_;_
* \(\mu\sum_{i=0}^{d}\lvert a_{i}\rvert\leqslant\lVert p\rVert_{\sup}\)_;_
* \(\mu\lvert q_{k}\rvert=\mu\lvert\sum_{i=k}^{d}a_{i}U_{i-k}(x)\rvert\leqslant \tfrac{1}{d}\lVert p\rVert_{\sup}\) _for all_ \(k\in\{0,\dots,d\}\)_._
This analysis shows that arithmetic operations incurring \(\mu\varepsilon\) error result in computing \(p(x)\) to \(\varepsilon\) error. In particular, the stability of the scalar Clenshaw recurrence comes down to understanding how small we can take \(\mu\). Note that if we ignored coefficient sign, \(\lvert\sum_{i=k}^{d}a_{i}U_{i-k}(x)\rvert\leqslant\sum_{i=k}^{d}\lvert a_{i}\rvert\lvert U_{i-k}(x)\rvert\leqslant\sum_{i=k}^{d}(i-k+1)\lvert a_{i}\rvert\); this would require setting \(\mu=\Theta(1/d^{3})\). We show in Section 9.2 that we can set \(\mu=\Theta((d^{2}\log(d))^{-1})\) for all \(x\in[-1,1]\) and polynomials \(p\).
**Lemma 9.2**.: _In Proposition 9.1, it suffices to take \(\mu=\Theta((d^{2}\log(d))^{-1})\)._
Proof of Proposition 9.1.: We expand out this finite-precision arithmetic to get error intervals for each iteration.
\[\tilde{q}_{d+1}=\tilde{q}_{d+2}=0, \tag{38}\]
and
\[\tilde{q}_{k} =(2\odot x)\odot\tilde{q}_{k+1}\ominus(\tilde{q}_{k+2}\ominus a_{k}) \tag{39}\] \[=(2x\pm 2\mu\varepsilon|x|)\odot\tilde{q}_{k+1}\ominus(\tilde{q}_{k +2}-a_{k}\pm\mu\varepsilon(|\tilde{q}_{k+2}|+|a_{k}|))\] \[=((2x\tilde{q}_{k+1}\pm 2\mu\varepsilon|x|\tilde{q}_{k+1})\pm\mu \varepsilon|(2x\pm 2\mu\varepsilon|x|)\tilde{q}_{k+1}|)\ominus(\tilde{q}_{k+2}-a_{k}\pm \mu\varepsilon(|\tilde{q}_{k+2}|+|a_{k}|))\] \[\in(2x\tilde{q}_{k+1}\pm(2\mu\varepsilon+\mu^{2}\varepsilon^{2}) 2|x\tilde{q}_{k+1}|)\ominus(\tilde{q}_{k+2}-a_{k}\pm\mu\varepsilon(|\tilde{q}_{ k+2}|+|a_{k}|))\] \[\in(2x\tilde{q}_{k+1}\pm 6\mu\varepsilon|x\tilde{q}_{k+1}|)\ominus( \tilde{q}_{k+2}-a_{k}\pm\mu\varepsilon(|\tilde{q}_{k+2}|+|a_{k}|))\] \[=2x\tilde{q}_{k+1}-\tilde{q}_{k+2}+a_{k}\pm\mu\varepsilon(6|x \tilde{q}_{k+1}|+|\tilde{q}_{k+2}|+|a_{k}|)\] \[\qquad+\mu\varepsilon|2x\tilde{q}_{k+1}\pm 6\mu\varepsilon|x \tilde{q}_{k+1}||+\mu\varepsilon|\tilde{q}_{k+2}-a_{k}\pm\mu\varepsilon(| \tilde{q}_{k+2}|+|a_{k}|)|\] \[\in 2x\tilde{q}_{k+1}-\tilde{q}_{k+2}+a_{k}\pm\mu\varepsilon(14|x \tilde{q}_{k+1}|+3|\tilde{q}_{k+2}|+3|a_{k}|),\]
and,
\[\tilde{p} =\tfrac{1}{2}\odot((a_{0}\oplus q_{0})\ominus q_{2}) \tag{40}\] \[=\tfrac{1}{2}\odot((a_{0}+q_{0}\pm\mu\varepsilon(|a_{0}|+|q_{0}|) )\ominus q_{2})\] \[=\tfrac{1}{2}\odot((a_{0}+q_{0}-q_{2}\pm\mu\varepsilon(|a_{0}|+|q _{0}|))\pm\mu\varepsilon(|a_{0}+q_{0}\pm\mu\varepsilon(|a_{0}|+|q_{0}|)|+|q_{2 }|))\] \[\in\tfrac{1}{2}\odot(a_{0}+q_{0}-q_{2}\pm\mu\varepsilon(3|a_{0}|+ 3|q_{0}|+|q_{2}|))\] \[=\tfrac{1}{2}(a_{0}+q_{0}-q_{2}\pm\mu\varepsilon(3|a_{0}|+3|q_{0} |+|q_{2}|))\pm\mu\varepsilon\tfrac{1}{2}|a_{0}+q_{0}-q_{2}\pm\mu\varepsilon(3| a_{0}|+3|q_{0}|+|q_{2}|)|\] \[\in\tfrac{1}{2}(a_{0}+q_{0}-q_{2})\pm\tfrac{1}{2}\mu\varepsilon( 7|a_{0}|+7|q_{0}|+3|q_{2}|).\]
To summarize, we have
\[\tilde{q}_{d+1} =\tilde{q}_{d+2}=0 \tag{41}\] \[\tilde{q}_{k} =2x\tilde{q}_{k+1}-\tilde{q}_{k+2}+a_{k}+\delta_{k}, \text{where }|\delta_{k}| \leqslant\mu\varepsilon(14|x\tilde{q}_{k+1}|+3|\tilde{q}_{k+2}|+3|a_{k}|)\] (42) \[\tilde{p} =\tfrac{1}{2}(a_{0}+q_{0}-q_{2})+\delta, \text{where }|\delta| \leqslant\tfrac{1}{2}\mu\varepsilon(7|a_{0}|+7|q_{0}|+3|q_{2}|) \tag{43}\]
By Lemma 8.1, this recurrence satisfies
\[\tilde{q}_{k} =\sum_{i=k}^{d}U_{i-k}(x)(a_{i}+\delta_{i})\] \[q_{k}-\tilde{q}_{k} =\sum_{i=k}^{d}U_{i-k}(x)\delta_{i}\] \[q-\tilde{q} =\delta+\frac{1}{2}\Big{(}\sum_{i=0}^{d}U_{i}(x)\delta_{i}-\sum_{ i=2}^{d}U_{i-2}(x)\delta_{i}\Big{)}\] \[=\delta+\frac{1}{2}\delta_{0}+\sum_{i=1}^{d}T_{i}(x)\delta_{i}\] \[|q-\tilde{q}| \leqslant|\delta|+\frac{1}{2}|\delta_{0}|+\sum_{i=1}^{d}|T_{i}(x )\delta_{i}|\leqslant|\delta|+\sum_{i=0}^{d}|\delta_{i}|. \tag{44}\]
This analysis so far has been fully standard. Let's continue bounding.
\[\leqslant\mu\varepsilon\Big{(}\tfrac{7}{2}|a_{0}|+\tfrac{7}{2}|q_{0}|+\tfrac{ 3}{2}|q_{2}|+\sum_{i=0}^{d}(14|x\tilde{q}_{i+1}|+3|\tilde{q}_{i+2}|+3|a_{i}|) \Big{)}\]
\[\leqslant\mu\varepsilon\sum_{i=0}^{d}(20|\tilde{q}_{i}|+10|a_{i}|). \tag{44}\]
Now, we will bound all of the \(\tilde{q}_{k}\)'s. Combining the previous facts, we have
\[|\tilde{q}_{k}| =\left|\sum_{i=k}^{d}U_{i-k}(x)(a_{i}+\delta_{i})\right|\] \[\leqslant\left|\sum_{i=k}^{d}U_{i-k}(x)a_{i}\right|+\sum_{i=k}^{d} \Bigl{|}U_{i-k}(x)\delta_{i}\Bigr{|}\] \[\leqslant\left|\sum_{i=k}^{d}U_{i-k}(x)a_{i}\right|+\sum_{i=k}^{d }(i-k+1)|\delta_{i}|\] \[\leqslant\left|\sum_{i=k}^{d}U_{i-k}(x)a_{i}\right|+\mu\varepsilon \sum_{i=k}^{d}(i-k+1)(14|\tilde{q}_{i+1}|+3|\tilde{q}_{i+2}|+3|a_{i}|)\] \[\leqslant\Bigl{(}\frac{1}{\mu d}+3\mu\varepsilon\frac{d-k+1}{\mu }\Bigr{)}\|p\|_{\sup}+\mu\varepsilon\sum_{i=k}^{d}(i-k+1)(14|\tilde{q}_{i+1}| +3|\tilde{q}_{i+2}|)\] \[\leqslant\frac{1.5}{\mu d}\|p\|_{\sup}+\mu\varepsilon\sum_{i=k}^ {d}(i-k+1)(14|\tilde{q}_{i+1}|+3|\tilde{q}_{i+2}|)\]
Note that \(|\tilde{q}_{k}|\leqslant c_{k}\), where
\[c_{d} =0;\] \[c_{k} =\frac{1.5}{\mu d}\|p\|_{\sup}+\mu\varepsilon\sum_{i=k}^{d}(i-k+ 1)(14c_{i+1}+3c_{i+2}) \tag{45}\]
Solving this recurrence, we have that \(c_{k}\leqslant\frac{2}{\mu d}\|p\|_{\sup}\), since by strong induction,
\[c_{k} \leqslant\Bigl{(}\frac{1.5}{\mu d}+\mu\varepsilon\sum_{i=k}^{d}( i-k+1)17\frac{2}{\mu d}\Bigr{)}\|p\|_{\sup}\] \[=\Bigl{(}\frac{1.5}{\mu d}+17\mu\varepsilon\frac{1}{\mu d}(d-k+1) (d-k+2)\Bigr{)}\|p\|_{\sup}\leqslant\frac{2}{\mu d}\|p\|_{\sup} \tag{46}\]
Returning to Equation (44):
\[|q-\tilde{q}| \leqslant\mu\varepsilon\sum_{i=0}^{d}(20|\tilde{q}_{i}|+10|a_{i} |)\leqslant\mu\varepsilon\sum_{i=0}^{d}(20c_{i}+10|a_{i}|) \tag{47}\] \[\leqslant 40\varepsilon\|p\|_{\sup}+10\mu\varepsilon\sum_{i=0}^{d} |a_{i}|\leqslant 50\varepsilon\|p\|_{\sup}\qed\]
### Bounding the iterates of the Clenshaw recurrence
The goal of this section is to prove Lemma 9.2. In particular, we wish to show that for \(\mu=\Theta((d^{2}\log(d))^{-1})\), the following criteria hold:
1. \(\mu\varepsilon\leqslant\frac{1}{50(d+2)^{2}}\);
2. \(\mu\sum_{i=0}^{d}|a_{i}|\leqslant\|p\|_{\sup}\);
3. \(\mu|q_{k}|=\mu|\sum_{i=k}^{d}a_{i}U_{i-k}(x)|\leqslant\frac{1}{d}\|p\|_{\sup}\) for all \(k\in\{0,\dots,d\}\).
For this choice of \(\mu\), (a) is clearly satisfied, and since \(|a_{i}|\leqslant 2\|p\|_{\sup}\) (Lemma 5.2), \(\mu\sum_{i=0}^{d}|a_{i}|\leqslant 2\mu(d+1)\|p\|_{\sup}\leqslant\|p\|_{\sup}\), so (b) is satisfied. In fact, both of these criteria are satisfied for \(\mu=O(1/d)\), provided \(\varepsilon\) is sufficiently small.
Showing (c) requires bounding \(\|\sum_{\ell=k}^{d}a_{\ell}U_{\ell-k}(x)\|_{\sup}\) for all \(k\in[d]\). These expressions are also the iterates of the Clenshaw algorithm (Lemma 8.1), so we are in fact trying to show that in the process of our algorithm we never produce a value that's much larger than the final value. From testing computationally, we believe that the following holds true.
**Conjecture 9.3**.: _Let \(p(x)\) be a degree-\(d\) polynomial with Chebyshev expansion \(p(x)=\sum_{\ell=0}^{d}a_{\ell}T_{\ell}(x)\). Then, for all \(k\) from \(0\) to \(d\),_
\[\Big{\|}\sum_{\ell=k}^{d}a_{\ell}U_{\ell-k}(x)\Big{\|}_{\sup}\leqslant(d-k+1) \|p\|_{\sup},\]
_maximized for the Chebyshev polynomial \(p(x)=T_{d}(x)\)._
Conjecture 9.3 would imply that it suffices to take \(\mu=\Theta(1/d^{2})\). We prove it up to a log factor.
**Theorem 9.4**.: _For a degree-\(d\) polynomial \(p\), consider the degree-\((d-k)\) polynomial \(q_{k}(x)=\sum_{\ell=k}^{d}a_{\ell}U_{\ell-k}(x)\). Then_
\[\|q_{k}\|_{\sup}\leqslant(d-k+1)\Big{(}16+\frac{16}{\pi^{2}}\log(d)\Big{)}\|p \|_{\sup}.\]
Proof.: We proceed by carefully bounding the Chebyshev coefficients of \(q_{k}\), which turn out to be arithmetic progressions of the \(a_{k}\)'s which we bounded in Section 7.
\[q_{k}(x) =\sum_{i}a_{i}U_{i-k}(x)\] \[=\sum_{i}\sum_{j\geqslant 0}a_{i}T_{i-k-2j}(x)(1+[i-k-2j\neq 0])\] \[=\sum_{i}\sum_{j\geqslant 0}a_{i+k+2j}T_{i}(x)(1+[i\neq 0])\] \[=\sum_{i}T_{i}(x)(1+[i\neq 0])\sum_{j\geqslant 0}a_{i+k+2j}\] \[|q_{k}(x)| \leqslant\sum_{i}[i\geqslant 0](1+[i\neq 0])\Big{|}\sum_{j \geqslant 0}a_{i+k+2j}\Big{|}\] \[\leqslant 2\sum_{i=0}^{d-k}\Big{|}\sum_{j\geqslant 0}a_{i+k+2j} \Big{|}\] \[\leqslant 4\sum_{i=0}^{d-k}\Big{(}4+\frac{4}{\pi^{2}}\log(i+k-1) \Big{)}\|p\|_{\sup}\] by Fact 7.3 \[\leqslant(d-k+1)\Big{(}16+\frac{16}{\pi^{2}}\log(d)\Big{)}\|p\|_{ \sup}.\qed\]
**Remark 9.5**.: We spent some time trying to prove Conjecture 9.3, since its form is tantalizingly close to that of the Markov brothers' inequality [1]\(\|\frac{d}{dx}p(x)\|_{\sup}=\|\sum_{\ell=0}^{d}\ell a_{\ell}U_{\ell-1}(x)\|_{\sup}\leqslant d^{2}\|p(x)\|_{\sup}\), except with the linear differential operator \(\frac{d}{dx}:T_{\ell}\mapsto\ell U_{\ell-1}\) replaced with the linear operator \(T_{\ell}\mapsto U_{\ell-k}\). However, calculations suggest that the variational characterization of \(\max_{\|p\|_{\sup}=1}|\frac{d}{dx}p(x)|\) underlying proofs of the Markov brothers' inequality [14] does not hold here, and from our shallow understanding of these proofs, it seems that they strongly use properties of the derivative.
## 10 Computing matrix polynomials
Our goal is to prove the following theorem:
**Theorem 10.1**.: _Suppose we are given sampling and query access to \(A\in\mathbb{C}^{m\times n}\) and \(b\in\mathbb{C}^{n}\), and suppose we are given an even or odd degree-\(d\) polynomial. Then we can output a description of a vector \(y\in\mathbb{C}^{n}\) such that \(\|y-p(A)b\|\leqslant\varepsilon\|p\|_{\operatorname{Spec}(A)}\) with probability \(\geqslant 0.9\) in time_
\[O\bigg{(}\min\bigg{\{}\operatorname{nnz}(A),\frac{d^{8}\|A\|_{F}^{4}}{ \varepsilon^{4}}\log^{8}(d)\log^{2}(n)\bigg{\}}+\frac{d^{9}\|A\|_{F}^{4}}{ \varepsilon^{2}}\log^{4}(d)\log(n)\bigg{)}.\]
_We can access the output description in the following way:_
1. _Compute entries of_ \(y\) _in_ \(O\Big{(}\frac{d^{4}\|A\|_{F}^{2}}{\varepsilon^{2}}\log^{4}(d)\log(n)\Big{)}\) _time;_
2. _Sample_ \(i\in[n]\) _with probability_ \(\frac{|y_{i}|^{2}}{\|y\|^{2}}\) _in_ \(O\Big{(}\frac{d^{6}\|A\|_{F}^{4}}{(\varepsilon\|y\|)^{2}}\log^{8}(d)\log(n) \Big{)}\) _time;_
3. _Estimate_ \(\|y\|^{2}\) _to_ \(v\) _relative error in_ \(O\Big{(}\frac{d^{6}\|A\|_{F}^{4}}{(\varepsilon v\|y\|)^{2}}\log^{8}(d)\log(n) \Big{)}\) _time._
We now extend the error analysis in the scalar setting from Section 9 to the matrix setting. For simplicity, we treat the odd case, and in Section 10.2, describe what changes to prove the result for even polynomials.
### Computing odd matrix polynomials
The main theorem we obtain is as follows:
**Theorem 10.2**.: _Suppose we are given sampling and query access to \(A\in\mathbb{C}^{m\times n}\) and \(b\in\mathbb{C}^{n}\) with \(\|A\|\), \(\|b\|\leqslant 1\), and suppose we are given a \((2d+1)\)-degree odd polynomial, written in its Chebyshev coefficients as_
\[p(x)=\sum_{i=0}^{d}a_{2i+1}T_{2i+1}(x).\]
_Then we can output a vector \(x\in\mathbb{C}^{n}\) such that \(\|Ax-p(A)b\|\leqslant\varepsilon\|p\|_{\operatorname{Spec}(A)}\) with probability \(\geqslant 0.9\) in time_
\[O\bigg{(}\min\bigg{\{}\operatorname{nnz}(A),\frac{\|A\|_{F}^{4}}{(\mu \varepsilon)^{4}}\log^{2}(n)\bigg{\}}+\frac{d^{5}\|A\|_{F}^{4}}{(\mu\varepsilon )^{2}}\log(n)\bigg{)},\]
_assuming \(\mu\) satisfies the following bounds:_
1. \(\mu\leqslant\frac{1}{100d^{2}}\)_;_
2. \(\mu\sum_{i=0}^{d}|a_{2i+1}|\leqslant 1\)_;_
3. \(\mu\|\sum_{i=k}^{d}a_{2i+1}U_{i-k}(T_{2}(x))\|_{\sup}\leqslant\frac{d-k+1}{d^ {2}}\) _for all_ \(0\leqslant k\leqslant d\)_;_
_The output description has the additional properties_
\[\sum_{i}\|A_{*,i}\|^{2}|x_{i}|^{2}\lesssim\frac{\varepsilon^{2}}{\log(n)d^{ 2}}\qquad\|x\|_{0}\lesssim\frac{\|A\|_{F}^{2}}{(\mu\varepsilon)^{2}}\log(n),\]
_so that by Corollary 5.11, we can_
1. _Compute entries of_ \(Ax\) _in_ \(\|x\|_{0}=O\Big{(}\frac{\|A\|_{F}^{2}}{(\mu\varepsilon)^{2}}\log(n)\Big{)}\) _time;_
_._
2. _Sample_ \(i\in[n]\) _with probability_ \(\frac{|y|_{i}^{2}}{\|y\|^{2}}\) _in_ \(O\Big{(}\frac{\|A\|_{F}}{z_{\eta}^{2}u^{2}d\|y\|^{2}}\Big{)}\) _time;_
3. _Estimate_ \(\|y\|^{2}\) _to_ \(\nu\) _relative error in_ \(O\Big{(}\frac{\|A\|_{F}^{2}\log(n)}{(\nu\mu^{2}ed\|y\|)^{2}}\log(n)\Big{)}\) _time._
The criteria on \(\mu\) are somewhat non-trivial; essentially, these bounds are what is necessary for Eq. (Odd Clenshaw) to be numerically stable when computing \(p(x)/x\). This is necessary because we primarily work in the "dual" space, maintaining our Clenshaw iterate \(u_{k}\) as \(ASv_{k}\) where \(v_{k}\) is a sparse vector. For any bounded polynomial, though, we can always take \(\mu\) to be \(\Omega((d^{2}\log^{2}(d))^{-1})\).
**Proposition 10.3**.: _In Theorem 10.2, we can always take \(1/\mu\lesssim d^{2}\log^{2}(d)\|p\|_{\operatorname{Spec}(A)}\)._
We are now ready to dive into the proof of Theorem 10.2.
**Algorithm 10.4** (Odd singular value transformation).:
**Input:**: A matrix \(A\in\mathbf{C}^{m\times n}\), vector \(b\in\mathbf{C}^{n}\), \(\varepsilon,\delta,\mu>0\), a degree \(2d+1\) polynomial \(p\), and the corresponding coefficients in the Chebyshev basis, i.e. \(p(x)=\sum_{i=0}^{d}a_{2i+1}T_{2i+1}(x)\).
**Preprocessing sketches:**: Let \(s,t=\Theta\Big{(}\frac{\|A\|_{F}^{2}}{(\mu\varepsilon)^{2}}\log(\frac{n}{\delta}) \Big{)}\).
1. If \(\operatorname{SQ}(A^{\dagger})\) and \(\operatorname{SQ}(b)\) are not given, compute data structures to simulate them in \(O(1)\) time;
2. Sample \(S\in\mathbb{C}^{n\times s}\) to be \(\operatorname{AMP}\Big{(}A,b,\{\frac{1}{2}(\frac{\|A_{*,i}\|^{2}}{\|A\|_{F}^{ 2}}+\frac{|b_{i}|^{2}}{\|b\|^{2}})\}\Big{)}\) (Theorem 6.5) with sketch size \(s\);
3. Sample \(T\in\mathbb{C}^{t\times m}\) to be \(\operatorname{AMP}\big{(}S^{\dagger}A^{\dagger},AS\big{)}\) (Theorem 6.5) with sketch size \(t\);
4. Compute a data structure that can respond to \(\operatorname{SQ}(TAS)\) queries in \(O(1)\) time;
**Clenshaw iteration:**: Let \(r=\Theta(d^{4}\|A\|_{F}^{2}(s+t))=\Theta\Big{(}\frac{d^{4}\|A\|_{F}^{4}}{( \mu\varepsilon)^{2}}\log\frac{n}{\delta}\Big{)}\). Then starting with \(v_{d+1}=v_{d+2}=\vec{0}^{s}\) and going until \(v_{0}\),
1. Let \(B^{(k)}=\textsc{best}(TAS)\) and \(B^{(k)}_{\dagger}=\textsc{best}((TAS)^{\dagger})\) with parameter \(r\);
2. Compute \(v_{k}=2(2B^{(k)}_{\dagger}B^{(k)}-I)v_{k+1}-v_{k+2}+a_{2k+1}S^{\dagger}b\).
**Output:**: Output \(x=\frac{1}{2}S(v_{0}-v_{1})\) that satisfies \(\|Ax-p(A)b\|\leq\varepsilon\).
Preprocessing sketches.Given a matrix \(A\in C^{m\times n}\) and a vector \(b\in\mathbb{C}^{n}\), the pre-processing phase of Algorithm 10.4 can be performed in \(O(\operatorname{nnz}(A))\) time. If \(O(Q)\)-time \(\operatorname{SQ}(A)\) and \(\operatorname{SQ}(b)\) are already given, then this can be performed in \(O(Qst)\) time.
The way to do this is straightforward. Finally, we query all of \(TAS\) in \(O(\min(\operatorname{nnz}(A),st))\) time. Using this, we can prepare the data structure for \(\operatorname{SQ}(TAS)\).
We list the guarantees of the sketch that we will use in the error analysis, and point to where they come from in Section 6.2. The guarantees of Theorem 6.5 individually fail with probability \(O(\delta)\), so we will rescale to say that they all hold with probability \(\geqslant 1-\delta\).
\[\|[AS]_{*,j}\|^{2} \leqslant 2\|A\|_{F}^{2}/s\text{ for all }j\in[s]\quad\text{by Definition 6.4}\] ( \[AS\text{ col bound}\] ) \[\|S^{\dagger}b\|^{2} \leqslant 2\|b\|^{2}\] by Definition 6.4 ( \[\|S^{\dagger}b\|\] bound) \[\|AA^{\dagger}-AS(AS)^{\dagger}\| \leqslant\mu\varepsilon\] by Theorem 6.5 ( \[AA^{\dagger}\] AMP)
\[\|Ab-ASS^{\dagger}b\| \leqslant\mu\epsilon\] by Theorem 6.5 ( \[Ab\] AMP) \[\|TAS\|_{F}^{2} =\|AS\|_{F}^{2}\leqslant 2\|A\|_{F}^{2}\] by Definition 6.4 ( \[\|TAS\|_{F}\] bound) \[\|(AS)^{\dagger}(AS)-(TAS)^{\dagger}TAS\| \leqslant\mu\epsilon\] by Theorem 6.5 ( \[(AS)^{\dagger}AS\] AMP)
A direct consequence of Eq. ( \(AA^{\dagger}\) AMP) and Eq. ( \((AS)^{\dagger}AS\) AMP) is that \(AS\) and \(TAS\) have bounded spectral norms.
\[\|AS\|^{2} =\|AS(AS)^{\dagger}\|\leqslant\|AA^{\dagger}\|+\mu\epsilon=\|A\|^{ 2}+\mu\epsilon\] ( \[\|AS\|\] bound) \[\|TAS\|^{2} =\|(TAS)^{\dagger}(TAS)\|\leqslant\|(AS)^{\dagger}(AS)\|+\mu \epsilon=\|AS\|^{2}+\mu\epsilon\leqslant\|A\|^{2}+2\mu\epsilon\] ( \[\|TAS\|\] bound)
As for approximation error, let \(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3},\varepsilon_{4}\) and \(\varepsilon_{5}\) be the errors introduced in the corresponding approximation above. Using the previously established bounds on \(S\) and \(T\),
\[\varepsilon_{1} \leqslant 4\|AA^{\dagger}-AS(AS)^{\dagger}\|\|u_{k+1}\|\leqslant 4 \mu\varepsilon\|u_{k+1}\|\] by Eq. \[(AA^{\dagger}\text{ AMP})\] \[\varepsilon_{2} \leqslant 4\|AS\|\|((AS)(AS)^{\dagger}-(TAS)(TAS)^{\dagger})v_{k+1} \|\leqslant 5\mu\varepsilon\|v_{k+1}\|\] by Eq. \[((AS)^{\dagger}AS\text{ AMP})\] \[\varepsilon_{3} \leqslant|a_{2k+1}|\|Ab-ASS^{\dagger}b\|\leqslant|a_{2k+1}|\mu\varepsilon\] by Eq. \[(Ab\text{ AMP})\]
The bounds on \(\varepsilon_{4}\) and \(\varepsilon_{5}\) follow from the bounds in Section 6.1 applied to \(TAS\). With probability \(\geqslant 1-1/(100d)\), the following hold:
\[\varepsilon_{4} \leqslant 4\|AS(TAS)^{\dagger}(TAS-B^{(k)})v_{k+1}\|\] \[\leqslant 4\sqrt{100d/r}\|AS(TAS)^{\dagger}\|TAS\|_{F}\|v_{k+1}\|\] by Corollary 6.3 \[\leqslant d^{-3/2}\mu\varepsilon\|v_{k+1}\|\]
Similarly,
\[\varepsilon_{5} \leqslant\|4AS((TAS)^{\dagger}-B_{\dagger}^{(k)})B^{(k)}v_{k+1}\|\] \[\leqslant 4\sqrt{100d/r}\|AS\|_{F}\|B^{(k)}v_{k+1}\|\] by Corollary 6.3 \[\leqslant 4\sqrt{100d/r}\|AS\|_{F}\Big{(}\|TASv_{k+1}\|+\sqrt{100d/r }\|I_{t}\|TAS\|_{F}\|v_{k+1}\|\Big{)}\] by Corollary 6.3 \[\leqslant d^{-3/2}\mu\varepsilon\|v_{k+1}\|.\] by taking
\[r=\Theta(d^{4}\|A\|_{F}^{2}(s+t))=\Theta(\tfrac{d^{4}\|A\|_{F}^{4}}{(\mu\varepsilon)^{2}}\log\tfrac{n}{\delta}).\]
Call the cumulative error for this round \(\varepsilon^{(k)}\).
\[\varepsilon^{(k)}:=\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3}+ \varepsilon_{4}+\varepsilon_{5}\lesssim\mu\varepsilon\Big{(}\|v_{k}\|+|a_{2k +1}|\Big{)}.\]
Error accumulation across iterations.A simple argument shows that the error of the full algorithm is the sum of the errors for each round. In other words, after completing the iteration, we have a vector \(u\) such that
\[\|u-p(A)b\|\leqslant\sum_{k=0}^{d}\varepsilon^{(k)}\lesssim\mu\varepsilon \sum_{k=0}^{d}(\|v_{k}\|+|a_{2k+1}|)\leqslant\varepsilon+\sum_{k=0}^{d}\mu \varepsilon\|v_{k}\|. \tag{48}\]
So, it suffices to bound the \(v_{k}\)'s. The recursion defining them is
\[v_{k} =4B_{\dagger}^{(k)}B^{(k)}v_{k+1}-2v_{k+1}-v_{k+2}+a_{2k+1}S^{ \dagger}b\] \[=2(2(TAS)^{\dagger}(TAS)-I)v_{k+1}-v_{k+2}+a_{2k+1}S^{\dagger}b+4 (B_{\dagger}^{(k)}B^{(k)}-(TAS)^{\dagger}(TAS))v_{k+1}\]
This solves to
\[v_{k}=\sum_{i=k}^{d}U_{i-k}(2(TAS)^{\dagger}(TAS)-I)\Big{(}a_{2i+1}S^{\dagger}b +4(B_{\dagger}^{(i)}B^{(i)}-(TAS)^{\dagger}(TAS))v_{i+1}\Big{)}.\]
We bound the second moment (averaging over the randomness of the \(B^{(k)}\)'s), using that the \(B^{(k)}\), \(B_{\dagger}^{(k)}\) and \(B^{(k^{\prime})}\) are all drawn independently. First, notice that the means are bounded.
\[\tilde{v}_{k}=\mathop{\mathbf{E}}_{[k,d]}\left[v_{k}\right]=\sum_{i=k}^{d}U_{ i-k}(2(TAS)^{\dagger}(TAS)-I)a_{2i+1}S^{\dagger}b.\]
Using sub-multiplicativity of the operator norm,
\[\|\tilde{v}_{k}\| \leq\Big{\|}\sum_{i=k}^{d}U_{i-k}(2(TAS)^{\dagger}(TAS)-I)a_{2i+1} \Big{\|}\Big{\|}S^{\dagger}b\Big{\|}\] \[=\Big{\|}\sum_{i=k}^{d}U_{i-k}(T_{2}(x))a_{2i+1}\Big{\|}_{\text{ Spec}(TAS)}\Big{\|}S^{\dagger}b\Big{\|}\] \[\leq e\Big{\|}\sum_{i=k}^{d}U_{i-k}(2x^{2}-1)a_{2i+1}\Big{\|}_{ \text{sup}}\Big{\|}S^{\dagger}b\Big{\|}\] by Lemma 5.4 and Eq. ( \[\|TAS\|\] bound) \[\leq 2e\|\sum_{i=k}^{d}U_{i-k}(2x^{2}-1)a_{2i+1}\|_{\text{sup}}\] \[\leq 2e\frac{d-k+1}{\mu d^{2}}\] by Item 10.2 (c)
We now compute the second moment of \(v_{k}\).
\[\mathop{\text{\bf E}}_{[k,d]}\Big{[}\|v_{k}\|^{2}\Big{]} =\|\tilde{v}_{k}\|^{2}+\mathop{\text{\bf E}}_{[k,d]}\Big{[}\|v_{k }-\tilde{v}_{k}\|^{2}\Big{]}\] \[=\|\tilde{v}_{k}\|^{2}+\mathop{\text{\bf E}}_{[k,d]}\Bigg{[}\left\| \sum_{i=k}^{d}U_{i-k}(2(TAS)^{\dagger}(TAS)-I)4(B_{\dagger}^{(i)}B^{(i)}-(TAS) ^{\dagger}(TAS))v_{i+1}\right\|^{2}\Bigg{]}\]
We use independence of the \(B^{(i)}\)'s to note that the expectation of cross terms is \(0\), and therefore,
\[=\|\tilde{v}_{k}\|^{2}+\mathop{\text{\bf E}}_{[k,d]}\Bigg{[}\sum_ {i=k}^{d}\Big{\|}U_{i-k}(2(TAS)^{\dagger}(TAS)-I)4(B_{\dagger}^{(i)}B^{(i)}-( TAS)^{\dagger}(TAS))v_{i+1}\Big{\|}^{2}\Bigg{]}\] \[\leq\|\tilde{v}_{k}\|^{2}+\sum_{i=k}^{d}16(i-k+1)^{2}\mathop{\text {\bf E}}_{[i,d]}\Bigg{[}\Big{\|}(B_{\dagger}^{(i)}B^{(i)}-(TAS)^{\dagger}(TAS)) v_{i+1}\Big{\|}^{2}\Bigg{]}\] \[\leq\|\tilde{v}_{k}\|^{2}+\sum_{i=k}^{d}\frac{16(i-k+1)^{2}}{d^{4 }}\mathop{\text{\bf E}}_{[i+1,d]}\Big{[}\|v_{i+1}\|^{2}\Big{]}\,,\]
where the last line follows from the computation
\[\mathop{\text{\bf E}}_{k}\Big{[}\Big{\|}(B_{\dagger}^{(k)}B^{(k) }-(TAS)^{\dagger}(TAS))v_{k+1}\Big{\|}^{2}\Big{]}\] \[=\mathop{\text{\bf E}}_{k}\Big{[}\Big{\|}B_{\dagger}^{(k)}B^{(k) }v_{k+1}\Big{\|}^{2}\Big{]}-\Big{\|}(TAS)^{\dagger}(TAS)v_{k+1}\Big{\|}^{2}\] \[=\mathop{\text{\bf E}}_{k}\Big{[}(B^{(k)}v_{k+1})^{\dagger}(B_{ \dagger}^{(k)})^{\dagger}B_{\dagger}^{(k)}(B^{(k)}v_{k+1})\Big{]}-\Big{\|}(TAS) ^{\dagger}(TAS)v_{k+1}\Big{\|}^{2}\] \[\leq\mathop{\text{\bf E}}_{k}\Big{[}(B^{(k)}v_{k+1})^{\dagger}(( TAS)(TAS)^{\dagger}+\frac{1}{r}\mathop{\text{\bf Tr}}(I_{s})\|TAS\|_{F}^{2}I_{t})(B^{(k)}v_{ k+1})\Big{]}-\Big{\|}(TAS)^{\dagger}(TAS)v_{k+1}\Big{\|}^{2}\] by Eq. (12) \[\leq\mathop{\text{\bf E}}_{k}\Big{[}v_{k+1}^{\dagger}(TAS)^{ \dagger}((TAS)(TAS)^{\dagger}+\frac{s}{r}\|TAS\|_{F}^{2}I_{t})(TAS)v_{k+1}\] \[\qquad+v_{k+1}^{\dagger}\frac{1}{r}\mathop{\text{\bf Tr}}((TAS)( TAS)^{\dagger}+\frac{s}{r}\|TAS\|_{F}^{2}I_{t})\|TAS\|_{F}^{2}v_{k+1}\Big{]}-\Big{\|}(TAS)^{ \dagger}(TAS)v_{k+1}\Big{\|}^{2}\] \[=\frac{s}{r}\|TAS\|_{F}^{2}\|TASv_{k+1}\|^{2}+\frac{1}{r}\|TAS\|_{ F}^{4}\|v_{k+1}\|^{2}+\frac{st}{r^{2}}\|TAS\|_{F}^{4}\|v_{k+1}\|^{2}\]
\[\lesssim\left(\frac{s\|A\|_{F}^{2}}{r}+\frac{\|A\|_{F}^{4} }{r}+\frac{st\|A\|_{F}^{4}}{r^{2}}\right)\|v_{k+1}\|^{2}\qquad\text{ by Eqs. (}\|TAS\|_{F}\text{ bound) and (}\|TAS\|\text{ bound)}\] \[=\frac{\|A\|_{F}^{2}}{r}\Big{(}s+\|A\|_{F}^{2}+\frac{t\|A\|_{F}^{ 2}}{r}\Big{)}\|v_{k+1}\|^{2}\leqslant\frac{1}{d^{4}}\|v_{k+1}\|^{2}\]
Now, we can bound the second moments of all \(\|v_{k}\|\) by bounding the recurrence. We define the following recurrence \(c_{k}\) to satisfy \(\mathbf{E}_{[k,d]}[\|v_{k}\|^{2}]\leqslant c_{k}\), and then inductively show that \(c_{k}\leqslant\|\bar{v}_{k}\|^{2}+\frac{(d-k+1)^{2}}{\mu^{2}d^{5}}\leqslant 3 0\frac{(d-k+1)^{2}}{\mu^{2}d^{4}}\):
\[c_{k} =\|\bar{v}_{k}\|^{2}+\sum_{i=k}^{d}\frac{(i-k+1)^{2}}{d^{4}}c_{i+1}\] \[\leqslant\|\bar{v}_{k}\|^{2}+\sum_{i=k}^{d}\frac{(i-k+1)^{2}}{d^{ 4}}\frac{30(d-i+1)^{2}}{\mu^{2}d^{4}}\] \[\leqslant\|\bar{v}_{k}\|^{2}+\frac{30}{\mu^{2}d^{8}}\sum_{i=k}^{d }(i-k+1)^{2}(d-i+1)^{2}\] \[\leqslant\|\bar{v}_{k}\|^{2}+\frac{(d-k+1)^{5}}{\mu^{2}d^{8}}\] \[\leqslant\|\bar{v}_{k}\|^{2}+\frac{(d-k+1)^{2}}{\mu^{2}d^{5}}\]
We have shown that \(\mathbf{E}[\|v_{k}-\bar{v}_{k}\|^{2}]\leqslant\frac{1}{d}(\frac{(d-k+1)}{\mu d ^{2}})^{2}\). By Markov's inequality, with probability \(\geqslant 0.999\), we have that for all \(k\), \(\|v_{k}\|\lesssim\frac{d-k+1}{\mu d^{2}}\). Returning to the final error bound Eq. (48),
\[\sum_{k=0}^{d}\varepsilon^{(k)}\lesssim\varepsilon+\sum_{k=0}^{d}\mu\varepsilon \|v_{k}\|\lesssim\varepsilon+\sum_{k=0}^{d}\varepsilon(d-k+1)/d^{2}\lesssim\varepsilon. \tag{49}\]
This concludes the error analysis.
Output description properties.After the iteration concludes, we can compute \(u\) by computing \(x=\frac{1}{2}S(v_{0}-v_{1})\) in linear \(O(s)\) time. Then, \(u=\frac{1}{2}(u_{0}-u_{1})=\frac{1}{2}AS(v_{0}-v_{1})=Ax\). Note that though \(x\in\mathbb{C}^{n}\), its sparsity is at most \(s\), the number of columns of \(S\).
Further, using the prior bounds on \(v_{0}\) and \(v_{1}\), we have that
\[\sum_{j=1}^{n}\bigl{\|}A_{*j}\bigr{\|}^{2}|x_{j}|^{2} =\sum_{j=1}^{s}\bigl{\|}[AS]_{*j}\bigr{\|}^{2}\bigl{|}\frac{1}{2} (v_{0}-v_{1})_{j}\bigr{|}^{2}\] \[\leqslant\sum_{j=1}^{s}\tfrac{2}{s}\|A\|_{F}^{2}\bigl{|}\tfrac{1 }{2}(v_{0}-v_{1})_{j}\bigr{|}^{2}\] \[\leqslant\tfrac{2}{s}\|A\|_{F}^{2}\bigl{\|}\tfrac{1}{2}(v_{0}-v_{ 1})\bigr{\|}^{2}\] \[\leqslant\tfrac{2}{s}\|A\|_{F}^{2}(\|v_{0}\|^{2}+\|v_{1}\|^{2})\] \[\leqslant 2\|A\|_{F}^{2}/(\sqrt{s}\mu d)^{2}.\]
### Generalizing to even polynomials
We also obtain an analogous result for even polynomials. For the most part, the changes relative to Theorem 10.2 are superficial. The major difference is that the representation of the output is \(A^{\dagger}x+\eta b\) instead of \(Ax\), which results from a difference in the dimension of the output and from allowing a constant term in \(p\).
**Theorem 10.6**.: _Suppose we are given sampling and query access to \(A\in\mathbb{C}^{m\times n}\) and \(b\in\mathbb{C}^{n}\) with \(\|A\|,\|b\|\leqslant 1\), and suppose we are given a (2d)-degree even polynomial, written in its Chebyshev coefficients as_
\[p(x)=\sum_{i=0}^{d}a_{2i}T_{2i}(x).\]
_Then we can output a vector \(x\in\mathbb{C}^{m}\) and \(\eta\in\mathbb{C}\) such that \(\|A^{\dagger}x+\eta b-p(A)b\|\leqslant\varepsilon\|p\|_{\text{\rm Spec}(A)}\) in time_
\[O\bigg{(}\min\bigg{\{}\text{\rm nnz}(A),\frac{\|A\|_{F}^{4}}{( \mu\epsilon)^{4}}\log^{2}(n)\bigg{\}}+\frac{d^{5}\|A\|_{F}^{4}}{(\mu\epsilon)^ {2}}\log(n)\bigg{)},\]
_assuming \(\mu\) satisfies the following bounds:_
1. \(\mu\epsilon\leqslant\frac{1}{100d^{2}}\)_;_
2. \(\mu\sum_{i=0}^{d}|\tilde{a}_{2i}|\leqslant 1\)_;_
3. \(\mu\|\sum_{i=k}^{d}4\tilde{a}_{2i+2}x\cdot U_{i-k}(T_{2}(x))\|_{\sup}\leqslant \frac{d-k+1}{d^{2}}\) _for all_ \(0\leqslant k\leqslant d\)_._
_The output description has the additional properties that_
\[\sum_{i}\|A_{i,\star}\|^{2}|x_{i}|^{2}\lesssim\frac{\epsilon^{2} }{\log(n)d^{2}}\qquad\|x\|_{0}\lesssim\frac{\|A\|_{F}^{2}}{(\mu\epsilon)^{2}} \log(n),\]
_so that by Corollary 5.11, we can_
1. _Compute entries of_ \(A^{\dagger}x\) _in_ \(\|x\|_{0}=O\Big{(}\frac{\|A\|_{F}^{2}}{(\mu\varepsilon)^{2}}\log(n)\Big{)}\) _time;_
2. _Sample_ \(i\in[n]\) _with probability_ \(\frac{|y_{i}|^{2}}{\|y\|^{2}}\) _in_ \((\frac{\|A\|_{F}^{2}}{(\mu d)^{2}}+p(0)^{2})\frac{\|A\|_{F}^{2}\log(n)}{(\mu\|y \|)^{2}}\) _time;_
3. _Estimate_ \(\|y\|^{2}\) _to_ \(\nu\) _relative error in_ \((\frac{\|A\|_{F}^{2}}{(\mu d)^{2}}+p(0)^{2})\frac{\|A\|_{F}^{2}\log(n)}{(\nu\mu \epsilon\|y\|)^{2}}\) _time._
**Proposition 10.7**.: _In Theorem 10.6, we can always take \(1/\mu\lesssim d^{2}\log(d)\|p\|_{\text{\rm Spec}(A)}\)._
**Algorithm 10.8** (Even singular value transformation).:
**Input:**: A matrix \(A\in\mathbb{C}^{m\times n}\), vector \(b\in\mathbb{C}^{n}\), \(\epsilon>0\), a degree-\(2d\) polynomial \(p\), and the corresponding coefficients in the Chebyshev basis, i.e. \(p(x)=\sum_{i=0}^{d}a_{2i}T_{2i}(x)\).
**Preprocessing sketches:**: Let \(s,t=\Theta(\frac{\|A\|_{F}^{2}}{(\mu\epsilon)^{2}}\log(\frac{\eta}{\delta}))\). Compute all \(\tilde{a}_{2k}=a_{2k}-a_{2k+2}+\cdots\pm a_{2d}\).
1. If \(\text{\rm SQ}(A)\) and \(\text{\rm SQ}(b)\) are not given, compute data structures to simulate them in \(O(1)\) time;
2. Sample \(S\in\mathbb{C}^{s\times m}\) to be \(\text{\rm AMP}(A^{\dagger},A,\{\frac{\|A_{i,*}\|^{2}}{\|A\|_{F}^{2}}\})\) (Theorem 6.5) with sketch size \(s\);

3. Sample \(T\in\mathbb{C}^{n\times t}\) to be \(\text{\rm AMP}(SA,A^{\dagger}S^{\dagger},\{\frac{1}{2}(\frac{\|[SA]_{*,i}\|^{2}}{\|SA\|_{F}^{2}}+\frac{|b_{i}|^{2}}{\|b\|^{2}})\})\) (Theorem 6.5) with sketch size \(t\);
4. Compute a data structure that can respond to \(\text{\rm SQ}(SAT)\) queries in \(O(1)\) time;
**Clenshaw iteration:**: Let \(r=\Theta(d^{4}\|A\|_{F}^{2}(s+t))=\Theta(\frac{d^{4}\|A\|_{F}^{4}}{(\mu\epsilon )^{2}}\log\frac{n}{\delta})\). Then starting with \(v_{d+1}=v_{d+2}=\bar{0}^{s}\) and going until \(v_{0}\),
1. Let \(B^{(k)}=\texttt{best}(SAT)\) and \(B^{(k)}_{\dagger}=\texttt{best}((SAT)^{\dagger})\) with parameter \(r\);
2. Compute \(v_{k}=2(2B^{(k)}B^{(k)}_{\dagger}-I)v_{k+1}-v_{k+2}+4\bar{a}_{2k+2}B^{(k)}T^{\dagger}b\).
3. Output \(x=\frac{1}{2}S^{\dagger}(v_{0}-v_{1})\) and \(\eta=\bar{a}_{0}\) that satisfies \(\left\|A^{\dagger}x+\eta b-p(A)b\right\|\leqslant\varepsilon\).
Recall the odd and even recurrences defined in Eqs. (Odd Clenshaw) and (Even Clenshaw).
\[\begin{split} u_{k}&=2(2AA^{\dagger}-I)u_{k+1}-u_ {k+2}+2a_{2k+1}Ab,\qquad\qquad\qquad\qquad\text{(Odd Matrix Clenshaw)}\\ u&=\frac{1}{2}(u_{0}-u_{1}).\end{split}\]
The matrix analogue of the even recurrence is identical except that the final term is \(4\bar{a}_{2k+2}A^{\dagger}Ab\) instead of \(2a_{2k+1}Ab\).
\[\begin{split}\bar{a}_{2k}&\coloneqq a_{2k}-a_{2k+2} +a_{2k+4}-\cdots\pm a_{2d}\\ u_{k}&=2(2A^{\dagger}A-I)u_{k+1}-u_{k+2}+4\bar{a }_{2k+2}A^{\dagger}Ab,\qquad\qquad\qquad\text{(Even Matrix Clenshaw)}\\ u&=\bar{a}_{0}b+\frac{1}{2}(u_{0}-u_{1}).\end{split}\]
So, a roughly identical analysis works upon making the appropriate changes.
One (even) Clenshaw iteration.We maintain \(u_{k}\) as \((SA)^{\dagger}v_{k}\). The error analysis proceeds by bounding
\[\begin{split} 4A^{\dagger}Au_{k+1}-& 2u_{k+1}-u_{k+2}+4\bar{a}_{2k+2}A^{ \dagger}Ab\\ &\approx_{1}4(SA)^{\dagger}(SA)u_{k+1}-2u_{k+1}-u_{k+2}+4\bar{ a}_{2k+2}A^{\dagger}Ab\\ &=(SA)^{\dagger}\Big{(}4(SA)(SA)^{\dagger}v_{k+1}-2v_{k+1}-v _{k+2}\Big{)}+4\bar{a}_{2k+2}A^{\dagger}Ab\\ &\approx_{2}(SA)^{\dagger}\Big{(}4(SAT)(SAT)^{\dagger}v_{k +1}-2v_{k+1}-v_{k+2}\Big{)}+4\bar{a}_{2k+2}A^{\dagger}Ab\\ &\approx_{3}(SA)^{\dagger}\Big{(}4(SAT)(SAT)^{\dagger}v_{k +1}-2v_{k+1}-v_{k+2}+4\bar{a}_{2k+2}SAT^{\dagger}b\Big{)}\\ &\approx_{4}(SA)^{\dagger}\Big{(}4(SAT)B^{(k)}_{\dagger}v_{ k+1}-2v_{k+1}-v_{k+2}+4\bar{a}_{2k+2}SAT^{\dagger}b\Big{)}\\ &\approx_{5}(SA)^{\dagger}\Big{(}4B^{(k)}B^{(k)}_{\dagger}v_ {k+1}-2v_{k+1}-v_{k+2}+4\bar{a}_{2k+2}SAT^{\dagger}b\Big{)}\\ &\approx_{6}(SA)^{\dagger}\Big{(}4B^{(k)}B^{(k)}_{\dagger}v_ {k+1}-2v_{k+1}-v_{k+2}+4\bar{a}_{2k+2}B^{(k)}T^{\dagger}b\Big{)}\end{split}\]
Using the same approaches as in the odd case, we can show that the error incurred to compute \(u_{k}\) is \(O(\mu\epsilon(\left\|v_{k+1}\right\|+\left|\bar{a}_{2k+2}\right|))\).
Error accumulation across iterations.The corresponding recurrence for \(v_{k}\) is
\[\begin{split} v_{k}&=4B^{(k)}B^{(k)}_{\dagger}v_ {k+1}-2v_{k+1}-v_{k+2}+4\bar{a}_{2k+2}B^{(k)}T^{\dagger}b\\ &=(4(SAT)(SAT)^{\dagger}-2I)v_{k+1}-v_{k+2}\\ &\qquad\qquad+\Big{[}4\bar{a}_{2k+2}SAT^{\dagger}b+4(SAT( SAT)^{\dagger}-B^{(k)}B^{(k)}_{\dagger})v_{k+1}+(B^{(k)}-SAT)T^{\dagger}b\Big{]} \end{split}\]
This solves to
\[v_{k}=\sum_{i=k}^{d}U_{i-k}(2(SAT)(SAT)^{\dagger}-I)\Big{[}4\bar{a}_{2i+2}SAT^{\dagger}b+ 4(SAT(SAT)^{\dagger}-B^{(i)}B^{(i)}_{\dagger})v_{i+1}+(B^{(i)}-SAT)T^{ \dagger}b\Big{]}\]
From here, the identical analysis applies. The bound from Item 10.6(c) corresponds to \(\left\|\mathbf{E}[v_{k}]\right\|\) being bounded by \(\frac{d-k+1}{\mu d^{2}}\).
Output description properties.The main change is that the description is \(A^{\dagger}x+\tilde{a}_{0}b\), with an additional scalar \(b\) term. We first use the prior analysis to get \(\text{SQ}_{\phi}(A^{\dagger}x)\) for \(\phi=\|x\|_{0}\frac{\sum_{i}\|A_{i,*}\|^{2}|x_{i}|^{2}}{\|A^{\dagger}x\|^{2}}=\frac{ \|A\|_{F}^{2}}{(\mu d)^{2}\|A^{\dagger}x\|^{2}}\). Then we use Lemma 5.8 to get \(\text{SQ}_{\varphi}(A^{\dagger}x+\eta b)\) for \(\varphi=2(\frac{\|A\|_{F}^{2}}{(\mu d)^{2}}+\eta^{2}\|b\|^{2})/\left\|A^{\dagger} x+\eta b\right\|^{2}\). Finally, we note that \(\tilde{a}_{0}=p(0)\).
### Bounding iterates of singular value transformation
**Theorem 10.9**.: _Let \(p(x)\) be a odd degree \(2d+1\) polynomial such that \(\|p\|_{\sup}\leqslant 1.\) Then it suffices for Theorem 10.2 to take_
\[\mu=\Omega\big{(}(d^{2}\log^{2}(d))^{-1}\big{)}.\]
Proof.: All there is to prove is that
\[\left\|\sum_{i=k}^{d}a_{i}U_{i-k}(T_{2}(x))\right\|_{\sup}\leqslant(d-k+1)\log ^{2}(d+1)\]
First, we note that it suffices to prove the bound for \(\sum_{i=k}^{d}a_{i}U_{i-k}(y)\), since we get the bound by setting \(y=T_{2}(x)\). The strategy to proving this is the same as the one we discussed before:
\[\sum_{i}a_{i}U_{i-k}(T_{2}(x))\] \[=\sum_{i}a_{i}\sum_{j>0}T_{i-k-2j}(y)(1+[i-k-2j\neq 0])\] \[=\sum_{j\geqslant 0}\sum_{i}a_{i}T_{i-k-2j}(y)(1+[i-k-2j\neq 0])\] \[=\sum_{j\geqslant 0}\sum_{i}a_{i+k+2j}T_{i}(x)(1+[i\neq 0])\] \[=\sum_{i}T_{i}(x)(1+[i\neq 0])\sum_{j\geqslant 0}a_{i+k+2j}\] \[\leqslant\sum_{i}(1+[i\neq 0])\Big{|}\sum_{j\geqslant 0}a_{i+k+2j}\Big{|}\] \[\leqslant\sum_{i}(1+[i\neq 0])(32+8\ln^{2}(d+1))[i\leqslant d-k]\] \[\lesssim(d-k+1)\log^{2}(d+1)\]
**Theorem 10.10**.: _Let \(p(x)\) be a even degree-\((2d)\) polynomial such that \(\|p\|_{\sup}\leqslant 1.\) Then it suffices for Theorem 10.6 to take_
\[\mu=\Omega\big{(}(d^{2}\log^{2}(d))^{-1}\big{)}.\]
Proof.: All there is to prove is that
\[\left\|\sum_{i=k}^{d}4\tilde{a}_{2i+2}xU_{i-k}(T_{2}(x))\right\|_{\sup}=\left\| \sum_{i=k}^{d}2\tilde{a}_{2i+2}U_{2(i-k)+1}(x)\right\|_{\sup}\leqslant(d-k+1) \log^{2}(d+1)\]
The strategy to proving this is the same as the one we discussed before:
\[\sum_{i}\tilde{a}_{2i+2}U_{2(i-t)+1}(x)\]
\[=\sum_{i}\sum_{k\geq 0}(-1)^{k}a_{2(i+k+1)}\sum_{j=0}^{i-t}T_{2j+1}(x)\] \[=\sum_{i,j,k}(-1)^{k}a_{2(i+k+1)}T_{2j+1}(x)[j,k\geq 0][j\leq i-t]\] \[=\sum_{j\geq 0}T_{2j+1}\sum_{i,k}(-1)^{k}a_{2(i+k+1)}[k\geq 0][j \leq i-t]\] \[=\sum_{j\geq 0}T_{2j+1}\sum_{i,k,\ell}(-1)^{k}a_{2\ell}[\ell=i+k+1 ][k\geq 0][j\leq i-t]\] \[=\sum_{j\geq 0}T_{2j+1}\sum_{\ell}a_{2\ell}\sum_{k}(-1)^{k}[k \geq 0][j\leq\ell-k-1-t]\] \[=\sum_{j\geq 0}T_{2j+1}\sum_{\ell}a_{2\ell}\sum_{k}(-1)^{k}[0 \leq k\leq\ell-j-1-t]\] \[=\sum_{j\geq 0}T_{2j+1}\sum_{\ell}a_{2\ell}[\ell-j-1-t\in 2 \mathbb{Z}_{\geq 0}]\] \[=\sum_{j\geq 0}T_{2j+1}\sum_{\ell\geq 0}a_{2(2\ell+j+1+t)}\] \[\leq\sum_{j\geq t+1}^{d}\Bigl{|}\sum_{\ell\geq 0}a_{2(2\ell+j)} \Bigr{|}\] \[\leq(d-t)(32+8\ln^{2}(d+1))\] \[\lesssim(d-k+1)\log^{2}(d+1)\]
## 11 Dequantizing algorithms
### Recommendation systems
**Lemma 11.1** (Polynomial approximations of the sign function (Lemma 25 of [19])).: _For all \(\delta>0\) and \(\varepsilon\in(0,\frac{1}{2})\) there exists an efficiently computable odd polynomial \(p\in\mathbb{R}[x]\) with \(\deg(p)=O(\frac{1}{\delta}\log\frac{1}{\varepsilon})\)_
* _for all_ \(x\in[-2,2]\)_,_ \(|p(x)|\leq 1\)__
* _for all_ \(x\in[-2,-\delta]\cup[\delta,2]\)_,_ \(|p(x)-\operatorname{sign}(x)|\leq\varepsilon\)__
**Lemma 11.2**.: _For \(\varsigma,\eta\in(0,\frac{1}{2})\) and \(\sigma\in[-1,1]\), there exists an even polynomial \(p(x)\in\mathbb{R}[x]\) of degree \(O(\frac{1}{\eta\sigma}\log\frac{1}{\varsigma})\) such that \(p(x)\in[0,1]\) for all \(x\in[-1,1]\) and_
\[p(x)\in\begin{cases}[1-\varsigma,1]&\text{for }-1\leqslant x\leqslant-(1+\eta) \sigma\\ [0,\varsigma]&\text{for }-(1-\eta)\sigma\leqslant x\leqslant(1-\eta)\sigma\\ [1-\varsigma,1]&\text{for }(1+\eta)\sigma\leqslant x\leqslant 1\end{cases}\]
Proof.: Let \(s(x)\) be the approximation in Lemma 11.1 with error parameters \(\delta\leftarrow\eta\sigma\) and \(\varepsilon\leftarrow\frac{\varsigma}{2}\). Then
\[p(x)\coloneqq(1-\varsigma)\Bigl{(}1+\frac{s(x-\sigma)+s(-x-\sigma)}{2}\Bigr{)}\]
satisfies the desired parameters.
**Corollary 11.3** (De-quantizing Recommendation Systems).: _The recommendation systems problem is to sample from \([A_{\geqslant\sigma,\frac{1}{6}}]_{i,\star}\). Using Theorem 10.6 on the polynomial from Lemma 11.2, given \(\operatorname{nnz}(A)\) preprocessing time to make a data structure, we can compute an \(x\) such that \(\left\|A^{\dagger}x-A_{\geqslant\sigma,\frac{1}{6}}^{\dagger}e_{i}\right\|\leqslant\varepsilon\) in \(\widetilde{O}\Big{(}\frac{\left\|A\right\|_{F}^{\frac{1}{6}}}{\sigma^{6} \varepsilon^{2}}\Big{)}\) time, and sample from this \(x\) in \(\widetilde{O}\Big{(}\frac{\left\|A\right\|_{F}^{\frac{1}{6}}}{\sigma^{6} \varepsilon^{2}\left\|A^{\dagger}x\right\|^{2}}\Big{)}\) time._
### Linear regression
**Lemma 11.4** (Polynomial approximations of \(1/x\), [14, Lemma 40], following [13]).: _Let \(\kappa>1\) and \(0<\varepsilon\leqslant\delta\leqslant\frac{1}{2}\). Set \(b\coloneqq\lceil\kappa^{2}\log(\kappa/\varepsilon)\rceil\) and \(J\coloneqq\left\lceil\sqrt{b\log(4b/\varepsilon)}\right\rceil\). Then the \(O(\kappa\log\frac{\kappa}{\varepsilon})\)-degree odd real polynomial_

\[g(x)\coloneqq 4\sum_{j=0}^{J}(-1)^{j}\Bigg{[}\frac{\sum_{i=j+1}^{b}\binom{2b }{b+i}}{2^{2b}}\Bigg{]}T_{2j+1}(x)\]

_is \(\varepsilon\)-close to \(\frac{1}{x}\) on the domain \([-1,1]\setminus(-\frac{1}{\kappa},\frac{1}{\kappa})\); moreover, \(|g(x)|\leqslant 4J=O(\kappa\log(\frac{\kappa}{\varepsilon}))\) on all of \([-1,1]\)._
This is the polynomial that is applied to perform regression in the QSVT framework, with slight modifications to make the polynomial suitably bounded.
**Corollary 11.5**.: _If we have \(\mathsf{SQ}(A)\) and \(\mathsf{SQ}(b)\), we can apply Theorem 10.2 on \(g(x)/\kappa\) to error \(\varepsilon/\kappa\), where \(g(x)\) is the polynomial in Lemma 11.4. Then, after \(O(\operatorname{nnz}(A))\) or \(O(\kappa^{12}\|A\|_{F}^{4}\varepsilon^{-4}\operatorname{polylog}(n\kappa/ \varepsilon))\) preprocessing time, we can get a representation of a vector \(y\) such that \(\|y-g(A)b\|\leqslant\varepsilon\|b\|\) in_
\[O\Big{(}\frac{\kappa^{11}\|A\|_{F}^{4}}{\varepsilon^{2}}\operatorname{polylog }(n\kappa/\varepsilon)\Big{)}\]
_time. We can also sample and query from the output in this amount of time._
### Hamiltonian simulation
In this Subsection, we provide the corollary for de-quantizing Hamiltonian simulation. We begin by recalling the following definition:
**Definition 11.6** (Bessel functions of the first kind).: Given \(i\in\mathbb{N}\),
\[J_{i}(x)=\sum_{m=0}^{\infty}\frac{(-1)^{m}}{m!\Gamma(i+m+1)}\Big{(}\frac{x}{2} \Big{)}^{2m+i},\]
where \(\Gamma\) is the gamma function.
We also need the following polynomial approximation to trigonometric functions:
**Lemma 11.7** (Polynomial approximation to trigonometric functions, [14, Lemma 57]).: _Given \(t\in\mathbb{R}\) and \(\varepsilon\in(0,1/e)\), let \(r=\Theta\Big{(}t+\frac{\log(1/\varepsilon)}{\log\log(1/\varepsilon)}\Big{)}\). Then, the following degree-\(2r\) and degree-\((2r+1)\) polynomials
_satisfy that for all \(x\in[-1,1]\),_
\[\left\|\cos(tx)-J_{0}(t)-2\sum_{i\in[1,r]}(-1)^{i}J_{2i}(t)T_{2i}(x) \right\|_{\text{sup}}\leqslant\varepsilon,\] \[\left\|\sin(tx)-2\sum_{i\in[0,r]}(-1)^{i}J_{2i+1}(t)T_{2i+1}(x) \right\|_{\text{sup}}\leqslant\varepsilon,\]
_where \(J_{i}\) is the \(i\)-th Bessel function of the first kind._
**Corollary 11.8** (Hamiltonian simulation).: _Given a symmetric Hamiltonian \(H\in\mathbb{C}^{n\times n}\) with \(\|H\|\leqslant 1\) and a vector \(b\in\mathbb{C}^{n}\), after \(O(\operatorname{nnz}(A))\) preprocessing, we can output a description of a vector \(v\) such that, with probability \(\geqslant 0.9\), \(\left\|v-e^{iHt}b\right\|\leqslant\varepsilon\|b\|\) in \(O(t^{9}\|H\|_{F}^{4}\log(1/\varepsilon)^{9}/(\varepsilon^{2}\log\log(1/ \varepsilon)^{9}))\) time._
Proof.: Let \(p_{\cos}=J_{0}(t)+2\sum_{i\in[1,r]}(-1)^{i}J_{2i}(t)T_{2i}(x)\) and let \(p_{\sin}=2\sum_{i\in[0,r]}(-1)^{i}J_{2i+1}(t)T_{2i+1}(x)\). We apply Theorem 10.6 to get a description of \(c\) such that \(\|c-p_{\cos}(H)b\|\leqslant\varepsilon\|b\|\) and apply Theorem 10.2 to get a description of \(s\) such that \(\|s-p_{\sin}(H)b\|\leqslant\varepsilon\|b\|\). Then
\[e^{iHt}b=\cos(Ht)b+i\sin(Ht)b\approx_{\varepsilon\|b\|}p_{\cos}(Ht)b+ip_{\sin}( Ht)b\approx_{\varepsilon\|b\|}c+is.\]
This gives us a description \(O(\varepsilon)\)-close to \(e^{iHt}b\). Using Corollary 5.11, we can get a sample from this output in \(O\big{(}t^{6}\|H\|_{F}^{4}\log(n)\frac{1}{\varepsilon^{2}}\operatorname{polylog }(\frac{1}{\varepsilon})\big{)}\) time.
## Acknowledgments
ET and AB thank Nick Trefethen, Simon Foucart, Alex Townsend, Sujit Rao, and Victor Reis for helpful discussions. ET thanks t.f. for the support. AB is supported by Ankur Moitra's ONR grant. ET is supported by the NSF GRFP (DGE-1762114). Work on this paper was initiated at the Simon Institute's "Probability, Geometry, and Computation in High Dimensions" program in 2020; we thank the institute for its hospitality.
|
2307.10068 | Practical Model Reductions for Verification of Multi-Agent Systems | Formal verification of intelligent agents is often computationally infeasible
due to state-space explosion. We present a tool for reducing the impact of the
explosion by means of state abstraction that is (a) easy to use and understand
by non-experts, and (b) agent-based in the sense that it operates on a modular
representation of the system, rather than on its huge explicit state model. | Wojciech Jamroga, Yan Kim | 2023-07-19T15:40:06Z | http://arxiv.org/abs/2307.10068v2 | # Practical Model Reductions for Verification of Multi-Agent Systems
###### Abstract
Formal verification of intelligent agents is often computationally infeasible due to state-space explosion. We present a tool for reducing the impact of the explosion by means of state abstraction that is (a) easy to use and understand by non-experts, and (b) agent-based in the sense that it operates on a modular representation of the system, rather than on its huge explicit state model.
## 1 Introduction
_Multi-agent systems (MAS)_[21, 22] describe interactions of autonomous agents, often assumed to be intelligent and/or rational. With the development of Internet and social networks, the impact of MAS on everyday life is becoming more and more significant. At the same time, their complexity is rapidly increasing. In consequence, formal methods for analysis and verification of MAS are badly needed.
**Verification and model reduction.** Algorithms and tools for verification have been in constant development for 40 years, with temporal model checking being most popular [1, 1]. The main obstacle for _practical_ use of those techniques is state-space explosion. Model checking of MAS with respect to their _modular representations_ ranges from \(\mathbf{PSPACE}\)-complete to undecidable [1, 13]. A possible way to mitigate the complexity is by model reductions, such as abstraction refinement [12] and partial-order reduction [14]. Unfortunately, lossless reductions (i.e., ones that produce fully equivalent models) are usually too weak, in the sense that the resulting model is still too large for feasible verification.
**Towards practical abstraction.** In this work, we revisit the idea of lossy state abstraction [1, 13], and in particular _may/must abstraction_[1] that potentially removes relevant information about the system, but produces arbitrarily small reduced models. Such verification works best with users who are knowledgeable about the application domain, as its conclusiveness crucially depends on what aspects of the model are being removed. Ideally, the user should be a domain expert, which often implies no in-depth knowledge of verification algorithms. This calls for a technique that is easy to use and understand, preferably supported by a Graphical User Interface (GUI). Moreover, the abstraction should be _agent-based_ in the sense that it operates on modular representations of the MAS, and does not require generating the full explicit state model before the reduction. The theoretical backbone of our abstraction scheme is presented in [1]. Here, we report on the implementation, and show its usefulness through case studies.
**Contribution.** We propose a tool for reduction of MAS models by removing an arbitrary subset of variables from the model specification. After the user selects the variables to be removed, the tool can produce two new model specifications: one guaranteed to overapproximate, and one to underapproximate the original model. Then, the user can verify properties of the original model by model checking the new specifications with a suitable model checker. Our model specifications are in the form of _MAS Graphs_[1, 13], a variant of automata networks with asynchronous execution semantics and synchronization on joint action labels [15, 16]. As the model checker of choice, we use Uppaal [1], one of the few temporal model checkers with GUI.
Our tool provides a simple command-line interface, where the user selects the input file with a model specification prepared in Uppaal, the variables to be abstracted away, and the abstraction parameters. It outputs a file with the over- (resp. under-)approximating model specification, that can be opened in Uppaal for scrutiny and verification. The source code and examples are available at [https://tinyurl.com/ijcai-demo](https://tinyurl.com/ijcai-demo). Importantly, the abstraction uses modular representations for input and output; in fact, it does _not_ involve the generation of the global state space at all. To our best knowledge, this is the first tool for practical user-defined model reductions in model checking of MAS.
**Related work.** The existing implementations of state abstraction for temporal model checking concern mostly automated abstraction. In particular, CEGAR [12, 13] has been implemented for NuSMV [12], and 3-valued abstraction [1, 1] was implemented in Yasm [11] and YOGI [1]. In each case, abstraction involves the generation of the global state space,
which is the main bottleneck when verifying MAS. Other, user-defined abstraction schemes have been defined only theoretically [2, 14, 15], and also require generating all global states and/or transitions. The approaches in [14, 15] come closest to our work, as they use modular representations of the state space. However, they both need a global representation of the transition space, and no implementation is reported.
## 2 Formal Background
**MAS graphs and templates.** To specify the system to be verified, we use _MAS graphs_, based on standard models of concurrency, and compatible with Uppaal model specifications. A _MAS graph_ is a multiset of _agent graphs_, possibly sharing a set of _global variables_. Each agent graph includes finitely many _locations_ and _private variables_ that, together, define its local state space. Moreover, _edges_ between locations determine the local transition relation. Each edge can be labelled with a randomized _selection_ command, boolean _precondition_, _synchronisation_ command, and/or a _postcondition_ updating the values of some variables. A synchronizing edge can only be taken with a complementary one in another agent. An example agent graph is shown in Figure 1.
A _MAS template_ treats each agent graph as a template, and specifies the number of its instances that occur in the verified system (each differing only by the value of variable id).
**Models.** Every MAS graph \(G\) can be transformed to its _combined MAS graph_: technically, a single agent graph \(comb(G)\) given by the asynchronous product of the agent graphs in \(G\). Each location in \(comb(G)\) is a tuple of agents' locations in \(G\). Moreover, the set of variables in \(comb(G)\) is the union of all variables occurring in \(G\). A _global model_ is obtained from \(comb(G)\) by unfolding it to the labelled transition system where states are defined by combined locations and valuations of all the variables. Such models are usually huge, and create an important bottleneck in model checking MAS.
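To make the structures above concrete, the following is a rough Python sketch of an agent graph and of the locations of the combined MAS graph. The field names, the synchronisation label, and the Authority locations other than coll_vts are invented for illustration; this is neither the tool's internal representation nor Uppaal's XML format.

```python
from dataclasses import dataclass, field
from itertools import product

@dataclass
class Edge:
    src: str          # source location
    dst: str          # target location
    select: str = ""  # randomized selection, e.g. "dec : int[1,2]"
    guard: str = ""   # boolean precondition
    sync: str = ""    # synchronisation label, e.g. "deliver!" / "deliver?"
    update: str = ""  # postcondition updating variables

@dataclass
class AgentGraph:
    locations: list
    variables: dict                      # private variables mapped to their domains
    edges: list = field(default_factory=list)

def combined_locations(graphs):
    """Locations of the combined MAS graph are tuples of agents' local locations."""
    return list(product(*(g.locations for g in graphs)))

voter = AgentGraph(["idle", "waits", "has", "voted", "end"], {"dec": range(0, 3)})
authority = AgentGraph(["init", "coll_vts", "tally"], {"ep_sent": range(0, 10)})
print(len(combined_locations([voter, authority])))  # 5 * 3 = 15 combined locations
```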
**Formal verification and model reduction.** Our tool addresses model checking of temporal properties expressed in the well known branching-time logic \(\mathbf{CTL^{\star}}\)[1]. To mitigate the impact of state-space explosion, we use _state abstraction_, i.e., a method that reduces the state space by clustering similar _concrete states_ into a single _abstract state_. In order for the scheme to be practical, it must be easy to use, and avoid the generation of the concrete global model. We summarize the details of our abstraction scheme in the next section.
## 3 Abstraction by Removal of Variables
Our tool employs the abstraction scheme of [13], and produces specifications of two abstract models: a _may-abstraction_ (that overapproximates the concrete states and transitions) and a _must-abstraction_ (that underapproximates them). Consequently, if a universal \(\mathbf{CTL^{\star}}\) formula is true in the _may_-abstraction, then it must be true in the concrete model, and if it is false in the _must_-abstraction, then it must be false in the concrete model.
**Variable removal.** In the simplest variant, the abstraction concerns a complete removal of some variables \(V\subseteq\textit{Var}\) from the model specification. For example, one might remove variables mem_vt, mem_sg from the agent graph in Figure 1, i.e., the voter's memory of the cast vote and the voting declaration status. Selection of the right variables to remove requires a good understanding of the application domain; we assume that it is provided by the user. Roughly speaking, the abstraction procedure takes the combined MAS graph \(comb(G)\), computes an approximation of the reachable values for every \(v\in V\), and processes the edges of \(comb(G)\) by substituting the occurrences of \(v\) at location \(\ell\) with the values \(u\in appr(v,\ell)\). If \(appr(v,\ell)\) overapproximates (resp. underapproximates) the actual reachable values of \(v\) at \(\ell\), then the resulting model is a _may_ (resp. _must_)-abstraction of \(G\).
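Schematically, the edge-processing step can be pictured as in the sketch below. This is only our illustrative reading of the procedure, not the tool's code: plain string replacement stands in for proper syntactic substitution, and appr is a user-supplied map from (variable, location) to candidate values, over-approximated for a _may_- and under-approximated for a _must_-abstraction.

```python
def abstract_edges(edges, removed_vars, appr):
    """Replace every edge mentioning a removed variable v by one copy per
    value in appr(v, source_location), with v substituted by that value."""
    result = []
    for edge in edges:
        variants = [edge]
        for v in removed_vars:
            expanded = []
            for cand in variants:
                if v not in cand["guard"] and v not in cand["update"]:
                    expanded.append(cand)  # this edge does not mention v
                    continue
                for val in appr(v, cand["src"]):
                    copy = dict(cand)
                    # crude stand-in for syntactic substitution of v by val
                    copy["guard"] = copy["guard"].replace(v, str(val))
                    copy["update"] = copy["update"].replace(v, str(val))
                    expanded.append(copy)
            variants = expanded
        result.extend(variants)
    return result

edges = [{"src": "has", "dst": "end", "guard": "mem_sg==1", "update": ""}]
print(abstract_edges(edges, ["mem_sg"], lambda v, loc: [0, 1]))
# -> two abstract edges, with guards "0==1" and "1==1"
```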
**Variable merge and scoping.** More generally, a subset of variables can be merged into a fresh variable by means of a user-defined mapping function. For example, mem_sg and mem_vt can be merged into a boolean variable valid given by (mem_sg+mem_vt>0), indicating the validity of the vote.
Additionally the user can specify the scope of abstraction, i.e., a subset of locations where the abstraction is applied.
**Abstraction on MAS templates.** In some cases, approximation of variable domains on the combined MAS graph is computationally infeasible. An alternative is to compute it directly on the MAS template by the right approximation of the synchronization edges. On the down side, this sometimes results in largely suboptimal abstract models, i.e., ones more likely to produce inconclusive verification results.
## 4 Architecture
The main components of the tool are: (1) local domain approximation and (2) generation of abstract model specifications. Additionally, the tool allows the user to perform simple pre-processing and code analysis, and to store parameters in a configuration file. Each component can be called from the command line, possibly followed by a list of arguments:
* configure: sets the parameters in the configuration file;
* unfold: produces the combined MAS graph;
* approx: computes an approximation of the local domain;
* abstract: generates an abstract model specification based on the provided approximation of local domain;
* info: lists the variables, locations, and edges in the model.

Figure 1: _Voter_ template. The agent first declares if she prefers to receive the election package by post (dec=2) or in person (dec=1). Then, she waits until it can be collected, and casts the ballot together with her voting card. The _select_ label for edge idle\(\rightarrow\)waits (resp. has\(\rightarrow\)end) specifies a nondeterministic choice of the value of variable dec\(\in\{1,2\}\) (resp. \(\texttt{vt}\in\{1,\ldots,NC\}\) and \(\texttt{sg}\in\{0,1\}\))
**Local domain approximation.** Takes a subset of variables \(V\), a target template ('ext' for the combined MAS graph) and an abstraction type t \(\in\{\text{upper},\text{lower}\}\), and computes a t-approximation of the local domain over \(V\). The result is saved to a JSON file, where location identifiers are mapped to an array of evaluation vectors.
**Abstract model generation.** Takes the mapping function with an upper-approximation (resp. lower-approximation) of the local domain, and computes the corresponding may-abstraction (resp. must-abstraction). The mapping function specifies the target agent name or template name, the scope of abstraction, variables to be removed, and possibly a merge variable. We assume that the input provided by the user is correct; some debugging might be added in the future.
## 5 Experimental Results
We have evaluated the tool by means of experiments on two benchmarks: a simple postal voting scenario and gossip learning for social AI. The model specifications are available for download with the tool. The experiments have been performed in combination with Uppaal v4.1.24 (32-bit) on a machine with Intel i7-8665U 2.11 GHz CPU, running Ubuntu 22.04. We report the results for _may-abstractions_, typically more useful for universal branching-time properties.
**Postal voting.** We use a scalable family of MAS graphs proposed in [10] to model a simplified postal voting system. The system consists of \(NV\) Voters, voting for \(NC\) candidates, and a single Election Authority, and proceeds in four subsequent phases: collection of voting declarations, preparation and distribution of election packages, ballot casting, and tallying. The verification concerns a variant of resistance to ballot stuffing, expressed by formula \(\varphi_{\mathit{bstuff}}\):
A[](b_recv<=ep_sent && ep_sent<=NV)
where b_recv and ep_sent are variables storing the number of received ballots and sent election packages, respectively. For the experiments, we try the following abstractions:
* **A1**: removes variables mem_vt and mem_sg from the Voter template, i.e., the voter's memory of the cast vote and the voting declaration status;
* **A2**: removes variable mem_dec at Voter's locations \(\{\texttt{has},\texttt{voted}\}\) and variable dec_recv at Authority's location \(\{\texttt{coll\_vts}\}\), i.e., the information about how the election package has been delivered;
* **A3**: the combination of A1 and A2.
The results in Table 1 present the numbers of states in the global model generated during the verification, as well as the verification running times (in seconds), including the generation of abstract model specifications where applicable. Formula \(\varphi_{\mathit{bstuff}}\) is satisfied in all the reported instances; all three abstractions have been conclusive on it.
**Social AI.** The second series of experiments uses the specifications of gossip learning for social AI [1, 1], proposed in [11]. The system consists of a ring network of AI agents, acting in three phases: data gathering, learning, and sharing of knowledge. The goal of the agents is to collectively reach knowledge of quality \(\mathit{mqual}\geq 2\). The system also includes an attacker who can impersonate any agent and fake its quality level. The model specification, given as an asynchronous MAS [13] and coded in the input language of the STV model checker [11], was manually translated into the input language of Uppaal. Afterwards, we hardcoded the attacker's strategy to always share the lowest quality model, and verified formula \(\varphi_{\mathit{compr}}\):
A[](exists(i:int[1,NA])(impersonated!=i && (!AI(i).wait || AI(i).mqual<2))).
\(\varphi_{\mathit{compr}}\) says that, on all execution paths, at least one AI agent is compromised. The model checking performance is shown in Table 2. We have been able to conduct verification for concrete models with up to 5 agents (4 honest AI and 1 attacker), and up to 7 agents after applying a _may_-abstraction that discards all variables except for \(\mathit{mqual}\) in the AI template.
## 6 Conclusions
We propose a tool for practical model reductions in multi-agent systems. The tool addresses state-space explosion by removal of selected variables from the model while preserving the truth of \(\mathbf{ACTL}\) formulas. The experiments show significant gains in terms of verification time as well as memory, with minimal time used by the abstraction procedure.
In the future, we plan to extend our tool to abstractions preserving temporal-epistemic and strategic properties in combination with the MCMAS and STV model checkers [15, 11].
Table 1: Verification of \(\varphi_{\mathit{bstuff}}\) on models with 3 candidates. \(\#V\) is the number of Voter instances. We report the model checking performance for the concrete model, followed by _may_-models obtained by abstractions A1, A2, and A3.

| #V | Concrete #St | Concrete t | A1 #St | A1 t | A2 #St | A2 t | A3 #St | A3 t |
|---|---|---|---|---|---|---|---|---|
| 1 | 31 | 0 | 23 | 0 | 22 | 0 | 18 | 0 |
| 2 | 529 | 0.1 | 217 | 0.1 | 214 | 0.1 | 120 | 0.1 |
| 3 | 10891 | 0.1 | 2203 | 0.1 | 2440 | 0.1 | 838 | 0.1 |
| 4 | 2.3e+5 | 0.9 | 22625 | 1 | 29938 | 0.1 | 5937 | 0.1 |
| 5 | 5.1e+6 | 25 | 2.3e+5 | 1 | 3.7e+5 | 1 | 42100 | 0.6 |
| 6 | memout | – | 2.3e+6 | 20 | 4.9e+6 | 23 | 2.9e+5 | 5 |
| 7 | memout | – | 2.2e+7 | 304 | memout | – | 2.0e+6 | 33 |
| 8 | memout | – | memout | – | memout | – | 1.4e+7 | 357 |
Table 2: Verification of \(\varphi_{\mathit{compr}}\) on models with social AI. \(\#Ag\) is the number of agents. "Reduct" shows the level of reduction of the state space (in %).

| #Ag | Concrete #St | Concrete t | Abstract #St | Reduct (%) | Abstract t |
|---|---|---|---|---|---|
| 2 | 165 | 0 | 38 | 76.97 | 0 |
| 3 | 8917 | 0.1 | 555 | 93.78 | 0 |
| 4 | 4.6e+5 | 1.5 | 10247 | 97.77 | 0.1 |
| 5 | 2.1e+7 | 123 | 1.5e+5 | 99.29 | 1.2 |
| 6 | memout | – | 2.8e+6 | – | 42 |
| 7 | memout | – | 4.1e+7 | – | 682 |
| 8 | memout | – | memout | – | – |
## Acknowledgments
The work was supported by NCBR Poland and FNR Luxembourg under the PolLux/FNR-CORE projects STV (POLLUX-VII/1/2019) and SpaceVote (POLLUX-XI/14/SpaceVote/2023), as well as the CHIST-ERA grant CHIST-ERA-19-XAI-010 by NCN Poland (2020/02/Y/ST6/00064).
|
2304.09069 | Robustness and complexity | When a biological system robustly corrects component-level errors, the direct
pressure on component performance declines. Components may become less
reliable, maintain more genetic variability, or drift neutrally in design,
creating the basis for new forms of organismal complexity. This article links
the protection-decay dynamic to other aspects of robust and complex systems.
Examples include the hourglass pattern of biological development and Doyle's
hourglass architecture for robustly complex systems in engineering. The deeply
and densely connected wiring architecture in biology's cellular controls and in
machine learning's computational neural networks provide another link. By
unifying these seemingly different aspects into a unified framework, we gain a
new perspective on robust and complex systems. | Steven A. Frank | 2023-04-18T15:42:27Z | http://arxiv.org/abs/2304.09069v1 | # Robustness and complexity
###### Abstract
When a biological system robustly corrects component-level errors, the direct pressure on component performance declines. Components may become less reliable, maintain more genetic variability, or drift neutrally in design, creating the basis for new forms of organismal complexity. This article links the protection-decay dynamic to other aspects of robust and complex systems. Examples include the hourglass pattern of biological development and Doyle's hourglass architecture for robustly complex systems in engineering. The deeply and densely connected wiring architecture in biology's cellular controls and in machine learning's computational neural networks provide another link. By unifying these seemingly different aspects into a unified framework, we gain a new perspective on robust and complex systems.
Keywords: Evolution, system design, paradox of robustness, constructive neutral evolution, genomic complexity, deep learning

_The ultimate result of shielding men from the effects of folly, is to fill the world with fools._ -- Herbert Spencer

Author affiliation: Department of Ecology and Evolutionary Biology, University of California, Irvine, CA 92697–2525, USA
## 1 Introduction
The more strongly a robust system protects itself from the failure of its components, the more the system's components will tend to decay in performance. Suppose, for example, that our bodies added another protection against cancer. Then, a breakdown in an existing protection would have less consequence because the extra protection provides an additional check against disease.[2]
Reduced consequence means that the direct pressure of natural selection on existing components has weakened. Less selective pressure leads to evolutionary decay. The ultimate result of shielding a system from the failure of its components is to fill the system with weakened components. I call that the paradox of robustness.[3]
The logic is so simple and compelling that it must be true. But is it important? How much of evolutionary pattern and biological design arise from the paradox of robustness?
The answers remain unclear. Part of the difficulty is that the paradox of robustness focuses too narrowly. Instead, we must think more broadly about how robustness influences the architecture of organismal design.
I build toward that broader perspective through a series of steps. The first section develops the paradox of robustness by expanding the cancer example and adding an engineering example from the history of computer hard drives and data storage. Those examples clarify how system robustness leads to component decay and to greater complexity of design.
The second section links various ideas to the paradox of robustness, particularly the theory of constructive neutral evolution.[4, 5] The similarities and differences between these theories help to build a broader framework.
The third section reviews observed patterns of robust and complex systems. The hourglass pattern of development[6, 7] and the hourglass pattern for the architecture of robust systems[8] provide interesting examples, suggesting an expanded conceptual foundation for robustness and complexity.
The final section illustrates the new theory's perspective. In machine learning, deeply and densely connected computational neural networks revolutionized artificial intelligence.[9] Similarly, deeply and densely wired regulatory control architectures of
cells, which may have arisen as a consequence of the paradox of robustness, could have accelerated evolutionary adaptation in the history of life [10].
## The paradox of robustness
I illustrate the theory with two examples, cancer and computer hard drives.
To protect against cancer, our bodies have multiple protections. Tumors progress as those protections break down [11]. For example, several checkpoints act as brakes on the cell cycle. Knockouts of those brakes allow continuous cell division, favoring tumor growth. At the cellular level, damage often induces cell suicide, culling aberrant and potentially precancerous cells. Knockout of the normal apoptotic suicide program promotes cancer.
Different tissues in our bodies seem to have different numbers of protections against cancer [12]. The same tissue in different organisms seems to have different numbers of protections [13]. In other words, the amount of protection seems to be evolutionarily labile.
That evolutionary lability leads to a thought experiment [2]. What happens when an extra protection gets added? Initially, the system more robustly protects against perturbations that cause disease because there is one more factor that limits the spread of a tumor. That enhanced system robustness also changes the pressure of natural selection acting directly on the protective components that were already present. For example, losing a brake on the cell cycle is less important if there is now another apoptotic mechanism that can detect such damage and kill the cell.
Weakened selective pressure enhances the spread of mutations [14] and the heritability of disease [15]. The reduced benefit provided by a particular component may also cause that component to decay evolutionarily to a less costly, lower performing, and sloppier state [3].
As the components decay, the newly added protection becomes evolutionarily irreversible [3]. Removing that protection now exposes the lower performing components without the additional protection. The system would perform poorly. Thus, additional robust protection and the subsequent evolutionary relaxation of the prior components lead to an irreversible increase in complexity.
This relation between enhanced system robustness and component decay follows simple logic. As the system becomes better at protecting against failure, fluctuations and sloppiness in component performance matter less. Enhanced system robustness associates with decaying component performance. That logic applies broadly, to any system evolving with respect to performance.
The logic is so simple and general that it would seem to be a fundamental principle of evolutionary design. However, it is challenging to find compelling examples in biology [16]. One difficulty is that we cannot easily see the steps by which this evolutionary process occurs. We would need evidence for the origin of a new mechanism that enhances robustness at the system level. We would then need to trace the history over which various components of the system decay in performance.
In searching for examples that illustrate the steps of increasing system robustness and decaying component performance, the best case that I could find comes from the engineering history of computer hard drives and data storage.
Many years ago, a small hard drive was expensive. Part of the expense arose from the need to make the drives reliable, with low failure rates and low error rates. Drive failures cause catastrophic loss of data. Data errors cause loss of confidence, eventually rendering the data useless.
The primary approach to data storage changed over time. Instead of focusing on reliable and expensive individual drives, storage design emphasized Redundant Arrays of Inexpensive Disks, or RAID arrays [17]. Redundancy enhances reliability by making copies of the data. For example, copies of data may be stored on two disks. If one drive fails, the other has a full copy of the data and nothing is lost. However, making two fully redundant copies slows performance and doubles the number of drives required, increasing the cost.
To gain the benefit of redundancy and mitigate the costs, RAID arrays often use special RAID controllers that are small computers sitting above the data storage array. When the data come in to be stored, the RAID controller breaks up the data into
small chunks and spreads those data chunks across the array in a partially redundant manner. More data copies enhance protection against the failure of individual drives but also increase costs and reduce performance. One can tune the redundancy to achieve particular goals of reliability, cost, and performance. Most modern mid-level and high-level computing systems use some variant of RAID data storage.
If a drive in the RAID array fails, one can pull out that drive and put in a new one without turning off the system. The RAID controller uses the redundant data on the other drives to fill the new drive with the same data held by the failed drive. The system fully recovers while continuing to run.
Here is the key point with regard to the paradox of robustness. Because a failed drive causes relatively little disruption, it is no longer so important that individual drives be engineered to high reliability at large expense. Instead, system designers choose relatively inexpensive disks that have relatively high failure rates.[18]
The robustness gained by designing reliability at the higher system level causes a shift in the marginal costs and benefits of component disk performance. The best design typically allows a decay in component disk performance, leading to a reliable system that has cheaper, lower-performing, and sloppier components.
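A rough back-of-the-envelope calculation makes the shifted trade-off concrete. All numbers below are invented for illustration, and the model ignores correlated failures and rebuild errors; the point is only that mirroring makes data loss far less likely than relying on a single premium drive, so paying extra for component reliability stops being worthwhile.

```python
# Annual probability of data loss: one expensive drive vs. a mirrored pair of cheap drives.
premium_fail = 0.005   # assumed annual failure probability of a premium drive
cheap_fail   = 0.05    # assumed annual failure probability of a cheap drive
rebuild_days = 2       # assumed time to rebuild onto a replacement drive

single_loss = premium_fail  # data is lost whenever the lone drive fails

# Mirrored pair: data is lost only if the surviving drive also fails
# during the short rebuild window that follows the first failure.
mirror_loss = 2 * cheap_fail * (cheap_fail * rebuild_days / 365)

print(f"single premium drive  : {single_loss:.4%}")   # 0.5000%
print(f"mirror of cheap drives: {mirror_loss:.4%}")   # ~0.0027%
```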
In engineering, if we wish to redesign the system, we can throw out the current design and start over. In biology, the greater robustness achieved by adding a high-level manager above the component parts will often be irreversible because the lower-level components will evolve to depend on the higher level of control and protection.
## Related theories
The cancer and RAID examples introduced the paradox of robustness. I now describe some related theories to give a sense of similar topics and to broaden the conceptual framing of the subject. I start with brief summaries of three ideas to provide some historical perspective. I then develop the theory of constructive neutral evolution in detail.
The first idea comes from Susan Lindquist. She studied cell biology systems in which one protein buffers the effects of variability in other proteins.[19] In the absence of the buffer, amino acid substitutions typically reduce protein performance. Some of those deleterious amino acid substitutions become functionally neutral in the presence of the buffer. I describe a specific example below.
The point here is that buffering causes a kind of robustness in which changes that were previously deleterious become neutral. In the presence of the buffer, those neutral variants will accumulate in the population, increasing genetic diversity. In essence, the buffered proteins decay in their performance when isolated from their system's robust protection, a kind of evolutionary relaxation in response to system robustness.
The second idea comes from the theory of neutral networks.[20, 21] We start with a network of interacting components within cells. Given our focus on robustness, we imagine some higher-level process that renders alternative network interactions nearly equivalent in function. The neutrality of alternative network interactions leads to evolutionary drift in those interactions.
Eventually, the interacting components may arrive at a state from which they can achieve a significantly altered way of functioning or a significantly better way to adapt to a novel environmental challenge. Put another way, the neutrality imposed by robustness leads to wide neutral exploration and subsequent novelty in design and function.
The third idea concerns the theory of fitness landscapes.[22, 23] Recently, that theory has been developed most extensively in the study of viruses because one can measure genotype, phenotype, and fitness more easily than for most other organisms.[24, 25] Fitness landscape theory has not been linked to the paradox of robustness but could be an important future development.
The paradox of robustness essentially describes how changes in system robustness tend to flatten the fitness landscape that shapes the evolution of the system's components. The flatter landscape leads to less intense selection, more variability, and altered marginal costs and benefits. Explicit analyses of those changes may provide further insight.
I now turn to the most important related theory,
constructive neutral evolution. This theory originated in the 1990s in Dalhousie, Canada, predating my own work on the paradox of robustness by about ten years. The work was developed by Michael Gray, Arlin Stoltzfus, Ford Doolittle, and many others.[26, 27, 28, 5, 29, 4] I only learned about constructive neutral evolution recently. One goal for this article is to bring the complementary insights of constructive neutral evolution and the paradox of robustness together to advance our understanding over a broader set of biological problems.
RNA editing provided the first example of this theory.[26] Typically, DNA makes RNA makes protein. In some organisms, DNA makes RNA, the RNA sequence is altered by an editing process, and the edited RNA sequence makes protein. For example, C nucleotides in the RNA may be converted to U nucleotides. U nucleotides in RNA are analogous to T nucleotides in DNA. The C \(\rightarrow\) U change means that, in the RNA, a U remains U, and a C becomes U.
In the absence of RNA editing, a DNA nucleotide G codes for an RNA C, and a DNA nucleotide A codes for an RNA U. In the presence of RNA editing in which C \(\rightarrow\) U, the DNA nucleotides G and A both code for RNA U. Thus, RNA editing causes neutrality at the DNA level between G and A nucleotides, leading to drift in the frequency of those nucleotides at particular sites in the DNA sequence.
If the RNA editing process were removed, some of the G and A DNA nucleotide variants would associate with different amino acids in the protein. The majority of amino acid changes would likely be deleterious. Thus, once RNA editing is in place and the associated DNA nucleotides drift in frequency, it will often be difficult evolutionarily to remove the RNA editing process.
An evolutionary ratchet occurs. RNA editing causes neutrality at the DNA level. Drift occurs. Removal of RNA editing leads to new deleterious DNA variants, disfavoring loss of the RNA editing process.
The general scenario leads to a ubiquitous force of genomic complexification. First, a new mechanism of genomic processing arises. That mechanism buffers variability in another process, rendering some variants neutral. Drift follows. The new buffering mechanism cannot be removed without deleterious consequences. The genomic processing system has become irreversibly more complex. Constructive neutral evolution has occurred.
Eukaryotic genomes often seem irrationally complex. Constructive neutral evolution shows how such complexity may arise nonadaptively, as a consequence of buffering or robustness mechanisms.[27]
A second example of constructive neutral evolution comes from Susan Lindquist's work on cellular buffering, briefly mentioned at the start of this section.[19] Lindquist worked on the heat shock protein Hsp90. This protein helps other proteins to fold correctly into functional three-dimensional structures. In the absence of Hsp90, primary amino acid sequence variants may misfold. In the presence of the Hsp90 folding chaperone, some of those sequence variants fold into approximately equivalent functional shapes.
Lindquist realized that the Hsp90 chaperone adds robustness to the system, effectively buffering amino acid variation and causing different genetic variants to be selectively neutral. Lindquist emphasized that such robustness and associated increase in genetic variation may enhance future adaptation. In a subsequently changed environment, some of those currently neutral variants might become advantageous, allowing rapid evolutionary response to the changed environment.
Lindquist developed her ideas in the 1990s, around the same time as the theory of constructive neutral evolution first arose. The ideas are similar. However, Lindquist focused on genetic variation and future evolutionary response. By contrast, constructive neutral evolution emphasizes the complexification of cellular process. Once protein folding chaperones are in place and the buffered neutral variation follows in the primary amino acid sequences, removing the chaperone may be significantly deleterious. An essentially irreversible complexification of cellular process occurs.
Finally, in my own work on the paradox of robustness, I have emphasized that genomes are overwired.[10] By that, I mean that the regulatory network of key processes seems to contain a very large number of inputs into particular functions. An engineer designing such a control system would not create such a complexly wired network that is so difficult to understand and adjust.
For example, many different factors influence the expression of a gene. Transcription factors bind to
nearby DNA, raising or lowering gene expression. Distant sites in the genome act as enhancers or suppressors. DNA winds around histone proteins, in which both the histones themselves and the DNA winding affect expression. The DNA is marked with methyl or acetyl groups, altering expression. A variety of RNAs encoded in other parts of the genome influence different steps in the DNA to RNA to protein process. Why is it all so complex?
The paradox of robustness naturally leads to additional higher level regulatory controls that cause evolutionary relaxation of lower level controls. The complexification is typically irreversible. Additional layers of robustness get added, leading to a deeply and densely wired control system.
Constructive neutral evolution would lead to a similar interpretation. A primary goal of this article is to consider how the paradox of robustness and constructive neutral evolution provide complementary perspectives on complexity, each theory emphasizing different aspects of evolutionary process. Bringing together those alternative perspectives leads to a broader and more powerful framework for understanding robustness and complexity.
## Hourglass patterns of robustness and complexity
This section introduces two patterns of complexity in robust systems. The following section joins those observed patterns with the previously described theory to formulate the broader conceptual framework for future work.
The first pattern concerns the hourglass model of development.[6, 7] When comparing related species, the early stages of development tend to diverge relatively rapidly. The intermediate stages diverge relatively slowly, implying stronger conservation or constraint for those stages. The late stages of development diverge relatively rapidly.
Visually, we may think of the early stages as the widely divergent bottom of the hourglass. The intermediate stages are the constrained narrow neck of the hourglass. The late stages of final adult form set the widely divergent top.
A recent study of the nematode _Caenorhabditis elegans_ provides an example.[30] Proteins that affect early stages of development have evolved relatively rapidly when compared to related species. Proteins that affect intermediate stages have evolved relatively slowly. Proteins that affect late stages have evolved relatively rapidly. The authors interpret this pattern in terms of the classic hourglass model.
The second example concerns the hourglass pattern of design for robust and complex systems in both engineering and biology.[8] These ideas about robust system architecture come from John Doyle, a major contributor to robust control theory in engineering.[31, 32]
Modern mobile phones provide an example of Doyle's hourglass architecture. Mobile phones are essentially small computers. The hardware aspect of a computer provides a few basic functions.[33, 34, 35] Information needs to be stored in a retrievable way. Digital logic supports programming. Different hardware can offer these same basic functions.
Various companies manufacture mobile phones. Their hardware designs differ but remain qualitatively equivalent with regard to computation. In Doyle's hourglass, lower-level hardware diversity arises because there are many approximately equivalent ways to provide a basic foundation for similar functions. The lower part of the hourglass is diverse and wide.
The different hardwares are functionally equivalent because they all support the same basic set of protocols. The protocols are the core part of the operating system that sits atop the hardware. An operating system is like Microsoft Windows, which runs on many personal computers, or Mac OS, which runs on Apple computers. Essentially all modern mobile phones run variants of the Linux operating system.
At the base layer of Linux, the kernel sits just above the hardware. When a software program running on a phone needs to store information, it tells the kernel's protocols to store the information. The software program does not know anything about how the hardware actually stores the data. The software only knows how to talk the core protocols. Similarly, the hardware does not know anything about the software layer. The hardware only provides the basis for the core protocols.
The core protocols are highly constrained by the
need to provide the common foundation for computation. They do not differ very much from one phone to another, apart from the need to translate messages from the software layer to any special hardware that a phone might have. In Doyle's hourglass, the protocols form the narrow middle waist. The common protocols allow different hardwares to be functionally equivalent, releasing constraint on hardware design and leading to hardware diversity.
The upper software layer creates the functions that make mobile phones useful. The same software can in principle run on any hardware because the software talks only to the commonly used operating system protocols. The upper software layer diversifies widely to match the wide range of functions that users demand. The diverse software layer forms Doyle's wide upper half of the hourglass.
In practice, different manufacturers add an additional software layer between the core operating system protocols and the functional software programs. That intermediate software layer differentiates the upper-level software that can run on the phones of different manufacturers. However, that limitation mostly arises from proprietary business practices rather than from fundamental aspects of engineering design.
At the engineering level, hardware diversifies because there are many physical ways to make a base system layer that supports common protocols. On top of those common protocols, many different functional or software processes can be developed, each talking to the same small set of common protocols. The core protocols act as a buffering layer that deconstrains the need to match hardware and software levels, allowing those levels to evolve in nearly independent ways.
Doyle suggests that essentially all robust complex systems have a similar hourglass architecture, from airplanes to automobiles to communication systems. Csete and Doyle have also argued that robust complex systems in biology have a similar architecture [36, 37]. Consider two examples.
First, essentially all cells of life power themselves by a disequilibrium between ATP and ADP molecules. Roughly speaking, food is used to drive reactions that add a phosphate group to ADP, making ATP. The ATP/ADP disequilibrium acts like a storage battery that provides power to the cell, driving processes that make the biomolecular structures of life and powering functional and behavioral activities.
Across life, widely conserved biochemical mechanisms create and control the ATP/ADP disequilibrium. Those conserved mechanisms form the core protocols of power at the hourglass's central waist. At a lower, hardware-like level, many different biochemical reactions acquire and process diverse kinds of food. Some organisms can live on methane. Others need sugar. At the wide bottom of the hourglass, diverse biochemistry does the initial processing of various food sources.
As the metabolic cascade moves upward from the diverse initial inputs toward the central ATP/ADP power protocols, the biochemistry narrows to an evolutionarily conserved core. From that narrow core, once the ATP/ADP disequilibrium is in place to provide power, life diversifies widely into different software-like functional programs. Different organisms use that core power to build different kinds of molecules and to function in different ways. The top of the hourglass widens.
Second, essentially all cells of life use the DNA makes RNA makes protein cascade to translate stored hereditary information into the proteins that provide function. Widely conserved protocols process this essential translation. Variations occur but remain tightly constrained by the basic need to use the information in nucleotides to make functional amino acid sequences.
Diversity in genomic information storage, transmission, and retrieval forms a wide hardware-like lower level that flows into the narrow mid-level core protocols. The proteins that emerge from that mid-level build a widely divergent upper level of software-like function. The genome hardware level and the protein software level may diverge broadly and in mostly uncoupled ways.
A recent study by Michael Levin's group suggests a link between the hourglass model of development and Doyle's hourglass model of robust complexity [38]. Their computer simulation followed the evolutionary processes that shape development. Genomes encoded only simple rules of cellular processes rather than final developed forms. To solve particular developmental challenges, the genomes evolved to encode a few
core protocols of cell-cell interactions and some specifications for how those core protocols were to be used.
The authors interpreted their results in terms of an hourglass model of development: "[M]utations resulting in noise or changes in initial positions of the organs...will not have a strong effect on survival because the competency of the tissues will make needed reconfigurations to compensate for errors in initial state."
Put another way, the narrowly conserved intermediate developmental stages robustly buffer fluctuations in early developmental steps, leading to the evolutionary diversification of those early steps. They also found that final forms could diverge widely, tracing the classic hourglass shape.
The authors neither used the term "hourglass" nor connected their work to those classic theories for the evolution of development. Instead, they rediscovered the hourglass pattern directly from observing how their computer simulations evolved.
Similarly, they rediscovered aspects of the hourglass model for the architecture of robust and complex systems without awareness of Doyle's work. In particular, they emphasized that the genomic hardware evolved nearly independently from the developmental software because the relatively conserved core developmental protocols screened off changes between the hardware and software layers. They emphasized the words "hardware" and "software" in their interpretations. They also noted that in planaria, a kind of flatworm, genomic changes are often uncoupled from developmental changes.
This study's rediscovery of the hourglass models of development and robust complexity provides a compelling signal. When different investigators start from distinct backgrounds and focal questions and then converge on similar concepts, it often means that the time is right for a new synthesis.
## A broader conceptual foundation
Constructive neutral evolution and the paradox of robustness describe similar processes. A system's higher-level mechanism suppresses the consequences of variability at lower component levels. The components become more variable, perhaps drifting neutrally or evolving to sloppier, lower cost and lower performance states.
The two theories, although similar, emphasize different aspects of biological design.
Constructive neutral evolution focuses primarily on genomic complexity. That complexity in the storage and transmission of information links to Doyle's hardware level of robust and complex systems. There are many physical ways to manage information. Diversity ultimately matters little as long as the physical variety flows through the common protocol of DNA makes RNA makes protein.
The paradox of robustness focuses primarily on functional complexity. This theory, initially motivated by the variety of component systems that protect against cancer, emphasizes physiological homeostasis, repair of cellular damage, cell suicide to avoid harm, excess capacity to mitigate exceptional challenge, and plasticity and behavioral adjustment to changing environments. These functional protections link to Doyle's software level of robust and complex systems.
At the genomic hardware level, constructive neutral evolution emphasizes how buffering mechanisms often induce neutrality and evolutionary drift in the processes that manipulate information. The particular ways in which information gets processed may not matter so much as long as the information retains the ability to encode proteins. Systems tolerate low-level physical variety that retains support for the essential protocols at the hourglass's narrow waist.
At the functional software level, the paradox of robustness emphasizes how buffering mechanisms often alter the marginal costs and benefits of functional components. For example, in the data storage RAID example, the higher-level RAID controller buffers the consequences of failure at the component hard drive level. Thus, the hard drives became cheaper and sloppier, decaying in marginal cost and benefit to a lower-performing state. The hard drives did not drift neutrally. Rather, they follow economic principles. Robustness at the functional level will often alter economic costs and benefits rather than induce neutrality.
Pushing the analogies, the earliest stages of development create the first physical pieces needed to build an organism. Those physical pieces of hardware
can be made in a variety of ways, as long as the basic pieces come into place. Then, at the intermediate stage of development, the hardware pieces have to be organized through the common protocols that shape tissues. Those protocols robustly buffer early variety and provide the functional basis for the software programming that makes diverse adult forms.
Of course, the analogies are far from perfect. But they do seem to capture fundamental aspects of biological design. They also match common patterns in human-engineered systems.
Previously, the various theories followed isolated lines of thought. The paradox of robustness, constructive neutral evolution, Lindquist's cellular buffering, the hourglass model of development, and Doyle's hourglass model of robust and complex systems arose separately and remained alone. The fact that these ideas fit together in a natural and cohesive way suggests progress toward a comprehensive foundation for understanding biological design.
## Ratchet of complexity: evolutionary consequences
The more seemingly separate problems that fit into our new framework, the more evidence we have of moving in the right direction. This final section considers one further step toward conceptual unification. Can we link our broad framing for the biological evolution of robust and complex systems to recent progress in machine learning and artificial intelligence? Biological evolution is a particular kind of learning process. Links with machine learning would not be surprising.
Recently, deep computational neural networks provided several breakthroughs in applications.[9] Part of the success came from using deep multilayer networks that are densely connected. The huge parameter space of these models typically overfits the data. In spite of that overfitting, the models often generalize well, with excellent performance on test data not used in the fitting process. This benign overfitting remains an unsolved puzzle.[39, 40, 41]
The paradox of robustness creates deeply densely connected networks in evolutionary systems. With each addition of robustness at the system level, the lower-level components relax evolutionarily, causing some decay. Subsequently, the system cannot reverse by removing the new robustness mechanism because the decayed components would perform poorly when not protected by the additional robustness. An irreversible layer of complexity has been added.
Eventually, a new high-level robustness mechanism may be favored, layered above the existing system. The process repeats, with decay of lower-level components and an irreversible ratchet of increasing complexity.[3, 27] Eventually the system becomes a deeply layered and densely wired architecture.[10] If deeply densely wired systems do in fact learn particularly well, then such overparameterized evolutionary systems may adapt particularly rapidly and effectively to novel challenges. Perhaps life owes part of its great evolutionary success to the inevitable overwiring that arises from the paradox of robustness.
Evolution proceeds by incremental trial and error. Other kinds of systems designed by incremental trial and error may share similar features. Human institutions come to mind. Incremental changes may be more common than global redesign. If so, we may expect that system-wide error correction leads to the decaying performance of subunits, a layered architecture, and irreversible complexity.
## Acknowledgments
The Donald Bren Foundation, National Science Foundation grant DEB-1939423, and DoD grant W911NF2010227 support my research. This manuscript arose from a prior video presentation available at [https://youtu.be/LP1-vQ3zYgM](https://youtu.be/LP1-vQ3zYgM).
|
2307.08627 | Managing Write Access without Token Fees in Leaderless DAG-based Ledgers | A significant portion of research on distributed ledgers has focused on
circumventing the limitations of leader-based blockchains mainly in terms of
scalability, decentralization and power consumption. Leaderless architectures
based on directed acyclic graphs (DAGs) avoid many of these limitations
altogether, but their increased flexibility and performance comes at the cost
of increased design complexity, so their potential has remained largely
unexplored. Management of write access to these ledgers presents a major
challenge because ledger updates may be made in parallel, hence transactions
cannot simply be serialised and prioritised according to token fees paid to
validators. In this work, we propose an access control scheme for leaderless
DAG-based ledgers which is based on consuming credits rather than paying fees
in the base token. We outline a general model for this new approach and provide
some simulation results showing promising performance boosts. | Darcy Camargo, Luigi Vigneri, Andrew Cullen | 2023-07-17T16:43:01Z | http://arxiv.org/abs/2307.08627v1 | # Managing Write Access without Token Fees in Leaderless DAG-based Ledgers
###### Abstract
A significant portion of research on distributed ledgers has focused on circumventing the limitations of leader-based blockchains mainly in terms of scalability, decentralization and power consumption. Leaderless architectures based on directed acyclic graphs (DAGs) avoid many of these limitations altogether, but their increased flexibility and performance comes at the cost of increased design complexity, so their potential has remained largely unexplored. Management of write access to these ledgers presents a major challenge because ledger updates may be made in parallel, hence transactions cannot simply be serialised and prioritised according to token fees paid to validators. In this work, we propose an access control scheme for leaderless DAG-based ledgers which is based on consuming credits rather than paying fees in the base token. We outline a general model for this new approach and provide some simulation results showing promising performance boosts.
Keywords:Leaderless distributed ledgers Dual-token economy Priority-based write access DAG-based ledgers.
## 1 Introduction
Blockchains have sparked a revolution in the way information is shared in a trustless way. Over the latest decade, research has focused on addressing blockchain's intrinsic shortcomings in search of improved scalability, a more sustainable way of reaching consensus and a fairer distribution of wealth, with the introduction of "smart transactions" [1], new governance solutions and tackling privacy-related issues, among others. One of the main criticisms, though, is still related to performance: Bitcoin and Ethereum, the two most relevant projects by market cap as of 2022, are only able to process a few transactions per second [2], creating competition between users to obtain writing permission to the blockchain. Such a limited writing space is shared through auction-like mechanisms to discriminate which transactions deserve to be added to the ledger. As transactions compete for the limited writing available, often this system leads to large fees [3].
### Related work
Various attempts have tried to make DLT projects more scalable, notably lightning networks, sharding and Layer 2 solutions [4]. Furthermore, more recently,
there has been an increasing interest in directed acyclic graph (DAG) ledgers, which generalize the original chain structure introduced by the blockchain: in fact, when blocks are created at a high rate compared to their propagation time, many competing (or even conflicting) blocks are created, leading to frequent bifurcations; DAG-based approaches allow blocks to be included not only in the main chain, but also in these bifurcations using additional references [5][6]. Since transactions can be written and processed in parallel, i.e., without a total ordering that artificially enforces a pause between subsequent blocks, DAG-based ledgers promise improved throughput and scalability. A number of DAG-based distributed ledger technologies (DLTs) already provide strong performance for consensus and communications layers, such as Honeybadger [7], Hashgraph [8], Aleph [9] and more recently Narwhal [10]. However, the DLTs mentioned still involve leader-based block creation, which leaves users exposed to censorship and value extraction by these powerful leaders.
Standard blockchains and leader-based DLTs are built on the dichotomy between the _user_ that wants to issue transactions or other state-changing data and a _block issuer_ (leader) responsible for creating the blocks that will actually include these data in the ledger. This standard model couples the consensus and access elements of the protocol in the block issuance, creating competition among block issuers to provide ledger access as a service to the base users. In order to have enough incentives for block issuers to invest in this competition, in these protocols users propose fees for their data and the block issuers select which data to include to maximise their profits. Such tokenomics schemes are known for being effective but carry a variety of drawbacks: exclusion of low-scale operations, value extraction from users, fee-bidding wars [3], market manipulation, unpredictable pricing and uncertainty of inclusion, to name but a few [11].
In order to fulfill the demand of DLT applications that require low to no token fee models, some DLT protocols have attempted to develop zero-fee systems with varied degrees of success [12][13][14][6]. Among these projects, the DAG-based protocols have shown more promise due to the option of decoupling access and consensus rights, as in Prism [14] or in the IOTA Tangle [6]. We adopt such a DAG-based _leaderless_ model in this work, where users are block issuers. We use this as a basis for developing a novel approach to managing ledger write access that does not require token fees and overcomes many of the negative outcomes of traditional blockchain models.
### Contributions
The main contribution of this work is a novel scheme for managing write access to leaderless DAG-based DLTs through _Access Credit_, a quantity that is passively generated based on tokens held and contributions to the protocol (e.g., being a validator). This Access Credit can be _consumed_ to create new blocks, buy name services, interact with smart contracts or, in general, use a portion of the DLT resources. The key advantages of the proposed access control scheme are as follows.
* **Zero token fees**: our proposal does not require paying any tokens to issue blocks; instead, the continuously generated Access Credit can be used to create new blocks, whose cost is proportional to their computation and storage requirements as well as to the global demand for write access.
* **Leaderless**: contrary to most existing access control schemes for both blockchain and DAG-based architectures, our proposed solution does not rely on leaders or rounds; this greatly improves resilience against censorship and value extraction by powerful block creators.
* **Parallel ledger updates**: with a leaderless DAG-based ledger we enable parallel execution and writing, as blocks can reference multiple past blocks concurrently.
Furthermore, we validate our approach through Python simulations that show the effectiveness of our solution: as we will see, the parallel execution facilitated by the DAG ledger introduces additional complexity in keeping the ledger consistent across all network participants. We highlight to the reader that we present our proposal in a general manner, that is, we do not refer to any existing solution, so that the principles described in this paper can be applied to any leaderless DAG-based DLT; as such, the analysis may lack implementation-specific details. To account for any important omissions, we add an extensive discussion section to present some of the questions that one may encounter while implementing our proposal.
### Paper structure
The rest of the paper is organized as follows: the system model is introduced in Section 2; then, in Section 3, we present our access control policy; after that, Section 4 presents our Python simulations in both single- and multi-node environment. Finally, we present some discussion in Section 5 and conclude our paper in Section 6.
## 2 System model
### Actors
We categorize the actors of a leaderless DLT as follows.
* **Accounts:** actors capable of holding tokens and, in our proposal, Access Credit. As such, accounts are capable of gaining write access to the ledger by creating blocks. Please note that an account-based DLT is not necessarily mutually exclusive with the UTXO model; in fact, an account can be thought of as an identity registered in the ledger with which one or more UTXOs are associated.
* **Nodes:** the physical machines able to peer with each other to keep local versions of the ledger (related to accounts' token and Access Credit balances) up-to-date through block processing and forwarding.
**Remark:** it is important to note that an account being a block creator in our scheme does not necessarily have the same implications as it does in blockchain architectures. Specifically, block creators do not necessarily act as _validators_ do in blockchains, gathering transactions from a shared mempool to include in their blocks. In fact, such a shared mempool is not possible in a DAG-based ledger because blocks are written to the ledger in parallel. Our work focuses on the management of write access for accounts, so although these accounts are block issuers, we assume that their motive is to write their own data to the ledger rather than considering them as intermediaries for base users.
In our model, accounts are the ones including state-changing data into blocks; thus, they keep cryptographic signatures to confirm ownership of such data and consume Access Credit to issue the blocks containing such data [15]. For the sake of completeness, we mention that forms of delegation are possible both for access (through service providers [16]) and for consensus (through delegated Proof of Stake [17]).
### Blocks
A block is the fundamental data structure of DLTs carrying value transactions, data, smart contract executions, or any other information that may alter the ledger state. Blocks must also include a cryptographic relation with past blocks, the issuer's signature and fields to manage the consensus protocol (e.g., timestamps). In blockchain technologies, the cryptographic relation is the hash of the block that the issuer believes to be the last included in the ledger. On the other hand, DAG technologies have more malleability: in fact, as multiple blocks may be referenced, the simple act of issuing a block can be used as a statement about trust in numerous blocks. This advantage of DAGs for DLTs was explored in consensus protocols such as [6]. As the content of each block may vary greatly, in this paper we define an associated "cost", which we call _work_. Work is measured by a protocol gadget as a part of the node software, and it is intended to represent the computational load on the node while processing the block and applying the necessary state changes, as well as the resource consumption in terms of bandwidth and storage. As an example, the size (in bytes) of the block is one component of the work calculation.
### Access Credit
The ledger keeps track of both Access Credit and token balances. Access Credit is used to gain write access to the ledger. We refer to the amount of spent Access Credits as the _credit consumed_. This quantity needs to be part of the block and it must be signed by the associated account so that the value cannot be altered. The credit consumed is then used to determine the priority of the block when there is competition to gain write access, as we shall explain in the following section describing our access control. Credits are generated when tokens are moved to a new address through blocks, smart contracts or other means. The amount generated depends on the amount of tokens and the time spent in such an address:
\[\text{AccessCredit}=\text{TokensMoved}\times\text{TimeHeld}-\text{CreditConsumed}. \tag{1}\]
When the value in (1) is positive, there is a surplus of credits that will be given to a declared account, in an act we call _allotting credits_. Each protocol has its own time mechanisms, and the only requirement on the term _TimeHeld_ from equation (1) is that it is objective (so that every node agrees on how much credit each account holds). This property is trivial for UTXO-based ledgers, but it can also be induced in any other protocol by using appropriate timestamping mechanisms.
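For concreteness, a minimal Python sketch (names are ours and purely illustrative) of the bookkeeping in Eq. (1):

```python
def access_credit(tokens_moved: float, time_held: float, credit_consumed: float) -> float:
    """Surplus of credits generated while `tokens_moved` sat on an address (Eq. (1))."""
    return tokens_moved * time_held - credit_consumed

# Example: 100 tokens held for 600 seconds, 20000 credits already consumed.
surplus = access_credit(100.0, 600.0, 20_000.0)
if surplus > 0:
    print(f"{surplus} credits can be allotted to the declared account")
```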
## 3 Credit-based access control
In this section, we present the Credit-based access control mechanism for leaderless DLTs using DAG ledgers for parallel writing and execution. We stress that our proposal does _not_ make use of token fees.
### Block creation
As we mentioned, accounts are the actors that include the state-changing data (e.g., value transactions, smart contracts) in the blocks. They are required to interact with a node to set a reasonable amount of Access Credit consumed and to forward the block to the rest of the DLT network. In fact, nodes can be thought of as gateways to the network and play a fundamental role in the congestion management of the entire architecture: accounts (which can be managed through wallets or light nodes) do not receive nor process the blocks produced in the network. Consequently, they ignore the current congestion level and are unable to properly set the amount of Access Credit to consume. Hence, accounts must either set up their own nodes or use third-party free or paid services (e.g., through access service providers).
Upon request, nodes send information related to the real-time congestion level of the network, namely an estimation of the amount of Access Credit needed to successfully schedule a block. Then, accounts can take a more or less conservative approach when setting the credit consumed of the newly created block depending on their preferences, similarly to the way priority is set when gas fees are paid in Ethereum [18]. In this work, we assume that consumed Access Credit is a quantity greater than or equal to 0: while bounds are useful for spam protection (in the case of a lower bound) and to avoid overspending resources (in the case of an upper bound), we leave the study of this optimization as future work.
### Access control
In this section, we describe our proposed access control mechanism for leaderless DAG-based DLTs. Unlike standard blockchains where block producers try to
extract value by selecting the most profitable blocks, in our approach the rules are defined at protocol level and each node participates without the possibility of extracting value. Our access control chooses which blocks get gossiped in the peer-to-peer network, where the Access Credit is consumed instead of being redistributed, hence nodes have no incentives to deviate from the protocol.
In the following, we present the main components of our proposal, namely the enqueueing phase, the scheduling mechanism and the policy to drop blocks during congestion.
#### 3.2.1 Enqueuing
As blocks get gossiped, receiving nodes verify the correctness of their content (verification procedure varies depending on the specific implementation). For valid blocks, the protocol calculates the _Priority Score_, defined as follows.
Definition 1 (Priority Score): _Consider a block \(B\) and the tuple \((c_{B},w_{B})\), where \(c_{B}\) is the Access Credit consumed and \(w_{B}\) the work of block \(B\). We refer to the ratio between the Access Credit consumed and the work of block \(B\) as the Priority Score \(S_{B}\):_
\[S_{B}=\frac{c_{B}}{w_{B}}. \tag{2}\]
Once the Priority Score is computed, the block is enqueued into the _scheduling buffer_, which gathers all blocks not yet scheduled (more details in the Scheduling subsection). In our proposal, this buffer is a _priority queue_ sorted by Priority Score. The insertion of a new block into the buffer has linear complexity.
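A minimal sketch (our naming; a binary heap is used here for convenience, whereas the buffer described above is a priority queue kept sorted by Priority Score) of the score computation and the enqueueing step:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedBlock:
    neg_score: float                          # negated so the min-heap pops the best block first
    block_id: str = field(compare=False)
    work: float = field(compare=False)
    credit_consumed: float = field(compare=False)

def priority_score(credit_consumed: float, work: float) -> float:
    """Priority Score S_B = c_B / w_B of Definition 1."""
    return credit_consumed / work

def enqueue(buffer: list, block_id: str, credit_consumed: float, work: float) -> None:
    score = priority_score(credit_consumed, work)
    heapq.heappush(buffer, QueuedBlock(-score, block_id, work, credit_consumed))

buffer: list = []
enqueue(buffer, "blk-1", credit_consumed=30.0, work=10.0)   # score 3.0
enqueue(buffer, "blk-2", credit_consumed=5.0, work=10.0)    # score 0.5
```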
#### 3.2.2 Scheduling
The scheduling policy is a mechanism that selects which blocks must be forwarded in the DLT network. We consider a scheduler that works in a service loop: every \(\tau\) units of time it selects the blocks with the largest Priority Score in the scheduling buffer such that the total work of the selected blocks is smaller than or equal to \(m\) work units. In this scenario, the enforced network throughput limit is \(m/\tau\) work units per second.
When a block is chosen to be scheduled, it is forwarded to neighbouring nodes where it can be enqueued in their buffer if they have not yet received it, after which the block undergoes the same scheduling process in each new node. We do not assume any specific gossip protocol: flooding, i.e., forwarding indiscriminately to all neighbours, is a popular choice in DLTs, but this can be optimised to save network bandwidth.
#### 3.2.3 Block drop
In practice, scheduling buffers have a limited size. In fact, the usage of large buffers in networks has been proved to be detrimental to performance [19]. In this work, we use a simple policy to drop blocks when the buffers get full, namely the protocol will drop the block with the lowest priority score, removing it from the buffer. Additionally, to limit the effectiveness of long-range attacks, we also drop blocks whose timestamps become older than a certain threshold compared to the node's local clock.
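Putting the pieces together, a hypothetical sketch of the scheduler service loop of Section 3.2.2 combined with the drop policy above; blocks are represented as dicts with `score`, `work` and `arrival` fields, and the selection rule follows one possible reading of the description:

```python
import time

MAX_BUFFER = 500      # buffer capacity used in Section 4.1
MAX_AGE = 30.0        # seconds a block may wait before being dropped

def drop_policy(buffer: list, now: float) -> None:
    """Drop expired blocks, then the lowest-score blocks while the buffer overflows."""
    buffer[:] = [b for b in buffer if now - b["arrival"] <= MAX_AGE]
    while len(buffer) > MAX_BUFFER:
        buffer.remove(min(buffer, key=lambda b: b["score"]))

def schedule_round(buffer: list, m: float) -> list:
    """Pick blocks by decreasing Priority Score until m work units are filled."""
    scheduled, used = [], 0.0
    for blk in sorted(buffer, key=lambda b: b["score"], reverse=True):
        if used + blk["work"] <= m:
            scheduled.append(blk)
            used += blk["work"]
    for blk in scheduled:
        buffer.remove(blk)
    return scheduled

def scheduler_loop(buffer: list, m: float, tau: float, rounds: int) -> None:
    """Every tau seconds, drop stale/excess blocks and forward the scheduled ones."""
    for _ in range(rounds):
        drop_policy(buffer, time.time())
        scheduled = schedule_round(buffer, m)
        # `scheduled` would be gossiped to neighbouring nodes here.
        time.sleep(tau)
```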
**Remark:** when blocks do not get finalized, i.e., they do not receive enough references, the Access Credit consumed is "reimbursed" to the issuer's account. The reimbursement should happen when the consensus mechanism reaches finalization on the non-inclusion of the data in the ledger state. The exact details of how long the data is kept and how long the reimbursement takes are specific to each consensus mechanism, and thus protocol, and are out of the scope of the write access mechanism of this paper.
## 4 Simulations
This section shows a performance analysis concerning the credit-based access control proposed in Section 3. We first introduce the simulation setup in Section 4.1. Then, in Section 4.2, we analyse the performance of the access control by looking at _a single node_: this allows us to collect metrics related to the cost of new blocks, the time spent in the scheduling buffer, scheduled and non-scheduled blocks per account, etc. Finally, in Section 4.3, we present the outcomes of experiments in a _multi-node setting_ to verify ledger consistency and analyse the rate of discarded blocks.
### Simulation setup
In our setup, we consider 1000 accounts, i.e., block issuers. The token holdings belonging to those issuers are drawn from a Power Law distribution of the form
\[p(x)=\frac{\alpha-1}{x_{min}}\cdot\left(\frac{x}{x_{min}}\right)^{-\alpha}, \tag{3}\]
where \(\alpha=2\) and the minimum token amount is \(x_{min}=10\). A visual representation of the token holdings sorted by tokens can be found in Figure 1. The amount of tokens per issuer does not vary over the course of the simulations. Furthermore, each user gets 1 credit/second for every 10 tokens held: for example, a user with 25 tokens obtains 2.5 credits/second.
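A small sketch (using NumPy; names are ours) of how such token holdings can be drawn by inverse-transform sampling of Eq. (3), together with the resulting credit generation rates:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_power_law(n: int, alpha: float = 2.0, x_min: float = 10.0) -> np.ndarray:
    """Inverse-transform sampling of the density in Eq. (3): p(x) ~ (x/x_min)^(-alpha)."""
    u = rng.random(n)
    return x_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

tokens = sample_power_law(1000)      # token holdings of the 1000 block issuers
credit_rate = tokens / 10.0          # 1 credit/second for every 10 tokens held
```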
Blocks are generated according to a non-homogeneous Poisson process, with alternating congested and uncongested periods of 3 minutes each. We define a congested period as a time interval where the total block generation rate1 is larger than the scheduling rate. The simulation is run for one hour, that is, 10 congested periods and 10 uncongested periods. Additionally, we impose a scheduling rate of 100 blocks per second, i.e., a block is scheduled every 10 ms (for the sake of simplicity, all blocks have the same size). In our simulations, the number of blocks issued by an account is proportional to its token holdings. Moreover, we define four types of block issuers according to the way the block cost is set2 (a code sketch of these policies follows the list):
Footnote 1: This is the sum of the block generation rate over all accounts.
* **Impatient:** These accounts consume all of their Access Credits each time they issue a block, so their credit consumption per transaction is high when they do not have many transactions to issue, and the credit consumption per transaction is low when they have a large number of transactions to issue. They do not respond in any way to the credit consumption they see in the buffer.
* **Greedy:** These accounts look at the highest amount of Access Credit consumed in the scheduling buffer and consume 1 more Access credit than this. If this greedy policy dictates that they would need to consume more than they have, they simply do not issue anything until the price goes down or they have generated enough Access Credit.
* **Gambler:** These nodes consume the amount of Access Credit of one of the top 20 blocks in the priority queue, chosen randomly.
* **Opportunistic:** These nodes consume 0 Access Credit, regardless of what is seen in the scheduling buffer. Traffic from these nodes is perfectly acceptable during periods of low congestion, but constitutes spam during congested periods and it is expected to be dropped from scheduling buffers.
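A compact sketch of how each of these consumption policies might set the credit consumed for a new block; `buffer_credits` is the list of per-block consumed credits visible in the scheduling buffer (used here as a proxy for the priority ranking), and all names are ours:

```python
import random

def impatient(balance: float, buffer_credits: list) -> float:
    # Spend the whole current balance, ignoring the buffer.
    return balance

def greedy(balance: float, buffer_credits: list):
    # Outbid the highest consumed credit seen in the buffer by one; abstain if unaffordable.
    bid = (max(buffer_credits) if buffer_credits else 0.0) + 1.0
    return bid if bid <= balance else None     # None: do not issue for now

def gambler(balance: float, buffer_credits: list) -> float:
    # Copy the consumed credit of one of the top-20 blocks, chosen at random.
    top = sorted(buffer_credits, reverse=True)[:20] or [0.0]
    return random.choice(top)

def opportunistic(balance: float, buffer_credits: list) -> float:
    # Never consume credits; acceptable when uncongested, dropped during congestion.
    return 0.0
```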
Finally, we assume the buffer has a maximum capacity of 500 blocks. Blocks are removed from the buffer when one of two scenarios happens:
* **Full buffer:** If the buffer contains 500 blocks, a newly arrived block can be added to the buffer if and only if it consumes more Access Credits than those consumed by at least one block in the buffer; in this case, the latter will be removed and replaced with the newly arrived block.
* **Maximum time in the buffer:** When a block spends 30 seconds in the buffer without being scheduled, it gets immediately removed.
Potential changes in the parameters used in the multi-node simulator will be explicitly mentioned in Section 4.3.
Figure 1: Token held by account.
### Single-node simulator
#### 4.2.1 Impatient strategy
In this set of simulations, all accounts follow the _impatient_ consumption strategy. In Figure 2(a), we plot the cost of a scheduled block and the sojourn time of the same block as the simulation time advances. As a reference, we also add the traffic load over time, which alternates congested and uncongested periods. When accounts act as _impatient_ users, we see that the cost of a block increases during less congested periods, while, during congestion, the cost of a block stabilizes at less than 30 credits with peaks up to 150 credits; conversely, in uncongested periods the credits spent are at least double that. This is because users tend to overspend when using this consumption strategy: during congestion, accounts have less time to accumulate Access Credit; the plot basically shows how much an account can accumulate since the latest block it has issued, and accumulation is larger if blocks are issued less often.
The sojourn time, defined as the time a block spends in the scheduling buffer (remember, this is a single-node simulator, so the sojourn time is the time spent in a single buffer), is very low when the network is uncongested but experiences large oscillations during congestion: in particular, after the transition to congestion, the mean sojourn time spikes to around 2 seconds and then keeps oscillating between 0.5 and 1 second; a non-negligible number of blocks experience a much larger sojourn time, as can be seen from the blue line in Figure 2(a).
#### 4.2.2 Greedy strategy
Here, we show the same set of plots, but when all accounts act as _greedy_. A greedy consumption strategy seems to correct the inefficiencies of the impatient one, which tends to overspend unnecessarily. The cost of a block, from Figure 2(b), is now very low (close to 0) with little traffic; however, the transition to a congested network creates a very steep increase in the cost of a scheduled block: for a short period of time, the average consumed Access Credit is larger than 300, before suddenly decreasing to around 30. This strategy can be compared with first price auctions, carrying their intrinsic drawbacks as well. While several recent approaches have been trying to mitigate the fluctuations in the block cost and to improve the user experience [3][18], we stress that finding an optimal credit consumption policy is out of the scope of this paper.
Figure 2: Traffic load (top figure), credits consumed (middle) and sojourn time (bottom) per block over time. Red line indicates the scheduling rate in the traffic load plot, and the moving average in the other plots.
Similarly, it is possible to see frequent oscillations in the sojourn times with spikes (i) at the beginning and (ii) at the end of the congested period: (i) the increased traffic load alters the dynamics of the system, lowering the rank in the priority queue for blocks not yet scheduled, and we notice that oscillations are visible throughout the entire congested period; (ii) additionally, when congestion ends, a lot of blocks sitting in the buffer for long (but not yet dropped) have the opportunity to be scheduled experiencing a large delay, witnessed by the spike at the end of each high-traffic period.
#### 4.2.3 Gambler strategy
In this set of simulations, all accounts follow the _gambler_ strategy. There are clear differences with the previous scenarios: in Figure 2(c), we see that the spikes in the credits consumed are largely reduced compared to the _greedy_ scenario: we cannot see accounts consuming more than 100 credits. However, the cost of scheduled blocks stabilizes at a price only marginally lower than in the previous scenarios.
#### 4.2.4 Mixed strategy
In this scenario, we allow users with different consumption strategies to coexist. Specifically, 10% of accounts are _impatient_, 60% are _greedy_ and the rest are _gambler_. This set of simulations aims to provide a more realistic environment where multiple types of users share the network.
In Figure 3 we see that the average cost of a scheduled block is still driven largely by impatient accounts, although such nodes represent only 10% of the total block issuers. Similar to previous scenarios, we also see large oscillations in the sojourn times during congestion, with peaks up to 2 seconds.
An interesting consideration can be made with respect to Figure 4, which decouples the sojourn time per account, differentiating between impatient (yellow), greedy (red) and gambler (dark red): we observe that the mean sojourn time for greedy issuers is much lower than for the other policies. While a large sojourn time is expected for _gambler_, it should not be the case for _impatient_. The explanation is that greedy users do _not_ issue blocks if they do not have enough Access Credits: basically, these accounts have a self-regulating _rate setter_, and the benefits in terms of improved delays are clearly visible.
### Multi-node simulator
#### 4.3.1 Description of the simulator
The following simulations are implemented in a multi-node simulator which emulates a complete DAG-based DLT protocol, i.e., each node maintains a copy of the DAG, uses a selection algorithm to choose where to attach new blocks and checks the validity of all arriving blocks. A number of specific DAG-based protocol details are included which our proposal does not necessarily depend on, but this allows us to at least provide preliminary results for integrating this approach into a working protocol. Each node also operates an account for issuing blocks, so we use the terms node and account interchangeably in this section. The same consumption policies are tested as for the single-node simulator, but we use a smaller network and shorter simulation times to facilitate detailed presentation of each node's outcome.
Figure 3: Traffic load (top figure), credits consumed (middle) and sojourn time (bottom) per block over time in the mixed scenario. Red line indicates the scheduling rate in the traffic load plot, and the moving average in the other plots.
Figure 4: Sojourn time and related mean per account.
The simulations consist of 20 nodes connected in a random 4-regular graph topology, i.e., with 4 neighbours each. The communication delays between nodes are uniformly distributed between 50 ms and 150 ms. The scheduling rate is 25 blocks per second. We use the same token distribution as in the single-node simulator. We initially consider a mix of greedy and opportunistic nodes, with the token holding distribution illustrated in Figure 5.
We slightly modify the block generation process in this set of simulations: here, blocks are generated according to a separate Poisson process for each node and added to the node's local mempool, from which they can create blocks. For the first minute of each simulation, blocks are generated at 50% of the scheduling rate; then, for the following two minutes, the rate rises to 150% of the scheduling rate; and for the final minute, it decreases to 50% again. This traffic pattern simply seeks to show one cycle of increase in demand followed by the subsiding of demand.
Finally, we introduce the concept of block _confirmation_ in the DAG through Cumulative Weight (CW):
Definition 2 (Block confirmation): The \(CW_{B}\in\mathbb{N}^{+}\) of block \(B\) indicates how many times \(B\) has been referenced directly or indirectly by other blocks. If \(CW_{B}\geq 100\), then a node locally considers block \(B\) as confirmed.
Definition 3 (Confirmation Rate): A block is confirmed when all nodes have marked the block as confirmed. Confirmation rate is the rate at which blocks become confirmed.
CW in a DAG is analogous to the depth of a block in a blockchain which is often used for confirmation. Additionally:
Definition 4 (Dissemination Rate): A block is disseminated when all nodes have seen the block. Dissemination rate is the rate at which blocks become disseminated.
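As an illustration of the confirmation rule of Definition 2, a minimal sketch (our naming; `referenced_by` maps a block to the set of blocks that directly reference it, i.e., the reverse edges of the DAG):

```python
from collections import deque

CONFIRMATION_THRESHOLD = 100   # CW needed for local confirmation (Definition 2)

def cumulative_weight(block_id: str, referenced_by: dict) -> int:
    """Number of distinct blocks referencing `block_id` directly or indirectly."""
    seen, queue = set(), deque(referenced_by.get(block_id, ()))
    while queue:
        b = queue.popleft()
        if b not in seen:
            seen.add(b)
            queue.extend(referenced_by.get(b, ()))
    return len(seen)

def locally_confirmed(block_id: str, referenced_by: dict) -> bool:
    return cumulative_weight(block_id, referenced_by) >= CONFIRMATION_THRESHOLD
```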
**Remark:** "Scaled" plots are scaled by the node's "fair share" of the scheduler throughput which is proportional to their token holdings, so a scaled rate of 1 means they are getting 100% of their fair share. In plots showing metrics for all nodes, the thickness of the trace corresponding to each node is proportional to the token holdings of that node.
#### 4.3.2 Experimental results
We begin by considering the dissemination rates, as seen in Figure 6. Here, the greedy nodes are able to issue more than their "fair share" because the opportunistic nodes are opting not to consume any Credits, and hence the greedy nodes get priority from the scheduler by consuming more. Figure 7 displays the corresponding dissemination latency of each node's blocks as a cumulative density. This paints a similar picture, with greedy nodes experiencing lower delays than their opportunistic counterparts.
Figure 8 illustrates the confirmation rates corresponding to this simulation. These traces follow a very similar trajectory to the dissemination rates, but we notice that even when congestion dies down, the confirmation rates of the opportunistic nodes do not recover immediately as the dissemination rates did. This is due to the fact that many old delayed blocks from the congested period are stuck in the buffers of nodes across the network and as these old blocks begin to be forwarded when the congestion goes away, they are not selected by other nodes to attach to, so their cumulative weight does not grow and they do not become confirmed.
These multi-node simulator results only present a very limited scenario with basic credit consumption policies, but the results show promise for providing effective access control. However, they also begin to show some of the complexities of integrating this approach into complete DAG-based DLT protocols. Further studies need to be carried out for specific DAG implementations to fully understand the implications of our approach.
## 5 Discussion
### Economic Incentives
As we discussed before, fees can be used to regulate access to DLTs, but they can also bring detrimental properties, such as the possibility of extracting value from users and the creation of inconsistencies in access. Nevertheless, fees provide essential incentives for many protocols, usually being the way security is ensured. In order to create sustainable economic incentives, we expect that Access Credits will have an active market, where users can sell their spare access. This creates a positive feedback loop where the gains received are in the form of access, which further incentivizes network adoption and usage.
Figure 5: Token distribution and credit consumption policies for 20 nodes.
Figure 6: Dissemination rates and scaled dissemination rates. The scaled rate shows the rate relative to the account's token holdings.
Figure 7: Cumulative density function of the dissemination latency.
### Limiting the accumulation of Access Credit
Figure 8: Confirmation rates and scaled confirmation rates. The scaled rate shows the rate relative to the account's token holdings.
One can notice that the way Access Credit is defined makes this quantity highly inflationary, as credits are passively generated by tokens even when they are not in use. This could lead to situations where congestion pushes the amount of Access Credits consumed excessively high, which would make access prohibitive for some periods of time, or to situations where accounts can accumulate enough Access Credits to continuously spam the network for some time. To counteract these events, we propose to introduce a concave function \(F(t)\), increasing with time and with \(F(0)=0\), such that Eq. (1) becomes:
\[\text{AccessCredit}=\text{TokensMoved}\times F(\text{TimeHeld})-\text{CreditConsumed}.\]
This slows down the accumulation and, depending on the function chosen, can even cap it, e.g., when \(F(t)\propto(1-e^{-\gamma t})\). Using a concave function in the _TimeHeld_ factor has a side effect: it pushes one to create blocks to allot credits more often, since Access Credit generation is faster soon after tokens are moved. The Access Credits consumed to issue blocks work as an offset to this: if an account allots credits from its tokens \(n\) times during the TimeHeld interval, consuming the same amount of Access Credits each time, the balance by the end would be
\[\text{AccessCredit}_{n}=\text{TokensMoved}\times n\,F(\text{TimeHeld}/n)-n\,\text{CreditConsumed}.\]
Hence, the Access Credit generated over this period will have a maximal value in \(n\).
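To make the trade-off concrete, a small sketch (all parameter values are arbitrary assumptions) using the capped choice \(F(t)=C(1-e^{-\gamma t})\) and the \(n\)-allotment balance above:

```python
import math

def F(t: float, cap: float = 1000.0, gamma: float = 0.01) -> float:
    """A concave, increasing profile with F(0) = 0 and F(t) -> cap as t grows."""
    return cap * (1.0 - math.exp(-gamma * t))

def balance_after_n_allotments(tokens: float, time_held: float,
                               credit_per_block: float, n: int) -> float:
    """Balance when credits are allotted n times, each allotment consuming the same credit."""
    return tokens * n * F(time_held / n) - n * credit_per_block

# Allotting more often speeds up accumulation, but each allotment block costs
# credit, so over a search range the balance attains a maximum at some n.
best_n = max(range(1, 201),
             key=lambda n: balance_after_n_allotments(100.0, 600.0, 500.0, n))
print("balance is maximised when allotting", best_n, "times")
```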
### The negative Access Credit problem
Due to the distributed nature of DLTs, Access Credit balances may become temporarily negative, either due to natural network delay or due to malicious behavior (similar to the nothing-at-stake problem). In this paper, we have not yet addressed the scenario where accounts reach a negative balance of Access Credits. The most effective solution is to process all blocks from the account with negative balance, find consensus on which ones should be accepted and punish the offending account after this process.
**Remark:** it is not possible to simply filter out blocks leading to negative Access Credit balances, as this would create forks in nodes' local views of the DAG. Suppose a malicious account sends two blocks, \(A\) and \(B\), where processing only one of them would not cause its balance to go negative but processing both would. A subset of the nodes in the network may process \(A\) and filter out \(B\), while the other subset may process \(B\) and filter out \(A\). This would create a problematic scenario where nodes have inconsistent views of the ledger. An attacker could then repeat this procedure with many blocks, creating many possible forks.
## 6 Conclusion
We have proposed a credit-based access control mechanism for leaderless DAG-based DLTs. Our solution solves the problem of regulating write access to DAG-based DLTs without the need for token fees or serialisation of ledger updates into blocks by validators. The proposal is based on _Access Credits_, which are naturally generated for holding the base token. State-changing data must consume these credits to be included in the ledger, creating a utility loop where rewards are given in Access Credits.
Our simulations show that, under varied user behaviors, the consumed credits remain stable over time, even with large jumps in demand for ledger write access. Additionally, we showed that write access can be effectively regulated across multiple nodes in a peer-to-peer network in some simple scenarios. Leaderless DAG-based ledgers present enormous potential for advances in the DLT field, and this work provides a foundation for similar schemes seeking to manage write access in these systems in the future. |
2307.07514 | Explainability is NOT a Game | Explainable artificial intelligence (XAI) aims to help human decision-makers
in understanding complex machine learning (ML) models. One of the hallmarks of
XAI are measures of relative feature importance, which are theoretically
justified through the use of Shapley values. This paper builds on recent work
and offers a simple argument for why Shapley values can provide misleading
measures of relative feature importance, by assigning more importance to
features that are irrelevant for a prediction, and assigning less importance to
features that are relevant for a prediction. The significance of these results
is that they effectively challenge the many proposed uses of measures of
relative feature importance in a fast-growing range of high-stakes application
domains. | Joao Marques-Silva, Xuanxiang Huang | 2023-06-27T09:32:49Z | http://arxiv.org/abs/2307.07514v2 | # Explainability is NOT a Game
###### Abstract.
Explainable artificial intelligence (XAI) aims to help human decision-makers in understanding complex machine learning (ML) models. One of the hallmarks of XAI are measures of relative feature importance, which are theoretically justified through the use of Shapley values. This paper builds on recent work and offers a simple argument for why Shapley values can provide misleading measures of relative feature importance, by assigning more importance to features that are irrelevant for a prediction, and assigning less importance to features that are relevant for a prediction. The significance of these results is that they effectively challenge the many proposed uses of measures of relative feature importance in a fast-growing range of high-stakes application domains.
Explainable AI, Shapley values, Abductive reasoning
Footnote †: Both authors contributed equally to this research.
## 1. Introduction
The societal and economic significance of machine learning (ML) cannot be overstated, with many remarkable advances made in recent years. However, the operation of complex ML models is most often inscrutable, with the consequence that decisions taken by ML models cannot be fathomed by human decision makers. It is therefore of importance to devise automated approaches to explain the predictions made by complex ML models. This is the main motivation for eXplainable AI (XAI). Explanations thus serve to build trust, but also to debug complex systems of AI. Furthermore, in situations where decisions of ML models impact people, one should expect explanations to offer the strongest guarantees of rigor.
However, the most popular XAI approaches [1, 10, 11, 12] offer no guarantees of rigor. Unsurprisingly, a number of works have demonstrated several misconceptions of informal approaches to XAI [12, 13, 14, 15, 16]. In contrast to informal XAI, formal explainability offers a logic-based, model-precise approach for computing explanations [14]. Although formal explainability also exhibits a number of drawbacks, including the computational complexity of logic-based reasoning, there has been continued progress since its inception [13, 12].
Among the existing informal approaches to XAI, the use of Shapley values as a mechanism for feature attribution is arguably the best-known. Shapley values [14] were originally proposed in the context of game theory, but have found a wealth of application domains [15]. More importantly, for more than two decades Shapley values have been proposed in the context of explaining the decisions of complex ML models [13, 16, 17, 18]. The importance of Shapley values for explainability is illustrated by the massive impact of tools like SHAP [16], including many recent uses that have a direct influence on human beings (see [12] for some recent references).
Unfortunately, the exact computation of Shapley values in the case of explainability has not been studied in practice, in part because of its computational complexity. Hence, it is unclear how good existing approximate solutions are, with a well-known example being SHAP [13, 14, 15]. Recent work [1] proposed a polynomial-time algorithm for computing Shapley values in the case of classifiers represented by deterministic decomposable boolean circuits. As a result, and for one concrete family of classifiers, it became possible to compare the estimates of tools such as SHAP [16] with those obtained with exact algorithms.
Furthermore, since Shapley values aim to measure the relative importance of features, a natural question is whether the relative importance of features obtained with Shapley values can indeed be trusted. Given that the definition of Shapley values is axiomatic, one may naturally question how reliable those values are. Evidently, if the relative order of features dictated by Shapley values can be proved inadequate, then the use of Shapley values in explainability ought to be deemed unworthy of trust.
A number of earlier works reported practical problems with explainability approaches based on Shapley values [17] ([12] covers a number of additional references). However, these works focus on practical tools, which approximate Shapley values, but do not investigate the possible existence of fundamental limitations with the use of Shapley values in explainability. In contrast with these other works, this paper offers a simple argument for why relative feature importance obtained with Shapley values can provide misleading information, in that features that bear no significance for a prediction can be deemed more important, in terms of Shapley values, than features that bear some significance for the same prediction. The importance of this paper's results, and of the identified flaws with Shapley values, should be assessed in light of the fast-growing uses of explainability solutions in domains that directly impact human beings, e.g. medical diagnostic applications, especially when the vast majority of such uses build on Shapley values for explainability.
The paper is organized as follows. Section 2 introduces the notation and definitions used throughout. This includes a brief introduction to formal explanations, but also to Shapley values for explainability. Section 3 revisits the concepts of relevancy/irrelevancy, which have been studied in logic-based abduction since the mid 1990s (Eiter and Gottlob, 1995). Section 4 demonstrates the inadequacy of Shapley values for feature attribution. Finally, Section 5 discusses the paper's results, but it also briefly examines additional flaws of Shapley values.
## 2. Definitions
Throughout the paper, we adopt the notation and the definitions introduced in earlier work, namely (Marques-Silva, 2022; Marques-Silva and Ignatiev, 2022) and also (Arenas et al., 2021).
### Classification Problems
A classification problem is defined on a set of features \(\mathcal{F}=\{1,\ldots,m\}\), and a set of classes \(\mathcal{K}=\{c_{1},\ldots,c_{K}\}\). Each feature \(i\in\mathcal{F}\) takes values from a domain \(\mathcal{D}_{i}\). Domains can be ordinal (e.g. real- or integer-valued) or categorical. Feature space is defined by the cartesian product of the domains of the features: \(\mathbb{F}=\mathcal{D}_{1}\times\cdots\times\mathcal{D}_{m}\). A classifier \(\mathcal{M}\) computes a (non-constant) classification function: \(\kappa:\mathbb{F}\rightarrow\mathcal{K}\)1. A classifier \(\mathcal{M}\) is associated with a tuple \((\mathcal{F},\mathbb{F},\mathcal{K},\kappa)\). For the purposes of this paper, we restrict \(\kappa\) to be a non-constant boolean function. This restriction does not in any way impact the validity of our results.
Footnote 1: A classifier that computes a constant function, i.e. the same prediction for all points in feature space, is of course uninteresting, and so it is explicitly disallowed.
Given a classifier \(\mathcal{M}\), and a point \(\mathbf{v}\in\mathbb{F}\), with \(c=\kappa(\mathbf{v})\) and \(c\in\mathcal{K}\), \((\mathbf{v},c)\) is referred to as an _instance_ (or sample). An explanation problem \(\mathcal{E}\) is associated with a tuple \((\mathcal{M},(\mathbf{v},c))\). As a result, \(\mathbf{v}\) represents a concrete point in feature space, whereas \(\mathbf{x}\in\mathbb{F}\) represents an arbitrary point in feature space.
As a running example, we consider the decision tree (DT) shown in Figure 1. Since it will be used later, we also show the truth table for the DT classifier. Given the information shown in the DT, we have that \(\mathcal{F}=\{1,2,3,4\}\), \(\mathcal{D}_{i}=\{0,1\},i=1,2,3,4\), \(\mathbb{F}=\{0,1\}^{4}\), and \(\mathcal{K}=\{0,1\}\). The classification function \(\kappa\) is given by the decision tree shown, or alternatively by the truth table. Finally, the instance considered is \((\mathbf{v},c)=((0,0,0,0),0)\).
### Formal Explanations
The presentation of formal explanations follows recent accounts (Marques-Silva, 2022). In the context of XAI, abductive explanations (AXp's) have been studied since 2018 (Ignatiev et al., 2019; Shih et al., 2018). Similar to other heuristic approaches, e.g. Anchors (Ribeiro et al., 2018), abductive explanations are an example of explainability by feature selection, i.e. a subset of features is selected as the explanation. AXp's represent a rigorous example of explainability by feature selection, and can be viewed as the answer to a "_Why (the prediction)?_" question. An AXp is defined as a subset-minimal (or irreducible) set of features \(\mathcal{X}\subseteq\mathcal{F}\) such that the features in \(\mathcal{X}\) are sufficient for the prediction. This is to say that, if the features in \(\mathcal{X}\) are fixed to the values determined by \(\mathbf{v}\), then the prediction is guaranteed to be \(c=\kappa(\mathbf{v})\). The sufficiency for the prediction can be stated formally:
\[\forall(\mathbf{x}\in\mathbb{F}).\left[\bigwedge_{i\in\mathcal{X}}(x_{i}=v_{i} )\right]\rightarrow(\kappa(\mathbf{x})=\kappa(\mathbf{v})) \tag{1}\]
Observe that (1) is monotone on \(\mathcal{X}\), and so the two conditions for a set \(\mathcal{X}\subseteq\mathcal{F}\) to be an AXp (i.e. sufficiency for prediction and subset-minimality), can be stated as follows:
\[\forall(\mathbf{x}\in\mathbb{F}).\left[\bigwedge_{i\in\mathcal{X} }(x_{i}=v_{i})\right]\rightarrow(\kappa(\mathbf{x})=\kappa(\mathbf{v}))\wedge\] \[\forall(t\in\mathcal{X}).\exists(\mathbf{x}\in\mathbb{F}).\left[ \bigwedge_{i\in\mathcal{X}\setminus\{t\}}(x_{i}=v_{i})\right]\wedge(\kappa( \mathbf{x})\neq\kappa(\mathbf{v})) \tag{2}\]
A predicate AXp : \(2^{\mathcal{F}}\rightarrow\{0,1\}\) is associated with (2), such that AXp (\(\mathcal{X};\mathcal{E}\)) holds true if and only if (2) holds true2.
Footnote 2: When defining concepts, we will show the necessary parameterizations. However, in later uses, those parameterizations will be omitted, for simplicity.
An AXp can be interpreted as a logic rule of the form:
\[\text{IF}\quad\left[\bigwedge_{i\in\mathcal{X}}(x_{i}=v_{i})\right]\quad\text{THEN}\quad(\kappa(\mathbf{x})=c) \tag{3}\]
where \(c=\kappa(\mathbf{v})\). It should be noted that informal XAI methods have also proposed the use of IF-THEN rules (Ribeiro et al., 2018) which, in the case of Anchors (Ribeiro et al., 2018) may or may not be sound (Ignatiev, 2020; Ignatiev et al., 2019). In contrast, rules obtained from AXp's are logically sound.
Moreover, contrastive explanations (CXp's) represent a type of explanation that differs from AXp's, in that CXp's answer a "_Why Not (some other prediction)?_" question (Ignatiev et al., 2020; Miller, 2019). Given a set \(\mathcal{Y}\subseteq\mathcal{F}\), sufficiency for changing the prediction can be stated formally:
\[\exists(\mathbf{x}\in\mathbb{F}).\left[\bigwedge_{i\in\mathcal{F}\setminus\mathcal{Y}}(x_{i}=v_{i})\right]\wedge(\kappa(\mathbf{x})\neq\kappa(\mathbf{v})) \tag{4}\]
A CXp is a subset-minimal set of features which, if allowed to take values other than those determined by \(\mathbf{v}\), makes it possible to change the prediction by choosing suitable values for those features.
Similarly to the case of AXp's, for CXp's (4) is monotone on \(\mathcal{Y}\), and so the two conditions (sufficiency for changing the prediction and subset-minimality) can be stated formally as follows:
\[\exists(\mathbf{x}\in\mathbb{F}).\left[\bigwedge_{i\in\mathcal{F}\setminus\mathcal{Y}}(x_{i}=v_{i})\right]\wedge(\kappa(\mathbf{x})\neq\kappa(\mathbf{v}))\;\wedge\] \[\forall(t\in\mathcal{Y}).\forall(\mathbf{x}\in\mathbb{F}).\left[\bigwedge_{i\in\mathcal{F}\setminus(\mathcal{Y}\setminus\{t\})}(x_{i}=v_{i})\right]\rightarrow(\kappa(\mathbf{x})=\kappa(\mathbf{v})) \tag{5}\]
A predicate CXp : \(2^{\mathcal{F}}\rightarrow\{0,1\}\) is associated with (5), such that CXp (\(\mathcal{Y};\mathcal{E}\)) holds true if and only if (5) holds true.
Algorithms for computing AXp's and CXp's for different families of classifiers have been proposed in recent years ((Marques-Silva and Ignatiev, 2022) and (Ignatiev, 2022) provide a recent account of the progress observed in computing formal explanations). These algorithms include the use of automated reasoners (e.g. SAT, SMT or MILP solvers), or dedicated algorithms for families of classifiers for which computing one explanation is tractable.
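Such algorithms are easy to prototype when the classifier is small enough to be tabulated; the following sketch (dict-based truth table, 0-based feature indices, our naming) implements the sufficiency check of Eq. (1) and a simple deletion-based extraction of one AXp, which is correct thanks to the monotonicity of (1):

```python
from itertools import product

def is_sufficient(X, kappa, v):
    """Eq. (1): fixing the features in X to their values in v forces the prediction."""
    m, c = len(v), kappa[tuple(v)]
    return all(kappa[x] == c for x in product((0, 1), repeat=m)
               if all(x[i] == v[i] for i in X))

def one_axp(kappa, v):
    """Deletion-based extraction of one AXp: drop features while sufficiency is preserved."""
    X = set(range(len(v)))
    for i in sorted(X):
        if is_sufficient(X - {i}, kappa, v):
            X.remove(i)
    return X
```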
Given an explanation problem \(\mathcal{E}\), the sets of AXp's and CXp's are represented by:
\[\mathbb{A}(\mathcal{E}) =\{\mathcal{X}\subseteq\mathcal{F}\mid\text{AXp}(\mathcal{X}; \mathcal{E})\} \tag{6}\]
\[\mathbb{C}(\mathcal{E})=\{\mathcal{Y}\subseteq\mathcal{F}\mid\text{CXp}(\mathcal{Y};\mathcal{E})\} \tag{7}\]
For example, \(\mathbb{A}(\mathcal{E})\) represents the set of all logic rules that predict \(c=\kappa(\mathbf{v})\), which are consistent with \(\mathbf{v}\), and which are irreducible (i.e. no literal \(x_{i}=v_{i}\) can be discarded).
Furthermore, it has been proved (Ignatiev et al., 2020) that (i) a set \(\mathcal{X}\subseteq\mathcal{F}\) is an AXp if and only if it is a minimal hitting set (MHS) of the set of CXp's; and (ii) a set \(\mathcal{Y}\subseteq\mathcal{F}\) is a CXp if and only if it is an MHS of the set of AXp's. This property is referred to as MHS duality, and can be traced back to the seminal work of R. Reiter (Reiter, 1987) in model-based diagnosis. Moreover, MHS duality has been shown to be instrumental for the enumeration of AXp's and CXp's, but also for answering other explainability queries (Marques-Silva, 2022).
For the running example, and since it is feasible to represent the function with a truth table, then there exist polynomial-time algorithms (on the size of the truth-table) for computing all AXp's and all CXp's (Huang and Marques-Silva, 2023). This is illustrated in Figure 2. Table 1 illustrates how each set is analyzed when computing AXp's or CXp's.
2021a,b) allows for different input distributions when computing the average values. For the purposes of this paper, it suffices to consider solely a uniform input distribution, and so the dependency on the input distribution is not accounted for.
Table 2 illustrates how the average value is computed for two concrete sets of features. For example, if \(\mathcal{S}=\{1,4\}\), then features 1 and 4 are fixed to value 0 (as dictated by v). We then allow all possible assignments to features 2 and 3, obtaining \(\Upsilon(\{1,4\})=\{(0,0,0,0),(0,0,1,0),(0,1,0,0),(0,1,1,0)\}\). To compute \(\phi(\mathcal{S})\), we sum up the values of the rows of the truth table indicated by \(\Upsilon(\mathcal{S})\), and divide by the total number of points, which is 4 in this case.
To simplify the notation, the following definitions are used throughout,
\[\Delta(i,\mathcal{S};\mathcal{M},\mathbf{v})=\phi(\mathcal{S}\cup\{i\};\mathcal{M},\mathbf{v})-\phi(\mathcal{S};\mathcal{M},\mathbf{v}) \tag{10}\]
\[\varsigma(\mathcal{S};\mathcal{M},\mathbf{v})=\frac{|\mathcal{S}|!\,(|\mathcal{F}|-|\mathcal{S}|-1)!}{|\mathcal{F}|!} \tag{11}\]
\[\text{Sv}(i;\mathcal{M},\mathbf{v})=\sum\nolimits_{\mathcal{S}\subseteq\mathcal{F}\setminus\{i\}}\varsigma(\mathcal{S};\mathcal{M},\mathbf{v})\times\Delta(i,\mathcal{S};\mathcal{M},\mathbf{v}) \tag{12}\]
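Exact Shapley values for a truth-table classifier can be obtained by brute force directly from these definitions; a sketch (0-based feature indices, uniform input distribution, our naming), exponential in the number of features and meant only to mirror the definitions above:

```python
from itertools import product
from math import factorial

def phi(S, kappa, v):
    """Average prediction over the points agreeing with v on the features in S (uniform inputs)."""
    m = len(v)
    pts = [x for x in product((0, 1), repeat=m) if all(x[i] == v[i] for i in S)]
    return sum(kappa[x] for x in pts) / len(pts)

def shapley_value(i, kappa, v):
    """Sv(i): weighted sum of Delta(i, S) over all subsets S of F \\ {i}."""
    m = len(v)
    others = [j for j in range(m) if j != i]
    total = 0.0
    for mask in product((0, 1), repeat=len(others)):
        S = tuple(j for j, b in zip(others, mask) if b)
        weight = factorial(len(S)) * factorial(m - len(S) - 1) / factorial(m)
        total += weight * (phi(S + (i,), kappa, v) - phi(S, kappa, v))
    return total

# Hypothetical usage: kappa maps every point of {0,1}^4 to the DT's prediction.
# kappa = {x: dt_prediction(x) for x in product((0, 1), repeat=4)}
# sv = [shapley_value(i, kappa, (0, 0, 0, 0)) for i in range(4)]
```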
[Tables showing the computation of the Shapley values for the running example: for each set \(\mathcal{S}\), the rows of the truth table selected by \(\mathcal{S}\) and by \(\mathcal{S}\cup\{i\}\), together with the values \(\phi(\mathcal{S})\), \(\phi(\mathcal{S}\cup\{i\})\), \(\Delta(\mathcal{S})\), \(\varsigma(\mathcal{S})\) and \(\varsigma(\mathcal{S})\times\Delta(\mathcal{S})\); the computations are shown for the Shapley value of feature 1, of feature 2 and of feature 4.]
In contrast with the computation of AXp's and CXp's, which only looks at some of those sets of features to fix, Shapley values will analyze all possible subsets.
The use of Shapley values in explainability have been justified by significant claims. We illustrate some of the claims stated in earlier work [21]:
* _"According to the 2nd axiom, if two features values have an identical influence on the prediction they are assigned contributions of equal size. The 3rd axiom says that if a feature has no influence on the prediction it is assigned a contribution of 0."_
(Note: the axioms above refer to the axiomatic characterization of Shapley values in [21].)
* _"When viewed together, these properties ensure that any effect the features might have on the classifiers output will be reflected in the generated contributions, which effectively deals with the issues of previous general explanation methods."_
Given the above, one would expect a direct correlation between a feature's importance and the absolute value of its Shapley value. As the rest of the paper shows, this is not the case.
## 3. Feature (Ir)relevancy
Given (6) and (7), we can aggregate the features that occur in AXp's and CXp's:
\[\mathcal{F}_{\mathbb{A}(E)}=\bigcup_{\mathcal{X}\in\mathbb{A}(E)}\mathcal{X} \tag{13}\]
\[\mathcal{F}_{\mathbb{C}(E)}=\bigcup_{\mathcal{Y}\in\mathbb{C}(\mathcal{E})}\mathcal{Y} \tag{14}\]
Moreover, MHS duality between the sets of AXp's and CXp's allows proving that: \(\mathcal{F}_{\mathbb{A}(E)}=\mathcal{F}_{\mathbb{C}(\mathcal{E})}\). Hence, we just refer to \(\mathcal{F}_{\mathbb{A}(E)}\) as the set of features that are contained in some AXp (or CXp).
A feature \(i\in\mathcal{F}\) is relevant if it is contained in some AXp, i.e. \(i\in\mathcal{F}_{\mathbb{A}(E)}=\mathcal{F}_{\mathbb{C}(E)}\); otherwise it is irrelevant, i.e. \(i\notin\mathcal{F}_{\mathbb{A}(E)}\)3. We will use the predicate \(\text{Relevant}(i)\) to denote that feature \(i\) is relevant, and predicate \(\text{Irrelevant}(i)\) to denote that feature \(i\) is irrelevant.
Footnote 3: It should be noted that feature relevancy is tightly related with the concept of relevancy studied in logic-based abduction [11].
Relevant and irrelevant features provide a fine-grained characterization of feature importance, in that irrelevant features play no role whatsoever in prediction sufficiency. In fact, if \(p\in\mathcal{F}\) is an irrelevant feature, then we can write:
\[\forall(\mathcal{X}\in\mathbb{A}(E)).\forall(u_{p}\in\mathcal{D} _{p}).\forall(\mathbf{x}\in\mathbb{F}).\] \[\quad\left[\bigwedge\nolimits_{i\in\mathcal{X}}(x_{i}=v_{i}) \wedge(x_{p}=u_{p})\right]\rightarrow(\kappa(\mathbf{x})=\kappa(\mathbf{v})) \tag{15}\]
The logic statement above clearly states that, if we fix the values of the features identified by any AXp then, no matter the value picked for feature \(p\), the prediction is guaranteed to be \(c=\kappa(\mathbf{v})\). The bottom line is that an irrelevant feature \(p\) is absolutely unimportant for the prediction, and so there is no reason to include it in a logic rule consistent with the instance.
For the example DT, we have that \(\mathbb{A}(\mathcal{E})=\{\{2,3,4\}\}\) and that \(\mathbb{C}(\mathcal{E})=\{\{2\},\{3\},\{4\}\}\), i.e. the explanation problem has one AXp and three CXp's. (Recall that the computation of both AXp's and CXp's is summarized in Figure 2.) As expected, \(\mathcal{F}_{\mathbb{A}(E)}=\mathcal{F}_{\mathbb{C}(\mathcal{E})}=\{2,3,4\}\). Hence, we conclude that feature 1 is irrelevant, and that features 2, 3 and 4 are relevant. Observe that no AXp/CXp includes feature 1. For any AXp \(\mathcal{X}\) this means that, adding feature 1 to \(\mathcal{X}\), when feature 1 is assigned any value from its domain \(\mathcal{D}_{1}\), would not change the prediction.
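For classifiers of this size, feature (ir)relevancy can be decided by brute force directly from the truth table; a sketch (0-based feature indices, our naming) that enumerates all AXp's and collects the relevant features:

```python
from itertools import combinations, product

def is_sufficient(X, kappa, v):
    """Condition (1): fixing the features in X to their values in v forces the prediction."""
    m, c = len(v), kappa[tuple(v)]
    return all(kappa[x] == c for x in product((0, 1), repeat=m)
               if all(x[i] == v[i] for i in X))

def all_axps(kappa, v):
    """All subset-minimal sufficient sets, enumerated by increasing size (small m only)."""
    m, axps = len(v), []
    for k in range(m + 1):
        for X in combinations(range(m), k):
            if not any(set(a) <= set(X) for a in axps) and is_sufficient(X, kappa, v):
                axps.append(X)
    return axps

def relevant_features(kappa, v):
    """Union of all AXp's: exactly the relevant features; the remaining ones are irrelevant."""
    return set().union(*(set(X) for X in all_axps(kappa, v)))
```

For the running example's truth table this would return the 0-based set {1, 2, 3}, i.e. features 2, 3 and 4 in the paper's numbering, matching the sets reported above and confirming that feature 1 is irrelevant.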
There are a few notable reasons for why irrelevant features are not considered in explanations. First, one can invoke Occam's razor (a mainstay of ML [1]) and argue for simplest (i.e. irreducible) explanations. Second, if irreducibility of explanations were not a requirement, then one could claim that a prediction using all features would suffice, and that is never the case. Third, the fact that irrelevant features can take any value in their domain without that impacting the prediction shows how unimportant those features are.
## 4. Refuting Shapley Values for Explainability
We now proceed to demonstrate that Shapley values for explainability can produce misleading information about feature importance, in that the relative feature importance obtained with Shapley values disagrees with the characterization of features in terms of (ir)relevancy. Clearly, information about feature (ir)relevancy is obtained by a rigorous, logic-based, analysis of the classifier, and so it captures precisely essential information about how the classifier's prediction depends (or not) on each of the features.
### Misleading Feature Importance
Given the definition of Shapley values for explainability and of irrelevant features, we show that Shapley values will provide misleading information regarding relative feature importance, concretely that an irrelevant feature can be assigned the largest absolute Shapley value. Evidently, misleading information will cause human decision makers to consider features that are absolutely irrelevant for a prediction.
For the example DT, we have argued that feature 1 is irrelevant and that features 2, 3 and 4 are relevant. (The computation of AXp's and CXp's using a truth table is illustrated in Figure 2. Section 3 details how feature (ir)relevancy is decided.) Furthermore, from Figure 3, we obtain that,
\[\forall(i\in\{2,3,4\}).|\text{Sv}(1)|>|\text{Sv}(i)|\]
Thus, the feature with the largest absolute Shapley value is irrelevant for the prediction.
One might be tempted to argue that the sign of \(\text{Sv}(1)\) differs from the sign of \(\text{Sv}(2)\), \(\text{Sv}(3)\), \(\text{Sv}(4)\), and that that could explain the reported issue. However, any hypothetical relationship between the sign of the Shapley values and their perceived impact on the value of the prediction is a flawed argument, in that feature 1 plays no role in setting the prediction to 0, but feature 1 also plays no role in changing the value of the prediction. The results in the next section further confirm that the sign of a feature's Shapley value bears no direct influence on the (ir)relevancy of that feature.
### Issues with Shapley Values for Explainability
By automating the analysis of boolean functions [14], we have been able to identify a number of issues with Shapley values for explainability, all of which demonstrate that Shapley values can provide misleading information about the relative importance of features. The list of possible issues is summarized in Table 3. Observe that some issues imply the occurrence of other issues, e.g. I4 implies I3, and I5 implies I2, among others. Our goal is to highlight the comprehensive range of problems that the use of Shapley values for explainability can induce, and the different issues capture complementary facets of those problems.
Table 4 summarizes, over all possible boolean functions defined on four variables, the percentage of functions exhibiting the identified issues. For each possible function, the truth table for the function serves as the basis for the computation of all explanations, for deciding feature (ir)relevancy, and for the computation of Shapley values. The algorithms used are the ones sketched earlier in the paper, and all run in polynomial time in the size of the truth table. For example, whereas issue I5, which is exemplified by the example DT and instance (see also Section 4.1), occurs in 1.9% of the functions, issues I1, I2 and I6 occur in more than 55% of the functions, with I1 occurring in more than 99% of the functions. It should be noted that the identified issues were distributed evenly for instances where the prediction takes value 0 and instances where the prediction takes value 1. Moreover, it should be restated that the two constant functions were ignored.
### Verdict & Justification
First, it should be plain that any of the issues described in Table 3 should be perceived as problematic in terms of assigning relative importance to features, with some issues serving to confirm the existence of misleading relative feature importance. This is the case with issues I2, I4, I5 and I7. However, assigning a Shapley value of 0 to a relevant feature, or a non-zero Shapley value to an irrelevant feature, will also cause human decision makers to overlook important features, or to analyze unimportant features. Such cases are also covered by the remaining issues.
Second, and as the results of the previous two sections amply demonstrate, the concept of Shapley values for explainability is fundamentally flawed. Furthermore, any explainability tool whose theoretical underpinnings are Shapley values for explainability is also fundamentally flawed.
Third, given the similarities between the computation of abductive and contrastive explanations and Shapley values for explainability, an immediate question is: why do Shapley values for explainability produce misleading measures of relative feature importance? It seems apparent that, whereas in the original definition of Shapley values for game theory all coalitions are acceptable, this is not the case with explainability, i.e. some sets of features should not be considered when assigning importance to a feature. Thus, one reason why Shapley values produce misleading information is the fact that such disallowed sets of features are nonetheless accounted for.
## 5. Discussion
This paper presents a simple argument demonstrating that Shapley values for explainability can produce misleading information regarding relative feature importance. A number of potential issues with Shapley values for explainability have been identified, and shown to occur rather frequently in boolean classifiers. It is therefore plain that the continued use of XAI approaches based on Shapley values (see (Huang and Marques-Silva, 2023) for additional examples and discussion) ought to be reconsidered.
\begin{table}
\begin{tabular}{l r} \hline \hline Metric & Value \\ \hline \# of functions & 65534 \\ \# number of instances & 1048544 \\ \hline \# of I1 issues & 781696 \\ \# of functions exhibiting I1 issues & 65320 \\ \% functions exhibiting I1 issues & 99.67 \\ \hline \# of I2 issues & 105184 \\ \# of functions exhibiting I2 issues & 40448 \\ \% functions exhibiting I2 issues & 61.72 \\ \hline \# of I3 issues & 43008 \\ \# of functions exhibiting I3 issues & 7800 \\ \% functions exhibiting I3 issues & 11.90 \\ \hline \# of I4 issues & 5728 \\ \# of functions exhibiting I4 issues & 2592 \\ \% functions exhibiting I4 issues & 3.96 \\ \hline \# of I5 issues & 1664 \\ \# of functions exhibiting I5 issues & 1248 \\ \% functions exhibiting I5 issues & 1.90 \\ \hline \# of I6 issues & 109632 \\ \# of functions exhibiting I6 issues & 36064 \\ \% functions exhibiting I6 issues & 55.03 \\ \hline \# of I7 issues & 11776 \\ \# of functions exhibiting I7 issues & 7632 \\ \% functions exhibiting I7 issues & 11.65 \\ \hline \hline \end{tabular}
\end{table}
Table 4. Results over all 4-variable boolean functions. The two constant functions were discarded, since \(\boldsymbol{\kappa}\) is required not to be constant.
\begin{table}
\begin{tabular}{l l} \hline \hline Issue & Condition \\ \hline I1 & \(\exists(i\in\mathcal{F}).[\mathsf{Irrelevant}(i)\wedge(\mathsf{Sv}(i)\neq 0)]\) \\ I2 & \(\exists(i_{1},i_{2}\in\mathcal{F}).[\mathsf{Irrelevant}(i_{1})\wedge\mathsf{Relevant}(i_{2})\wedge(|\mathsf{Sv}(i_{1})|>|\mathsf{Sv}(i_{2})|)]\) \\ I3 & \(\exists(i\in\mathcal{F}).[\mathsf{Relevant}(i)\wedge(\mathsf{Sv}(i)=0)]\) \\ I4 & \(\exists(i_{1},i_{2}\in\mathcal{F}).[\mathsf{Irrelevant}(i_{1})\wedge(\mathsf{Sv}(i_{1})\neq 0)]\wedge[\mathsf{Relevant}(i_{2})\wedge(\mathsf{Sv}(i_{2})=0)]\) \\ I5 & \(\exists(i\in\mathcal{F}).[\mathsf{Irrelevant}(i)\wedge\forall(1\leq j\leq m,j\neq i).|\mathsf{Sv}(i)|>|\mathsf{Sv}(j)|]\) \\ I6 & \(\exists(i_{1},i_{2}\in\mathcal{F}).[\mathsf{Irrelevant}(i_{1})\wedge\mathsf{Relevant}(i_{2})\wedge(\mathsf{Sv}(i_{1})\times\mathsf{Sv}(i_{2})>0)]\) \\ I7 & \(\exists(i_{1},i_{2}\in\mathcal{F}).[\mathsf{Irrelevant}(i_{1})\wedge\mathsf{Relevant}(i_{2})\wedge(|\mathsf{Sv}(i_{1})|>|\mathsf{Sv}(i_{2})|)\wedge(\mathsf{Sv}(i_{1})\times\mathsf{Sv}(i_{2})>0)]\) \\ \hline \hline \end{tabular}
\end{table}
Table 3. Identified potential issues with Shapley values |
2306.10341 | Tailoring Machine Learning for Process Mining | Machine learning models are routinely integrated into process mining
pipelines to carry out tasks like data transformation, noise reduction, anomaly
detection, classification, and prediction. Often, the design of such models is
based on some ad-hoc assumptions about the corresponding data distributions,
which are not necessarily in accordance with the non-parametric distributions
typically observed with process data. Moreover, the learning procedure they
follow ignores the constraints concurrency imposes on process data. Data
encoding is a key element to smooth the mismatch between these assumptions but
its potential is poorly exploited. In this paper, we argue that a deeper
insight into the issues raised by training machine learning models with process
data is crucial to ground a sound integration of process mining and machine
learning. Our analysis of such issues is aimed at laying the foundation for a
methodology aimed at correctly aligning machine learning with process mining
requirements and stimulating the research to elaborate in this direction. | Paolo Ceravolo, Sylvio Barbon Junior, Ernesto Damiani, Wil van der Aalst | 2023-06-17T12:59:51Z | http://arxiv.org/abs/2306.10341v1 | # Tailoring Machine Learning for Process Mining
###### Abstract
Machine learning models are routinely integrated into _process mining_ pipelines to carry out tasks like data transformation, noise reduction, anomaly detection, classification, and prediction. Often, the design of such models is based on some _ad-hoc_ assumptions about the corresponding data distributions, which are not necessarily in accordance with the _non-parametric_ distributions typically observed with process data. Moreover, the learning procedure they follow ignores the constraints _concurrency_ imposes on process data. Data _encoding_ is a key element to smooth the mismatch between these assumptions, but its potential is poorly exploited. In this paper, we argue that a deeper insight into the issues raised by training machine learning models with process data is crucial to ground a sound integration of process mining and machine learning. Our analysis of such issues is aimed at laying the foundation for a methodology that correctly aligns machine learning with process mining requirements, and at stimulating further research in this direction.
Process Mining Machine Learning
## 1 Introduction
Process Mining (PM) is a consolidated discipline grounded on _data mining_ and _business process management_. The exploitation of traditional PM tasks (_discovery_, _conformance checking_, and _enhancement_) is today a reality in many organizations [1, 2]. In the last decade, a wave of new results in _artificial intelligence_ has triggered the interest of the PM research community in using supervised or unsupervised Machine Learning (ML) techniques for gaining insight into business processes and providing advice on how to improve their inefficiencies.
In today's practice, ML models are routinely integrated into PM data pipelines [3] to carry out tasks like data transformation, noise reduction, anomaly detection, classification, and prediction. For example, ML is playing a key role in the interface between PM and sensor platforms. Advances in sensing technologies have made it possible to deploy distributed monitoring platforms capable of detecting fine-grained events. The granularity gap between these events and the activities considered by classic PM analysis has often been bridged using ML models [4, 5] that compute virtual activity logs, a problem which is also known as _log lifting_[6]. ML has been proposed as a key technology to _strengthen_ existing techniques, for example, using trace clustering to reduce the diversity that a process discovery algorithm must handle in analyzing an event log [7, 8, 9, 10], to simplify the discovered models [11, 12, 13], or to
support real-time analysis on event streams [14; 15; 16]. ML is adopted to apply predictive models to the executing cases of a process. This research area, known as _predictive process monitoring_, exploits event log data to foresee future events, remaining time, or the outcome of cases, in support of decision making [17; 18; 19]. Root cause analysis [20] and data explainability [21] are other tasks that can be applied to event log data using ML techniques, in order to improve our understanding of a business process. ML models have also been used in addition to (or in lieu of) classic linear programming [22] to _optimize_ business processes' resource consumption and to provide insights to process _re-design_[23]. Computational support for PM tends to converge with the one available for ML models also from the technology standpoint [24; 25]. This makes their integration seem straightforward.
In fact, it is not. When PM tasks are mapped to ML tasks, business process-specific assumptions should drive the construction of training functions and hyperparameters selection. Some of these assumptions stem from the very nature of human social systems. For example, it is well known that process variants are shaped by _non-parametric_ distributions [26]. Quite the contrary, data normality is beneficial for many ML models, moreover, if the data distribution is skewed, ML models may be biased toward a particular outcome. In addition, the ML view on event log data is often oversimplified. The correct encoding of the procedural nature of event log traces is challenging. Often, the sequence of executed events is simply captured by a prefix of fixed length. Even more problematic is encoding _concurrency_ and the _interactions_ constraining the events in the business process. Encoding event log data into a feature space compatible with ML algorithms is a critical design choice in other concerns [27]. It impacts the _sample complexity_, the _data distribution_, and the relevance of the features to put under analysis, for example, to detect _concept drift_ or to support _zero-shot learning_[28].
Today, much of the research on integrating ML with PM focuses on developing ML models to attain high performance in specific business process management scenarios. Less attention has been paid to designing a general methodology to select and adapt ML models based on the nature of the PM problem, taking into account the specific properties of the process data. We argue that, when using ML models in PM pipelines, it is important to prevent any _mismatch_ between the assumptions on input data distributions underlying the ML models and the statistics of the event logs used to feed them [29]. Arbitrarily selecting algorithms leads to unfair evaluation and sub-optimal solutions. For example, a given model cannot be compared with another if their implementations consider different feature spaces [30]. It is also important to make sure that ML models are exposed to process-specific information, such as the processes' control-flow constraints. In this paper, we attempt to identify some of the causes of this mismatch and suggest how to remove them, with the aim of fostering research on a sound methodology to address the integration between PM and ML.
We believe that an effort on these aspects must be jointly made by the PM and Artificial Intelligence research communities. This call to collaboration is valid in general but particularly in business process management, where data analysis has to leave the safe harbor of experimental science to sail into the open sea of decision science. In this paper, we discuss the challenges in a specific direction, i.e., from PM to ML. More specifically, in Section 2 we discuss the issues leading to the PM-to-ML mismatch. In Section 3 we introduce some basic PM notions. In Section 4 we link them to ML principles. Section 5 clarifies the discussion by presenting a couple of samples. Section 6 proposes research lines for advancing in the direction of a general methodology that integrates ML models into PM pipelines. Section 7 closes the paper.
## 2 The Issues Landscape
An important problem underlying our discussion is how to take into account process data specificity in ML model selection and (hyper-) parameter tuning. Of course, processing event logs poses all the usual challenges of data pre-processing and preparation. We will not discuss standard data pre-processing techniques such as outlier removal [31; 32], noise filtering [33; 34], and missing entries recovery [35] as they can be tackled by current statistical techniques. Rather, we will focus on issues that are specific to process data, including their statistical distribution and event concurrency. Indeed, careless assumptions on the encoding of input data may result in biased models with reduced generalization capability.
### Data Distribution
When choosing an ML model for a PM task, it is tempting to assume that the process data fed to the model will follow a normal distribution. Indeed, data normality is beneficial for many types of ML models. Models like Gaussian, naive Bayes, logistic and linear regression explicitly rely on the assumption that the data distribution is bi-variate or multivariate normal. Many phenomena of interest for business process analysis, such as the duration of some activities,
are known to follow normal or log-normal distributions 1. For other PM data, however, assuming normality is not always advisable. For example, process variants are specific activity sequences that occur through a process from start to end. Variants' occurrence in an activity log is typically following a _non-parametric_ trend that complies with the _Pareto principle_[26]. A normal distribution cannot always be assured also for the pairwise dependency relationship between activities, a key statistical information exploited by process discovery algorithms [36]. Indeed, in this case, the normality assumption has been verified for some event logs, including some popular benchmarks we will discuss in Section 5 (the "Road traffic fines" [37] and "Receipt phase of an environmental permit application process" [38]). However, the normality of dependencies in less regular, "spaghetti" like, processes is not observed, as in the "BPI Challenge 2015 Municipality 1"[39]. There are reasons to believe that dependencies in loosely specified logs may follow some power-law trend as well, and require careful parameter fitting in statistical analysis. Imbalanced data sets or non-stationary environments may also cause serious difficulties. For example, if the training data is skewed towards a particular class or outcome, the model may be more likely to predict that class or outcome even when it is not the most likely one. Independent component analysis [40] provides ways to reveal Gaussianity and non-Gaussianity. Of course, non-normal distributions can be transformed to normal ones using Box-Cox transformations [41], and unbalanced data sets can be balanced [42; 43] but, as we shall see, such data transformations should be applied with caution, as they have consequences on the performance of the models.
Footnote 1: See, for instance, the “lunch break” duration distributions at [https://www.statista.com/statistics/995991/distribution-of-lunch-breaks-by-length-in-europe/](https://www.statista.com/statistics/995991/distribution-of-lunch-breaks-by-length-in-europe/)
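To make the normality checks and Box-Cox transformations mentioned above concrete, here is a minimal Python sketch; the log-normal sample is a synthetic stand-in for per-case durations extracted from a real event log, and the 0.05 significance level is an arbitrary choice.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for per-case durations (strictly positive, as Box-Cox requires).
rng = np.random.default_rng(0)
durations = rng.lognormal(mean=3.0, sigma=0.8, size=500)

stat, p_value = stats.shapiro(durations)        # Shapiro-Wilk normality test
if p_value < 0.05:                              # normality rejected
    transformed, lmbda = stats.boxcox(durations)
    print(f"Box-Cox lambda = {lmbda:.2f}, "
          f"Shapiro p-value after transform = {stats.shapiro(transformed)[1]:.3f}")
```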
In any case, PM data regarding distributions of variants cannot be expected to always follow a Gaussian behavior, demanding estimation techniques to sample from the sequential process underlying log generation. _Markov Chain Monte Carlo_ (MCMC) techniques are sometimes used for sampling from an unknown probability distribution (for instance, the distribution of variants) by using data to construct a Markov chain whose equilibrium distribution approximates the unknown one. MCMC techniques can be combined with Kalman filtering[44] to control uncertainty. Of course, an explicit estimate of the data distribution may not even be necessary. Some ML models work well also in the case of non-normally distributed data. Simple yet effective ML models like decision trees and random forests do not assume any normality and work reasonably well on raw event data. Also, linear regression is statistically effective if the model errors are Gaussian, an assumption less stringent for process data than the normality of the entire data set. Kernel methods, e.g., Gaussian processes and support vector machines, provide flexible models that are practical to work with but require proper hyperparameter variables to fit the data.
### Concurrency
Another key attention point is concurrency. How to use ML to predict the behavior of highly concurrent systems and processes is still an open problem, and the research done in the AI community has only scratched its surface (see [45] for a recent review). Most ML approaches view event logs as merely sequential data [46], rather than sequential manifestations of a concurrent system. This may lead to under-sampling the log space and to insufficient training to handle apparently out-of-order event sequences [47]. To address this issue, it is important to provide ML models with control-flow information about the iterative or concurrent execution of tasks as additional context alongside event logs. One approach that has been explored is the use of Bi-directional Long-Short Term Memory (BiLSTM) architectures. Thapa et al. [48] leveraged BiLSTM to detect concurrent human activities in a smart home environment. Additionally, Thapa et al. [49] adapted the LSTM algorithm into a synchronous algorithm called sync-LSTM, enabling the model to handle multiple parallel input sequences and generate multiple synchronized output sequences. The field of predicting the behavior of highly concurrent systems using ML is rapidly evolving, as indicated by the recent survey conducted by Neu et al. [50]. Researchers are actively exploring new techniques and methodologies to improve the understanding and prediction of concurrency in various domains.
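To give a flavour of how sequence models of this kind are set up, the sketch below defines a small bidirectional LSTM for next-activity prediction over integer-encoded activity prefixes. It is a minimal PyTorch sketch with arbitrarily chosen hyperparameters, not the architecture of the works cited above.

```python
import torch
import torch.nn as nn

class BiLSTMNextActivity(nn.Module):
    def __init__(self, n_activities: int, emb_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.emb = nn.Embedding(n_activities + 1, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_activities)

    def forward(self, prefixes: torch.Tensor) -> torch.Tensor:
        # prefixes: (batch, seq_len) of activity indices, 0 reserved for padding
        h, _ = self.lstm(self.emb(prefixes))       # (batch, seq_len, 2 * hidden)
        return self.out(h[:, -1, :])               # logits for the next activity

model = BiLSTMNextActivity(n_activities=10)
logits = model(torch.randint(1, 11, (4, 6)))       # 4 prefixes of length 6
```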
### Non-stationary Behaviour
Even when the process data distributions can be fitted precisely, running processes, especially the ones involving resources that learn and age like people and equipment, change over time. This gives rise to _non-stationary_ behavior. This problem is a critical one since ML models' learning capacity decreases under non-stationary conditions [16]. Concept drift detection techniques are therefore required. In traditional data mining applications, _concept drift_ is identified when, at two separate points in time, a concept, i.e., the relation between a data instance and its associated class, changes [51]. In PM, many aspects of drift should be carefully monitored, including the appropriateness of the event trace with respect to the model, the dependency relationship between activities, and the interdependence between the activities and the available resources or cycle time. Each aspect should be appropriately encoded and monitored using statistical analysis [52].
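One of many possible statistical monitors (not the specific techniques of the cited works) is sketched below: the activity-frequency distribution of a reference window is compared against that of a detection window with a chi-squared test, and a small p-value is read as evidence of drift in the control-flow perspective.

```python
import numpy as np
from scipy.stats import chi2_contingency

def drift_detected(ref_counts: dict, new_counts: dict, alpha: float = 0.01) -> bool:
    """Compare activity-frequency distributions of two windows of an event stream."""
    activities = sorted(set(ref_counts) | set(new_counts))
    table = np.array([[ref_counts.get(a, 0) for a in activities],
                      [new_counts.get(a, 0) for a in activities]])
    _, p_value, _, _ = chi2_contingency(table)
    return p_value < alpha
```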
### Zero-shot Learning
A related topic is using ML models to identify solutions never observed during training, the so-called _zero-shot learning_[53]. There are several zero-shot learning approaches, but a commonality is that unstructured auxiliary information is encoded during the training process instead of using explicit labels. The training process aims to learn to connect new input elements to encodings that have the greatest similarity in terms of auxiliary information. In this way, the system can propose an outcome never observed during the training stage. Zero-shot is relevant in PM when the availability of labeled process data is limited, as the process may be recently developed, unused, or its outcomes inaccessible. In these situations, relying on historical observations to guide learning tasks is insufficient or erroneous. This scarcity has drawn the attention of the PM community to _contrastive learning_, a manner of unsupervised learning that learns representations by contrasting positive and negative pairs. Graph-related contrastive learning methods apply this notion to all types of graph data. Some popular unsupervised representation learning methods imply the idea of contrastive learning. For instance, DeepWalk [54] and node2vec [55] generate Markov chains of nodes based on random walking on graphs, forcing the neighboring nodes of a graph to have similar representations. More recent proposals such as DGI [56] and InfoGraph [57] combine contrastive learning with ordinary supervised training to maximize the mutual information of node and graph levels.
Much work is also being done on _generative_ engines for logs based on likelihood-based models, like auto-encoders and Generative Adversarial Networks (GANs)[58]. However, ordinary GANs show some limitations when applied to generate "clean" process data, where low confidence variants are due to failures of the monitoring context rather than to adversarial constructions [59]. In addition, the GANs' _objective function_, i.e. the difference between the generated and the original distribution of traces in the event log, is not always suitable for evaluating the quality of the generated process variants, and even less for comparing different generators. Performance measures should be used instead, and the trained algorithms should be able to provide an answer with different information details, for example, predicting a performance result knowing or not the availability of resources currently in use.
### Data Encoding
Supervised ML algorithms are trained on collections of examples, each encoded as a vector in a multidimensional feature space. An appropriate encoding method can reduce the sample complexity and reduce the space or time complexity of the model [27]. In PM, even more, than selecting individual features, it is important to capture the interconnections between the different process dimensions. The event logs analyzed in PM contain information from several complementary dimensions, such as event data, executing traces, resource consumption, and cycle time. Each event can be described as a multidimensional object, but its value for the process execution lies in the interdependence with the other events composing the process case instance, the resources available in the system, and the temporal limits constraining the case, which in turn depend on the other cases executed, executing, or to be executed in the system. Therefore, capturing the constraints due to the alternative, optional or mandatory dependency between events is crucial in PM. Encoding methods should also identify features subject to _concept drift_. Extracting insights from this type of functional data is not straightforward; covariance control[60] is needed to take into account the hidden relationships between the different dimensions.
Despite all this, little effort has been spent by the PM community to study the impact of encoding methods on the performance of PM pipelines. Only a few comparative studies are available [61; 27; 62; 63]. Basic techniques, such as the one-hot encoding scheme [64] or frequency-based encoding [9], are often adopted. For numerical attributes, general statistics have been used, such as average, maximum, minimum, and sum [18]. The \(k\)-gram encoding schema [8] is also quite popular. Each activity in the trace is represented as the sequence of \(k\) activities executed to reach it. As an alternative, arrays encoding traces as the frequency of their activities at each position have been proposed [65]. These encoding techniques can incorporate some control-flow information, but cannot fully account for concurrency. To better capture dependency between activities, techniques borrowed from other domains have been proposed, including text mining [66; 67] and graph embedding [68; 69]. Graph embedding methods emerged from the necessity of representing graphs as low-dimensional vectors to be exploited by downstream ML models. These methods rely themselves on ML models (usually, supervised learners) to compute highly informative but low-dimensional vectors of fixed length [70]. When applied to event logs encoding, such methods outperform the others, at the cost of higher time complexity and loss of transparency, as the resulting vectors are organized in a latent space losing any reference to the event log attributes or their statistical properties [71]. In any case, the representation of the control-flow is purely sequential and concurrency is not captured by these methods too. Recently, emerging attention on techniques for encoding control-flow information into a feature space is observed, for example by representing the degree of parallelism or optionality of activities [72; 73]. Another trend is aimed at constructing multi-perspective views of traces, representing the data-flow and control-flow into the same encoding [74; 75]. However, the application of these methods is still limited.
Generally speaking, the encoding procedures used to map PM data to ML models are not documented enough in the PM literature. Sometimes, the feature space selected is not explicitly presented, the steps followed to encode data are not well specified, or the adopted code is not shared. _Ablation studies_, removing parts of the data representation and studying the removal's impact on performance, are still the exception rather than the norm. We argue that formalizing the encoding procedure makes this key design choice explicit, so that it can be justified by the specific analytical goals and by the assumptions applying to the algorithms considered. We will propose such a formalization in Section 4.
## 3 Basic Notions in PM
To make this paper self-contained, in this section we recall some of the basic concepts of PM. An _event log_ is a collection of _events_ generated in a temporal sequence and stored as _tuples_, i.e., recorded values from a set of _attributes_. Events are aggregated by _case_, i.e., the end-to-end execution of a business process. For the sake of classification, all cases following the same _trace_, i.e., performing the same sequence of business process activities, can be considered equal as they belong to the same process _variant_.
**Definition 1** (Event, Attribute): _Let \(\Sigma\) be the event universe, i.e., the set of all possible event identifiers; \(\Sigma^{*}\) denotes the set of all finite sequences over \(\Sigma\). Events have various attributes, such as timestamp, activity, resource, associated cost, and others. Let \(\mathcal{AN}\) be the set of attribute names. For any event \(e\in\Sigma\) and attribute \(\texttt{a}\in\mathcal{AN}\), the function \(\#_{\texttt{A}}(e)\) returns the value of the attribute \(\texttt{a}\) for event \(e\)._
The set of possible values of each attribute is restricted to a domain. For example, \(\#_{\texttt{ACTIVITY}}:\Sigma\rightarrow\mathcal{A}\), where \(\mathcal{A}\) is the set of the legal activities of a business process, e.g. \(\mathcal{A}=\{a,b,c,d,e\}\). If \(e\) does not contain the attribute value for some \(\texttt{a}\in\mathcal{AN}\), then \(\#_{\texttt{A}}(e)=\bot\). It follows that an event can also be viewed as a tuple of attribute-value pairs \(e=(\mathcal{A}_{1},...,\mathcal{A}_{m})\), where \(m\) is the cardinality of \(\mathcal{AN}\).
**Definition 2** (Sequence, Sub-sequence): _In a sequence of events \(\sigma\in\Sigma^{*}\), each event appears only once and time is non-decreasing, i.e., for \(1\leq i\leq j\leq|\sigma|:\#_{\texttt{timestamp}}(e_{i})\leq\#_{\texttt{timestamp}}( e_{j})\). Thus \(\langle e_{1},e_{2},e_{3}\rangle\) denotes three subsequent events. A sequence can also be denoted as a function generating the corresponding event for each position in the sequence: \(\sigma(i\to n)\mapsto\langle e_{i},...,e_{n}\rangle\), with \(e_{n}\) the last event of a sequence. In this way, we can define a sub-sequence as a sequence \(\sigma(i\to j)\) where \(0\leq i<j<n\)._
**Definition 3** (Case, Event Log): _Let \(\mathcal{C}\) be the case universe, that is, the set of all possible identifiers of a business case execution. \(\mathcal{C}\) is the domain of an attribute \(\#_{\texttt{CASE}}\in\mathcal{AN}\). We denote a case \(c\in\mathcal{C}\) as \(\langle e_{1},e_{2},e_{3}\rangle_{c}\), meaning that all events are in a sequence and share the same case. For a case \(\langle e_{1},e_{2},e_{3}\rangle_{c}\) we have \(\#_{\texttt{CASE}}(e_{1})\) = \(\#_{\texttt{CASE}}(e_{2})\) = \(\#_{\texttt{CASE}}(e_{3})\) = \(c\). An event log \(L\) is a set of cases \(L\subseteq\Sigma^{*}\) where each event appears only once in the log, i.e., for any two different cases, the intersection of their events is empty. When the case identifier is not used as a grouping attribute, an event log \(\hat{L}\) can be simply viewed as a set of events, thus \(\hat{L}\subseteq\Sigma\)._
**Definition 4** (Variant, Event Log): _The cases \(c_{1}\) and \(c_{2}\) follow the same variant if \(\langle e_{1},e_{2},e_{3}\rangle_{c_{1}}\) and \(\langle e_{4},e_{5},e_{6}\rangle_{c_{2}}\) have the same sequence of activities, e.g. \(\#_{\texttt{ACTIVITY}}(e_{1})\) = \(\#_{\texttt{ACTIVITY}}(e_{4})\) = a, \(\#_{\texttt{ACTIVITY}}(e_{2})\) = \(\#_{\texttt{ACTIVITY}}(e_{5})\) = b, \(\#_{\texttt{ACTIVITY}}(e_{3})\) = \(\#_{\texttt{ACTIVITY}}(e_{6})\) = a. We call this sequence a trace. This implies an event log can also be viewed as a multi-set of traces. We denote an event log as a multi-set by writing \(\overline{L}=[\langle a,b,c\rangle^{3},\langle a,b,a\rangle^{11},\langle a,c,b,a\rangle^{20}]\). The superscript number of a trace details the number of cases following this variant. For example, \(\langle a,b,a\rangle^{11}\) means we have a variant with \(11\) cases following the trace \(\langle a,b,a\rangle\)._
## 4 A Formalisation of PM Data Encoding
Despite the variety of encoding methods discussed in Section 2, we argue that available approaches fail to capture key process-level information such as the interplay between cases, or between activity execution and availability of resources. Most of the encoding methods in use today focus on the _control-flow_, according to an _inter-case_ view. Methods focusing on the _intra-case_ view have been proposed but are rarely applied [76]. Similarly, proposals for encoding the _data-flow_[77] are available in the literature, but never adopted in comparative studies or surveys. Another recent trend is stressing the need of capturing constraints connected to concurrency [72; 73]. In this section, we discuss in detail how PM data is encoded to suit ML models' training procedures. For the sake of space, we limit our discussion to supervised learning, probably the most widely applied ML approach. Generally speaking, supervised techniques train models to compute functions \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d^{\prime}}\) where the input is a \(d\)-dimensional vector \(\mathbf{x}\) and the output is a \(d^{\prime}\)-dimensional vector \(\mathbf{y}\). Each dimension is a measurable piece of data, a.k.a feature or attribute. For popular ML tasks, the output is mono-dimensional. In regression, the output is a real-valued scalar value, while in classification,
the output is a natural number indexing a "class". However, nothing prevents having multidimensional vectors in output. In structured learning, input and output may be a structure like a block matrix, divided into sub-matrices to represent algebraic entities such as graphs, tensors, etc. The training process to approximate \(f\) requires a set of examples \(\{(\mathbf{x}_{1},\mathbf{y}_{1}),...,(\mathbf{x}_{n},\mathbf{y}_{n})\}\) where inputs and outputs are paired. We can then define this training set as an example matrix \(\mathbf{X}:=[\mathbf{x}_{1},...,\mathbf{x}_{n}]^{\top}\in\mathbb{R}^{n\times d}\) and a label matrix \(\mathbf{Y}:=[\mathbf{y}_{1},...,\mathbf{y}_{n}]^{\top}\in\mathbb{R}^{n\times d^{\prime}}\), given by the number \(n\) of examples and the numbers \(d\) and \(d^{\prime}\) of dimensions of the input and output spaces.
In their original format, PM log entries do not belong to a vector space. This is because the events in an event log are grouped by case and this grouping is essential to keep a connection with business process execution.
Our goal here is to formalize the procedure to encode the cases into vectors in a way that can be used as a template to describe the specific encoding chosen for a PM application. Our starting point is \(\hat{L}\subseteq\Sigma\), a log viewed as a set of event identifiers. This representation can be mapped into a vector space \(\mathbf{X}\) by applying a suitable _transformation function_ grouping events by case and returning a number of vectors equal to or less than the number of events.
**Definition 5** (Encoding function): _Given an event log \(\hat{L}\subseteq\Sigma\), an encoding function \(\Gamma:\Sigma\rightarrow\mathbf{X}^{n\times d}\) represents \(\hat{L}\) in the vector space \(\mathbf{X}\). The encoding function \(\Gamma\) is valid if it defines a transformation where two elements of \(\Sigma\), \(e_{i}\) and \(e_{j}\) are aggregated on the same element \(\mathbf{x}\in\mathbb{R}^{d}\) if \(\#_{\textsc{cASE}}(e_{i})=\#_{\textsc{cASE}}(e_{j})\), with \(n\leq|\mathcal{C}|\), i.e. the vectors in \(\mathbf{X}\) are a subset of the cases in \(\mathcal{C}\)._
We propose a canonical representation of \(\Gamma\) as a composition of a _filtering function_\(\pi\), a _dimensioning function_\(\rho\), a _grouping function_\(\eta\), and a _valuation function_\(\nu\), i.e., \(\Gamma=\nu\circ\eta\circ\rho\circ\pi\). One or more of these components can implement the identity function with null effects.
In particular, \(\pi:\Sigma\rightarrow\Sigma_{\alpha}\) imposes a condition on the events' attributes or the attributes' values, \(\forall e\in\hat{L}\wedge\textsc{a}\in\mathcal{A}\mathcal{N}:P(\#_{\textsc{a}}(e))\), where \(P\) is a predicate, thus \(|\Sigma_{\alpha}|\leq|\Sigma|\). For example, filtering the events by their timestamp, \(\forall e\in\hat{L}:\textsc{YYYY-MM-DD}\leq\#_{\texttt{timestamp}}(e)\leq\textsc{YYYY-MM-DD}\). The function \(\rho:\Sigma_{\alpha}\to D\) defines the dimensions of the vector space, creating new dimensions based on a range of values in the original dimensions or, less commonly, grouping multiple dimensions into a single one. Often, the set \(D\) is the union of multiple attribute domains, i.e. \(D=\mathcal{A}_{k=1}\cup\mathcal{A}_{k=2}\cup\cdots\cup\mathcal{A}_{k=l}\). The function \(\eta:\Sigma_{\alpha}\rightarrow\mathbf{X}_{\alpha}^{n\times d}\), with \(d=|D|\), assigns to \(\mathbf{X}_{\alpha}\) the values of the attributes in \(e\) and groups events by case, so that \(\forall\textsc{x}\forall\textsc{a}_{k}:\mathbf{x}_{i,j}=\#_{\textsc{a}_{k}}(e)\iff\#_{\textsc{a}_{k}}(e)=D_{j}\wedge\#_{\texttt{CASE}}(e)=c_{i}\). The number of elements in the vector space equals the number of cases to include in the example matrix, thus \(n\leq|\mathcal{C}|\). Because the sets \(\Sigma_{\alpha}\) and \(D\) can be viewed as columnar matrices \(M_{\Sigma_{\alpha}}^{|\Sigma_{\alpha}|\times 1}\) and \(M_{D}^{d\times 1}\), the size of \(\mathbf{X}_{\alpha}\) is equal to \(M_{\Sigma_{\alpha}}\times M_{D}^{\top}\), i.e. the set of events we selected with \(\pi\) is multiplied by the dimensions we identified with \(\rho\). It is worth mentioning that, when grouping is applied, each vector component becomes an array of attribute values rather than a single value. The function \(\nu\) aims at transforming these arrays of attribute values into real-valued scalar values. We define \(\nu:\mathbf{X}_{\alpha}^{n\times d}\rightarrow\mathbf{X}^{n\times d}\) to clarify that the components of the two matrices are valuated differently.
For example, the basic _one-hot_ encoding schema corresponds to a null \(\pi\), a \(\rho\) with \(D=\bigcup_{k=1}^{l}\mathcal{A}_{k}\), an \(\eta\) grouping the events of the same case, and a \(\nu:\mathbf{X}_{\alpha}^{n\times d}\rightarrow\{0,1\}^{n\times d}\), returning \(\mathbf{x}_{i,j}=1\) if at least one value \(\#_{\textsc{a}_{k}}(e)=D_{j}\) is observed for the case \(\#_{\texttt{CASE}}(e)=c_{i}\), and \(0\) if not. The popular _activity profile_ schema [7] encodes an event log into a vector of activity values by simply counting, for each activity, all events of a case that include that activity. The encoding function maps the events in \(\hat{L}\) into \(\mathbf{X}\) by executing the four canonical transformations as follows. First, with \(\pi\), it keeps only the events associated with activity values, \(\forall e\in\Sigma:\#_{\textsc{activity}}(e)\neq\bot\). Then it defines the dimensions of \(\mathbf{X}\) with \(\rho\) so that \(D=\mathcal{A}\), where \(\mathcal{A}\) is the set of legal business process activities. Third, it aggregates the data by case with \(\eta\). Finally, it performs the valuation with \(\nu\), assigning to each component \(\mathbf{x}_{i,j}\) the count of occurrences of activity \(D_{j}\) in case \(c_{i}\). For instance, the log \(\overline{L}=[\langle a,b,c\rangle^{3},\langle a,b,a\rangle^{11},\langle a,c,b,a\rangle^{20}]\) is transformed into the first matrix in (1) with \(\pi\), into the second matrix with \(\rho\), into the third matrix with \(\eta\), and finally into the fourth matrix in (1) with \(\nu\).
\[\begin{bmatrix}e_{1}\\ e_{2}\\ e_{3}\\ e_{4}\\ e_{5}\\ \cdots\end{bmatrix}\qquad\begin{bmatrix}a\\ b\\ c\end{bmatrix}\qquad\begin{bmatrix}a&b&c\\ a&b&c\\ a&b&c\\ (a,a)&b&\bot\\ (a,a)&b&\bot\\ (a,a)&b&\bot\\ \cdots&\cdots&\cdots\end{bmatrix}\qquad\begin{bmatrix}1&1&1\\ 1&1&1\\ 1&1&1\\ 2&1&0\\ 2&1&0\\ 2&1&0\\ \cdots&\cdots&\cdots\end{bmatrix} \tag{1}\]
We believe that if the PM community got used to clarifying the definition of these four functions when specifying an encoding procedure, the literature would benefit in terms of the comparability of results. For example, a _data-flow_ approach will require clarifying the contribution of the different dimensions in encoding cases. An _intra-case_ approach will require modifying the \(\eta\) function to encode multiple cases into a single vector.
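As a concrete (and deliberately minimal) instance of the canonical decomposition \(\Gamma=\nu\circ\eta\circ\rho\circ\pi\), the sketch below implements the activity-profile encoding for the running example; the event log is represented as a plain list of (case identifier, activity) pairs and all names are ours.

```python
from collections import Counter

def activity_profile(events):
    """Activity-profile encoding: one row per case, one column per activity."""
    filtered = [(c, a) for c, a in events if a is not None]            # pi
    dims = sorted({a for _, a in filtered})                            # rho
    cases = {}
    for c, a in filtered:                                              # eta
        cases.setdefault(c, []).append(a)
    X = [[Counter(acts)[d] for d in dims] for acts in cases.values()]  # nu
    return dims, X

# The log [<a,b,c>^3, <a,b,a>^11, <a,c,b,a>^20] from the running example:
log = ([(f"v1_{i}", x) for i in range(3) for x in "abc"]
       + [(f"v2_{i}", x) for i in range(11) for x in "aba"]
       + [(f"v3_{i}", x) for i in range(20) for x in "acba"])
dims, X = activity_profile(log)   # rows such as [1, 1, 1], [2, 1, 0] and [2, 1, 1]
```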
\begin{table}
\begin{tabular}{l c c} \hline \hline Cases & Number of Variants & Coverage of Cases \\ \hline
56482 & 1 & 37,6\% \\
102853 & 2 & 68,4\% \\
132758 & 4 & 88,3\% \\
142926 & 7 & 95,0\% \\
148887 & 17 & 99,0\% \\
150270 & 131 & 99,9\% \\
150370 & 231 & 100,0\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: _Managing Road Traffic Fines_ Event Log
Figure 1: Two _decision trees_ generated from our sample event log. In 1a the input data conforms to the case distribution observed in the event log. As a consequence, the most frequent variants take the lion’s share and the numeric feature _amount_ decides multiple split points. In 1b the data is balanced by oversampling the variants with low occurrence. The split points in the tree use categorical features only. The decision tree is an example of an algorithm significantly affected by uncritically training on the case distribution of event logs.
## 5 Illustrative Examples
We will now use two examples to illustrate the concepts introduced above.
The first example refers to the real-life event log of road traffic fines [37]. The events captured in the event log include creating a fine notice, recording the penalty amount, verifying if the payment is received, registering an appeal to the prefecture, and others. The reader interested in more details is referred to [78]. As illustrated in Table 1, the occurrence of trace variants follows a Pareto distribution with only 4 variants covering more than 88% of the recorded cases and with \(100\) variants that have a single occurrence. The most frequent variant is \(\langle Create\ Fine,\ Send\ Fine,\ Insert\ Fine\ Notification,\ Add\ Penalty,\ Send\ for\ Credit\ Collection\rangle^{56482}\), the second is \(\langle Create\ Fine,\ Payment\rangle^{46371}\), the third is \(\langle Create\ Fine,\ Send\ Fine\rangle^{20385}\), and so on.
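The kind of cumulative coverage reported in Table 1 can be computed with a few lines of pandas; the sketch below assumes an event log already loaded as a DataFrame with the XES-style column names used by common PM libraries (case identifier, activity label, timestamp), which may need to be adapted to the log at hand.

```python
import pandas as pd

def variant_coverage(log: pd.DataFrame) -> pd.DataFrame:
    """Cases per variant and cumulative share of cases, most frequent first."""
    traces = (log.sort_values('time:timestamp')
                 .groupby('case:concept:name')['concept:name']
                 .apply(tuple))                        # one activity tuple per case
    counts = traces.value_counts()                     # cases per variant
    coverage = counts.cumsum() / counts.sum()          # cumulative coverage of cases
    return pd.DataFrame({'cases': counts, 'cumulative_coverage': coverage})
```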
Let us now try to develop predictive analytics on this event log. For example, we could ask ourselves why certain cases exhibit a duration that is significantly longer than others. To study the problem, we are interested in searching for patterns correlated with long duration. Using encoding, we can represent the cases in the event log as vectors composed of categorical data, such as the executed activities, and of numerical data, such as the number of penalties and the trace duration2. A decision tree can then be used to highlight the factors influencing case duration. We express it as a simple binary problem: being below or above a threshold of 200 days. Figure 1 illustrates the results we obtain. Figure 1a presents a decision tree conforming to the case distribution observed in the event log. The entire set of cases in \(L\) is encoded in \(\mathcal{X}\). As a consequence, the most frequent variants take the lion's share of the examples used to train the decision tree. Figure 1b presents the decision tree obtained by balancing the case distribution among variants, oversampling those variants with low occurrence. This is, for example, achieved by creating \(\mathcal{X}\) taking an equal number of occurrences of each trace in \(L\).
Footnote 2: The methods used for encoding the event log in a vector space are available in the PM4PY library [https://pm4py.fit.fraunhofer.de/documentation#decision-trees](https://pm4py.fit.fraunhofer.de/documentation#decision-trees)
Because the split points of the tree are chosen to best separate examples into two groups with minimum mixing, the cases with low occurrence tend to be ignored. Indeed, the tree in Figure 1a relies on the numeric feature _amount_ to decide on multiple split points. On the contrary, the tree in Figure 1b defines the split points using categorical features only. This is due to the fact that the variants not associated with a penalty amount were quite rare, and by increasing their representation to balance the data set we prevented the algorithm from using the penalty amount as a discriminating feature.
It is important to note that, in general, we cannot say whether proactive balancing is better than using the data as they are, or even which balancing factor should be applied. The strategy to be preferred strongly depends on our goal. If we want to analyse an event log in order to identify procedures that can be automated and learn the decision rule to be used, our interest is in the frequent behaviour. The real distribution of the event log, or even a distribution pruned of rare examples [26], must then drive the learning procedure we adopt. If our goal is anomaly detection [81] or root cause analysis [82], rare examples have to be represented.
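The contrast between the two training regimes can be reproduced with scikit-learn along the following lines; the data here is a synthetic stand-in (the footnote points to the PM4PY documentation for the actual encoding pipeline), and the oversampling rate of 100 cases per variant is an arbitrary choice.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

def balance_by_variant(X, y, variant, n_per_variant=100, seed=0):
    """Oversample each variant to the same number of cases."""
    Xb, yb = [], []
    for v in np.unique(variant):
        idx = np.where(variant == v)[0]
        take = resample(idx, replace=True, n_samples=n_per_variant, random_state=seed)
        Xb.append(X[take])
        yb.append(y[take])
    return np.vstack(Xb), np.concatenate(yb)

# Synthetic stand-in: encoded cases, their variant ids and a binary duration label.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(300, 5)).astype(float)
variant = rng.integers(0, 4, size=300)
y = (X[:, 0] + variant > 3).astype(int)

tree_raw = DecisionTreeClassifier(max_depth=3).fit(X, y)            # as-is distribution
Xb, yb = balance_by_variant(X, y, variant)
tree_balanced = DecisionTreeClassifier(max_depth=3).fit(Xb, yb)     # balanced variants
```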
Figure 2: (a) The Heuristic Miner Algorithm [79] was used to discover a model from the “Road Traffic Fines” [37] event log. The discovered model specifies alternative routes that can be followed to complete the process. In particular, executing Payment or Send Fine determines which of the alternative paths is followed. (b) The Heuristic Miner Algorithm [79] was used to discover a model from the “Artificial Patient Treatment” [80] event log. The discovered model specifies that Blood test, X-ray scan, and Physical test are executed in parallel. Any order can be followed in executing these activities.
Our next example is related to the need to capture concurrency (Section 2). While cases included in an event log are described as sequences of activities, the behaviour they describe should be interpreted differently based on the model that generated them. To capture control-flow behaviour, one needs to encode the dependency relationships in event logs. By executing the Heuristic Miner algorithm [79] on the "Road Traffic Fines" [37] event log, we observe that alternative paths can be followed to complete the process. If a case includes the execution of the Payment activity, it will not include Send Fine and the following activities. The same algorithm applied to the "Artificial Patient Treatment" [80] event log reveals the concurrent execution of the Blood test, X-ray scan, and Physical test activities. All these activities are required to complete the diagnostic stage, except for X-ray scan, which may be skipped, but the order of execution is not relevant. Thanks to process models, PM techniques do consider concurrency. Two sequences \(\langle a,b,c\rangle\) and \(\langle a,c,b\rangle\) can have the same conformance to the model if the model describes \(b\) and \(c\) as concurrent activities, while the conformance value will be different if \(b\) and \(c\) are in sequence or relate to alternative paths. Unfortunately, most ML models view event logs merely as sequential data. When cases get encoded into a vector space, the inference the ML model can produce is based on the distance in this space. The distance between \(\langle a,b,c\rangle\) and \(\langle a,c,b\rangle\) is accounted for in the same way in the vector space, and we cannot differentiate between the sequences based on the reference process model. This limitation impairs capturing concurrent behaviour that is not detected by simply matching the two sequences. In terms of our example, an ML procedure could effectively predict the lead time of a case knowing that the Payment activity was executed. Training an ML algorithm to predict the conformance to the diagnostic protocol of a delivered treatment is more complex, and will require a higher amount of training data, as the ML model needs to incorporate examples of the equivalence of the different orders of execution of the Blood test, X-ray scan, and Physical test activities. Encoding this equivalence in vector spaces, for example by defining suitable pictograms to feed a CNN, is still an open challenge.
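The point about distances can be illustrated with a toy computation: under a frequency (bag-of-activities) encoding the traces ⟨a,b,c⟩ and ⟨a,c,b⟩ are indistinguishable, whereas under a positional one-hot encoding they are always kept apart; neither behaviour depends on whether the reference model treats b and c as concurrent, sequential, or alternative. This is only one concrete illustration, with encodings and names chosen by us.

```python
import numpy as np

ALPHABET = ("a", "b", "c")

def frequency_encode(trace):
    """Bag-of-activities: counts per activity, order is lost."""
    return np.array([trace.count(x) for x in ALPHABET], dtype=float)

def positional_encode(trace):
    """One-hot of the activity at each position, order is hard-coded."""
    return np.concatenate([[1.0 if x == a else 0.0 for a in ALPHABET] for x in trace])

t1, t2 = ("a", "b", "c"), ("a", "c", "b")
print(np.linalg.norm(frequency_encode(t1) - frequency_encode(t2)))    # 0.0
print(np.linalg.norm(positional_encode(t1) - positional_encode(t2)))  # 2.0
```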
## 6 Toward an Integrated Methodology
Guided by the above considerations about encoding, we will now outline the strategy to be used to properly integrate PM and ML. In the previous sections, we argued that when PM tasks are mapped to ML tasks, PM-specific assumptions should drive the construction of training functions and hyper-parameter selection. Simple ML classification and regression algorithms model the data by a single Gaussian grounded on mean and covariance. On the other hand, kernel methods like Gaussian Processes and Support Vector Machines have opened the possibility of flexible models that are practical to work with, but require non-trivial hyper-parameter tuning to fit behavioural data [83].
Figure 3 provides a synoptic view of mapping PM tasks to ML ones.
As an example of non-trivial mapping, let us consider the non-linear relationship between data samples and the expected outcomes, which robust ML algorithms can address with adjusted hyper-parameters. In this setting, linear projections such as PCA are not as effective as t-SNE visualisation [84] for obtaining insights from the data. Other challenges of moderate difficulty are related to label availability and imbalanced scenarios [81]. Semi-supervised ML techniques and generative models can tackle the first issue, while sampling or synthesising methods can address the second. Problems related to data quality, in which the difficulty of approximating a proper data distribution is accentuated, can be mitigated by enlarging the training data and by properly tuning the ML algorithm. Alternatively, the training process can be enriched using generative models [85]. To handle the difficulties outlined in Section 1 when using non-pictorial trace representations, "process-friendly" GANs can be considered, like Sequence GANs (SGANs), in which the adversarial samples are designed from discrete sequences, like events. The application of GANs is not limited to data augmentation, as they can also be used to improve data quality for process model generalisation [85]. Preliminary results are available on using GAN-generated data to improve predictive tasks (e.g., lead time of incomplete cases) under an adversarial framework [86].
Turning to non-stationary process behaviour, sampling methods are a promising way to reduce the impact of non-stationary distributions of event log data [87]. Even after the data has been brought to at least near-stationary behaviour, the business process can naturally change its pattern over time, leading to a burdensome problem called concept drift [88, 89, 52, 16]. In dealing with this problem, a significant part of the PM community has focused on detecting and managing its onset. Regardless of the success of these attempts, we still consider this problem an open issue, since the event data stream is usually modelled as a stream of complete traces, known from the start to the end activity. In reality, the drift onset occurs at an arbitrary position of the event stream, well before the endpoint is reached and the rest of the trace is known. Some researchers are addressing this information deficiency by using statistical adaptations based on Hoeffding bounds [90]. In principle, it is possible to rely on statistical assumptions about the confidence interval of the data to make a decision on the drift onset. In other words, it is possible to create ML models and perform predictions supported by an approximated conjecture about the future, obtained from the available event log data. The use of
"stateful" ML models with memory, in particular, Deep Learners based on the LSTM architecture, could enable handling drifts. However, this kind of challenge demands experienced ML practitioners and a robust computational structure.
### Hyper-parameter Tuning
Once a class of ML models has been chosen, hyper-parameter tuning must be performed to instantiate the ML model that delivers the desired accuracy (and possibly some required non-functional properties, like explainability). Searching the model space by trial and error can be burdensome. Automated Machine Learning (AutoML), grounded on sharing previous knowledge across similar tasks, is a reasonable alternative to tackle these problems. AutoML can help to handle the classification problems called Zero-shot [91; 92] or Cold-start [93], for which little context information is available (and even the complete list of classes may be unknown) at the start of the training, by taking advantage of meta-features and information on similar models, akin to how human experts drive an old-fashioned search for desirable models by their experience on related tasks [94]. Some PM research works based on AutoML discuss how to find a suitable PM pipeline by recommending steps. For example, [95] proposed a solution to suggest the encoding method, since the large number of available methods can make the selection tricky. Furthermore, some encoding methods are better suited to particular data. It is remarkable that traditional process mining tasks can be leveraged when matched with intelligent decision support approaches.
### Final Recommendations
In this final section, we present a set of recommendations that aim to be valuable for both PM practitioners and researchers.
Figure 3: From task to task: an overview of the relationship between PM and ML tasks
#### 6.2.1 Recommendation 1: Choose data representation carefully
When working with PM data structures, it is crucial to carefully translate them into a metric feature space that can be manipulated by ML algorithms. Additionally, it is important to preserve context information, such as control-flow constraints, which are essential for process analysis. The choice of encoding techniques should align with problem-specific goals and constraints.
#### 6.2.2 Recommendation 2: Fit the data distributions
PM often deals with non-Gaussian, non-stationary distributions. To achieve optimal performance in production, it is advisable to estimate the data distribution instead of relying on the best Gaussian mix approximation. Building training sets interactively poses a significant challenge in PM. Leveraging ML approaches such as AutoML and Active Learning can help reduce the manual burden and improve the process.
#### 6.2.3 Recommendation 3: Do not assume the availability of a labelled training set
In business process environments, obtaining pre-existing labelled training sets for PM tasks is uncommon. Constructing a training set by correctly sampling the data space is essential, particularly due to the high diversity of process execution conditions in PM tasks.
#### 6.2.4 Recommendation 4: Consider zero-shot learning
During the training of ML models, the complete set of possible outcomes (co-domain of \(f\)) may only be partially known. For instance, in process optimisation, the cost of certain sequences may not be available at the time of training the regression model. It is essential to assess the completeness of the available information when formulating the problem statement to ensure the quality of model inference.
#### 6.2.5 Recommendation 5: Ensure minimum ML quality at an early stage via constraints
While the estimation of the data distribution converges over time, an extended convergence period is unacceptable, as it results in a high model error during training. It is possible to impose control-flow constraints on ML models when they are known in advance from domain requirements and regulations.
#### 6.2.6 Recommendation 6: Incorporate domain knowledge
Domain knowledge plays a critical role in effective PM. Integrating domain-specific information and constraints into ML models can significantly enhance their performance and interpretability. It is important to actively involve domain experts in the feature engineering and model validation processes.
#### 6.2.7 Recommendation 7: Evaluate model interpretability
PM tasks often require interpretable models to gain insights into process behaviour and make informed decisions. It is essential to evaluate the interpretability of ML models and choose algorithms that provide transparent explanations of their predictions. This becomes particularly crucial when dealing with critical processes or compliance and regulatory requirements.
#### 6.2.8 Recommendation 8: Continuously monitor and update ML models
Process environments are dynamic, and changes over time can impact the performance of ML models. Establishing a framework for monitoring and evaluation allows the assessment of models' performance and facilitates their timely updates as needed. Continuous learning and retraining of models ensure their accuracy and relevance in evolving process scenarios.
#### 6.2.9 Recommendation 9: Share knowledge and best practices
Promote knowledge sharing and collaboration within the PM community. Encourage the dissemination of successful case studies, research findings, and best practices to foster learning and advancement in the field. Engage in conferences, workshops, and online forums to connect with fellow practitioners and researchers and stay updated with the latest developments in PM.
By following these recommendations, PM practitioners and researchers can improve the effectiveness and efficiency of process mining applications, enabling better process understanding, optimisation, and decision-making.
## 7 Conclusions
The growing use of ML methods in PM necessitates a robust and comprehensive methodology for integrating these algorithmic techniques. This paper aimed to address the challenges associated with the ML/PM mapping and identify the fundamental principles for establishing a methodological foundation in this field. Through the analysis conducted in this study, we have provided a set of recommendations that can guide practitioners and researchers in effectively applying ML to PM tasks. These recommendations encompass various aspects of the PM process, from data representation to model evaluation and monitoring. By following these recommendations, PM practitioners and researchers can enhance the effectiveness and efficiency of their ML-driven process mining applications. It is important to acknowledge that the field of ML in PM is constantly evolving, and new challenges and opportunities will continue to arise. As such, ongoing research and collaboration among practitioners and researchers are crucial to refine and expand upon the proposed recommendations. By embracing a methodological foundation that integrates ML techniques in PM, we can unlock the full potential of process mining and leverage the power of data-driven insights to drive process understanding, optimisation, and decision-making in various domains and industries.
|
2305.16260 | Statistical Characteristics of the Electron Isotropy Boundary | Utilizing observations from the ELFIN satellites, we present a statistical
study of $\sim$2000 events in 2019-2020 characterizing the occurrence in
magnetic local time (MLT) and latitude of $\geq$50 keV electron isotropy
boundaries (IBs) at Earth, and the dependence of associated precipitation on
geomagnetic activity. The isotropy boundary for an electron of a given energy
is the magnetic latitude poleward of which persistent isotropized pitch-angle
distributions ($J_{prec}/J_{perp}\sim 1$) are first observed to occur,
interpreted as resulting from magnetic field-line curvature scattering (FLCS)
in the equatorial magnetosphere. We find that energetic electron IBs can be
well-recognized on the nightside from dusk until dawn, under all geomagnetic
activity conditions, with a peak occurrence rate of almost 90% near $\sim$22
hours in MLT, remaining above 80% from 21 to 01 MLT. The IBs span a wide range
of IGRF magnetic latitudes from $60^\circ$-$74^\circ$, with a maximum
occurrence between $66^\circ$-$71^\circ$ (L of 6-8), shifting to lower
latitudes and pre-midnight local times with activity. The precipitating energy
flux of $\geq$50 keV electrons averaged over the IB-associated latitudes varies
over four orders of magnitude, up to $\sim$1 erg/cm$^2$-s, and often includes
electron energies exceeding 1 MeV. The local time distribution of IB-associated
energies and precipitating fluxes also exhibit peak values near midnight for
low activity, shifting toward pre-midnight for elevated activity. The
percentage of the total energy deposited over the high-latitude regions
($55^\circ$ to $80^\circ$; or IGRF $L\gtrsim 3$) attributed to IBs is 10-20%,
on average, or about 10 MW of total atmospheric power input, but at times can
be up to $\sim$100% of the total $\geq$50 keV electron energy deposition over
the entire sub-auroral and auroral zone region, exceeding 1 GW in atmospheric
power input. | Colin Wilkins, Vassilis Angelopoulos, Andrei Runov, Anton Artemyev, Xiao-Jia Zhang, Jiang Liu, Ethan Tsai | 2023-05-25T17:14:34Z | http://arxiv.org/abs/2305.16260v1 | # Statistical Characteristics of the Electron Isotropy Boundary
**Key Points:**
* Using observations by the ELFIN CubeSats in 2019 and 2020 we statistically characterize the properties of 50 keV to \(\sim\)5 MeV electron isotropy boundaries (IBs), including occurrence rates, spatial distribution, and associated precipitating energy fluxes versus magnetic local time, latitude, and geomagnetic activity indices
* We found that electron IBs occur over a wide range of nightside local times and latitudes under any geomagnetic conditions, and exhibit four orders of magnitude variation in electron precipitation due to geomagnetic activity. They contribute on average up to 20% of the total high-latitude nightside \(\geq\)50 keV electron precipitation, at times becoming the predominant contributor to such precipitation
* We discuss implications of IB precipitation for selected magnetospheric and ionospheric processes, global atmospheric power input, and applications of IB observations to magnetic field and particle flux modeling
###### Abstract
Utilizing particle data from the ELFIN satellites, we present a statistical study of \(\sim\)2000 events from 2019-2020 characterizing the occurrence in magnetic local time and latitude of \(\geq\)50 keV electron isotropy boundaries (IBs), and the dependence of associated precipitation on geomagnetic activity. The isotropy boundary for an electron of a given energy is the magnetic latitude poleward of which persistent isotropized pitch-angle distributions (\(J_{prec}/J_{perp}\sim\) 1) are first observed to occur. The boundary is interpreted as resulting from magnetic field-line curvature scattering (FLCS) in the equatorial magnetosphere, a process that violates the first adiabatic invariant of particle motion. The FLCS isotropization mechanism is readily recognizable over a wide range of electron energies (10s of keV to several MeV) in the often highly dynamic transition region between the outer radiation belt and plasma sheet, where such populations are commonly found. We find that electron IBs can be well-recognized on the nightside from dusk until dawn, under all geomagnetic activity conditions, with a peak occurrence rate (averaged over all activity levels) of almost 90% near \(\sim\)22 hours in magnetic local time (MLT), and remaining above 80% from pre- to post-midnight (21 to 01 MLT). The IBs span a wide range of IGRF magnetic latitudes from 60\({}^{\circ}\)-74\({}^{\circ}\), with a maximum occurrence between 66\({}^{\circ}\)-71\({}^{\circ}\) (L of 6-8), shifting to lower latitudes and pre-midnight local times as geomagnetic activity increases. The precipitating energy flux of \(\geq\)50 keV electrons averaged over the IB-associated latitudes varies over four orders of magnitude, up to \(\sim\)1 erg/cm\({}^{2}\)-s, and often includes electrons of energy exceeding 1 MeV. The local time distributions of IB-associated electron energies and precipitating fluxes also exhibit peak values near midnight for low activity, shifting toward pre-midnight for elevated activity. The percentage of the total energy deposited over the high-latitude regions (55\({}^{\circ}\) to 80\({}^{\circ}\); or IGRF \(L\gtrsim\) 3) attributed to IBs is 10-20%, on average, or about 10 MW of total atmospheric power input. The IB-associated electron energy deposition, while narrow in latitude, can however be up to \(\sim\)100% of the total \(\geq\)50 keV electron energy deposition over the entire sub-auroral and auroral zone region, at times exceeding 1 GW in total atmospheric power input. Both the total IB-associated precipitating flux intensity and its relative contribution to the total precipitating energy over the sub-auroral and auroral zone increase with AE, \(|\)Dst\(|\), and Kp. We discuss implications of these results for atmospheric energy deposition, ionospheric conductivity enhancement, magnetospheric electron losses and magnetic field mapping.
Submitted to _JGR: Space Physics_.
## Plain Language Summary
In Earth's magnetosphere, energetic electrons are often trapped by the geomagnetic field, which can shield the planet and orbiting satellites from potentially dangerous effects. However under certain conditions, especially on the nightside of the magnetosphere, the magnetic field can become too weak or curved to maintain such trapping, and electrons can be lost to the atmosphere, where they collide and give up their energy (i.e., precipitate). On a poleward moving satellite, the magnetic latitude at which the flux of precipitating and magnetically-reflected (locally trapped) electrons of some energy first become equal is known as its isotropy boundary (IB) latitude for that energy. This happens because at that latitude the magnetic field intensity at the magnetically conjugate equator becomes too small and thus unable to maintain the particle's cyclical motion, allowing it to scatter in its pitch-angle with respect to the field direction. The electron distribution consequently randomizes in pitch-angle and many electrons thus precipitate. This latitude varies with electron energy and local time, and with geomagnetic activity, which affects the underlying magnetic topology near Earth. Isotropy boundaries for various energies can be observed by low-altitude polar-orbiting satellite missions equipped with particle detectors able to discriminate the electron flux as a function of pitch-angle and energy, such as ELFIN. Here, we characterize statistically the properties of IBs, such as their occurrence rate versus longitude and latitude, as well as the energy they deposit into the atmosphere at different geomagnetic activity levels. This
ultimately allows for better understanding and predictive capability of space weather effects, as well as for improvement in magnetic field and electron flux models.
## 1 Introduction
### Background
A major objective in the study of Earth's magnetosphere-ionosphere system is to characterize the space processes leading to the precipitation of energetic charged particles. One such process is magnetic field-line curvature scattering (FLCS; occasionally referred-to as "current-sheet scattering"), in which the increased curvature and decreased strength in the equatorial magnetic field at large distances can scatter particles above a certain energy non-adiabatically in pitch-angle, isotropizing their flux distributions. Both electrons and ions (including heavy ions) can be affected in this way, though at different distances from Earth. The result of this flux isotropization is that precipitating and mirroring fluxes become comparable (i.e., \(J_{prec}/J_{perp}~{}\sim~{}1\)), for particle energies that have equatorial gyro-radii exceeding a critical threshold (Gray & Lee, 1982; V. Sergeev et al., 1983; Birmingham, 1984; Delcourt et al., 1996; Martin et al., 2000; Young, 2002). For electrons, such FLCS-isotropization can happen at energies as low as a few 10s of keV and upward, including the 100s of keV to multi-MeV electrons found in the vicinity of the outer radiation belts (W. Imhof et al., 1979). Furthermore, due to the night-side equatorial near-Earth magnetic field often exhibiting a gradual change in intensity and curvature versus radial distance near the dipole-tail transition region, it is possible for this scattering process to act on different minimum energy particles over an extended range of L-shells, magnetically mapping to an extended range of latitudes in the ionosphere (V. Sergeev & Tsyganenko, 1982). Fluxes which have been isotropized by this process can be detected by polar-orbit satellites traversing a range of field-lines connected to the equatorial scattering region. The magnetic latitude corresponding to the onset of isotropy as a function of increasing (absolute) latitude for a particular particle species and energy is known as the _isotropy boundary_ (or "IB") for that species and energy (V. Sergeev et al., 1983). Given its ability to rapidly fill the loss cone over a wide energy range, local time, and latitudinal range, the process of curvature scattering presents a potentially significant means by which energetic particles and their associated kinetic energy can be deposited into the atmosphere in the vicinity of IBs.
Contrary to proton IBs, whose properties have been well-explored previously (V. Sergeev & Tsyganenko, 1982; V. Sergeev et al., 1983; Newell et al., 1998; Donovan, 2003; Yue et al., 2014; V. A. Sergeev, Chernyaeva, et al., 2015; Dubyagin et al., 2018), the electron IBs have received less attention and their occurrence and properties have been poorly investigated (V. A. Sergeev et al., 2018; Capannolo et al., 2022). This has largely been due to observational constraints of past studies, which have contended with limited latitudinal coverage (e.g. RadSat, UARS, Van Allen Probes, GOES, Geotail, THEMIS, etc.), or issues with particle detection capability, such as insufficient electron pitch-angle resolution and energy range, dynamic range or sensitivity, uncertainties in sensor cross-calibration between look directions, and cross-species contamination (e.g. POES, DMSP). Because energetic electron IBs are found as high as 74\({}^{\circ}\), exhibit orders of magnitude variation in flux and precipitation energy, and often appear poleward of proton IBs, previous studies were ill-equipped to fully characterize their properties. To address these issues, we used recently acquired data from the Electron Losses and Fields Investigation (ELFIN) satellites (Angelopoulos et al., 2020), whose observations from circular polar Low Earth Orbit (LEO) provide latitudinal coverage spanning 55\({}^{\circ}\) to 80\({}^{\circ}\), 24 hours in aggregate local time, and electron energies between \(\sim\)50 keV and \(\sim\)5 MeV, with a single, low-noise, high-sensitivity, proton-rejecting electron sensor used to measure fluxes over all pitch-angles during each spin (once per 2.8 seconds).
Beyond characterization of the electron IB proper, the properties and role of isotropic \(\geq\)100 keV electrons associated with IBs in several important magnetospheric and ionospheric processes have remained similarly veiled. For example, because FLCS-isotropized electrons can include populations of tens of keV and up to the MeV range, they can penetrate to lower altitudes into the high-latitude atmosphere than typical auroral fluxes, potentially increasing conductivity and chemical reactivity at altitudes as low as the ionospheric D-region (Fang et al., 2010). Additionally, because the FLCS isotropization mechanism can act rapidly (on the order of a bounce period) in the equatorial region where inward radial diffusion transports particles into the outer radiation belt, it can potentially prevent such particles from ever becoming trapped. These flux losses would not be accounted for as part of outer radiation belt precipitation due to being outside the trapped flux region. Such efforts are also confounded by the fact that the isotropy boundary is difficult to recognize on equatorial spacecraft traversing the tail-dipole transition region, because the loss cone at the equator is only \(\sim\)1\({}^{\circ}\), too small to resolve by typical particle instruments; thus their fluxes are not accounted for in energetic precipitation modeling and prediction based on such data. Further, knowledge of the IB location as a function of time can be a useful tool for near-instantaneous remote-sensing of the equatorial magnetic field configuration, and of the particle populations residing there. This can help refine equatorial magnetic field and particle flux models, especially when paired with equatorial satellites (V. A. Sergeev et al., 1993; V. A. Sergeev, Chernyaev, et al., 2015; Ilie et al., 2015; Shevchenko et al., 2010). Here, we expand on preceding observational results, which have reported the presence of both isotropic electrons and protons with energies above typical plasma sheet auroral processes (\(>\)10s of keV) throughout the magnetosphere, but especially on the nightside.
For electrons in particular, past studies have reported isotropic precipitation structures often found within \(\pm\)4 hours of midnight, persisting for multiple spacecraft orbits (timescales of hours) with well-populated (filled) loss cones up to several MeV in energy (W. L. Imhof et al., 1977; W. Imhof et al., 1979; W. L. Imhof et al., 1997). The most striking feature of these structures was the energy-latitude dispersion in the onset of isotropy: higher energies were monotonically isotropized at lower latitudes than lower energies (W. L. Imhof et al., 1997). These combined properties suggested that the underlying generation mechanism must persist over a wide range of geomagnetic conditions, and could not be due to transient processes, such as wave-particle scattering. It was proposed that such long-lasting isotropy structures were likely the result of particle scattering by the curvature of the background magnetic field within the cross-tail current, which extends both in latitude and in local time around midnight. Similar isotropy boundary effects for protons have been reported to occur at lower latitudes, the latter characteristic explained as due to the larger gyro-radii of protons. Interestingly, intense localized peaks in energetic proton precipitation were found to often occur in close poleward proximity to the proton IB (itself close to the auroral oval), with an apparent pre-midnight peak occurrence (Newell et al., 1998). This suggests that the IB precipitation may be replenished by repeated instances of freshly accelerated and transported particles from the magnetotail. The equivalent properties for electron IBs are addressed in this work. For completeness, we note that interactions with the magnetopause may give rise to similar dispersive isotropic particle scattering onto Earth-connected field lines (Lyons et al., 1987), although these cases are not the focus of this study, and have been excised from our database.
### Model of energetic electron isotropization
Isotropy boundaries emerge when freshly accelerated (heated) particles convect or drift through the near-earth equatorial magnetosphere near locations of strong equatorial magnetic field curvature or weakened strength. After traversing the equator, the resulting modification of a particle's pitch-angle depends highly on the ratio of its incoming gyroradius (\(r_{L}\)) to the local scale length of variations in the background magnetic field. On the nightside in particular, as the distance from Earth increases, the most significant magnetic scale variations correspond to the smallest (sharpest) magnetic curvature radius \(R_{C}=|\hat{\bf b}\cdot\nabla\hat{\bf b}|^{-1}\) of the field, where \(\hat{\bf b}={\bf B}/B\) is the unit tangent vector
to the magnetic field \({\bf B}\). The most pronounced effects can be expected at the close Earthward vicinity of the cross-tail current, which marks a transition from a dipole-like field to an extended tail and is also accompanied by a local field strength gradient. If the particle gyroradius is much smaller than the field line curvature (i.e. \(r_{L}\ll R_{C}\)), the particle retains its first adiabatic invariant \(\mu\) (possibly imparting an impulsive change in gyrophase), and executes traditional guiding-center bounce and drift motions about the equatorial plane. However, if the particle gyroradius begins to approach the local equatorial radius of curvature (e.g., for higher energy particles), non-adiabatic effects emerge (Gray & Lee, 1982; Young, 2002).
When equatorial particle crossings become non-adiabatic due to the local geometry of the background magnetic field (as above), the resulting motion depends further on the ratio of the minimum magnetic curvature radius to the maximum equatorial particle gyroradius, defined in prevailing literature as \(\kappa^{2}~{}=~{}R_{C}/r_{L}\) (V. Sergeev & Tsyganenko, 1982; Martin et al., 2000), as well as the particle's incident gyrophase. The case of \(\kappa^{2}\sim 1\) results in Speiser-like motion (Speiser, 1965), while the range \(3\lesssim\kappa^{2}\leq\kappa_{cr}^{2}\) results in strong diffusive pitch-angle scattering, leading to isotropy upon repeated crossings of the equatorial plane (i.e. \(J_{prec}/J_{perp}\sim 1\)). Owing to the strong diffusion, only a few crossings are usually required to achieve isotropy. The particular critical value of \(\kappa_{cr}^{2}\) required for efficient isotropization is independent of particle species and varies only with the magnetic field configuration in the scattering region, although at a fixed location (and field-line curvature) electrons isotropize at a higher minimum-energy than protons due to their smaller mass and smaller gyroradius. A threshold of \(\kappa_{cr}^{2}=8\) is commonly taken as an _a priori_ value based on a Harris-type current sheet with constant \(B_{normal}\) (Gray & Lee, 1982; V. Sergeev et al., 1983), though it can take on values between 3 and 33 over the range of possible magnetotail configurations (Ilie et al., 2015)
Upon isotropization by equatorial curvature scattering, particles typically resume motion toward Earth along field lines, where they can be profiled in latitude by low-altitude polar spacecraft such as ELFIN. To provide a theoretical reference of the particle energy versus latitude at which FLCS-based isotropy could be expected to appear, we re-cast the critical \(R_{c}\leq\kappa_{cr}^{2}r_{L}\) relationship in terms of a minimum required particle kinetic energy \(E_{min}^{iso}\) for isotropic field-line scattering, computing it based on model-mapped equatorial magnetic field properties at the crossing location:
\[E_{min}^{iso}=\left(\gamma_{min}^{iso}-1\right)mc^{2}=\left[\left(1+\left( \frac{qBR_{C}}{\kappa_{cr}^{2}mc}\right)^{2}\right)^{1/2}-1\right]mc^{2} \tag{1}\]
where \(\gamma_{min}^{iso}\) is the minimum particle Lorentz factor corresponding to the minimum required particle energy for isotropization, \(q\) is the particle's charge, \(B\) is the equatorial magnetic field strength, \(R_{C}\) is the equatorial radius of curvature, \(\kappa_{cr}^{2}=8\) as explained above, \(m\) is the mass of the particle, and \(c\) is the speed of light.
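For a rough numerical feel for Eqn. 1 (this is an illustration, not part of the authors' processing chain), the sketch below evaluates the minimum isotropization energy for assumed equatorial values of \(B\) and \(R_{C}\), together with the forward check \(\kappa^{2}=R_{C}/r_{L}\) for a given electron energy; the field values are placeholders rather than ELFIN- or T89-derived numbers.

```python
# Illustrative evaluation of Eqn. 1 and of kappa^2 = R_C / r_L for electrons.
# The equatorial field values below are placeholders, not measured or modeled numbers.
import numpy as np

Q = 1.602176634e-19      # elementary charge [C]
ME = 9.1093837015e-31    # electron mass [kg]
C = 2.99792458e8         # speed of light [m/s]
MEC2_KEV = 510.999       # electron rest energy [keV]
J_PER_KEV = 1.602176634e-16

def e_min_iso_keV(B_T, Rc_m, kappa_cr2=8.0):
    """Minimum electron kinetic energy [keV] for FLCS isotropization (Eqn. 1)."""
    x = Q * B_T * Rc_m / (kappa_cr2 * ME * C)        # = p_max*c / (m*c^2), dimensionless
    return (np.sqrt(1.0 + x**2) - 1.0) * MEC2_KEV

def kappa2(E_keV, B_T, Rc_m):
    """kappa^2 = R_C / r_L for an electron of kinetic energy E_keV."""
    pc_J = np.sqrt(E_keV * (E_keV + 2.0 * MEC2_KEV)) * J_PER_KEV   # relativistic p*c [J]
    r_L = pc_J / (Q * B_T * C)                                     # maximum gyroradius [m]
    return Rc_m / r_L

B, Rc = 10e-9, 6.371e6    # e.g. 10 nT and one Earth radius of curvature (illustrative)
print(f"E_min ~ {e_min_iso_keV(B, Rc):.0f} keV")           # ~1.9 MeV for these numbers
print(f"kappa^2 at 500 keV ~ {kappa2(500.0, B, Rc):.1f}")  # > 8, i.e. adiabatic here
```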
Using this relation, an example spatial profile of minimum electron kinetic energies for efficient isotropization due to FLCS is shown in Fig. 1 over the 50 keV to 5 MeV energy range, using the combined IGRF and T89 field models (Alken et al., 2021; Tsyganenko, 1989) at midnight local time (GSM coordinate xz-cut) for \(\kappa_{cr}^{2}=8\) and \(K_{p}=2\) on 2020-09-02/14:22 UT. The profile shows that the regions capable of isotropizing 50 keV to 5 MeV electrons are confined to the equatorial plane at distances beyond several earth radii, corresponding to the dipole-tail transition from a dipole-like field to that of an extended magnetotail. The model-computed isotropy boundary field-line traces for 50 keV and 5 MeV energies are shown as blue and pink traces, respectively.
The above model predicts that on an equator-to-pole traversal of the pertinent field lines, as seen from a LEO polar satellite vantage point, the highest energies are isotropized first, at the lowest latitudes (smallest equatorial distances), due to their larger gyroradius and corresponding ratio to the equatorial curvature. The minimum energy of isotropization decreases monotonically as the vantage point latitude increases. This energy-latitude dispersion, characteristic of IBs, is a consequence of the field-line mapping progressively further away from Earth (especially on the nightside), where the equatorial field strength and/or radius of curvature are reduced and, as a result, the minimum required energy for isotropic scattering by Eqn. 1 is also reduced. In Fig. 1, the equatorial region corresponding to the portions of the IB dispersion ELFIN could observe is shaded yellow. The unit and light blue lines are the poleward and equatorward boundaries, respectively, of the IB dispersion region observable by ELFIN, corresponding to the ELFIN energetic particle detector's energy limits.
These dispersed energy-latitude isotropy signatures are the basis of IB event identification and modeling in our study. The modeled dispersion slope and minimum field-line scattering energy at a given latitude depend on the choice of magnetic field model and the quantity \(\kappa_{cr}^{2}\). Although the data may have significant uncertainties associated with true field-line mapping, and the field model of choice may not fully represent the instantaneous magnetic topology, we assume that both IB crossings and model exhibit a monotonic energy-latitude dispersion for event-identification and modeling purposes. We note also that this model predicts that electrons exceeding the minimum required scattering energy continue to be isotropized at latitudes extending beyond their IB; however, in reality as the mapped field-line distance increases appreciably beyond the IB dispersion region, the electron dynamics can become intertwined with other plasma sheet processes.
### Outline
In the following, we first describe the ELFIN dataset and then discuss the methods used to determine the presence of isotropy boundaries and the intensity of the associated precipitation. We next report observations of the IB occurrence rates versus MLT, L-shell, magnetic latitude and geomagnetic activity indices, alongside the observed electron energy ranges and slope of IB energy-latitude dispersion. We then discuss the computation methods and interpretation of the deposited electron energy flux associated with the IB/FLCS-dominated region. We finally summarize our findings and discuss their implications, potential applications, and the next steps in this line of investigations.
## 2 Methods and dataset
### ELFIN dataset
We use data collected by the Energetic Particle Detector for electrons (EPD-E) instrument aboard each of the two Electron Losses and Fields Investigation (ELFIN) Cube-Sats. The satellites were in polar LEO (at \(\sim\)450 km altitude), drifting about 1 hour in magnetic local time per month. The EPD-E instruments have an energy range of approximately 50 keV to 5 MeV, sub-divided into 16 logarithmically-spaced energy channels of width less than 40% (i.e. \(dE/E\leq 0.4\)), with a 22\({}^{\circ}\) field of view and geometric factor of \(\sim\)1 cm\({}^{2}\)-str. The satellites were spinning with a nominal rotation period of 2.8 s about an axis nearly perpendicular to the background magnetic field, allowing for pitch-angle determination of incident particles. The particle data collected in each spin period was subdivided in time into 16 spin sectors (\(\Delta t\sim 175\) ms) by the on-board data processing unit, which were combined in ground processing with IGRF and attitude data to determine the local pitch-angle distributions. The attitude of the spacecraft typically allows for resolution of both the loss cone and locally-mirroring particle populations. Proton contamination was mitigated by an absorbant aperture foil, while side-penetrating particles were rejected by a combination of dense shielding and detector coincidence logic. The data and processing tools are publicly available using the ELFIN routines within the SPEDAS framework (Angelopoulos et al., 2019).
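For orientation only, the short sketch below checks that 16 logarithmically spaced channels spanning 50 keV to 5 MeV satisfy \(dE/E\leq 0.4\); the geometric spacing and the exact boundaries are assumptions, not the actual EPD-E channel definitions.

```python
# Assumed, idealized channelization; the real EPD-E boundaries are not reproduced here.
import numpy as np

edges = np.geomspace(50.0, 5000.0, num=17)     # 17 edges -> 16 channels [keV]
centers = np.sqrt(edges[:-1] * edges[1:])      # geometric channel centers
widths = np.diff(edges)
print(np.round(centers, 1))
print("max dE/E =", round(float((widths / centers).max()), 3))   # ~0.29 < 0.4
```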
To form the statistical event database, a list of \(\sim\)2600 ELFIN science zone crossings (\(\sim\)6-7 minute data collections encompassing outer-belt and auroral zone crossings) was identified and subjected to preliminary event-selection criteria. Since the FLCS mechanism depends on the large-scale background magnetic field configuration and bulk particle transport, which can vary on the timescale of tens of minutes to hours, each science zone collection can be regarded as an instantaneous snapshot of IB properties. These science zones were collected as a part of ELFIN's Outer Belt Observations (OBO) mode, which is designed to extend between 55\({}^{\circ}\) and 80\({}^{\circ}\) in (absolute) magnetic latitude in a single hemisphere (L-shells \(\gtrsim 3\)), computed using the International Geomagnetic Reference Field (IGRF) model, though the start and stop limits of each individual collection can vary by up to several degrees due to the operational implementation method. The data in our study were obtained in 2019 and 2020, spanning the full range of magnetic local times (MLTs). We binned these data in 1 hour magnetic local time intervals, encompassing \(>\)40 collections in each MLT bin. Significantly more science zone collections were available near midnight and on the dayside than at other MLTs, requiring normalization by residence time to remove this observational bias. The collections spanned a wide range of geomagnetic activity, including storms and substorms. Because in the early part of the mission satellite spin-axis attitude was not controlled to be orbit-normal (which would provide the widest pitch-angle coverage), but, rather, science zones were targeted based on attitude being closest to ideal, further checks of our dataset were necessary to ensure the loss-cone was cleanly measured. We inspected and culled the above preliminary database to ascertain that the satellite attitude permitted sufficient pitch-angle resolution to compute precipitating-to-perp flux ratios (only a few events had to be eliminated by this process). To limit the possibility of under-counting high-latitude IB events, we also required that all science zones spanned an (IGRF-computed) L-shell exceeding \(L=8\). Finally, we eliminated all events with known instrument performance issues (evident in quality flags). These constraints collectively reduced the qualifying science zones (QSZ) available for further analysis to 1922; these formed our final event database.
### Prototypical electron isotropy boundary crossing
A prototypical example of a qualifying ELFIN science zone collection containing an IB crossing is shown in Figure 2. During this \(\sim\)6 minute QSZ window the ELFIN-B satellite moved southward from the northern polar cap toward the magnetic equator. The top two panels show the in-situ perpendicular (\(J_{perp}\)) and precipitating (\(J_{prec}\)) energy flux spectrograms for 50 keV to 5 MeV electrons. The energy-time spectrogram of flux isotropy ratio \(R_{I}=J_{prec}/J_{perp}\) in Panel 3, is computed from the ratio of Panels 1 and 2. Until shortly before 14:21 UT, only relatively low energy (\(\leq 200\) keV) electrons are present and exhibit a high isotropy ratio (\(\sim\)1), suggesting a traversal of an extended electron plasma sheet. That the electrons are isotropic signifies a source location poleward of the isotropy boundary, where their energies exceed the threshold for field-line scattering. At around 14:21 UT, a rapid rise in energetic (\(\geq 300\) keV) isotropized electron fluxes is observed, suggesting that the satellite entered the transition region between the inner edge of the electron plasma sheet and outer radiation belt, or tail-dipole transition region (herein dubbed the "PS2ORB" interface, and adhering to a specific observational definition to be introduced later). The emergence of isotropic fluxes over such a high energy range not typically found in the plasma sheet, nor associated with any known persistent wave-particle acceleration process, is consistent with the satellite being on field lines connected to a dynamical transition region dominated by field-line scattering poleward of but in close proximity to the isotropy boundary. Around 14:22 UT, an abrupt transition from isotropic to anisotropic fluxes over the energy range from 50 keV to 4 MeV is observed. Viewed in reverse-time (i.e., from increasing latitude), the latitude where flux anisotropy transitions to flux isotropy at each energy (i.e., the isotropy boundary at that fixed energy) increases as the energy decreases. This inverse (negative) latitude-energy dispersion is exactly what is expected for the isotropy boundary dispersion with energy, in which higher energies isotropize first, at lower latitudes. Based on this, we conclude that this QSZ interval contains an IB crossing.
Panel 5 shows the isotropy boundary location (red) as a function of energy and latitude, as determined by an automated procedure. The jagged (occasionally non-monotonic) nature of the curve is a consequence of poor counting statistics at the highest energies. Also shown in the same panel is a model-based prediction of the boundary location, obtained by equatorial footpoint tracing of the ELFIN orbit with the Tsyganenko 1989 field model, and using \(\kappa_{cr}^{2}=8\) for the isotropy boundary determination for each energy. The blue line results from a direct application of the model, which is earlier in time (hence poleward in magnetic latitude) by \(\sim\)15 s (\(\sim\)1\({}^{\circ}\)) compared to observations due to model mapping uncertainties. This implies the equatorial peak cross-tail current density was apparently further away from Earth in reality at the collection time, causing the tail-dipole transition in the model to be further away from Earth and map to a higher latitude. By fitting a constant latitudinal offset to the model energy-latitude dispersion (blue line) to best match the observation (red line), we obtain the green line. The latitude-shift required to match the observations is a measure of how far the model peak cross-tail current distance from Earth deviates from the actual. For this event, the observed IB profile lasts \(\sim\)15 s across all energies, and is in good agreement with the modeled-then-shifted IB profile (errors less than 1 spin period of latitudinal uncertainty). The latitudinal shift required to co-locate the IB crossings provides valuable information for further investigation and model improvements that are beyond the scope of this work.
After crossing the IB, the satellite moved into the outer radiation belt and subsequently in the trough (where locally-trapped relativistic electron fluxes subsided). Around 14:22:30 UT and 14:22:50 UT, near-isotropic fluxes were again observed but only at the lowest energies, \(<\)300 keV, despite the fact that locally-mirrored fluxes were abundant at all energies up to \(\sim\)3 MeV. This fact, and the lack of an energy-latitude dispersion, suggests the cause of this precipitation is likely a process other than field-line scattering, such as wave-particle interactions, and therefore unrelated to the IB. Panel 6 shows the net precipitating electron energy flux, integrated over energy and solid angle within the loss cone. This is used to estimate both the relative and total amounts of average precipitating energy flux associated with the isotropy boundary. Panels 7-10 display the pitch-angle spectra for select energy channel ranges, alongside horizontal bars indicating the local bounce-loss/anti-loss cones. Panel 11 shows the IGRF magnetic field at the spacecraft location, for reference.
### Statistical characterization
To obtain the statistical results of the study, we performed the preceding type of analysis on each ELFIN QSZ data collection. We then manually inspected the events for the presence of an electron isotropy boundary. To assess whether an IB was present we computed the flux isotropy ratio \(R_{I}=J_{prec}/J_{perp}\) and checked for energy-latitude dispersion signatures consistent with the field-line curvature scattering mechanism (e.g. as in Fig. 1). In order for a science zone to be marked as containing an IB crossing, and to distinguish it from other potential precipitation processes, the candidate crossings were required to satisfy several criteria: 1) a poleward transition from anisotropy to isotropy (defined as \(R_{I}\geq 0.6\)) for energy channels with non-zero counts, followed by persistently isotropic flux (\(R_{I}\sim 1\)) for at least 50% of the subsequent non-zero poleward data samples; 2) at least three energy channels satisfying condition 1), and thus usable towards identification of an IB dispersion signature; 3) a negative data-fitted energy-latitude dispersion slope, one consistent with higher-energy electrons becoming isotropized at lower latitudes (or, at most, at apparently the same latitudes given the finite spin-period time-resolution of ELFIN measurements). To limit the effects of low-counting statistics, the quantity \(R_{I}\) was assumed to be 0 for cases in which the measurement uncertainty was more than a factor of 2. Additionally, we rejected IBs with equatorial footpoints residing within 3 Earth radii (or outside of) a model magnetopause (Fairfield, 1971). This was done to eliminate false IB-like particle signatures which may result from scattering at the magnetospheric boundary (particularly on the dayside and flanks). For cases which satisfy all of the preceding criteria, the event was marked as containing an IB crossing.
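A deliberately simplified, schematic version of this screening (the array layout, thresholds, and toy example are invented for illustration; the actual procedure also handles counting statistics and mapping details) could look as follows:

```python
# Schematic IB screening: criteria 1-3 applied to an isotropy-ratio array R_I[channel, sample].
# Rows are energy channels in ascending energy; columns are samples ordered from low to
# high |MLAT|. NaN marks samples with too few counts and is treated as non-isotropic.
import numpy as np

def find_ib_indices(RI, iso_thresh=0.6, persist_frac=0.5):
    RI = np.nan_to_num(RI, nan=0.0)
    ib = {}
    for ch, row in enumerate(RI):
        for i in range(1, row.size):
            nonzero = row[i:][row[i:] > 0]
            if (row[i - 1] < iso_thresh and row[i] >= iso_thresh and nonzero.size
                    and np.mean(nonzero >= iso_thresh) >= persist_frac):
                ib[ch] = i              # criterion 1: anisotropy -> persistent isotropy
                break
    if len(ib) < 3:                     # criterion 2: at least three channels with an IB
        return None
    chans, idx = np.array(list(ib.keys())), np.array(list(ib.values()))
    slope = np.polyfit(chans, idx, 1)[0]
    return ib if slope <= 0 else None   # criterion 3: higher energy -> IB at lower latitude

# Toy example: four channels whose IBs disperse toward lower latitude with energy.
toy = np.array([[0.1, 0.2, 0.2, 0.3, 0.7, 1.0],
                [0.1, 0.2, 0.3, 0.8, 0.9, 1.0],
                [0.1, 0.2, 0.7, 0.9, 1.0, 1.0],
                [0.2, 0.7, 0.8, 1.0, 1.0, 1.0]])
print(find_ib_indices(toy))             # -> {0: 4, 1: 3, 2: 2, 3: 1}
```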
We next determined the occurrence rate distribution of IB crossings in our database of ELFIN QSZ collections versus MLT, magnetic latitude (MLAT), L-shell (IGRF or T89-based), and geomagnetic activity (based nominally on AE, and including Dst and Kp for characterization of IB-associated energy fluxes). We also binned likewise the IB-associated (latitude-averaged over the PS2ORB interface), integral (over energies \(\geq\)50 keV) electron precipitating energy flux. This average was presumed to be dominated by field-line curvature scattering. To obtain the average flux precipitating due to field-line scattering over the PS2ORB interface, we used the following operational definition of the PS2ORB interface region: The equatorward-most PS2ORB interface latitude was taken to be the IB for the maximum observable isotropic electron energy, while the poleward-most PS2ORB interface latitude was taken to be the omni-directional flux cutoff in \(\geq\)300 keV electrons poleward of its IB (see the Results Section for rationale). For cases in which the 300 keV electron channel was not present (or its IB could not be clearly identified), the next-closest energy channel cutoff was used to mark the PS2ORB poleward-most latitude. From these events we also determined the minimum and maximum electron energies of the IB crossing dispersion, alongside the average energy-latitude slope of the dispersion. We lastly computed the latitudinal width of the combined IB plus PS2ORB interface region, which provides an estimate of the spatial extent of the equatorial FLCS-dominated region at the collection time.
## 3 Results
### Occurrence, spatial distribution, and dispersion of electron IBs
To provide an initial picture of the spatial distribution of electron IBs in our dataset, the equatorial footpoints corresponding to the minimum and maximum L-shell of field-line traces bounding the IB dispersion region (based on T89) are shown in Figure 3 as a scatter plot, projected in the equatorial GSM xy-plane. The points marked \(L_{min}\) represent the most Earthward portions of the IB crossings (typically corresponding to the highest-present electron energy IB; variable from event to event), and the point marked \(L_{max}\) similarly represents the furthest portion of the IB dispersion in the crossings (almost always corresponding to the 50 keV electron IB). The static reference-magnetopause based on (Fairfield, 1971) is shown (solid) alongside the 3 earth radius cutoff (dashed) used to reject false IB signatures, which are marked in silver and black (counted as non-IB but otherwise valid events). It is evident that projected isotropy boundary locations span a wide range of MLTs and mapped equatorial distances throughout much of the nightside magnetosphere.
Figure 4 shows the electron IB spatial distribution. The top panel depicts the histogram of isotropy boundary occurrence (absolute and normalized) versus MLT in our database, binned with a bin size \(\Delta MLT=1\) hour. Blue bars denote the number distribution of all QSZ events in our database, orange bars depict the number of events with an IB (both referring to the absolute numbers on the left vertical axis), and red line is the normalized occurrence rate of IB crossings versus MLT in the dataset (corresponding to the right vertical axis), computed as the ratio of the orange to blue values. A running average with \(\pm\)1-hour MLT window was applied to the data, to improve statistics and reduce uncertainties, e.g., in field-line mappings.
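The normalization and smoothing step can be written compactly as below; the per-bin counts are hypothetical, and only the ratio-plus-running-average logic follows the description above.

```python
# Hypothetical per-MLT-bin counts; only the normalization/smoothing logic is meaningful.
import numpy as np

rng = np.random.default_rng(1)
n_qsz = rng.integers(40, 150, size=24)                        # qualifying science zones per 1 h MLT bin
n_ib = (n_qsz * rng.uniform(0.0, 0.9, size=24)).astype(int)   # of those, events containing an IB

def smooth_circular(x, w=1):
    """Running mean over +/-w neighbouring MLT bins, wrapping around 24 h."""
    return sum(np.roll(x, s) for s in range(-w, w + 1)) / (2 * w + 1)

occurrence = smooth_circular(n_ib.astype(float)) / smooth_circular(n_qsz.astype(float))
print(np.round(occurrence, 2))
```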
We find that IB crossings occur in up to 90% of events near midnight, and exhibit a sustained occurrence rate of \(\geq\)80% between 21 and 01 MLT. Their occurrence rate gradually declines towards near-zero at pre-dawn (7 MLT) and near dusk (16 MLT). Interestingly, the peak IB occurrence is not centered at midnight but at pre-midnight, around 22 MLT. (We note that the apparent peaks in event number around 01 and 15 MLT are due to the reduced ELFIN data availability in the first two years of the mission used here; i.e., in the 2019-2020 period.) The associated spatial distributions of these IB events, binned by L-shell (\(\Delta L=1\)) and magnetic latitude (\(\Delta\)MLAT = 1\({}^{\circ}\)) using the IGRF model, are shown in Fig. 4 middle and bottom, respectively. IGRF alone, not T89, was used to compute the distribution in this case to provide a static reference independent of the behavior of the magnetic field far from Earth, with the caveat that this will typically under-estimate the true L-shell and latitude mappings at the nightside as distances from Earth increase, and for different solar wind driving. We see that electron IBs span a wide range of L-shells (4 to beyond 12), and are most commonly found to span L-shells 6-8, typically corresponding to the tail-dipole transition region, or PS2ORB interface region. As anticipated from the L-shell to [MLAT] equivalence in mapping, a similar trend is observed in magnetic latitude: a broad range of occurrences between 60\({}^{\circ}\) and 74\({}^{\circ}\), with peaks around 66\({}^{\circ}\)-70\({}^{\circ}\). Additionally, we checked the occurrence of IB crossings versus the geomagnetic activity indices AE, Dst, and Kp (not shown), and found them to not vary appreciably from the overall MLT distribution described here.
In these events, we also characterized the minimum and maximum electron energies of the observed dispersion in the IB crossings, alongside the latitudinal spread and dispersive slope ("sharpness"), versus MLT and activity. Given that the IB generation mechanism is based on the background magnetic field configuration and the local availability of energetic particles (e.g. injections), it is reasonable to expect that the level of geomagnetic activity and solar driving at the time of (and preceding) the crossing would affect the electron energies appearing in the IB, as well as the latitudinal onset and sharpness of isotropic dispersion. In order to provide an assessment of this activity-dependent behavior, we separated the dataset into two categories: quiet-time intervals with 1 hour average AE less than 200 nT, and non-quiet intervals with 1 hour average AE above 200 nT. This choice of hourly-average AE split the dataset roughly in half and aimed to emphasize high-latitude geoeffectivity while avoiding complications of the potential time-delayed nature of other indices (such as Dst at storm time).
Figure 5 displays the activity-dependent minimum and maximum observed IB electron energies versus MLT, where solid points represent the mean values and error bars represent the occurrence-weighted standard deviation about the mean. The top panel shows the minimum and maximum electron energies appearing in the dispersion of IB crossings within the ELFIN EPD instrument sensitivity and energy limits. For low activity (AE\(<\)200 nT) the electron energies span a typical maximum of 700-800 keV, peaking at MLTs near midnight, and trailing toward a lower average maximum in the 300-400 keV range near dawn and dusk. For higher activity (AE\(>\)200 nT), the maximum energies rise dramatically to a mean of 2 MeV, and are shifted in local time to pre-midnight (peaking around 22 MLT), with the maximum energies falling to 800-900 keV at MLTs approaching dawn and dusk. The apparent shift in peak energy to pre-midnight during active times is consistent with the idea of the FLCS source being supplied by frequent (possibly continual) local energetic flux injections in the tail at these MLTs (Gabrielse et al., 2014; Liu et al., 2016), which the background field rapidly isotropizes. At both quiet and non-quiet times the low-energy minimum of the IB is almost always the lowest energy channel resolvable by the ELFIN EPD (\(\sim\)62 keV mean). This suggests that the IB likely extends to lower energies than ELFIN can resolve, and that such fluxes are highly available in the tail for FLCS at the rates reflected by IB occurrence in each MLT sector.
Using this activity criterion, we also determined the MLT-binned latitudinal bounds and sharpness of the energy-latitude dispersion in the IB crossings. The center row panels in Figure 5 show the IGRF L-shell distribution (left) and magnetic latitudes (right) bounding the onset in isotropic dispersion across all resolved energies. The quantities denoted "min" reflect the most Earthward (i.e. lowest latitude, often highest-energy) portion of the dispersion while the "max" values mark the more distant location at which the lowest energy channel is first observed to become isotropic. The results show that at low activity, there is a symmetric bowl-shape distribution with global minimum around 22 MLT with \(L_{min}\sim 7.1\), \(L_{max}\sim 7.3\) and \(MLAT_{min}\sim 67.6^{\circ}\), \(MLAT_{max}\sim 67.9^{\circ}\), rising to a maximum of \(L_{min}\sim 8.6\), \(L_{max}\sim 8.9\) and \(MLAT_{min}\sim 69.8^{\circ}\), \(MLAT_{max}\sim 70.0^{\circ}\) at 5-6 MLT. Interestingly at both activity levels the mean latitudinal width (max minus min) of the dispersion in the crossings is nearly constant across MLT, suggesting
a highly persistent supply of injected particles of appropriate energies (top panel) and appropriate background field configuration across many local times on the nightside, such that they are repeatably affected by FLCS. At higher activity, IB dispersion is found consistently at lower latitudes across all MLTs, with the emergence of a break from the symmetric bowl-shape distribution around 22 MLT. This feature is again consistent with the appearance of energetic electron injections at lower latitudes, e.g. during substorms. Rather than being localized to a single MLT, the active time latitudinal onset minima are instead spread between 20-23 MLT, taking on values of \(L_{min}~{}\sim~{}6.3\), \(L_{max}~{}\sim~{}6.5\) and \(MLAT_{min}\sim 65.9^{\circ}\), \(MLAT_{max}\sim 66.5^{\circ}\) and maxima around 4-5 MLT of \(L_{min}\sim 7.6\), \(L_{max}\sim 8.0\) and \(MLAT_{min}\sim 68.4^{\circ}\), \(MLAT_{max}\sim 68.8^{\circ}\). We note that MLT sectors with fewer than 5 IB events are not shown, which is the reason there are no data points for MLT 5-6 at active time versus quiet time.
Based on the observed energy and latitudinal ranges of the IB crossings, we finally computed the linear energy-latitude dispersion slopes \(dL/dE\) and \(dMLAT/dE\) versus activity and MLT in the top and center panels. The bottom panel shows the dispersion slope in terms of L-shell (left) and MLAT (right). The data reveal that during quiet intervals the MLT-based distribution is similarly bowl-shaped with a minimum slope (i.e. steepest latitudinal change in energy) around midnight. As with the energies and latitudes, the dispersion slope extrema shift to the pre-midnight at active times, with a universal trend toward apparently sharper IBs. Such values provide insight into both the equatorial profile of the magnetic field under these conditions and the typical energies available from injections there. We note again that while IGRF provides a practical activity-independent spatial reference, the reported IB latitudes and slopes would be quite different (especially away from midnight) versus models containing magnetospheric currents, such as the magnetotail and the ring current (e.g. T89).
### Precipitation associated with electron IBs
Next, we seek to quantify the amount and distribution of precipitating energy flux associated with electron isotropy boundaries. The isotropy boundary for each energy is by definition an instantaneous transition latitude (separating isotropy from anisotropy), and consequently does not capture the finite spatial extent of precipitation associated with the FLCS isotropization, which is expected to extend into adjacent poleward latitudes (as seen in Fig. 2). This prompts us to use an operational definition of the poleward extent of the near-isotropic precipitation associated with the IB. To achieve this, we rely on the fact that the inner edge of the plasma sheet proper (for which particle processes other than FLCS may become dominant) can be regarded as a proxy for the poleward edge in FLCS-dominated fluxes. To identify this edge in the data, we relied on the fact that the plasma sheet typically possesses a high-energy cutoff in electron fluxes at both quiet and active times (Christon et al., 1989, 1991). By statistically determining the most common maximum electron energy at which omni-directional fluxes experience a sustained drop-out at latitudes poleward of IBs, we were able to define an operational definition of the poleward edge of the IB-associated precipitation region, otherwise referred to here the plasma sheet-to-outer radiation belt interface region ("PS2ORB"). We note that the specific upper plasma sheet cutoff energy found by this method depends on the sensitivity and resolution of the instrument used to measure it, and thus had to be determined using the ELFIN dataset (akin to that performed with ISEE observations in Christon et al.).
Figure 6 explores the energy-latitude dependence of omni-directional electron flux cutoffs poleward of IBs in the ELFIN dataset, which are used to define the operational outer edge of the PS2ORB region. The top panel depicts as a function of energy the mean and median IGRF magnetic latitude of the lowest latitude (highest energy) portion of all IB events for comparison with the poleward cutoff in omni-directional electron fluxes. For energies \(\geq\)300 keV, there is a near-constant \(\sim\)1\({}^{\circ}\) separation in average magnetic latitude between the lowest latitude portion of the IB crossings and of the omni-directional flux
dropouts. For energies \(<\)300 keV, this difference begins to grow rapidly to \(>\)3\({}^{\circ}\), suggesting that a separate electron population from the plasma sheet has been encountered. The bottom panel repeats this analysis instead using the event-wise difference between the lowest latitude IB and the poleward omni-flux cutoff. This value is also around 1\({}^{\circ}\) at \(\geq\)300 keV but suddenly increases with decreasing energy, providing additional confirmation that the latitude of flux cutoff at 300 keV in each specific event is a reasonable proxy for the location of the poleward boundary of the IB FLCS-dominated precipitation region. We thus defined the poleward bounding latitude of the PS2ORB region as the latitude beyond the IB at which \(\geq\)300 keV omni-directional fluxes first drop out. Using this definition, the bottom panel shows the mean latitudinal width \(\Delta\theta\) of the PS2ORB region observed in the dataset versus local time during quiet and non-quiet intervals. The data reveal that at quiet intervals, the latitudinal width of the FLCS-dominated region is typically between 1-1.5\({}^{\circ}\), rising to 2-3\({}^{\circ}\) at non-quiet times. This is consistent at active times with the IB/FLCS source having greater access to source particles to be scattered (such as from injections and enhanced electric fields in the magnetotail), as well as from modified equatorial background magnetic field properties (such as during intervals of magnetospheric compression), which can further shift and extend the FLCS-dominated region in latitude.
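As a minimal sketch of this operational rule (the array names, noise floor, and persistence length are arbitrary choices made for illustration):

```python
# Find the first latitude poleward of the lowest-latitude IB where the >=300 keV
# omni-directional flux stays below an assumed noise floor for several samples.
import numpy as np

def ps2orb_poleward_edge(mlat, omni_flux_300kev, ib_mlat, floor=1.0, persist=3):
    """mlat and omni_flux_300kev are ordered from low to high latitude."""
    for i in range(len(mlat) - persist):
        if mlat[i] > ib_mlat and np.all(omni_flux_300kev[i:i + persist] < floor):
            return mlat[i]               # sustained drop-out: poleward PS2ORB boundary
    return None

mlat = np.arange(64.0, 72.0, 0.25)
flux = np.where(mlat < 69.0, 50.0, 0.1)                  # hypothetical flux profile
print(ps2orb_poleward_edge(mlat, flux, ib_mlat=67.0))    # -> 69.0
```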
Using the above operational definition for the latitudinal bounds of the PS2ORB interface region, we computed the average precipitating energy flux within IB-associated latitudes from isotropic \(\geq\)50 keV electrons. We also computed the net energy flux integrated over all latitudes including and poleward of the PS2ORB region, and for the entire ELFIN science zone (from all available latitudes - typically spanning 55\({}^{\circ}\) to 80\({}^{\circ}\)). This was done to provide a comparison of the relative magnitude of IB-associated energy flux versus that which is sourced by other ELFIN-observable regions, such as the outer radiation belts, electron plasma sheet, and the polar cap. To compute the energy flux, we integrated the precipitating electron distributions over energy and solid angle (pitch-angle and gyrophase), and averaged over the number of data samples the spacecraft spent in each region in each event. Figure 7 shows the distribution of energy flux in each of these latitudinal ranges, including the entire science zone (blue), the IB crossing and regions poleward of it (orange), and the IB/FLCS-dominated PS2ORB interface region (green). We immediately see that the PS2ORB interface region exhibits the highest average and maximum of the three categories. Comparing the green and orange bars also reveals that the precipitation is confined in latitude to the localized PS2ORB interface region adjacent to electron IBs, rather than being evenly distributed over all poleward latitudes (e.g. in the plasma sheet and polar cap).
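For concreteness, the energy-flux integral described above can be written schematically as follows; the array layout, units, and loss-cone handling are simplifications rather than the actual ELFIN processing chain.

```python
# Integrate a differential number flux j(E, alpha) [1/(cm^2 s sr keV)] over energy and
# over the solid angle of the loss cone, returning a precipitating energy flux in erg/cm^2/s.
import numpy as np

KEV_TO_ERG = 1.602e-9

def precipitating_energy_flux(E_keV, dE_keV, alpha_deg, dalpha_rad, j, loss_cone_deg):
    """j has shape (n_energy, n_pitch_angle); gyrophase is assumed already averaged out."""
    in_lc = alpha_deg < loss_cone_deg
    dOmega = 2.0 * np.pi * np.sin(np.radians(alpha_deg)) * dalpha_rad   # solid-angle element [sr]
    number_flux = (j[:, in_lc] * dOmega[in_lc]).sum(axis=1)             # 1/(cm^2 s keV)
    return float((number_flux * E_keV * dE_keV).sum() * KEV_TO_ERG)     # erg/cm^2/s
```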
To further investigate the occurrence and significance of IB-associated precipitation, we computed the local time distribution of latitude-averaged precipitating energy flux within the PS2ORB interface, as well as the fraction of the net energy flux (over the entire ELFIN science zone) contributed by the PS2ORB region. To assess how these quantities varied with geomagnetic activity, they were also binned by the activity indices AE, Dst, and Kp, taken independently of local time. We computed the total power deposition as the product of the crossing time and average flux. This is because as a polar-orbiting ELFIN satellite moves at roughly constant velocity across different latitudes in space (approximately at the same longitude), the time-average of the flux represents a line-integral over the latitudinal spatial dimension. The resultant quantity is power deposited over an ionospheric swath along the satellite track per unit cross-track distance (in ergs/s/cm or, re-scaled, in Watts/km). Thus the time-average represents average power deposition at the ionosphere at a given MLT, either over the entire science zone crossing, or over the more limited PS2ORB region. Ratios of time-average total precipitation in the PS2ORB region over that in the entire science zone represent the fractional total energy per unit time, or fractional power, in PS2ORB relative to the whole science zone. Figure 8 captures the total fractional power (top) and average electron energy flux (bottom) of the precipitation associated with the PS2ORB interface region (the IB FLCS source). For each MLT sector (horizontal axis) the value in color (see color bar) shows the fraction
of the IB events possessing a value greater or equal than shown vertical axis (i.e., cumulative probability). For example, we can see in the top panel than 0.15 (or 15%) of PS2ORB events (orange color in color bar) contribute, near midnight, \(>\)0.4 (\(>\)40%) of the total precipitating power within their individual science zones, and we can see in the bottom panel that 0.1 (or 10%) of PS2ORB events (yellow-green color) have, near midnight, an average energy flux \(\geq\)10\({}^{-1}\) erg/cm\({}^{2}\)-s. Note that here, as with the IB occurrence vs MLT distribution, we performed a three-hour MLT average on these values (central value plus and minus 1 hour). Pink lines in the two panels represent means of relative precipitating power (top panel) and average precipitating flux (bottom panel).
To summarize key features of these results, we see (top panel, pink line) that within \(\pm\)4 hours of pre-midnight, the PS2ORB interface region precipitation accounts for 0.1-0.2 (10%-20%) of the total precipitating power, on average. While this is a small value, on average, the cumulative probability distribution (top panel, color plot) shows that precipitating power \(>\)0.5 (\(>\)50%) of the total within a science zone (vertical axis value: 0.5) occur, near midnight, 15% of the time (yellow-orange colors: 0.15 in vertical color bar).
In extreme cases, near 22 MLT, up to \(\sim\)100% of the power can be delivered by PS2ORB (albeit more rarely). The average total precipitating energy fluxes (bottom panel) are found to span several orders of magnitude across all MLTs. Interestingly, the peak relative and total amounts of precipitating energy flux again occur in the pre-midnight sector around 22 MLT. As alluded to previously, this is likely due to preferential current sheet thinning at pre-midnight, the related proximity of the cross-tail current to Earth, and the preponderance of substorm onsets and injections at that location.
The results from Fig. 8 additionally allow for an estimate of the total global atmospheric power input across all local times and latitudes from IB-associated \(\geq\)50 keV electron precipitation. To determine the average global power input across the dataset, we used the IB MLT and latitudinal occurrence rates from Fig. 4 combined with the average PS2ORB latitudinal extent from Fig. 7. Using the pink mean line from Fig. 8, we add up the occurrence-weighted total energy flux in each 1 hour (15\({}^{\circ}\) longitude) MLT sector scaled by the effective area of the PS2ORB projected onto the atmosphere at the IB latitude, assuming 1\({}^{\circ}\) in latitude equates to 111 km projected, and that both hemispheres contribute equally. This results in a typical total atmospheric power deposition of around 10 MW at any given time. However, in the most extreme (and rare) cases where energy fluxes approach 1 erg/cm\({}^{2}\)-s across several hours in MLT, the total atmospheric power input from IB-associated \(\geq\)50 keV electron precipitation can exceed 1 GW--possibly exceeding the input from auroral sources, and thus likely playing an important role in ionospheric processes.
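As a rough illustration of how such a global estimate is assembled (every number below, including the occurrence profile, average fluxes, IB latitude, and PS2ORB width, is a placeholder assumption rather than a value read from Figs. 4, 7, or 8), one can weight an MLT profile of average flux by occurrence and by the projected area of the PS2ORB band:

```python
# Rough sketch of the global power estimate described above.  Every input here
# (occurrence profile, fluxes, IB latitude, PS2ORB width) is an illustrative
# placeholder, not a value taken from the ELFIN dataset or figures.
import numpy as np

R_ION_KM = 6371.0 + 100.0            # assumed ionospheric shell radius (100 km altitude)
KM2_TO_CM2 = 1e10
ERG_PER_S_TO_W = 1e-7

ib_lat_deg = 67.0                    # typical IB latitude
dlat_deg = 3.0                       # assumed PS2ORB latitudinal extent
mlt = np.arange(24)
nightside = (mlt >= 20) | (mlt <= 2)
occurrence = np.where(nightside, 0.8, 0.1)     # toy IB occurrence rate vs MLT
avg_flux = np.where(nightside, 5e-3, 5e-4)     # toy average energy flux, erg/cm^2/s

# Area of one 1-hour (15 deg longitude) MLT sector of the PS2ORB band, one hemisphere
band_width_km = dlat_deg * 111.0
sector_length_km = (15.0 / 360.0) * 2 * np.pi * R_ION_KM * np.cos(np.radians(ib_lat_deg))
sector_area_cm2 = band_width_km * sector_length_km * KM2_TO_CM2

power_w = 2 * np.sum(occurrence * avg_flux) * sector_area_cm2 * ERG_PER_S_TO_W
print(f"global IB-associated power ~ {power_w / 1e6:.0f} MW")
```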
### Energy flux variations with geomagnetic activity
Lastly, we explored the dependence of precipitation on geomagnetic activity. Towards that goal we computed the distribution of relative precipitating power and total energy flux in the PS2ORB interface region (IB-associated) versus geomagnetic activity indices AE, Dst, and Kp. Figure 9 presents these results. The top row corresponds to the PS2ORB interface region precipitating power relative to the total power in the science zone (akin to Fig. 8 top) while the bottom row contains the average precipitating energy flux (akin to Fig. 8 bottom). The AE values (\(\Delta AE=100\) nT) were three-hour averaged leading into the collection interval in order to capture longer-term activity trends rather than individual short-term activity phases (e.g. as in substorms). The final bin is integral, covering the cases of AE\(\geq\)600 nT. The trend is for both the relative power and total energy flux of PS2ORB precipitation to increase with AE (for an AE increase from 0 nT to 600 nT the relative power increases by 30% and the total energy flux by several orders of magnitude). A similar increase in PS2ORB precipitation is evident for relative power and for total energy flux as Dst decreases from 0 nT (quiet times) to \(<\)-30 nT (storm-like times), as shown in Figure 9 in increments of \(\Delta Dst=\) 10 nT. In this case there is also a rise for positive Dst values, which may correspond to effects associated with Storm Sudden Commencements (SSCs). We observed a similar relationship between Kp (\(\Delta Kp\) = 1) and the precipitating fluxes to that seen for AE: both the relative power and total energy flux increase nearly monotonically with this index. There are also changes in the slope of the relative intensities at Kp 2 and 5, indicating that on average, magnetospheric dynamics may exhibit an increased preference for FLCS over other processes under these conditions.
## 4 Summary and discussion
Using \(\sim\)1900 ELFIN-A and -B science zone collections from years 2019 and 2020 we have characterized the occurrence and spatial distribution of electron isotropy boundaries and associated electron energies and precipitation from \(\geq\)50 keV electrons due to magnetic field-line curvature scattering. We examined the spatial distributions in MLT, L-shell, and magnetic latitude and the dependence of precipitating power and energy flux on geomagnetic activity indices AE, Dst, and Kp. We found that electron IBs are present under all activity levels over nightside local times, from dusk through midnight to dawn. They have \(\sim\)90% peak occurrence rate around 22 MLT, remaining above 80% occurrence rate between MLTs of 21 to 01. The most common latitudes associated with IB energy-latitude dispersion are between 66\({}^{\circ}\)-68\({}^{\circ}\) (IGRF), or L-shells between 6 and 8, with a total observed latitudinal range between 60\({}^{\circ}\) and 74\({}^{\circ}\), or \(L\gtrsim 3.5\). The latitude-averaged precipitating energy flux associated with IBs, as well as the contribution to the total high-latitude precipitation power also peaks around 22 MLT, and has an MLT distribution similar to that of the IB spatial occurrence rate. The peaks in electron IB occurrence in MLT and magnetic latitude and in precipitation energy are statistically collocated with substorm onset MLT and latitude.
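For reference, the quoted latitude-to-L mapping can be checked with a centered-dipole approximation, \(L=1/\cos^{2}\Lambda\) (the study itself uses IGRF mapping, so small differences from the values quoted above are expected):

```python
# Centered-dipole mapping between invariant magnetic latitude and L-shell,
# L = 1/cos^2(MLAT); the study uses IGRF, so values differ slightly.
import numpy as np

for mlat_deg in (60.0, 66.0, 68.0, 74.0):
    L = 1.0 / np.cos(np.radians(mlat_deg)) ** 2
    print(f"MLAT {mlat_deg:4.1f} deg  ->  L ~ {L:4.1f}")
```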
The evolution of electron IB latitude and peak energy with activity is also consistent with the equatorward evolution of the edge of the oval during active times. This suggests that geomagnetic activity is conducive to the formation of better-defined IBs, which implies more intense fluxes at the tail-dipole transition region. We interpret this as due to the availability of intense fluxes over broad energies from 10s to 100s of keV (and often \(>\)1 MeV) of electrons injected to that region by reconnection outflows, as in Gabrielse et al. (2014). The maximum average IB-associated energy flux observed was on the order of \(\sim\)1 erg/cm\({}^{2}\)-s but varies considerably spanning four orders of magnitude depending on geomagnetic activity. The IB/FLCS source contributes, on average, between 10-20% of the total nightside high latitude power, but contributes more than 50% of the total around 20% of the time (in rare cases even approaching 100% of the total). The total global atmospheric power input from the IB/FLCS source is on average around 10 MW, but can exceed 1 GW in the most extreme cases. Both the total intensity and the relative contribution to high-latitude precipitation of the equatorial FLCS source are seen to increase with increasing AE, \(|\)Dst\(|\), and Kp, suggesting that IBs may play an important role in high-latitude energetic electron precipitation at active times.
Our study establishes a baseline for further investigations of the effects of energetic electron precipitation from IB-associated field-line scattering on magnetospheric and ionospheric processes. While we do not expect the precipitating power from the IB source to typically out-compete the total power from lower-energy (\(<\)50 keV), higher number flux auroral sources at higher latitudes (e.g. as in Newell et al. (2009)), more than 50% of the IB events in our database contained \(\geq\)500 keV precipitating electrons. This indicates that on average the IB-associated source will tend to penetrate much deeper into the atmosphere than 10s of keV particles, and it is therefore worth considering in models of ionospheric conductivity and chemical reactivity at lower altitudes. This also suggests that models of energetic particle transport from the plasma sheet to the outer radiation belts should incorporate these results for a proper accounting of magnetospheric losses. The electron IB properties are all found to be positively correlated with geomagnetic activity as inferred from splitting the dataset about AE=200 nT. Further investigation is needed to determine the causal relationship between solar wind driving and magnetospheric responses that result in IB-related precipitation. These studies could be further augmented by conjunctions with equatorial spacecraft which may provide simultaneous equatorial observations of the magnetic topology and electron populations, while also reducing field-line mapping uncertainties.
Similar to the proton IBs (e.g. as in Newell et al. (1998)), electron IBs exhibit peak pre-midnight occurrence rate and precipitation intensity around 22 MLT. This is presumably due to the proximity of the cross-tail current to Earth in that region, its magnitude being stronger at dusk than at dawn (Lu et al., 2018). This results in a tail-dipole transition region closer to Earth at dusk than at dawn, which can be associated with sharper field-line curvature. Moreover, we note that the highest energy portion of electron IBs is sometimes observed to occur in the vicinity of characteristic signatures of electromagnetic ion cyclotron (EMIC) wave scattering in ELFIN data. Such signatures are most common at pre-midnight (where EMIC waves peak in occurrence rate). This suggests that occasionally both mechanisms can act at the same location, and may, in fact, share common driving: magnetotail injections. Beyond the technique used to identify IB dispersion, we did not attempt to separate the two processes in our analysis, but we note that in future studies it is possible to distinguish the two thanks to the characteristic narrowed energy and lower-latitude extent of EMIC precipitation.
Around 5% of events in our dataset exhibit unusually intense IB-like wide-energy isotropic electron precipitation signatures at either very low latitudes (appearing as low as L\(\sim\)4 near the plasmapause), or over a highly extended latitudinal range (exceeding 10\({}^{\circ}\) in apparent poleward extent), corresponding to significantly disturbed magnetospheric conditions during storms or strong substorms. We excluded such events from our database because they did not exhibit the characteristic energy-latitude dispersion of the isotropy boundary assumed in the study. Nonetheless, their properties suggest that wave-particle scattering sources alone may not sufficiently explain them, leaving FLCS as a potential cause. One possibility is that these structures correspond to an active-time type of isotropy boundary in which the equatorial magnetic field is varying over FLCS timescales, possibly accompanied by rapid particle energization in vicinity of the scattering region. Such signatures may be compatible with those of local tail \(B_{z}\) minimum as reported e.g. by V. A. Sergeev et al. (2018), or of active tail reconfiguration, which would appreciably vary the critical FLCS scattering threshold \(\kappa_{cr}^{2}\) and field-line mappings in real-time. Additionally, during periods of moderate activity, strongly populated IB-like structures can also sometimes be seen at dawn/dusk. A possible explanation may be that ULF waves of appropriate frequency and comparable strength to the equatorial magnetic field may allow for electron isotropization far away from midnight. This would present an alternative mechanism to magnetopause scattering at the flanks, effectively expanding the IB local time occurrence distribution to events that were filtered out by the 3 earth radius magnetospheric boundary culling criterion in our analysis, as well as substantially increasing the maximum amount of precipitating energy flux contributed by IBs/FLCS. Further simulation and observational work will be helpful in addressing these and related questions.
In addition to its relevance for electron loss and atmospheric energy deposition, our study also has implications for improvements in magnetic field modeling. Such models are still far from encompassing dynamical variations of the magnetotail in space and time. By forming an observational dataset of isotropy boundary properties of the type used here, magnetic models can be further constrained (e.g. as alluded in V. A. Sergeev et al. (1993)), either by fine-tuning their existing parameters (such as cross-tail current sheet thickness and Earthward-most extent), or by direct value sampling in modern assimilative models. This method becomes especially powerful when combined with equatorial measurements of the magnetic field for mapping, constituting an avenue ripe for future effort. We note that in this work, predictions based on magnetic field models were only used as a qualitative estimate of the expected isotropy boundary properties.
A final clarification concerns the extended isotropic precipitation region associated with and poleward of the IBs, which we referred to as the "electron plasma sheet-to-outer radiation belt", PS2ORB. Using ELFIN data alone at a single local time, it is not feasible to ascertain a definite instantaneous inner edge to the electron plasma sheet, nor to infer the location of the last-closed drift shells of the outer belt. We regard this region as the transition in global background magnetic field from the outer to the inner magnetosphere in which freshly injected plasma sheet electrons acquire sufficient energy to be visible above the highest plasma sheet energies (\(>\)300 keV) and for which freshly injected, trapped or quasi-trapped outer belt electrons curvature scatter on timescales faster than a drift period. Both populations then manage to precipitate with high intensities prior to reaching the low-latitude boundary layer or the magnetopause boundary (meeting the phenomenological definition of a transition separating the plasma sheet from outer belt). We envision that precipitation from FLCS in this region out-competes other processes (as evidenced by the energy-latitude distribution of flux cutoffs that dominates over other wave-associated particle spectral shapes, e.g., due to whistler-mode chorus or EMIC waves). This assumption can benefit from ELFIN comparison with in-situ equatorial measurements of electron flux, wave power and magnetic field strengths, as well as from data-constrained adaptive magnetic field models.
## 5 Open Research
The ELFIN data and software used in this study are part of the SPEDAS framework (Angelopoulos et al., 2019), which is freely available to the public at the following url: [http://spedas.org/wiki/index.php?title=Main_Page](http://spedas.org/wiki/index.php?title=Main_Page)
###### Acknowledgements.
This work has been supported by NASA awards NNX14AN68G, 80NSSC19K1439, and NSF grants AGS-1242918 and AGS-2021749. We are grateful to NASA's CubeSat Launch Initiative for ELFIN's successful launch in the desired orbits. We acknowledge early support of ELFIN project by the AFOSR, under its University Nanosat Program, UNP-8 project, contract FA9453-12-D-0285, and by the California Space Grant program. We acknowledge critical contributions of numerous ELFIN undergraduate student interns and volunteers.
* Gabrielse et al. (2014) Gabrielse, C., Angelopoulos, V., Runov, A., & Turner, D. L. (2014). Statistical characteristics of particle injections throughout the equatorial magnetotail. _Journal of Geophysical Research: Space Physics_, _119_(4), 2512-2535. Retrieved from [https://doi.org/10.1002/2013ja019638](https://doi.org/10.1002/2013ja019638) doi: 10.1002/2013ja019638
* Gray & Lee (1982) Gray, P. C., & Lee, L. C. (1982). Particle pitch angle diffusion due to nonadiabatic effects in the plasma sheet. _Journal of Geophysical Research_, _87_(A9), 7445. Retrieved from [https://doi.org/10.1029/ja087ia09p07445](https://doi.org/10.1029/ja087ia09p07445)
* Ilie et al. (2015) Ilie, R., Ganushkina, N., Toth, G., Dubyagin, S., & Liemohn, M. W. (2015, December). Testing the magnetotail configuration based on observations of low-altitude isotropic boundaries during quiet times. _Journal of Geophysical Research: Space Physics_, _120_(12). Retrieved from [https://doi.org/10.1002/2015ja021858](https://doi.org/10.1002/2015ja021858) doi: 10.1002/2015ja021858
* Imhof et al. (1979) Imhof, W., Reagan, J., & Gaines, E. (1979). Studies of the sharply defined \(l\)-dependent energy threshold for isotropy at the midnight trapping boundary. _Journal of Geophysical Research_, _84_(A11), 6371. Retrieved from [https://doi.org/10.1029/ja084ia11p06371](https://doi.org/10.1029/ja084ia11p06371)
* Imhof et al. (1997) Imhof, W. L., Chenette, D. L., Gaines, E. E., & Winningham, J. D. (1997, January). Characteristics of electrons at the trapping boundary of the radiation belt. _Journal of Geophysical Research: Space Physics_, _102_(A1), 95-104. Retrieved from [https://doi.org/10.1029/96ja02797](https://doi.org/10.1029/96ja02797)
* Imhof et al. (1977) Imhof, W. L., Reagan, J. B., & Gaines, E. E. (1977, November). Fine-scale spatial structure in the pitch angle distributions of energetic particles near the midnight trapping boundary. _Journal of Geophysical Research_, _82_(32), 5215-5221. Retrieved from [https://doi.org/10.1029/ja082i032p05215](https://doi.org/10.1029/ja082i032p05215)
* Liu et al. (2016) Liu, J., Angelopoulos, V., Zhang, X.-J., Turner, D. L., Gabrielse, C., Runov, A.,... Spence, H. E. (2016, February). Dipolarizing flux bundles in the cis-geosynchronous magnetosphere: Relationship between electric fields and energetic particle injections. _Journal of Geophysical Research: Space Physics_, _121_(2), 1362-1376. Retrieved from [https://doi.org/10.1002/2015ja021691](https://doi.org/10.1002/2015ja021691)
* Lu et al. (2018) Lu, S., Pritchett, P. L., Angelopoulos, V., & Artemyev, A. V. (2018, April). Formation of dawn-dusk asymmetry in earth's magnetotail thin current sheet: A three-dimensional particle-in-cell simulation. _Journal of Geophysical Research: Space Physics_, _123_(4), 2801-2814. Retrieved from [https://doi.org/10.1002/2017ja025095](https://doi.org/10.1002/2017ja025095)
* Lyons et al. (1987) Lyons, L. R., Vampola, A. L., & Speiser, T. W. (1987). Ion precipitation from the magnetopause current sheet. _Journal of Geophysical Research_, _92_(A6), 6147. Retrieved from [https://doi.org/10.1029/ja092ia06p06147](https://doi.org/10.1029/ja092ia06p06147)
* Martin et al. (2000) Martin, R., Delcourt, D., Holland, D., & Asbury, M. (2000, April). Magneto-tail particle distributions associated with pitch angle isotropization. _Journal of Atmospheric and Solar-Terrestrial Physics_, _62_(6), 513-519. Retrieved from [https://doi.org/10.1016/s1364-6826](https://doi.org/10.1016/s1364-6826)(00)00020-1
* Newell et al. (1998) Newell, P. T., Sergeev, V. A., Bikkuzina, G. R., & Wing, S. (1998, March). Characterizing the state of the magnetosphere: Testing the ion precipitation maxima latitude (b2i) and the ion isotropy boundary. _Journal of Geophysical Research: Space Physics_, _103_(A3), 4739-4745. Retrieved from [https://doi.org/10.1029/97ja03622](https://doi.org/10.1029/97ja03622)
* Newell et al. (2009) Newell, P. T., Sotirelis, T., & Wing, S. (2009, September). Diffuse, monoenergetic, and broadband aurora: The global precipitation budget. _Journal of Geophysical Research: Space Physics_, _114_(A9), n/a-n/a. Retrieved from [https://doi.org/10.1029/2009ja014326](https://doi.org/10.1029/2009ja014326)
* Sergeev et al. (1983) Sergeev, V., Sazhina, E., Tsyganenko, N., Lundblad, J., & Soraas, F. (1983, October). Pitch-angle scattering of energetic protons in the magnetotail current sheet as the dominant source of their isotropic precipitation into the nightside ionosphere. _Planetary and Space Science_, _31_(10), 1147-1155. Retrieved from [https://doi.org/10.1016/0032-0633](https://doi.org/10.1016/0032-0633)(83)90103-4 doi:10.1016/0032-0633(83)90103-4
* Sergeev & Tsyganenko (1982) Sergeev, V., & Tsyganenko, N. (1982, October). Energetic particle losses and trapping boundaries as deduced from calculations with a realistic magnetic field model. _Planetary and Space Science_, _30_(10), 999-1006. Retrieved from [https://doi.org/10.1016/0032-0633](https://doi.org/10.1016/0032-0633)(82)90149-0
* Sergeev et al. (2015) Sergeev, V. A., Chernyaev, I. A., Angelopoulos, V., & Ganushkina, N. Y. (2015, December). Magnetospheric conditions near the equatorial footpoints of proton isotropy boundaries. _Annales Geophysicae_, _33_(12), 1485-1493. Retrieved from [https://doi.org/10.5194/angeo-33-1485-2015](https://doi.org/10.5194/angeo-33-1485-2015) doi:10.5194/angeo-33-1485-2015
* Sergeev et al. (2015) Sergeev, V. A., Chernyaeva, S. A., Apatenkov, S. V., Ganushkina, N. Y., & Dubyagin, S. V. (2015, August). Energy-latitude dispersion patterns near the isotropy boundaries of energetic protons. _Annales Geophysicae_, _33_(8), 1059-1070. Retrieved from [https://doi.org/10.5194/angeo-33-1059-2015](https://doi.org/10.5194/angeo-33-1059-2015)
* Sergeev et al. (2018) Sergeev, V. A., Gordeev, E. I., Merkin, V. G., & Sitnov, M. I. (2018, March). Does a local b-minimum appear in the tail current sheet during a substorm growth phase? _Geophysical Research Letters_, _45_(6), 2566-2573. Retrieved from [https://doi.org/10.1002/2018gl077183](https://doi.org/10.1002/2018gl077183) doi:10.1002/2018gl077183
* Sergeev et al. (1993) Sergeev, V. A., Malkov, M., & Mursula, K. (1993, May). Testing the isotropic boundary algorithm method to evaluate the magnetic field configuration in the tail. _Journal of Geophysical Research: Space Physics_, _98_(A5), 7609-7620. Retrieved from [https://doi.org/10.1029/92jab2587](https://doi.org/10.1029/92jab2587)
* Shevchenko et al. (2010) Shevchenko, I. G., Sergeev, V., Kubyshkina, M., Angelopoulos, V., Glassmeier, K. H., & Singer, H. J. (2010, November). Estimation of magnetosphere-ionosphere mapping accuracy using isotropy boundary and THEMIS observations. _Journal of Geophysical Research: Space Physics_, _115_(A11), n/a-n/a. Retrieved from [https://doi.org/10.1029/2010ja015354](https://doi.org/10.1029/2010ja015354) doi:10.1029/2010ja015354
* Speiser (1965) Speiser, T. W. (1965, September). Particle trajectories in model current sheets: 1. analytical solutions. _Journal of Geophysical Research_, _70_(17), 4219-4226. Retrieved from [https://doi.org/10.1029/j2070i017p04219](https://doi.org/10.1029/j2070i017p04219)
* Tsyganenko (1989) Tsyganenko, N. (1989, January). A magnetospheric magnetic field model with a warped tail current sheet. _Planetary and Space Science_, _37_(1), 5-20. Retrieved from [https://doi.org/10.1016/0032-0633](https://doi.org/10.1016/0032-0633)(89)90066-4 doi:10.1016/0032-0633(89)90066-4
* Young (2002) Young, S. L. (2002). Empirical model for \(\mu\)-scattering caused by field line curvature in a realistic magnetosphere. _Journal of Geophysical Research_, _107_(A6). Retrieved from [https://doi.org/10.1029/2000ja000294](https://doi.org/10.1029/2000ja000294)
* Yue et al. (2014) Yue, C., Wang, C.-P., Lyons, L., Liang, J., Donovan, E. F., Zaharia, S. G., & Henderson, M. (2014, October). Current sheet scattering and ion isotropic boundary under 3-d empirical force-balanced magnetic field. _Journal of Geophysical Research: Space Physics_, _119_(10), 8202-8211. Retrieved from [https://doi.org/10.1002/2014ja020172](https://doi.org/10.1002/2014ja020172) doi:10.1002/2014ja020172
Figure 1: Model-based spatial profile (GSM xz-cut) on 2020-09-02/14:22:00 of equatorial source locations corresponding to minimum electron kinetic energies for isotropization by field-line curvature scattering, resulting in an isotropy boundary observed at LEO. The minimum required energy for scattering is observed to be lowest at the center of the cross-tail current. The pink and blue curves depict the field-line mappings for the 5 MeV and 50 keV IBs, respectively. Clear energy-latitude dispersion is observed in the IB location for a given energy, with more energetic particles isotropized closer to Earth. The region shaded in yellow shows the set of field-lines mapping into the equatorial energy-latitude dispersion region, while the mint and light blue colors show the poleward and equatorward latitudinal extent ELFIN can typically observe. The critical field geometry parameter \(\kappa_{cr}^{2}\) was taken to be 8.
Figure 2: Example isotropy boundary crossing observed by the ELFIN-B on 2020-09-02. The spacecraft began in the northern polar cap and moved southward toward the equator, crossing first plasma sheet and then outer radiation belt field lines. The top two panels show perpendicular (locally-mirroring) and precipitating electron energy spectra, respectively. The third panel
Figure 3: Spatial distribution of isotropy boundary locations in our dataset, observed by ELFIN and magnetically mapped to the equator. Points marked in blue represent the closest Earthward footpoint of each observed IB crossing (nominally corresponding to the highest energy observed to have isotropized electrons). Orange points represent the furthest footpoint in the IB dispersion signature (almost always representing the onset of 50 keV electron isotropy – the lowest energy ELFIN can measure). Points marked in gray and black represent the same IB dispersion signatures as blue and orange, respectively, but are too close to the magnetopause (solid red line). Those crossings (both gray and black) have been rejected as likely being of magnetopause origin, based on the criterion that the 50 keV IB (gray point) is farther from Earth than 3 Earth radii inside of the magnetopause (dashed yellow line).
Figure 4: Occurrence rates of IB crossings versus MLT (top), L-shell (middle), and magnetic latitude (bottom). Peak occurrence in MLT is at 22 MLT. Min and max refer to the low and high latitude boundary of the IB-dispersion signature. Median values for L-shell and MLAT are shown in the panels, and are L of 6-8 and 66\({}^{\circ}\)-68\({}^{\circ}\), respectively, regardless of which boundary (low or high): the spread of these distributions due to geomagnetic activity and MLT is much larger than the thickness of the IB-dispersion. As a result the L-shell and MLAT distributions vary over wide ranges (several L-shells and \(\sim\)7\({}^{\circ}\) in MLAT around the respective medians).
Figure 5: Activity-dependent mean and probability-weighted standard deviation of IB dispersion region properties, including minimum and maximum energies appearing in the dispersion (top), minimum and maximum L-shells and magnetic latitudes (center), and slope of the dispersion in terms of L-shell and magnetic latitude (bottom). The general trend with increasing geomagnetic activity is for IBs to appear at lower latitudes and shift toward pre-midnight, accompanied by an increase in maximum electron energies. MLTs for which fewer than 5 events were observed were left blank.
Figure 6: Average absolute ELFIN-observed latitudinal profile of IB crossing locations and omnidirectional flux cutoff versus electron energy alongside average latitudinal extent from the IB crossing to omnidirectional flux cutoff versus energy (top), alongside the event-wise difference between these quantities (middle). A clear break in slopes is observed in the vicinity of 300 keV, suggesting a transition from the region of electrons dominated by field-line curvature scattering to that of the electron plasma sheet population (operationally used to define the latitudinal extent of the “PS2ORB” region). The mean latitudinal extent of the PS2ORB region in each MLT bin using the 300 keV cutoff criterion is shown in the bottom panel. The latitudinal width of the region dominated by FLCS exhibits clear variations with local time and activity.
Figure 7: Distribution of latitude-averaged precipitating energy flux from \(\geq\)50 keV electrons for three categories: The entire ELFIN science zone (blue); IB crossing and latitudes poleward of it (orange); and PS2ORB interface region (encompasses IB crossing). The quantity \(\Delta\tilde{\theta}\) represents the average latitudinal extent of the ELFIN observations in each region, while the corresponding quantity \(\left\langle J_{E,prec}\right\rangle_{\Delta\theta}\) represents the mean latitude-averaged integral energy flux over all events in erg/cm\({}^{2}\)-s (or “pfu”). The plot reveals that the IB-associated latitudes have the highest average energy flux of the latitude-mapped source regions under consideration.
Figure 8: Distribution of the precipitating kinetic energy flux of \(\geq\)50 keV electrons in the PS2ORB region versus MLT. Top: Fraction of the average total precipitating electron energy flux \(\geq\)50 keV compared with all latitudes observed in ELFIN science zones possessing an IB crossing. Bottom: average total precipitating energy flux from precipitating electrons versus MLT. The values were aggregated over all activity levels in the dataset, with each column representing a separate MLT-binned cumulative row-wise probability of occurrence. |
2303.14921 | Sources of torsion in Poincare gauge theory | We study sources for torsion in Poincare gauge theory of any dimension,
signature, and spin. We find that symmetric kinetic terms for non-Yang-Mills
bosonic fields of arbitrary rank drive torsion. Our detailed discussion of
spin-3/2 Rarita-Schwinger fields shows that they source all independent parts
of the torsion. We develop systematic notation for spin-(2k+1)/2 fields and
find the spin tensor for arbitrary k in n > 2k dimensions. For k > 0 there is a
novel direct coupling between torsion and spinor fields. We also cast the
well-known gauge relation between the canonical and Belinfante-Rosenfield
energy tensors in terms of different choices of independent variables. | James T. Wheeler | 2023-03-27T05:35:17Z | http://arxiv.org/abs/2303.14921v2 | # Sources of torsion in Poincare gauge gravity
###### Abstract
We give a concise geometric development of Poincare gauge theory in any dimension and signature, and trace the difference between the canonical and Belinfante-Rosenfield energy tensors to different choices of independent variables. Then we give extensive attention to sources for torsion, finding that symmetric kinetic terms for non-Yang-Mills bosonic fields of arbitrary rank drive torsion. Our detailed discussion of spin-3/2 Rarita-Schwinger fields shows that they source all independent parts of the torsion.
We develop systematic notation for spin-(2k+1)/2 fields and find the spin tensor for arbitrary k in n \(\geq\) 2k+1 dimensions. For \(k>0\) there is a novel direct coupling between torsion and spinor fields.
## 1 Introduction
### General relativity as a gauge theory
The Standard Model emerged as a gauge theory over a period of half a century. Early developments [1] coupling the electromagnetic interaction to quantized matter as \(U\left(1\right)\) gauge theory evolved into our current understanding of the electroweak and strong interactions as arising from local symmetries. Gauge theories' success motivated the later but parallel development of general relativity as a Poincare gauge theory.
Utiyama [2] gave the first treatment of general relativity as a gauge theory, choosing the Lorentz group as the local symmetry. Later, Sciama [3] developed the Lorentz gauge theory further, while Kibble [4] generalized to the full Poincare group by identifying translational gauge fields with the co-tangent basis. With the use of Cartan's quotient method for constructing homogeneous manifolds and generalizing them to curved geometries [5, 6], Ne'eman and Regge [7, 8] applied the gauging to supergravity. These methods still provide a powerful tool for the study of general relativity within broader symmetries. Shortly afterward Ivanov and Niederle used the techniques to study gravity theories based on the Poincare, de Sitter, anti-de Sitter, and conformal groups [9, 10].
While a number of symmetry groups lead to general relativity or equivalent gravity theories with additional structure [9, 10, 11], Poincare gauge theory employs the smallest group yielding the essential features and therefore enjoys the most consistent attention as a gauge theory of gravity. Yet even this modest extension of general relativity introduces new features, most notably the torsion.
Our version of Poincare gauging using Cartan's methods is described in Sections (2) and (3). The principal fields are the curvature and torsion 2-forms, given in terms of the solder form and spin connection. The inclusion of torsion produces a Riemann-Cartan geometry rather than Riemannian.
To reproduce general relativity from Poincare gauging in Riemannian geometry we can disregard the torsion and vary only the metric. The resulting Riemannian geometry is known to be consistent and metric variation leads to a symmetric energy tensor. Exploration of the unconstrained Riemann-Cartan geometry is the purview of ECSK theory and its generalizations to dynamical torsion.
### Palatini variation and ECSK
The original formulation of general relativity assumed the metric compatible Christoffel connection, with the metric as the independent variable so the Einstein-Hilbert action is a functional of the metric \(g\) alone \(S_{EH}\left[g\right]\). It was soon shown by Palatini [12] that if the action is regarded as a functional of the metric and an arbitrary symmetric connection \(S_{EH}\left[g,\Gamma\right]\), we find the usual field equation along with the condition of metric compatibility. With this Palatini variation, the use of the Christoffel connection is derived. However, the assumption of a symmetric connection rules out any role for torsion.
In a gravitational gauge theory built from Poincare symmetry the connection forms are dual to the generators of the original symmetry and it is natural to vary all of them independently. This means varying both the solder form \(\mathbf{e}^{a}\) and the spin connection \(\boldsymbol{\omega}^{a}_{\phantom{a}b}\) in the style of Palatini. When the \(\left(\mathbf{e}^{a},\boldsymbol{\omega}^{a}_{\phantom{a}b}\right)\) variation is carried out with vanishing torsion the usual Einstein theory of gravity results. However, when the full Riemann-Cartan geometry including torsion is allowed, the spin tensor of matter sources will lead to nonvanishing torsion in the same way that the energy tensor drives curvature. In this sense Poincare gauge theory can make predictions beyond those of general relativity.
The development of Riemann-Cartan geometry using the Einstein-Hilbert action is now known as the Einstein-Cartan-Sciama-Kibble (ECSK) model of gravity. Its long history begins with Cartan's generalization of Riemannian geometry [13, 14, 15, 16]. A few years later Einstein used torsionful geometry to discuss a teleparallel model [17], though this theory is not cast in the same terms as general relativity. Originally, the evolving ECSK theory was the study of the metric variation of the Einstein-Hilbert action \(S_{EH}\left[g\right]\) in a Riemann-Cartan geometry. The gauge theory approach was more fully developed starting with Utiyama and continuing as outlined above [2, 3, 4, 7, 8, 9, 10]. A detailed review is given in [18]. With the advent of modern gauge theory it has become natural to vary both metric and connection \(S_{EH}\left[g,\Gamma\right]\) or both solder form and spin connection \(S_{EH}\left[e,\omega\right]\).
Basing gravity theory on the Einstein-Hilbert action with source fields, torsion is found to be non-propagating and vanishing away from material sources. This is perhaps a benefit, since the geometric understanding of torsion implies non-integrability of functions around closed curves, in much the same way as vectors are rotated under parallel transport around loops in Riemannian geometry. Since there is no experimental evidence in favor of torsion, and limits on torsion coupling to matter are strong (see Donald E. Neville\({}^{1}\)[19]), much study of ECSK has focussed on showing that torsion does not persist in physical situations (e.g., [20]). It is natural that the seemingly pathological non-integrability, the anomalous effect on angular momentum, and in general the extreme success of general relativity should have this effect. Nonetheless, the study of ECSK theory has drawn considerable attention over the last century, including generalizations to propagating torsion [19, 21, 20, 22, 23]. The latter have been criticized as incapable of simultaneous unitarity and normalizability [24].
On the other hand, sometimes a deeper understanding of geometry and general relativity is to be gained by fully exploring nearby theories. This is the goal of the present work: to describe broad classes of sources for torsion in Poincare gauge theory. Our results hold in any dimension \(n\) and any signature \((p,q)\). The exercise includes some important physical predictions, since some of the sources we discuss, notably the spin-\(\frac{3}{2}\) Rarita-Schwinger field, are predicted by string and other supergravity theories.
In the next Section we present the basic properties of Poincare gauge theory using Cartan methods. We include the structure equations, Bianchi identities, the solution for the spin connection in terms of the compatible connection and the contorsion, and the decomposition of the torsion into invariant parts. These results are geometrical.
The ECSK action is introduced in Section (3), where we discuss two distinct methods of variation. For the first method the action is taken as a functional of the solder form and the full spin connection, \(S\left[\mathbf{e}^{a},\boldsymbol{\omega}^{a}_{\phantom{a}b}\right]\), in the spirit of Palatini but allowing torsion. The second method uses the decomposition of the spin connection into a compatible piece and the contorsion tensor \(\boldsymbol{\omega}^{a}_{\phantom{a}b}=\boldsymbol{\alpha}^{a}_{\phantom{a}b }+\mathbf{C}^{a}_{\phantom{a}b}\). This allows us to respect the Lorentz fiber structure of the bundle by varying only the Lorentz tensors-the solder form and the contorsion, while treating the compatible part of the spin connection as a functional of the solder form \(\boldsymbol{\alpha}^{a}_{\phantom{a}b}=\boldsymbol{\alpha}^{a}_{\phantom{a}b }\left(\mathbf{e}^{a}\right)\).
The effect of generic matter fields is studied in Section (4), where the contrast between the two variational approaches of the previous Section becomes important: different choices of independent variables give different energy tensors. We show that this leads to the difference between the canonical energy tensor and the Belinfante-Rosenfield tensor. Additionally, we show that while the solder form variation leads to an antisymmetric piece of the Einstein equation, Lorentz invariance restores symmetry.
The bulk of our investigation, presented in Section (5), concerns the effects of various types of fundamental fields on torsion. The exceptional cases of Klein-Gordon and Yang-Mills fields are treated first. The actions for these fields do not depend on the spin connection and therefore do not provide sources for torsion. Next, we study a class of bosonic fields of arbitrary spin with actions quadratic and symmetric in covariant derivatives. Except for scalars, these drive torsion. In Subsection (5.3) we derive the well-known axial current source for totally antisymmetric torsion arising from Dirac fields. We also check the effect of nonvanishing spin tensor in the limit of general relativity where the torsion vanishes.
The effect of the less thoroughly studied Rarita-Schwinger field on torsion is examined in Subsection (5.4). While the axial source for Dirac fields arises from the anticommutator of a \(\gamma\)-matrix with the spin connection, the Rarita-Schwinger field couples through a similar anticommutator but with the product of three \(\gamma\)-matrices. In addition, we find a new direct coupling of the spin-\(\frac{3}{2}\) field to torsion. Unlike the Dirac field with only an axial current source, the Rarita-Schwinger field drives all three independent pieces of the torsion. Except in dimensions \(5,7\) and \(9\), spin-\(\frac{3}{2}\) fields have enough degrees of freedom to drive all components of the torsion independently.
Finally, we introduce new compact notation for spin-\(\frac{2k+1}{2}\) spinor-valued \(p\)-form fields in Subsection (5). This enables us to write actions for arbitrary \(k\) and find the general form of the spin tensor. The physical properties appear to echo those of the Rarita-Schwinger field.
We conclude with a brief summary of our results.
## 2 Poincare gauge theory
All results below hold in arbitrary dimension \(n=p+q\) and signature \(s=p-q\). The group we gauge is then \(SO\left(p,q\right)\) or \(Spin\left(p,q\right)\) with the familiar spacetime case having \(p=3,q=1\).
There are two stages to building the Poincare gauge theory: First, we apply Cartan's construction to develop a fiber bundle and second, we specify an action functional.
The construction of the geometry is described in Section (2). Briefly, we use structure constants of Poincare Lie algebra to write the Maurer-Cartan equations, a set of first order differential equations. These equations are equivalent to the Lie algebra. Next, we form the quotient of the Poincare group by its Lorentz subgroup and the Lorentz equivalence classes (cosets) form a manifold. Defining a projection from the cosets to this manifold gives a principal fiber bundle. The manifold is homogeneous and the fibers are Lorentz. The final step is to change the connection forms to give horizontal curvatures and to (perhaps) change the manifold.
### Geometric relations of Riemann-Cartan geometry
By Poincare gauge theory, we mean physical models based on the unrestricted Cartan gauge theory of the Poincare group. Starting with the generators \(M^{a}_{\phantom{a}b}\) and \(P_{a}\) of the Poincare Lie algebra, we define 1-forms \(\boldsymbol{\omega}^{a}_{\phantom{a}b}\) and \(\mathbf{e}^{b}\)
\[\left\langle M^{c}_{\phantom{c}d},\boldsymbol{\omega}^{a}_{ \phantom{a}b}\right\rangle = \eta^{ac}\eta_{bd}-\delta^{c}_{b}\delta^{a}_{d}\] \[\left\langle P_{a},\mathbf{e}^{b}\right\rangle = \delta^{b}_{a}\]
The Maurer-Cartan equations for dual forms for any Lie algebra \(\left\langle G_{A},\boldsymbol{\omega}^{B}\right\rangle=\delta^{B}_{A}\) are given by \(\mathbf{d}\tilde{\boldsymbol{\omega}}^{A}=-\frac{1}{2}c^{A}_{\phantom{A}BC} \tilde{\boldsymbol{\omega}}^{B}\wedge\tilde{\boldsymbol{\omega}}^{C}\) where \(c^{A}_{\phantom{A}BC}\) are the structure constants. For the Poincare group \(\mathcal{P}\) this gives
\[\mathbf{d}\tilde{\boldsymbol{\omega}}^{a}_{\phantom{a}b} = \tilde{\boldsymbol{\omega}}^{c}_{\phantom{a}b}\wedge\tilde{ \boldsymbol{\omega}}^{a}_{\phantom{a}c}\] \[\mathbf{d}\tilde{\mathbf{e}}^{a} = \tilde{\mathbf{e}}^{b}\wedge\tilde{\boldsymbol{\omega}}^{a}_{ \phantom{a}b}\]
and taking the quotient by the Lorentz subgroup \(\mathcal{L}\) allows us to develop a principal fiber bundle with Lorentz symmetry over a homogeneous \(n\)-dimensional manifold \(\mathcal{M}^{(n)}\).
By modifying the solder form and the spin connection 1-forms \(\left(\tilde{\mathbf{e}}^{b},\tilde{\boldsymbol{\omega}}^{a}_{\phantom{a}b} \right)\rightarrow\left(\mathbf{e}^{b},\boldsymbol{\omega}^{a}_{\phantom{a}b}\right)\) we introduce a Poincare covariant tensor with two Lorentz covariant components: the _curvature_ \(\boldsymbol{\mathcal{R}}^{a}_{\phantom{a}b}\) and the _torsion_ \(\mathbf{T}^{a}\)
\[\mathbf{d}\boldsymbol{\omega}^{a}_{\phantom{a}b} = \boldsymbol{\omega}^{c}_{\phantom{a}b}\wedge\boldsymbol{\omega}^ {a}_{\phantom{a}c}+\boldsymbol{\mathcal{R}}^{a}_{\phantom{a}b} \tag{1}\] \[\mathbf{d}\mathbf{e}^{a} = \mathbf{e}^{b}\wedge\boldsymbol{\omega}^{a}_{\phantom{a}b}+ \mathbf{T}^{a} \tag{2}\]
We require the \(\boldsymbol{\mathcal{R}}^{a}_{\phantom{a}b}\) and \(\mathbf{T}^{a}\) to be horizontal,
\[\boldsymbol{\mathcal{R}}^{a}_{\phantom{a}b} = \frac{1}{2}\mathcal{R}^{a}_{\phantom{a}bcd}\mathbf{e}^{c}\wedge\mathbf{e}^{d} \tag{3}\] \[\mathbf{T}^{a} = \frac{1}{2}T^{a}_{\phantom{a}bc}\mathbf{e}^{b}\wedge\mathbf{e}^{c} \tag{4}\]
thereby preserving the bundle structure. Integrability of the Cartan equations Eqs.(1) and (2) is insured by \(\mathbf{d}^{2}\boldsymbol{\omega}^{a}_{\phantom{a}b}\equiv 0\) and \(\mathbf{d}^{2}\mathbf{e}^{a}\equiv 0\), which require the Bianchi identities,
\[\boldsymbol{\mathcal{D}}\mathbf{T}^{a} = \mathbf{e}^{b}\wedge\boldsymbol{\mathcal{R}}^{a}_{\phantom{a}b} \tag{5}\] \[\boldsymbol{\mathcal{D}}\boldsymbol{\mathcal{R}}^{a}_{\phantom{a}b} = 0 \tag{6}\]
where the covariant exterior derivatives are given by
\[{\cal D}{\cal R}^{a}{}_{b} = {\bf d}{\cal R}^{a}{}_{b}+{\cal R}^{c}{}_{b}\wedge{\mathbf{ \omega}}^{a}{}_{c}-{\mathbf{\omega}}^{c}{}_{b}\wedge{\cal R}^{a}{}_{c}\] \[{\cal D}{\bf T}^{a} = {\bf d}{\bf T}^{a}+{\bf T}^{b}\wedge{\mathbf{\omega}}^{a}{ }_{b}\]
When the connection is assumed to be compatible with the metric, Eqs.(1)-(6) describe _Riemann-Cartan geometry_ in the Cartan formalism. Note that the _Cartan-Riemann_ curvature, \({\cal R}^{a}{}_{b}\), differs from the Riemann curvature \({\bf R}^{a}{}_{b}\) by terms dependent on the torsion. When the torsion vanishes, \({\bf T}^{a}=0\), the Riemann-Cartan curvature \({\cal R}^{a}{}_{b}\) reduces to the Riemann curvature \({\bf R}^{a}{}_{b}\) and Eqs.(1) and (2) exactly reproduce the expressions for the connection and curvature of a general Riemannian geometry. At the same time, Eqs.(5) and (6) reduce to the usual first and second Bianchi identities.
The orthonormal frame fields \({\bf e}^{a}\) satisfy
\[\left\langle{\bf e}^{a},{\bf e}^{b}\right\rangle=\eta^{ab}\]
In ECSK theory, the connection is assumed compatible with the Lorentz (or \(SO\left(p,q\right)\), \(Spin\left(p,q\right)\)) metric \(\eta^{ab}\). This implies antisymmetry of the spin connection.
\[0 = {\cal D}\eta_{ab}\] \[= {\bf d}\eta_{ab}-\eta_{cb}{\mathbf{\omega}}^{c}{}_{a}- \eta_{ac}{\mathbf{\omega}}^{c}{}_{b}\] \[= -\left({\mathbf{\omega}}_{ba}+{\mathbf{\omega}} _{ab}\right)\]
Antisymmetry together with Eq.(2) fully determines the spin connection up to local Lorentz transformations.
These results are geometric; a physical model follows when we posit an action functional. The action may depend on the bundle tensors \({\bf e}^{b}\), \({\bf T}^{a}\), \({\mathbf{\cal R}}^{a}{}_{b}\) and the invariant tensors \(\eta_{ab}\) and \(e_{ab...d}\). To this we may add action functionals built from any field representations of the fiber symmetry group (Lorentz, \(SO\left(p,q\right),Spin\left(p,q\right)\))-scalars, spinors, vector fields, etc.
The relation between the Riemann-Cartan curvature \({\cal R}^{a}{}_{b}\) and the Riemann curvature \({\bf R}^{a}{}_{b}\) is developed below.
From the known consistency of Riemannian geometry, we may set \({\bf T}^{a}=0\) in the Cartan equations of Riemann-Cartan geometry. However, this does not mean that a Poincare theory of gravity following from an action based on Poincare symmetry leads to the same restriction. Vanishing torsion must also be a satisfactory solution to the field equations, including sources.
We continue to develop geometric properties in the remainder of this Section. We first solve for the spin connection in the presence of torsion. This allows us to express the Riemann-Cartan curvature in terms of the torsion and Riemann curvature. For use in some subsequent calculations we also find these results in a coordinate basis. We conclude the Section with the decomposition of the torsion into invariant subspaces before moving on to the ECSK action in Section 3.
### Solving for the connection
The structure equations, Eqs.(1) and (2), allow us to derive explicit forms for the connection and curvature. Starting from the Cartan structure equation, Eq.(2), write the spin connection as the sum of two terms
\[{\mathbf{\omega}}^{a}{}_{b}={\mathbf{\alpha}}^{a}{}_{b}+{ \mathbf{\beta}}^{a}{}_{b}\]
where \({\mathbf{\alpha}}^{a}{}_{b}\) is defined to be the torsion-free connection, \({\bf d}{\bf e}^{a}={\bf e}^{b}\wedge{\mathbf{\alpha}}^{a}{}_{b}\). Then \({\mathbf{\beta}}^{a}{}_{b}\) must satisfy
\[0 = {\bf e}^{b}\wedge{\mathbf{\beta}}^{a}{}_{b}+{\bf T}^{a} \tag{7}\]
To solve this, the 1-form \({\mathbf{\beta}}_{ab}\) must be linear in the torsion and antisymmetric. These conditions dictate the ansatz
\[{\mathbf{\beta}}_{ab} = a{\bf e}^{c}T_{cab}+b{\bf e}^{c}\left(T_{acb}-T_{bca}\right)\]
for some constants \(a,b\). Substitution into Eq.(7) quickly leads to \(a=b=\frac{1}{2}\), and the spin connection is
\[{\mathbf{\omega}}^{a}{}_{b} = {\mathbf{\alpha}}^{a}{}_{b}+\frac{1}{2}\left(T_{c}{}^{a}{}_ {b}+T^{a}{}_{cb}-T_{bc}{}^{a}\right){\bf e}^{c} \tag{8}\] \[= {\mathbf{\alpha}}^{a}{}_{b}+{\bf C}^{a}{}_{b}\]
where \({\bf C}^{a}{}_{b}\) is the _contorsion_,
\[{\bf C}^{a}{}_{b}=\frac{1}{2}\left(T_{c}{}^{a}{}_{b}+T^{a}{}_{cb}-T_{bc}{}^{a }\right){\bf e}^{c} \tag{9}\]
The decomposition of the connection is unique. Local Lorentz transformations transform \({\mathbf{\alpha}}^{a}{}_{b}\) inhomogeneously in the familiar way while torsion and contorsion are tensors. The form of contorsion (9) in terms of torsion is unique and invertible.
We may recover the torsion by wedging and contracting with \({\bf e}^{b}\).
\[{\bf C}^{a}{}_{b}\wedge{\bf e}^{b} = {\bf T}^{a}\]
Conversely, we can write the contorsion in terms of the torsion 2-form. First, write the contorsion as
\[{\bf C}_{ab} = \left(\frac{3}{2}T_{[abc]}+T_{bac}-T_{abc}\right){\bf e}^{c}\]
Now convert the 2-form \({\bf T}^{b}\) and the 3-form \({\bf e}^{c}\wedge{\bf T}_{c}\) to 1-forms \({}^{*}\left({\bf e}^{a}\wedge{}^{*}{\bf T}^{b}\right)\) and \({}^{*}{\bf e}^{a}\wedge{\bf e}^{b}\wedge{}^{*}\left({\bf e}^{c}\wedge{\bf T}_{c}\right)\) respectively, leading to the somewhat daunting form
\[{\bf C}^{ab} = (-1)^{p}\ {}^{*}\left({\bf e}^{a}\wedge{}^{*}{\bf T}^{b}-{\bf e}^{b} \wedge{}^{*}{\bf T}^{a}-\frac{1}{2}{\bf e}^{a}\wedge{\bf e}^{b}\wedge{}^{*} \left({\bf e}^{c}\wedge{\bf T}_{c}\right)\right)\]
Clearly, for some calculations, the component notation is simpler.
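These component relations are easy to verify numerically; the following sketch (assuming a Euclidean frame metric so that upper and lower frame indices may be identified) checks that the contorsion of Eq. (9) is antisymmetric, recovers the torsion, and agrees with the alternative component form quoted above:

```python
# Numerical check of the contorsion/torsion relations above, assuming a
# Euclidean frame metric so upper and lower frame indices can be identified.
import numpy as np

n = 4
rng = np.random.default_rng(0)
T = rng.standard_normal((n, n, n))
T = T - T.transpose(0, 2, 1)          # torsion components T_{abc}, antisymmetric in b, c

# Contorsion from Eq. (9): C_{ab|c} = (T_{cab} + T_{acb} - T_{bca}) / 2, with c the form index
C = 0.5 * (np.einsum('cab->abc', T) + np.einsum('acb->abc', T) - np.einsum('bca->abc', T))

assert np.allclose(C, -C.transpose(1, 0, 2))        # antisymmetric in its first two indices
assert np.allclose(C.transpose(0, 2, 1) - C, T)     # wedging with e^b recovers the torsion

# Alternative component form: C_{abc} = (3/2) T_{[abc]} + T_{bac} - T_{abc}
T_anti = (T + np.einsum('abc->bca', T) + np.einsum('abc->cab', T)
          - np.einsum('abc->acb', T) - np.einsum('abc->bac', T)
          - np.einsum('abc->cba', T)) / 6
C_alt = 1.5 * T_anti + np.einsum('bac->abc', T) - T
assert np.allclose(C, C_alt)
```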
The torsion now enters the curvature through the connection. Expanding the Cartan-Riemann curvature of Eq.(1) using Eq.(8) then identifying the \(\alpha\)-covariant derivative, \({\bf DC}^{a}{}_{b}={\bf d}{\bf C}^{a}{}_{b}-{\bf C}^{c}{}_{b}\wedge{\mathbf{\alpha}}^{a}{}_{c}-{\mathbf{\alpha}}^{c}{}_{b}\wedge{\bf C }^{a}{}_{c}\) leads to
\[{\mathbf{\cal R}}^{a}{}_{b} = {\bf R}^{a}{}_{b}+{\bf DC}^{a}{}_{b}-{\bf C}^{c}{}_{b}\wedge{\bf C }^{a}{}_{c} \tag{10}\]
This is the Riemann-Cartan curvature expressed in terms of the Riemann curvature and the contorsion. Note that the \(\alpha\)-covariant derivative is compatible with the solder form, \({\bf De}^{a}={\bf de}^{a}-{\bf e}^{b}\wedge{\mathbf{\alpha}}^{a}{}_{b}=0\).
Given Eq.(10) for the Cartan-Riemann curvature in terms of the Riemannian curvature and connection, we may also expand the generalized Bianchi identities of Eqs.(5) and (6). The first Bianchi becomes
\[{\bf d}{\bf T}^{a}+{\bf T}^{b}\wedge({\mathbf{\alpha}}^{a}{}_{b}+{ \bf C}^{a}{}_{b}) = {\bf e}^{b}\wedge{\bf R}^{a}{}_{b}+{\bf e}^{b}\wedge{\bf DC}^{a}{} _{b}-{\bf e}^{b}\wedge{\bf C}^{c}{}_{b}\wedge{\bf C}^{a}{}_{c}\]
Using \({\bf D}{\bf e}^{a}=0\) and replacing \({\bf C}^{c}{}_{b}\wedge{\bf e}^{b}={\bf T}^{c}\) leads to the Riemannian Bianchi \({\bf e}^{b}\wedge{\bf R}^{a}{}_{b}=0\).
Similarly, expanding the derivative in the second Bianchi gives
\[0={\bf D}{\mathbf{\cal R}}^{a}{}_{b}+{\mathbf{\cal R}}^{c}{ }_{b}\wedge{\bf C}^{a}{}_{c}-{\bf C}^{c}{}_{b}\wedge{\mathbf{\cal R}} ^{a}{}_{c}\]
Replacing \({\mathbf{\cal R}}^{a}{}_{b}={\bf R}^{a}{}_{b}+{\bf DC}^{a}{}_{b}-{\bf C}^{c}{}_{b}\wedge{\bf C}^{a}{}_{c}\) throughout, then using \({\bf C}^{c}{}_{b}\wedge{\bf e}^{b}={\bf T}^{c}\) and \({\bf D}^{2}{\bf C}^{a}{}_{b}={\bf C}^{c}{}_{b}\wedge{\bf R}^{a}{}_{c}-{\bf C}^{a}{}_{c}\wedge{\bf R}^{c}{}_{b}\) leads to several cancellations and finally
\[{\bf D}{\bf R}^{a}{}_{b} = 0\]
so that the Cartan-Riemann Bianchi identities hold if and only if the Riemann Bianchi identities hold.
The first Bianchi identity relates the triply antisymmetric part of the curvature tensor \({\mathbf{\cal R}}^{a}{}_{b}\) to the exterior derivative of the torsion. Expanding both sides of Eq.(5), antisymmetrizing, then stripping the basis,
\[{\cal R}^{a}{}_{bcd}+{\cal R}^{a}{}_{cdb}+{\cal R}^{a}{}_{dbc}={\cal D}_{d}T^{a }{}_{bc}+{\cal D}_{b}T^{a}{}_{cd}+{\cal D}_{c}T^{a}{}_{db}\]
Contracting \(ad\) and using \({\cal R}^{c}_{\ \ cab}=0\) (by the structure equation Eq.(1) and the antisymmetry of the spin connection) we have
\[{\cal R}_{cb}-{\cal R}_{bc}={\cal D}_{a}{\cal T}^{a}_{\ \ bc}\]
where we define
\[{\cal T}^{a}_{\ \ bc}=T^{a}_{\ \ bc}-\delta^{a}_{b}T^{e}_{\ \ ec}+\delta^{a}_{c}T^{e }_{\ \ eb}\]
For all \(n>2\) this is invertible, \(T^{a}_{\ \ bc}={\cal T}^{a}_{\ \ bc}+\frac{1}{n-2}\left(\delta^{a}_{c}{\cal T}^{e }_{\ \ eb}-\delta^{a}_{b}{\cal T}^{e}_{\ \ ec}\right)\). Then the antisymmetric part of the Ricci-Cartan tensor is simply minus the divergence
\[{\cal R}_{ab}-{\cal R}_{ba} = -{\cal D}_{c}{\cal T}^{c}_{\ \ ab} \tag{11}\]
Therefore the Ricci tensor of the Cartan-Riemann curvature acquires an antisymmetric part dependent on derivatives of the torsion.
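The inversion quoted above is likewise easy to confirm numerically (a Euclidean frame metric is assumed purely for the index bookkeeping):

```python
# Check of the stated inversion: build cal-T from a random torsion and recover T.
import numpy as np

n = 4
rng = np.random.default_rng(1)
T = rng.standard_normal((n, n, n))
T = T - T.transpose(0, 2, 1)                      # random torsion, antisymmetric in last two slots

tr = np.einsum('eec->c', T)                       # T^e_{ec}
d = np.eye(n)
calT = T - np.einsum('ab,c->abc', d, tr) + np.einsum('ac,b->abc', d, tr)

caltr = np.einsum('eec->c', calT)                 # equals (2 - n) * tr
T_back = calT + (np.einsum('ac,b->abc', d, caltr) - np.einsum('ab,c->abc', d, caltr)) / (n - 2)
assert np.allclose(T_back, T)
```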
Because the curvature is a 2-form, and the spin connection is antisymmetric, the curvature satisfies \({\cal R}_{abcd}={\cal R}_{ab[cd]}={\cal R}_{[ab]cd}\) and there is still only one independent contraction.
#### 2.2.1 Coordinate expressions
The solder form equation (2) may be solved algebraically for either the spin connection or the general linear connection. Here we solve for the general linear case. The combined components of the vanishing 2-form \({\bf de}^{a}-{\bf e}^{b}\wedge{\mathbf{\omega}^{a}}_{\ b}-{\bf T}^{a}=0\) must be symmetric
\[\partial_{\mu}e_{\nu}^{\ \ a}+e_{\nu}^{\ \ b}\omega^{a}_{\ \ b\mu}+\frac{1}{2}T^{a}_{\ \ \nu\mu}= \Lambda^{a}_{\ \ \nu\mu} \tag{12}\]
where lower case Latin indices refer to the pseudo-orthonormal frames \({\bf e}^{a}\) while lower case Greek indices refer to a coordinate basis, \({\bf d}x^{\mu}\). We recognize Eq.(12) as a vanishing covariant derivative
\[{\cal D}_{\mu}e_{\nu}^{\ \ a} = \partial_{\mu}e_{\nu}^{\ \ a}+e_{\nu}^{\ \ b}\omega^{a}_{\ \ b\mu}-e_{\sigma}^{\ \ a}\Sigma^{\sigma}_{\ \ \nu\mu}=0\]
where \(\Sigma^{\beta}_{\ \ \nu\mu}=\Lambda^{\beta}_{\ \nu\mu}-\frac{1}{2}T^{\beta}_{\ \nu\mu}\). Contracting Eq.(12) with \(\eta_{ac}e_{\beta}^{\ \ c}\) we symmetrize on \(\beta\nu\). The spin connection terms cancel and the derivatives combine into a single covariant derivative of the metric.
\[0=\partial_{\mu}g_{\beta\nu}-g_{\beta\sigma}\Sigma^{\sigma}_{\ \ \nu\mu}-g_{\nu \sigma}\Sigma^{\sigma}_{\ \ \beta\mu}={\cal D}_{\mu}g_{\beta\nu}\]
We solve this familiar form of metric compatibility in the usual way by cycling indices then adding two permutations and subtracting the third, but using \(\Sigma_{\beta\nu\mu}-\Sigma_{\beta\mu\nu}=T_{\beta\mu\nu}\) to rearrange index order. Restoring the usual index positions the result is
\[\Sigma^{\nu}_{\ \beta\mu}=\Gamma^{\nu}_{\ \mu\beta}-C^{\nu}_{\ \beta\mu}\]
where \(\Gamma^{\alpha}_{\ \mu\nu}\) is the Christoffel connection and we recognize the contorsion tensor,
\[C_{\beta\nu\mu}=-C_{\nu\beta\mu}=\frac{1}{2}\left(T_{\beta\nu\mu}+T_{\nu\mu \beta}-T_{\mu\beta\nu}\right)\]
The vanishing covariant derivative of the vielbein takes the form
\[0={\cal D}_{\mu}e_{\nu}^{\ \ a} = \partial_{\mu}e_{\nu}^{\ \ a}+e_{\nu}^{\ \ b}\omega^{a}_{\ \ b\mu}-e_{\sigma}^{\ \ a}\Gamma^{\sigma}_{\ \ \nu\mu}+e_{\sigma}^{\ \ a}C^{\sigma}_{\ \ \nu\mu}\]
### Decomposition of the torsion
We identify well-known invariant pieces of the torsion. The torsion includes a totally antisymmetric piece
\[{\bf T} \equiv \frac{1}{3}{\bf e}^{a}\wedge{\bf T}_{a}=\frac{1}{3!}T_{abc}{\bf e }^{a}\wedge{\bf e}^{b}\wedge{\bf e}^{c} \tag{13}\]
with \(\frac{1}{3!}n\left(n-1\right)\left(n-2\right)\) degrees of freedom. Note that in 4 or 5 dimensions the dual of \(\mathbf{T}\) is a lower rank object.
\[{}^{*}\mathbf{T} = \frac{1}{3!}T^{abc}e_{abcd}\mathbf{e}^{d}\] \[{}^{*}\mathbf{T} = \frac{1}{3!2!}T^{abc}e_{abcde}\mathbf{e}^{d}\wedge\mathbf{e}^{e}\]
in particular giving the well-known axial vector in 4-dimensions. There is also a single vectorial contraction.
\[T^{b}{}_{ba}\mathbf{e}^{a} = \left(-1\right)^{p}\,{}^{*}\left(\mathbf{e}^{b}\wedge\,{}^{*} \mathbf{T}_{b}\right) \tag{14}\]
Writing Eqs.(13) and (14) as 2-forms
\[\frac{1}{2}\eta^{ab}T_{[bcd]}\mathbf{e}^{c}\wedge\mathbf{e}^{d} = \left(-1\right)^{q}3!\,{}^{*}\left(\mathbf{e}^{a}\wedge\,{}^{*} \mathbf{T}\right)\] \[\mathbf{e}^{b}\wedge\left(T^{c}{}_{ca}\mathbf{e}^{a}\right) = \left(-1\right)^{p}\mathbf{e}^{b}\wedge\,{}^{*}\left(\mathbf{e} ^{c}\wedge\,{}^{*}\mathbf{T}_{c}\right)\]
we may decompose the full torsion in \(n=p+q\) dimensions as
\[\mathbf{T}^{a} = \boldsymbol{\tau}^{a}+\frac{1}{n-1}\left(-1\right)^{p}\mathbf{e}^{a}\wedge\,{}^{*}\left(\mathbf{e}^{c}\wedge\,{}^{*}\mathbf{T}_{c}\right)+\left(-1\right)^{q}3!\,{}^{*}\left(\mathbf{e}^{a}\wedge\,{}^{*}\mathbf{T}\right) \tag{15}\]
where \(\boldsymbol{\tau}^{a}\) is a traceless, mixed symmetry 2-form with \(N=\frac{n}{3}\left(n^{2}-4\right)\) degrees of freedom. This remaining piece may be further decomposed into symmetric \(\tau_{(ab)c}\) and antisymmetric \(\tau_{[ab]c}\) parts.
In components the decomposition is simply
\[T^{a}{}_{bc} = \tau^{a}{}_{bc}+\frac{1}{n-1}\left(\delta_{b}^{a}T^{e}{}_{ec}- \delta_{c}^{a}T^{e}{}_{eb}\right)+\eta^{ae}T_{[ebc]} \tag{16}\]
While the vector and pseudovector each have 4 degrees of freedom in 4-dimensions, the situation is very different in higher dimensions. In general the torsion has a total of \(\frac{n^{2}\left(n-1\right)}{2}\) degrees of freedom. Therefore, while the trace contains only \(n\) degrees of freedom for a fraction \(\frac{2}{n\left(n-1\right)}\sim\frac{1}{n^{2}}\) of the total, the antisymmetric part includes \(\frac{1}{3!}n\left(n-1\right)\left(n-2\right)\) or roughly
\[\frac{n-2}{3n}\sim\frac{1}{3}\]
The residual tensor \(\boldsymbol{\tau}^{a}\) includes the remaining \(\frac{2\left(n^{2}-4\right)}{3n\left(n-1\right)}\sim\frac{2}{3}\). Thus, the antisymmetric part is a major contributor in higher dimensions.
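For concreteness, with total \(\frac{1}{2}n^{2}\left(n-1\right)\), trace \(n\), totally antisymmetric \(\frac{1}{3!}n\left(n-1\right)\left(n-2\right)\) and residual \(\frac{1}{3}n\left(n^{2}-4\right)\), the counts in four and eleven dimensions are

\[n=4:\quad 24=4+4+16,\qquad n=11:\quad 605=11+165+429\]

so already in eleven dimensions the trace is a small fraction of the torsion while the antisymmetric and residual pieces dominate.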
## 3 Vacuum ECSK theory
The physical content of the Einstein-Cartan-Sciama-Kibble theory enters through use of the Einstein-Hilbert action in Riemann-Cartan geometry. The physical content also depends on making one of several possible choices of independent variables: the metric \(g_{\alpha\beta}\) alone, the metric and connection \(\left(g_{\alpha\beta},\Gamma^{\mu}{}_{\alpha\beta}\right)\), the solder form and spin connection \(\left(\mathbf{e}^{a},\boldsymbol{\omega}^{a}{}_{b}\right)\) or the solder form and contorsion \(\left(\mathbf{e}^{a},\mathbf{C}^{a}{}_{b}\right)\). We carry out two forms of the variation, \(\left(\mathbf{e}^{a},\boldsymbol{\omega}^{a}{}_{b}\right)\) and \(\left(\mathbf{e}^{a},\mathbf{C}^{a}{}_{b}\right)\).
Two differences from general relativity arise with these choices. First, the asymmetry of the solder form means that the Einstein tensor and energy tensor acquire antisymmetric parts [25]. We show in general in Section (3) and explicitly for the Dirac field in Subsection (5.3.3), that the antisymmetric parts vanish as a consequence of Lorentz invariance. The second issue is that varying the spin connection in a Riemann-Cartan geometry gives nonvanishing sources for torsion. We explore the nature of these sources for a variety of types of field.
For the gravity action, we restrict attention to the Einstein-Hilbert form but with the Riemann-Cartan scalar curvature. Alternatives with propagating torsion are considered in [19, 21, 20, 22], and with additional modification in [26].
### Gravity action
The Einstein-Hilbert form of the action with the Riemann-Cartan curvature scalar, in \(n\)-dimensions is
\[S_{ECSK}\left[\mathbf{e}^{a},\boldsymbol{\omega}^{a}{}_{b}\right]=\frac{\kappa}{ 2\left(n-2\right)!}\int\boldsymbol{\mathcal{R}}^{ab}\wedge\mathbf{e}^{c}\wedge \ldots\wedge\mathbf{e}^{d}e_{abc\ldots d} \tag{17}\]
This action, plus arbitrary source terms, is our definition of the ECSK theory of gravity.
We define a volume form as the Hodge dual of unity, \(\boldsymbol{\Phi}={}^{*}1=\frac{1}{n!}e_{ab\ldots c}\mathbf{e}^{a}\wedge \mathbf{e}^{b}\wedge\ldots\wedge\mathbf{e}^{c}\) and therefore, \({}^{*}\boldsymbol{\Phi}=(-1)^{q}\) in signature \((p,q)\). It follows that
\[\underbrace{\mathbf{e}^{a}\wedge\mathbf{e}^{b}\wedge\ldots\wedge \mathbf{e}^{c}}_{n\;terms} = (-1)^{q}\,e^{ab\ldots c}\boldsymbol{\Phi}\]
where \(e_{ab\ldots c}\) is the Levi-Civita tensor. Let \(\varepsilon_{ab\ldots c}\) be the totally antisymmetric symbol with \(\varepsilon_{12\ldots n}=1\) and \(e=\det\left(e_{\mu}^{\phantom{\mu}a}\right)=\sqrt{|g|}\), so that \(e_{12\ldots n}=e\varepsilon_{12\ldots n}\) and \(e^{12\ldots n}=(-1)^{q}\,\frac{1}{e}\varepsilon_{12\ldots n}\). Expanding the curvature 2-form,
\[\boldsymbol{\mathcal{R}}^{ab}\wedge\mathbf{e}^{c}\wedge\ldots \wedge\mathbf{e}^{d}e_{abc\ldots d} = \frac{1}{2}\mathcal{R}^{ab}{}_{ef}\,\mathbf{e}^{e}\wedge\mathbf{e}^{f}\wedge\mathbf{e}^{c}\wedge\ldots\wedge\mathbf{e}^{d}e_{abc\ldots d}\] \[= \frac{1}{2}\mathcal{R}^{ab}{}_{ef}\left(-1\right)^{q}e^{efc\ldots d}e_{abc\ldots d}\boldsymbol{\Phi}\] \[= (n-2)!\,\mathcal{R}^{ab}{}_{ab}\boldsymbol{\Phi}\]
shows the equivalence to the scalar curvature and we may write \(S\left[\mathbf{e}^{a},\boldsymbol{\omega}^{a}{}_{b}\right]=\frac{1}{2}\kappa \int\mathcal{R}\boldsymbol{\Phi}\).
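For orientation, in four dimensions Eq.(17) reduces to the familiar first-order form

\[S_{ECSK}\left[\mathbf{e}^{a},\boldsymbol{\omega}^{a}{}_{b}\right]=\frac{\kappa}{4}\int\boldsymbol{\mathcal{R}}^{ab}\wedge\mathbf{e}^{c}\wedge\mathbf{e}^{d}e_{abcd}\]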
We first vary the solder form and the spin connection. As noted above, some differences arise from the metric \(S\left[g\right]\) or metric/connection \(S\left[g,\Gamma\right]\) variations because the solder form is not symmetric.
#### 3.1.1 Two considerations
There are two subtle points regarding the independent variation of the solder form and connection.
First, we require the Gibbons-Hawking-York surface term [27, 28, 29, 30] because fixing both \(\delta\mathbf{e}^{a}=0\) and \(\delta\boldsymbol{\omega}^{a}{}_{b}=0\) overdetermines the solution in the bulk. This can be seen from the conditions for the initial value problem: specifying the metric and the intrinsic curvature of an initial Cauchy surface is enough to propagate a unique solution as the time evolution. It is straightforward to check that adding the Gibbons-Hawking-York surface term resolves the issue, while leaving the expected field equations in the bulk.
The second point is that the decomposition of the connection \(\boldsymbol{\omega}^{ab}=\boldsymbol{\alpha}^{ab}+\mathbf{C}^{ab}\) makes it possible to treat the action either as a functional \(S\left[\mathbf{e}^{a},\boldsymbol{\omega}^{a}{}_{b}\right]\) or as \(S\left[\mathbf{e}^{a},\mathbf{C}^{a}{}_{b}\right]\). In the latter case the remainder of the connection is taken as \(\boldsymbol{\alpha}^{ab}=\boldsymbol{\alpha}^{ab}\left(\mathbf{e}^{c}\right)\) where the form of \(\delta_{e}\boldsymbol{\alpha}^{ab}\left(\mathbf{e}^{c}\right)\) follows from the structure equation. Varying \(\mathbf{d}\mathbf{e}^{a}=\mathbf{e}^{b}\wedge\boldsymbol{\alpha}^{a}{}_{b}\) we find
\[\boldsymbol{\mathcal{D}}\left(\delta\mathbf{e}^{a}\right) = \mathbf{e}^{b}\wedge\delta\boldsymbol{\alpha}^{a}{}_{b}\]
Then expanding in components \(\delta\alpha^{a}{}_{bc}-\delta\alpha^{a}{}_{cb}=e_{c}{}^{\nu}\mathcal{D}_{b} \left(\delta e_{\nu}^{\phantom{\nu}a}\right)-e_{b}{}^{\nu}\mathcal{D}_{c} \left(\delta e_{\nu}^{\phantom{\nu}a}\right)\) and solving by cycling indices yields
\[\delta\boldsymbol{\alpha}^{a}{}_{b} = \frac{1}{2}\left(\delta^{a}_{d}\delta^{c}_{b}-\eta_{bd}\eta^{ac} \right)\left[D_{c}\left(\delta\mathbf{e}^{d}\right)-e_{c}{}^{\mu}\eta_{gh} \mathbf{e}^{g}D^{d}\left(\delta e_{\mu}^{\phantom{\mu}h}\right)-e_{c}{}^{ \alpha}\mathbf{D}\left(\delta e_{\alpha}^{\phantom{\alpha}d}\right)\right] \tag{18}\]
If the action includes no explicit torsion dependence, the linear relation between \(\boldsymbol{\omega}^{ab}\) and \(\mathbf{C}^{ab}\) means varying either gives the same result, but the solder form variations give different results for the energy tensor.
The conceptual difference between the variations is seen from the fiber bundle structure. While the first variation \(S\left[\mathbf{e}^{a},\boldsymbol{\omega}^{a}{}_{b}\right]\) embodies the Palatini principle fully, varying the Lorentz gauge symmetry gives a different combination of the field equations. In the second form of variation, \(S\left[\mathbf{e}^{a},\mathbf{C}^{a}{}_{b}\right]\), gauge transformations are all included in the solder form variation. The difference shows up physically in the source for the Einstein equation, producing the difference between the canonical energy tensor and the Belinfante-Rosenfeld energy tensor [31, 32]. We examine this in detail, carrying out both methods.
### Palatini variation
We vary \(\mathbf{e}^{a}\) and \(\boldsymbol{\omega}^{ab}\) independently. The connection variation of the gravity action is
\[\delta S_{ECSK}\left[\mathbf{e}^{a},\boldsymbol{\omega}^{a}{}_{b}\right] = \frac{\kappa}{2\left(n-2\right)!}\int\delta\boldsymbol{\mathcal{R }}^{ab}\wedge\mathbf{e}^{c}\wedge\ldots\wedge\mathbf{e}^{d}e_{abc\ldots d}+ \delta S_{GHY}\] \[= \frac{\kappa}{2\left(n-2\right)!}\int\boldsymbol{\mathcal{D}} \left(\delta\boldsymbol{\omega}^{ab}\right)\wedge\mathbf{e}^{c}\wedge\ldots \wedge\mathbf{e}^{d}e_{abc\ldots d}+\delta S_{GHY}\]
where \(\boldsymbol{\mathcal{D}}\left(\delta\boldsymbol{\omega}^{ab}\right)=\mathbf{d }\left(\delta\boldsymbol{\omega}^{ab}\right)-\left(\delta\boldsymbol{\omega}^ {eb}\right)\wedge\boldsymbol{\omega}^{a}{}_{e}-\left(\delta\boldsymbol{\omega }^{ae}\right)\wedge\boldsymbol{\omega}^{b}{}_{e}\). We integrate only the exterior derivative by parts, using Lorentz invariance of the Levi-Civita tensor to redistribute the spin connections.
As mentioned above, the normal derivative of the connection must be allowed to vary on the boundary, so the surface term does not vanish. This contribution is cancelled by including the Gibbons-Hawking-York surface term, \(\delta S_{GHY}\), which depends only on the induced metric and the extrinsic curvature of the boundary. Here we assume \(S_{GHY}\) is used and focus on the variation in the interior.
Disregarding surface terms the variation becomes
\[\delta\mathcal{S}_{ECSK}=I_{1}+I_{2} = \frac{\kappa}{2\left(n-2\right)!}\int\delta\boldsymbol{\omega}^{ ab}\wedge\left(\mathbf{d}\mathbf{e}^{c}\wedge\ldots\wedge\mathbf{e}^{d}+ \ldots+\left(-1\right)^{n-3}\mathbf{e}^{c}\wedge\ldots\wedge\mathbf{d}\mathbf{ e}^{d}\right)e_{abc\ldots d}\] \[-\frac{\kappa}{2\left(n-2\right)!}\int\left(\left(\delta \boldsymbol{\omega}^{eb}\right)\wedge\boldsymbol{\omega}^{a}{}_{e}+\left( \delta\boldsymbol{\omega}^{ae}\right)\wedge\boldsymbol{\omega}^{b}{}_{e} \right)\wedge\mathbf{e}^{c}\wedge\ldots\wedge\mathbf{e}^{d}e_{abc\ldots d}\]
Now use the invariance of \(e_{abc\ldots d}\) under infinitesimal \(SO\left(p,q\right)\) to write
\[0 = \boldsymbol{\omega}^{e}{}_{a}e_{ebc\ldots d}+\boldsymbol{\omega}^{e}{} _{b}e_{aec\ldots d}+\ldots+\boldsymbol{\omega}^{e}{}_{d}e_{abc\ldots e}\]
so that the second integral may be rearranged to give
\[I_{2} = -\frac{\kappa}{2\left(n-2\right)!}\int\delta\boldsymbol{\omega}^ {ab}\wedge\left(\mathbf{e}^{c}\wedge\boldsymbol{\omega}^{e}{}_{c}\wedge \mathbf{e}^{f}\ldots\wedge\mathbf{e}^{d}e_{abe\ldots d}-\ldots+\left(-1\right) ^{n-3}\mathbf{e}^{c}\ldots\wedge\mathbf{e}^{f}\wedge\mathbf{e}^{d}\wedge \boldsymbol{\omega}^{e}{}_{d}e_{abc\ldots fe}\right)\]
The \(\mathbf{d}\mathbf{e}^{a}\) and \(\mathbf{e}^{c}\wedge\boldsymbol{\omega}^{a}{}_{c}\) terms recombine as \(n-2\) factors of the torsion, \(\mathbf{T}^{a}=\mathbf{d}\mathbf{e}^{a}-\mathbf{e}^{c}\wedge\boldsymbol{\omega }^{a}{}_{c}\) so
\[\delta\mathcal{S}_{ECSK} = \frac{\kappa}{2\left(n-3\right)!}\int\delta\boldsymbol{\omega}^ {ab}\wedge\mathbf{T}^{c}\wedge\mathbf{e}^{d}\ldots\wedge\mathbf{e}^{e}e_{abcd \ldots e} \tag{19}\]
Setting \(\delta\boldsymbol{\omega}^{ab}=\delta A^{ab}{}_{c}\mathbf{e}^{c}\) and resolving the product of solder forms into a volume element, the vacuum field equation is the vanishing of
\[\frac{\kappa}{2\left(n-3\right)!}\mathbf{e}^{c}\wedge\mathbf{T}^{d}\wedge \mathbf{e}^{e}\wedge\ldots\wedge\mathbf{e}^{f}e_{abde\ldots f} = \frac{\kappa}{2}\left(T^{c}{}_{ab}+\delta^{c}_{a}T^{d}{}_{bd}- \delta^{c}_{b}T^{d}{}_{ad}\right)\boldsymbol{\Phi}=\frac{\kappa}{2}\mathscr{T }^{c}{}_{ab}\boldsymbol{\Phi}\]
Notice that \(\mathscr{T}^{c}{}_{ab}\) is the same combination found for the Bianchi identity. Here it arises from the connection variation.
Varying the solder form now involves only the explicit solder forms. The result is the usual Einstein tensor, but with the Riemann-Cartan curvature.
\[\frac{\kappa}{2\left(n-2\right)!}\delta_{e}\int\boldsymbol{\mathcal{R }}^{ab}\wedge\mathbf{e}^{c}\wedge\ldots\wedge\mathbf{e}^{d}e_{abc\ldots d} = \frac{\kappa}{2\left(n-3\right)!}\int\delta A^{a}{}_{b}\mathbf{e}^ {b}\wedge\boldsymbol{\mathcal{R}}^{cd}\wedge\mathbf{e}^{e}\wedge\ldots\wedge \mathbf{e}^{f}e_{acde\ldots f}\] \[= -\frac{\kappa}{2}\int\delta A^{a}{}_{b}\left(\mathcal{R}^{cb}{}_{ ca}+\mathcal{R}^{bc}{}_{ac}-\delta^{b}_{a}\mathcal{R}^{cd}{}_{cd}\right) \boldsymbol{\Phi}\]
taking care to keep indices in the correct order. Since the first and second pairs of \(\mathcal{R}^{ab}{}_{cd}\) retain their antisymmetry, \(\mathcal{R}^{cb}{}_{ca}=\mathcal{R}^{bc}{}_{ac}\) the vacuum field equations are
\[-\kappa\left(\mathcal{R}_{ab}-\frac{1}{2}\eta_{ab}\mathcal{R}\right) = 0 \tag{20}\] \[\frac{\kappa}{2}\mathscr{T}^{c}{}_{ab} = 0 \tag{21}\]
For all \(n>2\) Eq.(21) immediately leads to vanishing torsion and therefore vanishing contorsion, \({\bf C}^{a}_{\ \ c}=0\). Using Eq.(10) to separate the usual Einstein tensor from the contorsion contributions
\[{\mathbf{{\cal R}}}^{a}_{\ \ b} = {\bf R}^{a}_{\ \ b}+{\bf D}{\bf C}^{a}_{\ \ b}-{\bf C}^{c}_{\ \ b}\wedge{\bf C}^{a}_{\ \ c}\]
and setting \({\bf C}^{a}_{\ \ c}=0\) reduces \({\mathbf{{\cal R}}}^{a}_{\ \ b}\) to the Riemannian curvature, so Eq.(20) becomes the usual vacuum Einstein equation of Riemannian geometry, \(R_{ab}-\frac{1}{2}\eta_{ab}R=0\). Therefore vacuum Poincare gauge theory reproduces vacuum general relativity. The theories typically differ when matter fields other than Yang-Mills or Klein-Gordon type are included.
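To make the step from Eq.(21) to vanishing torsion explicit, contract \(\mathscr{T}^{c}{}_{ab}=T^{c}{}_{ab}+\delta^{c}_{a}T^{d}{}_{bd}-\delta^{c}_{b}T^{d}{}_{ad}\) on \(c\) and \(a\). Since \(T^{a}{}_{ab}=-T^{d}{}_{bd}\),

\[\mathscr{T}^{a}{}_{ab}=\left(n-2\right)T^{d}{}_{bd}\]

so for \(n>2\) the trace of the torsion vanishes, and Eq.(21) then gives \(T^{c}{}_{ab}=0\) directly.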
Notice a crucial difference between the solder form variation and the metric variation. The metric variation takes the form
\[\delta S = \int\delta g^{\alpha\beta}\left({\cal R}_{\alpha\beta}- \frac{1}{2}g_{\alpha\beta}{\cal R}\right)\sqrt{|g|}d^{n}x\]
so the symmetry of the metric is induced upon the Einstein tensor to give
\[{\cal G}_{(\alpha\beta)}={\cal R}_{(\alpha\beta)}-\frac{1}{2}g_{ \alpha\beta}{\cal R}=0\]
In four dimensions this gives ten equations that determine the ten components of the metric. By contrast, the coefficient \(\delta A^{a}_{\ \ b}\) of the solder form variation \(\delta_{e}{\bf e}^{a}=\delta A^{a}_{\ \ b}{\bf e}^{b}\) is asymmetric. This results in the vanishing of the entire asymmetric Einstein tensor
\[{\cal G}_{\alpha\beta} = 0\]
Accordingly, this determines the sixteen components of the solder form. While an additive term [31, 32] is known to symmetrize the energy tensor, thereby forcing the antisymmetric part of the Einstein tensor to zero, we retain asymmetry on both sides of the gravity equation and find a systematic approach to the antisymmetric part. With the alternate form of the variation, variation of the Lorentz gauge affects only the Einstein equation, accounting for the different number of degrees of freedom due to differing symmetry.
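A count in four dimensions makes the bookkeeping explicit: the sixteen components of \(e_{\mu}^{\ \ a}\) split as

\[16=\underbrace{10}_{g_{\mu\nu}}+\underbrace{6}_{SO(3,1)}\]

with the ten symmetric equations \({\cal G}_{(\alpha\beta)}=0\) playing the role of the metric equations and the six antisymmetric equations \({\cal G}_{[\alpha\beta]}=0\) matching the local Lorentz freedom.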
### Fiber preserving variation
While the \(({\bf e}^{a},{\mathbf{{\omega}}}^{a}_{\ \ b})\) form of variation gets us quickly to general relativity, a significant issue arises.
The variables \(({\bf e}^{a},{\mathbf{{\omega}}}^{a}_{\ \ b})\) do not transform independently under the fiber symmetry. Specifically, under Lorentz transformation \(\Lambda^{a}_{\ \ b}\) the solder form transforms as a tensor, \(\tilde{\bf e}^{a}=\Lambda^{a}_{\ \ b}{\bf e}^{b}\), but the spin connection _also_ transforms as a local Lorentz connection, \(\tilde{\mathbf{{\omega}}}=\Lambda\mathbf{{\omega}}\Lambda^{-1}-d\Lambda\,\Lambda^{-1}\). This means that while the field equations arising from separate \({\bf e}^{a}\) and \({\mathbf{{\omega}}}^{a}_{\ \ b}\) variations are correct, they will be shuffled by the fiber symmetry. This is most evident with matter sources, where it leads to the difference between the asymmetric canonical energy tensor and the symmetric Belinfante-Rosenfeld energy tensor. We show this explicitly with our discussion of sources in the next Section.
We now consider the variation of two Lorentz tensors, \(({\bf e}^{a},{\bf C}^{a}_{\ \ b})\). Writing \({\mathbf{{\alpha}}}^{a}_{\ \ b}={\mathbf{{\alpha}}}^{a}_{\ \ b}\ (e)\) places the effect of a lifting in the bundle entirely within the solder form variation. Explicitly separating the compatible and torsion pieces leads in a straightforward way to the Belinfante-Rosenfeld energy tensor.
Before we begin, note that when we separate the contorsion parts of the curvature
\[\delta S_{ECSK} = \frac{\kappa}{2\,(n-2)!}\delta_{e,\alpha(e)}\int{\mbox{\boldmath ${\cal R}$}}^{ab}\wedge{\bf e}^{c}\wedge\ldots\wedge{\bf e}^{d}e_{abc\ldots d} +\delta S_{GHY}\] \[= \frac{\kappa}{2\,(n-2)!}\delta_{e,\alpha(e)}\int\left({\bf R}^{ab} +{\bf D}{\bf C}^{ab}-{\bf C}^{eb}\wedge{\bf C}^{a}_{\ \ e}\right)\wedge{\bf e}^{c}\wedge\ldots\wedge{\bf e}^{d}e_{abc\ldots d}+ \delta S_{GHY}\]
it is tempting to integrate the derivative term by parts and use \({\bf D}{\bf e}^{c}=0\) to set it to zero
\[\int{\bf D}{\bf C}^{ab}\wedge{\bf e}^{c}\wedge\ldots\wedge{\bf e }^{d}e_{abc\ldots d}=\int{\bf C}^{ab}\wedge{\bf D}{\bf e}^{c}\wedge{\bf e}^{d}\wedge\ldots\wedge{\bf e}^{e}e_{abcd\ldots e}=0\]
However, this is inconsistent with the solder form variation
\[(n-2)\int\mathbf{D}\mathbf{C}^{ab}\wedge\delta\mathbf{e}^{c}\wedge\ldots\wedge \mathbf{e}^{d}e_{abc\ldots d}=\int\mathbf{C}^{ab}\wedge\mathbf{D}\left(\delta \mathbf{e}^{c}\right)\wedge\mathbf{e}^{d}\wedge\ldots\wedge\mathbf{e}^{e}e_{abcd \ldots e}\neq 0\]
For this reason it is important to vary the action before integrating.
#### 3.3.1 Varying the contorsion
The contorsion variation is straightforward. After variation of the contorsion the compatible derivative term is integrated by parts, where \(\mathbf{D}\mathbf{e}^{c}\) vanishes.
\[\frac{\kappa}{2\left(n-2\right)!}\int\mathbf{D}\left(\delta\mathbf{C}^{ab} \right)\wedge\mathbf{e}^{c}\wedge\ldots\wedge\mathbf{e}^{d}e_{abc\ldots d}= \frac{\kappa}{2\left(n-3\right)!}\int\delta\mathbf{C}^{ab}\wedge\mathbf{D}\mathbf{e}^{c}\wedge\mathbf{e}^{d}\wedge\ldots\wedge\mathbf{e}^{e}e_{abcd\ldots e}=0\]
For the remaining contorsion term
\[\delta S_{C} = -\frac{\kappa}{2\left(n-2\right)!}\int 2\delta\mathbf{C}^{eb} \wedge\mathbf{C}^{a}_{\phantom{a}e}\wedge\mathbf{e}^{c}\wedge\ldots\wedge \mathbf{e}^{d}e_{abc\ldots d}\] \[= \kappa\int\delta C^{bc}_{\phantom{bc}e}\left(C^{e}_{\phantom{e}[ cb]}-\delta^{e}_{[b}C^{a}_{\phantom{a}c]a}\right)\mathbf{\Phi}\]
Substituting \(C^{a}_{\phantom{a}bc}=\frac{1}{2}\left(T_{c}^{\phantom{c}a}_{\phantom{c}b}+T^{ a}_{\phantom{a}cb}-T_{bc}^{\phantom{bc}a}\right)\) to express the resulting field equation in terms of the torsion yields
\[\frac{\kappa}{2}\mathscr{T}^{a}_{\phantom{a}bc} = 0\]
This is the same result as from the original Palatini variation.
#### 3.3.2 Varying the solder form
The solder form variation is now more involved. After setting \(\boldsymbol{\mathcal{R}}^{ab}=\mathbf{d}\boldsymbol{\omega}^{ab}-\boldsymbol{ \omega}^{cb}\wedge\boldsymbol{\omega}^{a}_{\phantom{a}c}\) and substituting \(\boldsymbol{\omega}^{ab}=\boldsymbol{\alpha}^{ab}+\mathbf{C}^{ab}\) we vary both \(\mathbf{e}^{a}\) and \(\boldsymbol{\alpha}^{ab}\) to find
\[\delta_{e,\alpha(e)}S_{ECSK} = \frac{\kappa}{2\left(n-2\right)!}\delta_{e,\alpha(e)}\int \boldsymbol{\mathcal{R}}^{ab}\wedge\mathbf{e}^{c}\wedge\ldots\wedge\mathbf{e} ^{d}e_{abc\ldots d}+\delta S_{GHY}\] \[= \frac{\kappa}{2\left(n-2\right)!}\int\boldsymbol{\mathcal{D}} \left(\delta\boldsymbol{\alpha}^{ab}\right)\wedge\mathbf{e}^{c}\wedge\ldots \wedge\mathbf{e}^{d}e_{abc\ldots d}+\delta S_{GHY}\] \[-\kappa\int\delta A^{c}_{\phantom{c}k}\left(\mathcal{R}^{k}_{ \phantom{k}c}-\frac{1}{2}\mathcal{R}\delta^{k}_{c}\right)\mathbf{\Phi}\]
From here the handling of the first integral is parallel to that leading up to Eq.(19) but with the compatible connection instead. The result is
\[\delta\mathcal{S}_{ECSK} = \frac{\kappa}{2\left(n-3\right)!}\int\delta\boldsymbol{\alpha}^{ ab}\wedge\mathbf{T}^{e}\wedge\mathbf{e}^{f}\ldots\wedge\mathbf{e}^{k}e_{ abef\ldots k}-\kappa\int\delta A^{c}_{\phantom{c}k}\left(\mathcal{R}^{k}_{ \phantom{k}c}-\frac{1}{2}\mathcal{R}\delta^{k}_{c}\right)\mathbf{\Phi}\]
but there is now a further variation using Eq.(18). Substituting and integrating by parts, then replacing the basis forms with the volume form gives an imposing product.
\[\delta_{e,\alpha(e)}S_{ECSK} = \frac{\kappa}{2}\int\frac{1}{2}\left(\delta^{a}_{d}\eta^{bc}- \delta^{b}_{d}\eta^{ac}\right)\delta e_{\mu}^{\phantom{\mu}h}\left[-\frac{1}{2 }e_{g}^{\phantom{\mu}\mu}\delta^{d}_{h}D_{c}T^{e}_{\phantom{e}mn}+\frac{1}{2}e _{c}^{\phantom{e}\mu}\eta_{gh}D^{d}T^{e}_{\phantom{e}mn}+\frac{1}{2}e_{c}^{ \phantom{e}\mu}\delta^{d}_{h}\wedge D_{g}T^{e}_{\phantom{e}mn}\right]\] \[\times\left(\delta^{m}_{a}\left(\delta^{n}_{b}\delta^{g}_{e}- \delta^{g}_{b}\delta^{n}_{e}\right)+\delta^{n}_{a}\left(\delta^{g}_{b}\delta^{ m}_{e}-\delta^{m}_{b}\delta^{g}_{e}\right)+\delta^{g}_{a}\left(\delta^{m}_{b} \delta^{n}_{e}-\delta^{n}_{b}\delta^{m}_{e}\right)\right)\mathbf{\Phi}\] \[-\kappa\int\delta A^{c}_{\phantom{c}k}\left(\mathcal{R}^{k}_{ \phantom{k}c}-\frac{1}{2}\mathcal{R}\delta^{k}_{c}\right)\mathbf{\Phi}\]
Distributing and collecting terms eventually leads to
\[\delta{\cal S}_{ECSK} = \kappa\int\left(\delta A^{cb}\right)\left(D^{a}\left(\frac{1}{2} \left(T_{bac}+T_{acb}+T_{cab}\right)+\eta_{ac}T^{e}{}_{be}-\eta_{bc}T^{e}{}_{ae} \right)-\left({\cal R}_{bc}-\frac{1}{2}{\cal R}\eta_{bc}\right)\right){\bf\Phi}\]
where \(\delta{\bf e}^{a}=\delta A^{a}{}_{b}{\bf e}^{b}\). Replacing \(T_{abc}={\mathscr{T}}_{abc}-\eta_{ac}T^{e}{}_{eb}+\eta_{ab}T^{e}{}_{ec}\) the resulting field equation takes the simpler form
\[-\kappa\left({\cal R}_{bc}-\frac{1}{2}{\cal R}\eta_{bc}-\frac{1} {2}D^{a}\left({\mathscr{T}}_{bac}+{\mathscr{T}}_{acb}+{\mathscr{T}}_{cab} \right)\right) = 0\]
This field equation is most revealing when written in terms of symmetric and antisymmetric parts. Together with the contorsion variation we find:
\[{\cal R}_{(bc)}-\frac{1}{2}{\cal R}\eta_{bc} = \frac{1}{2}D^{a}\left({\mathscr{T}}_{cba}+{\mathscr{T}}_{bca}\right)\] \[{\cal R}_{bc}-{\cal R}_{cb} = -D_{a}{\mathscr{T}}^{a}{}_{bc}\] \[{\mathscr{T}}^{a}{}_{bc} = 0 \tag{22}\]
Notice the tight relationship between the torsion equation and the antisymmetric part of the Ricci tensor. Combining these imposes symmetry on the Ricci tensor. This is the same conclusion as we reach from the first variation, but with the added insight that the antisymmetric part of the Ricci tensor is the divergence of the contorsion equation, hence zero.
The antisymmetric equality \({\cal R}_{[bc]}+\frac{1}{2}D^{a}{\mathscr{T}}_{abc}=0\) is just what we get if we restrict the variation to an infinitesimal Lorentz transformation, \(\delta A^{bc}=\varepsilon^{[bc]}\).
Without matter fields, it follows that the torsion and Einstein tensor vanish, in agreement with the purely metric variation of general relativity.
## 4 ECSK theory with matter
ECSK theory with sources differs from general relativity when the source action depends on the connection.
Let the action now be
\[S = S_{ECSK}+S_{matter}\] \[= \frac{\kappa}{2\left(n-2\right)!}\int{\bf\mathcal{R}}^{ab}\wedge {\bf e}^{c}\wedge\ldots\wedge{\bf e}^{d}e_{abc\ldots d}+\int L\left(\xi^{A},{ \cal D}_{\mu}\xi^{A},{\bf e}^{a}\right){\bf\Phi}\]
for fields \(\xi^{A}\) of any type. Returning to the Palatini approach we vary the connection \({\boldsymbol{\omega}}^{a}{}_{b}\) and the solder form \({\bf e}^{a}\) to find
\[0 = -\int\delta A^{ba}\left[\kappa\left(R_{ab}-\frac{1}{2}R\eta_{ab} \right)-\frac{\delta L}{\delta e_{\mu}{}^{b}}e_{\mu}{}^{c}\eta_{ca}\right]{\bf\Phi}\] \[+\int\delta\omega^{ab}{}_{c}\left[\frac{\kappa}{2}{\mathscr{T}}^{c }{}_{ab}+\frac{\delta L}{\delta\omega^{ab}{}_{c}}\right]{\bf\Phi}\]
Here the solder form variation is written as \(\delta{\bf e}^{a}=\delta A^{a}{}_{b}{\bf e}^{b}\) for arbitrary \(\delta A^{a}{}_{b}\). With the solder form and spin connection as independent variables there is a natural association of sources with the curvature and the torsion.
\[\kappa\left(R_{ab}-\frac{1}{2}R\eta_{ab}\right) = \frac{\delta L}{\delta e_{\mu}{}^{b}}e_{\mu}{}^{c}\eta_{ca}\] \[\frac{\kappa}{2}{\mathscr{T}}^{c}{}_{ab} = -\frac{\delta L}{\delta\omega^{ab}{}_{c}} \tag{23}\]
The Einstein tensor is sourced by the asymmetric _canonical energy tensor_ \(T_{ba}=\frac{\delta L}{\delta e_{\mu}{}^{b}}e_{\mu}{}^{c}\eta_{ca}\) while the torsion is sourced by the _spin tensor_
\[\sigma^{c}{}_{ab} \equiv \frac{\delta L}{\delta\omega^{ab}{}_{c}}=\frac{\delta L}{\delta C ^{ab}{}_{c}} \tag{24}\]
with \(\sigma^{c}{}_{ab}=-\sigma^{c}{}_{ba}\).
However, this association depends on the choice of independent variables. As discussed in the previous section, these sources are mixed when we apply the fiber symmetry. For this reason, we now consider the action as a functional of the solder form and contorsion, setting \(\mathbf{\alpha}^{ab}=\mathbf{\alpha}^{ab}\left({\bf e}^{c}\right)\).
Because the contorsion variation leads to the same expression for the torsion as the \(\mathbf{\omega}^{ab}\) variation, the \(\delta\mathbf{\omega}^{ab}\) equation remains unchanged. The torsion now has source \(\sigma^{c}{}_{ab}\).
\[\frac{\kappa}{2}\mathscr{T}^{c}{}_{ab} = -\sigma^{c}{}_{ab} \tag{25}\]
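Because Eq.(25) is algebraic, the torsion is fixed pointwise by the spin tensor. Contracting Eq.(25) as in the vacuum case gives \(\left(n-2\right)T^{d}{}_{bd}=-\frac{2}{\kappa}\sigma^{d}{}_{db}\), and substituting the trace back yields, for \(n>2\),

\[T^{c}{}_{ab}=-\frac{2}{\kappa}\left(\sigma^{c}{}_{ab}-\frac{1}{n-2}\left(\delta^{c}_{a}\sigma^{d}{}_{db}-\delta^{c}_{b}\sigma^{d}{}_{da}\right)\right)\]

so the torsion does not propagate and vanishes wherever the spin tensor does.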
Before carrying out the solder form variation we show the mixing under the fiber symmetry explicitly.
### Lorentz symmetry
Under local Lorentz transformation, both the solder form and spin connection change. The change in the spin connection is given by the usual gauge form \(\tilde{\mathbf{\omega}}=g\mathbf{\omega}g^{-1}-{\bf d}gg^{-1}\). In detail, for an infinitesimal gauge transformation \(g^{a}{}_{b}=\delta^{a}{}_{b}+\varepsilon^{a}{}_{b}\) where \(\varepsilon_{ab}=-\varepsilon_{ba}\) the change in the spin connection is
\[\delta_{L}\mathbf{\omega}^{a}{}_{b} = \left[\left(\delta^{a}_{c}+\varepsilon^{a}{}_{c}\right)\mathbf{\omega}^{c}{}_{d}\left(\delta^{d}_{b}-\varepsilon^{d}{}_{b}\right)- \mathbf{{\rm d}}\varepsilon^{a}{}_{c}\left(\delta^{c}_{b}-\varepsilon ^{c}{}_{b}\right)\right]-\mathbf{\omega}^{a}{}_{b}\] \[= -{\cal D}\varepsilon^{a}{}_{b}\]
At the same time the solder form transforms as a Lorentz tensor, \(\delta_{L}\mathbf{{\rm e}}^{a}=\varepsilon^{a}{}_{b}\mathbf{ {\rm e}}^{b}\). This means that under an infinitesimal gauge transformation we must include changes in both the solder form and the spin connection.
\[\delta_{L}S_{matter}\equiv 0 = \int\frac{\delta L}{\delta e_{\mu}{}^{b}}\delta_{L}e_{\mu}{}^{b} \mathbf{\Phi}+\int\frac{\delta L}{\delta\omega^{ab}{}_{c}}\delta_{L }\omega^{ab}{}_{c}\mathbf{\Phi}\] \[= \int\frac{\delta L}{\delta e_{\mu}{}^{b}}\varepsilon^{b}{}_{c} e_{\mu}{}^{c}\mathbf{\Phi}-\int\sigma^{c}{}_{ab}{\cal D}_{c}\varepsilon^{ab}\mathbf{\Phi}\] \[= \int\left(-e_{\mu}{}^{c}\eta_{ca}\frac{\delta L}{\delta e_{\mu}{}^ {b}}+{\cal D}_{c}\sigma^{c}{}_{ab}\right)\varepsilon^{ab}\mathbf{\Phi}\]
Here we may require the variation to vanish on the boundary. Since \(\varepsilon^{ab}=-\varepsilon^{ba}\) is otherwise arbitrary, the antisymmetric part of the direct solder form variation must equal the divergence of the spin tensor.
\[{\cal D}_{c}\sigma^{c}{}_{ab}+e_{\mu}{}^{c}\frac{\delta L}{\delta e _{\mu}{}^{[a}}\eta_{b]c} = 0 \tag{26}\]
More importantly, the choice of independent variables determines the form of the energy tensor. Respecting the bundle structure we include the dependence of the compatible part of the spin connection on the solder form, \(\mathbf{\alpha}^{a}{}_{b}=\mathbf{\alpha}^{a}{}_{b}\left(e\right)\) when we carry out the solder form variation.
Varying the solder form and the contorsion independently, and using Eq.(24) for the contorsion variation
\[0 = -\kappa\int\eta^{ce}e_{e}{}^{\mu}\delta e_{\mu}{}^{b}\left({\cal R }_{bc}-\frac{1}{2}{\cal R}\eta_{bc}-\frac{1}{2}D^{a}\left({\mathscr{T}}_{ bac}+{\mathscr{T}}_{acb}+{\mathscr{T}}_{cab}\right)\right)\mathbf{\Phi}\] \[+\int\left(\frac{\delta L}{\delta e_{\mu}{}^{d}}+\frac{\delta L}{ \delta\alpha^{ab}{}_{c}}\frac{\delta\alpha^{ab}{}_{c}}{\delta e_{\mu}{}^{d}} \right)\delta e_{\mu}{}^{d}\mathbf{\Phi}\] \[0 = \int\limits_{V}\delta C^{ab}{}_{c}\left(\frac{\kappa}{2}{\mathscr{T }}^{c}_{ab}+\sigma^{c}{}_{ab}\right)\mathbf{\Phi}\]
Next, carry out the solder form variation \(\frac{\delta L}{\delta e_{\mu}{}^{a}}+\frac{\delta L}{\delta\alpha^{bc}{}_{d}}\frac{\delta\alpha^{bc}{}_{d}}{\delta e_{\mu}{}^{a}}\) in detail.
### Variation of the solder form
The source for the Einstein equation now depends on
\[\delta_{e}S_{matter} = \int\left(\frac{\delta L}{\delta e_{\mu}{}^{d}}+\frac{\delta L}{ \delta\alpha^{ab}{}_{c}}\frac{\delta\alpha^{ab}{}_{c}}{\delta e_{\mu}{}^{d}} \right)e_{\mu}{}^{e}\,\delta A^{d}{}_{e}\,\mathbf{\Phi}\]
Setting \(\frac{\delta L}{\delta\alpha^{ab}{}_{c}}=\frac{\delta L}{\delta C^{ab}{}_{c}}=\sigma^{c}{}_{ab}\) this becomes \(\delta_{e}S_{matter}=\int\left(\frac{\delta L}{\delta e_{\mu}{}^{d}}\delta e_{\mu}{}^{d}+\sigma^{c}{}_{ab}\delta\alpha^{ab}{}_{c}\right)\mathbf{\Phi}\). Then substituting (18) and integrating by parts
\[\delta_{e}S_{matter} = \int\left(\frac{\delta L}{\delta e_{\mu}}{}^{d}+\frac{1}{2} \left(\delta_{d}^{a}\eta^{be}-\delta_{d}^{b}\eta^{ae}\right)\left[-e_{c}{}^{ \mu}D_{e}\sigma^{c}{}_{ab}+D^{d}\sigma^{c}{}_{ab}e_{e}{}^{\mu}\eta_{cd}+D_{c} \sigma^{c}{}_{ab}e_{e}{}^{\mu}\right]\right)\delta e_{\mu}{}^{d}\mathbf{\Phi}\] \[= \int\left(\frac{\delta L}{\delta e_{\mu}}{}^{d}+D^{a}\sigma^{e}{} _{ad}-D_{c}\sigma^{ce}{}_{d}-D^{a}\sigma_{d}{}^{e}{}_{a}\right)e_{e}{}^{\mu} \delta e_{\mu}{}^{d}\mathbf{\Phi}\]
Combining this with the curvature contributions the field equation becomes
\[\kappa\left(\mathcal{R}_{bc}-\frac{1}{2}\mathcal{R}\eta_{bc} \right)-\frac{\kappa}{2}D^{a}\left(\mathscr{T}_{bac}+\mathscr{T}_{cab}+ \mathscr{T}_{acb}\right) = T_{bc}+D^{a}\left(\sigma_{bac}+\sigma_{cab}-\sigma_{acb}\right) \tag{27}\]
The source for the gravitational part is the Belinfante-Rosenfeld energy tensor
\[T_{bc}+D^{a}\left(\sigma_{bac}+\sigma_{cab}-\sigma_{acb}\right)\]
## 5 Sources for torsion
Before considering fields with nonvanishing spin tensor, we note some classes for which \(\sigma^{a}_{\ \ bc}=0\). Fields other than these exceptional types generically drive torsion.
### Exceptional cases
There are two important exceptional cases: Klein-Gordon fields and Yang-Mills fields.
#### 5.1.1 Klein-Gordon field
For Klein-Gordon fields, the covariant derivative contains no connection, \(D_{\mu}\phi=\partial_{\mu}\phi\).
\[S_{KG} = \frac{1}{2}\int\left(g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu} \phi+m^{2}\phi^{2}\right)\sqrt{|g|}d^{n}x\]
Appropriately for a scalar field, there is no spin tensor. This holds true for internal multiplets of scalar fields \(\phi^{i}\) as well.
#### 5.1.2 Yang-Mills fields
Yang-Mills fields comprise the second important class of exceptions. Let \(i,j,\ldots\) index the generators of an internal Lie symmetry \(g\in{\cal G}\), that is, the fiber symmetry of a principal fiber bundle. Then the connection satisfies the Maurer-Cartan equation, \({\bf dA}^{i}=-\frac{1}{2}c^{i}_{\ jk}{\bf A}^{j}\wedge{\bf A}^{k}\) where \(c^{i}_{\ jk}\) are the structure constants. Curving the bundle, the field strength
\[{\bf F}^{i} = {\bf dA}^{i}+\frac{1}{2}c^{i}_{\ jk}{\bf A}^{j}\wedge{\bf A}^{k}\]
is independent of the spacetime connection and the corresponding action
\[S=\int{\bf F}^{i}\wedge\,^{*}{\bf F}_{i}\]
has vanishing spin density. The result also holds for \(p\)-form electromagnetism [33].
These observations mean that the Higgs and Yang-Mills fields of the standard model do not drive torsion.
### Bosonic matter sources
The currents of generic bosonic sources have nonvanishing spin tensors. We consider source fields of arbitrary integer spin \(\Theta^{a\ldots b}\) having quadratic kinetic energies.
When the kinetic term of the fields is symmetric in derivatives we have
\[S_{kinetic}=\frac{1}{2}\int Q_{a\ldots bc\ldots d}{\cal D}\Theta^{a\ldots b} \,^{*}{\cal D}\Theta^{c\ldots d}\]
where \(Q_{a\ldots bc\ldots d}=Q_{c\ldots da\ldots b}\) for some invariant tensor field \(Q\). The contracted labels play no role in the solder form variation, so we may write them collectively as \(A=a\ldots b,B=c\ldots d\). The action is then
\[S_{kinetic} = \frac{1}{2}\int Q_{AB}{\cal D}\Theta^{A}\,^{*}{\cal D}\Theta^{B}\]
where we assume \(Q_{AB}=Q_{BA}\) is independent of the connection, though it may depend on the metric.
The field equations (28)-(30) or the reduced equations (31) hold without modification. We need only find the relevant variations of the matter actions.
For these fields the solder form variation only enters through the metric variation as \(\eta^{ab}\left(\delta e_{a}{}^{\mu}e_{b}{}^{\nu}+e_{a}{}^{\mu}\delta e_{b}{}^{\nu} \right)=\delta g^{\mu\nu}\) since
\[S_{kinetic} = \frac{1}{2}\int Q_{AB}{\cal D}\Theta^{A}\,{}^{*}{\cal D}\Theta^{B}\] \[= \frac{1}{2}\int Q_{AB}\left(g\right)g^{\mu\nu}{\cal D}_{\mu} \Theta^{A}{\cal D}_{\nu}\Theta^{B}\sqrt{-g}d^{n}x\]
Therefore the energy tensor takes the usual symmetric form plus any (symmetric) dependence on \(Q_{AB}\).
\[T_{ab} = Q_{AB}{\cal D}_{a}\Theta^{A}{\cal D}_{b}\Theta^{B}-\frac{1}{4} \eta_{ab}\left(Q_{AB}g^{\mu\nu}{\cal D}_{\mu}\Theta^{A}{\cal D}_{\nu}\Theta^{B }\right)+e_{a}{}^{\mu}e_{b}{}^{\nu}\frac{\delta Q_{AB}}{\delta g^{\mu\nu}}\]
despite the asymmetric solder form variation.
However, the connection variation leads to a nonvanishing spin density. Restoring \(A\to a\ldots b,B\to c\ldots d\)
\[\delta_{\omega}S_{kinetic} = \frac{1}{4\left(n-1\right)!}\delta_{\omega}\int Q_{a\ldots bc \ldots d}\left(\mathbf{d}\Theta^{a\ldots b}+\Theta^{e\ldots b}\mathbf{\omega}^{a }{}_{e}+\ldots+\Theta^{a\ldots e}\mathbf{\omega}^{b}{}_{e}\right)\,{} ^{*}{\bf D}\Theta^{c\ldots d}\] \[= \frac{1}{2}\int\delta\omega_{feg}Q_{am\ldots nbc\ldots d}\left( \eta^{a[f}\Theta^{e]m\ldots nb}+\ldots+\eta^{b[f}\Theta^{|am\ldots n|e]} \right)D^{g}\Theta^{c\ldots d}\mathbf{\Phi}\]
The spin tensor is therefore
\[\sigma_{g}{}^{fe} = \frac{1}{2}Q_{am\ldots nbc\ldots d}\left(\eta^{a[f}\Theta^{e]m\ldots n b}+\ldots+\eta^{b[f}\Theta^{|am\ldots n|e]}\right)D_{g}\Theta^{c\ldots d} \tag{32}\]
This has the form of a current density.
From Lorentz invariance Eq.(26) and the symmetry of the energy tensor \(T_{[ab]}=0\) we immediately have conservation of the spin tensor
\[D_{c}\sigma^{c}{}_{ab} = 0 \tag{33}\]
We conclude that for the types of bosonic action considered the Poincare gauge equations take the form
\[\kappa\left({\cal R}_{(ab)}-\frac{1}{2}{\cal R}\eta_{ab}\right) = Q_{AB}D_{a}\Theta^{A}D_{b}\Theta^{B}-\frac{1}{2}\eta_{ab}\left( Q_{AB}g^{\mu\nu}D_{\mu}\Theta^{A}D_{\nu}\Theta^{B}\right)\] \[\frac{\kappa}{2}\mathscr{T}_{c} {}^{ab} = -\frac{1}{2}Q_{dm\ldots nef\ldots g}\left(\eta^{d[b}\Theta^{a]m \ldots ne}+\ldots+\eta^{e[b}\Theta^{|dm\ldots n|e]}\right)D_{c}\Theta^{f \ldots g}\]
Coupling such higher spin fields to other sources may lead to failure of causality or other pathologies.
For example, for a vector field with \(Q_{ab}=\eta_{ab}\) the kinetic action is simply
\[S_{kinetic}=\frac{1}{2}\int g^{\mu\nu}g^{\alpha\beta}{\cal D}_{\mu}\Theta_{\nu }{\cal D}_{\alpha}\Theta_{\beta}\sqrt{-g}d^{n}x\]
so the energy tensor has the usual form and the current density is simply \(\sigma_{\mu}{}^{ab}=\frac{1}{2}\left(\Theta^{b}D_{\mu}\Theta^{a}-\Theta^{a}D _{\mu}\Theta^{b}\right)\). The field equations are
\[T_{ab} = \eta_{cd}D_{a}\Theta^{c}D_{b}\Theta^{d}-\frac{1}{2}\eta_{ab} \left(Q_{cd}g^{\mu\nu}D_{\mu}\Theta^{c}D_{\nu}\Theta^{d}\right)\] \[\sigma_{c}{}^{ab} = \frac{1}{4}\left(\Theta^{b}D_{c}\Theta^{a}-\Theta^{a}D_{c}\Theta^ {b}\right)\]
The torsion remains nonpropagating and vanishes whenever the source field \(\Theta^{b}\) vanishes.
### Dirac fields with torsion
It is well-known that the Dirac field provides a source for torsion (among the earliest references see, e.g., [34, 35, 36, 37, 38, 18, 39]). The flat space Dirac action takes the same form in any dimension
\[{\cal S}_{D} = \alpha\int\left(\bar{\psi}\left(i\not{\partial}-m\right)\psi \right)\,ed^{n}x \tag{34}\]
where \(\not{\partial}=\gamma^{a}e_{a}{}^{\mu}\partial_{\mu}\). The principal difference in dimension \(n\) is that the spinors are representations of \(Spin\left(p,q\right)\) and therefore elements of a \(2^{\left[\frac{n}{2}\right]}\)-dimensional complex vector space while the \(\gamma^{a}\) satisfy the Clifford algebra relations
\[\left\{\gamma^{a},\gamma^{b}\right\}=-2\eta^{ab}1 \tag{35}\]
where \(\eta_{ab}\) is the \(\left(p,q\right)\) metric.
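For example, the count \(2^{\left[\frac{n}{2}\right]}\) gives the familiar four-component spinors in \(n=4\) and 32-component spinors in \(n=10\) or \(n=11\),

\[2^{\left[\frac{4}{2}\right]}=4,\qquad 2^{\left[\frac{10}{2}\right]}=2^{\left[\frac{11}{2}\right]}=32\]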
However, in a curved space the spin connection introduces an additional term. The covariant derivative of a spinor is given by
\[D_{\mu}\psi=\partial_{\mu}\psi-\frac{1}{2}\omega^{bc}\ _{\mu}\sigma_{bc}\psi\]
where \(\sigma_{bc}=\left[\gamma_{b},\gamma_{c}\right]\). The action becomes
\[\tilde{\cal S}_{D} = \alpha\int\left(\bar{\psi}\left(i\not{D}-m\right)\psi\right)\,ed ^{n}x\] \[= \alpha\int\left(\psi^{\dagger}h\left(ie_{a}{}^{\mu}\gamma^{a}D_{ \mu}-m\right)\psi\right)\,ed^{n}x\]
where \(h\) is Hermitian \(h^{\dagger}=h\) and reality of a vector \(v^{a}=\psi^{\dagger}h\gamma^{a}\psi\) under \(Spin\left(p,q\right)\) requires
\[\gamma^{a\dagger}h = h\gamma^{a}\]
It follows that \(\sigma^{ab\dagger}h=-h\sigma^{ab}\). While \(h\) is generally taken to be \(\gamma^{0}\) in spacetime, \(h\) transforms as a \(\left(\begin{array}{c}0\\ 2\end{array}\right)\) spin tensor while \(\gamma^{0}\) transforms as a \(\left(\begin{array}{c}1\\ 1\end{array}\right)\) spin tensor so that \(h=\gamma^{0}\) can hold only in a fixed basis. There exist satisfactory choices for \(h\) in any dimension or signature (see below). The solder form components \(e_{a}{}^{\mu}\) connect the orthonormal basis of the Clifford algebra to the coordinate basis for the covariant derivative, \(\gamma^{a}e_{a}{}^{\mu}D_{\mu}\).
The conjugate action now differs,
\[\tilde{\cal S}_{D}^{*} = \alpha\int\left(\bar{\psi}\left(-i\overleftarrow{D}_{\mu}\gamma^ {\mu}-m\right)\psi\right)\,ed^{n}x\]
so we take the manifestly real combination
\[{\cal S}_{D} = \frac{1}{2}\left(\tilde{\cal S}_{D}+\tilde{\cal S}_{D}^{*}\right)\] \[= \frac{\alpha}{2}\int\bar{\psi}\left(i\gamma^{a}\overrightarrow{ \partial}_{a}-i\overleftarrow{\partial}_{a}\gamma^{a}-2m-\frac{i}{2}\omega_{ bca}\left\{\gamma^{a},\sigma^{bc}\right\}\right)\psi\,ed^{n}x\]
showing that the connection now couples to a triple of Dirac matrices \(-\frac{i}{2}\omega_{bca}\left\{\gamma^{a},\sigma^{bc}\right\}=-2i\omega_{bca }\gamma^{[a}\gamma^{b}\gamma^{c]}\). This form is valid in any dimension. In 4- or 5-dimensions the triple antisymmetrization may be shortened using \(\gamma_{5}\). The action is now
\[{\cal S}_{D} = \alpha\int\left(\frac{i}{2}e_{a}{}^{\mu}\bar{\psi}\gamma^{a}\overleftrightarrow{\partial}_{\mu}\psi-m\bar{\psi}\psi-ie_{a}{}^{\mu}\omega_{bc\mu}\bar{\psi}\gamma^{[a}\gamma^{b}\gamma^{c]}\psi\right)\,ed^{n}x \tag{36}\]
where \(\bar{\psi}\gamma^{a}\overleftrightarrow{\partial}_{\mu}\psi=\bar{\psi}\gamma^{a} \partial_{\mu}\psi-\partial_{\mu}\bar{\psi}\gamma^{a}\psi\).
It is convenient to define \(\Gamma^{a_{1}a_{2}\ldots a_{k}}\equiv\gamma^{[a_{1}}\gamma^{a_{2}}\ldots\gamma ^{a_{k}]}\), including the particular cases \(\Gamma=1\) and \(\sigma^{ab}=\left[\gamma^{a},\gamma^{b}\right]\) for the \(Spin\left(p,q\right)\) generators. For \(k<\frac{n}{2}\) we may write \(\Gamma^{a_{1}a_{2}\ldots a_{k}}\) in terms of \(\gamma_{5}\equiv i^{m}\Gamma^{a_{1}\ldots a_{n}}\) and \(\Gamma^{a_{1}a_{2}\ldots a_{n-k}}\), where \(i^{m}\) is chosen so that \(\gamma_{5}^{\dagger}=\gamma_{5}\).
The simple form for the anticommutator turns out to be a low-dimensional accident. In the Appendix we show that the general form for the anticommutator \(\left\{\Gamma^{a_{1}a_{2}\ldots a_{k}},\sigma^{bc}\right\}\) depends on both \(\Gamma^{a_{1}a_{2}\ldots a_{k+1}}\) and \(\Gamma^{a_{1}a_{2}\ldots a_{k-1}}\) with the second form absent for the Dirac \(k=1\) case.
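For example, in four dimensions \(\Gamma^{abc}\) is proportional to the dual of a single gamma matrix,

\[\Gamma^{abc}\propto\varepsilon^{abcd}\gamma_{5}\gamma_{d}\qquad\left(n=4\right)\]

(the precise factor depends on the phase chosen for \(\gamma_{5}\)), which is why the spin-connection coupling and the Dirac torsion source below reduce to axial currents in four dimensions.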
#### 5.3.1 Spinor metric
The Clifford relation for the gamma matrices is
\[\left\{\gamma^{a},\gamma^{b}\right\}=-2\eta^{ab}\]
with \(\eta^{ab}=diag\left(-1,\ldots,-1,1,\ldots,1\right)\). Here the \(\gamma\)-matrices are numbered \(\gamma^{1}\ldots\gamma^{q}\gamma^{q+1}\ldots\gamma^{q+p}\) and we take the first \(q\) matrices hermitian. Then for \(a,b\leq q\) the \(\gamma s\) satisfy the timelike Clifford relation
\[\left\{\gamma^{a},\gamma^{b}\right\}=-\eta^{ab}=+1\]
The final \(p\)\(\gamma s\) must be antihermitian to give hermiticities of \(\sigma^{ab}\) appropriate for generating both rotations and boosts.
We seek a spinor metric \(h\) such that both the spinor inner product
\[\langle\psi,\psi\rangle=\psi^{\dagger A}h_{AB}\psi^{B}\]
and the \(n\)-vector
\[v^{a}\equiv\psi^{\dagger}h\gamma^{a}\psi\]
are real. These immediately imply
\[h^{\dagger} = h\] \[\gamma^{a\dagger}h = h\gamma^{a}\]
To satisfy the second condition we take \(h\) proportional to the product of all timelike \(\gamma s\), \(h=\lambda\gamma^{1}\ldots\gamma^{q}\). This ensures that \(\gamma^{a\dagger}h=\left(-1\right)^{q-1}h\gamma^{a}\) with the same sign for all \(\gamma^{a}\). Then hermiticity requires \(\lambda=i^{\frac{q(q-1)}{2}}\).
This is all we need for \(q\) odd. When \(q\) is even we include an additional factor of \(\gamma_{5}\) where \(\gamma_{5}=i^{p+\frac{n(n-1)}{2}}\gamma^{1}\ldots\gamma^{n}\). In this case we must also include an additional \(i^{q}\). Therefore we define
\[h=\left\{\begin{array}{cc}i^{q}i^{\frac{q(q-1)}{2}}\gamma^{1}\gamma^{2} \cdots\gamma^{q}\gamma_{5}&q\;even\\ i^{\frac{q(q-1)}{2}}\gamma^{1}\gamma^{2}\cdots\gamma^{q}&q\;odd\end{array}\right.\]
Adopting the usual notation, we may now let \(\bar{\psi}=\psi^{\dagger}h\) for spinors in any dimension. We note that \(\gamma_{5}h=\left(-1\right)^{q}h\gamma_{5}\).
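As a check, in Lorentzian signature with a single timelike direction (\(q=1\), odd) the phase is \(i^{0}=1\) and

\[h=\gamma^{1},\qquad\bar{\psi}=\psi^{\dagger}\gamma^{1}\]

which is the usual \(\bar{\psi}=\psi^{\dagger}\gamma^{0}\) in the standard labeling, since here the single hermitian timelike matrix is numbered \(\gamma^{1}\).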
#### 5.3.2 Energy tensor and spin density from the Dirac equation
From the action (36) the energy tensor and spin current are immediate. Since the Dirac Lagrangian is proportional to the Dirac equation, there is no contribution from the volume form. Therefore the source for the Einstein tensor is
\[\frac{\delta L}{\delta e_{\mu}{}^{b}}e_{\mu}{}^{c}\eta_{ca} = -i\alpha\bar{\psi}\gamma_{a}e_{b}{}^{\ \mu}\overleftrightarrow{D}_{\mu}\psi+2i\alpha\,\omega_{deb}\eta_{ac}\bar{\psi}\Gamma^{cde}\psi\]
giving the reduced curvature equation (31) the form
\[\kappa\left(R_{ab}-\frac{1}{2}R\eta_{ab}\right) = -i\alpha\bar{\psi}\gamma_{(a}e_{b)}^{\ \ \mu}\overleftrightarrow{D}_{\mu}\psi+2i\alpha\omega_{de(b}\eta_{a)c}\bar{ \psi}\Gamma^{cde}\psi\]
with \(2i\alpha\omega_{de(b}\eta_{a)c}\bar{\psi}\Gamma^{cde}\psi\) becoming the axial current \(\alpha\omega^{cd}\ _{(a}\varepsilon_{b)cde}\bar{\psi}\gamma^{e}\gamma_{5}\psi\) in 4-dimensions.
The spin density is
\[\sigma^{cab} \equiv \frac{\delta L}{\delta\omega_{abc}}\] \[= -i\alpha\bar{\psi}\Gamma^{abc}\psi\]
so the torsion is given by
\[\frac{\kappa}{2}\mathscr{T}^{cab}=i\alpha\bar{\psi}\Gamma^{abc}\psi\]
This is the axial current in 4-dimensions. Many studies of torsion in ECSK and generalizations to propagating torsion are restricted to this totally antisymmetric form of \(\mathscr{T}^{cab}\).
#### 5.3.3 The general relativity limit
We wish to examine general relativity with coupled Dirac sources. This source still has a spin density, despite the absence of torsion, and it is necessary to determine whether this puts a constraint on the Dirac field.
With vanishing torsion the connection reduces to the compatible connection, \(\omega^{bc}\ _{\ \mu}\rightarrow\alpha^{bc}\ _{\ \mu}\), though the action must still be made real by adding the conjugate. From the curvature field equation Eq.(27) with \(\mathscr{T}_{abc}=0\),
\[\kappa\left(\mathcal{R}_{bc}-\frac{1}{2}\mathcal{R}\eta_{bc}\right) = T_{bc}+D^{a}\left(\sigma_{bac}+\sigma_{cab}-\sigma_{acb}\right)\]
Although there is nonvanishing spin density there is no second field equation. There is now an antisymmetric part to the Einstein equation.
\[0 = T_{[bc]}+D^{a}\sigma_{abc}\]
This is exactly the part that vanishes by Lorentz symmetry. The Einstein equation therefore reduces to the symmetric expression
\[\kappa\left(R_{bc}-\frac{1}{2}R\eta_{bc}\right) = T_{(bc)}+D^{a}\left(\sigma_{bac}+\sigma_{cab}\right)\]
where the spin tensor is the antisymmetric current
\[\sigma^{cab}\equiv\frac{\delta L}{\delta\omega_{abc}}=-i\alpha\bar{\psi} \Gamma^{abc}\psi\]
Because this is totally antisymmetric, \(\sigma_{bac}+\sigma_{cab}=0\) and we recover the Einstein equation with the usual symmetrized energy tensor and no additional coupling.
\[\kappa\left(R_{bc}-\frac{1}{2}R\eta_{bc}\right) = T_{(bc)}\]
Therefore, despite nonvanishing spin tensor, Dirac fields make only the expected contribution to the field equation of general relativity with no additional constraint.
### Rarita-Schwinger
The spin-\(\frac{3}{2}\) Rarita-Schwinger field [40] is known to give rise to acausal behavior when coupled to other fields [41]. This problem is overcome when a spin-\(\frac{3}{2}\) field representing the gravitino is coupled supersymmetrically. Therefore, we first examine the 11-dimensional supergravity Lagrangian.
#### 5.4.1 11-d Supergravity
Here the basic Lagrangian
\[{\cal L}=\frac{1}{2\kappa^{2}}eR-\frac{1}{2}e\overline{\psi}_{\mu}\Gamma^{\mu \nu\alpha}D_{\nu}\psi_{\alpha}+\frac{1}{48}eF_{\mu\nu\alpha\beta}^{2}\]
includes the scalar curvature \(R\), the spin-\(\frac{3}{2}\) Majorana gravitino field \(\psi_{\alpha}\), and a complex 4-form field built from a 3-form potential as \({\bf F}={\bf dA}\). The covariant derivative has connection \({\mathbf{\omega}}^{a}_{\phantom{a}b}\) and \(\gamma^{\mu}=e_{a}^{\phantom{a}\mu}\gamma^{a}\).
This starting Lagrangian is augmented by \(\psi_{\alpha}\)-\({\bf F}\) coupling terms and a Chern-Simons term required to enforce the supersymmetry [42, 43, 45, 46]. The result is the Lagrangian for 11D supergravity, first found by Cremmer, Julia and Scherk [45].
\[{\cal L} = \frac{1}{2\kappa^{2}}eR-\frac{1}{2}e\overline{\psi}_{\mu}\Gamma^ {\mu\nu\alpha}D_{\nu}\left[\frac{1}{2}\left(\omega-\overline{\omega}\right) \right]\psi_{\alpha}\] \[+\frac{1}{48}eF_{\mu\nu\alpha\beta}^{2}+\frac{\sqrt{2}\kappa}{384} e\left(\overline{\psi}_{\mu}\Gamma^{\mu\nu\alpha\beta\rho\sigma}\psi_{ \sigma}+12\overline{\psi}^{\nu}\Gamma^{\alpha\beta}\psi^{\rho}\right)\left(F+ \overline{F}\right)_{\nu\alpha\beta\rho}\] \[+\frac{\sqrt{2}\kappa}{3456}\varepsilon^{\alpha_{1}\ldots\alpha_{ 11}}F_{\alpha_{1}\ldots\alpha_{4}}F_{\alpha_{5}\ldots\alpha_{8}}A_{\alpha_{9} \alpha_{10}\alpha_{11}}\]
Since we are primarily interested in sources for torsion, we will only need the kinetic term for the Rarita-Schwinger field. While it is possible that supergravity theories, which exist only in certain dimensions, are the only consistent formulation of spin-\(\frac{3}{2}\) fields, there may be alternative couplings that allow them. For this reason, we will consider the original Rarita-Schwinger kinetic term in arbitrary dimension as a source for torsion, omitting additional couplings.
#### 5.4.2 The Rarita-Schwinger equation
In flat 4-dimensional space the uncoupled Rarita-Schwinger equation may be written as
\[\varepsilon^{\mu\nu\alpha\beta}\gamma_{\nu}\gamma_{5}\partial_{ \alpha}\psi_{\beta}+\frac{1}{2}m\sigma^{\mu\beta}\psi_{\beta} = 0\]
with real action
\[S_{RS}^{0} = \int\bar{\psi}_{\mu}\left(\epsilon^{\mu\kappa\rho\nu}\gamma_{5} \gamma_{\kappa}\partial_{\rho}-\frac{1}{2}m\sigma^{\mu\nu}\right)\psi_{\nu}\]
In curved spacetime, generalizing to the covariant derivative \(\partial_{\alpha}\psi_{\beta}\rightarrow{\cal D}_{\alpha}\psi_{\beta}\) where
\[{\cal D}_{\alpha}\psi_{\beta} = \partial_{\alpha}\psi_{\beta}-\psi_{\mu}\Gamma^{\mu}_{\phantom{ \mu}\beta\alpha}-\frac{1}{2}\omega_{ab\alpha}\sigma^{ab}\psi_{\beta}\]
we must explicitly make it real. As with the Dirac field, the extra terms give an anticommutator. Noticing that
\[\varepsilon^{\mu\kappa\alpha\nu}\Gamma^{\rho}_{\phantom{\rho} \nu\alpha} = \frac{1}{2}\varepsilon^{\mu\kappa\alpha\nu}T^{\rho}_{\phantom{\rho} \alpha\nu}\]
we have
\[S_{RS} = \frac{1}{2}\left(S+S^{*}\right)\] \[= S_{RS}^{0}-\frac{1}{2}\int\left(\epsilon^{\mu\kappa\alpha\nu} \left(\frac{1}{2}\bar{\psi}_{\mu}\gamma_{5}\gamma_{\kappa}\psi_{\rho}{T^{\rho}} _{\alpha\nu}+\frac{1}{2}\left[\bar{\psi}_{\mu}\gamma_{5}\gamma_{\kappa}\psi_{ \rho}{T^{\rho}}_{\alpha\nu}\right]^{\dagger}\right)\right)\] \[+\frac{1}{2}\int\left(-\frac{1}{2}\omega_{ab\alpha}\epsilon^{\mu \kappa\alpha\nu}\left(\bar{\psi}_{\mu}\gamma_{5}\gamma_{\kappa}\sigma^{ab} \psi_{\nu}+\left[\bar{\psi}_{\mu}\gamma_{5}\gamma_{\kappa}\sigma^{ab}\psi_{\nu }\right]^{\dagger}\right)\right)\]
and therefore, taking the adjoint and rearranging
\[S_{RS} = S_{RS}^{0}-\frac{1}{4}\int\epsilon^{\mu\kappa\alpha\nu}\left( \bar{\psi}_{\mu}\gamma_{5}\gamma_{\kappa}\psi_{\rho}{T^{\rho}}_{\alpha\nu}+ \bar{\psi}_{\rho}{T^{\rho}}_{\alpha\nu}\gamma_{5}\gamma_{\kappa}\psi_{\mu}\right)\] \[-\frac{1}{4}\int\omega_{ab\alpha}\epsilon^{\mu\kappa\alpha\nu} \bar{\psi}_{\mu}\gamma_{5}\left\{\gamma_{\kappa},\sigma^{ab}\right\}\psi_{\nu}\]
The explicit torsion coupling here is surprising, and forces us to be clear about the independent variables. We may set \({\bf T}^{a}={\bf d}{\bf e}^{a}-{\bf e}^{b}\wedge{\boldsymbol{\omega}^{a}}_{b}\) and vary \(({\bf e}^{a},{\boldsymbol{\omega}^{a}}_{b})\) or we may write \({\boldsymbol{\omega}^{a}}_{b}={\boldsymbol{\alpha}^{a}}_{b}\left({\bf e}^{c} \right)+{\bf C}^{a}_{b}\) and write the torsion in terms of the contorsion \({\bf T}^{a}={\bf C}^{a}_{b}\wedge{\bf e}^{b}\), then vary \(({\bf e}^{a},{\bf C}^{a}_{b})\). We choose the latter course, since this respects the Lorentz fiber symmetry and yields the Belinfante-Rosenfeld tensor as source. For the spin tensor it makes no difference because
\[\delta_{\omega}{\bf T}^{a} = -{\bf e}^{b}\wedge\delta{\boldsymbol{\omega}^{a}}_{b}\] \[\delta_{C}{\bf T}^{a} = -{\bf e}^{b}\wedge\delta{\bf C}^{a}_{b}\]
Before carrying out the variation we develop the Rarita-Schwinger action in higher dimensions.
#### 5.4.3 The Rarita-Schwinger action in arbitrary dimension
To explore higher dimensions we introduce some general notation. Clearly we will need the Hodge dual, but it yields a more systematic result if we combine the dual with the gamma matrices.
Define:
\[{\boldsymbol{\gamma}} \equiv \gamma_{a}{\bf e}^{a}\] \[{\boldsymbol{\psi}} \equiv \psi_{a}{\bf e}^{a}\] \[\left(\wedge{\boldsymbol{\gamma}}\right)^{k} \equiv \gamma_{a_{1}}\ldots\gamma_{a_{k}}{\bf e}^{a_{1}}\wedge\ldots \wedge{\bf e}^{a_{k}}\] \[{\bf\Gamma}^{k} \equiv {}^{*}\left[\frac{1}{k!}\left(\wedge{\boldsymbol{\gamma}}\right)^ {k}\right]\]
In particular, \({\bf\Gamma}^{0}\) is just the volume form \({\bf\Phi}\).
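Unpacking the definition for \(k=1\) (up to the sign conventions of the dual fixed above), \(\mathbf{\Gamma}^{1}\) is the \((n-1)\)-form

\[\mathbf{\Gamma}^{1}={}^{*}\left(\gamma_{a}\mathbf{e}^{a}\right)=\frac{1}{\left(n-1\right)!}\gamma^{a}e_{ab\ldots c}\mathbf{e}^{b}\wedge\ldots\wedge\mathbf{e}^{c}\]

so that \(\bar{\psi}\mathbf{\Gamma}^{1}\wedge i\mathbf{d}\psi\) is an \(n\)-form, as a Lagrangian density must be.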
It is not hard to check that the Dirac case may be written as
\[S_{D}^{0} = \int\left(\bar{\psi}{\bf\Gamma}^{1}\wedge i{\bf d}\psi-m\bar{\psi }{\bf\Gamma}^{0}\psi-\frac{i}{4}\bar{\psi}\left\{{\bf\Gamma}^{1},\sigma^{cd}\right\} \wedge{\boldsymbol{\omega}}_{cd}\psi\right)\]
by expanding the forms.
To rewrite the Rarita-Schwinger action in arbitrary dimensions we replace the volume form and set
\[\sigma^{\mu\nu}=(-1)^{q}\left(\frac{1}{2}\sigma^{\rho\sigma}\right)\left(\frac {1}{2}e^{\alpha\beta\mu\nu}e_{\alpha\beta\rho\sigma}\right)\]
Then
\[S^{0}_{RS} = \int\bar{\psi}_{\mu}\left(\epsilon^{\mu\kappa\rho\nu}\gamma_{5} \gamma_{\kappa}\partial_{\rho}-\frac{1}{2}m\sigma^{\mu\nu}\right)\psi_{\nu} \mathbf{\Phi}\] \[= \int\left(\epsilon^{\mu\kappa\rho\nu}\bar{\psi}_{\mu}\gamma_{5} \gamma_{\kappa}\partial_{\rho}\psi_{\nu}\frac{(-1)^{q}}{4!}e_{defg}\mathbf{{\rm e}}^{d}\wedge\mathbf{{\rm e}}^{e}\wedge\mathbf{{\rm e}}^{f}\wedge\mathbf{{\rm e}}^{g}\right)\] \[-\int\frac{1}{2}m\bar{\psi}_{\mu}\left(-1\right)^{q}\frac{1}{2} \sigma^{\rho\sigma}\frac{1}{2}e^{\alpha\beta\mu\nu}e_{\alpha\beta\rho\sigma} \psi_{\nu}\frac{1}{4!}e_{defg}\mathbf{{\rm e}}^{d}\wedge\mathbf{{\rm e}}^{e}\wedge\mathbf{{\rm e}}^{f}\wedge\mathbf{{\rm e}}^{g}\]
This allows us to eliminate the 4-dimensional Levi-Civita tensor by reducing the Levi-Civita pairs \(\frac{(-1)^{q}}{4!}\epsilon^{\mu\kappa\rho\nu}e_{defg}\) and \(\frac{(-1)^{q}}{4!}e^{\alpha\beta\mu\nu}e_{defg}\), to combine a solder form with each spinor. Then
\[S^{0}_{RS} = \int\mathbf{\bar{\psi}}\wedge\gamma_{5}\gamma_{e}\wedge \mathbf{{\rm e}}^{e}\mathbf{{\rm d}}\mathbf{\psi}\] \[-\int\frac{1}{8}m\mathbf{\bar{\psi}}\wedge\left(\frac{1} {8}\sigma^{\rho\sigma}e_{\rho\sigma de}\mathbf{{\rm e}}^{d}\wedge \mathbf{{\rm e}}^{e}\right)\wedge\mathbf{\psi}\]
Now set
\[\gamma_{5}\gamma_{\kappa}\mathbf{e}^{\kappa} = \frac{i}{3!}\gamma_{[a}\gamma_{b}\gamma_{c]}\varepsilon^{abc}{}_{\kappa}\mathbf{e}^{\kappa}=i\mathbf{\Gamma}^{3}\]
and
\[\frac{1}{8}\sigma^{ab}e_{abcd}\mathbf{e}^{c}\wedge\mathbf{e}^{d} = \frac{1}{2!}\,^{*}\!\left(\gamma^{a}\gamma^{b}\mathbf{e}_{a}\wedge\mathbf{e}_{b}\right)=\mathbf{\Gamma}^{2}\]
to write the action as
\[S^{0}_{RS} = \int\left(\mathbf{\bar{\psi}}\wedge\mathbf{ \Gamma}^{3}\wedge i\mathbf{{\rm d}}\mathbf{\psi}-m\mathbf{\bar{\psi}}\wedge\mathbf{\Gamma}^{2}\wedge\mathbf{\psi}\right) \tag{37}\]
By using the Hodge dual in \(\mathbf{\Gamma}^{2}\) and \(\mathbf{\Gamma}^{3}\) we have eliminated the specific reference to dimension. Equation (37) is the Rarita-Schwinger action in flat \((p,q)\)-space.
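A quick degree count makes the dimension independence explicit: \(\mathbf{\Gamma}^{k}\), being the Hodge dual of a \(k\)-form, is an \((n-k)\)-form, while \(\boldsymbol{\psi}\) is a 1-form and \(\mathbf{d}\boldsymbol{\psi}\) a 2-form, so

\[\deg\left(\bar{\boldsymbol{\psi}}\wedge\mathbf{\Gamma}^{3}\wedge\mathbf{d}\boldsymbol{\psi}\right)=1+(n-3)+2=n,\qquad\deg\left(\bar{\boldsymbol{\psi}}\wedge\mathbf{\Gamma}^{2}\wedge\boldsymbol{\psi}\right)=1+(n-2)+1=n,\]

and both terms are \(n\)-forms that may be integrated in any dimension \(n\geq 3\).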
#### 5.4.4 Rarita-Schwinger in curved spaces
To generalize Eq.(37) we now replace the exterior derivative with the covariant exterior derivative
\[\tilde{S}_{RS}=\int\left(\mathbf{\bar{\psi}}\wedge\mathbf{ \Gamma}^{3}\wedge i\mathbf{{\cal D}}\mathbf{\psi}-m\mathbf{\bar{\psi}}\wedge\mathbf{\Gamma}^{2}\wedge\mathbf{\psi}\right)\]
keeping the action real by taking \(S_{RS}=\frac{1}{2}\left(\tilde{S}_{RS}+\tilde{S}^{\dagger}_{RS}\right)\). The covariant derivative 2-form \(\mathbf{{\cal D}}\mathbf{\psi}\) is
\[\mathbf{{\cal D}}\mathbf{\psi} = \mathbf{{\rm d}}\mathbf{\psi}-\psi_{\mu}\mathbf{{\rm T}}^{\mu}-\frac{1}{2}\mathbf{\omega}_{mn}\sigma^{mn} \wedge\mathbf{\psi} \tag{38}\]
Therefore, the direct torsion-Rarita-Schwinger coupling will occur in higher dimensions as well.
Expanding the action and separating the free contribution
\[S_{RS} = S^{0}_{RS}+\frac{1}{2}\int\left(\mathbf{\bar{\psi}} \wedge\mathbf{\Gamma}^{3}\wedge\left(-i\psi_{\mu}\mathbf{{ \rm T}}^{\mu}\right)+\left(\mathbf{\bar{\psi}}\wedge\mathbf{\Gamma}^{3}\wedge\left(-i\psi_{\mu}\mathbf{{\rm T}}^{\mu} \right)\right)^{\dagger}\right)\] \[+\frac{1}{2}\int\left(\mathbf{\bar{\psi}}\wedge\mathbf{\Gamma}^{3}\wedge\left(-\frac{i}{2}\mathbf{\omega}_{mn} \wedge\sigma^{mn}\mathbf{\psi}\right)+\left(\mathbf{\bar{\psi} }\wedge\mathbf{\Gamma}^{3}\wedge\left(-\frac{i}{2}\mathbf{ \omega}_{mn}\wedge\sigma^{mn}\mathbf{\psi}\right)\right)^{\dagger}\right)\]
The conjugate torsion piece is given by
\[\frac{1}{2}\int\left(\bar{\boldsymbol{\psi}}\wedge\boldsymbol{\Gamma}^{3}\wedge \left(-i\psi_{m}\mathbf{T}^{m}\right)\right)^{\dagger}=-\frac{i}{2}\int\left(- 1\right)^{n+1}\bar{\psi}_{m}\mathbf{T}^{m}\wedge\boldsymbol{\Gamma}^{3}\wedge \boldsymbol{\psi}\]
and the conjugate spin connection piece becomes
\[\frac{1}{2}\int\left(\bar{\boldsymbol{\psi}}\wedge\boldsymbol{ \Gamma}^{3}\wedge\left(-\frac{i}{2}\boldsymbol{\omega}_{mn}\wedge\sigma^{mn} \boldsymbol{\psi}\right)\right)^{\dagger} = \frac{1}{2}\int\left[\bar{\boldsymbol{\psi}}\wedge\sigma^{mn} \boldsymbol{\Gamma}^{3}\left(-\frac{i}{2}\wedge\boldsymbol{\omega}_{mn} \right)\wedge\boldsymbol{\psi}\right]\]
Therefore, the full action is
\[S_{RS} = \int\left(\bar{\boldsymbol{\psi}}\wedge\boldsymbol{\Gamma}^{3}\wedge i\mathbf{d}\boldsymbol{\psi}-m\bar{\boldsymbol{\psi}}\wedge\boldsymbol{\Gamma}^{2}\wedge\boldsymbol{\psi}\right)\] \[-\frac{i}{2}\int\left(\bar{\boldsymbol{\psi}}\wedge\boldsymbol{\Gamma}^{3}\wedge\mathbf{T}^{a}\psi_{a}-\left(-1\right)^{n}\mathbf{T}^{a}\bar{\psi}_{a}\wedge\boldsymbol{\Gamma}^{3}\wedge\boldsymbol{\psi}\right)\] \[-\frac{i}{4}\int\bar{\boldsymbol{\psi}}\wedge\left\{\boldsymbol{\Gamma}^{3},\sigma^{cd}\right\}\wedge\boldsymbol{\omega}_{cd}\wedge\boldsymbol{\psi}\]
The anticommutator is
\[\left\{\gamma^{[a_{1}}\gamma^{a_{2}}\gamma^{a_{3}]},\sigma^{de}\right\} = 4\sum_{a_{1}<a_{2}<a_{3}}\left(\gamma^{[a_{1}}\gamma^{a_{2}} \gamma^{a_{3}}\gamma^{d}\gamma^{e]}-\left(\eta^{a_{1}d}\eta^{a_{2}e}-\eta^{a_{ 2}d}\eta^{a_{1}e}\right)\eta^{dd}\eta^{ee}\gamma^{a_{3}}\right.\] \[+\left.\left(\eta^{a_{1}d}\eta^{a_{3}e}-\eta^{a_{3}d}\eta^{a_{1}e }\right)\eta^{dd}\eta^{ee}\gamma^{a_{2}}-\left(\eta^{a_{2}d}\eta^{a_{3}e}-\eta ^{a_{3}d}\eta^{a_{2}e}\right)\eta^{dd}\eta^{ee}\gamma^{a_{1}}\right)\]
so the Rarita-Schwinger spin tensor contains couplings involving \(\boldsymbol{\Gamma}^{1},\boldsymbol{\Gamma}^{3},\boldsymbol{\Gamma}^{5}\).
#### 5.4.5 The Rarita-Schwinger spin tensor
Varying the action with respect to the spin connection or contorsion
\[\delta_{\omega}S_{RS} = -\int\frac{i}{2}\bar{\boldsymbol{\psi}}\wedge\boldsymbol{\Gamma }^{3}\wedge\left(-\mathbf{e}^{b}\wedge\delta\boldsymbol{\omega}^{a}{}_{b} \right)\psi_{a}-\frac{i}{2}\int\left(-1\right)^{n+1}\left(-\mathbf{e}^{b}\wedge \delta\boldsymbol{\omega}^{a}{}_{b}\right)\bar{\psi}_{a}\wedge\boldsymbol{ \Gamma}^{3}\wedge\boldsymbol{\psi}\] \[+\frac{i}{4}\int\bar{\boldsymbol{\psi}}\wedge\left\{\mathbf{ \Gamma}^{3},\sigma^{b}{}_{a}\right\}\wedge\delta\boldsymbol{\omega}^{a}{}_{b} \wedge\boldsymbol{\psi}\]
Expanding the forms, setting \(\delta\boldsymbol{\omega}^{a}{}_{b}=A^{a}{}_{bc}\mathbf{e}^{c}\), and collecting the basis into volume forms this becomes
\[\delta_{\omega}S_{RS} = \frac{i}{2}\int A_{abc}\left(\bar{\psi}_{e}\gamma^{[e}\gamma^{b} \gamma^{c]}\psi^{a}-\bar{\psi}^{a}\gamma^{[b}\gamma^{e}\gamma^{c]}\psi_{e}- \frac{1}{2}\bar{\psi}_{d}\left\{\gamma^{[d}\gamma^{e}\gamma^{c]},\sigma^{ba} \right\}\psi_{e}\right)\boldsymbol{\Phi}\]
so antisymmetrizing on \(ab\) and expanding the anticommutator as
\[\frac{i}{4}\bar{\psi}_{d}\left\{\gamma^{[d}\gamma^{e}\gamma^{c]},\sigma^{ab}\right\}\psi_{e} = i\bar{\psi}_{d}\gamma^{[a}\gamma^{b}\gamma^{c}\gamma^{d}\gamma^{ c]}\psi_{e}+i\left(\eta^{ac}\eta^{bd}-\eta^{bc}\eta^{ad}\right)\bar{\psi}_{d} \gamma^{e}\psi_{e} \tag{39}\] \[+i\left(\eta^{ae}\eta^{bc}-\eta^{ac}\eta^{be}\right)\bar{\psi}_{d} \gamma^{d}\psi_{e}+i\left(\eta^{ad}\eta^{be}-\eta^{ae}\eta^{bd}\right)\bar{ \psi}_{d}\gamma^{c}\psi_{e}\]
the spin tensor is
\[\sigma^{cab} = \frac{i}{4}\left(\bar{\psi}_{e}\gamma^{[e}\gamma^{b}\gamma^{c]} \psi^{a}-\bar{\psi}_{e}\gamma^{[e}\gamma^{a}\gamma^{c]}\psi^{b}+\bar{\psi}^{b} \gamma^{[a}\gamma^{e}\gamma^{c]}\psi_{e}-\bar{\psi}^{a}\gamma^{[b}\gamma^{e} \gamma^{c]}\psi_{e}\right) \tag{40}\] \[+i\bar{\psi}_{d}\gamma^{[a}\gamma^{b}\gamma^{c}\gamma^{d}\gamma^{ c]}\psi_{e}+i\left(\eta^{ae}\eta^{bd}-\eta^{bc}\eta^{ad}\right)\bar{\psi}_{d} \gamma^{e}\psi_{e}\] \[+i\left(\eta^{ae}\eta^{bc}-\eta^{ae}\eta^{be}\right)\bar{\psi}_{d} \gamma^{d}\psi_{e}+i\left(\eta^{ad}\eta^{be}-\eta^{ae}\eta^{bd}\right)\bar{ \psi}_{d}\gamma^{c}\psi_{e}\]
After using the torsion equation, the source for the Einstein tensor is always the symmetrized canonical tensor (31) but the torsion is now driven by much more than the axial current. We next use the full spin tensor, Eq.(39), to compute the source for each independent part of the torsion. Since the reduced field equation shows that \(\frac{\kappa}{2}\mathscr{T}_{\phantom{\kappa}ab}^{c}=-\sigma_{\phantom{c}ab}^{c}\) it suffices to find the trace, totally antisymmetric, and traceless, mixed symmetry parts of \(\sigma^{cab}\). The corresponding parts of the torsion are proportional to these.
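For bookkeeping, a tensor \({T^{c}}_{ab}=-{T^{c}}_{ba}\) has \(\frac{1}{2}n^{2}(n-1)\) independent components, and the three pieces just listed account for them as

\[\frac{1}{2}n^{2}\left(n-1\right)=n+\frac{1}{6}n\left(n-1\right)\left(n-2\right)+\frac{1}{3}n\left(n^{2}-4\right),\]

the terms being the trace, the totally antisymmetric part and the traceless mixed-symmetry part respectively; in \(n=4\) this reads \(24=4+4+16\).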
First, the trace of the spin tensor reduces to a simple vector current.
\[\sigma_{c}^{\phantom{c}cb} = i\left(n-2\right)\left(\bar{\psi}^{b}\gamma^{e}\psi_{e}-\bar{ \psi}_{e}\gamma^{e}\psi^{b}\right)\]
For the antisymmetric part there is no change in the totally antisymmetric piece \(i\bar{\psi}_{d}\gamma^{[a}\gamma^{b}\gamma^{c}\gamma^{d}\gamma^{e]}\psi_{e}\). Of the last three terms involving metrics, the first two vanish while the antisymmetrization of the third gives
\[\left(i\left(\eta^{ad}\eta^{be}-\eta^{ae}\eta^{bd}\right)\bar{ \psi}_{d}\gamma^{c}\psi_{e}\right)_{[abc]} = -2i\bar{\psi}^{[a}\gamma^{b}\psi^{c]}\]
The remaining terms require the \(abc\) antisymmetrization of \(\bar{\psi}_{e}\gamma^{[e}\gamma^{b}\gamma^{c]}\psi^{a}\) and \(\bar{\psi}^{b}\gamma^{[a}\gamma^{e}\gamma^{c]}\psi_{e}\). This is complicated by the existing antisymmetry of \(ebc\). Write these out in detail and collecting terms we find
\[\left(\bar{\psi}_{e}\gamma^{[e}\gamma^{b}\gamma^{c]}\psi^{a} \right)_{[abc]} = \frac{4}{3}\bar{\psi}_{e}\gamma^{[e}\gamma^{a}\gamma^{b}\psi^{c] }+\frac{1}{3}\bar{\psi}_{e}\gamma^{[a}\gamma^{b}\gamma^{c]}\psi^{e}\] \[\left(\bar{\psi}^{a}\gamma^{[b}\gamma^{c}\gamma^{c]}\psi_{e} \right)_{[abc]} = \frac{4}{3}\bar{\psi}^{[a}\gamma^{b}\gamma^{c}\gamma^{e]}\psi_{e} +\frac{1}{3}\bar{\psi}^{e}\left(\gamma^{[a}\gamma^{b}\gamma^{c]}\right)\psi_{e}\]
with the full contribution to \(\sigma^{[abc]}\) being \(\frac{i}{2}\) times these. Combining everything, the source for the totally antisymmetric part of the torsion is
\[\sigma^{[cab]} = i\bar{\psi}_{d}\gamma^{[a}\gamma^{b}\gamma^{c}\gamma^{d}\gamma^{ c]}\psi_{e}+\frac{2i}{3}\left(\bar{\psi}_{e}\gamma^{[e}\gamma^{a}\gamma^{b}\psi^{c] }-\bar{\psi}^{[a}\gamma^{b}\gamma^{c}\gamma^{e]}\psi_{e}\right)\] \[+\frac{i}{6}\left(\bar{\psi}_{e}\gamma^{[a}\gamma^{b}\gamma^{c]} \psi^{e}-\bar{\psi}^{e}\left(\gamma^{[a}\gamma^{b}\gamma^{c]}\right)\psi_{e} \right)-2i\bar{\psi}^{[a}\gamma^{b}\psi^{c]}\]
containing 1-, 3-, and 5-gamma currents.
The traceless, mixed symmetry part \(\tilde{\sigma}^{cab}\) is found by subtracting the trace and antisymmetric pieces.
\[\tilde{\sigma}^{cab}=\sigma^{cab}-\sigma^{[cab]}-\frac{1}{n-1}\left(\eta^{ac }\sigma_{c}^{\phantom{c}cb}-\eta^{bc}\sigma_{c}^{\phantom{c}ca}\right)\]
The result is
\[\tilde{\sigma}^{cab} = \frac{i}{4}\left(\bar{\psi}_{e}\gamma^{[e}\gamma^{b}\gamma^{c]} \psi^{a}-\bar{\psi}_{e}\gamma^{[e}\gamma^{a}\gamma^{c]}\psi^{b}+\bar{\psi}^{b }\gamma^{[a}\gamma^{e}\gamma^{c]}\psi_{e}-\bar{\psi}^{a}\gamma^{[b}\gamma^{c} \gamma^{c]}\psi_{e}\right)\] \[-\frac{2i}{3}\left(\bar{\psi}_{e}\gamma^{[e}\gamma^{a}\gamma^{b} \psi^{c]}-\bar{\psi}^{[a}\gamma^{b}\gamma^{c}\gamma^{e]}\psi_{e}\right)+2i\bar{ \psi}^{[a}\gamma^{b}\psi^{c]}\] \[+\frac{i}{n-1}\eta^{ac}\left(\bar{\psi}^{b}\gamma^{e}\psi_{e}- \bar{\psi}_{e}\gamma^{e}\psi^{b}\right)-\frac{i}{n-1}\eta^{bc}\left(\bar{\psi}^ {a}\gamma^{e}\psi_{e}-\bar{\psi}_{e}\gamma^{e}\psi^{a}\right)+i\left(\bar{ \psi}^{a}\gamma^{c}\psi^{b}-\bar{\psi}^{b}\gamma^{c}\psi^{a}\right)\]
The traceless, mixed symmetry piece therefore depends on 1- and 3-gamma currents.
Therefore, while the Dirac field produces only an axial vector source for torsion, the Rarita-Schwinger field provides a source for each independent piece. Moreover, since a spin-\(\frac{3}{2}\) field in \(n\)-dimensions has \(n\times 2^{\left[\frac{n}{2}\right]+1}\) degrees of freedom while the torsion has \(\frac{1}{2}n^{2}\left(n-1\right)\), generic solutions may be expected to produce generic torsion except in dimensions \(n=5,7\) or \(9\).
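Spelling this out with the two formulas just quoted, the component counts for the lowest dimensions are

\[\begin{array}{c|cccccccc}n&4&5&6&7&8&9&10&11\\\hline n\times 2^{[n/2]+1}&32&40&96&112&256&288&640&704\\\frac{1}{2}n^{2}(n-1)&24&50&90&147&224&324&450&605\end{array}\]

so the spinor-vector components fall short of the torsion components precisely for \(n=5,7,9\) in this range.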
### Higher spin fermions
We have seen that the vacuum Dirac (\(k=0\)) and Rarita-Schwinger (\(k=1\)) actions for spin-\(\frac{2k+1}{2}\) may be written as
\[S_{k=0}^{0} = \int\left(\bar{\psi}\mathbf{\Gamma}^{1}\wedge i\mathbf{d}\psi-m \bar{\psi}\mathbf{\Gamma}^{0}\psi\right)\] \[S_{k=1}^{0} = \int\left(\boldsymbol{\bar{\psi}}\wedge\mathbf{\Gamma}^{3}\wedge i \mathbf{d}\boldsymbol{\psi}-m\boldsymbol{\bar{\psi}}\wedge\mathbf{\Gamma}^{2} \wedge\boldsymbol{\psi}\right)\]
The pattern seen here generalizes immediately to higher fermionic spins in any dimension \(n\geq 2k+1\), with the flat space kinetic term depending on \(\mathbf{\Gamma}^{2k+1}\) and the mass term depending on \(\mathbf{\Gamma}^{2k}\) for spin \(\frac{2k+1}{2}\) fields. Including the covariant derivative then adds torsion and anticommutator couplings.
\[S_{k=0} = \int\bar{\psi}\left(\frac{1}{2}\mathbf{\Gamma}^{1}\wedge i \overleftrightarrow{\mathbf{d}}-m\mathbf{\Gamma}^{0}\right)\psi-\frac{i}{4} \bar{\psi}\left\{\mathbf{\Gamma}^{1},\sigma^{ef}\right\}\psi\wedge\boldsymbol {\omega}_{ef}\] \[S_{k=1} = \int\boldsymbol{\bar{\psi}}\wedge\left(\mathbf{\Gamma}^{3}\wedge i \mathbf{d}\boldsymbol{\psi}-m\mathbf{\Gamma}^{2}\wedge\boldsymbol{\psi}\right)\] \[-\frac{i}{2}\int\left(\boldsymbol{\bar{\psi}}\wedge\mathbf{ \Gamma}^{3}\wedge\mathbf{T}^{a}\psi_{a}+\left(-1\right)^{n+1}\mathbf{T}^{a} \bar{\psi}_{a}\wedge\mathbf{\Gamma}^{3}\wedge\boldsymbol{\psi}\right)\] \[-\frac{i}{4}\int\boldsymbol{\bar{\psi}}\wedge\left\{\mathbf{ \Gamma}^{3},\sigma^{cd}\right\}\wedge\boldsymbol{\omega}_{cd}\wedge\boldsymbol {\psi}\]
#### 5.5.1 General case definitions
The covariant derivative is similar to that for the Rarita-Schwinger field (38), but for spin \(\frac{2k+1}{2}\) there is a factor of \(k\) multiplying the torsion term. Expanding
\[\boldsymbol{\mathcal{D}}\boldsymbol{\psi}=\mathbf{e}^{a}\wedge\mathbf{e}^{b_{ 1}}\wedge\ldots\wedge\mathbf{e}^{b_{k}}\mathcal{D}_{a}\psi_{b_{1}\ldots b_{k}}\]
the expansion is clearest in coordinates,
\[\boldsymbol{\mathcal{D}}\boldsymbol{\psi} = D_{\mu}\psi_{\alpha\ldots\beta}\mathbf{d}x^{\mu}\wedge\mathbf{d }x^{\alpha}\wedge\ldots\wedge\mathbf{d}x^{\beta}\] \[= \partial_{\mu}\psi_{\alpha\ldots\beta}\mathbf{d}x^{\mu}\wedge \mathbf{d}x^{\alpha}\wedge\ldots\wedge\mathbf{d}x^{\beta}-\psi_{\rho\ldots \beta}\Gamma^{\rho}\,_{\alpha\mu}\mathbf{d}x^{\mu}\wedge\mathbf{d}x^{\alpha} \wedge\ldots\wedge\mathbf{d}x^{\beta}\] \[-\ldots-\psi_{\alpha\ldots\rho}\Gamma^{\rho}\,_{\beta\mu} \mathbf{d}x^{\mu}\wedge\mathbf{d}x^{\alpha}\wedge\ldots\wedge\mathbf{d}x^{ \beta}-\frac{1}{2}\omega_{ab\mu}\sigma^{ab}\psi_{\alpha\ldots\beta}\mathbf{d}x^ {\mu}\wedge\mathbf{d}x^{\alpha}\wedge\ldots\wedge\mathbf{d}x^{\beta}\]
Antisymmetrizing each \(\Gamma^{\rho}\,_{\alpha\mu}\) gives a torsion \(\psi_{\alpha\ldots\rho}\Gamma^{\rho}\,_{\beta\mu}\mathbf{d}x^{\mu}\wedge \mathbf{d}x^{\alpha}\wedge\ldots\wedge\mathbf{d}x^{\sigma}\wedge\mathbf{d}x^ {\beta}=\mathbf{T}^{\rho}\wedge\boldsymbol{\psi}_{\rho}\) where we define \(\boldsymbol{\psi}_{\rho}\equiv\psi_{\rho\alpha\ldots\sigma}\wedge\mathbf{d}x ^{\alpha}\wedge\ldots\wedge\mathbf{d}x^{\sigma}\). We get the same expression for each vector index so rearrangement gives
\[\boldsymbol{\mathcal{D}}\boldsymbol{\psi} = \mathbf{d}\boldsymbol{\psi}-k\mathbf{T}^{\rho}\wedge\boldsymbol{ \psi}_{\rho}-\frac{1}{2}\boldsymbol{\omega}_{ab}\wedge\sigma^{ab}\boldsymbol{\psi} \tag{40}\]
The same result follows in an orthogonal basis, but it is easiest to see using coordinates.
For the generalized \(\Gamma s\) it is useful to normalize to avoid overall signs. Setting \(h\mathbf{\Gamma}^{k}=\left(h\mathbf{\Gamma}^{k}\right)^{\dagger}\) introduces a factor of \(\left(-1\right)^{k}\), but including the fields the adjoint of the combination \(\boldsymbol{\bar{\psi}}\wedge\mathbf{\Gamma}^{2k+1}\wedge i\mathbf{d} \boldsymbol{\psi}\) introduces an additional factor of \(\left(-1\right)^{k}\). We therefore require no phase factor and can conveniently define
\[\mathbf{\Gamma}^{m} \equiv \frac{1}{m!}\,^{*}\left[\left(\wedge\boldsymbol{\gamma}\right)^{m}\right]\]
for all integers \(m\).
#### 5.5.2 Spin \(\frac{2k+1}{2}\) fields
To start, we take the flat space \(Spin\left(\frac{2k+1}{2}\right)\) action to be
\[S_{k}^{0}=\int\bar{\mathbf{\psi}}\wedge\left(\mathbf{\Gamma}^{2k+1}\wedge i \mathbf{d}\mathbf{\psi}-m\mathbf{\Gamma}^{2k}\wedge\mathbf{\psi}\right) \tag{41}\]
after taking the conjugate and expanding the forms explicitly to check that \(S_{k}^{0}\) is real. Notice that \(\bar{\mathbf{\psi}}\wedge\mathbf{d}\mathbf{\psi}\) is a \(\left(2k+1\right)\)-form and therefore \(S_{k}^{0}\) exists only for \(n\geq 2k+1\). This makes Rarita-Schwinger the maximal case in 4-dimensional spacetime. Then, replacing \(\mathbf{d}\Rightarrow\mathbf{\mathcal{D}}\) using Eq.(40) and symmetrizing, the gravitationally coupled \(Spin\left(\frac{2k+1}{2}\right)\) action is
\[S_{k} = \frac{1}{2}\left(\tilde{S}_{k}+\tilde{S}_{k}^{*}\right)\]
As with the Rarita-Schwinger case, we find the real part of the torsion and \(\sigma^{ab}\) parts. For the torsion terms
\[S_{k}\left(T\right) = \frac{1}{2}\int\bar{\mathbf{\psi}}\wedge\left(\mathbf{\Gamma}^{2k+1} \wedge\left(-ik\mathbf{T}^{a}\wedge\mathbf{\psi}_{a}\right)\right)+\frac{1}{2} \int\left[\bar{\mathbf{\psi}}\wedge\mathbf{\Gamma}^{2k+1}\wedge\left(-ik\mathbf{T}^{a} \wedge\mathbf{\psi}_{a}\right)\right]^{\dagger}\] \[= -\frac{ik}{2}\int\left(\bar{\mathbf{\psi}}\wedge\mathbf{\Gamma}^{2k+1} \wedge\mathbf{T}^{a}\wedge\mathbf{\psi}_{a}+\left(-1\right)^{n+k}\mathbf{T}^{a} \wedge\bar{\mathbf{\psi}}_{a}\wedge\mathbf{\Gamma}^{2k+1}\wedge\mathbf{\psi}\right)\]
while the \(\sigma^{ab}\) terms still give an anticommutator
\[S_{k}\left(\sigma\right) = \frac{1}{2}\int\bar{\mathbf{\psi}}\wedge\left\{\mathbf{\Gamma}^{2k+1}, \sigma^{cd}\right\}\wedge\left(-\frac{i}{2}\mathbf{\omega}_{cd}\wedge\mathbf{\psi}\right)\]
Therefore, the action for gravitationally coupled \(Spin\left(\frac{2k+1}{2}\right)\) fields is
\[S_{k} = \int\bar{\mathbf{\psi}}\wedge\left(\mathbf{\Gamma}^{2k+1}\wedge i \mathbf{d}\mathbf{\psi}-m\mathbf{\Gamma}^{2k}\wedge\mathbf{\psi}\right) \tag{42}\] \[-\frac{ik}{2}\int\left(\bar{\mathbf{\psi}}\wedge\mathbf{\Gamma}^{2k+1} \wedge\mathbf{T}^{a}\wedge\mathbf{\psi}_{a}+\left(-1\right)^{n+k}\mathbf{T}^{a} \wedge\bar{\mathbf{\psi}}_{a}\wedge\mathbf{\Gamma}^{2k+1}\wedge\mathbf{\psi}\right)\] \[+\frac{1}{2}\int\bar{\mathbf{\psi}}\wedge\left\{\mathbf{\Gamma}^{2k+1}, \sigma^{cd}\right\}\wedge\left(-\frac{i}{2}\mathbf{\omega}_{cd}\mathbf{\psi}\right)\]
The spin tensor always contains the anticommutator, which always brings in couplings involving \(\Gamma^{2k-1}\) and \(\Gamma^{2k+3}\) only (see the Appendix). The Dirac field has \(k=0\), so only the \(\Gamma^{3}\) term is possible, while for Rarita-Schwinger fields with \(k=1\) we see both \(\Gamma^{1}\) and \(\Gamma^{5}\).
There are also direct torsion couplings of the form
\[k\bar{\mathbf{\psi}}\wedge\mathbf{\Gamma}^{2k+1}\wedge\mathbf{T}^{a} \wedge\mathbf{\psi}_{a}+c.c.\]
so the \(Spin\left(\frac{2k+1}{2}\right)\) field may emit and absorb torsion. This is absent from Dirac interactions because there is no vector index on \(\psi\), but does show up in the Rarita-Schwinger spin tensor. If the action includes a dynamical torsion term this constitutes a new interaction unless there is a consistent interpretation of torsion in terms of known interactions.
The spin tensor is given by a simple variation, followed by reducing the basis forms to a volume form. The result is
\[\sigma^{cab} = \frac{ik}{2}\left(-1\right)^{kn-k-n+1}\left(\eta^{be}\delta_{f}^ {d}-\left(-1\right)^{k}\eta^{bd}\delta_{f}^{e}\right)\bar{\psi}_{df_{1}\dots f _{k-1}}\Gamma^{[acff_{1}\dots f_{k-1}g_{1}\dots g_{k-1}]}\psi_{eg_{1}\dots g_{ k-1}}\] \[+\frac{i}{4}\left(-1\right)^{kn-k-n+1}\bar{\psi}_{a_{1}\dots a_{ k}}\left\{\Gamma^{[a_{1}\dots a_{k}b_{1}\dots b_{k}c]},\sigma^{ab}\right\}\psi_{b_{1} \dots b_{k}}\delta_{c_{1}\dots c_{2k+1}}^{a_{1}\dots a_{k}b_{1}\dots b_{k}c}\]
The anticommutator is a linear combination of \(\Gamma^{2k-1},\Gamma^{2k+3}\) (See Appendix 6) so together with the torsion contribution we have the original and both adjacent couplings \(\Gamma^{2k-1},\mathbf{\Gamma}^{2k+1},\Gamma^{2k+3}\). It is extremely likely that, like the Rarita-Schwinger field, higher spin fermions drive all invariant parts of the torsion.
Conclusions
We implemented Poincare gauging in arbitrary dimension \(n\) and signature \((p,q)\) using Cartan's methods. The principal fields are the curvature and torsion 2-forms, given in terms of the solder form and local Lorentz spin connection. The inclusion of torsion produces a Riemann-Cartan geometry rather than Riemannian. We found the Bianchi identities and showed that the Riemann-Cartan identities hold if and only if the Riemannian Bianchi identities hold.
Replicating familiar results, we reproduced general relativity in Riemannian geometry by setting the torsion to zero and varying only the metric. The resulting Riemannian geometry is known to be consistent and metric variation leads to a symmetric energy tensor.
We examined sources for the ECSK theory, that is, the gravity theory in Riemann-Cartan geometry found by using the Einstein-Hilbert form of the action with the Einstein-Cartan curvature tensor. The vacuum theory agrees with general relativity even when both the solder form and connection are varied independently, but there are frequently nonvanishing matter sources for both the Einstein tensor and the torsion.
The first issue we dealt with in depth was the choice of independent variables. The spin connection was shown to be the sum of the solder-form-compatible connection and the contorsion tensor \(\boldsymbol{\omega}^{a}_{\ b}=\boldsymbol{\alpha}^{a}_{\ b}+\mathbf{C}^{a}_{\ b}\). We compared and contrasted the resulting two allowed sets of independent variables: the solder form and spin connection \((\mathbf{e}^{a},\boldsymbol{\omega}^{a}_{\ b})\) on the one hand and the solder form and the contorsion tensor \((\mathbf{e}^{a},\mathbf{C}^{a}_{\ b})\) on the other. When choosing the latter pair the compatible part of the spin connection \(\boldsymbol{\alpha}^{a}_{\ b}\) must be treated through its dependence on the solder form. We demonstrated explicitly how the two choices of independent variable differ in their relationship to the Lorentz fibers of the Riemann-Cartan space.
Changing independent variables changes the energy tensor. We showed that the difference between these two choices leads to the difference between the (asymmetric) canonical energy tensor and the (symmetric) Belinfante-Rosenfeld energy tensor. When the field equations are combined both methods yield the same reduced system.
Our second main contribution was a more thorough analysis of sources for torsion. Much, perhaps most, of the research on ECSK theory or its generalizations to include dynamical torsion has restricted attention to Dirac fields as sources. This yields a single axial current and totally antisymmetric torsion. This amounts to only \(n\) of the \(\frac{1}{2}n^{2}\left(n-1\right)\) degrees of freedom of the torsion.
We took the opposite approach, considering fields of _all_ spin. Only scalar and Yang-Mills fields fail to determine nonvanishing torsion. In addition to these we looked at symmetric bosonic kinetic forms and found all to provide sources for torsion. We studied Dirac and Rarita-Schwinger fields in greater depth. After reproducing the well-known result for Dirac fields, we developed a formalism to describe the spin-\(\frac{3}{2}\) Rarita-Schwinger field in arbitrary dimension. Surprisingly, in addition to the dependence on the anticommutator of three gammas with the spin generator, \(\left\{\gamma^{[a}\gamma^{b}\gamma^{c]},\sigma^{de}\right\}\), there is a direct coupling to torsion, \(\psi_{a}\mathbf{T}^{a}\). Continuing, we showed that Rarita-Schwinger fields drive all three independent parts of the torsion: the trace, the totally antisymmetric part, and the traceless, mixed-symmetry residual. Except in dimensions \(5,7,\) and \(9\) the Rarita-Schwinger field has enough degrees of freedom to produce generic torsion.
Acknowledgment: The author wishes to thank Joshua Leiter for numerous discussions, including the Gibbons-Hawking-York boundary term and the independent part of the torsion [47].
|
2304.10538 | Compact Steep Spectrum Radio Sources with Enhanced Star Formation are
Smaller than $10\,$kpc | Compact Steep Spectrum (CSS) radio sources are active galactic nuclei that
have radio jets propagating only on galactic scales, defined as having
projected linear sizes (LS) of up to $20\,$kpc. CSS sources are generally
hosted by massive early-type galaxies with little on-going star formation,
however a small fraction are known to have enhanced star formation. Using
archival data from the Faint Images of the Radio Sky at Twenty cm survey, the
Very Large Array Sky Survey and the Sloan Digital Sky Survey we identify a
volume-limited sample of $166$ CSS sources at $z<0.2$ with
$L_{1.4\,\text{GHz}}>10^{24}\,\text{W}\,\text{Hz}^{-1}$. Comparing the star
formation rates and linear sizes of these CSS sources, we find that the
$\approx14\,\%$ of CSS sources with specific star formation rates above
$0.01\,\text{Gyr}^{-1}$ all have $\text{LS}<10\,$kpc. We discuss the possible
mechanisms driving this result, concluding that it is likely the excess star
formation in these sources occurred in multiple bursts and ceased prior to the
AGN jet being triggered. | Yjan A. Gordon, Christopher P. O'Dea, Stefi A. Baum, Keith Bechtol, Chetna Duggal, Peter S. Ferguson | 2023-04-20T17:59:59Z | http://arxiv.org/abs/2304.10538v1 | # Compact Steep Spectrum Radio Sources with Enhanced Star Formation are Smaller than \(10\,\)kpc
###### Abstract
Compact Steep Spectrum (CSS) radio sources are active galactic nuclei that have radio jets propagating only on galactic scales, defined as having projected linear sizes (LS) of up to \(20\,\)kpc. CSS sources are generally hosted by massive early-type galaxies with little on-going star formation, however a small fraction are known to have enhanced star formation. Using archival data from the Faint Images of the Radio Sky at Twenty cm survey, the Very Large Array Sky Survey and the Sloan Digital Sky Survey we identify a volume-limited sample of 166 CSS sources at \(z<0.2\) with \(L_{1.4\,{\rm GHz}}>10^{24}\,{\rm W}\,{\rm Hz}^{-1}\). Comparing the star formation rates and linear sizes of these CSS sources, we find that the \(\approx 14\,\)% of CSS sources with specific star formation rates above \(0.01\,{\rm Gyr}^{-1}\) all have \({\rm LS}<10\,\)kpc. We discuss the possible mechanisms driving this result, concluding that it is likely the excess star formation in these sources occurred in multiple bursts and ceased prior to the AGN jet being triggered.
Active galactic nuclei (16), AGN host galaxies (2017), Extragalactic radio sources (508), Radio Galaxies (1343), Star formation (1569)
## 1 Introduction
Active Galactic Nuclei (AGN) are the phenomenon whereby matter is accreting onto the central supermassive black hole of their host galaxies (Salpeter, 1964). A small fraction of AGN produce particle jets that result in radio emission via mechanisms such as synchrotron radiation and inverse Compton scattering (Padovani, 2017; Blandford et al., 2019). The jets produced by these radio loud AGN (RLAGN) can sometimes propagate well beyond the host galaxy, giving rise to large scale double-lobed structures such as Fanaroff and Riley class I and II radio galaxies (FRIs and FRIIs, Fanaroff and Riley, 1974) that can span hundreds of kiloparsecs or more (e.g., Willis et al., 1974; Ishwara-Chandra and Saikia, 1999; Dabhade et al., 2017). In contrast to FRIs and FRIIs are compact RLAGN that have radio emission on scales similar to or smaller than the host galaxy.
Compact Steep Spectrum (CSS) radio sources have radio extents smaller than \(\sim 20\,\)kpc and radio spectral indices of \(\alpha<-0.5\), where spectral index, \(\alpha\), is related to flux density, \(S\), and frequency, \(\nu\), by \(S\propto\nu^{\alpha}\)(Fanti et al., 1990; O'Dea, 1998; O'Dea and Saikia, 2021). It is thought that at least some CSS sources are young AGN that will evolve into larger radio morphologies (Fanti et al., 1995; O'Dea, 1998; An and Baan, 2012; O'Dea and Saikia, 2021). This hypothesis is based on very long baseline interferometry (VLBI) observations of powerful CSS sources that show double-lobed radio morphologies analogous to FRIs and FRIIs but on a much smaller scale (Spencer et al., 1991; Dallacasa et al., 1995), and jet proper motions indicative of a short travel time from the central engine (Owsianik and Conway, 1998; Polatidis and Conway, 2003; An et al., 2012). The young AGN scenario is further supported by CSS sources having host galaxies similar those of larger radio galaxies.
An alternative to the young AGN scenario, is that the radio jets in CSS sources are unable to travel as easily through the interstellar medium (ISM), a phenomenon
known as 'frustration' (van Breugel et al., 1984; Wilkinson et al., 1984; O'Dea et al., 1991). Frustration can occur either as a result of intrinsically weak jet power, the jet strongly interacting with a dense ISM, or a combination of these factors. The jet frustration paradigm is supported by high resolution images that show distinct asymmetry in some CSS radio sources (Saikia et al., 1995; Saikia and Gupta, 2003; Orienti et al., 2007).
The galaxies that host CSS sources are generally massive early-type galaxies with little star formation (SF), but a subset of the CSS population are known to exhibit enhanced SF (de Vries et al., 1998, 2000; Drake et al., 2004; Tadhunter et al., 2011; Dicken et al., 2012; O'Dea and Saikia, 2021). A systematic study of SF in a large sample of CSS host galaxies may help shed light on to why some CSS sources are star forming and constrain the evolutionary path of these RLAGN. Such studies have previously been problematic as most samples of CSS sources consisted of objects with high radio luminosity that are rare in the local Universe. Consequently, the relatively shallow wide-field multiwavelength surveys that can readily provide star formation rates (SFRs) usually don't cover the high luminosity radio sources and expensive targeted observations are often necessary. The advent of deep, wide-field radio continuum surveys with high angular resolution is now making these types of systematic studies feasible (Sadler, 2016).
The Faint Images of the Radio Sky at Twenty cm survey (FIRST, Becker et al., 1995) and the Very Large Array Sky Survey (VLASS, Lacy et al., 2020), which have angular resolutions of \(5.4^{\prime\prime}\) and \(3^{\prime\prime}\) respectively, are well suited to identifying compact radio sources brighter than \(\approx 1\) mJy. Furthermore, these surveys observe at different frequencies; FIRST at 1.4 GHz and VLASS at 3 GHz. Using FIRST and VLASS data together is a pragmatic approach to measuring the spectral indices of large numbers of faint compact radio sources (Gordon et al., 2021). Both FIRST and VLASS cover the \(\approx 10,000\) deg\({}^{2}\) footprint of the Sloan Digital Sky Survey (SDSS, York et al., 2000) which provides optical measurements and derived properties, including SFRs, for \(\sim 10^{6}\) galaxies. Combining FIRST, VLASS and SDSS therefore has the potential to be an effective method for studying SF in a large number of CSS sources in the local Universe.
In this Letter we use data from FIRST, VLASS and SDSS to investigate the relationship between radio source size and SF in CSS sources. The selection of CSS sources is described in Section 2. In Section 3 we compare the radio sizes and SFRs of our CSS sources. We discuss our results in Section 4 and state our conclusions in Section 5. Throughout this work we assume a flat \(\Lambda\)CDM cosmology with \(h=0.7\), \(H_{0}=100h\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{m}=0.3\) and \(\Omega_{\Lambda}=0.7\).
## 2 Sample Selection
To identify likely CSS sources we start with the Best and Heckman (2012) catalog of radio galaxies in the SDSS 7th Data Release (DR7, Abazajian et al., 2009) spectroscopic sample. This catalog contains the host IDs of the radio sources, identifies sources where the radio emission is likely due to SF rather than an AGN, and where possible classifies RLAGN as either low- or high-excitation radio galaxies (LERGs or HERGs). As we are interested in compact radio AGN in this work, we select objects from the Best and Heckman (2012) catalog that are associated with a single detection in FIRST, excluding multi-FIRST-component sources from consideration.
High frequency (\(\nu\sim 3\) GHz) information on our sources is obtained by cross matching with the VLASS Epoch 1 component catalog (Gordon et al., 2021). We only search for VLASS components brighter than 3 mJy/beam, as fainter components have less reliable flux density measurements (See section 3 of Gordon et al., 2021). A search radius of \(5^{\prime\prime}\) is used which, given the on-sky component density of VLASS at \(S>3\) mJy (\(\sim 18\) deg\({}^{-2}\)), has an expected contamination level from false-positive matches of less than 0.05 %. The 3 GHz flux density of our sources is then scaled by 1/0.87 to account for the systematic underestimation of flux density measurements in the VLASS catalog reported in Gordon et al. (2021). With flux densities at two different frequencies in hand, we determine the spectral index, \(\alpha\), between 1.4 GHz and 3 GHz for our sources.
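Both of the numbers just quoted are easy to reproduce. The sketch below is purely illustrative (the function and variable names are ours, not part of any survey pipeline): it estimates the chance-coincidence rate for a \(5^{\prime\prime}\) match radius given the quoted VLASS component density, and evaluates the two-point spectral index after the 1/0.87 flux-density correction.

```python
import numpy as np

# Chance of a spurious VLASS match within a 5" radius, for a surface
# density of ~18 components per square degree brighter than 3 mJy/beam.
density_per_deg2 = 18.0
radius_deg = 5.0 / 3600.0
p_false = density_per_deg2 * np.pi * radius_deg**2
print(f"false-match probability per source: {p_false:.1e}")  # ~1e-4, i.e. < 0.05 %

def spectral_index(s_first_mjy, s_vlass_mjy, nu1=1.4e9, nu2=3.0e9):
    """Two-point spectral index alpha, with S proportional to nu**alpha.

    The VLASS flux density is divided by 0.87 to undo the systematic
    underestimation reported for the epoch 1 component catalog.
    """
    s_vlass_corrected = s_vlass_mjy / 0.87
    return np.log(s_vlass_corrected / s_first_mjy) / np.log(nu2 / nu1)

# a source fading from 10 mJy at 1.4 GHz to 5 mJy at 3 GHz is steep spectrum
print(spectral_index(10.0, 5.0))  # roughly -0.73, i.e. alpha < -0.5
```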
The projected radio extents of CSS sources are smaller than 20 kpc (e.g. Fanti et al., 1985; O'Dea and Baum, 1997; O'Dea and Saikia, 2021). The VLASS catalog of Gordon et al. (2021) includes measurements of the source angular size after deconvolution from the beam1. Where the deconvolved angular size is non-zero, this is used to calculate the projected linear size (LS) of the source. If the source is so compact that it has a deconvolved angular size of zero in VLASS, then we use the uncertainty in the angular size to estimate an upper limit on the LS.
Footnote 1: These measurements are produced by the source-finder PyBDSF (Mohan and Rafferty, 2015).
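Converting a deconvolved angular size into a projected linear size under the adopted cosmology is a short astropy calculation; the sketch below illustrates the conversion rather than reproducing the exact measurement pipeline.

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# flat LambdaCDM with h = 0.7 and Omega_m = 0.3, as assumed in this work
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def projected_linear_size(theta_arcsec, z):
    """Projected linear size in kpc of a source with deconvolved
    angular size theta_arcsec at redshift z."""
    scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
    return (theta_arcsec * u.arcsec * scale).to(u.kpc).value

# e.g. a 3 arcsec source at z = 0.1 spans roughly 5.5 kpc
print(projected_linear_size(3.0, 0.1))
```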
We select our likely CSS sources as having LS \(<20\) kpc and \(\alpha+\sigma_{\alpha}<-0.5\), identifying \(1,109\) objects. In Figure 1 we compare the redshifts and 1.4 GHz luminosities of this sample. By using only sources at \(z<0.2\) we select a volume-limited sample complete down to radio luminosities of \(L_{1.4\,\rm GHz}>10^{24}\) W Hz\({}^{-1}\). This sample contains 259 CSS candidates, all but 38 (15 %) of which
are classified as LERGs. The radiatively efficient central engines in HERGs can impact the observed properties of the host galaxy, including spectral line measurements used in determining SFRs. Conversely, the radiatively inefficient central engines of LERGs don't produce the high energy photons necessary to bias spectral line measurements (Hardcastle et al., 2006). We therefore exclude the 38 sources not classified as LERGs. Finally, \(\approx 20\) % of single-FIRST-component RLAGN are expected to be multi-component sources in VLASS (Gordon et al., 2019). To ensure we are only using sources with reliable sizes and spectral indices, we visually inspect the VLASS maps using SAOImage DS9 (Joye & Mandel, 2003) with the catalog components overlaid. As a result we remove 55 multi-VLASS-component sources from our sample, leaving 166 CSS sources that we use for the analysis presented in this Letter.
## 3 Comparing Star Formation and Linear Size in CSS Sources
All of our CSS sources have host galaxies with spectral line measurements, stellar masses (\(M_{*}\)) and SFRs in the Max Plank institut fur Astrophysik/Johns Hopkins University (MPA/JHU) value added catalog for SDSS DR72(Kauffmann et al., 2003; Brinchmann et al., 2004; Tremonti et al., 2004). In Figure 2 we plot the stellar masses and SFRs of our CSS sources. For reference we also show the distribution of all galaxies at \(z<0.2\) in SDSS DR7 as grey shaded contours. Galaxies in SDSS are split into two populations of'star-forming' and 'passive' at a specific star formation rate (sSFR = SFR/\(M_{*}\)) of approximately \(0.01\,\mathrm{Gyr}^{-1}\) (shown by the black dashed line in Figure 2). The majority of our CSS sources are hosted by passive high-mass galaxies, with only 24 (14 %) having sSFR \(>0.01\,\mathrm{Gyr}^{-1}\). We confirm these have a similar redshift distribution to the passive CSS hosts in our sample by performing a Kolmogorov-Smirnov (KS) test, which returns a \(p-\)value of 0.75.
Footnote 2: [https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/](https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/)
With SF being rare in CSS sources, one might ask whether there are differences between CSS sources with SF and CSS sources hosted by passive galaxies? One of the most fundamental properties of RLAGN is their size, i.e., how far the jets have travelled from the central engine. To assess if the sizes of CSS sources with SF and passive hosts differ we plot the LS of our CSS sources versus their host sSFR in Figure 3a. CSS sources with sSFR \(<0.01\,\mathrm{Gyr}^{-1}\) are seen at all sizes in our sample (\(0<\mathrm{LS}<20\,\mathrm{kpc}\)). However, higher sSFRs (sSFR \(>0.01\,\mathrm{Gyr}^{-1}\)) are only seen in CSS sources with LS \(\lesssim 8\) kpc. If we divide the CSS population at LS \(=10\) kpc, \(17.9^{+3.8}_{-2.8}\) % of sources with LS \(<10\) kpc have sSFR \(>0.01\,\mathrm{Gyr}^{-1}\), compared to \(0.0^{+0.5}_{-0.0}\) % at LS \(\geq 10\) kpc. The uncertainties in these population fractions are estimated using the binomial approach outlined in Cameron (2011), and suggest a \(\approx 2.9\sigma\) excess of star-forming hosts in the smaller CSS sources.
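Cameron (2011) advocates quantiles of a Beta\((k+1,\,n-k+1)\) posterior for a measured fraction of \(k\) successes in \(n\) trials. A minimal sketch of that estimate follows; note that the denominator of 134 is inferred from the percentage quoted above rather than taken from the catalog itself.

```python
from scipy.stats import beta

def binomial_fraction(k, n, level=0.683):
    """Fraction k/n with the equal-tailed Beta(k+1, n-k+1) credible
    interval of Cameron (2011); level=0.683 gives ~1-sigma bounds."""
    lo = beta.ppf((1.0 - level) / 2.0, k + 1, n - k + 1)
    hi = beta.ppf(1.0 - (1.0 - level) / 2.0, k + 1, n - k + 1)
    frac = k / n
    return frac, frac - lo, hi - frac

# 24 star-forming hosts among ~134 CSS sources smaller than 10 kpc
print(binomial_fraction(24, 134))  # fraction ~0.18 with ~0.03 errors
```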
The SFR measurements in SDSS are based on either the H\(\alpha\) luminosity or the strength of the \(4,000\) Å break, \(D_{4000}\), depending on the spectral line properties of the galaxy (Brinchmann et al., 2004). In panels b and c of Figure 3 we show that both of these observable
Figure 1: Redshift and 1.4 GHz luminosity distributions for our selection of CSS sources. Our volume limited sample (red circles) is defined as having \(z<0.2\) and \(L_{1.4\,\mathrm{GHz}}>10^{24}\,\mathrm{W}\,\mathrm{Hz}^{-1}\).
Figure 2: The stellar masses and star formation rates of our CSS sources (red circles). The grey shaded contours show the distributions for SDSS DR7. The black dashed line shows a fixed specific star formation rate of 0.01 Gyr\({}^{-1}\).
properties complement our findings with respect to the derived sSFRs shown in Figure 3a. Where H\(\alpha\) is detected (\(S/N>3\)), the strongest H\(\alpha\) emission lines are found nearly exclusively in CSS sources with LS \(<6\) kpc (Figure 3b). When considering \(D_{4000}\), the weakest breaks-indicating young stellar populations-are found only in CSS sources with LS \(\lesssim 10\) kpc (Figure 3c).
A further test of the relative compactness of CSS sources with enhanced SF is to investigate how the infrared (IR) colors of the host change with LS. To this end we obtain IR information from the Wide-field Infrared Survey Explorer telescope (WISE, Wright et al., 2010) AllWISE catalog (Cutri et al., 2012, 2013). The WISE W2 (4.3\(\mu\)m) and W3 (12\(\mu\)m) filters can be used to identify star-forming galaxies. Additionally, the W1 (3.4\(\mu\)m) and W2 filters can identify galaxies where the IR colors are contaminated by AGN emission. From our sample of CSS sources, 102 (61 %) are detected (\(S/N>2\)) in the W1, W2 and W3 bands. Of these 102 galaxies, 6 have W\(1-\mathrm{W2}>0.5\) indicating that their IR colors are dominated by the AGN (Mingo et al., 2016). For the remaining 96 CSS sources, we plot their W\(2-\mathrm{W3}\) color against LS in Figure 3d. Adopting the criteria of Mingo et al. (2016), galaxies with
* W\(2-\mathrm{W3}<1.6\) are passive,
* \(1.6<\mathrm{W2-W3}<3.4\) are star-forming,
* and \(\mathrm{W2-W3}>3.4\) are (Ultra) Luminous Infrared Galaxies ([U]LIRGs).
Figure 3: Comparisons of star-forming indicators and linear size (LS) for our CSS sources. Panel a shows the SDSS sSFR measurements for our CSS sources with a black dashed line indicating sSFR = 0.01 Gyr\({}^{-1}\). Panel b shows the equivalent width of H\(\alpha\) (EW\({}_{\mathrm{H}\alpha}\)) for our CSS sources where this line is detected at \(S/N>3\), while panel c shows the strength of the 4,000 Å break. In Panel d the WISE colors are shown for galaxies where the AGN does not dominate the IR color (\(\mathrm{W1-W2}<0.5\)). Here the dot-dashed lines separate colors associated with passive galaxies, star-forming galaxies and (U)LIRGs. In all panels orange triangles denote CSS sources where the LS is an upper limit, and the black cross shows the median uncertainty for the data points.
Panel d of Figure 3 is consistent with panels a-c, showing that nearly all CSS sources with IR colors indicative of SF have LS \(<8\) kpc. For CSS sources with LS \(<10\) kpc, \(40.0^{+5.5}_{-5.0}\) % have star-forming WISE colors. On the other hand, only \(5.9^{+11.3}_{-1.9}\) % of CSS sources with LS \(\geq 10\) kpc have WISE colors associated with star-forming galaxies-a deficit relative to the sub 10 kpc population at \(\approx 2.8\sigma\) confidence.
## 4 Discussion
### Physical Interpretation
Our data shows that where excess SF is present in CSS sources, those sources are limited to scales smaller than \(\approx 10\) kpc. At first glance, there are three likely possibilities that might explain this observation.
1. The jet itself has triggered a brief period of SF (e.g. Rees, 1989; Labiano et al., 2008; Duggal et al., 2021).
2. A dense ISM is inhibiting the propagation of the radio jet resulting in its confinement to scales \(\lesssim 10\) kpc.
3. The AGN is younger than the SF, limiting the time available for the radio jets to propagate away from the central engine.
To explore these scenarios we compare the expected evolution of radio jets in these sources to the timescale on which the SF is detectable. In order to estimate the typical age of the jets in our CSS sources with enhanced SF, we simulate three 'toy model' jets using the semi-analytical radio jet evolution code of Hardcastle (2018). The median 1.4 GHz luminosity of our sample is \(10^{24.3}\) W Hz\({}^{-1}\). For this simulation we assume a universal pressure profile (Arnaud et al., 2010) for galaxies in a halo of mass \(M_{500}=10^{13.5}\) M\({}_{\odot}\). In this scenario, a radio source with LS = 10 kpc and \(L_{1.4\,\mathrm{GHz}}=10^{24.3}\) W Hz\({}^{-1}\) is expected to have a jet power, \(Q\), of \(\sim 10^{36}\) W (see Figure 4a). Such a jet will have taken \(\approx 8\) Myr to reach its current size, and would reach a linear size of \(\approx 19\) kpc within 20 Myr of being switched on (see Figure 4b).
The increase in radio luminosity shown for our toy model jets as the radio source grows is consistent with our data. In Figure 5a we show the distributions of \(L_{1.4\,\mathrm{GHz}}\) for small (LS \(<10\) kpc) and larger (LS \(\geq 10\) kpc) CSS sources in our sample. Performing a KS test returns a \(p\)-value of \(8\times 10^{-3}\), showing these distributions to be statistically different. The smaller CSS sources have a median radio luminosity of \(10^{24.26}\) W Hz\({}^{-1}\), while the larger sources have a median value of \(10^{24.49}\) W Hz\({}^{-1}\). Such a change in luminosity for a \(10^{36}\) W jet would be expected as it grows from a linear size of \(\approx 6\) kpc to \(\approx 12\) kpc (see Figure 4a). The \(L_{1.4\,\mathrm{GHz}}\) distribution of the CSS sources with sSFR \(>0.01\) Gyr\({}^{-1}\) (Figure 5b) is consistent with the luminosity distribution of the smaller CSS sources, having a KS derived \(p\)-value of 0.26.
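The comparisons above are standard two-sample Kolmogorov-Smirnov tests; schematically (with synthetic placeholder arrays standing in for the measured luminosity distributions of the two subsamples):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# placeholder log-luminosity samples; the real inputs are the measured
# L_1.4GHz values of the LS < 10 kpc and LS >= 10 kpc subsamples
logL_small = rng.normal(24.26, 0.3, size=130)
logL_large = rng.normal(24.49, 0.3, size=35)

stat, p_value = ks_2samp(logL_small, logL_large)
print(f"KS statistic = {stat:.2f}, p = {p_value:.4f}")
```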
Figure 4: Three toy model jets simulated using the semi-analytical code of Hardcastle (2018) with powers \(Q=10^{35}\) W (circles), \(Q=10^{36}\) W (squares) and \(Q=10^{37}\) W (stars). Panel a shows the evolution of 1.4 GHz luminosity with linear size colored by the jet age. Panel b shows the linear size growth as a function of jet age, with the points colored by \(L_{1.4\,\mathrm{GHz}}\). The red dashed line in panel b shows the 20 Myr typical lifetime of O-type stars.
The SF indicators shown in Figure 3 are visible for different time periods after SF ends. H\(\alpha\) emission is the result of ionisation of the ISM by massive O-type stars, limiting its visibility to \(\approx 20\) Myr after the cessation of SF (Kennicutt, 1998). Conversely, \(D_{4000}\) is affected by the entire stellar population and evolves slowly following a starburst, taking several hundred Myr for a strong break to develop (Goto et al., 2008). IR colors resulting from SF evolve on a timescale between the two extremes of H\(\alpha\) and \(D_{4000}\). The WISE W3 band traces SF through the polycyclic aromatic hydrocarbons associated with B-type stars that live for \(\approx 100\) Myr following a starburst (Peeters et al., 2004; Jarrett et al., 2011).
Assuming a jet age on the order of \(\sim 10\) Myr, the absence of low \(D_{4000}\) values in CSS sources with LS \(\gtrsim 10\) kpc suggests that the bulk of SF ceased hundreds of Myr prior to the jet being triggered. Our results are thus inconsistent with the AGN jet triggering the SF unless jet propagation is frustrated for hundreds of Myr. On the other hand, the presence of strong H\(\alpha\) emission in CSS sources with LS \(\lesssim 10\) kpc is indicative of active SF as recently as 10 Myr prior to the jet being triggered. Future observations that measure the jet (a)symmetry and hotspot proper motions are necessary to test if these sources are indeed frustrated.
A tantalising explanation for our results is that of galaxy mergers-a known trigger for both SF and AGN (e.g. Ellison et al., 2013; Pearson et al., 2019; Gao et al., 2020; Pierce et al., 2022, 2023). In galaxy mergers SF is episodic and the time required for gas to fall into the central engine means that RLAGN are not expected to be triggered until a few hundred Myr after the first starburst (Tadhunter et al., 2005; Peirani et al., 2010; Shabala et al., 2017). A final starburst in the merger sequence that ceases \(\approx 10\) Myr prior to the jet being triggered, and has a much smaller burst fraction than the initial starburst several hundred Myr earlier, might produce the observed H\(\alpha\) with a limited impact on \(D_{4000}\). It is therefore prudent to ask if our sample of CSS sources with enhanced SF are associated with mergers? To this end we visually inspect optical images obtained from the 9th data release of the Dark Energy Spectroscopic Instrument Legacy Imaging Surveys (Dey et al., 2019) for the 24 CSS sources with sSFR \(>0.01\) Gyr\({}^{-1}\). We find that 11 (46 %) show clear evidence of tidal features indicative of a recent major galaxy-galaxy interaction. This is a higher incidence than the 28 % of the LERG population shown to have tidal features in Gordon et al. (2019). The relatively high fraction of our sample with tidal features suggests that mergers likely explain at least some CSS sources with enhanced SF, and this warrants further study.
### Are our CSS Sources Really Variable Sources?
We have selected our CSS sources using legacy data from two different radio surveys. Because these observations were not made simultaneously-two decades separate FIRST and VLASS-it is possible the difference in flux density measurements may be an effect of radio source variability rather than the shape of spectral energy distribution. Variable radio sources typically have very compact morphologies. Nyland et al. (2020) show that sources showing variability between FIRST
Figure 5: Normalised distributions of \(L_{1.4\,\rm GHz}\) for subsamples of our CSS sources. Both panels show the full CSS sample as a solid grey histogram, and the CSS sources with LS \(<10\) kpc as blue solid line. Panel a shows a comparison with sources having LS \(\geq 10\) kpc (red dashed line), while Panel b shows the radio luminosities of CSS sources with sSFR \(>0.01\) Gyr\({}^{-1}\) (orange dot-dashed line).
and VLASS have \(\rm LS<1\,kpc\), while Wolowska et al. (2021) use VLBI imaging to show such sources typically have sizes of just a few tens of parsecs.
Of the 24 CSS sources with \(\rm sSFR>0.01\,Gyr^{-1}\), only 3 are completely unresolved by VLASS (shown as upper limits in Figure 3). A further 9 of these 24 sources have measured linear sizes below \(\rm 2\,kpc\), notably all of which are greater than \(\rm 1\,kpc\). The other half of our CSS sample with enhanced SF have \(\rm 2\,kpc<LS<8\,kpc\) and are therefore larger than variable sources are expected to be. If we were to cautiously assume that the 12 sources within our sample with enhanced SF and \(\rm LS<2\,kpc\) are all variable sources then our conclusion would still be valid: CSS sources with enhanced SF are smaller than \(\approx 10\,\rm kpc\).
## 5 Conclusions
In this Letter we have systematically investigated the relationship between star formation and radio source size in CSS sources. We find that where enhanced SF is present the radio source has \(\rm LS\lesssim 10\,kpc\), while passive hosts are seen in CSS sources with \(0\leq\rm LS<20\,kpc\). Based on simulated jet propagation times, the absence of CSS hosts with weak \(4,000\,\rm\AA\) break strengths at \(\rm LS\gtrsim 10\,kpc\) suggests the bulk of SF ceased several hundred Myr before the AGN jet was triggered. The presence of H\(\alpha\) emission in CSS sources with \(\rm LS<10\,kpc\) indicates that some SF occurred \(\approx 10\,\rm\ Myr\) prior to the jet triggering. We interpret this apparent ambiguity as being the result of episodic SF in these CSS sources where the later starbursts have a lower 'burst fraction', potentially resulting from galaxy-galaxy interactions.
We thank the anonymous referee for their helpful report that has improved the quality of this work. Y.A.G. is supported by U.S. National Science Foundation grant AST 20-09441. C.P.O., S.A.B. and C.D. are supported by NSERC, the National Sciences and Engineering Research Council of Canada. This work made use of observations from the Very Large Array (VLA), SDSS and WISE.
|
2306.06523 | Finding Hamiltonian cycles with graph neural networks | We train a small message-passing graph neural network to predict Hamiltonian
cycles on Erd\H{o}s-R\'enyi random graphs in a critical regime. It outperforms
existing hand-crafted heuristics after about 2.5 hours of training on a single
GPU. Our findings encourage an alternative approach to solving computationally
demanding (NP-hard) problems arising in practice. Instead of devising a
heuristic by hand, one can train it end-to-end using a neural network. This has
several advantages. Firstly, it is relatively quick and requires little
problem-specific knowledge. Secondly, the network can adjust to the
distribution of training samples, improving the performance on the most
relevant problem instances. The model is trained using supervised learning on
artificially created problem instances; this training procedure does not use an
existing solver to produce the supervised signal. Finally, the model
generalizes well to larger graph sizes and retains reasonable performance even
on graphs eight times the original size. | Filip Bosnić, Mile Šikić | 2023-06-10T21:18:31Z | http://arxiv.org/abs/2306.06523v1 | # Finding Hamiltonian cycles with graph neural networks
###### Abstract
We train a small message-passing graph neural network to predict Hamiltonian cycles on Erdos-Renyi random graphs in a critical regime. It outperforms existing hand-crafted heuristics after about 2.5 hours of training on a single GPU. Our findings encourage an alternative approach to solving computationally demanding (NP-hard) problems arising in practice. Instead of devising a heuristic by hand, one can train it end-to-end using a neural network. This has several advantages. Firstly, it is relatively quick and requires little problem-specific knowledge. Secondly, the network can adjust to the distribution of training samples, improving the performance on the most relevant problem instances. The model is trained using supervised learning on artificially created problem instances; this training procedure does not use an existing solver to produce the supervised signal. Finally, the model generalizes well to larger graph sizes and retains reasonable performance even on graphs eight times the original size.
Machine learning, Neural nets, Graph algorithms, Heuristics design
## I Introduction
When dealing with problems that are computationally too costly to solve explicitly, such as NP-hard problems, it is common to rely on heuristics. The idea of using neural networks to train such heuristics is quite appealing and has attracted considerable interest over the years. One aims to enhance an algorithm, such as greedy search, with a neural network module that is trained to improve the decision-making of the algorithm. See [4, 8] or [29] for an introduction and an overview of the area. In practice, problem instances typically come from a distribution with specific biases which are hard to describe explicitly. These can be exploited by a neural network. As an illustration, let us consider the Hamiltonian cycle problem (HCP), which is at the core of this paper (nodes in the _cycle_ can not repeat). It asks the following:
**Problem 1** (HCP).: _Determine whether or not there exists a cycle that passes through all vertices of a given graph. If it exists, such a cycle is called a Hamiltonian cycle, and the graph is said to be Hamiltonian._
The general HCP is known to be NP-complete and thus computationally intractable. Currently, the fastest known exact solution algorithm is due to [5] and has worst-case complexity of \(\mathcal{O}(1.657^{n})\).
As far as applications are concerned, HCP is used to improve runtimes of rendering engines, see [2]. To do so, one solves the HCP for a dual graph of triangulation and renders the triangles in that order which reduces the number of points to process. Another application of HCP comes from genomics, more specifically, the problem of de novo genome assembly. The task here is to reconstruct the genetic material of an organism, i.e. the exact sequence of nucleobases on all of its chromosomes, from a large number of sequenced fragments called _reads_. As chromosomes contain hundreds of millions bases, correctly assembling a single one is already a huge undertaking, see [19] for an example. Interpreting overlaps between reads as edges, after preprocessing and cleaning (see [32]), one ends up with a _string graph_ as proposed in [20]. The Hamiltonian cycle in the string graph corresponds to the correct assembly of the chromosome. For more details see [22, 3, 28] and [14]. Both triangular meshes of 3d objects and string graphs of various assemblers (such as [3] or [28]) have specific structures and statistical properties arising from the context. These could make solving the HCP easier but are difficult to exploit directly. We show here how to exploit them using graph neural networks in a similarly specific setting of Erdos-Renyi random graphs.
For HCP in general, heuristics based on Hopfield networks were already trained in the early 90-ties, see [17, 18]. More recently, however, the area of geometric deep learning and graph neural networks has seen rapid developments and produced neural network layers such as message passing [9] or graph attention layers [30]. These layers are built to exploit any graph structure in data and can handle arbitrarily large graphs with a limited set of parameters, resembling convolution layers in computer vision. They have found applications in image and text processing, combinatorial optimization, physics, chemistry [9] and biology [22]. See [35] and [7] for a deeper dive into the area. In particular, they are excellent candidates for heuristics of graph-based problem. However, most efforts so far have been directed towards combinatorial optimization problems, the two-dimensional traveling salesman problem in particular. Heuristics for the 2d-TSP based on transformer architecture were trained in [16, 6] and those based on graph
neural networks in [34] and [12]. The state-of-the-art result is achieved in [6] where a comprehensive list of references can be found as well. It has to be noted that previously mentioned models still perform worse than the Concorde TSP solver [1], a state-of-the-art _exact_ solver based on branch and bound search combined with the cutting plane method. Nevertheless, theoretical complexities of neural network models are superior to Concorde. Let us also mention [13, 26] and [27] which work with general combinatorial optimization and constraint satisfaction problems.
In this paper we present a HCP solver based on _graph_ neural networks and show that it easily outperforms most hand-made heuristics. The code is available at [https://github.com/lbcb-sci/GNNs-Hamiltonian-cycles](https://github.com/lbcb-sci/GNNs-Hamiltonian-cycles).
## II Relation to TSP and 2d-TSP
It is known that the HCP can be reformulated as a special case of the _general traveling salesman problem (TSP)_:
**Problem 2** (TSP).: _Given a graph with a non negative length assigned to each edge, find the shortest cycle passing through all its vertices._
Hence, TSP solvers can be used for HCP and we shall exploit this by using _Concorde TSP solver_, see [1], to evaluate the performance of our models in Section V. While it is tempting to assume that all papers studying TSP are immediately applicable to the HCP, this _is not the case at all_. In particular, papers presenting neural network TSP solvers, such as [6, 12, 16] or [34] only study the special case of _two-dimensional TSP_:
**Problem 3** (2d-TSP).: _Given a set of points in the unit square \([0,1]^{2}\), find the shortest (in terms of Euclidean distance) cycle which passes through all of them._
The 2d-TSP introduces two simplifications to the general TSP:
* graphs are always fully connected and
* distances between nodes comply with Euclidean structure (triangle inequality).
Only \(2n\) point coordinates are required to describe a 2d-TSP instance, in contrast to the \(n^{2}-n\) adjacency matrix weights needed for the general TSP. Moreover, 2d-TSP solvers cannot be used to solve the HCP. Instead, we find it better to view the HCP and the 2d-TSP as two quite different aspects of the general TSP. The HCP focuses on complexities arising from the discrete connectivity structure, while the 2d-TSP deals with difficulties coming from the choice of edge lengths.
## III Problem setup
We only consider simple, undirected graphs and denote a typical graph example by \(G\) and its size (number of nodes) by \(n\). The HCP is classically posed as a decision problem: _Determine whether the graph contains a Hamiltonian cycle or not_. However, to put more emphasis on finding the actual cycle, which is important in practice, we also require that solvers produce at least one Hamiltonian cycle. In case the output of a solver is not a valid Hamiltonian cycle, which is straightforward to check, we assume the solver predicted that no Hamiltonian cycle exists.
### _Inputs and outputs_
A solver receives as input a graph \(G\) and outputs a walk \(v_{1}v_{2}\ldots v_{k}\) on \(G\) proposing a Hamiltonian cycle. The walk is considered to be closed if \(v_{1}=v_{k}\) and thus is a Hamiltonian cycle only if \(k=n+1\) and nodes \(v_{1},v_{2},\ldots v_{k-1}\) are all distinct.
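Since this check is how a solver's output is judged throughout the paper, a minimal sketch of it is given below (the function name is ours; the graph is assumed to be a `networkx.Graph`).

```
import networkx as nx

def is_valid_hamiltonian_cycle(graph: nx.Graph, walk: list) -> bool:
    """Check whether a proposed closed walk v_1 ... v_k is a Hamiltonian cycle."""
    n = graph.number_of_nodes()
    # the walk must be closed and have exactly n + 1 entries (k = n + 1)
    if len(walk) != n + 1 or walk[0] != walk[-1]:
        return False
    # the first n nodes must be pairwise distinct (hence cover all vertices)
    if len(set(walk[:-1])) != n:
        return False
    # every consecutive pair must be an edge of the graph
    return all(graph.has_edge(u, v) for u, v in zip(walk, walk[1:]))
```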
### _Evaluation distribution_
The performance of HCP heuristics depends heavily on properties of graphs they are required to solve. Indeed, it is reasonable to have heuristics constructed specifically to achieve good performance on particular types of graphs, such as duals of triangulations or string graphs mentioned in Section I. As there are many possible applications of the HCP, finding a good class of evaluation graphs is a challenging task. Currently at least, there seems to be no agreed-upon class for this purpose. There are datasets of collected HCP problems, see, for example, [23] or [10], but they are not quite large enough to train neural networks on. A natural approach, used in early works such as [17, 18, 33] is to use random graphs generated by adding edges between pairs of vertices independently with a fixed probability \(p\in(0,1)\). Such random graphs are known as _Erdos-Renyi random graphs_ with edge probability \(p\). Papers working with 2d-TSP typically use a similar idea of evaluation on randomly generated problems, concretely the _random uniform euclidean (RUE)_ sets of two-dimensional points chosen uniformly at random from the unit square \([0,1]^{2}\).
However, using Erdos-Renyi graphs with _constant_ edge probability parameter \(p\) for evaluating the HCP has a major flaw. Intuitively it is clear that the HCP gets more difficult as the size of the graph increases. This is not the case for Erdos-Renyi graphs with _constant_ \(p\), as indicated by Table I, which tracks the performance of the Concorde TSP solver and the HybridHam heuristic from [25]. The performance of either solver clearly improves as the graph size increases, suggesting that the problem is in fact getting easier. The issue is that large graphs end up having too many edges, leading to many Hamiltonian cycles and thus making it easier to find one.
This can be mended by carefully decreasing parameter \(p\) as the size of the graph increases. We rely on the following theorem from [15].
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{5}{c}{graph size} \\ \cline{2-6} Name & \(25\) & \(50\) & \(100\) & \(150\) & \(200\) \\ \hline Concorde & 0.80 & 1.0 & 1.0 & 1.0 & 1.0 \\ HybridHam & 0.41 & 0.68 & 0.79 & 0.84 & 0.87 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Fraction of solved instances out of 5000 in the supercritical regime, \(p=0.25\)
**Theorem 1** (Paraphrase of [15], Theorem 1).: _Let \(\text{ER}(n,p)\) denote the Erdos-Renyi graph on \(n\) nodes with edge probability parameter \(p\). For every \(p_{H}\in(0,1)\) there is an explicit sequence \((p_{n})_{n\in\mathbb{N}}\) such that_
\[\mathbb{P}\left(\text{ER}(n,p_{n})\text{ is Hamiltonian}\right)\xrightarrow{n \rightarrow\infty}p_{H}.\]
_Concretely, one can take \(p_{n}=\frac{\ln n+\ln\ln n-\ln\ln p_{H}^{-1}}{n-1}\)._
In other words, for any \(p_{H}\) there is a procedure for generating graphs such that they contain a Hamiltonian cycle with probability approximately equal to \(p_{H}\). We call this the _critical regime_ for the HCP. If the asymptotic behavior of \(p_{n}\) is above the one from the previous theorem, we speak of the _supercritical regime_. Examining the performance of the Concorde solver in Table II shows that the empirical fraction of Hamiltonian cycles remains relatively stable and is fairly close to the asymptotic value of \(p_{H}=0.8\). By controlling the existence probability of Hamiltonian cycles we control their expected number in a graph and hence also the difficulty of the HCP. This motivates our use of Erdos-Renyi random graphs in the critical regime as the evaluation class. For simplicity, we use \(p_{H}=0.8\) for the rest of the paper, although other values of \(p_{H}\) would work equally well. Two examples of random graphs in the critical regime are shown in Fig. III.1.
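A minimal sketch of how such evaluation graphs could be sampled, directly from the formula in Theorem 1 (function names are ours; `networkx` assumed):

```
import math
import networkx as nx

def critical_edge_probability(n: int, p_H: float = 0.8) -> float:
    """p_n from Theorem 1: ER(n, p_n) is Hamiltonian with probability -> p_H."""
    return (math.log(n) + math.log(math.log(n)) - math.log(math.log(1.0 / p_H))) / (n - 1)

def sample_critical_er_graph(n: int, p_H: float = 0.8, seed=None) -> nx.Graph:
    """Sample one Erdos-Renyi graph in the critical HCP regime."""
    return nx.gnp_random_graph(n, critical_edge_probability(n, p_H), seed=seed)

# e.g. a few size-100 evaluation instances
graphs = [sample_critical_er_graph(100, seed=i) for i in range(5)]
```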
### _Datasets_
We work exclusively with generated datasets. Our test dataset is sampled from the evaluation distribution described in the previous section and consists of \(5000\) Erdos-Renyi graphs in the critical regime with \(p_{H}=0.8\) for each size \(n=25\), \(50\), \(100\), \(150\) and \(200\). This sample size is large enough so that the fraction of Hamiltonian graphs stays within a \(\pm 2\%\) interval with \(95\%\) probability for every size \(n\). Train and validation datasets are generated from a different distribution described in Section IV-B. They are never explicitly sampled; instead, graph examples are generated on the fly when needed. The train dataset is _limited_ to graphs of size \(25\) in order to emphasize the generalization properties of the model.
## IV Model details
Our model is autoregressive and decodes the Hamiltonian cycle a single node at a time. It begins by selecting a starting node and then chooses between neighbors in each following step. The chosen node is then appended to the partial solution and the process repeats until a node gets visited twice. There are two main components, a _neural network component_ that guides the neighbor selection at each step and a _search algorithm_ which combines selected nodes into a Hamiltonian cycle. Concretely, given a _partial solution walk_ \(v_{1}v_{2}\ldots v_{k}\) at the \((k+1)\)-th step of autoregressive decoding, the neural network component estimates with \(\mathcal{P}(v|v_{1}\ldots v_{k})\) the probability that extending the walk by node \(v\) will eventually lead to a Hamiltonian cycle (HC):
\[\mathcal{P}(v|v_{1}\ldots v_{k})\approx\mathbb{P}\left(v_{1}\ldots v_{k}v \subseteq\text{HC}\big{|}v_{1}\ldots v_{k}\subseteq\text{HC}\right).\]
The search algorithm then selects the neighbor \(v\) greedily according to the estimated probabilities. It stops decoding when a node gets visited twice, i.e. \(v\in\{v_{1},\ldots v_{k}\}\), and returns \(v_{1}v_{2}\ldots v_{k}v\) as the solution. The greedy approach is the simplest case of the beam search algorithm, with beam width \(\beta=1\). For beam width \(\beta>1\), at each step \(k\) the algorithm keeps track of the top \(\beta\) partial walks according to the score
\[\mathcal{S}(v_{1}v_{2}\ldots v_{k}) :=\prod_{j=1}^{k}\mathcal{P}(v_{j}|v_{1}\ldots v_{j-1})\] \[\approx\mathbb{P}(v_{1}v_{2}\ldots v_{k}\text{ is contained in a HC})\]
and extends them over all possible neighbors. A new set of top \(\beta\) partial solutions is then selected and the process repeats. Clearly, a larger beam width \(\beta\) can compensate for weaknesses of the neural network at the cost of additional computation. While we report the performance of various beam widths in Table II, our basic model employs the simplest possible search algorithm (\(\beta=1\)) in order to emphasize the neural network part.
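A minimal sketch of the greedy (\(\beta=1\)) decoding loop just described; `neighbor_probs` stands in for the neural network component and its interface is our assumption, not the authors' API.

```
import networkx as nx

def greedy_decode(graph: nx.Graph, neighbor_probs, start_node):
    """Beam width 1: repeatedly extend the walk by the neighbor with the highest
    estimated probability P(v | v_1 ... v_k); stop once a node is visited twice."""
    walk, visited = [start_node], {start_node}
    while True:
        probs = neighbor_probs(graph, walk)            # dict: node -> estimated probability
        candidates = list(graph.neighbors(walk[-1]))
        if not candidates:                             # dead end, decoding fails
            return walk
        v = max(candidates, key=lambda u: probs.get(u, 0.0))
        walk.append(v)
        if v in visited:                               # node visited twice -> stop
            return walk
        visited.add(v)
```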
Our neural network uses _persistent node features_\(\mathbf{h}\) in the same way as in [31]. These features are passed on between applications of the neural network, adding a sort of recurrent structure to the network. This provides a way for the network to access information from previous decoding steps.
### _GNN architecture_
Since _graph neural networks (GNN)_ form the central component of our model, HCP information needs to be represented
Fig. 1: Examples of random graphs in the critical HCP regime. \(25\) nodes in the top and \(50\) nodes in the bottom row. Graphs in each row are identical. The right-column graph is ordered in a circle following a Concorde TSP solution, with the HP predicted by our basic model shown in solid red.
in a suitable form. We represent the adjacency matrix of the graph as a list of edges and one-hot encode three node-level feature channels: two channels mark the start and the end node of the partial solution, and a third channel marks all nodes the solution already contains. Note that this is precisely the information needed to correctly extend the walk by an unvisited node or close the HC if necessary.
We employ the _encode-process-decode_ architecture analogous to the one used in [31]. This means that our GNN is divided into the _encoder_, _processor_ and _decoder_ networks. The whole GNN has around \(22\) thousand parameters. Both encoder and decoder are single-layer, fully connected networks with ReLU activation that operate on node features _individually for each node_. The processor network, containing about \(95\%\) of all parameters, is the core part. It is a residual stack of \(5\) max-aggregation message passing layers, see [9] for more details. As the names suggest, an input example is encoded, then processed and finally decoded by applying the above networks successively. In addition, we augment the output of the encoder with a randomized vector of features, which was shown to improve the performance of GNNs in [24]. Algorithm 1 presents the pseudocode of a single forward pass. A "free" index \(i\in G\) in a line indicates that this line should be repeated for each node; symbol \(\bigoplus\) denotes concatenation in feature dimension; operator \(\max_{j\sim i}\) stands for maximum over neighbors of \(i\).
```
Input: \(G\) - graph with \(n\) vertices; \(\mathbf{x}\in\mathbb{R}^{(n,d_{\text{in}})}\) - partial walk repr.; \(\mathbf{h}\in\mathbb{R}^{(n,d_{h})}\) - persistent features
Output: \(\mathbf{p}\in[0,1]^{n}\) - next-step probabilities per node.
Hyperparams: \(d_{\text{in}}=3,\ d_{h}=32,\ d_{r}=4,\ n_{p}=5\)
Params: \(\theta\equiv\{W_{E},b_{E},W_{P},b_{P},\ldots\}\) - NN weights
// Encoder - Initialize features
\(\mathbf{z}_{i}=W_{E}(\mathbf{x}_{i}\bigoplus\mathbf{h}_{i})+b_{E}\in\mathbb{R}^{d_{h}-d_{r}}\)
\(\mathbf{r}=\text{Uniform}\left([0,1]^{n\times d_{r}}\right)\in\mathbb{R}^{(n,d_{r})}\)
\(\mathbf{h}_{i}=\mathbf{z}_{i}\bigoplus\mathbf{r}_{i}\in\mathbb{R}^{d_{h}}\)
// Processor - Apply residual max-MPNN layers
for \(k=1,2,\ldots,n_{p}\) do
  \(\mathbf{m}_{i}=\max_{j\sim i}\operatorname{ReLU}\left(W_{M}^{k}(\mathbf{h}_{i}\bigoplus\mathbf{h}_{j})+b_{M}\right)\in\mathbb{R}^{d_{h}}\)
  \(\mathbf{h}_{i}=\mathbf{h}_{i}+\operatorname{ReLU}\left(W_{P}^{(k)}\left(\mathbf{h}_{i}\bigoplus\mathbf{m}_{i}\right)+b_{P}^{(k)}\right)\in\mathbb{R}^{d_{h}}\)
// Decoder - Extract logits and probabilities
\(\mathbf{l}_{i}=W_{D}(\mathbf{z}_{i}\bigoplus\mathbf{h}_{i})+b_{D}\in\mathbb{R}\)
for \(i=1,2,\ldots,n\) do
  if \(i\not\sim\text{GetLastNode}(\mathbf{x})\) then \(\mathbf{l}_{i}=-\infty\)
\(\mathbf{p}=\operatorname{softmax}\,\mathbf{l}\in\mathbb{R}^{n}\)
return \(\mathbf{p}\), \(\mathbf{h}\)
```
**Algorithm 1**ApplyGNN\((G,\mathbf{x},\mathbf{h};\theta)\).
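For concreteness, the processor step of Algorithm 1 can be sketched in plain PyTorch as below; this is our illustration, not the authors' code, and it uses a simple per-node loop for the max aggregation instead of an optimized scatter operation.

```
import torch
import torch.nn as nn

class ResidualMaxMPNNLayer(nn.Module):
    """One residual max-aggregation message-passing layer (the processor step)."""
    def __init__(self, d_h: int):
        super().__init__()
        self.message = nn.Linear(2 * d_h, d_h)   # plays the role of W_M, b_M
        self.update = nn.Linear(2 * d_h, d_h)    # plays the role of W_P, b_P

    def forward(self, h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # h: (n, d_h) node features; edge_index: (2, E) directed edges j -> i
        src, dst = edge_index
        msgs = torch.relu(self.message(torch.cat([h[dst], h[src]], dim=-1)))  # (E, d_h)
        m = torch.zeros_like(h)
        for i in range(h.shape[0]):              # max over incoming messages per node
            mask = dst == i
            if mask.any():
                m[i] = msgs[mask].max(dim=0).values
        return h + torch.relu(self.update(torch.cat([h, m], dim=-1)))
```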
### _Training_
Our supervised approach requires a large number of solved HCP instances during training. Even though they can easily be generated using existing HCP solvers, we will show it is possible to train on artificially generated graphs for which an HCP solution is known in advance. We believe that such methods are useful when working with problems similar to the HCP for which no exact solvers are available. The construction of a training example starts from a graph \(G\) of arbitrary size but with no edges. A random permutation of nodes is then connected into a single cycle by adding appropriate edges to \(G\). This will be a Hamiltonian cycle in the final graph and is stored as a supervision signal. Finally, for every pair of vertices in \(G\) we add an edge connecting them with probability \(p_{\text{edge}}=0.125\) (independently of other pairs). \(p_{\text{edge}}\) is treated as a training hyperparameter and was determined through experimentation. While the distribution of training samples generated in this way is quite different from the evaluation distribution, which consists of ER graphs, the results show that the basic model still generalizes well. Note also that the final graph may have Hamiltonian cycles other than the original one. All such cycles are ignored during training.
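A minimal sketch of this generator (ours), following the construction just described:

```
import random
import networkx as nx

def generate_training_example(n: int, p_edge: float = 0.125, seed=None):
    """Random graph with a planted Hamiltonian cycle; the cycle is the supervision signal."""
    rng = random.Random(seed)
    nodes = list(range(n))
    rng.shuffle(nodes)                          # random permutation of the nodes
    graph = nx.Graph()
    graph.add_nodes_from(range(n))
    cycle = nodes + [nodes[0]]                  # close the permutation into a cycle
    graph.add_edges_from(zip(cycle, cycle[1:]))
    for u in range(n):                          # every other pair gets an edge w.p. p_edge
        for v in range(u + 1, n):
            if not graph.has_edge(u, v) and rng.random() < p_edge:
                graph.add_edge(u, v)
    return graph, cycle
```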
The training procedure is summarized in Algorithm 2. A single training example consists of a graph \(G\) and a Hamiltonian cycle \(v_{1}v_{2}\ldots v_{n}v_{1}\) on \(G\). The network is trained using _teacher forcing_ along this Hamiltonian cycle on the conditional cross-entropy loss \(\mathcal{L}\) defined by
\[\mathcal{L}\left(v_{1}\ldots v_{n}v_{1}\right)=-\sum_{i=2}^{n+1}\ln\left( \mathcal{P}(v_{i}|v_{1}\ldots v_{i-1})\right),\]
where \(v_{n+1}:=v_{1}\) for notational convenience. Note that the summation index starts from \(2\) because the choice of the first node in a cycle is completely arbitrary. Loss \(\mathcal{L}\) is minimized over minibatches of 8 training examples using the Adam optimizer with a learning rate of \(10^{-4}\) for 2000 epochs of 100 gradient updates each. The final model checkpoint was selected based on the fraction of solved instances on a validation set generated in the same way as the training set. The whole training was performed on a single NVIDIA GeForce RTX 3080 GPU and took about 2.5 hours. Weight initialization and other optimizer hyperparameters are kept at the default PyTorch 1.11.0 values, [21].
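To illustrate how teacher forcing combines Algorithm 1 with the loss above, here is a rough sketch (ours; `apply_gnn` stands in for Algorithm 1 and its calling convention is an assumption of ours):

```
import torch

def teacher_forcing_loss(apply_gnn, graph, cycle, h0):
    """Conditional cross-entropy along a known Hamiltonian cycle v_1 ... v_n v_1,
    feeding the ground-truth prefix at every step (teacher forcing)."""
    loss, h = torch.zeros(()), h0
    # the first node of a cycle is arbitrary, so the sum starts at the second step
    for i in range(1, len(cycle)):
        p, h = apply_gnn(graph, cycle[:i], h)   # p: per-node next-step probabilities
        loss = loss - torch.log(p[cycle[i]] + 1e-12)
    return loss
```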
## V Results and discussion
We evaluate the performance of our models by measuring the fraction of successfully solved problems on the test dataset described in Section III and compare it with the following heuristics:
1. _Concorde TSP solver_ - the state-of-the-art exact TSP solver from [1],
2. _HybridHam_ - an HCP heuristic from [25],
3. _Ant-inspired heuristic_ - an HCP heuristic presented in [33],
4. _Least degree first heuristic_ - simple greedy heuristic always selecting the neighbor with the lowest degree.
Let us remark that the ant-inspired heuristic is a convergence procedure which we terminate after \(5n^{2}\ln n\) steps. This bound matches the theoretical complexity of the basic model, leading to a relatively fair comparison. In [33], the authors suggest to
terminate after \(\mathcal{O}(n^{3})\) iterations, but this is very time consuming. We list evaluation results in Table II and average inference times in Table III. Keeping in mind that testing can be performed on a different sample of \(5000\) graphs, the 95% confidence interval for all values in Table II is below \(\pm 0.02\). Models were run on a single NVIDIA GeForce RTX 3080 GPU while all other solvers were run on a single core of an Intel Core i7-12700 processor. Note also that HybridHam, the least degree first and the ant-inspired heuristics were reimplemented in Python 3.8 and could be optimized for better performance.
Our HCP setup makes it impossible for a solver to produce a false positive prediction. Consequently, all solvers have perfect precision and metrics such as \(F_{1}\), \(F_{2}\) are unnecessarily complicated. As the number of true positives (solvable HCPs) is stable by construction of the evaluation set (0.8 in the limit), accuracy, recall and fraction of solved instances have similar qualitative behavior. Thus we only report the fraction of solved instances for each model.
In conclusion, after only a few hours of training our basic model clearly outperformed existing heuristic solvers without using any pre-solved HCP instances. We believe that techniques similar to the ones presented here can be used to quickly develop heuristics for variations or generalizations of the HCP, for example the task of finding the longest cycle in a graph, or the task of finding the route of minimal length which covers all the nodes in the graph (some of them possibly more than once). The class of Erdos-Renyi random graphs is used for simplicity and evaluation convenience since it allows for a rough estimate of the difficulty of the HCP with respect to its size. Another class of graphs can be used just as well, provided that it is specific enough so that the neural network can exploit its statistical or structural peculiarities. But this typically happens with graph instances coming from practical problems. Moreover, the polynomial complexity of \(\mathcal{O}(n^{2}\log n)\) for our basic model is superior to the exponential complexity of exact solvers. For example, the Concorde TSP solver on RUE 2d-TSP instances was experimentally found to have complexity of \(\mathcal{O}(1.24^{\sqrt{n}})\) in [11], although it is not clear how this translates to the critical regime HCP. Nevertheless, neural network solvers are yet to achieve reasonable performance on large input graphs and the Concorde TSP solver remains the best-performing HCP solver. This comes as no surprise since Concorde also outperforms all existing neural network solvers for the 2d-TSP problem.
## VI Ablation study & training stability
The neural network component from Section IV is enhanced with persistent features and vectors of randomized features but can function without either of them. To estimate their importance, we separately removed each one and trained the corresponding reduced model 5 times from scratch. Average performances and confidence intervals of 2 standard deviations are shown in Fig. VI.1.
As shown in Fig. VI.1, persistent features play a crucial role in our model. Without them the model can fail to converge during training. This is probably because persistent features allow the model to update its internal node representations throughout the decoding process, which results in an RNN-like behavior and consequently increases the range of the message passing neural network layers. The use of randomized features is not as significant but becomes noticeable when generalizing to large graphs. Note also that Fig. VI.1 shows the standard deviation of the training procedure for the main model to be around 5% of graphs solved.
|
2303.03285 | Observation of $T_{cc}$ and a quark model | The recent discovery of the doubly charmed tetraquark $T_{cc}$
($\bar{u}\bar{d}cc$) provides a stringent constraint on its binding energy
relative to its lowest decay threshold. We use a fully convergent spatial wave
function and perform a simultaneous global fit to both the meson and baryon
spectra. Our analysis shows that a Yukawa type hyperfine potential leads to a
slight bound state for $T_{cc}$ with $(I,S) = (0,1)$ below its lowest
threshold, in agreement with recent experimental findings. We also find that
$T_{cc}$ is highly likely to be in a compact configuration. | Sungsik Noh, Woosung Park | 2023-03-06T17:00:18Z | http://arxiv.org/abs/2303.03285v2 | # Observation of \(T_{cc}\) and a quark model
###### Abstract
The recent discovery of the doubly charmed tetraquark \(T_{cc}\) (\(\bar{u}\bar{d}cc\)) provides a stringent constraint on its binding energy relative to its lowest decay threshold. We use a fully convergent spatial wave function and perform a simultaneous global fit to both the meson and baryon spectra. Our analysis shows that a Yukawa type hyperfine potential leads to a slight bound state for \(T_{cc}\) with \((I,S)=(0,1)\) below its lowest threshold, in agreement with recent experimental findings. We also find that \(T_{cc}\) is highly likely to be in a compact configuration.
There is a renewed excitement in hadron physics over the observations of numerous exotic hadron candidates [2; 3; 4; 5]. Of particular interest is the recently observed flavor exotic \(T_{cc}\) state, which is an explicit four quark state. \(T_{cc}\) was first predicted in Refs. [6; 7] based on the strong color-spin attraction of the \(\bar{u}\bar{d}\) quark pair that can bind the tetraquark configuration. Many quark model calculations for \(T_{cc}\) followed [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30], which unfortunately varied in the predicted masses and even on whether the mass lies above or below the lowest threshold. Therefore, the recent observation of \(T_{cc}\)[1] provides a good opportunity where all the models can be tested in a hitherto untested multiquark configuration, thereby leading us to identify the correct model to describe the low energy confinement phenomena of quantum chromodynamics (QCD).
Accurate model calculations are a primary requirement for fully testing a quark model. Thus, one first has to introduce a complete set of spatial wave functions, which necessarily contain all possible internal states. Also, a simultaneous global fit to both the meson and baryon spectra should be performed in the model calculations. However, only a few works [10; 15; 17; 30] satisfy these requirements among the works mentioned above [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. Furthermore, no work could correctly predict both the mass and binding energy of \(T_{cc}\) simultaneously. For example, in our latest publication [30], we successfully predicted the mass of \(T_{cc}\) using a Gaussian type hyperfine potential, whereas the binding energy was obtained to be higher than the experimental measurement by 13 MeV.
The hyperfine potential strongly affects the binding energy of the multiquark configuration, with the strong color-spin attraction coming from the \(\bar{u}\bar{d}\) quark pair. We thus analyze the effect of a Yukawa type hyperfine potential on the binding energy in comparison with the Gaussian type hyperfine potential. Our analysis suggests that a Yukawa type hyperfine potential, rather than a Gaussian type, is necessary to accurately reproduce the experimentally observed slight binding of \(T_{cc}\).
On the other hand, there are chiral quark models based on color-flavor interaction. However, those model studies predicted bindings that are too strong in the \(T_{cc}\) channel [11; 13; 15; 17; 18; 26]. Thus, any modification should start from the gluon exchange quark models.
_Model description:_ In our nonrelativistic quark model, we solve the Schrodinger equation with the Hamiltonian given as follows.
\[H=\sum_{i=1}^{4}\left(m_{i}+\frac{{\bf p}_{i}^{2}}{2m_{i}}\right)-\frac{3}{4 }\sum_{i<j}^{4}\frac{\lambda_{i}^{c}}{2}\frac{\lambda_{j}^{c}}{2}\left(V_{ij} ^{C}+V_{ij}^{CS}\right), \tag{1}\]
where the confinement potential \(V_{ij}^{C}\) is identical to that used in previous studies [10; 15; 17; 30]. However, for the hyperfine potential \(V_{ij}^{CS}\), we introduce a Yukawa type potential given by
\[V_{ij}^{CS}=\frac{\hbar^{2}c^{2}\kappa^{\prime}}{m_{i}m_{j}c^{4}}\frac{e^{-r_{ ij}/r_{0ij}}}{(r_{0ij})r_{ij}}\mathbf{\sigma}_{i}\cdot\mathbf{\sigma}_{j}. \tag{2}\]
Our Yukawa type potential in Eq. (2) reduces to the contact term \(\delta(r_{ij})\) in the heavy quark limit, where \(r_{0ij}\) approaches zero. In addition, \(\kappa^{\prime}\) and \(r_{0ij}\) depend on the masses of a quark pair as given in Ref. [30]. The model parameters in the Hamiltonian in Eq. (1) are determined by fitting them to a total of 33 ground state hadron masses listed in Tables 1, 2 of Ref. [30]. Further, we obtained an optimized set of model parameters such that the \(\chi^{2}\) value of Pearson's chi-squared test formula is minimized. The model parameters selected for this study are \(\kappa=97.7\) MeVfm, \(a_{0}=0.0327338\) (MeV\({}^{-1}\)fm)\({}^{1/2}\), \(m_{u}=315\) MeV, \(m_{s}=610\) MeV, \(m_{c}=1895\) MeV, \(m_{b}=5274\) MeV, \(\alpha=1.1349\) fm\({}^{-1}\), \(\beta=0.0011554\) (MeVfm)\({}^{-1}\), \(\gamma=0.001370\) MeV\({}^{-1}\), \(\kappa_{0}=213.244\) MeV, \(D=959\) MeV.
With this optimized set of parameters, we find that the masses of mesons which comprise the strong decay thresholds for \(T_{QQ^{\prime}}\) states fit well with the experimentally measured masses as shown in Table 1.
We utilize the same methods as in Ref. [30] to construct the wave function and calculate the masses of tetraquarks as well as those of mesons and baryons.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \(D\) & \(D^{*}\) & \(B\) & \(B^{*}\) \\ \hline \(M^{Exp}\) & 1864.8 & 2007.0 & 5279.3 & 5325.2 \\ \(M^{Mod}\) & 1865.0 & 2009.4 & 5276.3 & 5331.8 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Masses of the lowest decay threshold mesons for \(T_{QQ^{\prime}}\) states, where \(Q\) or \(Q^{\prime}\) is a heavy quark(\(c\) or \(b\)). \(M^{Exp}\) represents the experimentally measured mass, while \(M^{Mod}\) is the mass obtained in this work. All masses are given in MeV.
_Masses and binding energies of tetraquarks:_ The results of our calculations for \(T_{cc}(\bar{u}\bar{d}cc)\), \(T_{cb}(\bar{u}\bar{d}cb)\), and \(T_{bb}(\bar{u}\bar{d}bb)\) using our Yukawa type hyperfine potential are presented in Table 2. Comparing the present results for \(T_{cc}\) with those from our previous publication [30], which used a Gaussian type hyperfine potential, we find that both models reproduce the mass well. However, our previous calculation predicted an unbound \(T_{cc}\) state, while our current model indicates that it lies slightly below the threshold, consistent with experimental observations. This suggests that the Yukawa type hyperfine potential used in our current calculation may better capture the strong interaction dynamics of the \(T_{cc}\) configuration. Further research is necessary to fully understand the implications of our findings.
Our present model using a Yukawa type hyperfine potential shows significantly stronger binding energies for \(T_{cc}\) and \(T_{cb}\) than the model with a Gaussian type hyperfine potential presented in Ref. [30].
The discovery of \(T_{cc}\) is of great significance since it allows for testing the validity of quark models. In this regard, we compare our present results with those from Refs. [10; 15; 17; 30] in Table 2. The results from those quark models were also calculated with a complete set of spatial wave functions, but obtained using different forms of Gaussian or Yukawa type hyperfine potential. Specifically, Refs. [10; 30] used a Gaussian type hyperfine potential, while Refs. [15; 17], as well as our present work, used a Yukawa type of hyperfine potential.
Comparing our results with those from Refs. [10; 15; 17; 30] in Table 2, a bound \(T_{cc}\) ground state is found only in our present work and in Refs. [15; 17], where the hyperfine potential is of the Yukawa type. Therefore, one can conclude that a Yukawa type hyperfine potential is necessary to accurately describe the short range interactions within the tetraquark system. The difference in the binding energy between Refs. [15; 17] and our present work arises largely from the difference in the detailed form of the hyperfine potentials used in each model.
To investigate the effect of the forms of hyperfine potential on the binding energy and size of the \(T_{cc}\) configuration, we compare the contributions from the Yukawa and Gaussian types of potentials.
_Detailed analysis of hyperfine potential:_ In the \(T_{cc}\) configuration, an important contribution to the binding comes from the hyperfine potential, which can be isolated as follows:
\[B^{Hyp}\equiv H^{Hyp}_{Tetraquark}-H^{Hyp}_{Meson1}-H^{Hyp}_{Meson2}\,, \tag{3}\]
where \(H^{Hyp}_{Tetraquark}\equiv\sum_{i<j}^{4}V^{CS}(i,j)\) and \(H^{Hyp}_{Meson}\)'s are the hyperfine part of the Hamiltonian calculated with the corresponding total wave functions for the tetraquark and the mesons, respectively.
We first analyze the contribution from the hyperfine potential of the \(D\) or \(D^{*}\) meson. Figure 1 compares the spatial functional form of the Yukawa and Gaussian type potentials, and indicates that the Yukawa potential is stronger than the Gaussian type in the vicinity of the origin. However, as shown in Table 3, the contributions from the hyperfine potential for the threshold are the opposite of what one might expect. This is due to two reasons. First, as shown in Figure 2, the peak of the probability density of the \(D\) or \(D^{*}\) meson is located
at around 0.35 fm. Furthermore, the peak value of the Gaussian type is higher than that of the Yukawa type. Second, in this region, the spatial functional form of the Gaussian type is stronger than that of the Yukawa type, as shown in Figure 1.
In addition, Figure 3 shows that the size of the \(D\) or \(D^{*}\) meson is inversely proportional to the constituent quark masses. If the constituent quark masses in the Gaussian type are fitted to be lower than those used in Ref. [30], this leads to a larger size for the \(D\) or \(D^{*}\) meson in the Gaussian type. For a proper analysis, we scale the horizontal axis with the same rate of mass change for each constituent quark. It is also possible to understand a similar behavior of \(T_{cc}\) through the dependence of the relative size of \(T_{cc}\) on the constituent quark masses, as shown in Figure 3.
We now analyze the contributions from the hyperfine potential for each quark pair in \(T_{cc}\) given in Table 3. For the \(\bar{u}\bar{d}\) pair in \(T_{cc}\), Figure 2 shows that the peak of the probability density for both types of potential is located at around 0.45 fm. For the \(\bar{u}c\) and \(cc\) pairs, the peaks are slightly shifted from that of the \(\bar{u}\bar{d}\) pair towards the origin: the peak for the \(\bar{u}c\) pair (\(cc\) pair) is at around 0.35 fm (0.3 fm).
For the same reasons as in the case of \(D\) or \(D^{*}\), the contribution from the Gaussian type potential is found to be stronger than that of the Yukawa type for the \(\bar{u}c\) and \(cc\) pairs, as shown in Table 3. However, for the \(\bar{u}\bar{d}\) pair, the strength of the Yukawa type hyperfine potential is above that of the Gaussian type over the whole range. Thus, for the \(\bar{u}\bar{d}\) pair, we find that the relative contribution from the Gaussian and Yukawa types is opposite to those of the other pairs, as shown in Table 3. Furthermore, the attraction from each type of hyperfine potential mainly comes from the \(\bar{u}\bar{d}\) pair. Therefore, we find that the binding energy in Eq. (3) obtained from the Yukawa type potential is relatively more attractive, as shown in Table 3. This suggests that the \(\bar{u}\bar{d}\) pair plays a crucial role in the hyperfine interaction in \(T_{cc}\).
To get a better understanding of the size of \(T_{cc}\), we examine the probability density. In Figure 2, we find that the peak for the \(\bar{u}\bar{d}\) pair of the Gaussian type is higher and closer to the origin than that of the Yukawa type. This trend is also observed for the \(\bar{u}c\) and \(cc\) pairs. Therefore, all relative quark pair sizes in \(T_{cc}\) for the Gaussian type are smaller than those of the Yukawa type, as shown in Table 3.
On the other hand, we calculate the root mean square (RMS) ratio using Eq. (10) from Ref. [17]. As discussed in Ref. [17], an RMS ratio smaller than 1 indicates a compact configuration when the state is bound. The RMS ratio of the Yukawa type in Table 3 indicates that \(T_{cc}\) is highly likely to be in a compact configuration.
_Principal differences between \(T_{cc}\) and \(T_{bb}\):_ One of the most interesting findings is that the confinement potential of the \(\bar{u}\bar{d}\) (with \(I=0\)) and \(cc\) (or \(bb\)) pairs contributes significantly to the binding energy of the tetraquark. In particular, investigating the matrix element of \(-\lambda_{i}^{c}\lambda_{j}^{c}\) is important because it affects the strength of the confinement potential. For both the light and heavy quark (\(Q\)) pairs, the value with respect to the most dominant color state, \(\mathbf{3}_{\bar{u}\bar{d}}\otimes\mathbf{\bar{3}}_{QQ}\), is \(\frac{8}{3}\). Thus, the linearizing potential gives repulsion while the Coulomb potential gives attraction in the confinement for both pairs.
Figure 2: Left panel shows the radial distribution of the probability density of the \(D\) meson. Right panel shows the distribution for the \(\bar{u}\bar{d}\) pair in the \(T_{cc}\) state, in terms of the most dominant color \(\mathbf{3}_{\bar{u}\bar{d}}\otimes\mathbf{\bar{3}}_{cc}\) state. The red and blue lines represent the distribution obtained from the hyperfine potential used in this work and in Ref.[30], respectively.
Figure 3: Left panel shows the change in size of the \(D\) meson with increasing constituent quark masses using spatial bases up to the 5th quanta. Right panel shows the same for the \(\bar{u}\bar{d}\) and \(cc\) pairs in \(T_{cc}\), but using up to the 3rd quanta. The horizontal axis represents the sum of masses of the \(u\) and \(c\) quarks. The Gaussian type of hyperfine potential from Ref. [30] is used to obtain the figure. For the notion of quanta, see Ref. [30].
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Type & \(T_{cc}\) & & Threshold & \(B^{Hyp}\) & RMS ratio \\ \cline{2-7} & (1,2) & (1,3) & (3,4) & \(D\) & \(D^{*}\) & & \\ \hline \(V_{CS}^{CS}\) & -112 & -16.8 & 5.0 & -120 & 32.5 & -87 & 0.89 \\ \(l_{\rm Y}\) & 0.87 & 0.71 & 0.64 & 0.55 & 0.61 & & \\ \hline \(V_{G}^{CS}\) & -109 & -17.4 & 5.3 & -127 & 34.1 & -80 & 0.87 \\ \(l_{\rm G}\) & 0.83 & 0.67 & 0.61 & 0.52 & 0.59 & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Hyperfine potentials \(V^{CS}\)(in MeV) and relative length \(l\)(in fm). For \(T_{cc}\) configuration, we label the position of each quark as \(\bar{u}(1)\bar{d}(2)c(3)c(4)\). The values for the quark pairs of \((1,4),(2,3)\), and \((2,4)\) are the same as those of the \((1,3)\) pair due to symmetry. The subscripts Y and G represent the results obtained using the hyperfine potential in Eq. (2) and the Gaussian potential from Ref. [30], respectively. The values of \(B^{Hyp}\) are calculated by Eq. (3). The RMS ratio of \(T_{cc}\) to its threshold \(DD^{*}\) is presented in column 8.
For the light quark pair, the dominant part of the confinement potential comes from the linearizing potential in both \(T_{cc}\) and \(T_{bb}\) because their sizes are comparable to those of hadrons. However, the confinement potential in \(T_{bb}\) is considerably more repulsive than that in \(T_{cc}\), as shown in Table 4, even though the size of the light quark pair in \(T_{bb}\) is smaller than that in \(T_{cc}\).
For the heavy quark pair, the dominant part still comes from the linearizing potential for \(T_{cc}\) despite the small size of the \(cc\) pair. In contrast, for \(T_{bb}\), it comes from the Coulomb potential because the heavy quark pair in \(T_{bb}\) shrinks to a much smaller size than that in \(T_{cc}\). Thus, the confinement potential for \(T_{cc}\) gives a small repulsion while it gives a significant attraction for \(T_{bb}\). In addition to this, there is also a hyperfine attraction for \(T_{bb}\), which can be evaluated by Eq. (3) and is comparable to that of \(T_{cc}\). Therefore, the ground state of \(T_{bb}\) is deeply bound, as shown in Table 2.
To understand the confinement contributions in Table 4, it is necessary to decompose the probability distribution of the ground state into the color states of the tetraquark, as in Figure 4. The values of \(-\lambda_{i}^{c}\lambda_{j}^{c}\) in the confinement are modified by the probability distribution over the two color states when the full set of bases is used to calculate the Hamiltonian. In Figure 4, the contribution from the color \(\bar{\mathbf{6}}_{\bar{u}\bar{d}}\otimes\mathbf{6}_{QQ}\) state, where the matrix element for both the light and heavy quark pairs is \(-\frac{4}{3}\), is negligible for \(T_{bb}\) but crucial for \(T_{cc}\). Apart from the contribution of \(\mathbf{3}_{\bar{u}\bar{d}}\otimes\bar{\mathbf{3}}_{QQ}\) to the values of \(-\lambda_{i}^{c}\lambda_{j}^{c}\), the contribution of \(\bar{\mathbf{6}}_{\bar{u}\bar{d}}\otimes\mathbf{6}_{QQ}\) is relatively larger for \(T_{cc}\) than for \(T_{bb}\). Therefore, the confinement potential involving the value of the matrix element for the \(\bar{u}\bar{d}\) pair in \(T_{cc}\) decreases much compared to that in \(T_{bb}\).
Finally, we note that the significant contribution of \(\bar{\mathbf{6}}_{\bar{u}\bar{d}}\otimes\mathbf{6}_{QQ}\) to the confinement in \(T_{cc}\) is an essential feature that only appears when all the wave functions are expanded by the complete set of harmonic oscillator bases. This effect does not appear in a simple quark model [22], which performs calculations using a single spatial basis.
_Acknowledgement_ This work was supported by the Korea National Research Foundation (NRF) under grant number 2021R1A2C1009486.
|
2304.08377 | A new obstruction to the local lifting problem | We study the local lifting problem of actions of semidirect products of a
cyclic $p$-group by a cyclic prime to $p$ group, where $p$ is the
characteristic of the special fibre. We give a criterion based on
Harbater-Katz-Gabber compactification of local actions, which allows us to
decide whether a local action lifts or not. In particular for the case of
dihedral group we give an example of dihedral local action that can not lift
and in this way we give a stronger obstruction than the KGB-obstruction. | Aristides Kontogeorgis, Alexios Terezakis | 2023-04-17T15:41:48Z | http://arxiv.org/abs/2304.08377v3 | # A new obstruction to the local lifting problem
###### Abstract.
We study the local lifting problem of actions of semidirect products of a cyclic \(p\)-group by a cyclic prime to \(p\) group, where \(p\) is the characteristic of the special fibre. We give a criterion based on Harbater-Katz-Gabber compactification of local actions, which allows us to decide whether a local action lifts or not. In particular for the case of dihedral group we give an example of dihedral local action that can not lift and in this way we give a stronger obstruction than the KGB-obstruction.
## 1. Introduction
Let \(G\) be a finite group, \(k\) an algebraically closed field of characteristic \(p>0\), and consider the homomorphism
\[\rho:G\hookrightarrow\operatorname{Aut}(k[[t]]),\]
which will be called _a local \(G\)-action_. Let \(W(k)\) denote the ring of Witt vectors of \(k\). The local lifting problem considers the following question: Does there exist an extension \(\Lambda/W(k)\), and a representation
\[\tilde{\rho}:G\hookrightarrow\operatorname{Aut}(\Lambda[[T]]),\]
such that if \(t\) is the reduction of \(T\), then the action of \(G\) on \(\Lambda[[T]]\) reduces to the action of \(G\) on \(k[[t]]\)? If the answer to the above question is positive, then we say that the \(G\)-action lifts to characteristic zero. A group \(G\) for which every local \(G\)-action on \(k[[t]]\) lifts to characteristic zero is called _a local Oort group for \(k\)_.
After the study of certain obstructions (the Bertin obstruction, the KGB obstruction, the Hurwitz tree obstruction, etc.), the only possible local Oort groups are known to be
1. Cyclic groups
2. Dihedral groups \(D_{p^{h}}\) of order \(2p^{h}\)
3. The alternating group \(A_{4}\)
The Oort conjecture states that every cyclic group \(C_{q}\) of order \(q=p^{h}\) lifts locally. This conjecture was proved recently by F. Pop [26] using the work of A. Obus and S. Wewers [24]. A. Obus proved that \(A_{4}\) is a local Oort group in [21], and this was also known to F. Pop and to I. Bouw and S. Wewers [6]. The dihedral groups \(D_{p}\) are known to be local Oort by I. Bouw and S. Wewers for \(p\) odd [6] and by G. Pagot [25]. Several cases of dihedral groups \(D_{p^{h}}\) for small \(p^{h}\) have been studied by A. Obus [22] and by H. Dang, S. Das, K. Karagiannis, A. Obus, V. Thatte [11], while \(D_{4}\) was studied by B. Weaver [30]. For more details on the lifting problem we refer to [8], [9], [10], [20].
Probably the most important of the obstructions known so far is the KGB obstruction [9]. It was conjectured that this is the only obstruction for the local lifting problem, see [20], [22]. In particular, the KGB-obstruction for the dihedral group \(D_{q}\) is known to vanish, so the conjecture asserts that the local action of \(D_{q}\) always lifts. We will provide in section 6.1 a counterexample to this conjecture by proving that the HKG-cover corresponding to \(D_{125}\), with a selection of lower jumps \(9,189,4689\), does not lift.
In this article, we will give a necessary and sufficient condition for a \(C_{q}\rtimes C_{m}\)-action, and in particular for a \(D_{q}\)-action, to lift. In order to do so, we will employ the Harbater-Katz-Gabber compactification (HKG for short), which can be used in order to construct complete curves out of local actions. In this way, we have a variety of tools at our disposal and we can transform the local action and its deformations into representations of linear groups acting on spaces of differentials of the HKG-curve. We have laid the necessary tools in our article [17], where we have collected several facts about the relation between liftings of local actions, liftings of curves and liftings of linear representations.
More precisely let us consider a local action \(\rho:G\to\operatorname{Aut}\!k[[t]]\) of the group \(G=C_{q}\rtimes C_{m}\). The Harbater-Katz-Gabber compactification theorem asserts that there is a Galois cover \(X\to\mathbb{P}^{1}\) ramified wildly and completely only at one point \(P\) of \(X\) with Galois group \(G=\operatorname{Gal}(X/\mathbb{P}^{1})\) and tamely on a different point \(P^{\prime}\) with ramification group \(C_{m}\), so that the action of \(G\) on the completed local ring \(\mathscr{O}_{X,P}\) coincides with the original action of \(G\) on \(k[[t]]\). Moreover, it is known that the local action lifts if and only if the corresponding HKG-cover lifts.
In particular, we have proved that in order to lift a subgroup \(G\subset\operatorname{Aut}(X)\), the representation \(\rho:G\to\operatorname{GL}\!H^{0}(X,\Omega_{X})\) should be lifted to characteristic zero and also the lifting should be compatible with the deformation of the curve. More precisely, in [17] we have proved the following relative version of Petri's theorem
**Proposition 1**.: _Let \(f_{1},\dots,f_{r}\in S:=\operatorname{Sym}\!H^{0}(X,\Omega_{X})=k[\omega_{1},\dots,\omega_{g}]\) be quadratic polynomials which generate the canonical ideal \(I_{X}\) of a curve \(X\) defined over an algebraic closed field \(k\). Any deformation \(\mathscr{X}_{A}\) is given by quadratic polynomials \(\tilde{f}_{1},\dots,\tilde{f}_{r}\in\operatorname{Sym}\!H^{0}(\mathscr{X}_{ A},\Omega_{\mathscr{X}_{A}/A})=A[W_{1},\dots,W_{g}]\), which reduce to \(f_{1},\dots,f_{r}\) modulo the maximal ideal \(\mathfrak{m}_{A}\) of \(A\)._
And we also gave the following liftability criterion:
**Theorem 2**.: _Consider an epimorphism \(R\to k\to 0\) of local Artin rings. Let \(X\) be a curve which is canonically embedded in \(\mathbb{P}^{g}_{k}\), whose canonical ideal is generated by quadratic polynomials, and which is acted on by the group \(G\). The curve \(X\to\operatorname{Spec}(k)\) can be lifted to a family \(\mathscr{X}\to\operatorname{Spec}(R)\in D_{\operatorname{gl}}(R)\) if and only if the representation \(\rho_{k}:G\to\operatorname{GL}_{g}(k)=\operatorname{GL}(H^{0}(X,\Omega_{X}))\) lifts to a representation \(\rho_{R}:G\to\operatorname{GL}_{g}(R)=\operatorname{GL}(H^{0}(\mathscr{X}, \Omega_{\mathscr{X}/R}))\) and moreover the lift of the canonical ideal is left invariant by the action of \(\rho_{R}(G)\)._
In section 3 we collect results concerning deformations of HKG covers, Artin representations and orbit actions, and we also provide a geometric explanation of the KGB-obstruction in remark 10. In section 4 we prove that the canonical ideal of the HKG-curve is generated by quadratic polynomials, so that theorem 2 can be applied.
In order to decide whether a linear representation of \(G=C_{q}\rtimes C_{m}\) can be lifted we will employ the following
**Theorem 3**.: _Consider a \(k[G]\)-module \(M\) which is decomposed as a direct sum_
\[M=V_{\alpha}(\epsilon_{1},\kappa_{1})\oplus\cdots\oplus V_{\alpha}(\epsilon_{s}, \kappa_{s}).\]
_The module lifts to an \(R[G]\)-module if and only if the set \(\{1,\ldots,s\}\) can be written as a disjoint union of sets \(I_{\nu}\), \(1\leq\nu\leq t\) so that_
1. \(\sum_{\mu\in I_{\nu}}\kappa_{\mu}\leq q\)_, for all_ \(1\leq\nu\leq t\)_._
2. \(\sum_{\mu\in I_{\nu}}\kappa_{\mu}\equiv a\bmod m\) _for all_ \(1\leq\nu\leq t\)_, where_ \(a\in\{0,1\}\)_._
3. _For each_ \(\nu\)_,_ \(1\leq\nu\leq t\) _there is an enumeration_ \(\sigma:\{1,\ldots,\#I_{\nu}\}\to I_{\nu}\subset\{1,\ldots,s\}\)_, such that_ \[\epsilon_{\sigma(2)}=\epsilon_{\sigma(1)}\alpha^{\kappa_{\sigma(1)}},\quad \epsilon_{\sigma(3)}=\epsilon_{\sigma(2)}\alpha^{\kappa_{\sigma(2)}},\quad\ldots,\quad \epsilon_{\sigma(\#I_{\nu})}=\epsilon_{\sigma(\#I_{\nu}-1)}\alpha^{\kappa_{\sigma(\#I_{\nu}-1)}}.\]
_Condition \(b\)., with \(a=1\) happens only if the lifted \(C_{q}\)-action in the generic fibre has an eigenvalue equal to \(1\) for the generator \(\tau\) of \(C_{q}\)._
Proof.: See [18].
The idea of the above theorem is that indecomposable \(k[G]\)-modules in the decomposition of \(H^{0}(X,\Omega_{X})\) of the special fibre, should be combined together in order to give indecomposable modules in the decomposition of holomorphic differentials of the relative curve.
We will have the following strategy. We will consider a HKG-cover
of the \(G\)-action. This has a cyclic subcover \(X\to\mathbb{P}^{1}\) with Galois group \(C_{q}\). We lift this cover using Oort's conjecture for \(C_{q}\)-groups to a cover \(\mathscr{X}\to\operatorname{Spec}\Lambda\). This gives rise to a representation
\[\rho:G\longrightarrow\operatorname{GL}H^{0}(X,\Omega_{X}), \tag{1}\]
together with a lifting
\[(2)\qquad\text{[commutative diagram of the lifting, omitted]}\]
In our geometric setting on the other hand, we know that in the generic fibre cyclic actions do not have identity eigenvalues, see proposition 13. This means that we have to consider lifts that satisfy 3.b. with \(a=0\). Therefore, an indecomposable module for \(G=C_{q}\rtimes C_{2}=D_{q}\) of odd dimension \(d_{1}\) should find another indecomposable module of odd dimension \(d_{2}\) in order to lift to an \(R[G]\)-indecomposable module of even dimension \(d_{1}+d_{2}\). Moreover this dimension should satisfy \(d_{1}+d_{2}\leq q\). If we also take care of condition 3.c. we arrive at the following
**Criterion 4**.: The HKG-curve with action of \(D_{q}\) lifts in characteristic zero if and only if every indecomposable summand \(U(\epsilon,d)\), where \(\epsilon\in\{0,1\}\) and \(1\leq d\leq q\) with \(d\) odd, has a pair \(U(\epsilon^{\prime},d^{\prime})\), with \(\epsilon^{\prime}\in\{0,1\}-\{\epsilon\}\) and \(d+d^{\prime}\leq q\).
In section 5 we will show that, given a lifting \(\mathscr{X}\) of the \(C_{q}\) action using the Oort conjecture and a lifting of the linear representation satisfying criterion 4, the lift \(\mathscr{X}\) can be modified to a lift \(\mathscr{X}^{\prime}\) which lifts the action of \(D_{q}\). In order to apply this idea we need a detailed study of the direct \(k[G]\)-summands of \(H^{0}(X,\Omega_{X})\), for \(G=C_{q}\rtimes C_{m}\). This is considered in section 6, where we employ the joint work of the first author with F. Bleher and T. Chinburg [4], in order to compute the decomposition of \(H^{0}(X,\Omega_{X})\) into indecomposable \(kG\)-modules, in terms of the ramification filtration of the local action.
Then the lifting criterion of theorem 3 is applied. Our method gives rise to an algorithm which takes as input a group \(C_{q}\rtimes C_{m}\) with a given sequence of lower jumps and decides whether the action lifts to characteristic zero.
In section 6.1 we give an example of a \(C_{125}\rtimes C_{4}\) HKG-curve which does not lift, and then we restrict ourselves to the case of dihedral groups. The possible ramification filtrations for local actions of the group \(C_{q}\rtimes C_{m}\) were computed in the work of A. Obus and R. Pries in [23]. We focus on the case of dihedral groups \(D_{q}\) with lower jumps
\[b_{\ell}=w_{0}\frac{p^{2\ell+1}+1}{p+1},\qquad 0\leq\ell\leq h-1. \tag{3}\]
For the value \(w_{0}=9\) we show there that the local action does not lift, providing a counterexample to the conjecture that the KGB-obstruction is the only obstruction to the local lifting problem.
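As a quick arithmetic check (a throwaway sketch of ours, not part of the Sage program mentioned below), the jumps \(9,189,4689\) quoted above are exactly \(w_{0}(p^{2\ell+1}+1)/(p+1)\) for \(p=5\), \(w_{0}=9\) and \(\ell=0,1,2\):

```
p, h, w0 = 5, 3, 9
jumps = [w0 * (p ** (2 * l + 1) + 1) // (p + 1) for l in range(h)]
print(jumps)  # [9, 189, 4689]
```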
Finally, in section 6.2 we prove that the jumps of eq. (3) for the value \(w_{0}=1\) lift in characteristic zero.
We have also developed a program in Sage [28] in order to compute the decomposition of \(H^{0}(X,\Omega_{X})\) into indecomposable summands, which is freely available1.
Footnote 1: [https://www.dropbox.com/sh/uo0dg9110vuqulr/ACarhRxsru_zuIp5ogLy](https://www.dropbox.com/sh/uo0dg9110vuqulr/ACarhRxsru_zuIp5ogLy)
## 2. Notation
In this article we will study _metacyclic_ groups \(G=C_{q}\rtimes C_{m}\), where \(q=p^{h}\) is a power of the characteristic and \(m\in\mathbb{N},(m,p)=1\). Let \(\tau\) be a generator of the cyclic group \(C_{q}\) and \(\sigma\) be a generator of the cyclic group \(C_{m}\).
The group \(G\) is given in terms of generators and relations as follows:
\[G=\langle\sigma,\tau|\tau^{q}=1,\sigma^{m}=1,\sigma\tau\sigma^{-1}=\tau^{ \alpha}\text{ for some }\alpha\in\mathbb{N},1\leq\alpha\leq p^{h}-1,(\alpha,p)=1\rangle. \tag{4}\]
The integer \(\alpha\) satisfies the following congruence:
\[\alpha^{m}\equiv 1\bmod q \tag{5}\]
as one sees by computing \(\tau=\sigma^{m}\tau\sigma^{-m}=\tau^{\alpha^{m}}\). Also, \(\alpha\) can be seen as an element of the finite field \(\mathbb{F}_{p}\), where it is a \((p-1)\)-th root of unity, not necessarily primitive. In particular the following holds:
**Lemma 5**.: _Let \(\zeta_{m}\) be a fixed primitive \(m\)-th root of unity. There is a natural number \(a_{0}\), \(0\leq a_{0}<m-1\) such that \(\alpha=\zeta_{m}^{a_{0}}\)._
Proof.: The integer \(\alpha\), seen as an element of \(k\), lies in the finite field \(\mathbb{F}_{p}\subset k\); therefore \(\alpha^{p-1}=1\) in \(\mathbb{F}_{p}\). Let \(\operatorname{ord}_{p}(\alpha)\) be the order of \(\alpha\) in \(\mathbb{F}_{p}^{*}\). This gives \(\operatorname{ord}_{p}(\alpha)\mid p-1\), while eq. (5) gives \(\operatorname{ord}_{p}(\alpha)\mid m\); that is, \(\operatorname{ord}_{p}(\alpha)\mid(p-1,m)\).
The primitive \(m\)-th root of unity \(\zeta_{m}\) generates a finite field \(\mathbb{F}_{p}(\zeta_{m})=\mathbb{F}_{p^{\nu}}\) for some integer \(\nu\), which has cyclic multiplicative group \(\mathbb{F}_{p^{\nu}}\backslash\{0\}\) containing both the cyclic groups \(\langle\zeta_{m}\rangle\) and \(\langle\alpha\rangle\). Since for every divisor \(\delta\) of the order of a cyclic group \(C\) there is a unique subgroup \(C^{\prime}<C\) of order \(\delta\) we have that \(\alpha\in\langle\zeta_{m}\rangle\), and the result follows.
**Remark 6**.: For the case \(C_{q}\rtimes C_{m}\) the KGB-obstruction vanishes if and only if the first lower jump \(h\) satisfies \(h\equiv-1\bmod m\). For this to happen the conjugation action of \(C_{m}\) on \(C_{q}\) has to be faithful, see [20, prop. 5.9]. Also notice that by [23, th. 1.1], if \(u_{0},u_{1},\ldots,u_{h-1}\) is the sequence of upper ramification jumps for the \(C_{q}\) subgroup and the first jump is \(\equiv-1\bmod m\), then all upper jumps satisfy \(u_{i}\equiv-1\bmod m\). In remark 10 we will explain the necessity of the KGB-obstruction in terms of the action of \(C_{m}\) on the fixed horizontal divisor of the \(C_{q}\) group.
## 3. Deformation of covers
### Splitting the branch locus
Consider a deformation \(\mathscr{X}\to\operatorname{Spec}A\) of the curve \(X\) together with the action of \(G\). Denote by \(\tilde{\tau}=\tilde{\rho}(\tau)\) a lift of the action of the element \(\tau\in\operatorname{Aut}(X)\). Weierstrass preparation theorem [5, prop. VII.6] implies that:
\[\tilde{\tau}(T)-T=g_{\tilde{\tau}}(T)u_{\tilde{\tau}}(T),\]
where \(g_{\tilde{\tau}}(T)\) is a distinguished Weierstrass polynomial of degree \(m+1\) and \(u_{\tilde{\tau}}(T)\) is a unit in \(R[[T]]\).
The polynomial \(g_{\tilde{\tau}}(T)\) gives rise to a horizontal divisor that corresponds to the fixed points of \(\tilde{\tau}\). This horizontal divisor might not be irreducible. The branch
divisor corresponds to the union of the fixed points of any element in \(G_{1}(P)\). Next lemma gives an alternative definition of a horizontal branch divisor for the relative curves \(\mathscr{X}\to\mathscr{X}^{G}\), that works even when \(G\) is not a cyclic group.
**Lemma 7**.: _Let \(\mathscr{X}\to\operatorname{Spec}A\) be an \(A\)-curve, admitting a fibrewise action of the finite group \(G\), where \(A\) is a Noetherian local ring. Let \(S=\operatorname{Spec}A\), and \(\Omega_{\mathscr{X}/S}\), \(\Omega_{\mathscr{Y}/S}\) be the sheaves of relative differentials of \(\mathscr{X}\) over \(S\) and \(\mathscr{Y}\) over \(S\), respectively. Let \(\pi:\mathscr{X}\to\mathscr{Y}\) be the quotient map. The sheaf_
\[\mathscr{L}(-D_{\mathscr{X}/\mathscr{Y}})=\Omega_{\mathscr{X}/S}^{-1}\otimes_ {S}\pi^{*}\Omega_{\mathscr{Y}/S}\]
_is the ideal sheaf of the horizontal Cartier divisor \(D_{\mathscr{X}/\mathscr{Y}}\). The intersection of \(D_{\mathscr{X}/\mathscr{Y}}\) with the special and generic fibre of \(\mathscr{X}\) gives the ordinary branch divisors for curves._
Proof.: We will first prove that the above defined divisor \(D_{\mathscr{X}/\mathscr{Y}}\) is indeed an effective Cartier divisor. According to [16, Cor. 1.1.5.2] it is enough to prove that
* \(D_{\mathscr{X}/\mathscr{Y}}\) is a closed subscheme which is flat over \(S\).
* for all geometric points \(\operatorname{Spec}k\to S\) of \(S\), the closed subscheme \(D_{\mathscr{X}/\mathscr{Y}}\otimes_{S}k\) of \(\mathscr{X}\otimes_{S}k\) is a Cartier divisor in \(\mathscr{X}\otimes_{S}k/k\).
In our case the special fibre is a nonsingular curve. Since the base is a local ring and the special fibre is nonsingular, the deformation \(\mathscr{X}\to\operatorname{Spec}A\) is smooth. (See the remark after the definition 3.35 p.142 in [19]). The smoothness of the curves \(\mathscr{X}\to S\), and \(\mathscr{Y}\to S\), implies that the sheaves \(\Omega_{\mathscr{X}/S}\) and \(\Omega_{\mathscr{Y}/S}\) are \(S\)-flat, [19, cor. 2.6 p.222].
On the other hand the sheaf \(\Omega_{\mathscr{Y}/\operatorname{Spec}A}\) is by [16, Prop. 1.1.5.1] \(\mathscr{O}_{\mathscr{Y}}\)-flat. Therefore, \(\pi^{*}(\Omega_{\mathscr{Y}/\operatorname{Spec}A})\) is \(\mathscr{O}_{\mathscr{X}}\)-flat and \(\operatorname{Spec}A\)-flat [14, Prop. 9.2]. Finally, observe that the intersection with the special and generic fibre is the ordinary branch divisor for curves according to [14, IV p.301].
For a curve \(X\) and a branch point \(P\) of \(X\) we will denote by \(i_{G,P}\) the order function of the filtration of \(G\) at \(P\). The Artin representation of the group \(G\) is defined by \(\mathrm{ar}_{P}(\sigma)=-f_{P}i_{G,P}(\sigma)\) for \(\sigma\neq 1\) and \(\mathrm{ar}_{P}(1)=f_{P}\sum_{\sigma\neq 1}i_{G,P}(\sigma)\) [27, VI.2]. We are going to use the Artin representation at both the special and generic fibre. In the special fibre we always have \(f_{P}=1\) since the field \(k\) is algebraically closed. The field of quotients of \(A\) need not be algebraically closed, therefore a fixed point there might have \(f_{P}>1\). The integer \(i_{G,P}(\sigma)\) is equal to the multiplicity of \(P\times P\) in the intersection \(\Delta\cdot\Gamma_{\sigma}\) in the relative \(A\)-surface \(\mathscr{X}\times_{\operatorname{Spec}A}\mathscr{X}\), where \(\Delta\) is the diagonal and \(\Gamma_{\sigma}\) is the graph of \(\sigma\) [27, p. 105].
Since the diagonals \(\Delta_{0},\Delta_{\eta}\) and the graphs of \(\sigma\) in the special and generic fibres respectively of \(\mathscr{X}\times_{\operatorname{Spec}A}\mathscr{X}\) are algebraically equivalent divisors we have:
**Proposition 8**.: _Assume that \(A\) is an integral domain, and let \(\mathscr{X}\to\operatorname{Spec}A\) be a deformation of \(X\). Let \(\bar{P_{i}}\), \(i=1,\cdots,s\) be the horizontal branch divisors that intersect the special fibre at the point \(P\), and let \(P_{i}\) be the corresponding points on the generic fibre. For the Artin representations attached to the points \(P,P_{i}\) we have:_
\[\mathrm{ar}_{P}(\sigma)=\sum_{i=1}^{s}\mathrm{ar}_{P_{i}}(\sigma). \tag{6}\]
This generalizes a result of J. Bertin [3]. Moreover, if we set \(\sigma=1\) in the above formula we obtain a relation for the valuations of the differents in the special and the generic fibre, since the value of the Artin representation at \(1\) is the valuation of the different [27, prop. 4.IV, prop. 4.VI]. This observation is equivalent to claim 3.2 in [13] and is one direction of a local criterion for good reduction proved in [13, 3.4], [15, sec. 5].
### The Artin representation on the generic fibre
We can assume that after a base change of the family \(\mathscr{X}\to\operatorname{Spec}(A)\) the points \(P_{i}\) at the generic fibre have degree \(1\). Observe also that at the generic fibre the Artin representation can be computed as follows:
\[\operatorname{ar}_{Q}(\sigma)=\left\{\begin{array}{ll}1&\text{if }\sigma(Q)=Q,\\ 0&\text{if }\sigma(Q)\neq Q.\end{array}\right.\]
The set of points \(S:=\{P_{1},\ldots,P_{s}\}\) that are the intersections of the ramification divisor and the generic fibre are acted on by the group \(G\).
We will now restrict our attention to the case of a cyclic group \(H=C_{q}\) of order \(q\). Let \(S_{k}\) be the subset of \(S\) fixed by \(C_{p^{h-k}}\), i.e.
\[P\in S_{k}\text{ if and only if }H(P)=C_{p^{h-k}}.\]
Let \(s_{k}\) be the order of \(S_{k}\). Observe that since for a point \(Q\) in the generic fibre \(\sigma(Q)\) and \(Q\) have the same stabilizers (in general they are conjugate, but here \(H\) is abelian) the sets \(S_{k}\) are acted on by \(H\). Therefore \(\#S_{k}=:s_{k}=p^{k}i_{k}\) where \(i_{k}\) is the number of orbits of the action of \(H\) on \(S_{k}\).
Let \(b_{0},b_{1},\ldots,b_{h-1}\) be the jumps in the lower ramification filtration. Observe that
\[H_{b_{k}}=\left\{\begin{array}{ll}C_{p^{h-k}}&\text{ for }0\leq k\leq h-1 \\ \{1\}&\text{ for }k\geq h.\end{array}\right.\]
An element in \(H_{b_{k}}\) fixes only elements in \(S\) with stabilizers that contain \(H_{b_{k}}\). So \(H_{b_{0}}\) fixes only \(S_{0}\), \(H_{b_{1}}\) fixes both \(S_{0}\) and \(S_{1}\) and \(H_{b_{k}}\) fixes all elements in \(S_{0},S_{1},\ldots,S_{k}\). By definition of the Artin representation an element \(\sigma\) in \(H_{b_{k}}-H_{b_{k+1}}\) satisfies \(\operatorname{ar}_{P}(\sigma)=b_{k}+1\) and by using equation (6) we arrive at
\[b_{k}+1=i_{0}+pi_{1}+\cdots+p^{k}i_{k}.\]
**Remark 9**.: This gives us a geometric interpretation of the Hasse-Arf theorem, which states that for the cyclic \(p\)-group of order \(q=p^{h}\), the lower ramification filtration is given by
\[H_{0}=H_{1}=\cdots=H_{b_{0}}\gtrapprox H_{b_{0}+1}=\cdots=H_{b_{1}}\gtrapprox H_{b_{1}+1}=\cdots=H_{b_{h-1}}\gtrapprox\{1\},\]
i.e. the jumps of the ramification filtration appear at the integers \(b_{0},\ldots,b_{h-1}\). Then
\[b_{k}+1=i_{0}+i_{1}p+i_{2}p^{2}+\cdots+i_{k}p^{k}. \tag{7}\]
The set of horizontal branch divisors is illustrated in figure 1. Notice that the group \(C_{m}\) acts on the set of ramification points of \(H=C_{q}\) on the special fibre but it cannot fix any of them, since they are already fixed by a subgroup of \(C_{q}\), and if a branch point \(P\) of \(C_{q}\) were also fixed by an element of \(C_{m}\), then the isotropy subgroup of \(P\) could not be cyclic. This proves that \(m\) divides each of the orbit numbers \(i_{0},\ldots,i_{n-1}\).
**Remark 10**.: In this way we can recover the necessity of the KGB-obstruction: by eq. (7) the upper ramification jumps are \(i_{0}-1,i_{0}+i_{1}-1,\ldots,i_{0}+\cdots+i_{n-1}-1\), and since \(m\) divides all the \(i_{k}\), these are all \(\equiv-1\bmod m\).
The Galois cover \(X\to X/G\) breaks into two covers \(X\to X^{C_{q}}\) and \(X^{C_{q}}\to X^{G}\). The genus of \(X^{G}\) is zero by assumption and in the cover \(X^{C_{q}}\to X^{G}\) there are exactly two ramified points with ramification indices \(m\). An application of the Riemann-Hurwitz formula shows that the genus of \(X^{C_{q}}\) is zero as well.
The genus of the curve \(X\) can be computed either by the Riemann-Hurwitz formula in the special fibre
\[g =1-p^{n}+\frac{1}{2}\sum_{i=0}^{\infty}(|G_{i}|-1)\] \[=1-p^{n}+\frac{1}{2}\left((b_{0}+1)(p^{n}-1)+(b_{1}-b_{0})(p^{n-1}-1)+(b_{2}-b_{1})(p^{n-2}-1)+\cdots\right.\] \[\left.\cdots+(b_{n-1}-b_{n-2})(p-1)\right)\]
or by the Riemann-Hurwitz formula on the generic fibre:
\[g=1-p^{n}+\frac{1}{2}\left(i_{0}(p^{n}-1)+i_{1}p(p^{n-1}-1)+\cdots+i_{n-1}p^{n-1}(p-1)\right). \tag{8}\]
Using eq. (7) we see that the two formulas for \(g\) give the same result as expected.
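For instance, for \(p=5\), \(h=3\) and lower jumps \(b_{0}=9\), \(b_{1}=189\), \(b_{2}=4689\) (the dihedral example treated in section 6.1 below), eq. (7) gives \(i_{0}=10\), \(i_{1}=36\), \(i_{2}=180\), and both formulas yield
\[g=1-125+\frac{1}{2}\big{(}10\cdot 124+180\cdot 24+4500\cdot 4\big{)}=1-125+\frac{1}{2}\big{(}10\cdot 124+36\cdot 5\cdot 24+180\cdot 25\cdot 4\big{)}=11656,\]
in agreement with the genus quoted in section 6.1.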
## 4. HKG-covers and their canonical ideal
**Lemma 11**.: _Consider the Harbater-Katz-Gabber curve corresponding to the local group action of \(C_{q}\rtimes C_{m}\), where \(q=p^{h}\) is a power of the characteristic \(p\). If one of the following conditions holds:_
* \(h\geq 2\)__
* \(h=1\) _and the first jump_ \(i_{0}\) _in the ramification filtration for the cyclic group satisfies_ \(i_{0}\neq 1\) _and_ \(q\geq\frac{12}{i_{0}-1}+1\)_,_
_then the curve \(X\) has canonical ideal generated by quadratic polynomials._
Proof.: We will prove that the curve \(X\) has genus \(g\geq 6\) provided that \(p\) or \(h\) is big enough. We will also prove that the curve \(X\) is not hyperelliptic nor trigonal.
Figure 1. The horizontal Ramification divisor
**Remark 12**.: Let us first recall that a cyclic group of order \(q=p^{h}\) for \(h\geq 2\) cannot act faithfully on a rational curve, see [29, thm 1]. Also let us recall that a cyclic group of order \(p\) can act on a rational curve, and in this case the first and only break in the ramification filtration is \(i_{0}=1\). This latter case is excluded.
Consider first the case \(p^{h}=p\) and \(i_{0}\neq 1\). In this case we compute the genus \(g\) of the HKG-curve \(X\) using Riemann-Hurwitz formula:
\[2g=2-2mq+q(m-1)+qm-1+i_{0}(q-1),\]
where the contribution \(q(m-1)\) is from the \(q\)-points above the unique tame ramified point, while \(qm-1+i_{0}(q-1)\) is the contribution of the wild ramified point. This implies that,
\[2g=(i_{0}-1)(q-1),\]
therefore if \(i_{0}\geq 2\), it suffices to have \(q=p^{h}\geq 13\) and more generally it is enough to have \(q\geq\frac{12}{i_{0}-1}+1\) in order to ensure that \(g\geq 6\).
For the case \(h\geq 2\), we can write a stronger inequality based on the Riemann-Hurwitz theorem as (recall that \(i_{1}\equiv i_{0}\bmod p\), so \(i_{1}-i_{0}\geq p\))
\[2g\geq(i_{0}-1)(p^{h}-1)+(i_{1}-i_{0})(p^{h-1}-1)\geq p^{h}-p, \tag{9}\]
which implies that \(g\geq 6\) for \(p>3\) or \(h>3\).
In order to prove that the curve is not hyperelliptic we observe that the automorphism group of a hyperelliptic curve has a normal subgroup generated by the hyperelliptic involution \(j\), and \(X\to X/\langle j\rangle=\mathbb{P}^{1}\). It is known that the automorphism group of a hyperelliptic curve fits in the short exact sequence
\[1\to\langle j\rangle\to\operatorname{Aut}(X)\to H\to 1, \tag{10}\]
where \(H\) is a subgroup of \(\operatorname{PGL}(2,k)\), see [7]. If \(m\) is odd then the hyperelliptic involution is not an element of \(C_{m}\). If \(m\) is even, let \(\sigma\) be a generator of the cyclic group of order \(m\) and \(\tau\) a generator of the group \(C_{q}\). The involution \(\sigma^{m/2}\) again cannot be the hyperelliptic involution. Indeed, the hyperelliptic involution is central, while the conjugation action of \(\sigma\) on \(\tau\) is faithful, that is, \(\sigma^{m/2}\tau\sigma^{-m/2}\neq\tau\). In this case \(G=C_{q}\rtimes C_{m}\) is a subgroup of \(H\), which should act on the rational function field. By the classification of such groups in [29, Th. 1] this is not possible. Thus \(X\) cannot be hyperelliptic.
We will prove now that the curve is not trigonal. Using Clifford's theorem we can show [2, B-3 p.137] that a non-hyperelliptic curve of genus \(g\geq 5\) cannot have two distinct \(g_{3}^{1}\). So if there is a \(g_{3}^{1}\), then this is unique. Moreover, the \(g_{3}^{1}\) gives rise to a map \(\pi:X\to\mathbb{P}^{1}\) and every automorphism of the curve \(X\) preserves this map. Therefore, we obtain a morphism \(\phi:C_{q}\rtimes C_{m}\to\operatorname{PGL}_{2}(k)\) and we arrive at the short exact sequence
\[1\to\ker\phi\to C_{q}\rtimes C_{m}\to H\to 1,\]
for some finite subgroup \(H\) of \(\operatorname{PGL}(2,k)\). If \(\ker\phi=\{1\}\), then we have the tower of curves \(X\xrightarrow{\pi}\mathbb{P}^{1}\xrightarrow{\pi^{\prime}}\mathbb{P}^{1}\), where \(\pi^{\prime}\) is a Galois cover with group \(C_{q}\rtimes C_{m}\). This implies that \(X\) is a rational curve contradicting remark 12. If \(\ker\phi\) is a cyclic group of order \(3\), then we have that \(3\mid m\) and the tower \(X\xrightarrow{\pi}\mathbb{P}^{1}\xrightarrow{\pi^{\prime}}\mathbb{P}^{1}\), where \(\pi\) is a cyclic Galois cover of order \(3\) and \(\pi^{\prime}\) is a Galois cover with group \(C_{q}\rtimes C_{m/3}\). As before this contradicts remark 12 and is not possible.
## 5. Invariant subspaces of vector spaces
The \(g\times g\) symmetric matrices \(A_{1},\ldots,A_{r}\) defining the quadratic canonical ideal of the curve \(X\), define a vector subspace of the vector space \(V\) of \(g\times g\) symmetric matrices. By Oort conjecture, we know that there are symmetric matrices \(\tilde{A}_{1},\ldots,\tilde{A}_{r}\) with entries in a local principal ideal domain \(R\), which reduce to the initial matrices \(A_{1},\ldots,A_{r}\). These matrices \(\tilde{A}_{1},\ldots,\tilde{A}_{r}\) correspond to the lifted relative curve \(\tilde{X}\). Moreover, the submodule \(\tilde{V}=\langle\tilde{A}_{1},\ldots,\tilde{A}_{r}\rangle\) is left invariant under the action of a lifting \(\tilde{\rho}\) of the representation \(\rho:C_{q}\to\operatorname{GL}_{g}(k)\).
**Proposition 13**.: _Let \(\tilde{g}\) be the genus of the quotient curve \(X/H\) for a subgroup \(H\) of the automorphism group of a curve \(X\) in characteristic zero. We have_
\[\dim H^{0}(X,\Omega_{X}^{\otimes d})^{H}=\begin{cases}\tilde{g}&\text{ if }d=1\\ (2d-1)(\tilde{g}-1)+\sum_{P\in X/H}\Big{\lfloor}d\left(1-\frac{1}{e(P)}\right) \Big{\rfloor}&\text{ if }d>1\end{cases}\]
Proof.: See [12, eq. 2.2.3,2.2.4 p. 254].
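In particular, if the quotient \(X/H\) is rational, then the case \(d=1\) gives \(\dim H^{0}(X,\Omega_{X})^{H}=0\), i.e. \(H\) fixes no nonzero holomorphic differential on \(X\).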
Therefore, a generator of \(C_{q}\) acting on \(H^{0}(X,\Omega_{X})\) has no identity eigenvalues and \(m\) should divide \(g\). This means that we have to consider liftings of indecomposable summands of \(C_{q}\), which satisfy condition 3.b. with \(a=0\). We now assume that condition 3.b. of theorem 3 can be fulfilled, so there is a lifting of the representation
satisfying this condition; see also the discussion in the introduction following the statement of this theorem, after eq. (2).
We have to show that we can modify the space \(\tilde{V}\subset\operatorname{Sym}_{g}(R)\) to a space \(\tilde{V}^{\prime}\) with the same reduction \(V\) modulo \(\mathfrak{m}_{R}\) so that \(\tilde{V}^{\prime}\) is \(C_{q}\rtimes C_{m}\)-invariant.
Consider the sum of the free modules
\[W=\tilde{V}+\tilde{\rho}(\sigma)\tilde{V}+\tilde{\rho}(\sigma^{2})\tilde{V}+ \cdots+\tilde{\rho}(\sigma^{m-1})\tilde{V}\subset R^{N}.\]
Observe that \(W\) is an \(R[C_{q}\rtimes C_{m}]\)-module and also it is a free submodule of \(R^{N}\), and by the theory of modules over a local principal ideal domain there is a basis \(E_{1},\ldots,E_{N}\) of \(R^{N}\) such that
\[W=E_{1}\oplus\cdots\oplus E_{r}\oplus\pi^{a_{r+1}}E_{r+1}\oplus\cdots\oplus \pi^{a_{N}}E_{N},\]
where \(E_{1},\ldots,E_{r}\) form a basis of \(\tilde{V}\), while \(\pi^{a_{r+1}}E_{r+1},\ldots,\pi^{a_{N}}E_{N}\) form a basis of the kernel \(W_{1}\) of the reduction modulo \(\mathfrak{m}_{R}\). Since the reduction is compatible with the actions of \(\rho,\tilde{\rho}\) we have that \(W_{1}\) is an \(R[C_{q}\rtimes C_{m}]\)-module, while \(\tilde{V}\) is just a \(C_{q}\)-module.
Let \(\pi\) be the \(R[C_{q}]\)-equivariant projection map \(W=\tilde{V}\oplus_{R[C_{q}]-\text{modules}}W_{1}\to W_{1}\). Since \(m\) is an invertible element of \(R\), we can employ the proof of Maschke's theorem in order to construct a module \(\tilde{V}^{\prime}\), which is \(R[C_{q}\rtimes C_{m}]\) stable and reduces to \(V\) modulo \(\mathfrak{m}_{R}\), see also [1, I.3 p.12]. Indeed, consider the endomorphism \(\bar{\pi}:W\to W\) defined by
\[\bar{\pi}=\frac{1}{m}\sum_{i=0}^{m-1}\tilde{\rho}(\sigma^{i})\pi\tilde{\rho}( \sigma^{-i}).\]
We see that \(\bar{\pi}\) is the identity on \(W_{1}\) since \(\pi\) is the identity on \(W_{1}\). Moreover \(\tilde{V}^{\prime}:=\ker\bar{\pi}\) is both \(C_{q}\) and \(C_{m}\) invariant and reduces to \(V\) modulo \(\mathfrak{m}_{R}\).
## 6. Galois module structure of holomorphic differentials, special fibre
Consider the group \(C_{q}\rtimes C_{m}\). Let \(\tau\) be a generator of \(C_{q}\) and \(\sigma\) a generator of \(C_{m}\). It is known that \(\operatorname{Aut}(C_{q})\cong\mathbb{F}_{p}^{*}\times Q\), for some abelian group \(Q\). The representation \(\psi:C_{m}\to\operatorname{Aut}(C_{q})\) given by the action of \(C_{m}\) on \(C_{q}\) is known to factor through a character \(\chi:C_{m}\to\mathbb{F}_{p}^{*}\). The order of \(\chi\) divides \(p-1\) and \(\chi^{p-1}=\chi^{-(p-1)}\) is the trivial one dimensional character. In our setting, using the definition of \(G\) given in eq. (4) and lemma 5 we have that the character \(\chi\) is defined by
\[\chi(\sigma)=\alpha=\zeta_{m}^{a_{0}}\in\mathbb{F}_{p}. \tag{11}\]
For all \(i\in\mathbb{Z}\), \(\chi^{i}\) defines a simple \(k[C_{m}]\)-module of \(k\) dimension one, which we will denote by \(S_{\chi^{i}}\). For \(0\leq\ell\leq m-1\) denote by \(S_{\ell}\) the simple module on which \(\sigma\) acts as \(\zeta_{m}^{\ell}\). Both \(S_{\chi^{i}}\), \(S_{\ell}\) can be seen as \(k[C_{q}\rtimes C_{m}]\)-modules using inflation. Finally for \(0\leq\ell\leq m-1\) we define \(\chi^{i}(\ell)\in\{0,1,\dots,m-1\}\) such that \(S_{\chi^{i}(\ell)}\cong S_{\ell}\otimes_{k}S_{\chi^{i}}\). Using eq. (11) we arrive at
\[S_{\chi^{i}(\ell)}=S_{\ell+ia_{0}}. \tag{12}\]
There are \(q\cdot m\) isomorphism classes of indecomposable \(k[C_{q}\rtimes C_{m}]\)-modules and they are all uniserial. An indecomposable \(k[C_{q}\rtimes C_{m}]\)-module \(U\) is uniquely determined by its socle, which is the kernel of the action of \(\tau-1\) on \(U\), and its \(k\)-dimension. For \(0\leq\ell\leq m-1\) and \(1\leq\mu\leq q\), let \(U_{\ell,\mu}\) be the indecomposable \(k[C_{q}\rtimes C_{m}]\) module with socle \(S_{\ell}\) and \(k\)-dimension \(\mu\). Then \(U_{\ell,\mu}\) is uniserial and its \(\mu\) ascending composition factors are the first \(\mu\) composition factors of the sequence
\[S_{\ell},S_{\chi^{-1}(\ell)},S_{\chi^{-2}(\ell)},\dots,S_{\chi^{-(p-2)}(\ell)},S_{\ell},S_{\chi^{-1}(\ell)},S_{\chi^{-2}(\ell)},\dots,S_{\chi^{-(p-2)}(\ell)}\]
Notice that in our notation \(V_{\alpha}(\lambda,k)=U_{\lambda+k,k}\).
Assume that \(X\to\mathbb{P}^{1}\) is an HKG-cover with Galois group \(C_{q}\rtimes C_{m}\). The subgroup \(I\) generated by the Sylow \(p\)-subgroups of the inertia groups of all closed points of \(X\) is equal to \(C_{q}\).
**Definition 14**.: For each \(0\leq j\leq q-1\) we define
\[D_{j}=\sum_{y\in\mathbb{P}^{1}}d_{y,j}y,\]
where the integers \(d_{y,j}\) are defined as follows. Let \(x\) be a point of \(X\) above \(y\) and consider the \(i\)-th ramification group \(I_{x,i}\) at \(x\). The order of the inertia group at \(x\) is assumed to be \(p^{n(x)}\) and we set \(i(x)=h-n(x)\). Let \(b_{0},b_{1},\dots,b_{n(x)-1}\) be the jumps in the numbering of the lower ramification filtration subgroups of \(I_{x}\). We define
\[d_{y,j}=\left\lfloor\frac{1}{p^{n(x)}}\sum_{l=1}^{n(x)}p^{n(x)-l}\big{(}p-1+(p -1-a_{l,t})b_{l-1}\big{)}\right\rfloor\]
for all \(t,j\geq 0\) satisfying
\[p^{i(x)}t\leq j<p^{i(x)}(t+1) \tag{13}\]
and
\[t=a_{1,t}+a_{2,t}p+\dots+a_{n(x),t}p^{n(x)-1}\]
is the \(p\)-adic expansion of \(t\). In particular \(D_{q-1}=0\). Observe that \(d_{y,j}\neq 0\) only for wildly ramified branch points.
**Remark 15**.: For a divisor \(D\) on a curve \(Y\) define \(\Omega_{Y}(D)=\Omega_{Y}\otimes\mathscr{O}_{Y}(D)\). In particular for \(Y=\mathbb{P}^{1}\), and for \(D=D_{j}=d_{P_{\infty},j}P_{\infty}\), where \(D_{j}\) is a divisor supported at the infinity point \(P_{\infty}\) we have
\[H^{0}(\mathbb{P}^{1},\Omega_{\mathbb{P}^{1}}(D_{j}))=\{f(x)dx:0\leq\deg f(x) \leq d_{P_{\infty},j}-2\}.\]
For the sake of simplicity, we will denote \(d_{P_{\infty},j}\) by \(d_{j}\). The space \(H^{0}(\mathbb{P}^{1},\Omega_{\mathbb{P}^{1}}(D_{j}))\) has a basis given by \(B=\{dx,xdx,\ldots,x^{d_{j}-2}dx\}\). Therefore, the number \(n_{j,\ell}\) of simple modules appearing in the decomposition \(\Omega_{\mathbb{P}^{1}}(D_{j})\) isomorphic to \(S_{\ell}\) for \(0\leq\ell<m\), is equal to the number of monomials \(x^{\nu}\) with
\[\nu\equiv\ell-1\;\mathrm{mod}m,0\leq\nu\leq d_{j}-2.\]
If \(d_{j}\leq 1\) then \(B=\emptyset\) and \(n_{j,\ell}=0\) for all \(0\leq\ell<m\). If \(d_{j}>1\), then we know that in the \(d_{j}-1\) elements of the basis \(B\), the first \(m\left\lfloor\frac{d_{j}-1}{m}\right\rfloor\) elements contribute to every residue class modulo \(m\). Thus, we have at least \(\left\lfloor\frac{d_{j}-1}{m}\right\rfloor\) elements isomorphic to \(S_{\ell}\) for every \(0\leq\ell<m\). We will now count the remaining elements, of the form \(\{x^{\nu}dx\}\), where
\[m\left\lfloor\frac{d_{j}-1}{m}\right\rfloor\leq\nu\leq d_{j}-2\text{ and }\nu\equiv\overline{\ell-1}\;\mathrm{mod}m,\]
where \(\overline{\ell-1}\) is the unique integer in \(\{0,1,\ldots,m-1\}\) equivalent to \(\ell-1\) modulo \(m\). We observe that the number \(y_{j}(\ell)\) of such elements \(\nu\) is given by
\[y_{j}(\ell)=\begin{cases}1&\text{ if }\overline{\ell-1}\leq d_{j}-2-m\left\lfloor \frac{d_{j}-1}{m}\right\rfloor\\ 0&\text{ otherwise}\end{cases}\]
Therefore
\[n_{j,\ell}=\begin{cases}\left\lfloor\frac{d_{j}-1}{m}\right\rfloor+y_{j}( \ell)&\text{ if }d_{j}\geq 2\\ 0&\text{ if }d_{j}\leq 1\end{cases}\]
For example if \(d_{j}=9\) and \(m=3\), then a basis for \(H^{0}(\mathbb{P}^{1},\Omega_{\mathbb{P}^{1}}(9P_{\infty}))\) is given by \(\{dx,xdx,x^{2}dx,\ldots x^{7}dx\}\). This basis has 8 elements, and each triple \(\{dx,xdx,x^{2}dx\}\), \(\{x^{3}dx,x^{4}dx,x^{5}dx\}\) contributes one to each class \(S_{0},S_{1},S_{2}\), while there are two remaining basis elements \(\{x^{6}dx,x^{7}dx,\}\), which contribute one to \(S_{1},S_{2}\). Notice that \(\left\lfloor\frac{8}{3}\right\rfloor=2\) and \(y(\ell)=1\) for \(\ell=1,2\).
In particular if \(m=2\), then \(n_{j,\ell}=0\) if \(d_{j}\leq 1\) and for \(d_{j}\geq 2\) we have
\[n_{j,\ell}=\begin{cases}\frac{d_{j}-1}{2}&\text{ if }d_{j}\equiv 1\;\mathrm{mod }2\\ \frac{d_{j}}{2}-1&\text{ if }\ell=0\text{ and }d_{j}\equiv 0\;\mathrm{mod}2\\ \frac{d_{j}}{2}&\text{ if }\ell=1\text{ and }d_{j}\equiv 0\;\mathrm{mod}2 \end{cases} \tag{14}\]
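Since this counting is purely combinatorial it can be checked mechanically. The following Python sketch (illustrative only, and not the authors' Sage program) counts the monomials \(x^{\nu}dx\) directly and compares the result with the closed formula for \(n_{j,\ell}\); for \(d_{j}=9\) and \(m=3\) it returns the counts \(2,3,3\) obtained in the example above.

```python
# Sketch (not the authors' Sage program): count the basis differentials
# x^nu dx, 0 <= nu <= d-2, lying in the class S_ell (nu = ell-1 mod m),
# and compare with the closed formula of Remark 15.

def n_direct(d, m, ell):
    # direct count over the basis {dx, x dx, ..., x^(d-2) dx}
    return sum(1 for nu in range(max(d - 1, 0)) if nu % m == (ell - 1) % m)

def n_formula(d, m, ell):
    if d <= 1:
        return 0
    base = (d - 1) // m
    extra = 1 if (ell - 1) % m <= d - 2 - m * base else 0
    return base + extra

print([n_formula(9, 3, ell) for ell in range(3)])   # [2, 3, 3]
assert all(n_direct(d, m, ell) == n_formula(d, m, ell)
           for d in range(60) for m in (2, 3, 4) for ell in range(m))
```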
**Lemma 16**.: _Assume that \(d_{j-1}=d_{j}+1\). Then if \(d_{j}\geq 2\)_
\[n_{j-1,\ell}-n_{j,\ell}=\begin{cases}1&\text{ if }d_{j-1}\equiv 1\;\mathrm{mod }2\text{ and }\ell=0\\ &\text{ or }d_{j-1}\equiv 0\;\mathrm{mod}2\text{ and }\ell=1\\ 0&\text{ if }d_{j-1}\equiv 1\;\mathrm{mod}2\text{ and }\ell=1\\ &\text{ or }d_{j-1}\equiv 0\;\mathrm{mod}2\text{ and }\ell=0\end{cases}\]
_If \(d_{j}\leq 1\), then_
\[n_{j-1,\ell}-n_{j,\ell}=\begin{cases}0&\text{ if }d_{j}=0\text{ or }(d_{j}=1\text{ and }\ell=0)\\ 1&\text{ if }d_{j}=1\text{ and }\ell=1\end{cases}\]
Proof.: Assume that \(d_{j}\geq 2\). We distinguish the following two cases, and we will use eq. (14)
* \(d_{j-1}\) is odd and \(d_{j}\) is even. Then, if \(\ell=0\) \[n_{j-1,\ell}-n_{j,\ell}=\frac{d_{j-1}-1}{2}-\frac{d_{j}}{2}+1=1\] while \(n_{j-1,\ell}-n_{j,\ell}=0\) if \(\ell=1\).
* \(d_{j-1}\) is even and \(d_{j}\) is odd. Then, if \(\ell=0\) \[n_{j-1,\ell}-n_{j,\ell}=\frac{d_{j-1}}{2}-1-\frac{d_{j}-1}{2}=0,\] while \(n_{j-1,\ell}-n_{j,\ell}=1\) if \(\ell=1\).
If now \(d_{j}=0\) and \(d_{j-1}=1\), then \(n_{j-1,\ell}-n_{j,\ell}=0\). If \(d_{j}=1\) and \(d_{j-1}=2\) then \(n_{j,\ell}=0\) while \(n_{j-1,\ell}=0\) if \(\ell=0\) and \(n_{j-1,\ell}=1\) if \(\ell=1\).
**Theorem 17**.: _Let \(M=H^{0}(X,\Omega_{X})\), let \(\tau\) be the generator of \(C_{q}\), and for all \(0\leq j<q\) we define \(M^{(j)}\) to be the kernel of the action of \(k[C_{q}](\tau-1)^{j}\). For \(0\leq a\leq m-1\) and \(1\leq b\leq q=p^{h}\), let \(n(a,b)\) be the number of indecomposable direct \(k[C_{q}\rtimes C_{m}]\)-module summands of \(M\) that are isomorphic to \(U_{a,b}\). Let \(n_{1}(a,b)\) be the number of indecomposable direct \(k[C_{m}]\)-summands of \(M^{(b)}/M^{(b-1)}\) with socle \(S_{\chi^{-(b-1)}(a)}\) and dimension \(1\). Let \(n_{2}(a,b)\) be the number of indecomposable direct \(k[C_{m}]\)-module summands of \(M^{(b+1)}/M^{(b)}\) with socle \(S_{\chi^{-b}(a)}\), where we set \(n_{2}(a,b)=0\) if \(b=q\)._
_Then_

\[n(a,b)=n_{1}(a,b)-n_{2}(a,b).\]
_The numbers \(n_{1}(a,b),n_{2}(a,b)\) can be computed using the isomorphism_
\[M^{(j+1)}/M^{(j)}\cong S_{\chi^{-j}}\otimes_{k}H^{0}(Y,\Omega_{Y}(D_{j})),\]
_where \(Y=X/C_{q}\) and \(D_{j}\) are the divisors on \(Y\), given in definition 14._
For the case of HKG-covers, with \(\infty\) the wild ramified point and \(0\) the tame ramified point the divisors \(D_{j}\) are supported only at the wild ramified point and are given by
\[D_{j}=\left\lfloor\frac{1}{p^{h}}\sum_{l=1}^{h}p^{h-l}(p-1+(p-1-a_{l,t})b_{ l-1})\right\rfloor P_{\infty}\]
where
\[t=a_{1,t}+a_{2,t}p+\cdots+a_{h,t}p^{h-1}\]
is the \(p\)-adic expansion of \(t\). Notice that since \(i(x)=0\) eq. (13) reduces to \(t=j\).
**Corollary 18**.: _Set \(d_{j}=\left\lfloor\frac{1}{p^{h}}\sum_{l=1}^{h}p^{h-l}(p-1+(p-1-a_{l,t})b_{l- 1})\right\rfloor\). The numbers \(n(a,b),n_{1}(a,b)\) and \(n_{2}(a,b)\) are given by_
\[n(a,b)=n_{1}(a,b)-n_{2}(a,b)=n_{b-1,a}-n_{b,a}.\]
Proof.: We will treat the \(n_{1}(a,b)\) case and the \(n_{2}(a,b)\) follows similarly. By the equivariant isomorphism for \(M=H^{0}(X,\Omega_{X})\) we have that
\[M^{(b+1)}/M^{(b)}\cong S_{\chi^{-b}}\otimes_{k}H^{0}(\mathbb{P}^{1},\Omega_{ \mathbb{P}^{1}}(D_{b})).\]
The number of indecomposable \(k[C_{m}]\)-summands of \(M^{(b)}/M^{(b-1)}\) isomorphic to \(S_{\chi^{-(b-1)}(a)}=S_{a-(b-1)a_{0}}\) equals the number of indecomposable \(k[C_{m}]\)-summands of \(H^{0}(\mathbb{P}^{1},\Omega_{\mathbb{P}^{1}}(D_{j}))\) isomorphic to \(S_{a}\), which is computed in remark 15.
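As a sanity check, the following Python sketch (again illustrative, not the authors' published Sage code) computes the numbers \(d_{j}\) from the lower jumps and the multiplicities \(n(a,b)=n_{b-1,a}-n_{b,a}\) for \(m=2\). For \(p=5\), \(h=3\) and lower jumps \(9,189,4689\) it reproduces the values tabulated in section 6.1 below, e.g. \(d_{0}=188\), multiplicity \(1\) for \(U_{1,124}\) and multiplicity \(0\) for \(U_{0,124}\).

```python
# Sketch: multiplicities n(a,b) of U_{a,b} in H^0(X, Omega_X) for an
# HKG-cover with group C_q ⋊ C_m (q = p^h), via Corollary 18 and Remark 15.

def digits(t, p, h):
    return [(t // p**l) % p for l in range(h)]            # a_{1,t}, ..., a_{h,t}

def d(j, p, h, jumps):                                     # d_j of Definition 14
    a = digits(j, p, h)
    s = sum(p**(h - l) * (p - 1 + (p - 1 - a[l - 1]) * jumps[l - 1])
            for l in range(1, h + 1))
    return s // p**h

def n(j, ell, m, p, h, jumps):                              # n_{j,ell} of Remark 15
    dj = d(j, p, h, jumps)
    if dj <= 1:
        return 0
    base = (dj - 1) // m
    return base + (1 if (ell - 1) % m <= dj - 2 - m * base else 0)

def multiplicity(a, b, m, p, h, jumps):                     # n(a,b) = n_{b-1,a} - n_{b,a}
    q = p**h
    n2 = 0 if b == q else n(b, a, m, p, h, jumps)
    return n(b - 1, a, m, p, h, jumps) - n2

p, h, m, jumps = 5, 3, 2, (9, 189, 4689)
print(d(0, p, h, jumps))                                    # 188
print(multiplicity(1, 124, m, p, h, jumps))                 # 1
print(multiplicity(0, 124, m, p, h, jumps))                 # 0
```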
In [23, Th. 1.1] A. Obus and R. Pries described the upper jumps in the ramification filtration of \(C_{p^{h}}\rtimes C_{m}\)-covers.
**Theorem 19**.: _Let \(G=C_{p^{h}}\rtimes C_{m}\), where \(p\nmid m\). Let \(m^{\prime}=|\mathrm{Cent}_{G}(\tau)|/p^{h}\), where \(\langle\tau\rangle=C_{p^{h}}\). A sequence \(u_{1}\leq\cdots\leq u_{h}\) of rational numbers occurs as the set of positive breaks in the upper numbering of the ramification filtration of a \(G\)-Galois extension of \(k((t))\) if and only if:_
1. \(u_{i}\in\frac{1}{m}\mathbb{N}\) _for_ \(1\leq i\leq h\)__
2. \(\gcd(m,mu_{1})=m^{\prime}\)__
3. \(p\nmid mu_{1}\) _and for_ \(1<i\leq h\)_, either_ \(u_{i}=pu_{i-1}\) _or both_ \(u_{i}>pu_{i-1}\) _and_ \(p\nmid mu_{i}\)_._
4. \(mu_{i}\equiv mu_{1}\bmod m\) _for_ \(1\leq i\leq h\)_._
Notice that in our setting \(\mathrm{Cent}_{G}(\tau)=\langle\tau\rangle\), therefore \(m^{\prime}=1\). Also the set of upper jumps of \(C_{p^{h}}\) is given by \(w_{1}=mu_{1},\ldots,w_{h}=mu_{h},w_{i}\in\mathbb{N}\), see [23, lemma 3.5].
The theorem of Hasse-Arf [27, p. 77] applied for cyclic groups, implies that there are strictly positive integers \(\iota_{0},\iota_{1},\ldots,\iota_{h-1}\) such that
\[b_{s}=\sum_{\nu=0}^{s-1}\iota_{\nu}p^{\nu},\text{ for }0\leq s\leq h-1\]
Also, the upper jumps for the \(C_{q}\) extension are given by
\[w_{0}=i_{0}-1,w_{1}=i_{0}+i_{1}-1,\ldots,w_{h-1}=i_{0}+i_{1}+\cdots+i_{h-1}-1. \tag{15}\]
Assume that for all \(0<\nu\leq h-1\) we have \(w_{\nu}=pw_{\nu-1}\). Equation (15) implies that
\[i_{1}=(p-1)w_{0},i_{2}=(p-1)pw_{0},i_{3}=(p-1)p^{2}w_{0},\ldots,i_{h-1}=(p-1)p^{h-2}w_{0}.\]
Therefore,
\[b_{\ell}+1 =\sum_{\nu=0}^{\ell}i_{\nu}p^{\nu}=1+w_{0}+(p-1)w_{0}\cdot p+(p-1)pw_{0}\cdot p^{2}+\cdots+(p-1)p^{\ell-1}w_{0}\cdot p^{\ell}\] \[=1+w_{0}+p(p-1)w_{0}\left(\sum_{\nu=0}^{\ell-1}p^{2\nu}\right)=1+w_{0}+p(p-1)w_{0}\frac{p^{2\ell}-1}{p^{2}-1}\] \[=1+w_{0}+pw_{0}\frac{p^{2\ell}-1}{p+1}=1+w_{0}\frac{p^{2\ell+1}+1}{p+1}\]
where we have used that \(w_{0}=b_{0}=i_{0}-1\).
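For \(p=5\) these jumps are easily tabulated; the short sketch below lists \(b_{\ell}=w_{0}\frac{p^{2\ell+1}+1}{p+1}\) for \(w_{0}=1\) and \(w_{0}=9\), recovering the jump sequences \((1,21,521)\) and \((9,189,4689)\) that appear in the examples of the next two subsections.

```python
# Sketch: lower jumps b_ell = w0 * (p^(2*ell+1) + 1) / (p + 1) for p = 5.
p, h = 5, 3
for w0 in (1, 9):
    jumps = [w0 * (p**(2 * ell + 1) + 1) // (p + 1) for ell in range(h)]
    print(w0, jumps)          # 1 [1, 21, 521]   and   9 [9, 189, 4689]
```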
### Examples of local actions that don't lift
Consider the curve with lower jumps \(1,21,521\) and upper jumps \(1,5,25\), acted on by \(C_{125}\rtimes C_{4}\). According to eq. (5), the only possible values for \(\alpha\) are \(1,57,68,124\). The value \(\alpha=1\) gives rise to a cyclic group \(G\), while the value \(\alpha=124\) has order \(2\) modulo \(125\). The values \(57,68\) have order \(4\) modulo \(125\). The cyclic group \(\mathbb{F}_{5}^{*}\) is generated by the primitive root \(2\) of order \(4\). We have that \(57\equiv 2\bmod 5\), while \(68\equiv 3\equiv 2^{3}\bmod 5\).
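These arithmetic facts about \(\alpha\) are easy to confirm; the sketch below (illustrative only) lists the solutions of \(\alpha^{4}\equiv 1\bmod 125\), their multiplicative orders modulo \(125\) and their reductions modulo \(5\).

```python
# Sketch: solutions of alpha^4 = 1 mod 125 (eq. (5) with q = 125, m = 4),
# their multiplicative orders mod 125 and their reductions mod 5.
q, m = 125, 4
sols = [a for a in range(1, q) if pow(a, m, q) == 1]

def order(a, q):
    k, x = 1, a % q
    while x != 1:
        x, k = (x * a) % q, k + 1
    return k

print(sols)                                  # [1, 57, 68, 124]
print([order(a, q) for a in sols])           # [1, 4, 4, 2]
print([a % 5 for a in sols])                 # [1, 2, 3, 4]
```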
Using corollary 18 together with remark 15 we have that \(H^{0}(X,\Omega_{X})\) is decomposed into the following indecomposable modules, each one appearing with multiplicity one:
\[\begin{array}{l}U_{0,5},U_{3,11},U_{2,17},U_{1,23},U_{0,29},U_{3,35},U_{2,4 1},U_{1,47},U_{0,53},U_{3,59},\\ U_{2,65},U_{1,71},U_{0,77},U_{3,83},U_{2,89},U_{1,95},U_{0,101},U_{3,107},U_{2, 113},U_{1,119}\end{array}\]
We have that \(119\equiv 3\bmod 4\), so the module \(U_{1,119}\) cannot be lifted by itself. Also it cannot be paired with \(U_{0,5}\) since \(119+5\equiv 4\neq 1\bmod 4\). All other modules have dimension \(d\) such that \(d+119>125\). Therefore, the representation of \(H^{0}(X,\Omega_{X})\) cannot be lifted.
For dihedral groups it is more difficult to find an example that does not lift. We have the following:
The HKG-curve with lower jumps \(9,9\cdot 21=189,9\cdot 521=4689\) has genus \(11656\) and the following modules appear in its decomposition, each one appearing with multiplicity one:
\[\begin{array}{l}U_{0,1},U_{1,1},U_{0,2},U_{1,2},U_{1,3},U_{0,4},U_{1,4},U_{ 0,5},U_{1,6},U_{0,7},U_{1,7},U_{0,8},U_{1,8},U_{0,9},U_{1,9},U_{0,11},\\ U_{1,11},U_{0,12},U_{1,12},U_{0,13},U_{1,13},U_{0,14},U_{1,15},U_{0,16},U_{0,1 7},U_{1,17},U_{0,18},U_{1,18},U_{0,19},U_{1,19},\\ U_{0,21},U_{1,21},U_{0,22},U_{1,22},U_{0,23},U_{1,23},U_{1,24},U_{0,25},U_{1,26 },U_{0,27},U_{1,27},U_{0,28},U_{1,28},U_{0,29},\\ U_{1,29},U_{0,31},U_{1,31},U_{0,32},U_{1,32},U_{0,33},U_{0,34},U_{1,34},U_{1,3 5},U_{0,36},U_{0,37},U_{1,37},U_{0,38},U_{1,38},\\ U_{0,39},U_{1,39},U_{0,41},U_{1,41},U_{0,42},U_{1,42},U_{0,43},U_{1,43},U_{1,4 4},U_{0,45},U_{0,46},U_{1,46},U_{1,47},U_{0,48},\\ U_{1,48},U_{0,49},U_{1,49},U_{0,51},U_{1,51},U_{0,52},U_{1,52},U_{0,53},U_{0,54 },U_{1,54},U_{1,55},U_{0,56},U_{0,57},U_{1,57},\\ U_{0,58},U_{1,58},U_{0,59},U_{1,59},U_{0,61},U_{1,61},U_{0,62},U_{1,62},U_{0,63 },U_{1,63},U_{1,64},U_{0,65},U_{0,66},U_{1,66},\\ U_{1,67},U_{0,68},U_{1,68},U_{0,69},U_{1,69},U_{0,71},U_{1,71},U_{0,72},U_{1,72 },U_{0,73},U_{1,73},U_{0,74},U_{1,75},U_{0,76},\\ U_{0,77},U_{1,77},U_{0,78},U_{1,78},U_{0,79},U_{1,79},U_{0,81},U_{1,81},U_{0,82 },U_{1,82},U_{0,83},U_{1,83},U_{1,84},U_{0,85},\\ U_{1,86},U_{0,87},U_{1,87},U_{0,88},U_{1,88},U_{0,89},U_{1,89},U_{0,91},U_{1,91}, U_{0,92},U_{1,92},U_{0,93},U_{1,93},U_{0,94},\\ U_{1,95},U_{0,96},U_{1,96},U_{0,97},U_{0,98},U_{1,98},U_{0,99},U_{1,99},U_{0,101}, U_{1,101},U_{0,102},U_{1,102},U_{1,103},\\ U_{0,104},U_{1,104},U_{0,105},U_{1,106},U_{0,107},U_{1,107},U_{0,108},U_{1,108},U_{ 0,109},U_{1,109},U_{0,111},U_{1,111},\\ U_{0,112},U_{1,112},U_{0,113},U_{1,113},U_{0,114},U_{1,115},U_{0,116},U_{1,11 6},U_{0,117},U_{0,118},U_{1,118},U_{0,119},\\ U_{1,119},U_{0,121},U_{1,121},U_{0,122},U_{1,122},U_{0,123},U_{1,123},U_{1,124}, \end{array}\]
The above formulas were computed using Sage 9.8 [28]. In order to be completely sure that they are correct, we will also compute the values we need by hand.
\[d_{j} =\left\lfloor\frac{1}{125}\left(5^{2}\big{(}4+(4-a_{1})9\big{)}+ 5\big{(}4+(4-a_{2})189\big{)}+\big{(}4+(4-a_{3})4689\big{)}\right)\right\rfloor\] \[=\left\lfloor\frac{1}{125}\left(23560-225a_{1}-945a_{2}-4689a_{3} \right)\right\rfloor\]
\[\begin{array}{c|c|c|c|c|c|c}j&p-\mathrm{adic}&d_{j}&n_{j,0}&n_{j,1}&n_{j-1,0}-n_{j,0}&n_{j-1,1}-n_{j,1}\\ \hline 0&0,0,0&\lfloor\frac{23560}{125}\rfloor=188&93&94&-&-\\ 1&1,0,0&\lfloor\frac{23335}{125}\rfloor=186&92&93&1&1\\ 2&2,0,0&\lfloor\frac{23110}{125}\rfloor=184&91&92&1&1\\ 3&3,0,0&\lfloor\frac{22885}{125}\rfloor=183&91&91&0&1\\ 4&4,0,0&\lfloor\frac{22660}{125}\rfloor=181&90&90&1&1\\ 5&0,1,0&\lfloor\frac{22615}{125}\rfloor=180&89&90&1&0\\ 6&1,1,0&\lfloor\frac{22390}{125}\rfloor=179&89&89&0&1\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ 120&0,4,4&\lfloor\frac{1024}{125}\rfloor=8&3&4&&\\ 121&1,4,4&\lfloor\frac{799}{125}\rfloor=6&2&3&1&1\\ 122&2,4,4&\lfloor\frac{574}{125}\rfloor=4&1&2&1&1\\ 123&3,4,4&\lfloor\frac{349}{125}\rfloor=2&0&1&1&1\\ 124&4,4,4&\lfloor\frac{124}{125}\rfloor=0&0&0&0&1\\ \end{array}\]
Notice that \(U_{1,123},U_{0,123}\) can be paired with \(U_{0,1},U_{1,1}\), and then for \(U_{0,121},U_{1,121}\) there is only one module \(U_{1,3}\) left to be paired with. The lift is not possible.
### Examples of actions that lift
Our aim now is to prove the following
**Theorem 20**.: _Assume that the first lower jump equals \(b_{0}=w_{0}=1\) and each other lower jump is given by_
\[b_{\ell}=\frac{p^{2\ell+1}+1}{p+1}.\]
_Then, the local action of the dihedral group \(D_{p^{h}}\) lifts._
**Remark 21**.: Notice that in this case if \(d_{j-1}>d_{j}\) then \(d_{j-1}=d_{j}+1\).
**Lemma 22**.: _Write_
\[j-1 =(p-1)+(p-1)p+\cdots+(p-1)p^{s-1}+a_{s}p^{s}+\cdots\] \[j =(a_{s}+1)p^{s}+\cdots\]
_where \(s\) is the first power in the \(p\)-adic expansion of \(j-1\) such that the corresponding coefficient \(0\leq a_{s}<p-1\). Then_
\[B(j)-B(j-1)=p^{h-s}.\]
Proof.: By definition of the function \(B(j)\) we have that
\[B(j)-B(j-1) =b_{s-1}p^{h-s}-(p-1)(b_{0}p^{h-1}+\cdots+b_{s-2}p^{h-s+1})\] \[=\frac{p^{2s-1}+1}{p+1}p^{h-s}-(p-1)\sum_{\nu=1}^{s-1}p^{h-\nu} \frac{p^{2\nu-1}+1}{p+1}\] \[=p^{h-s}.\]
**Definition 23**.: We will call the element \(j\) of type \(s\) if all \(p\)-adic coefficients of \(j\) corresponding to \(p^{\nu}\) for \(\nu\leq s-1\) are \(p-1\), while the coefficient corresponding to \(p^{s}\) is not \(p-1\). For example \(j-1\) in lemma 22 is of type \(s\), while \(j\) is of type \(1\).
**Proposition 24**.: _Write \(\pi_{j}=\left\lfloor\frac{B(j)}{p^{h}}\right\rfloor\). Then, \(\pi_{j}=\pi_{j-1}+1\) if and only if \(j=k(p+1)\). Also \(p^{h}\nmid B(j)\) for all \(1\leq j\leq p^{h}-1\)._
Proof.: In the following table we present the change on \(B(j)\) after increasing \(j-1\) to \(j\), where \(j-1\) has type \(s\), using lemma 22.
\begin{tabular}{c|c|c|} \hline \(j\) & \(B(j)\) & \(\frac{B(j)}{p^{h}}\) \\ \hline \(0\) & \(0\) & \(0\) \\ \(1\) & \(p^{h-1}\) & \(0\) \\ \(a_{1}=2,\ldots,p-1\) & \(a_{1}p^{h-1}\) & \(0\) \\ \(p\) & \((p-1)p^{h-1}+p^{h-2}\) & \(0\) \\ \(\boxed{p+1}\) & \(p^{h}+p^{h-2}\) & \(\boxed{1}\) \\ \(p+2\) & \(p^{h}+p^{h-2}+p^{h-1}\) & \(1\) \\ \(p+a_{1},a_{1}=3,\ldots,p-1\) & \(p^{h}+p^{h-2}+(a_{1}-1)p^{h-1}\) & \(1\) \\ \(2p\) & \(p^{h}+2p^{h-2}+(p-2)p^{h-1}\) & \(1\) \\ \(2p+1\) & \(p^{h}+2p^{h-2}+(p-1)p^{h-1}\) & \(1\) \\ \(2p+2\) & \(2p^{h}+2p^{h-2}+p^{h-2}\) & \(\boxed{2}\) \\ \(2p+3\) & \(2p^{h}+2p^{h-2}+p^{h-1}\) & \(2\) \\ \(2p+a_{1}\) & \(2p^{h}+2p^{h-2}+(a_{1}-2)p^{h-1}\) & \(2\) \\ \(3p\) & \(2p^{h}+3p^{h-2}+(p-3)p^{h-1}\) & \(2\) \\ \(\cdots\) & \(\cdots\) & \(\cdots\) \\ \((p-1)p\) & \((p-2)p^{h}+(p-1)p^{h-2}+p^{h-1}\) & \(p-2\) \\ \(\cdots\) & \(\cdots\) & \(\cdots\) \\ \((p-1)+p^{2}\) & \((p-1)p^{h}+(p-1)p^{h-2}+p^{h-3}\) & \(p-1\) \\ \(\cdots\) & \(\cdots\) & \(\cdots\) \\ \((p-1)p^{h}+(p-1)p^{h-2}+p^{h-3}+(p-1)p^{h-1}\) & \(p-1\) \\ \(\cdots\) & \(\cdots\) & \(\cdots\) \\ \(\boxed{p+p^{2}}\) & \((p-1)p^{h}+(p-1)p^{h-2}+p^{h-3}+(p-1)p^{h-1}\) & \(p-1\) \\ \(\cdots\) & \(p^{h+1}+p^{h-3}\) & \(\boxed{p}\) \\ \(p^{h+1}+p^{h-1}+p^{h-3}\) & \(\boxed{p}\) \\ \end{tabular}
Indeed, if the type of \(j-1\) is \(s=1\) then \(B(j)=B(j-1)+p^{h-1}\) and it is clear that we will get one more \(p^{h}\) at \(kp+k\), for \(1\leq k\leq p\). We will prove the result in full generality by induction. Observe that if \(j-1\) is of type \(s\), and \(\pi_{j}=\pi_{j-1}+1\), then \(B(j)=B(j-1)+p^{h-s}\) and moreover
\[B(j-1) =(p-1)p^{h-1}+(p-1)p^{h-2}+\cdots+(p-1)p^{h-s}+\pi_{j-1}p^{h}+u\] \[B(j) =p^{h}+\pi_{j-1}p^{h}+u\]
for some \(u=\sum_{\nu=0}^{h-2}\gamma_{\nu}p^{\nu}\), \(0\leq\gamma_{\nu}<p\). Set \(T=\pi_{j-1}p^{h}+u\). Assume by induction that this jump occurs at \(j=k(p+1)\). Then the next jump will occur at
\(k(p+1)+(p+1)\), since
\[B(j+1) =B(j)+p^{h-1}+T\] \[B(j+2) =B(j)+2p^{h-1}+T\] \[\cdots\] \[B(j+(p-1)) =B(j)+(p-1)p^{h-1}+T\] \[B(j+p) =B(j)+(p-1)p^{h-1}+p^{h-2}+T\] \[B(j+p+1) =B(j)+p^{h}+T+p^{h-2}.\]
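Proposition 24 can also be tested numerically. The sketch below assumes that \(B(j)\) denotes the weighted digit sum \(\sum_{l=1}^{h}p^{h-l}a_{l,j}b_{l-1}\); this is an assumption on the notation on our part (the function \(B\) is introduced earlier in the paper), chosen to be consistent with the formula \(d_{j}=\left\lfloor\frac{p^{h}-1+B(p^{h}-1)-B(j)}{p^{h}}\right\rfloor\) used in the proof of theorem 25 below. For \(p=5\), \(h=3\), \(w_{0}=1\) it confirms that \(\pi_{j}=\lfloor B(j)/p^{h}\rfloor\) increases exactly at the multiples of \(p+1=6\) and that \(p^{h}\) never divides \(B(j)\) for \(1\leq j\leq p^{h}-1\).

```python
# Sketch; assumes B(j) = sum_l p^(h-l) * a_{l,j} * b_{l-1}  (assumption on
# notation, consistent with the formula for d_j used below).
p, h = 5, 3
q = p**h
b = [(p**(2 * ell + 1) + 1) // (p + 1) for ell in range(h)]    # (1, 21, 521)

def B(j):
    return sum(p**(h - 1 - l) * ((j // p**l) % p) * b[l] for l in range(h))

pi = [B(j) // q for j in range(q)]
jumps_at = [j for j in range(1, q) if pi[j] == pi[j - 1] + 1]
assert jumps_at == [k * (p + 1) for k in range(1, len(jumps_at) + 1)]
assert all(B(j) % q != 0 for j in range(1, q))
print(jumps_at[:5])           # [6, 12, 18, 24, 30]
```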
**Theorem 25**.: _Assume that \(w_{0}=1\), and the jumps of the \(C_{q}\) action are as in theorem 20. Then each direct summand \(U(\epsilon,j)\) of \(H^{0}(X,\Omega_{X})\) has a compatible pair according to criterion 4, which is given by_
\[U(\epsilon^{\prime},p^{h}-1-j)\text{ if }h\text{ is odd}\] \[U(\epsilon^{\prime},p^{h}-p-j)\text{ if }h\text{ is even}\]
Proof.: For every \(1\leq j\leq p^{h}-1\), set \(\tilde{j}=p^{h}-1-j\). For every \(1\leq j\leq p^{h}-1\) write \(B(j)=\pi_{j}p^{h}+v_{j}\), \(0\leq v_{j}<p^{h}\). Recall that
\[d_{j}=\left\lfloor\frac{p^{h}-1+B(p^{h}-1)-B(j)}{p^{h}}\right\rfloor=\left\lfloor \frac{p^{h}-1+B(\tilde{j})}{p^{h}}\right\rfloor=1+\pi_{\tilde{j}}+\left\lfloor \frac{-1+v_{\tilde{j}}}{p^{h}}\right\rfloor.\]
Since \(v_{\tilde{j}}\neq 0\), we have that \(\left\lfloor\frac{-1+v_{\tilde{j}}}{p^{h}}\right\rfloor=0\). Therefore, \(d_{j-1}>d_{j}\) if and only if \(\pi_{\tilde{j}+1}>\pi_{\tilde{j}}\), that is
\[\tilde{j}+1=k(p+1)\Rightarrow\tilde{j}=k(p+1)-1.\]
Observe now that if \(d_{j-1}=d_{j}+1\), that is, \(\tilde{j}=k(p+1)-1\) for some \(k\), then
\[j=p^{h}-1-\tilde{j}=p^{h}-k(p+1).\]
* If \(h\) is odd, then \(\tilde{j}=p^{h}-(1+p^{h})+k(p+1)=p^{h}-k^{\prime}(p+1)\) for some integer \(k^{\prime}=\frac{p^{h}+1}{p+1}-k\), since in this case \(p+1\mid p^{h}+1\). Since \(\tilde{\tilde{j}}=j\) we can assume that \(j<\tilde{j}\). Then \(d_{j}-d_{\tilde{j}}\) is the number of jumps between \(d_{j},d_{\tilde{j}}\), that is the number of elements \(x=p^{h}-l_{x}(p+1)\in\mathbb{N}\) of the form \[j=p^{h}-k(p+1)<p^{h}-l_{x}(p+1)\leq p^{h}-k^{\prime}(p+1)\] that is \(k^{\prime}\leq l_{x}<k\). This number equals \(k-k^{\prime}=2k-\frac{p^{h}+1}{p+1}\), which is odd since \(\frac{p^{h}+1}{p+1}=\sum_{\nu=0}^{h-1}(-p)^{\nu}\) is odd.
* If \(h\) is even, then \(j^{\prime}=p^{h}-p-j=p^{h}-(p+p^{h})+k(p+1)=p^{h}-k^{\prime}(p+1)\) for some integer \(k^{\prime}\), since in this case \(p+1\mid p^{h}+p\). Again since \(j^{\prime\prime}=j\) we can assume that \(j<j^{\prime}\). Again \(d_{j}-d_{j^{\prime}}\) is the number of jumps between \(d_{j},d_{j^{\prime}}\), which equals \(2k-\frac{p^{h}+p}{p+1}\), which is odd since \(\frac{p^{h}+p}{p+1}=p\,\frac{p^{h-1}+1}{p+1}\) is odd.
Observe that we have proved in both cases that \(d_{j}\) is odd if and only if \(d_{\tilde{j}}\) (resp. \(d_{j}^{\prime}\)) is even. The change of \(\epsilon\) to \(\epsilon^{\prime}\) follows by lemma 16. |
2304.07044 | A family of Lempert domains | In \cite{G-Z} G.~Ghosh and W. Zwonek introduced a new class of domains
$\bL_n$, $n\ge1$, which are 2-proper holomorphic images of the Cartan domains
of type four. This family contains biholomorphic images of the symmetrized
bidisc and the tetrablock. It is well-known, that symmetrized bidisc and
tetrablock are Lempert type domains. In our paper we show that the whole family
of domains $\bL_n$ are Lempert domains. | Armen Edigarian | 2023-04-14T10:36:39Z | http://arxiv.org/abs/2304.07044v1 | # A family of Lempert domains
###### Abstract.
In [11] G. Ghosh and W. Zwonek introduced a new class of domains \(\mathbb{L}_{n}\), \(n\geq 1\), which are 2-proper holomorphic images of the Cartan domains of type four. This family contains biholomorphic images of the symmetrized bidisc and the tetrablock. It is well known that the symmetrized bidisc and the tetrablock are Lempert-type domains. In our paper we show that all domains of the family \(\mathbb{L}_{n}\) are Lempert domains.
Key words and phrases: the classical Cartan domain of type four, automorphisms of the Cartan domain, Lempert theorem. 2020 Mathematics Subject Classification: Primary: 32F45; Secondary: 32M15, 32Q02.
## 1. Introduction
Let \(X\) be a complex manifold. In many applications, it is natural to consider on \(X\) a distance \(d_{X}\) related to the complex structure and consider \((X,d_{X})\) as a metric space. It turns out that there are many natural ways to introduce these distances. For a good reference of this subject, see e.g. [13]. The largest one is the _Lempert function_, related to the Kobayashi pseudodistance. We denote by \(\mathbb{D}\) the unit disc and by \(\mathbb{T}\) the unit circle in the complex plane \(\mathbb{C}\). For simplicity, we restrict ourselves to domains in the space \(\mathbb{C}^{n}\). Let \(D\subset\mathbb{C}^{n}\) be a domain and let \(z,w\in D\) be any points. The Lempert function is defined as
\[\ell_{D}(z,w)=\inf\{\rho(0,\sigma):f:\mathbb{D}\to D\ \text{holomorphic},f(0)=z,f( \sigma)=w\},\]
where \(\rho\) is the Poincare distance in the disc (see [13], Chapter III).
On the other hand, the smallest one is the _Caratheodory pseudodistance_ (see [13], Chapter II). We put
\[c_{D}(z,w)=\sup\{\rho(f(z),f(w)):f:D\to\mathbb{D}\ \text{holomorphic}\},\quad z,w\in D.\]
It is well-known that for the unit ball, products of the unit balls, and some modifications, these two functions agree, i.e., there is only one natural way to introduce a distance on these domains. More generally, it is true on classical Cartan domains, i.e., bounded symmetric homogeneous domains in \(\mathbb{C}^{n}\) (see [12]). The proof uses the homogeneity of these domains and Schwarz-lemma-type results at the origin. Domains on which we have this property are sometimes called _Lempert domains_.
In 1981 L. Lempert [14] proved a fundamental result that for a convex domain \(D\subset\mathbb{C}^{n}\) there is an equality
\[c_{D}=\ell_{D}. \tag{1}\]
Later it was extended by L. Lempert to bounded strongly linearly convex pseudoconvex domains (see [15]).
For more than 20 years it was an open question, whether there are other types of Lempert domains, besides biholomorphic images of convex ones and some simple modifications. In a series of papers [2, 3, 4] J. Agler and N.J. Young introduced and analysed the symmetrized bidisc, which was the first known bounded hyperconvex domain for which we have the equality of the Caratheodory distance and Lempert function but which cannot be exhausted by domains biholomorphic to convex domains. The symmetrized bidisc \(\mathbb{G}_{2}\) is the image of the bidisc \(\mathbb{D}^{2}\) under the symmetrization mapping
\[\mathbb{D}^{2}\ni(\lambda_{1},\lambda_{2})\to(\lambda_{1}+\lambda_{2},\lambda_ {1}\lambda_{2})\in\mathbb{C}^{2}.\]
In [5] the authors introduced another domain, the tetrablock in \(\mathbb{C}^{3}\), which has the same properties (for the definition and the properties of the tetrablock see below).
Recently G. Ghosh and W. Zwonek (see [11]) proposed a new class of domains, which includes symmetrized bidisc and tetrablock. Our main aim is to show that for domains in this class equality (1) holds, so they are Lempert domains.
Let \(\langle z,w\rangle=\sum_{j=1}^{n}z_{j}\bar{w}_{j}\) for \(z,w\in\mathbb{C}^{n}\) be the Hermitian inner product in \(\mathbb{C}^{n}\) and let \(\|z\|^{2}=\langle z,z\rangle\) be the corresponding norm. We define another norm in \(\mathbb{C}^{n}\), as follows
\[p(z)^{2}=\|z\|^{2}+\sqrt{\|z\|^{4}-|\langle z,\bar{z}\rangle|^{2}},\quad z\in \mathbb{C}^{n}.\]
The unit ball in this norm we denote by \(L_{n}=\{z\in\mathbb{C}^{n}:p(z)<1\}\) and call it the Cartan domain of type four (sometimes it is called the Lie ball of dimension \(n\geq 1\)). It is a bounded symmetric homogeneous domain (see [6], [7]).
Following [11] we define a 2-proper holomorphic mapping
\[\Lambda_{n}:\mathbb{C}^{n}\ni(z_{1},z_{2},\ldots,z_{n})\mapsto(z_{1}^{2},z_{2},\ldots,z_{n})\in\mathbb{C}^{n}\]
and put \(\mathbb{L}_{n}=\Lambda_{n}(L_{n})\).
The main result of the paper is the following.
**Theorem 1**.: _For any \(z,w\in\mathbb{L}_{n}\) we have_
\[c_{\mathbb{L}_{n}}(z,w)=\ell_{\mathbb{L}_{n}}(z,w).\]
As we mentioned above, Theorem 1 was known for \(n=2\) (the symmetrized bidisc) and for \(n=3\) (the tetrablock); see also [E-K-Z]. So, for any \(n\geq 4\) it provides us with examples of non-trivial (i.e., not products of domains biholomorphic to convex ones) non-convex domains for which the equality (1) holds. Moreover, our proof shows that these domains, being modifications of homogeneous symmetric domains, are, in a sense, quite natural in relation to the invariant distances and metrics.
## 2. The group of automorphisms
In this part we describe automorphisms of the Cartan domain of type four in \(\mathbb{C}^{n}\) and give some useful properties. For more information see [10], [12], [16].
For \(z\in\mathbb{C}^{n}\) we put
\[a(z)=\sqrt{\frac{\|z\|^{2}+|\langle z,\bar{z}\rangle|}{2}}\quad\text{ and }\quad b(z)=\sqrt{\frac{\|z\|^{2}-|\langle z,\bar{z}\rangle|}{2}}.\]
In [1], page 278, the numbers \(a(z)\) and \(b(z)\) are called the modules related to the Cartan domain of type four. Note that \(p(z)=a(z)+b(z)\) for any \(z\in\mathbb{C}^{n}\). For a point \(z\in\mathbb{C}^{n}\) we have \(p(z)<1\) if and only if
\[\|z\|<1\quad\text{ and }\quad 2\|z\|^{2}<1+|\langle z,\bar{z}\rangle|^{2}. \tag{2}\]
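The quantities \(a(z)\), \(b(z)\) and the norm \(p\) are easy to evaluate numerically; the following sketch (illustrative only) computes them with NumPy and verifies the identity \(p(z)=a(z)+b(z)\) together with the equivalence of \(p(z)<1\) and the criterion (2) on random points.

```python
# Sketch: numerical evaluation of a(z), b(z), p(z) and of the criterion (2).
import numpy as np

def a_b_p(z):
    n2 = float(np.linalg.norm(z)**2)
    c = abs(np.sum(z * z))                    # |<z, z-bar>| = |sum_j z_j^2|
    a = np.sqrt((n2 + c) / 2)
    b = np.sqrt(max((n2 - c) / 2, 0.0))
    p = np.sqrt(n2 + np.sqrt(max(n2**2 - c**2, 0.0)))
    return a, b, p

def criterion_2(z):
    n2 = float(np.linalg.norm(z)**2)
    c = abs(np.sum(z * z))
    return n2 < 1 and 2 * n2 < 1 + c**2

rng = np.random.default_rng(0)
for _ in range(1000):
    z = (rng.normal(size=4) + 1j * rng.normal(size=4)) / 3
    a, b, pz = a_b_p(z)
    assert abs(pz - (a + b)) < 1e-10          # p = a + b
    assert criterion_2(z) == (pz < 1)         # (2) is equivalent to p(z) < 1
```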
From this inequality we have.
**Lemma 2**.: _Let \(z=(z_{1},z^{\prime})\in\mathbb{C}\times\mathbb{C}^{n-1}\). Then \(z\in\mathbb{L}_{n}\) if and only if_
\[|z_{1}|+\|z^{\prime}\|^{2}<1\quad\text{ and }\quad 2|z_{1}|+2\|z^{\prime}\|^{2 }<1+|z_{1}+\langle z^{\prime},\bar{z}^{\prime}\rangle|^{2}. \tag{3}\]
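Lemma 2 can also be checked directly against the definition \(\mathbb{L}_{n}=\Lambda_{n}(L_{n})\); the sketch below (illustrative only) verifies on random points \(w\) that \(w\in L_{n}\) holds exactly when \(\Lambda_{n}(w)\) satisfies (3).

```python
# Sketch: numerical check of Lemma 2 against the definition of bbL_n.
import numpy as np

def in_Lie_ball(w):                            # criterion (2) for L_n
    n2 = float(np.linalg.norm(w)**2)
    c = abs(np.sum(w * w))
    return n2 < 1 and 2 * n2 < 1 + c**2

def satisfies_eq3(z):                          # the two inequalities of Lemma 2
    z1, zp = z[0], z[1:]
    t = float(np.linalg.norm(zp)**2)
    return (abs(z1) + t < 1 and
            2 * abs(z1) + 2 * t < 1 + abs(z1 + np.sum(zp * zp))**2)

rng = np.random.default_rng(1)
for _ in range(2000):
    w = (rng.normal(size=4) + 1j * rng.normal(size=4)) / 3
    z = np.concatenate(([w[0]**2], w[1:]))     # z = Lambda_n(w)
    assert in_Lie_ball(w) == satisfies_eq3(z)
```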
Note that if \((z_{1},\dots,z_{n-1})\in L_{n-1}\) then \((z_{1},\dots,z_{n-1},0)\in L_{n}\). Let us show the following simple inequality.
**Lemma 3**.: _Let \(z=(z_{1},z^{\prime})\in\mathbb{C}\times\mathbb{C}^{n-1}\). Assume that \(\|z\|<1\). Then_
\[2\|z^{\prime}\|^{2}-|\langle z^{\prime},\bar{z}^{\prime}\rangle|^{2}\leq 2\|z \|^{2}-|\langle z,\bar{z}\rangle|^{2}.\]
_Moreover, the equality holds if and only if \(z_{1}=0\)._
Proof.: We have to show \(|\langle z,\bar{z}\rangle|^{2}-|\langle z^{\prime},\bar{z}^{\prime}\rangle|^{2 }\leq 2|z_{1}|^{2}\). This is equivalent with \(|z_{1}|^{4}+2\Re\big{(}\bar{z}_{1}^{2}\langle z^{\prime},\bar{z}^{\prime} \rangle\big{)}\leq 2|z_{1}|^{2}\). It suffices to show \(|z_{1}|^{4}+2|z_{1}|^{2}|\langle z^{\prime},\bar{z}^{\prime}\rangle|\leq 2|z_{1}|^{2}\). But we have \(|z_{1}|^{2}+2|\langle z^{\prime},\bar{z}^{\prime}\rangle|\leq|z_{1}|^{2}+2\|z ^{\prime}\|^{2}<2\).
Assume that \(z=(z_{1},\ldots,z_{n})\in L_{n}\). Then from the above Lemma we have \((z_{1},\ldots,z_{n-1})\in L_{n-1}\).
In all our considerations, the group of real special orthogonal matrices \(\mathrm{SO}_{n}(\mathbb{R})\), i.e., real matrices \(A\) such that \(A^{T}A=AA^{T}=\mathrm{I}_{n}\) and \(\det A=1\), is essential. This follows from the properties: \(\langle Az,Aw\rangle=\langle z,w\rangle\) and \(\langle Az,\overline{Aw}\rangle=\langle z,\bar{w}\rangle\) for any \(z,w\in\mathbb{C}^{n}\) and any \(A\in\mathrm{SO}_{n}(\mathbb{R})\). In particular, \(p(Az)=p(z)\) for any \(z\in\mathbb{C}^{n}\) and any \(A\in\mathrm{SO}_{n}(\mathbb{R})\).
Let \(z\in\mathbb{C}^{n}\). Then there exists an \(\eta\in\mathbb{T}\) such that \(\eta^{2}\langle z,\bar{z}\rangle=|\langle z,\bar{z}\rangle|\). Note that \(\eta^{2}\langle z,\bar{z}\rangle=\langle\eta z,\overline{\eta z}\rangle\). If \(\eta z=u+iv\), where \(u,v\in\mathbb{R}^{n}\), then the vectors \(u\) and \(v\) are orthogonal in \(\mathbb{R}^{n}\) and \(\|u\|\geq\|v\|\). Easy calculations show that \(\|u\|=a(z)\) and \(\|v\|=b(z)\). So, there exists a matrix \(A\in\mathrm{SO}_{n}(\mathbb{R})\) (a 'rotation' of \(\mathbb{R}^{n}\)) such that \(Au=(a(z),0,\ldots,0)\) and \(Av=(0,b(z),0,\ldots,0)\). We get \(\eta Az=A(\eta z)=(a(z),b(z)i,0,\ldots,0)\). We formulate the above considerations as a separate result.
**Lemma 4**.: _For any \(z\in\mathbb{C}^{n}\), \(n\geq 2\), there exists a matrix \(A\in\mathrm{SO}_{n}(\mathbb{R})\) and a number \(\eta\in\mathbb{T}\) such that_
\[\eta Az=A(\eta z)=(a(z),b(z)i,0,\ldots,0).\]
Fixing the first \(k\) coordinates and "rotating" the last \(n-k\) coordinates we have
**Lemma 5**.: _For any \(z\in\mathbb{C}^{n}\), \(n\geq 3\), and any \(k\in\mathbb{N}\) with \(k\leq n\), there exists a matrix \(A\in\mathrm{SO}_{n}(\mathbb{R})\) and a number \(\eta\in\mathbb{T}\) such that \(Aw\in\{(w_{1},\ldots,w_{k})\}\times\mathbb{C}^{n-k}\) for any \(w=(w_{1},\ldots,w_{n})\in\mathbb{C}^{n}\) and_
\[Az=(z_{1},\ldots,z_{k},\eta a(z^{\prime}),\eta b(z^{\prime})i,0,\ldots,0),\]
_where \(z^{\prime}=(z_{k+1},\ldots,z_{n})\in\mathbb{C}^{n-k}\)._
Note that for any \(a,b\in\mathbb{R}\) we have \((a,bi,0,\ldots,0)\in L_{n}\) if and only if \(|a|+|b|<1\).
Recall the following well-known result.
**Theorem 6**.: _Let \(\Phi:L_{n}\to L_{n}\) be a biholomorphic mapping such that \(\Phi(0)=0\). Then there exist a matrix \(A\in\mathrm{SO}_{n}(\mathbb{R})\) and a number \(\eta\in\mathbb{T}\) such that \(\Phi(z)=\eta Az\) for any \(z\in L_{n}\)._
Automorphisms of \(L_{n}\) described in the above Theorem are called linear automorphisms. Now we are going to describe all automorphisms of the irreducible classical Cartan domain of type four (see [12], [10],
[16]). We define first a group of matrices.
\[G(n)=\Big{\{}g=\begin{bmatrix}A&B\\ C&D\end{bmatrix}:A\in\operatorname{GL}(n,\mathbb{R}),B\in M(n;2;\mathbb{R}),\\ C\in M(2,n;\mathbb{R}),D\in\operatorname{GL}(2,\mathbb{R}),\det D>0,\\ g^{t}\begin{bmatrix}\operatorname{I}&0\\ 0&-\operatorname{I}_{2}\end{bmatrix}g=\begin{bmatrix}\operatorname{I}&0\\ 0&-\operatorname{I}_{2}\end{bmatrix}\Big{\}}. \tag{4}\]
For any \(g\in G\) we define the following \(\mathbb{C}^{n}\)-valued holomorphic function
\[\Psi_{g}(z)=\frac{Az+BW(z)}{(1\ i)(Cz+DW(z))},\]
where \(W(z)=\begin{bmatrix}\frac{1}{2}(\langle z,\bar{z}\rangle+1)\\ \frac{1}{2}(\langle z,\bar{z}\rangle-1)\end{bmatrix}\). One can show (see [10]) that for any \(g\in G\) we have
\[(1\ i)(Cz+DW(z))\neq 0\quad\text{ for all }z\in\overline{L_{n}}.\]
Thus, \(\Psi_{g}\) is well-defined on \(\overline{L_{n}}\). Note that we have a homomorphism of groups \(\kappa:G(n)\to G(n+1)\) defined by
\[\kappa(g)=\begin{bmatrix}1&0&0\\ 0&A&B\\ 0&C&D\end{bmatrix},\]
where \(g=\begin{bmatrix}A&B\\ C&D\end{bmatrix}\). As a simple corollary we get.
**Corollary 7**.: _For any \(\Psi\in\operatorname{Aut}(L_{n})\) there exists a \(\tilde{\Psi}\in\operatorname{Aut}(L_{n+1})\) such that \(\tilde{\Psi}(0,z)=(0,\Psi(z))\) for any \(z\in L_{n}\)._
Proof.: Assume that \(\Psi=\Psi_{g}\) for some \(g\in G(n)\). Put \(\tilde{\Psi}=\Psi_{\kappa(g)}\).
_Remark 8_.: Changing the coordinates we may take \(\tilde{\Psi}(z,0)=(\Psi(z),0)\).
In [11] the authors gave a description of automorphisms of the domain \(\mathbb{L}_{n}\). Using this description and results above we get.
**Theorem 9**.: _Let \(\Phi\in\operatorname{Aut}(\mathbb{L}_{n})\). Then there exists a \(g\in G(n-1)\) such that_
\[\Lambda_{n}\circ\Psi_{\kappa(g)}=\Phi\circ\Lambda_{n}.\]
_Moreover, for any \(z\in L_{n-1}\) we have \(\Phi(0,z)=(0,\Psi_{g}(z))\)._
From Corollary 7 we get (cf. [11], Theorem 5.3).
**Theorem 10**.: _Let \(\Phi\in\operatorname{Aut}(\mathbb{L}_{n})\). Then there exists a \(\tilde{\Phi}\in\operatorname{Aut}(\mathbb{L}_{n+1})\) such that \(\tilde{\Phi}(z,0)=(\Phi(z),0)\) for any \(z\in\mathbb{L}_{n}\)._
From the definition and similar property for \(L_{n}\) we have.
**Lemma 11**.: \(\Pi:\mathbb{L}_{n}\ni(z_{1},\ldots,z_{n})\to(z_{1},\ldots,z_{n-1})\in\mathbb{L}_{ n-1}\) _and \(Q:\mathbb{L}_{n-1}\ni(z_{1},\ldots,z_{n-1})\to(z_{1},\ldots,z_{n-1},0)\in\mathbb{L }_{n}\) are well-defined holomorphic mappings such that \(\Pi\circ Q=\operatorname{id}_{\mathbb{L}_{n-1}}\)._
**Corollary 12**.: _Let \(n\geq 3\). We have holomorphic mappings \(Q:\mathbb{L}_{3}\to\mathbb{L}_{n}\) and \(\Pi:\mathbb{L}_{n}\to\mathbb{L}_{3}\) such that \(\Pi\circ Q=\operatorname{id}_{\mathbb{L}_{3}}\). In particular, for any \(z,w\in Q(\mathbb{L}_{3})\) we have_
\[c_{\mathbb{L}_{n}}(z;w)=\ell_{\mathbb{L}_{n}}(z;w).\]
## 3. The properties of the tetrablock and of the domain \(\mathbb{L}_{3}\)
In our paper the properties of the tetrablock and of the domain \(\mathbb{L}_{3}\) are crucial. First recall the following definition (see [5], Definition 1.1).
**Definition 13**.: The _tetrablock_ is the domain
\[\mathbb{E}=\{z\in\mathbb{C}^{3}:1-\lambda_{1}z_{1}-\lambda_{2}z_{2}+\lambda_{ 1}\lambda_{2}z_{3}\neq 0\text{ whenever }\lambda_{1},\lambda_{2}\in\overline{\mathbb{D}}\}.\]
In [5] the authors gave several equivalent characterizations of the tetrablock. One of them is the following (see [5], Theorem 2.2).
**Proposition 14**.: _For \(z\in\mathbb{C}^{3}\) we have: \(z\in\mathbb{E}\) if and only if_
\[|z_{1}|^{2}+|z_{2}|^{2}+2|z_{1}z_{2}-z_{3}|<1+|z_{3}|^{2}\text{ and }|z_{3}|<1.\]
From this Proposition, definition of the Cartan domain, and the domain \(\mathbb{L}_{n}\) we have the following biholomorphism (see [11], Corollary 3.5).
**Lemma 15**.: _Put_
\[\Psi(z_{1},z_{2},z_{3})=(z_{2}+iz_{3},z_{2}-iz_{3},z_{1}+z_{2}^{2}+z_{3}^{2}).\]
_Then \(\Psi:\mathbb{L}_{3}\to\mathbb{E}\) is a biholomorphic mapping such that \(\Psi(r,0,0)=(0,0,r)\) for any \(r\in[0,1)\)._
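The mapping \(\Psi\) can be tested numerically as well; the sketch below (illustrative only) draws random points of \(\mathbb{L}_{3}\) using Lemma 2 and checks that their images satisfy the tetrablock inequalities of Proposition 14, and that \(\Psi(r,0,0)=(0,0,r)\).

```python
# Sketch: numerical check that Psi maps points of bbL_3 into the tetrablock E.
import numpy as np

def in_bbL3(z):                                # Lemma 2 with n = 3
    z1, zp = z[0], z[1:]
    t = float(np.linalg.norm(zp)**2)
    return (abs(z1) + t < 1 and
            2 * abs(z1) + 2 * t < 1 + abs(z1 + np.sum(zp * zp))**2)

def in_E(x):                                   # Proposition 14
    return (abs(x[0])**2 + abs(x[1])**2 + 2 * abs(x[0] * x[1] - x[2])
            < 1 + abs(x[2])**2) and abs(x[2]) < 1

def Psi(z):
    z1, z2, z3 = z
    return np.array([z2 + 1j * z3, z2 - 1j * z3, z1 + z2**2 + z3**2])

rng = np.random.default_rng(2)
for _ in range(3000):
    z = (rng.normal(size=3) + 1j * rng.normal(size=3)) / 3
    if in_bbL3(z):
        assert in_E(Psi(z))

print(Psi(np.array([0.7, 0, 0])))              # approximately (0, 0, 0.7)
```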
Recall the following result (see [17], Theorem 5.2).
**Theorem 16**.: _Let \(z\in\mathbb{E}\) be any point. Then there exist an automorphism \(\Phi\) of \(\mathbb{E}\) and a number \(r\in[0,1)\) such that \(\Phi(z)=(r,0,0)\)._
From Theorem 16 and Lemma 15 we get.
**Corollary 17**.: _For any \(z\in\mathbb{L}_{3}\) there exists an automorphism \(\Phi\) of \(\mathbb{L}_{3}\) such that \(\Phi(z)=(r,0,0)\), where \(r\in[0,1)\)._
From this we get the following important result.
**Corollary 18**.: _For any \(z\in\mathbb{L}_{n}\) there exists an automorphism \(\Phi\) of \(\mathbb{L}_{n}\) such that \(\Phi(z)=(\rho,0,\ldots,0)\), where \(\rho\in[0;1)\)._
All these analyses and remarks imply the following crucial result.
**Theorem 19**.: _For any \(z,w\in\mathbb{L}_{n}\) there exists an automorphism \(\Phi\) of \(\mathbb{L}_{n}\) such that \(\Phi(z)=(\rho,0,\ldots,0)\) and \(\Phi(w)\in\mathbb{L}_{3}\times\{0\}_{n-3}\)._
Proof of Theorem 1.: The proof follows from Theorem 19 and Corollary 12.
## 4. Invariant distances and metrics in \(\mathbb{L}_{n}\)
Let \(D\subset\mathbb{C}^{n}\) be a domain. For a point \(z\in D\) and a vector \(X\in\mathbb{C}^{n}\), we recall that the infinitesimal Kobayashi metric at \(z\) in the direction \(X\) is defined to be
\[\kappa_{D}(z;X)=\inf\{\alpha>0:f:\mathbb{D}\to D\text{ holomorphic},\\ f(0)=z,f^{\prime}(0)=\frac{1}{\alpha}X\}.\]
We show the following.
**Theorem 20**.: _Let \(n\geq 3\)._
\[\kappa_{\mathbb{L}_{n}}(0;X)=|X_{1}|+p(X^{\prime}),\]
_where \(X=(X_{1},X^{\prime})\in\mathbb{C}\times\mathbb{C}^{n-1}\)._
Proof.: We know that (see [17], Theorem 2.1)
\[\kappa_{\mathbb{E}}(0;X)=\max\{|X_{1}|,|X_{2}|\}+|X_{3}|\quad\text{ for any }X\in\mathbb{C}^{3}.\]
From the biholomorphicity between \(\mathbb{E}\) and \(\mathbb{L}_{3}\) we infer
\[\kappa_{\mathbb{L}_{3}}(0;(X_{1},X_{2},X_{3}))=|X_{1}|+\max\{|X_{2}+iX_{3}|,|X _{2}-iX_{3}|\}.\]
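Indeed, \(\Psi(0)=0\), and differentiating the map from Lemma 15 at the origin gives
\[\Psi^{\prime}(0)X=(X_{2}+iX_{3},\,X_{2}-iX_{3},\,X_{1}),\]
so the formula follows from the invariance of the Kobayashi metric under biholomorphisms, \(\kappa_{\mathbb{L}_{3}}(0;X)=\kappa_{\mathbb{E}}(\Psi(0);\Psi^{\prime}(0)X)\).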
For the general case, there exists an \(\eta\in\mathbb{T}\) and a matrix \(A\in\operatorname{SO}_{n}(\mathbb{R})\) such that
\[AX=(X_{1},\eta a(X^{\prime}),\eta ib(X^{\prime}),0,\ldots,0).\]
Take an automorphism \(\Phi\) of \(\mathbb{L}_{n}\) generated by \(A\). Then \(\Phi(0)=0\) and
\[\Phi^{\prime}(0)X=(X_{1},\eta a(X^{\prime}),i\eta b(X^{\prime}),0,\ldots,0).\]
Then
\[\kappa_{\mathbb{L}_{n}}(0;X)=\kappa_{\mathbb{L}_{n}}(0;\Phi^{\prime}(0)X)=\kappa_{\mathbb{L}_{3}}(0;(X_{1},\eta a(X^{\prime}),\eta ib(X^{\prime}))),\]
and, therefore,
\[\kappa_{\mathbb{L}_{n}}(0;X)=|X_{1}|+a(X^{\prime})+b(X^{\prime})=|X_{1}|+p(X^ {\prime}).\]
Recall also the following result (see [5], Corollary 3.7).
**Theorem 21**.: _For any \(z=(z_{1},z_{2},z_{3})\in\mathbb{E}\) with \(|z_{1}|\leq|z_{2}|\), we have_
\[c_{\mathbb{E}}(0;z)=\tanh^{-1}\frac{|z_{2}-\bar{z}_{1}z_{3}|+|z_{1}z_{2}-z_{3}|} {1-|z_{1}|^{2}}.\]
As a corollary of this we get the following.
**Corollary 22**.: _Let \(z=(z_{1},z^{\prime})\in\mathbb{L}_{n}\), where \(z_{1}\in\mathbb{C}\) and \(z^{\prime}\in\mathbb{C}^{n-1}\). If \(z^{\prime}=0\) then \(c_{\mathbb{L}_{n}}(0;z)=\tanh^{-1}(|z_{1}|)\). If \(z^{\prime}\neq 0\) then we have_
\[c_{\mathbb{L}_{n}}(0;z)=\tanh^{-1}\Big{(}p(z^{\prime})\Big{|}1-\frac{z_{1} \langle\bar{z}^{\prime},z^{\prime}\rangle}{p^{2}(z^{\prime})-|\langle z^{ \prime},\bar{z}^{\prime}\rangle|^{2}}\Big{|}+\frac{|z_{1}|p(z^{\prime})^{2}}{p ^{2}(z^{\prime})-|\langle z^{\prime},\bar{z}^{\prime}\rangle|^{2}}\Big{)}.\]
Proof.: In a similar way as above, if \(|z_{2}+iz_{3}|\leq|z_{2}-iz_{3}|\) then
\[c_{\mathbb{L}_{3}}(0,(z_{1},z_{2},z_{3}))=\tanh^{-1}\Big{(}|z_{2 }-iz_{3}-\frac{z_{1}\overline{(z_{2}+iz_{3})}}{1-|z_{2}+iz_{3}|^{2}}\big{|}\\ +\frac{|z_{1}|}{1-|z_{2}+iz_{3}|^{2}}\Big{)}.\]
If \(z\in\mathbb{L}_{n}\), then using appropriate automorphism, we may assume that \(z=(z_{1},\bar{\eta}a(z^{\prime}),\bar{\eta}b(z^{\prime})i,\ldots,0)\) and, therefore,
\[c_{\mathbb{L}_{n}}(0;z)=\tanh^{-1}\Big{(}|\bar{\eta}p(z^{\prime}) -\frac{\eta z_{1}(a(z^{\prime})-b(z^{\prime}))}{1-|a(z^{\prime})-b(z^{\prime}) |^{2}}\big{|}\\ +\frac{|z_{1}|}{1-|a(z^{\prime})-b(z^{\prime})|^{2}}\Big{)},\]
where \(\eta\in\mathbb{T}\) is such that \(\eta^{2}\langle z^{\prime},\bar{z}^{\prime}\rangle=|\langle z^{\prime},\bar{z} ^{\prime}\rangle|\). From this we obtain the formula.
_Remark 23_.: In a forthcoming paper, using similar technique, we study complex geodesics in \(\mathbb{L}_{n}\) (see [8]).
|
2305.10611 | ACRoBat: Optimizing Auto-batching of Dynamic Deep Learning at Compile
Time | Dynamic control flow is an important technique often used to design
expressive and efficient deep learning computations for applications such as
text parsing, machine translation, exiting early out of deep models and so on.
The control flow divergence resulting from dynamic control flow makes batching,
an important optimization enabling high throughput and hardware utilization,
difficult to perform manually. In this paper, we present ACRoBat, a framework
that enables efficient automatic batching for dynamic deep learning
computations by performing hybrid static+dynamic compiler optimizations and
end-to-end tensor code generation. ACRoBat performs up to 8.5X better than
DyNet, a state-of-the-art framework for automatic batching, on an Nvidia
GeForce GPU. | Pratik Fegade, Tianqi Chen, Phillip B. Gibbons, Todd C. Mowry | 2023-05-17T23:43:55Z | http://arxiv.org/abs/2305.10611v2 | # ACRoBat: Optimizing Auto-batching of Dynamic Deep Learning at Compile Time
###### Abstract
Dynamic control flow is an important technique often used to design expressive and efficient deep learning computations for applications such as text parsing, machine translation, exiting early out of deep models and so on. However, the resulting control flow divergence makes batching, an important performance optimization, difficult to perform manually. In this paper, we present ACRoBat, a framework that enables efficient automatic batching for dynamic deep learning computations by performing hybrid static+dynamic compiler optimizations and end-to-end tensor code generation. ACRoBat performs up to 8.5\(\times\) better than DyNet, a state-of-the-art framework for automatic batching, on an Nvidia GeForce RTX 3070 GPU.
## 1 Introduction
Deep Learning (DL) has come to play an increasing role in a wide range of applications in the recent years. As their applications have become more and more complex, DL models themselves have increased in size and complexity. For inference serving as well as for training, these models place extreme demands on DL systems and hardware today.
An important source of complexity in DL models is the use of dynamic control flow as part of model execution. Unlike a static feed-forward model, the execution of a model with dynamic control flow, or a _dynamic model_ can differ across different inputs to the model. This property has been used effectively to (1) model structured data such as parse trees (Socher et al., 2013, 2012) and images (Shuai et al., 2015), (2) perform better quality machine translations and text parsing by employing beam search (Wiseman and Rush, 2016; Koehn, 2004; Buckman et al., 2016), and (3) exit early out of convolutional (Kaya and Dumitras, 2018; Teerapittayanon et al., 2017) and transformer (Xin et al., 2020; Elbayad et al., 2019) models for reduced inference latency. The adaptability afforded by dynamic control flow is thus useful in a variety of situations.
Batching is an important optimization that improves the throughput and hardware utilization during training and inference of a DL model. While straightforward for static DL computations, the presence of control flow divergence in dynamic computations makes manual batching difficult and error-prone. Thus, there has been significant past effort on performing automatic batching, or _auto-batching_, for dynamic DL computations. In order to handle the lack of execution knowledge of a dynamic computation during compilation, past works usually either (1) heavily rely on dynamic analyses, enabling them to handle general dynamic control flow (Neubig et al., 2017; Looks et al., 2017), or (2) are specialized for specific control flow patterns or models, thus relying more on static analyses (Xu et al., 2018; Fegade et al., 2021). The former frameworks often _incur high execution overheads_ caused by dynamic analysis, while the latter ones _lack the generality_ to support the wide range of existing and future control flow patterns in DL computations.
Further, past work often _heavily relies on vendor libraries_ such as cuDNN (Chetlur et al., 2014) and oneAPI (Intel, 2022). However, as implementing vendor libraries is an intensive process, they usually only implement commonly used, standard tensor operators. Further, as these kernels are optimized in isolation, without any contextual information about the larger application they are used in, important optimizations such as kernel fusion can no longer be performed.
In order to overcome these limitations of past work, we propose ACRoBat1, an auto-batching framework for dynamic DL computations which relies on novel _hybrid static+dynamic optimizations_ and _end-to-end tensor kernel compilation_. Our main insight in designing ACRoBat is that despite the lack of perfect execution knowledge during compilation for dynamic models, the compiler can often perform static analysis and optimizations to aid the dynamic analysis. This reduces execution overheads while effectively exploiting parallelism in the input computation. ACRoBat relies on traditional compiler techniques such
as context-sensitivity (Aho et al., 2007) and taint analysis as well as on minimal user annotations to enable such static analysis. Further, ACRoBat's end-to-end tensor kernel generation enables it to automatically generate kernels optimized and specialized to the larger computation again using static analysis to identify and exploit data reuse opportunities (as we see in SS5). ACRoBat's generality allows one to express a wide variety of control flow patterns, ranging from simple conditional statements to complex recursive computations using a simple high-level language. Table 1 provides a qualitative comparison of ACRoBat with related work.
In short, this paper makes the following contributions:
1. We survey and characterize the dynamic control flow found in different DL computations.
2. Our design employs novel hybrid static+dynamic optimizations and automated end-to-end kernel code generation to reduce execution overheads and to generate efficient tensor kernels that effectively exploit data reuse opportunities. In developing these optimizations, we heavily rely on traditional compilation techniques.
3. We prototype ACRoBat, evaluate it against state-of-the-art deep learning frameworks (Xu et al., 2018; Neubig et al., 2017a; Paszke et al., 2019) and report significant performance gains on Nvidia GPUs.
## 2 Background
### Dynamic Control Flow in DL computations
ACRoBat's primary objective is to exploit parallelism across tensor operators in the batched execution of a dynamic DL computation. Such a computation may exhibit (1) _Batch Parallelism_ that exists across different input instances in the mini-batch, and/or (2) _Instance Parallelism_ which refers to the control flow parallelism as defined in Table 2. Beyond parallelism, Table 2 summarizes other properties of control flow dynamism found in common DL models (more details can be found in §A of the appendix).
### Dynamic Batching
ACRoBat builds upon dynamic batching (Looks et al., 2017; Neubig et al., 2017b), a past technique to perform auto-batching in the presence of dynamic control flow. Given a mini-batch of input instances, dynamic batching involves lazily executing the model computation for each input instance while building dataflow graphs (DFGs) of tensor operators for each instance in the background. The execution of these DFGs is triggered when the value of a particular tensor is requested (when the model contains tensor-dependent control flow, for example). During this execution, the runtime can identify batching opportunities within the DFGs and launch batched kernels appropriately.
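As a rough illustration of this mechanism, the minimal Python sketch below records tensor operators lazily and batches identically-typed nodes when execution is forced. It is only a toy model of dynamic batching, not DyNet's, TensorFlow Fold's, or ACRoBat's implementation, and the names (`LazyOp`, `flush`, `run_batched`) are invented for illustration.

```python
# Toy model of dynamic batching: ops are recorded lazily into a DFG and only
# executed when a value is requested; nodes with the same operator signature
# whose inputs are ready are launched as a single batched kernel call.
class LazyOp:
    """One recorded tensor op. Leaf tensors are LazyOps with `value` pre-set."""
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, tuple(inputs), value

def flush(nodes, run_batched):
    """Evaluate all recorded nodes, batching identical ops level by level."""
    while pending := [n for n in nodes if n.value is None]:
        ready = {}
        for n in pending:
            if all(i.value is not None for i in n.inputs):
                ready.setdefault(n.op, []).append(n)
        for op, batch in ready.items():
            outs = run_batched(op, [[i.value for i in n.inputs] for n in batch])
            for n, out in zip(batch, outs):
                n.value = out
```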
## 3 ACRoBat: Overview and API
Control flow dynamism necessitates reliance on potentially expensive runtime analysis for auto-batching. In ACRoBat, we observe that aggressive static analysis often provides sufficient information to reduce the overheads of such analyses. Such analyses further allow us to generate specialized and more efficient tensor kernels in an end-to-end manner.
We will now look at ACRoBat's compilation and execution workflows (illustrated in Fig. 1) that make use of the above insights. ACRoBat has been designed to take an unbatched DL computation expressed in a simple Turing-complete functional language as an input. This enables ACRoBat users to easily express models with dynamic control flow, such as the ones discussed in SS2.1. For example, Listing 1 illustrates a simple RNN model which ACRoBat can take as an input.
Given an input computation 1, compilation in ACRoBat begins with batched kernel generation 2. Here, ACRoBat performs novel static analysis (SS5.1) to identify data
\begin{table}
\begin{tabular}{c|c c c c|c} \hline \hline Framework & PyTorch & DyNet & Cortex & TFFold & ACRoBat \\ \hline Auto-batch support & No & Yes & Yes & Yes & Yes \\ Auto-batch analysis & Dyn.only & Static only & Dyn. only & Hybrid \\ Vendor lib, use & High & High & None & High & None \\ Generality & High & High & Low & Mid & High \\ User impl. effort & Low & Low & High & Low & Low \\ Performance & Low & Low & High & Low & High \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison between ACRoBat and other solutions for auto-batching dynamic DL computations. Purely static or dynamic approaches can be overly conservative, or have high overheads respectively, unlike ACRoBat’s hybrid analysis.
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline Deep Learning Computations & \multicolumn{2}{c|}{ITE REC} & TDC & IFP & ICF & TCF \\ \hline RNN (Rumellhart et al., 1986)/LSTM (Hochreiter & & & & & & \\ \& Schmidhuber, 1997)/GRU (Cho et al., 2014), & & & & & \\ GraphRNN (You et al., 2018) & & & & & & \\ \hline DIGA (Drozdov et al., 2019), Chinese Segmentation (Chen et al., 2015) & & & & & & \\ \hline DIGA-RNN (Shuai et al., 2015), TrecL-STM (Socher et al., 2013), MV-RNN (Socher et al., 2012) & & & & & \\ \hline StackLSTM (Dyer et al., 2015) & & & & & & \\ \hline Beam search (Wisseman & & Rash, 2016) with LSTM & & & & \\ \hline Mixture-of-experts (Shazeer et al., 2017; Ma et al., 2018; Fedus et al., 2021) & & & & & \\ \hline Early cutli models (Kay & Dumitras, 2018; Terror-girtikyan et al., 2017; Elbayad et al., 2019) & & & & \\ \hline No U-Turn Sampler (Hoffman & & & & & \\ \hline Tree-to-tree NN (Chen et al., 2018), Doubly Recurrent NN (Alvarez-Melias & & & & \\ \hline R-CNN (Girshick et al., 2013), Fast R-CNN (Girshick, 2015) & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Control flow properties found in DL computations. Legend: ITE: iterative control flow, REC: recursive control flow, TDC: model exhibits tensor-dependent control flow (where control flow decisions are predicated on values on intermediate tensors), CFP: computation exhibits high control flow parallelism (i.e., inter-operator parallelism that arises due to dynamic control flow dependences, such as recursive parallelism), ICF: model inference exhibits control flow, TCF: model training exhibits control flow.
reuse opportunities and accordingly generates batched kernels 3 implementing the tensor operators used in the input program. Further, gather operator fusion (SS5.2) enables us to generate specialized kernels that minimize data movement. These unoptimized kernels are then optimized by an auto-scheduler 4. Once optimized, target code 10 such as CUDA C++ can be generated for the batched kernels. Concurrently, the input program is further optimized and compiled 5 in an ahead-of-time (AOT) fashion to generate C++ code 7. As part of this compilation, ACRoBat generates code to (1) enable low overhead scheduling via our inline depth computation approach, and (2) automatically enable concurrent execution in the presence of tensor dependent control flow (SS4.2).
At runtime, ACRoBat lazily executes the AOT compiled input program 7 on a mini-batch of inputs 8, and constructs DFGs 8. The ACRoBat runtime library will then schedule these DFGs (using inline depth computation as mentioned above) 9, while looking for batching opportunities. Then, it will invoke the optimized batched kernels 10 for each identified batch of DFG nodes. If the input program exhibits tensor dependent control flow, the execution cycles back to the AOT compiled program which will execute further and create more DFGs.
We will now take a look at ACRoBat's hybrid optimizations in SS4 and its tensor kernel generation in SS5.
## 4 Hybrid Static+Dynamic Optimizations
Dynamic control flow often precludes static program transformations. Therefore, ACRoBat takes a hybrid approach whereby it exploits static program knowledge by either (1) providing hints to the dynamic analysis (SS4.1), or (2) generating code that affords the dynamic analysis greater freedom in exploiting parallelism (SS4.2). Further, static analysis also allows us to perform optimizations such as kernel fusion, which is important for high performance (SS7.3). Below, we provide more details regarding our hybrid analysis.
### Inline Depth Computation
As past work (Fegade et al., 2021) has noted, prior fully dynamic approaches incur significant scheduling overheads. For instance, as we will show in Table 5, DyNet's scheduling overheads dominate the time spent in tensor computations for the TreeLSTM model. Instead, as described below, ACRoBat devises a scheme to perform scheduling as it constructs the DFGs, thereby lowering scheduling overheads greatly (SS7).
A DFG scheduling algorithm has two goals:
1. **Correctness**: Scheduling tasks such that dependences between the tasks are respected.
2. **Performance**: Identifying and exploiting parallelism.

Given a DFG(s), we can satisfy both these goals by executing DFG nodes (each of which represents one tensor operator) in the increasing order of their topological depth, such that nodes at the same depth are executed concurrently (Neubig et al., 2017b). To perform this scheduling with low overhead while the DFGs are being constructed, ACRoBat exploits two observations:
1. The order in which the unbatched program invokes the tensor operators, i.e. the order in which nodes are added to the DFGs, is a valid dependency order.
2. Information about instance parallelism (for example, recursive parallelism in the TreeLSTM model as seen in Table 2) is often available during compilation.
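Observation (1) means that when a node is appended to a DFG, all of its inputs have already been added, so its topological depth can be computed immediately rather than in a separate scheduling pass. A minimal illustrative sketch of this inline depth computation follows; it is not ACRoBat's actual code, and all names are invented.

```python
from collections import defaultdict

class InlineDepthScheduler:
    """Toy sketch: depth is computed when a node is added, so scheduling is
    just a walk over (depth, op) buckets in increasing depth order."""
    def __init__(self):
        self.buckets = defaultdict(list)          # (depth, op) -> nodes

    def add_leaf(self, value):                    # e.g. model inputs/parameters
        return {"op": "leaf", "inputs": [], "depth": 0, "value": value}

    def add_node(self, op, inputs):               # inputs: nodes added earlier
        depth = 1 + max(i["depth"] for i in inputs)
        node = {"op": op, "inputs": inputs, "depth": depth, "value": None}
        self.buckets[(depth, op)].append(node)
        return node

    def run(self, run_batched_kernel):
        for (_, op), batch in sorted(self.buckets.items()):
            args = [[i["value"] for i in n["inputs"]] for n in batch]
            for n, out in zip(batch, run_batched_kernel(op, args)):
                n["value"] = out
```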
```
// Listing 1 (reconstructed from garbled text; operator and helper names are
// partly illustrative): a recursive RNN over a list of input tensors, written
// in ACRoBat's high-level functional input language.
List<Tensor> rnn(List<Tensor> inps, Tensor state, Tensor bias, Tensor i_wt, Tensor h_wt) {
  if (inps == Nil) return Nil;
  Tensor new_state = tanh(add3(linear(i_wt, head(inps)), linear(h_wt, state), bias));
  return Cons(new_state, rnn(tail(inps), new_state, bias, i_wt, h_wt));
}
```
When a control flow decision depends on the value of an intermediate tensor, the DFGs constructed so far are executed so that the evaluation can be performed, and any concurrently executing program fragments are resumed afterwards, as illustrated in Fig. 3. Correspondingly, in order to exploit instance parallelism in the presence of tensor dependent control flow, ACRoBat launches concurrent fibers, similar to the fork-join model of parallelism (McCool et al., 2012). ACRoBat thus combines the static knowledge of parallelism with dynamic concurrent execution as part of its hybrid analysis to effectively exploit parallelism in the presence of tensor dependent control flow.
## 5 End-to-end Tensor Kernel Generation
As we alluded to above, ACRoBat enables end-to-end, uniform and automatic tensor kernel code generation by avoiding the use of vendor libraries. This allows ACRoBat to support a larger set of operators without additional compiler development effort. More details about ACRoBat's tensor kernel generation are provided below.
### Exploiting Parameter Reuse
Given the input unbatched computation, ACRoBat needs to generate batched kernels implementing the tensor operators used in the computation. Generating these kernels is not straightforward because some input tensors (often model parameters) might be shared across calls to the operator. For example, across multiple calls to the element-wise addition operator add3 used in the input computation 1 in Fig. 1, the bias argument will be shared (as it is a model parameter) and hence should be reused across all values of the arguments input and state. This can be seen in the corresponding batched kernel 3 and 10 in Fig. 1.
Footnote 5: Context sensitivity is a static analysis technique that allows the compiler to reason about a function in the different contexts it may be called under leading to increased analysis precision. For the DL computations we worked with, we found that a 1-context sensitive analysis was sufficient. Deeper contexts might be useful, however, for more complex computations.
A completely dynamic approach to auto-batching, such as the one used in DyNet, is unable to accurately identify such parameter reuse, and instead relies on heuristics, which can be brittle, leading to sub-optimal performance (SS7.2). On the other hand, ACRoBat uses a 1-context sensitive6 taint analysis to identify such shared arguments to tensor operators. The use of static analysis here allows ACRoBat to obtain accurate knowledge about the parameter reuse patterns.
Footnote 6: The use of static analysis here allows ACRoBat to identify such shared arguments to tensor operators and thus to obtain accurate knowledge about the parameter reuse patterns.
Beyond the analysis described above, ACRoBat further explores opportunities for data reuse by employing code duplication and horizontal fusion as described in SSC.1.
### Fusing Memory Gather Operations
As ACRoBat identifies batching opportunities across the DFGs dynamically, the input tensors to all DFG nodes in a batch may not be laid out contiguously in the accelerator's memory. In this scenario, prior work performs a memory gather before operating on the tensors (by invoking vendor library kernels), leading to significant data movement (SS7.3). Instead, ACRoBat generates specialized batched kernels to directly operate on tensors scattered in memory, in effect fusing the expensive gather operation with the batched kernel. The generated batched kernel 10 in Fig. 1 illustrates this. This fusion can lead to a significant performance improvement as seen in SS7.
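The following NumPy fragment contrasts the two strategies for a batched element-wise addition with a shared bias parameter; it only illustrates the data-movement difference and is not the CUDA code that ACRoBat actually generates.

```python
import numpy as np

storage = np.random.rand(10, 4)       # tensors of many DFG nodes, scattered
batch_idx = np.array([7, 2, 9])       # nodes dynamically selected into one batch
bias = np.random.rand(4)              # shared model parameter (reused, not gathered)

# (a) explicit gather: copy operands into a contiguous buffer, then run the kernel
gathered = storage[batch_idx]         # materializes a contiguous copy (extra data movement)
out_a = gathered + bias

# (b) gather fused into the batched kernel: index the scattered storage while computing
out_b = np.empty((len(batch_idx), 4))
for row, src in enumerate(batch_idx): # in a real kernel this loop is the thread grid
    out_b[row] = storage[src] + bias

assert np.allclose(out_a, out_b)
```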
## 6 Implementation Details
Our prototype of ACRoBat is built upon TVM (Chen et al., 2018) v0.9dev, a DL framework and a tensor compiler. It thus accepts as input computations expressed in Relay. Our prototype also performs the grain size coarsening optimization (Zha et al., 2019; Xu et al., 2018; Fegade et al., 2021; Gao et al., 2018; Silfa et al., 2020), which is discussed further in §B.2 of the appendix.
As demonstrated in SSE.2 of the appendix, we find that using an interpreted virtual machine (VM) for executing the unbatched programs can incur significant VM overheads in the presence of control flow dynamism. Therefore, ACRoBat compiles the input computation to C++ in an AOT fashion (as discussed in the appendix in SSD). Further, as TVM does not support training, we evaluate ACRoBat for
\begin{table}
\begin{tabular}{l l l} \hline \hline Model & Description & Dataset \\ \hline \multirow{2}{*}{TreeLSTM} & \multirow{2}{*}{TreeLSTM} & Stanford sentiment \\ & & treebank (Socher et al., 2013) \\ \hline \multirow{2}{*}{MV-RNN} & \multirow{2}{*}{MV-RNN} & Stanford sentiment treebank \\ \hline BiRNN & Bidirectional RNNs & NXLI (Conneau et al., 2013) \\ \multirow{2}{*}{NeetRSRNN} & An RNN loop nested inside a GRU & GRU/RNN loops iterate for a number of iterations \\ & loop & \\ \hline \multirow{2}{*}{DRNN} & \multirow{2}{*}{Doubly recurrent neural networks for top-down tree generation} & \multirow{2}{*}{Randomly generated tensors.} \\ & & \\ \hline \multirow{2}{*}{Bexit} & Early exit for BERT inference (Xin et al., 2012) & \\ & & \\ \hline \multirow{2}{*}{StackRNN} & StackLSTM parser with LSTM cells & \\ & replaced by RNN cells. & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Models and datasets used in the evaluation.
Figure 4: Ghost operators can enable better batching.
Figure 3: Concurrent execution of the unbatched program in the presence of tensor-dependent control flow.
(batched) inference of DL computations. Other implementation details, including those on ACRoBat's use of TVM's auto-scheduler, can be found in the appendix in SSD.
## 7 Evaluation
We now evaluate ACRoBat against Cortex and DyNet on an Nvidia GPU. Cortex and DyNet are both state-of-the-art auto-batching frameworks for DL computations exhibiting recursive and general unrestricted control flow, respectively. They have been shown to be faster than generic frameworks like PyTorch and TensorFlow (Neubig et al., 2017a;b; Fegade et al., 2021). We also compare ACRoBat's performance with that of PyTorch, though due to space limitations, we include those results in §E.3 in the appendix6.
Footnote 6: Overall, ACRoBat performs significantly better as compared to PyTorch as the latter does not perform auto-batching.
### Experimental Setup
**Models:** We use the models listed in Table 3 for the evaluation. For each model, we look at two model sizes--small and large. For the MV-RNN model, we use hidden sizes 64 and 128 for the small and large model sizes, while for the Berrit model, the small model uses the same hyper-parameters as the BERTBASE model (Devlin et al., 2018), while the large model uses the same hyper-parameters as the BERTLARGE model (Devlin et al., 2018), except that we use 18 layers instead of 24 in this case. For the remaining models, the small and the large model sizes use hidden sizes of 256 and 512 respectively.
**Experimental Environment:** We run our experiments on a Linux workstation with an AMD Ryzen Threadripper 3970X CPU (64 logical cores with 2-way hyperthreading) and an Nvidia RTX 3070 GPU. The machine runs Ubuntu 20.04, CUDA 11.1 and cuDNN 8.0.5. We compare against DyNet's commit 3e1b48c7 (March 2022) which uses the Eigen library (v3.3.90).
### Overall Performance
In this section, we compare ACRoBat's performance with that of DyNet and Cortex. Note that, as detailed further in SSE.2 of the appendix, we find that AOT compilation significantly reduces execution overheads, leading to up to 13.45\(\times\) faster execution as compared to the default Relay VM. Therefore, for the rest of this section, we evaluate ACRoBat's performance with AOT compilation turned on.
#### 7.2.1 Performance Comparison with DyNet
We now compare ACRoBat's performance with that of DyNet. As mentioned in §6, TVM does not support the training of DL models. Therefore, due to the lack of access to trained model parameters, we use pseudo-randomness to emulate tensor dependent control flow in the NestedRNN, DRNN, Berrit and StackRNN models as part of our evaluation. More details can be found in §E.1 of the appendix.
The execution latencies for DyNet and ACRoBat are shown in Table 4. ACRoBat performs better than DyNet in most cases due to a number of reasons. Table 5 lists the time spent by the frameworks for different runtime activities for the TreeLSTM model. We see that ACRoBat's optimizations such as static kernel fusion and grain size coarsening reduce the number of tensor kernels invoked, thereby significantly reducing DFG construction and scheduling overheads. Further, inline depth computation allows ACRoBat to exploit available parallelism with lower overheads. Optimizations such as static kernel fusion and gather operator fusion enable ACRoBat to launch fewer GPU kernels, further reducing the time spent in the CUDA API. We look at the benefits of each of ACRoBat's optimizations in more detail in §7.3.
Footnote 7: We consider the best of the two scheduling schemes DyNet implements (Neubig et al., 2017) for each model configuration.
While, overall, ACRoBat performs 2.3\(\times\) better than DyNet across all model configurations, DyNet performs slightly better than ACRoBat for some configurations of the BiRNN and NestedRNN models. For the former, Table 5 shows that while ACRoBat incurs lower runtime overheads for DFG construction, scheduling and memory transfer, it spends a higher amount of time in kernel execution compared to DyNet. We believe that better tensor kernel optimizations can help reduce this performance gap.
Beyond the above reasons, ACRoBat performs better on specific benchmarks for the reasons discussed below:
**Accurate parameter reuse inference and automated batched kernel generation:** As mentioned in SS5.1, ACRoBat's use of static analysis for inferring parameter reuse allows it to have accurate knowledge to statically generate the appropriate batched kernels. On the other hand, DyNet's heuristic-based approach is unable to batch instances of certain operators, forcing sequential unbatched execution which leads to low performance. Further, as described in SS5, ACRoBat's end-to-end kernel generation leads to a broader coverage over tensor operators for which batching is supported as compared to approaches such as DyNet which rely on vendor libraries. As a result, DyNet does not support batching for certain operators, again leading to sequential execution and low performance. Both these cases are discussed further in SSE.4 of the appendix.
**Automated code generation for handling tensor dependent control flow:** The DRNN model constructs a tree from an input vector representation in a top-down recursive manner. It exhibits both tensor-dependent control flow
as well as instance parallelism (multiple sub-trees can be generated concurrently). We saw how ACRoBat can automatically exploit instance parallelism in the presence of tensor-dependent control flow with the use of fibers in SS4.2. On the other hand, DyNet is unable to exploit this parallelism and therefore ACRoBat's performance on this model is significantly better than that of DyNet.
#### 7.2.2 Performance Comparison with Cortex
Table 6 compares the performance of ACRoBat with that of Cortex for the TreeLSTM, MV-RNN and the BiRNN models. Note that this is not an apples-to-apples comparison because Cortex, being specialized for recursive computations, does not support general control flow (as is present in the other models in Table 3), unlike ACRoBat, as mentioned in Table 1. Further, Cortex places a high development burden on users who are required to manually optimize and tune their models for specific hardware, unlike ACRoBat's automatic kernel generation8. Similarly, while ACRoBat can automatically hoist the input linear transformations out of the recursive computation in the TreeLSTM and BiRNN models (as described in §B.1), they need to be manually hoisted and offloaded to cuBLAS in the case of Cortex.
Footnote 8: For example, implementing the MV-RNN model in Cortex requires 325 LoC in Python, as compared to the 79 LoC of Relay and 108 LoC of Python in ACRoBat.
Being highly specialized for recursive computations, Cortex is able to exploit aggressive kernel fusion, model persistence and incur low kernel call overheads, thus performing up to \(1.87\times\) better than ACRoBat for the TreeLSTM and BiRNN models. However, note that Cortex performs much worse than ACRoBat on the MV-RNN model. This is because Cortex's restrictive API necessitates additional copies of the embedding vectors for the leaves of the input parse trees, which ACRoBat can avoid due to its more flexible interface. Overall, ACRoBat delivers performance comparable to that of Cortex, while supporting a much wider range of DL computations with much lesser developer effort.
### Benefits of Optimizations
We now evaluate the relative benefits of the different optimizations ACRoBat performs. Fig. 5 shows the execution times for the models in Table 3 (for the large model size at a batch size of 64) as we progressively perform optimizations. Standard kernel fusion (i.e. kernel fusion not including gather operator fusion as discussed in SS5.2) provides significant benefits for all models9. Grain size coarsening and inline depth computation, both of which reduce scheduling overheads, are most beneficial for models with a relatively high amount of control flow such as TreeLSTM and MV-RNN. Further, in the case of the DRNN model, inline depth computation also enables ACRoBat to exploit the instance parallelism inherent in the computation (SS4.2) leading to lower execution time. The BiRNN model involves per-token output linear operators as in token classification. Here, program phases allow ACRoBat to batch all these operators together as described in SS4.1. The StackRNN model executes different tensor operators depending on the current parser action, which involves a conditional statement. Ghost operators therefore enable more optimal exploitation of parallelism leading to better performance.
Footnote 9: The kernels used in the implementations with and without standard kernel fusion were auto-scheduled for the same number of auto-scheduler iterations.
Gather operator fusion is advantageous for some benchmarks but not others. Such fusion leads to indirect memory accesses which can cause a slowdown in the kernel execution. While ACRoBat does hoist such loads out of loops when appropriate, this is not always possible de
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Hidden & Batch & \multicolumn{2}{c}{TreeLSTM} & \multicolumn{2}{c}{MV-RNN} & \multicolumn{2}{c}{BiRNN} \\ Size & Size & \multicolumn{1}{c}{Certex} & ACRoBat & Cortex & ACRoBat & Cortex & ACRoBat \\ \hline small & 8 & **0.79** & 1.48 & 1.14 & **0.54** & **1.28** & 2.16 \\ small & 64 & **3.62** & 5.81 & 6.92 & **1.48** & **3.48** & 4.86 \\ large & 8 & **1.84** & 2.4 & 5.3 & **1.04** & **2.47** & 4.43 \\ large & 64 & **10.23** & 11.44 & 41.15 & **4.46** & **10.74** & 13.11 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Cortex vs. ACRoBat: Inference latencies in \(ms\). Note that unlike ACRoBat, Cortex is limited to recursive computations, and does not support the other models in Table 3. Further, Cortex places a high development burden on its users by relying on manual kernel optimization.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c} \hline \hline Hidden & Batch & \multicolumn{2}{c}{TreeLSTM} & \multicolumn{2}{c}{MV-RNN} & \multicolumn{2}{c}{BiRNN} & \multicolumn{2}{c}{NearestRNN} & \multicolumn{2}{c}{DBNN} & \multicolumn{2}{c}{Break} & \multicolumn{2}{c}{StackRNN} \\ Size & Size & Time & Speedup & Time & Speedup & Time & Speedup & Time & Speedup & Time & Speedup & Time & Speedup & Time & Speedup \\ \hline small & 8 & 4.31/**1.48** & 2.93 & 21.11/**5.4** & 3.96 & 3.13/**2.16** & 1.45 & **29.3**/**0.51** & 0.95 & 6.71/**7.14** & 3.87 & 63.5/**48.9** & 1.66 & 47.78/**22.69** & 2.11 \\ small & 64 & 26.18/**5.81** & 4.51 & 2.45/**2.48** & 8.47 & 12.0/**4.26** & 2.49 & 84.5/**56.73** & 1.29 & 2.52/**35.24** & 4.84 & 2.04/**51.5** & - & - & 21.36/**09.06** & 5.48 \\ large & 64 & **4.58/2.4** & 1.92 & **227/1.04** & 2.19 & **3.96**/**4.33** & 0.9 & 64.03/**0.56** & 1.3 & 8.44/**42.45** & 3.45 & 113/**18/18/16.4** & 1.76 & 64.67/**43/2.75** & 1.48 \\ large & 64 & 26.53/**11.44** & 2.33 & 13.89/**4.46** & 3.13 & **12.11/**13.11** & 0.93 & **94.97**/100.17 & 0.95 & 26.5/**9.99** & 2.66 & -/315.3 & - & 230.74/**486.52** & 2.66 \\ \hline \hline \end{tabular}
\end{table}
Table 4: DyNet vs. ACRoBat: Inference latencies (DyNet/ACRoBat) in \(ms\) and speedups. The DyNet implementation of the Bexr model was killed due to out-of-memory errors for a batch size of 64.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Activity} & \multicolumn{2}{c}{TreeLSTM, small} & \multicolumn{2}{c}{BiRNN, large} \\ \cline{2-5} & DyNet & ACRoBat & DyNet & ACRoBat \\ \hline DFG construction & 8.8 & 1.5 & 4.5 & 1.0 \\ Scheduling & 9.7 & 0.4 & 3.3 & 0.4 \\ Mem. copy time & 3.1 & 0.1 & 2.3 & 0.2 \\ GPU kernel time2 & 6.1 \\ \#Kernel calls & 1653 & 183 & 580 & 380 \\ CUDA API time3 & 16.5 & 3.9 & 12.0 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Time spent (\(ms\)) in various activities1 for DyNet and ACRoBat for batch size 64.
pending on the schedule generated by the auto-scheduler. Further, gather operator fusion leads to a slowdown mostly in models with iterative execution and little instance parallelism. As in DyNet, when gather operator fusion is turned off, ACRoBat performs the explicit memory gather only when the input tensors are not already contiguous in memory. This is more likely to be the case in such iterative models, thus blunting the advantages of gather operator fusion. Also, in models such as Berrit, the relatively high tensor computation cost of a coarsened static block further reduces any benefits gather operator fusion might provide.
Overall, models with a relatively lower amount of control flow or a higher amount of tensor computations such as Berrit or NestedRNN or models with the large size benefit less from optimizations that reduce scheduling overheads.
## 8 Related Work
**Auto-Batching for Dynamic Control Flow:** There has been significant work on auto-batching techniques for dynamic computations. Beyond dynamic batching (which is used in various forms in DyNet, TensorFlow Fold, Cavs and Cortex), static program transformations (Bradbury and Fu, 2018; Agarwal, 2019; Agarwal and Ganichev, 2019; Frostig et al., 2018; Radul et al., 2020) have also been explored for auto-batching. Such techniques are often unable to fully exploit all the available parallelism in the program, as noted in (Radul et al., 2020). ACRoBat builds on these past techniques and effectively uses both static as well as dynamic analysis, thus achieving lower runtime overheads while exploiting all the available parallelism. Online batching approaches for low latency RNN inference such as Batch-Maker (Gao et al., 2018) and E-BATCH (Silfa et al., 2020) are complementary to ACRoBat. (Qiao and Taura, 2019) proposes improvements to the dynamic batching technique for back propagation. Further, while grain size coarsening has been explored in past work, we use it statically in the context of a general-purpose auto-batching framework.
**Optimizing Dynamic DL Computations:** Beyond auto-batching, there is a large body of work on optimizing the execution of dynamic DL computations. Past work (Jeong et al., 2019; Kim et al., 2021; Suhan et al., 2021) has explored the lazy creation of DFGs that can be optimized to accelerate dynamic models. These techniques, which do not perform batching, are complementary to ACRoBat's techniques. While ACRoBat builds upon TVM, our techniques can be implemented in other commonly used compiler frameworks with expressive representations (PyTorch, 2020; Lattner et al., 2020) in a straightforward manner.
The gather operator fusion optimization is similar to the gather and scatter fusion (CUTLASS, 2022) performed for sparse GEMM in the CUTLASS library though we perform this optimization automatically as part of compilation. As mentioned in SSD.1, ACRoBat borrows some techniques from DietCode for efficient code generation. DietCode's techniques are complementary to ours and it can be fully integrated into ACRoBat for better kernel performance.
**Traditional Compiler Techniques:** ACRoBat uses compilation techniques for programs written in general-purpose languages. These include context-sensitivity (Aho et al., 2007), taint analysis which is extensively used for security purposes (Tripp et al., 2009; Huang et al., 2015), profile-guided optimization (Chen et al., 2006; Gupta et al., 2002) (as discussed in SSD.1 of the appendix) and program phases, which have been used to adaptively optimize systems for different parts of a program for optimal performance (Huang et al., 2001; Barnes et al., 2002). ACRoBat's inline depth computation and DFG scheduling more generally are similar to work on static and dynamic instruction scheduling for pipelined and superscalar processors (Smith, 1989; Ponomarev et al., 2001; Fisher, 1981; Gibbons and Muchnick, 1986). However, ACRoBat applies these techniques in the context of a DL framework.
## 9 Conclusion
This paper presents ACRoBat, a compiler and runtime framework that performs auto-batching of dynamic DL computations. ACRoBat employs hybrid static+dynamic analysis to enable effective batching with low runtime overheads, and end-to-end code generation to generate highly optimized tensor kernels for efficient execution. While we evaluated these techniques only for the case of batched inference, we believe that they also apply to DL training. In the context of the rising importance of dynamism in DL computations, we believe that ACRoBat is an important step towards more collaborative relationships between various components of a DL framework such as the tensor compiler, the high-level language compiler as well as the runtime.
Figure 5: Benefits of different optimizations. The unfused executions of Berrit were killed due to out-of-memory errors.
## Acknowledgements
This work was supported in part by grants from the National Science Foundation, Oracle and IBM, and by the Parallel Data Lab (PDL) Consortium (Amazon, Facebook, Google, Hewlett-Packard Enterprise, Hitachi, IBM, Intel, Microsoft, NetApp, Oracle, Pure Storage, Salesforce, Samsung, Seagate, TwoSigma and Western Digital). We would like to thank Saman Amarasinghe, Dominic Chen, Stephen Chou, Chris Fallin, Graham Neubig, Olatunji Ruwase and the Catalyst Research Group at Carnegie Mellon University for their valuable suggestions and feedback on our work.
|
2304.11632 | Towards Effective and Interpretable Human-Agent Collaboration in MOBA
Games: A Communication Perspective | MOBA games, e.g., Dota2 and Honor of Kings, have been actively used as the
testbed for the recent AI research on games, and various AI systems have been
developed at the human level so far. However, these AI systems mainly focus on
how to compete with humans, less on exploring how to collaborate with humans.
To this end, this paper makes the first attempt to investigate human-agent
collaboration in MOBA games. In this paper, we propose to enable humans and
agents to collaborate through explicit communication by designing an efficient
and interpretable Meta-Command Communication-based framework, dubbed MCC, for
accomplishing effective human-agent collaboration in MOBA games. The MCC
framework consists of two pivotal modules: 1) an interpretable communication
protocol, i.e., the Meta-Command, to bridge the communication gap between
humans and agents; 2) a meta-command value estimator, i.e., the Meta-Command
Selector, to select a valuable meta-command for each agent to achieve effective
human-agent collaboration. Experimental results in Honor of Kings demonstrate
that MCC agents can collaborate reasonably well with human teammates and even
generalize to collaborate with different levels and numbers of human teammates.
Videos are available at https://sites.google.com/view/mcc-demo. | Yiming Gao, Feiyu Liu, Liang Wang, Zhenjie Lian, Weixuan Wang, Siqin Li, Xianliang Wang, Xianhan Zeng, Rundong Wang, Jiawei Wang, Qiang Fu, Wei Yang, Lanxiao Huang, Wei Liu | 2023-04-23T12:11:04Z | http://arxiv.org/abs/2304.11632v1 | Towards Effective and Interpretable Human-Agent Collaboration in MOBA Games: A Communication Perspective
###### Abstract
MOBA games, e.g., _Dota2_ and _Honor of Kings_, have been actively used as the testbed for the recent AI research on games, and various AI systems have been developed at the human level so far. However, these AI systems mainly focus on how to compete with humans, less on exploring how to collaborate with humans. To this end, this paper makes the first attempt to investigate human-agent collaboration in MOBA games. In this paper, we propose to enable humans and agents to collaborate through explicit communication by designing an efficient and interpretable **M**eta-**C**ommand **C**ommunication-based framework, dubbed MCC, for accomplishing effective human-agent collaboration in MOBA games. The MCC framework consists of two pivotal modules: 1) an interpretable communication protocol, i.e., the Meta-Command, to bridge the communication gap between humans and agents; 2) a meta-command value estimator, i.e., the Meta-Command Selector, to select a valuable meta-command for each agent to achieve effective human-agent collaboration. Experimental results in _Honor of Kings_ demonstrate that MCC agents can collaborate reasonably well with human teammates and even generalize to collaborate with different levels and numbers of human teammates. Videos are available at [https://sites.google.com/view/mcc-demo](https://sites.google.com/view/mcc-demo).
## 1 Introduction
Games, as the microcosm of real-world problems, have been widely used as testbeds to evaluate the performance of Artificial Intelligence (AI) techniques for decades. Recently, many researchers focus on developing various human-level AI systems for complex games, such as board games like _Go_(Silver et al., 2016, 2017), Real-Time Strategy (RTS) games like _StarCraft 2_(Vinyals et al., 2019), and Multi-player Online Battle Arena (MOBA) games like _Dota 2_(OpenAI et al., 2019). However, these AI systems mainly focus on how to compete instead of collaborating with humans, leaving Human-Agent Collaboration (HAC) in complex environments still to be investigated. In this paper, we study the HAC problem in complex MOBA games (Silva & Chaimowicz, 2017), which is characterized by multi-agent cooperation and competition mechanisms, long time horizons, enormous state-action spaces (\(10^{20000}\)), and imperfect information (OpenAI et al., 2019; Ye et al., 2020).
HAC requires the agent to collaborate reasonably with various human partners (Dafoe et al., 2020). One straightforward approach is to improve the generalization of agents, that is, to collaborate with a sufficiently diverse population of teammates during training. Recently, some population-based methods proposed to improve the generalization of agents by constructing a diverse population of partners in different ways, succeeding in video games (Jaderberg et al., 2017, 2019; Carroll et al., 2019; Strouse et al., 2021) and card games (Hu et al., 2020; Andrei et al., 2021). Furthermore, to better evaluate HAC agents, several objective as well as subjective metrics have been proposed (Du et al., 2020; Siu et al., 2021; McKee et al., 2022). However, the policy space in complex MOBA
games is enormous (Gao et al., 2021) and requires massive computing resources to build a sufficiently diverse population of agents, posing a big obstacle to the scalability of these methods.
The communication ability to explicitly share information with others is important for agents to collaborate effectively with humans (Dafoe et al., 2020). In Multi-Agent Reinforcement Learning (MARL), communication is often used to improve inter-agent collaboration. Previous work (Sukhbaatar et al., 2016; Foerster et al., 2016; Lazaridou et al., 2016; Peng et al., 2017; Mordatch & Abbeel, 2018; Singh et al., 2018; Das et al., 2019; Wang et al., 2020) mainly focused on exploring communication protocols between multiple agents. Other work (Ghavamzadeh & Mahadevan, 2004; Jiang & Lu, 2018; Kim et al., 2019) proposed to model the value of multi-agent communication for effective collaboration. However, these methods all model communication in latent spaces without considering the human-interpretable _common ground_(Clark & Brennan, 1991; Stalnaker, 2002) or _lingua franca_Kambhampati et al. (2022), making themselves less interpretable to humans. Explicit communication dominated by natural language is often considered in human-robot interaction (Kartoun et al., 2010; Liu et al., 2019; Shafiti et al., 2020; Gupta et al., 2021). However, these studies are mainly limited to collaboration between a robot and a human through one-way communication, i.e., humans give robots orders. Therefore, there is still a large room to study RL with the participation of humans.
Success in MOBA games requires subtle individual micro-operations and excellent communication and collaboration among teammates on macro-strategies, i.e., long-term intentions (Wu, 2019; Gao et al., 2021). The micro-operation ability of the existing State-Of-The-Art (SOTA) MOBA agents has exceeded the high-level (top 1%) humans (Ye et al., 2020). However, these agents' macro-strategies are deterministic and quite different from those of humans (Ye et al., 2020). Moreover, all existing SOTA MOBA AI systems lack bridges for explicit communication between agents and humans on macro-strategies. These result in the agent's behavior not being understood immediately by humans (Ye et al., 2020) and not performing well when collaborating with humans (see Section 4.3).
To this end, we propose an efficient and interpretable Meta-Command Communication-based human-agent collaboration framework, dubbed MCC, to achieve effective HAC in MOBA games through explicit communication. First, we design an interpretable communication protocol, i.e., the Meta-Command, as a general representation of macro-strategies to bridge the communication gap between agents and humans. Both macro-strategies sent by humans and messages outputted by agents can be converted into unified meta-commands (see Figure 1(b)). Second, following Gao et al. (2021), we construct a hierarchical model that includes the command encoding network (macro-strategy layer) and the meta-command conditioned action network (micro-action layer), used for agents to generate and execute meta-commands, respectively. Third, we propose a meta-command value estimator, i.e., the Meta-Command Selector, to select the optimal meta-command for each agent to execute. The training process of the MCC agent consists of three phases. We first train the command encoding network to ensure that the agent learns the distribution of meta-commands sent by humans. Afterward, we train the meta-command conditioned action network to ensure that the agent learns to execute meta-commands. Finally, we train the meta-command selector to ensure that the agent learns to select the optimal meta-commands to execute. We train and evaluate the agent in _Honor of Kings_ 5v5 mode with a full hero pool (over 100 heroes). Experimental results demonstrate the effectiveness of the MCC framework. In general, our contributions are as follows:
Figure 1: **Introduction of _Honor of Kings_. (a) Key elements in _Honor of Kings_, including the game environment, micro-operation buttons, examples of macro-strategy, and the signaling system. (b) Example of collaboration via meta-commands. The _Come And Kill The Dragon_ is more valuable for humans A and B and agent D to collaborate, while the _Clean Up Top-Lane Minions_ is more valuable for human C and agent E to collaborate.**
* To the best of our knowledge, we are the first to investigate the HAC problem in MOBA games. We propose the MCC framework to achieve effective HAC in MOBA games.
* We design the Meta-Command to bridge the communication gap between humans and agents. We also propose the Meta-Command Selector to model the agent's value system for meta-commands.
* We introduce the training process of the MCC agent in a typical MOBA game _Honor of Kings_ and evaluate it in practical human-agent collaboration tests. Experimental results show that the MCC agent can reasonably collaborate with different levels and numbers of human teammates.
## 2 Background
### MOBA Games
MOBA games have recently received much attention from researchers, especially _Honor of Kings_(Wu, 2019; Ye et al., 2020; Ye et al., 2020; Gao et al., 2021), one of the most popular MOBA games worldwide. The gameplay is to divide ten players into two camps to compete on the same map. The game environment is shown in Figure 1(a). Each camp competes for resources through individual micro-operations (A1, A2) and team collaboration on macro-strategies (B), and finally wins the game by destroying the enemy's crystal. Players can communicate and collaborate with teammates through the in-game signaling system. Particularly, players can send macro-strategies by dragging signal buttons (C1, C2) to the corresponding locations in the mini-map (D), and these signals display to teammates in the mini-map (E). See Appendix A for detailed game introductions.
### Human-Agent Collaboration
We consider an interpretable communicative human-agent collaboration task, which can be extended from the Partially Observable Markov Decision Process (POMDP) and formulated as a tuple \(<N,H,\mathbf{S},\mathbf{A}^{N},\mathbf{A}^{H},\mathbf{O},\mathbf{M},r,P,\gamma>\), where \(N\) and \(H\) represent the numbers of agents and humans, respectively. \(\mathbf{S}\) is the space of global states. \(\mathbf{A}^{N}=\{A_{i}^{N}\}_{i=1,\dots,N}\) and \(\mathbf{A}^{H}=\{A_{i}^{H}\}_{i=1,\dots,H}\) denote the spaces of actions of \(N\) agents and \(H\) humans, respectively. \(\mathbf{O}=\{O_{i}\}_{i=1,\dots,N+H}\) denotes the space of observations of \(N\) agents and \(H\) humans. \(\mathbf{M}\) represents the space of interpretable messages. \(P:\mathbf{S}\times\mathbf{A}^{N}\times\mathbf{A}^{H}\rightarrow\mathbf{S}\) and \(r:\mathbf{S}\times\mathbf{A}^{N}\times\mathbf{A}^{H}\rightarrow\mathbb{R}\) denote the shared state transition probability function and reward function of \(N\) agents, respectively. Note that \(r\) includes both individual rewards and team rewards. \(\gamma\in[0,1)\) denotes the discount factor. For each agent \(i\) in state \(s_{t}\in\mathbf{S}\), it receives an observation \(o_{t}^{i}\in O_{i}\) and a selected message \(c_{t}^{i}\in\mathbf{M}\), and then outputs an action \(a_{t}^{i}=\pi_{\theta}(o_{t}^{i},c_{t}^{i})\in A_{i}^{N}\) and a new message \(m_{t+1}^{i}=\pi_{\phi}(o_{t}^{i})\in\mathbf{M}\), where \(\pi_{\theta}\) and \(\pi_{\phi}\) are action network and message encoding network, respectively. A message selector \(c_{t}^{i}=\pi_{\omega}(o_{t}^{i},C_{t})\) is introduced to select a message \(c_{t}^{i}\) from a message set \(C_{t}=\{m_{t}^{i}\}_{i=1,\dots,N+H}\subset\mathbf{M}\).
We divide the HAC problem in MOBA games into the Human-to-Agent (H2A) and the Agent-to-Human (A2H) scenarios. **The H2A Scenario:** Humans send their macro-strategies as messages to agents, and agents select the optimal one to collaborate with humans based on their value systems. **The A2H Scenario:** Agents send their messages as macro-strategies to humans, and humans select the optimal one to collaborate with agents based on their value systems. The goal of both scenarios is that agents and humans communicate macro-strategies with pre-defined communication protocols and then select valuable macro-strategies for effective collaboration to win the game.
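To make the interfaces above concrete, a single decision step of agent \(i\) can be sketched as follows; the three networks are placeholders and this is only an illustration of the formulation, not the MCC implementation.

```python
def agent_step(obs_i, candidate_msgs, pi_omega, pi_theta, pi_phi):
    """One time step of a communicative agent in the formulation above."""
    c_i = pi_omega(obs_i, candidate_msgs)   # select a message c_t^i from C_t
    a_i = pi_theta(obs_i, c_i)              # act conditioned on the selected message
    m_i = pi_phi(obs_i)                     # emit a new interpretable message m_{t+1}^i
    return a_i, m_i, c_i
```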
## 3 Meta-Command Communication-Based Framework
In this section, we present the proposed MCC framework in detail. We first briefly describe three key stages of the MCC framework (Section 3.1). Then we introduce its two pivotal modules: 1) an interpretable communication protocol, i.e., the Meta-Command, as a general representation of macro-strategies to bridge the communication gap between agents and humans (Section 3.2); 2) a meta-command value estimator, i.e., the Meta-Command Selector, to model the agent's value system for meta-commands to achieve effective HAC in MOBA games (Section 3.3).
### Overview
The process of the MCC framework consists of three stages: (I) the **Meta-Command Conversion** Stage, (II) the **Meta-Command Communication** Stage, and (III) the **Human-Agent Collaboration**
Stage, as shown in Figure 2. Notably, Stage I and II are executed at each communication step, and Stage III is executed at each time step. At Stage I, the MCC framework converts humans' explicit messages and agents' implicit messages into unified meta-commands \(m_{t}^{H},m_{t}\), respectively, and broadcasts them to all agents and humans. At Stage II, the MCC framework estimates the values of all received meta-commands \(C_{t}\) and selects the optimal one \(c_{t}\in C_{t}\) for each agent to execute. The selected meta-command will remain unchanged between each two communication steps (e.g. within \([t,t+T^{mc})\) time steps). At stage III, the MCC framework predicts a sequence of actions for each agent to perform based on its selected meta-command \(c_{t}\). In each game, humans and agents collaborate multiple times, that is, execute the three stages multiple times, to win the game.
### Meta-Command
We divide a macro-strategy into three components: where to go, what to do, and how long. For example, a macro-strategy can be _Come And Kill The Dragon_, which consists of _Come To The Dragon Location_ (where to go), _Attack The Dragon_ (what to do), and _Until The Dragon Is Killed_ (how long). Thus, a general representation of macro-strategies, i.e., the Meta-Command, can be formulated as a tuple \(<L,E,T^{mc}>\), as shown in Figure 1(b), where \(L\) is the _Location_ to go, \(E\) is the _Event_ to do after reaching \(L\), and \(T^{mc}\) is the _Time Limit_ for executing the meta-command.
**Meta-Command Conversion.** To realize bidirectional interpretable human-agent communication, the MCC framework converts humans' explicit messages and agents' implicit messages into unified meta-commands. To achieve the former, we use the Command Converter function \(f^{cc}\) (Appendix B.6.1) to extract corresponding location \(L^{H}\) and event \(E^{H}\) from explicit messages sent by humans in the in-game signaling system. To achieve the latter, we use a command encoder network (CEN) \(\pi_{\phi}(m|o)\) to generate \(L\) and \(E\) based on the agent's observation \(o\). The CEN is trained via supervised learning (SL) with the goal of learning the distribution of meta-commands sent by humans (Appendix B.6.2). In MOBA game settings, we use a common location description, i.e., divide \(L\) of meta-commands in the map into 144 grids. Since the macro-strategy space is enormous (Gao et al., 2021), customizing corresponding rewards for each specific event to train the agent is not conducive to generalization and is even impossible. Instead, we train a micro-action network to learn to do optimal event \(E^{*}\) at location \(L\), just as humans do optimal micro-operations at location \(L\) based on their own value systems. We also do not specify a precise \(T^{mc}\) for the execution of each specific meta-command. Instead, we set \(T^{mc}\) to how long it takes a human to complete a macro-strategy in MOBA games. Usually, 20 seconds corresponds to an 80% completion rate, based on our statistics (Appendix B.6.2). Thus, the MCC framework converts humans' explicit messages into meta-commands \(m^{H}=<L^{H},E^{H},T^{mc}>\), generates meta-commands \(m=<L,E^{*},T^{mc}>\) for agents based on their observations (Figure 2(I)), and then broadcasts them to all agents and humans.
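A minimal sketch of the meta-command tuple and of converting a human map signal into one is given below; the 12\(\times\)12 split of the 144 location grids, the event strings, and all helper names are illustrative assumptions rather than the exact representation used by MCC.

```python
from dataclasses import dataclass

GRID_W, GRID_H = 12, 12      # assumption: the 144 location grids as a 12x12 split
T_MC = 20.0                  # seconds; ~80% completion rate per the statistics above

@dataclass
class MetaCommand:
    L: int        # grid index in [0, 144): where to go
    E: str        # what to do at L (agents execute the optimal event E*)
    T_mc: float   # time limit for execution

def human_signal_to_meta_command(signal_xy, signal_event, map_w, map_h):
    """Toy version of the command converter f^cc for an in-game map signal."""
    gx = min(int(signal_xy[0] / map_w * GRID_W), GRID_W - 1)
    gy = min(int(signal_xy[1] / map_h * GRID_H), GRID_H - 1)
    return MetaCommand(L=gy * GRID_W + gx, E=signal_event, T_mc=T_MC)
```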
**Meta-Command Execution.** After receiving a meta-command candidate set, each agent can select one meta-command from it to execute. Note that the MCC framework replaces \(E^{H}\) with \(E^{*}\) when an agent selects a meta-command sent by a human. We adopt the Meta-Command Conditioned Action Network (MCCAN) \(\pi_{\theta}(a|o,m)\) for agents to perform
Figure 2: **The MCC framework. (a) Three key stages of MCC: (I) the meta-command conversion stage, (II) the meta-command communication stage, and (III) the human-agent collaboration stage. (b) The temporal process of MCC. Stage I and II are executed at each communication step. Stage III is executed at each time step.**
actions based on the selected meta-command, as shown in Figure 3(a)(II). The MCCAN is trained via self-play RL with the goal of achieving a high completion rate for the meta-commands while ensuring that the win rate is not reduced. To achieve this, we introduce extrinsic rewards \(r\) (including individual and team rewards) and intrinsic rewards \(r_{t}^{int}(s_{t},m_{t},s_{t+1})=\left|f^{ce}(s_{t})-m_{t}\right|-\left|f^{ce}(s_{t+1})-m_{t}\right|\), where \(f^{ce}\) extracts the agent's location from state \(s_{t}\), and \(\left|f^{ce}(s_{t})-m_{t}\right|\) is the distance between the agent's location and the meta-command's location at time step \(t\). Intuitively, the intrinsic rewards guide the agent to reach the location \(L\) of the meta-command and stay at \(L\) to do some event \(E\), while the extrinsic rewards guide the agent to perform optimal actions to reach \(L\) and do the optimal event \(E^{*}\) at \(L\). Overall, the _optimization objective_ is to maximize the expected discounted sum of extrinsic and intrinsic rewards \(G_{t}=\mathbb{E}_{s\sim d_{\pi_{\theta}},a\sim\pi_{\theta}}\left[\sum_{i=0}^{\infty}\gamma^{i}r_{t+i}+\alpha\sum_{j=0}^{T^{mc}}\gamma^{j}r_{t+j}^{int}\right]\), where \(d_{\pi}(s)=\lim_{t\rightarrow\infty}P\left(s_{t}=s\mid s_{0},\pi\right)\) is the limiting state distribution obtained by following \(\pi\) from \(s_{0}\), and \(\alpha\) weighs the intrinsic rewards against the extrinsic ones.
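The intrinsic shaping reward above can be illustrated with the following minimal sketch, which assumes grid-indexed locations and the 12x12 layout used earlier; the `intrinsic_reward` helper and its arguments are hypothetical stand-ins for \(f^{ce}\) and the meta-command location.

```python
import math

def grid_center(g: int, grid_cols: int = 12):
    """Center of grid g in grid coordinates (12x12 layout assumed)."""
    return (g % grid_cols + 0.5, g // grid_cols + 0.5)

def intrinsic_reward(loc_t: int, loc_t1: int, command_loc: int) -> float:
    """r^int_t = |f_ce(s_t) - m_t| - |f_ce(s_{t+1}) - m_t|: positive when the
    agent moves closer to the meta-command location L."""
    def dist(g_a: int, g_b: int) -> float:
        (xa, ya), (xb, yb) = grid_center(g_a), grid_center(g_b)
        return math.hypot(xa - xb, ya - yb)
    return dist(loc_t, command_loc) - dist(loc_t1, command_loc)

# Example: moving from grid 0 toward grid 143 yields a positive intrinsic reward.
assert intrinsic_reward(0, 13, 143) > 0
```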
After training the CEN and the MCCAN, we could achieve HAC simply by letting an agent randomly select a meta-command derived from humans to execute. However, such collaboration is naive and can even be disastrous for winning the game, because agents have no mechanism to model the values of meta-commands and thus cannot choose the optimal one to execute, whereas humans usually choose the optimal one based on their own value systems to collaborate effectively and win the game. Thus, we propose a meta-command value estimator to model the agent's value system for meta-commands, as described in the following subsection.
### Meta-Command Selector
In MOBA games, the same macro-strategy often has different values for different humans in different situations. For example, the macro-strategy _Come And Kill The Dragon_ in Figure 1(b) is more valuable for humans A and B and agent D, so it pays off for them to collaborate on it, whereas the macro-strategy _Clean Up Top-Lane Minions_ is more valuable for human C and agent E. Therefore, agents must select the most valuable meta-command from the received meta-command candidate set \(C\) to achieve effective human-agent collaboration. We propose a meta-command value estimator, i.e., the Meta-Command Selector (CS) \(\pi_{\omega}(o,C)\), to estimate the values of all received meta-commands and select the most valuable one for each agent to execute.
**CS Optimization Objective.** Typically, executing a meta-command involves reaching location \(L\) and doing event \(E\), of which the latter is more important to the value of the meta-command. For example, for the meta-command _Come And Kill The Dragon_, if the event _Kill The Dragon_ cannot be done within \(T^{mc}\) time steps, then it is pointless to _Come To The Dragon_. Thus, the _optimization objective_ of CS is to select the optimal meta-command \(m_{t}^{*}=\pi_{\omega}(o_{t},C_{t})\) for each agent to maximize the expected discounted meta-command execution return \(G_{t}^{mc}\),
\[G_{t}^{mc}=\mathbb{E}_{s\sim d_{\pi_{\theta}},\,m\sim\pi_{\omega},\,a\sim\pi_{\theta}}\left[\sum_{i=0}^{\infty}\gamma_{mc}^{i}\,R_{t+i\cdot T^{mc}}^{mc}\right],\quad R_{t}^{mc}=\underbrace{\sum_{i=0}^{T^{L}}r_{t+i}}_{(\text{I})}+\underbrace{\beta\sum_{j=T^{L}}^{T^{mc}}r_{t+j}}_{(\text{II})},\]
Figure 3: **The training process and model structure of MCC.** (a) The training process is divided into three phases: we first (I) train the CEN via supervised learning (SL), then (II) train the MCCAN via goal-conditioned RL, and finally (III) train the CS via RL. Among them, the dashed box represents the frozen model. (b) The detailed CS model structure, including CNN feature extraction, gating mechanism, target attention module, etc.
where \(o_{t}\in\mathbf{O}\), \(C_{t}\) is the meta-command candidate set in state \(s_{t}\), \(\gamma_{mc}\in[0,1)\) is the discount factor, and \(R_{t}^{mc}\) is a generalized meta-command execution reward function. In \(R_{t}^{mc}\), terms (I) and (II) are the total extrinsic rewards \(r\) accumulated before reaching location \(L\) and while doing event \(E\), respectively. \(T^{L}\leq T^{mc}\) is the time needed to reach \(L\), and \(\beta>1\) is a trade-off parameter representing the relative importance of \(E\).
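Given a logged trajectory of extrinsic rewards and the reach time \(T^{L}\), the execution reward \(R_{t}^{mc}\) can be computed as sketched below; this is a direct transcription of the two terms above, and the helper name and argument names are ours.

```python
def meta_command_return(rewards, t_reach, t_mc, beta=2.0):
    """R^mc_t = sum of extrinsic rewards before reaching L (term I)
    plus beta-weighted extrinsic rewards while doing E (term II).

    rewards : list of extrinsic rewards [r_t, ..., r_{t+T^mc}]
    t_reach : T^L, number of steps needed to reach location L (<= t_mc)
    t_mc    : T^mc, the meta-command time limit
    beta    : trade-off parameter (> 1) stressing the event E
    """
    term_reach = sum(rewards[: t_reach + 1])              # (I): i = 0 .. T^L
    term_event = beta * sum(rewards[t_reach : t_mc + 1])  # (II): j = T^L .. T^mc
    return term_reach + term_event
```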
**CS Training Process.** We construct a self-play training environment for CS, where agents can send messages to each other, as shown in Figure 3(a)(III). Specifically, three tricks are adopted to increase sample efficiency while ensuring efficient exploration. First, each meta-command \(m\) is selected with the argmax rule from the output of the pre-trained CEN. Second, each agent sends its meta-command with probability \(p\) every \(T^{mc}\) time steps. Finally, each agent samples its final meta-command \(c\) with the softmax rule from the output of its CS and hands it over to the pre-trained MCCAN for execution. We use the multi-head value mechanism (Ye et al., 2020) to model the value of the meta-command execution, which can be formulated as:
\[L^{V}(\omega)=\mathbb{E}_{O,C}\left[\sum_{head_{k}}\|G_{k}^{mc}-V_{\omega}^{k }(O,C)\|_{2}\right],\]
where \(V_{\omega}^{k}(O,C)\) is the value of the \(k\)-th head. For DQN-based methods (Mnih et al., 2015; Van Hasselt et al., 2016; Wang et al., 2016), the \(Q\) loss is:
\[L^{Q}(\omega)=\mathbb{E}_{O,C,M}\left[\|G_{total}-Q_{\omega}(O,C,M)\|_{2}\right],\quad G_{total}=\sum_{head_{k}}w_{k}G_{k}^{mc},\]
where \(w_{k}\) is the weight of the \(k\)-th head and \(G_{k}^{mc}\) is the Temporal Difference (TD) error estimate \(R_{k}^{mc}+\gamma_{mc}V_{\omega}^{k}(O^{\prime},C^{\prime})-V_{\omega}^{k}(O,C)\).
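A minimal PyTorch sketch of the two losses above is given below; it assumes the per-head targets \(G_{k}^{mc}\) are computed elsewhere (e.g., from the TD expression above) and that tensors are batched as (batch, heads). The squared-error form and function names are our assumptions.

```python
import torch

def multi_head_value_loss(v_pred, g_mc):
    """L^V(omega): squared error summed over value heads.

    v_pred : (batch, K) predicted head values V^k_omega(O, C)
    g_mc   : (batch, K) per-head meta-command targets G^mc_k
    """
    return ((g_mc - v_pred) ** 2).sum(dim=-1).mean()

def dqn_q_loss(q_pred, g_mc, head_weights):
    """L^Q(omega): squared error to the head-weighted total target G_total."""
    g_total = (g_mc * head_weights).sum(dim=-1)       # sum_k w_k * G^mc_k
    return ((g_total - q_pred) ** 2).mean()

# Usage example:
v_pred, g = torch.zeros(8, 3), torch.ones(8, 3)
loss = multi_head_value_loss(v_pred, g)               # tensor(3.)
```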
**CS Model Structure.** We design a general network structure for CS, as shown in Figure 3(b). In MOBA games, meta-commands in adjacent regions have similar values. Thus, we divide the meta-command locations in the map into grids, a location description common in MOBA games, and use a shared Convolutional Neural Network (CNN) to extract region-related information, improving the generalization of CS to adjacent meta-commands. The map embeddings of all received meta-commands are then integrated into a map-set embedding by max-pooling. Besides, we use the gating mechanism (Liu et al., 2021) to fuse the map-set embedding with the state embedding of the observation information. Finally, to directly model the relationship between the observation information and each meta-command, we introduce a target attention module, where the query is the fused embedding and the keys are the map embeddings of the individual meta-commands. The resulting embedding is fed into the subsequent state-action value network \(Q(o,C,m)\) and state value network \(V(o,C)\) of CS. In this way, we can also easily convert the state-action value network \(Q(o,C,m)\) into a policy network \(\pi(m|o,C)\). Thus, the CS model structure can be readily applied with the most popular RL algorithms, such as PPO (Schulman et al., 2017) and DQN (Mnih et al., 2015).
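To make the structure in Figure 3(b) concrete, the following PyTorch sketch assembles the shared CNN, max-pooling over the candidate set, gated fusion, and target attention. All layer sizes, the 12x12 location-map input, and the single-layer heads are our assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class CommandSelector(nn.Module):
    """Sketch of the CS structure in Figure 3(b); layer sizes are assumptions."""

    def __init__(self, obs_dim=512, emb_dim=128):
        super().__init__()
        # Shared CNN over the 12x12 location grid of each meta-command.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(2), nn.Flatten(),
            nn.Linear(32 * 4, emb_dim),
        )
        self.state_enc = nn.Linear(obs_dim, emb_dim)
        self.gate = nn.Linear(2 * emb_dim, emb_dim)        # gating mechanism
        self.attn = nn.MultiheadAttention(emb_dim, num_heads=4, batch_first=True)
        self.q_head = nn.Linear(emb_dim, 1)                # Q(o, C, m), per command
        self.v_head = nn.Linear(emb_dim, 1)                # V(o, C)

    def forward(self, obs, command_maps):
        # obs: (B, obs_dim); command_maps: (B, N, 12, 12) location maps of N candidates.
        B, N, H, W = command_maps.shape
        cmd_emb = self.cnn(command_maps.reshape(B * N, 1, H, W)).view(B, N, -1)
        set_emb = cmd_emb.max(dim=1).values                # max-pool over the candidate set
        state_emb = self.state_enc(obs)
        gate = torch.sigmoid(self.gate(torch.cat([state_emb, set_emb], dim=-1)))
        fused = gate * state_emb + (1.0 - gate) * set_emb  # gated fusion
        # Target attention: query = fused embedding, keys/values = command embeddings.
        attended, _ = self.attn(fused.unsqueeze(1), cmd_emb, cmd_emb)
        attended = attended.squeeze(1)                     # (B, emb_dim)
        q = self.q_head(attended.unsqueeze(1) + cmd_emb).squeeze(-1)  # (B, N)
        v = self.v_head(attended).squeeze(-1)              # (B,)
        return q, v

# Usage: per-command values can be turned into a selection policy pi(m | o, C).
model = CommandSelector()
q, v = model(torch.randn(4, 512), torch.rand(4, 6, 12, 12))   # 6 candidate commands
policy = torch.softmax(q, dim=-1)
```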
## 4 Experiments
In this section, we evaluate the proposed MCC framework by performing both agent-only and human-agent experiments in _Honor of Kings_. All experiments were conducted in the 5v5 mode with a full hero pool (over 100 heroes, see Appendix A.4).
### Experimental Setup
Due to the complexity of MOBA games and limited resources, we train the CEN, the MCCAN, and the CS sequentially instead of training the MCC framework jointly. Specifically, we first train the CEN via SL until convergence, which takes 26 hours on 8 NVIDIA P40 GPUs with a per-GPU batch size of 512. Then, we train the MCCAN by fine-tuning the pre-trained WuKong model (Ye et al., 2020) conditioned on meta-commands sampled from the pre-trained CEN. The MCCAN is trained until convergence, which takes 48 hours on a physical computer cluster with 63,000 CPUs and 560 NVIDIA V100 GPUs, with a per-GPU batch size of 256 and \(\alpha\) set to 16. After that, we train the CS via self-play until convergence, which takes 24 hours on a physical computer cluster with 70,000 CPUs and 680 NVIDIA V100 GPUs, with a per-GPU batch size of 256 and \(\beta\) set to 2. Each agent sends a meta-command with a probability \(p\) of 0.8 and an interval \(T^{mc}\) of 20s, as shown in Figure 4(a). For the entire training process of the MCC framework,
the location \(L\) of meta-commands in the game map is divided into 144 grids, and the time limit \(T^{mc}\) for the meta-command execution is set to 20s. Finally, we obtain the trained MCC agent that can receive meta-commands from other agents and humans and select the most valuable one to execute.
To evaluate the performance of the MCC agent, we conduct both agent-only and human-agent experiments and compare the MCC agent with three different types of agents: the MC-Base agent (which only executes its own meta-command without communication), the MC-Rand agent (which randomly selects a meta-command to execute), and the MC-Rule agent (which selects the nearest meta-command to execute). Note that the MC-Base agent can be considered a State-Of-The-Art (SOTA) agent in _Honor of Kings_ since it maintains the same capabilities as the WuKong agent (Ye et al., 2020) (Appendix B.6.3). Results are reported over five random seeds.
### Agent-Only Collaboration
Directly evaluating agents with humans is expensive, which is not conducive to model selection and iteration. Instead, we built two agent-only testing environments, Test I and Test II, to evaluate agents, as shown in Figure 4(b). Test I is a complex environment where all agent teammates can send and receive meta-commands simultaneously with an interval of 20s. Test I evaluates the agents' performance under complex situations. Test II is a simple environment to simulate most practical game scenarios, where at most one human can send his/her macro-strategy at a time step. Thus, in Test II, only one agent is randomly selected to send its meta-command with an interval of 20s, and the other agents only receive meta-commands. See the detailed experimental results of the CEN and MCCAN in Appendixes B.6.2 and B.6.3, respectively.
**Finding 1: MCC outperforms all baselines.**
We first evaluate the capabilities of the MCC agent and baselines to examine the effectiveness of CS. Figure 5(a) and (b) show the win rates (WRs) of four types of agent teams who play against each other for 600 matches in Test I and Test II, respectively. Figure 5(c) demonstrates the final Elo scores (Coulom, 2008) of these agents. We see that the MCC agent team significantly outperforms the MC-Rand and MC-Rule agent teams. This indicates that compared to selecting meta-commands randomly or by specified rules, the CS can select valuable meta-commands for agents to execute, resulting in effective collaboration. Such collaboration manners in the MCC agent team can even be conducive to winning the game, bringing about 10% WR improvement against the MC-Base agent team. On the contrary, the unreasonable collaboration manners in the MC-Rand and MC-Rule agent teams can hurt performance, leading to significant decreases in the WR against the MC-Base agent team. Note that the MC-Base agent has the same capabilities as the WuKong agent (Ye et al., 2020), the SOTA in _Honor of Kings_. Overall, the MCC agent achieves the highest Elo scores compared to all baselines in both testing environments, validating the effectiveness of CS. Notably, we also find that the WRs of the MCC agent in Test I and Test II are close, suggesting that the MCC agent can generalize to different numbers of meta-commands. We also investigate the influence of different components, including CNN feature extraction with the gating mechanism (Liu et al., 2021), target attention module, and optimization algorithms on the performance of CS (Appendix B.6.4).
Figure 4: **Communication environments in the experiment. The orange arrows indicate sending meta-commands, and the blue arrows indicate receiving meta-commands. The dashed line denotes sending meta-commands with probability \(p\).**
### Human-Agent Collaboration
In this section, we conduct an online experiment to evaluate the MCC agent and baselines in collaborating with humans, as shown in Figure 4(c). We contacted the game provider and got a test authorization. The game provider helped us recruit 30 experienced participants with personal information stripped, including 15 high-level (top1%) and 15 general-level (top30%) participants. We used a within-participant design: _m Human + n Agent_ (_mH + nA_) team mode to evaluate the performance of agents teaming up with different numbers of participants, where \(m+n=5\). This design allowed us to evaluate both objective performances as well as subjective preferences.
All participants read detailed guidelines and provided informed consent before the testing. Participants tested 20 matches in the _1H + 4A_ team mode. High-level participants tested an additional 10 matches each in the _2H + 3A_ and _3H + 2A_ team modes. After each test, participants reported their preference over the agent teammates. For fair comparisons, participants were not told the type of their agent teammates. The MC-Base agent team was adopted as the fixed opponent for all tests. To eliminate the effects of collaboration between agents, we prohibited communication between agents, so the agents could only communicate with their human teammates. See Appendix C for additional experimental details, including experimental design, result analysis, and ethical review.
**Finding 1: Human-MCC team achieves the highest WR across team modes and human levels.**
We first compare the objective performance of the human-agent teams supported by the MCC agent and the baselines, as shown in Table 1. We see that the human-MCC team significantly outperforms all other human-agent teams across different team modes and human levels. This indicates that the MCC agent can generalize to different levels and numbers of human teammates. Note that the SOTA agent can easily beat high-level human players (Nair, 2019; Chen, 2021), so as the number of participants increases, the WRs of all human-agent teams decrease. Surprisingly, the WR increases significantly when participants team up with the MCC agent. We suspect that the human-MCC team also achieves effective communication and collaboration on macro-strategies. To verify this, we count the Response Rates (RRs) of agents and participants to the meta-commands sent by their teammates, as shown in Table 2. We find that the RR of the MCC agents to high-level participants (73.05%) and the RR of the high-level participants to the MCC agents (78.5%) are close to the RR of high-level participants themselves (74.91%). This suggests that the value system of CS is close to that of high-level humans. Besides, the RRs of participants to the MCC agents (73.43% and 78.5%) are higher than those to the MC-Rand agents (41.07% and 35.69%), indicating that participants collaborated with the MCC agents more often and more effectively.
**Finding 2: Participants prefer MCC over all baselines.**
We then compare the subjective preference metrics, i.e., the Reasonableness of H2A, the Reasonableness of A2H, and the Overall Preference, reported by participants over their agent teammates, as shown in Figure 6. Participants believed that the MCC agent responded more reasonably and gave it the highest score in the Reasonableness of H2A metric (Figure 6(a)). Participants also believed that the meta-commands sent by the MCC agent were more aligned with their own value systems and rated the MCC agent much higher than the MC-Rand agent in the Reasonableness of A2H metric (Figure 6(b)). In general, participants were more satisfied with the MCC agent than with the other agents and gave it the highest score in the Overall Preference metric (Figure 6(c)). The results of these subjective preference metrics are also consistent with those of the objective performance metrics.
### Collaborative Interpretability Analysis
To better understand how the MCC agents and humans collaborate effectively and interpretably, we visualize a comparison of the value systems of CS and high-level participants in a game scene with three existing meta-commands, as shown in Figure 7. We see that the CS selects meta-command B for the two heroes in the red dashed box to collaborate on, selects meta-command C for the two heroes in the purple dashed box to collaborate on, and selects meta-command A for the remaining hero to execute alone. The CS selection results are consistent with the ranking results of the high-level participants, confirming the effectiveness of the collaboration behaviors between the MCC agents and humans. Such collaboration enhances the human interpretability of agent behavior.
## 5 Conclusion
In this work, we proposed an efficient and interpretable Meta-Command Communication-based framework, dubbed MCC, to achieve effective human-agent collaboration in MOBA games. To bridge the communication gap between humans and agents, we designed the Meta-Command - a common ground between humans and agents for bidirectional communication. To achieve effective collaboration, we constructed the Meta-Command Selector - a value estimator for agents to select the most valuable meta-command to execute.
Figure 6: **Participants’ preference over their agent teammates. (a) Reasonableness of H2A: how well the agents respond to your meta-commands. (b) Reasonableness of A2H: how reasonable the meta-commands sent by agents are. (c) Overall Preference: your overall preference for the agent teammates. Participants scored (1: Terrible, 2: Poor, 3: Normal, 4: Good, 5: Perfect) in these metrics after each game test. Error bars represent 95% confidence intervals, calculated over games. See Appendix C.2.3 for detailed wording and scale descriptions.**
Figure 7: **Case study on the value estimation of CS and average rank of high-level participants.**